On Privacy-Preserving Ways to Porting the Austrian eID System to the Public Cloud

Bernd Zwattendorfer
email: [email protected]
Daniel Slamanig
email: [email protected]

Source: https://inria.hal.science/hal-01463834/file/978-3-642-39218-4_23_Chapter.pdf (2013)
Secure authentication and unique identification of Austrian citizens are the main functions of the Austrian eID system. To facilitate the adoption of this eID system by online applications, the open source module MOA-ID has been developed, which manages identification and authentication based on the Austrian citizen card (the official Austrian eID) for service providers. Currently, the Austrian eID system treats MOA-ID as a trusted entity, which is locally deployed in every service provider's domain. While this model has some benefits, in some situations a centralized deployment of MOA-ID may be preferable. In this paper, we therefore propose a centralized deployment of MOA-ID in the public cloud. However, moving a trusted service into the public cloud brings up new obstacles, since the cloud cannot be considered trustworthy. We address these obstacles by introducing and evaluating three distinct approaches, thereby retaining the workflow of the current Austrian eID system and preserving citizens' privacy under the assumption that MOA-ID acts honest but curious.
Introduction
The Austrian eID system constitutes one major building block within the Austrian e-Government strategy. Secure authentication and unique identification of Austrian citizens, while still preserving citizens' privacy, are the main functions of the Austrian eID system. The basic building block for secure authentication and unique identification in the Austrian eID system is the Austrian citizen card [START_REF] Leitold | Security Architecture of the Austrian Citizen Card Concept[END_REF], the official eID in Austria.
To facilitate the adoption of this eID concept at online applications, the open source module MOA-ID has been developed. Basically, MOA-ID manages the identification and authentication process based on the Austrian citizen card for various service providers. Currently, the Austrian eID concept treats MOA-ID as a trusted entity, which is deployed locally in every service provider's domain. While this model has some benefits, in some situations a centralized deployment of MOA-ID may be preferable. For instance, a centralized MOA-ID can save service providers a lot of operational and maintenance costs. However, in terms of scalability the existing approach is advantageous, since theoretically the whole Austrian population could use such a central service for identification and authentication at service providers.
To bypass the issue of scalability, in this paper we propose a centralized deployment of MOA-ID in the public cloud. The public cloud is able to provide nearly unlimited computing resources, hence the scalability problem can easily be compensated. However, the move of a trusted service into the public cloud brings up new obstacles. In particular, MOA-ID, since it now runs in the public cloud, can no longer be considered a trustworthy entity. We counter these obstacles by introducing three different approaches, each describing how the current Austrian eID system can be securely migrated into the public cloud. All approaches retain the workflow of the current Austrian eID system and preserve citizens' privacy under the assumption that MOA-ID acts honest but curious. The first approach uses both proxy re-encryption and redactable signatures, the second one relies on anonymous credentials, and the third one builds on fully homomorphic encryption.
The Current Austrian eID System
In the following subsections we describe the basic ideas of the Austrian eID concept by presenting involved components and processes.
The Austrian Citizen Card Concept
Unique identification and secure authentication are essential processes in e-Government. Particularly, unique identification is essential when a large number of users comes into play, such as the population of a whole country. In such a large population, identification of citizens based on first name, last name, and date of birth may be ambiguous. To mitigate this problem, each Austrian citizen is registered in a central register and is assigned a unique identification number. Furthermore, another unique identifier is computed from this number and stored on each citizen card. This so-called sourcePIN is created by a trusted entity, the SourcePIN Register Authority (SRA), and can be used for unique citizen identification at online applications. However, the sourcePIN requires special protection, as it is forbidden by law to permanently store the sourcePIN outside the citizen card. Therefore, the Austrian eID concept uses a sector-specific model for identification at online applications. In this sector-specific model, the sourcePIN is used to derive unique sector-specific identifiers, so-called sector-specific PINs (ssPINs), for every governmental sector, e.g., tax, finance, etc. Thereby, citizens' privacy is assured, as the sourcePIN cannot be derived from a given ssPIN and different ssPINs of one citizen cannot be linked together.
The Austrian citizen card [START_REF] Leitold | Security Architecture of the Austrian Citizen Card Concept[END_REF] constitutes the key element of the Austrian eID concept and is basically an abstract definition of a secure eID token possessed by every Austrian citizen. Due to this abstract definition, the Austrian citizen card is a technology-neutral concept, which allows for different implementations. Currently, implementations based on smart cards and mobile phones are in use.
In general, the main functions of the Austrian citizen card are 1) identification and authentication of citizens and 2) secure and qualified electronic signature creation. Citizen identification is based on a special data structure (the Identity Link ), which is solely stored on the Austrian citizen card. This special data structure contains the citizen's first name, last name, date of birth, a unique identifier (sourcePIN), and the citizen's qualified signature certificate. To guarantee its integrity and authenticity, the Identity Link is digitally signed by the SourcePIN Register Authority at issuance. Citizen authentication is carried out by creating a qualified electronic signature according to the EU Signature Directive.
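For illustration, the Identity Link can be thought of as a simple record over the fields just listed. The sketch below is only illustrative: the field names and sample values are hypothetical, and the real Identity Link is a signed XML structure defined by the citizen card specification.

```python
# Illustrative record only; field names and sample values are hypothetical.
# The real Identity Link is a signed XML structure defined by the citizen card specification.
identity_link = {
    "given_name": "Maria",
    "family_name": "Musterfrau",
    "date_of_birth": "1985-04-12",
    "source_pin": "<unique identifier derived from the central register>",
    "qualified_certificate": "<citizen's qualified signature certificate>",
    "sra_signature": "<signature by the SourcePIN Register Authority over the fields above>",
}
```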
Identification and Authentication at Online Services
To facilitate the integration of the citizen card's identification and authentication functionality into online services, the open source module MOA-ID is available. The current Austrian eID system relies on a local deployment model, where MOA-ID is deployed and operated in basically every service provider's domain. Due to that fact, MOA-ID is assumed to be trusted, i.e., it will not leak sensitive information such as the citizen's sourcePIN. Figure 1 illustrates in an abstract way the typical identification and authentication scenario of Austrian citizens using MOA-ID.

Fig. 1. Simplified illustration of MOA-ID based authentication.

Service Provider: The service provider usually provides web-based services, which require unique identification and secure authentication by using the Austrian citizen card. This organization can be either a public authority or a private sector company.
Client-Side Middleware: The Austrian eID concept foresees an abstract and generic access layer to the citizen card, irrespective of its implementation. The client-side middleware implements this interface, which provides online applications easy access to citizen card functionality without the need of knowing any citizen card specifics. The identity provider MOA-ID uses this interface for accessing diverse citizen card functions.
Identity Provider (MOA-ID): MOA-ID represents an identity provider for governmental or private sector service providers. On the one hand, MOA-ID manages the communication with the citizen and her citizen card and, on the other hand, MOA-ID provides specific and authentic citizen card attributes to the service provider for further processing.
In the following we briefly explain the authentication process flow at online services, where steps 1 and 2 represent the identification and steps 3 and 4 the authentication of the Austrian citizen.
Setup: The SRA as trusted entity is responsible for managing citizens' Identity Links. Identity Links can be stored on smart card-based citizen card implementations or server-based (in a hardware security module) using the Austrian Mobile Phone Signature.
Citizen registration: All Austrian citizens are registered in a central register. In order to activate the citizen card, a citizen must prove her identity, e.g. by using a personal ID. This can be done through various channels, either proving the identity personally in a registration office or via certified mail.
Service provider registration: Governmental service providers can be identified either by a special domain ending ("gv.at") or by including a specific object identifier in the service provider's SSL certificate.
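The domain-based rule above can be expressed in a few lines; a minimal sketch follows. The alternative check (a specific object identifier in the service provider's SSL certificate) is omitted, since the concrete OID value is not given here.

```python
# Minimal sketch of the domain rule for recognizing governmental service providers.
# The OID-based certificate check is omitted because the OID value is not stated in the text.
def is_governmental_sp(domain: str) -> bool:
    return domain == "gv.at" or domain.endswith(".gv.at")

assert is_governmental_sp("finanz.gv.at")
assert not is_governmental_sp("example.com")
```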
Authentication at online services:
1. Reading and verifying the citizen's Identity Link: After having received an authentication request from the service provider, MOA-ID starts the citizen identification process by requesting the citizen's Identity Link through the citizen's client-side middleware. After that, MOA-ID verifies the signature of the returned Identity Link to check its integrity and authenticity.
2. Calculation of the citizen's ssPIN according to the Austrian eID concept: MOA-ID calculates the ssPIN by applying a cryptographic hash function H (SHA-1) to the concatenation of the sourcePIN and a sector-specific identifier s of the service provider, i.e., ssPIN = H(sourcePIN || s) (see the sketch after this list).
3. Requesting the generation of a qualified electronic signature of the citizen: MOA-ID requests a qualified electronic signature from the citizen through her client-side middleware. By signing a specific message, the citizen gives her consent that she is willing to authenticate at the respective service provider.
4. Verification of the citizen signature: MOA-ID verifies the citizen's qualified signature.
5. Assembling citizen identification and authentication data in a structured way and providing it to the service provider: MOA-ID assembles a special data structure including authentic identity information of the citizen from the Identity Link. These data are structured according to the specifications of the Security Assertion Markup Language (SAML, http://saml.xml.org) and are delivered to the authentication-requesting service provider using a SAML-defined protocol, thereby ensuring integrity and authenticity of the data transfer.
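As a concrete illustration of step 2, the sketch below derives a sector-specific PIN as the SHA-1 hash of the concatenated sourcePIN and sector identifier. The input encoding, the hex output, and the sample sourcePIN/sector values are assumptions made for illustration; the official derivation prescribes specific encodings.

```python
# Sketch of the ssPIN derivation from step 2: ssPIN = H(sourcePIN || s) with H = SHA-1.
# Encoding details and the sample values below are assumptions, not taken from the specification.
import hashlib

def derive_sspin(source_pin: str, sector: str) -> str:
    return hashlib.sha1((source_pin + sector).encode("utf-8")).hexdigest()

sspin_tax = derive_sspin("ABC123456789", "SA")      # hypothetical sourcePIN and sector codes
sspin_health = derive_sspin("ABC123456789", "GH")
assert sspin_tax != sspin_health                    # different sectors yield different ssPINs
```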
Cryptographic Building Blocks
In this section we introduce the cryptographic building blocks. We note that we do not provide an explicit description of a conventional digital signature scheme (DSS) since this should be clear from the other signature primitives.
Redactable Signatures
A conventional digital signature does not allow for alterations of a signed document without invalidating the signature. However, there are scenarios where it would be valuable to have the possibility to replace or remove (specified) parts of a message after signature creation such that the original signature stays valid (and no interaction with the original signer is required). Signature schemes which allow removal of content (replacement by some special symbol ⊥) by any party are called redactable [START_REF] Johnson | Homomorphic Signature Schemes[END_REF], while signature schemes which allow (arbitrary) replacements of admissible parts by a designated party are called sanitizable signature schemes [START_REF] Ateniese | Sanitizable Signatures[END_REF]. Below, we present an abstract definition of redactable signatures:

RS.KeyGen: This probabilistic key generation algorithm takes a security parameter and outputs a public (verification) key pk and a private (signing) key sk.
RS.Sign: This (probabilistic) signing algorithm gets as input the signing key sk and a message m = (m[1], ..., m[ℓ]), m[i] ∈ {0,1}*, and outputs a signature σ = RS.Sign(sk, m).
RS.Verify: This deterministic signature verification algorithm gets as input a public key pk, a message m = (m[1], ..., m[ℓ]), m[i] ∈ {0,1}*, and a signature σ and outputs a single bit b = RS.Verify(pk, m, σ), b ∈ {true, false}, indicating whether σ is a valid signature for m.
RS.Redact: This (probabilistic) redaction algorithm takes as input a message m = (m[1], ..., m[ℓ]), m[i] ∈ {0,1}*, the public key pk, a signature σ, and a list MOD of indices of blocks to be redacted. It returns a modified message and signature pair (m̂, σ̂) = RS.Redact(m, pk, σ, MOD) or an error. Note that for any such signature (m̂, σ̂) we have RS.Verify(pk, m̂, σ̂) = true.
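To make the interface above concrete, here is a minimal sketch of a redactable signature built from a salted hash commitment per message block and an ordinary Ed25519 signature over the commitment list. This simple construction and all function names are ours, chosen for illustration; it is not the scheme cited above and ignores several security subtleties.

```python
# Minimal redactable-signature sketch: commit to each block with a salted hash and
# sign the commitment list with Ed25519. Illustrative construction only; it is not
# the redactable signature scheme referenced in the paper.
import hashlib, secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def _commit(salt: bytes, block: str) -> bytes:
    return hashlib.sha256(salt + block.encode("utf-8")).digest()

def rs_keygen():
    sk = Ed25519PrivateKey.generate()
    return sk, sk.public_key()

def rs_sign(sk, blocks):
    openings = [{"salt": secrets.token_bytes(16), "block": b} for b in blocks]
    commitments = b"".join(_commit(o["salt"], o["block"]) for o in openings)
    return {"sig": sk.sign(commitments), "openings": openings}

def rs_redact(signed, mod):
    # replace each block listed in MOD by its commitment (the "⊥" of the definition)
    redacted = []
    for i, o in enumerate(signed["openings"]):
        redacted.append({"commitment": _commit(o["salt"], o["block"])} if i in mod else dict(o))
    return {"sig": signed["sig"], "openings": redacted}

def rs_verify(pk, signed) -> bool:
    commitments = b"".join(
        o["commitment"] if "commitment" in o else _commit(o["salt"], o["block"])
        for o in signed["openings"]
    )
    try:
        pk.verify(signed["sig"], commitments)
        return True
    except Exception:
        return False

sk, pk = rs_keygen()
signed = rs_sign(sk, ["first name", "date of birth", "ssPIN for the tax sector"])
assert rs_verify(pk, rs_redact(signed, {1}))   # signature stays valid after redacting block 1
```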
Anonymous Signatures
Anonymous signature schemes allow group members to issue signatures on behalf of a group, while hiding, for each signature, which group member actually produced it. There are several flavors of anonymous signatures. Group signatures [START_REF] Ateniese | A Practical and Provably Secure Coalition-Resistant Group Signature Scheme[END_REF] involve a dedicated entity (the group manager), who runs a setup and an explicit join protocol for every group member to create the respective member's signing key. Furthermore, the group manager is able to open signatures issued by group members to identify the respective signer.
Ring signatures [START_REF] Rivest | How to leak a secret: Theory and applications of ring signatures[END_REF] are conceptually similar to group signatures, but there is no group manager and the anonymity provided is unconditional. They are "ad-hoc", meaning that a user may take an arbitrary set (ring) of valid public keys to construct a ring signature and the ring represents the anonymity set. We choose to use ring signatures for one of our approaches and present an abstract definition of this signature scheme below, where the key generation is that of a standard digital signature scheme (DSS) and hence omitted here:
AS.Sign: This (probabilistic) signing algorithm gets as input the signing key sk_i s.t. pk_i ∈ R, a ring of public keys R = (pk_1, ..., pk_n), and a message m, and outputs a signature σ = AS.Sign(sk_i, R, m).
AS.Verify: This deterministic signature verification algorithm gets as input a ring of public keys R = (pk_1, ..., pk_n), a message m, and a signature σ and outputs a single bit b = AS.Verify(R, m, σ), b ∈ {true, false}, indicating whether σ is a valid signature for m under R.
Proxy Re-Encryption
Proxy re-encryption is a public key encryption paradigm where a semi-trusted proxy can transform a message encrypted under the key of party A into another ciphertext, containing the initial plaintext, such that another party B can decrypt with its key. Although the proxy can perform this re-encryption operation, it neither gets access to the plaintext nor to the decryption keys.
According to the direction of this re-encryption, such schemes can be classified into bidirectional schemes, i.e., the proxy can transform from A to B and vice versa, and unidirectional schemes, i.e., the proxy can convert in one direction only. Furthermore, one can distinguish between multi-use schemes, i.e., the ciphertext can be transformed from A to B to C etc., and single-use schemes, i.e., the ciphertext can be transformed only once. We use the unidirectional single-use identity-based proxy re-encryption scheme of [START_REF] Green | Identity-Based Proxy Re-encryption[END_REF], but note that we could also use non-identity-based ones. An abstract definition of such a scheme is as follows:

RE.Setup: This probabilistic algorithm gets a security parameter and a value MaxLevel indicating the maximum number of consecutive re-encryptions permitted by the scheme (in case of single-use we set MaxLevel = 2). It outputs the master public parameters params, which are distributed to users, and the master private key msk, which is kept private.
RE.KeyGen: This probabilistic key generation algorithm gets params, the master private key msk, and an identity id ∈ {0,1}* and outputs a private key sk_id corresponding to that identity.
RE.Enc: This probabilistic encryption algorithm gets params, an identity id ∈ {0,1}*, and a plaintext m and outputs c_id = RE.Enc(params, id, m).
RE.RKGen: This probabilistic re-encryption key generation algorithm gets params, a private key sk_id1 (derived via RE.KeyGen), and two identities (id1, id2) ∈ {0,1}* and outputs a re-encryption key rk_id1→id2 = RE.RKGen(params, sk_id1, id1, id2).
RE.ReEnc: This (probabilistic) re-encryption algorithm gets as input a ciphertext c_id1 under identity id1 and a re-encryption key rk_id1→id2 (generated by RE.RKGen) and outputs a re-encrypted ciphertext c_id2 = RE.ReEnc(c_id1, rk_id1→id2).
RE.Dec: This decryption algorithm gets params, a private key sk_id, and a ciphertext c_id and outputs m = RE.Dec(params, sk_id, c_id) or an error.
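To give a feel for how re-encryption keys work, here is a toy ElGamal-based (BBS98-style) proxy re-encryption sketch. Note that it is bidirectional and uses deliberately tiny, insecure parameters; the paper instead relies on the unidirectional, single-use identity-based scheme cited above, so this only illustrates the general re-encryption idea.

```python
# Toy BBS98-style ElGamal proxy re-encryption (bidirectional). Parameters are tiny and
# insecure on purpose; this is not the unidirectional identity-based scheme used in the
# paper, only an illustration of the re-encryption idea.
import secrets

P = 1019            # toy safe prime, P = 2Q + 1
Q = (P - 1) // 2    # 509, prime order of the subgroup generated by G
G = 4               # generator of the order-Q subgroup

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)

def enc(m: int, pk: int):
    r = secrets.randbelow(Q - 1) + 1
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def rkgen(sk_from: int, sk_to: int) -> int:
    # re-encryption key turning ciphertexts for "from" into ciphertexts for "to";
    # derived here by a party knowing both secrets (the SRA plays this role in the paper).
    return (sk_from * pow(sk_to, -1, Q)) % Q

def reenc(ct, rk: int):
    c1, c2 = ct
    return pow(c1, rk, P), c2

def dec(ct, sk: int) -> int:
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, P), -1, P)) % P

sk_moa, pk_moa = keygen()            # attributes are encrypted towards this key
sk_sp, pk_sp = keygen()              # service provider's key pair
rk = rkgen(sk_moa, sk_sp)            # handed to the proxy (MOA-ID in the paper)
ct = enc(42, pk_moa)                 # an encrypted attribute value
assert dec(reenc(ct, rk), sk_sp) == 42
```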
Anonymous Credentials
Anonymous credential systems [START_REF] Brands | Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy[END_REF][START_REF] Camenisch | An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation[END_REF] enable anonymous attribute-based authentication, i.e., they hide the identity of the credential's owner. Multi-show approaches support unlinkability, i.e., different showings of a credential remain unlinkable to each other and to the issuing [START_REF] Camenisch | An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation[END_REF], while others are one-show [START_REF] Brands | Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy[END_REF]. Anonymous credentials are very expressive, since they allow encoding arbitrary attributes into the credential. Additionally, during the proof of possession of a credential a user can selectively reveal values of attributes or prove that certain relations among attributes hold, without revealing the attribute values. We use an abstract definition of an anonymous credential system as follows:

AC.KeyGen: This probabilistic key generation algorithm is run by an authority and takes a security parameter and outputs a public key pk and a private key sk.
AC.Issue: This interactive algorithm is run between a user U and an authority A. U has as input a list of attributes with corresponding values attr and wants to obtain a credential for attr (U may also have as input a long-term secret). U executes the credential issuing protocol for attr with A by using U's input attr, and A has as input its private key sk. Both algorithms have as input pk, and at the end of this interaction U obtains a credential Cred corresponding to attr.
AC.Prove: This interactive algorithm is run between a user U and a verifier V. U proves the possession of Cred for attr', which represents some subset of attr, to the verifier V. At the end of the protocol, V outputs accept if U has a valid credential Cred for attr', otherwise V outputs reject.
We note that the Prove algorithm may also be non-interactive, i.e., the credential holder produces a signature of knowledge which can then be given to the verifier to check the validity of the proof locally.
Fully Homomorphic Encryption
Fully homomorphic encryption (FHE) schemes are semantically secure (public-key) encryption schemes which allow arbitrary functions to be evaluated on ciphertexts given the (public) key and the ciphertext. Gentry [START_REF] Gentry | Fully Homomorphic Encryption using Ideal Lattices[END_REF] provided the first construction along with a general blueprint to construct (bootstrap) such schemes from less powerful ones. Since then, many improvements and alternative approaches have been proposed (cf. [START_REF] Vaikuntanathan | Computing Blindfolded: New Developments in Fully Homomorphic Encryption[END_REF]). However, it seems to require some more years of research to make them practical in general [START_REF] Gentry | Homomorphic Evaluation of the AES Circuit[END_REF]. A fully homomorphic (public-key) encryption scheme is defined by the following efficient algorithms (in this definition messages are bits, but this can easily be generalized to larger spaces):

FHE.KeyGen: This probabilistic key generation algorithm takes a security parameter and outputs a public key pk, a public evaluation key evk, and a private key sk.
FHE.Enc: This probabilistic encryption algorithm takes a message m ∈ {0,1}^n and a public key pk and outputs a ciphertext c = FHE.Enc(m, pk).
FHE.Dec: This deterministic algorithm takes a ciphertext c and a private key sk and outputs m = FHE.Dec(c, sk).
FHE.Eval: This homomorphic evaluation algorithm takes an evaluation key evk, a function f : {0,1}^n → {0,1} and k ciphertexts and outputs a ciphertext c_f = FHE.Eval(f, c_1, ..., c_k, evk).

Let us consider arbitrary message spaces in the following. For one of our approaches we need to assume that FHE schemes exist which are "key-homomorphic". Loosely speaking, this means that for each pair of public keys pk_1 and pk_2 one can derive f_{1,2} and evk_{1,2} such that

m = FHE.Dec(FHE.Eval(f_{1,2}, FHE.Enc(m, pk_1), evk_{1,2}), sk_2).

This means that by using f_{1,2} one performs a "re-encryption" of m encrypted under pk_1 to another ciphertext under pk_2, which can then be decrypted using sk_2. Such a scheme can trivially be realized using any FHE scheme by letting f_{1,2} represent the circuit which first decrypts the ciphertext c using sk_1, obtaining m, and then encrypts m using pk_2, with evk_{1,2} = evk_1. However, since sk_1 would then be explicitly wired into the circuit, this would reveal the secret key, which is clearly undesirable. Since we are currently not aware of an FHE construction which supports this (loosely defined) property, we need to assume that such a scheme will be available in the future.
Porting the Austrian eID System to the Public Cloud
The current local deployment model of MOA-ID has some benefits in terms of end-to-end security and scalability, but some issues can be identified compared to a centralized deployment model of MOA-ID. The adoption of a centralized model has the following advantages and disadvantages: The use of one single and central instance of MOA-ID has a clear advantage for citizens, as they only need to trust one specific identity provider. In addition, users could benefit from comfortable single sign-on (SSO). Moreover, service providers can save a lot of costs because they do not need to operate and maintain a separate MOA-ID installation. Nevertheless, some disadvantages can be identified as well. Namely, a single instance of MOA-ID constitutes a single point of failure or attack. In particular, scalability may be an issue, as all citizen authentications will run through this centralized system. This is probably the main issue, as theoretically the whole Austrian population could use this service for identification and authentication at service providers. However, the issue of scalability can be tackled by moving MOA-ID into a public cloud, which is theoretically able to provide unlimited computing resources. A move of a trusted service into the public cloud, however, brings up new obstacles.
In order to make a migration of the Austrian eID system and MOA-ID into the public cloud possible, we have identified three approaches to adapt the existing Austrian eID system for running it in the public cloud. The adapted Austrian eID system of the respective solution provides all functions of MOA-ID (identification, ssPIN generation, and authentication) as in the current system, but protects the citizen's privacy with respect to the cloud provider. For compact descriptions, we denote the SourcePIN Register Authority by SRA and the Identity Link by I = ((A_1, a_1), ..., (A_k, a_k)), a sequence of attribute labels and attribute values. Let the set of citizens be C = {C_1, ..., C_n} and the set of service providers be S = {S_1, ..., S_ℓ}, and let the citizen's client-side middleware be denoted by M. Moreover, let us assume that citizen C_i wants to authenticate at service provider S_j, who requires the set of attributes A_j from I and exactly one "pseudonym", i.e., the ssPIN for the sector s the service provider S_j is associated with. Additionally, recall that every citizen C_i has a signing key sk_Ci stored on the card and that the public key pk_Ci is publicly available.
Using Proxy Re-Encryption and Redactable Signatures
Here, the Identity Link I is modified such that it does not include the sourcePIN but instead additionally contains the ssPINs for all possible governmental sectors. In this augmented Identity Link I', every attribute a_i is encrypted using a unidirectional single-use proxy re-encryption scheme under a public key (the identity of MOA-ID) such that the corresponding private key is not available to MOA-ID and is only known to the SRA. Furthermore, instead of using a conventional digital signature scheme, I' is signed by the SRA using a redactable signature scheme such that every a_i of I' can be redacted. The public verification key is available to MOA-ID. Every service provider S_j obtains a key pair for the proxy re-encryption scheme when registering at the SRA. The SRA additionally produces a re-encryption key, which allows re-encrypting ciphertexts intended for MOA-ID into ciphertexts for S_j, and gives it to MOA-ID. Below we present the detailed workflow:
Setup: SRA generates (pk_SRA, sk_SRA) = RS.KeyGen(κ) and (params_RE, msk_RE) = RE.Setup(κ, 1) as well as sk_MOA-ID = RE.KeyGen(params_RE, msk_RE, id_MOA-ID). It keeps (sk_SRA, msk_RE, sk_MOA-ID) secret and publishes params_RE as well as pk_SRA.
Citizen registration: The registration of a citizen C_i at the SRA works as it does now, with the exception that I' includes additional attributes a_{k+1}, ..., a_m representing the ssPINs for all sectors. Furthermore, for every (A_i, a_i) ∈ I' the SRA replaces a_i by c_{a_i} = RE.Enc(params_RE, a_i, id_MOA-ID) and produces a redactable signature σ_I' = RS.Sign(sk_SRA, I'). Then, (σ_I', I') is stored on C_i's citizen card.
Service provider registration:
The registration of service provider S_j at the SRA works as follows: SRA produces a private key sk_Sj = RE.KeyGen(params_RE, msk_RE, id_Sj) for S_j and a re-encryption key rk_MOA-ID→Sj = RE.RKGen(params_RE, sk_MOA-ID, id_MOA-ID, id_Sj), and gives sk_Sj to S_j and rk_MOA-ID→Sj to MOA-ID, respectively.

Authentication at online services:

1 & 2: After having received an authentication request from S_j, MOA-ID starts the citizen identification process by requesting C_i's Identity Link I' through M. Thereby, we have two possibilities: (1) If MOA-ID tells M which attributes A_j are required by S_j, then M runs (Î', σ̂_I') = RS.Redact(I', pk_SRA, σ_I', MOD), where MOD contains the indices of all c_{a_i} of I' except those in A_j (including the ssPIN required by S_j). Then, M sends (Î', σ̂_I') to MOA-ID, which runs b = RS.Verify(pk_SRA, Î', σ̂_I') and proceeds if b = true and aborts otherwise. (2) M sends (I', σ_I') to MOA-ID, which runs b = RS.Verify(pk_SRA, I', σ_I') and proceeds if b = true and aborts otherwise. Then, MOA-ID runs (Î', σ̂_I') = RS.Redact(I', pk_SRA, σ_I', MOD), where MOD contains the indices of all attributes in I' except those in A_j (including the ssPIN required by S_j).

3: In this step, MOA-ID usually requests the generation of a qualified electronic signature from C_i. Here we have the following possibilities: (1) MOA-ID requests no signature, since I' is signed and only available to C_i. (2) M produces a standard signature σ = DSS.Sign(sk_Ci, m*) for a special message m* on behalf of C_i (which, however, allows unique identification of C_i by MOA-ID). (3) M produces a ring signature σ = AS.Sign(sk_Ci, R, m*) for a special message m* on behalf of a ring R including pk_Ci.

4: MOA-ID verifies the validity of the signature σ either by running b = DSS.Verify(pk_Ci, m*, σ) or b = AS.Verify(R, m*, σ). (Note that since σ_I' and its potentially redacted version can always be linked together, it is advisable that every citizen C_i uses a fixed ring all the time, i.e., all citizens in R use the same ring, since otherwise, e.g., when rings are sampled uniformly at random, intersection attacks on the rings will soon reveal C_i.)

5: MOA-ID takes all remaining attributes c_{a_i} from I' (or Î') and computes for every such attribute c'_{a_i} = RE.ReEnc(c_{a_i}, rk_MOA-ID→Sj) and assembles all these resulting c'_{a_i} into the SAML structure, which is then communicated to S_j. S_j can then decrypt all attributes using sk_Sj.
Using Anonymous Credentials
The Identity Link I is augmented to I' such that it does not include the sourcePIN but additionally contains all ssPINs. Now, the SRA issues an anonymous credential Cred to every citizen, with attr being all attributes in I'. Essentially, a citizen then authenticates to a service provider by proving to MOA-ID the possession of a valid credential, i.e., MOA-ID checks whether the credential has been revoked or not. Note that for one-show credentials, if the entire credential Cred is shown to MOA-ID, this amounts to a simple lookup in a blacklist. If the credential is not revoked, MOA-ID signs the credential to confirm that it is not revoked, and the citizen performs via M a (non-interactive) proof by revealing the necessary attributes A_j, including the required ssPIN, to S_j, who can then in turn verify the proof(s) as well as MOA-ID's signature.
Setup: SRA generates (pk_SRA, sk_SRA) = AC.KeyGen(κ), keeps sk_SRA secret, and publishes pk_SRA. Furthermore, MOA-ID produces a key pair for a digital signature scheme (pk_MOA-ID, sk_MOA-ID) = DSS.KeyGen(κ) and publishes pk_MOA-ID.
Citizen registration: At registration of citizen C_i at the SRA, a modified Identity Link I' is generated, which includes additional attributes a_{k+1}, ..., a_m representing the ssPINs for all sectors, as well as the other citizen attributes. Then, SRA and C_i run AC.Issue and the resulting credential Cred is stored on C_i's citizen card.
Service provider registration:
The registration of service provider S_j works as it is done now.

Authentication at online services:

1, 2 & 3: After having received an authentication request from S_j, MOA-ID starts the citizen identification process by requesting C_i's credential Cred and checks whether Cred has been revoked. If Cred has not been revoked, MOA-ID produces a signature σ = DSS.Sign(sk_MOA-ID, Cred) and sends σ along with a description of A_j to M.

4: M runs b = DSS.Verify(pk_MOA-ID, Cred, σ) and, if b = true, produces a non-interactive proof π which opens all attribute values of A_j, including the ssPIN required by S_j, and sends (Cred, π, σ) to S_j. Otherwise, M aborts.

5: S_j computes b = DSS.Verify(pk_MOA-ID, Cred, σ) and, if b = true, verifies the proof π. If both checks verify, C_i is authenticated; otherwise S_j aborts.

Note that in this approach Cred is shown to MOA-ID, which does not reveal the attribute values but makes revocation easier, since it only requires blacklist lookups. One could also use multi-show credentials, in which case M would have to perform a proof with MOA-ID convincing MOA-ID that the credential is not revoked [START_REF] Lapon | Analysis of Revocation Strategies for Anonymous Idemix Credentials[END_REF], which provides stronger privacy guarantees.
Using Fully Homomorphic Encryption
This approach is a rather theoretical one and requires an FHE scheme which is also "key-homomorphic", as discussed before. The idea of this approach is the following: The Identity Link I' of a citizen holds the same attributes as now (in particular the sourcePIN), but every attribute a_i is encrypted using an FHE scheme with the above described property under MOA-ID's public key, for which MOA-ID does not hold the private key. Furthermore, the resulting I' is conventionally signed by the SRA. Then, for authentication at S_j, the resulting I' and its signature σ_I' are sent to MOA-ID, who checks the signature and homomorphically computes the respective ssPIN from the encrypted sourcePIN (without learning either the sourcePIN or the ssPIN). Then, for all encrypted attributes required by S_j (including the previously computed encrypted ssPIN), MOA-ID performs the "FHE re-encryption" to S_j's public key. On receiving the respective information from MOA-ID, the service provider can decrypt all attribute values.
Setup: SRA generates (pk_MOA-ID, evk_MOA-ID, sk_MOA-ID) = FHE.KeyGen(κ), keeps sk_MOA-ID secret, and publishes (pk_MOA-ID, evk_MOA-ID). Furthermore, SRA produces a key pair for a digital signature scheme (pk_SRA, sk_SRA) = DSS.KeyGen(κ) and publishes pk_SRA.
Citizen registration: During registration of citizen C_i at the SRA, for every (A_i, a_i) ∈ I' the SRA replaces a_i by c_{a_i} = FHE.Enc(a_i, pk_MOA-ID) and produces a signature σ_I' = DSS.Sign(sk_SRA, I'). Then, (σ_I', I') is stored on C_i's citizen card.
Service provider registration: For the registration of service provider S_j, SRA computes (pk_Sj, evk_Sj, sk_Sj) = FHE.KeyGen(κ) as well as evk_MOA-ID,Sj and f_MOA-ID,Sj, and gives sk_Sj to S_j as well as evk_MOA-ID,Sj and f_MOA-ID,Sj to MOA-ID.
Authentication at online services:
1 & 2: After having received an authentication request from S_j, MOA-ID starts the citizen identification process by requesting C_i's Identity Link I' and its corresponding signature σ_I'. MOA-ID runs b = DSS.Verify(pk_SRA, I', σ_I') and proceeds if b = true and aborts otherwise. Let c_{a_k} be the encrypted sourcePIN; then MOA-ID computes c'_{a_k} = FHE.Eval(f_H, c_{a_k}, FHE.Enc(s_j, pk_MOA-ID), evk_MOA-ID), where s_j is the sector-specific identifier required by S_j and f_H is a circuit representing the evaluation of the SHA-1 hash function, which is used for ssPIN generation.

3: In this step MOA-ID requests the generation of a qualified electronic signature from C_i. Here we have the following possibilities: (1) MOA-ID requests no signature, since I' is signed and only available to C_i. (2) M produces a standard signature σ = DSS.Sign(sk_Ci, m*) for a special message m* on behalf of C_i (which, however, allows unique identification of C_i by MOA-ID). (3) M produces a ring signature σ = AS.Sign(sk_Ci, R, m*) for a special message m* on behalf of a ring R including pk_Ci.

4: MOA-ID verifies the validity of the signature σ either by running b = DSS.Verify(pk_Ci, m*, σ) or b = AS.Verify(R, m*, σ).

5: MOA-ID takes all attributes c_{a_i} in A_j from I', including the previously computed c'_{a_k}, and computes for every such attribute ĉ_{a_i} = FHE.Eval(f_MOA-ID,Sj, c_{a_i}, evk_MOA-ID,Sj), thus performing a re-encryption to pk_Sj, and assembles all these resulting ĉ_{a_i} into the SAML structure, which is then communicated to S_j. S_j can now decrypt all attributes using sk_Sj.
Evaluation
In this section we evaluate the different approaches based on selected criteria targeting several aspects, e.g. evaluating the overall architecture or aspects regarding the individual entities. We briefly describe the selected criteria for evaluation below and Table 1 shows a comparison of our three approaches.
Re-use of existing infrastructure: How much of the existing infrastructure of the Austrian eID system can be re-used, or do many parts need to be exchanged or modified?
Conformance to current workflow: Is the authentication process flow of the approach conformant to the existing citizen card authentication process flow?
Scalability: Is the approach applicable on a large scale or not?
Practicability: Can the authentication process be carried out within a reasonable time frame?
Extensibility: Is the applied infrastructure of the approach easily extensible to new requirements, e.g., adding new sectors and thus requiring new ssPINs?
Middleware complexity: Does the approach require high complexity or computational power from the client-side middleware?
Service provider effort: How much effort is required by the service provider adopting a particular approach?
Trust in MOA-ID: Does the approach require MOA-ID to be trusted?
Anonymity: Does the approach allow citizens to be anonymous with respect to MOA-ID?
Unlinkability: Are users unlinkable with respect to MOA-ID, i.e., can different authentications of one citizen be linked together?
Authentication without prior registration: The current Austrian eID system allows registration-less authentication. Is this feature still possible or not?
Table 1. Evaluation of the various approaches. We use ✓ to indicate that the criterion is fully applicable, × that it is not applicable, and ≈ that it is partly applicable. For quantitative criteria we use L for low, M for medium, and H for high.

Criterion | Approach 1 | Approach 2 | Approach 3
Re-use of existing infrastructure | ≈ | ≈ | ≈
Conformance to current workflow | ✓ | ≈ | ✓
Scalability | ✓ | ✓, ≈ | ✓
Practicability | ✓ | ✓, ≈ | ×
Extensibility | × | ≈ | ✓
Middleware complexity | L | L, H | L
Service provider effort | L | M | H
Trust in MOA-ID | L | L | L
Anonymity | ×, ✓ | ✓ | ×, ✓
Unlinkability | × | ×, ✓ | ×
Authentication without prior registration | ✓ | ✓ | ✓

In the following, we give some explanations why specific criteria could be fulfilled, partly fulfilled, or not fulfilled by the respective approach.
Re-use of existing infrastructure: This criterion can only be partly fulfilled by all approaches, since all approaches require some modification of the existing Austrian eID infrastructure. Approaches 1 and 3 require some kind of additional governance structure, as proxy re-encryption keys for service providers have to be generated and managed by the SRA. Additionally, the attribute values of the existing Identity Link structure must be replaced by encrypted values and the Identity Link needs to be augmented. For approach 1, the conventional signature of the Identity Link must also be replaced by a redactable signature. In contrast, approach 2, using anonymous credentials, requires a complete re-structuring of the Identity Link. However, all approaches can still rely on the same basic architectural concept of the Austrian eID infrastructure, using MOA-ID as identity provider.
Conformance to current workflow: Approach 1 and 3 fully comply with the current citizen card authentication process flow, hence they follow the steps identification, ssPIN provision, and authentication. Approach 2 is slightly different, as MOA-ID just checks if a provided credential is not revoked. The actual verification of the credential is carried out directly at the service provider.
Scalability: Basically, all approaches can be adopted on a large scale. Approaches 1 and 3 are similar to the existing Austrian eID system, as only a few attributes need to be exchanged within the Identity Link and the computational requirements for the middleware remain low. For approach 2, it must be distinguished whether one-show or multi-show anonymous credentials are used. For one-show credentials, revocation checking is a very lightweight process and hence easily adoptable. In contrast, revocation for multi-show credentials is much more complex and not easily applicable for a large number of users such as the Austrian population. Finally, any scalability doubts concerning MOA-ID can be neglected, as it is running in a public cloud providing nearly unlimited resources.
Practicability: Approaches 1 and 2 currently seem to be the most promising practical approaches. Approach 1 relies only on cryptographic mechanisms which can already be implemented efficiently. For approach 2, again we must distinguish between one-show and multi-show credentials. For one-show credentials, proof generation requires moderate effort. For multi-show credentials, proof generation for non-revocation proofs is complex and computationally expensive. This puts a lot of load on the client-side middleware, which makes approach 2 with multi-show credentials quite impracticable. For approach 3, the assumptions we made for FHE still require further research and are far away from any implementation. Even though we rely on public clouds, FHE is currently not practicable.
Extensibility: For adding new sectors, approach 1 would require a full exchange of the Identity Link, as it must be re-signed when adding a new encrypted ssPIN. The same issue holds for approach 2, since a new credential incorporating the new ssPIN must be stored on the citizen card, except when using scope-exclusive pseudonyms as proposed in ABC4Trust (https://abc4trust.eu). In approach 3, ssPINs are computed from the encrypted version of the sourcePIN and no modifications of the Identity Link are required.
Middleware complexity:
In approach 1, client-side middleware complexity is low, as only redaction of the Identity Link is required. Middleware complexity in approach 2 depends on the type of anonymous credentials used. Proof computation for multi-show credentials is computationally expensive, which would impose a significant computational burden on M [START_REF] Lapon | Analysis of Revocation Strategies for Anonymous Idemix Credentials[END_REF] when taking into account that the system covers all citizens of Austria. For approach 3, middleware complexity is low again, as its functionality is equal to that of current middleware implementations.
Service provider effort: The effort for service providers adopting approach 1 is low. Service providers just need to verify the data received by MOA-ID and do some decryption operations. For approach 2, the effort is slightly higher because service providers need to set up appropriate verification mechanisms for the claims provided by the user. The effort for service providers in approach 3 is the highest as FHE decryption is currently still computationally expensive.
Trust in MOA-ID: Since no sensitive citizen data such as the sourcePIN or any ssPIN are revealed to MOA-ID, no trust is required. In approaches 1 and 3, MOA-ID only sees encrypted citizen data. In approach 2, MOA-ID only sees the credential but none of its attribute values. However, some trust assumptions are still required, namely that MOA-ID works correctly.
Anonymity: For approach 2, anonymity is obvious, as the whole approach builds on anonymous credentials. Achieving anonymity in approaches 1 and 3 depends on the sub-process chosen for citizen authentication (signature creation). Both approaches 1 and 3 rely on three similar alternative sub-processes. Sub-process 1 does not request a citizen signature and fully relies on the Identity Link's signature for citizen authentication, as the Identity Link is only available to the citizen. In this case, the citizen stays fully anonymous with respect to MOA-ID. In sub-process 2, citizen signature creation is requested by MOA-ID for citizen authentication. In this case, citizens are uniquely identifiable by MOA-ID due to pk_Ci. Finally, within sub-process 3, ring signatures are created and enable citizen anonymity with respect to the defined ring.
Unlinkability: For our approaches, it is very hard to achieve unlinkability with respect to MOA-ID. In approach 1 and 3 citizens are linkable because they always present the same Identity Link and corresponding signature. Citizens could only be unlinkable in approach 2, where one-show credentials provide linkability and multi-show credentials provide unlinkability.
Authentication without prior registration: This criterion can still be fulfilled by all of our approaches.
Based on the results of our evaluation, we conclude that all approaches might be feasible, but not all of them might be really practical when considering an implementation of a cloud-based approach instead of the current Austrian eID system. Approach 1 might be the best, as it could be realized quickly and requires less effort for the client-side middleware and the service provider. However, linkability and higher effort for extensions are the drawbacks of this approach. Depending on the type of anonymous credential system, approach 2 might also be practicable and possible to implement. Although it entails more complexity and effort for the client-side middleware compared to approach 1, it could provide full anonymity and unlinkability. Finally, although approach 3 has its advantages, e.g., in terms of extensibility, and would be promising for the future, it is currently not practicable. Implementations of fully homomorphic encryption schemes are currently still in the early stages, which definitely hinders a fast adoption of this approach.
Acknowledgements: We would like to thank the anonymous reviewers for their valuable suggestions. The second author has been supported by the European Commission through project FP7-FutureID, grant agreement number 318424.
"994638",
"994639"
] | [
"65509",
"65509"
] |
Using the Conflicting Incentives Risk Analysis Method

Lisa Rajbhandari
Einar Snekkenes

Source: https://inria.hal.science/hal-01463835/file/978-3-642-39218-4_24_Chapter.pdf (2013)
Keywords: Risk analysis, risk, privacy, conflicting incentives
Risk is usually expressed as a combination of likelihood and consequence but obtaining credible likelihood estimates is difficult. The Conflicting Incentives Risk Analysis (CIRA) method uses an alternative notion of risk. In CIRA, risk is modeled in terms of conflicting incentives between the risk owner and other stakeholders in regards to the execution of actions. However, very little has been published regarding how CIRA performs in non-trivial settings. This paper addresses this issue by applying CIRA to an Identity Management System (IdMS) similar to the eGovernment IdMS of Norway. To reduce sensitivity and confidentiality issues the study uses the Case Study Role Play (CSRP) method. In CSRP, data is collected from the individuals playing the role of fictitious characters rather than from an operational setting. The study highlights several risk issues and has helped in identifying areas where CIRA can be improved.
Introduction
Risk is usually expressed as a combination of likelihood and consequence but obtaining credible likelihood estimates is difficult. Thus, there is a need to improve the predictability and the coverage of the risk identification process. This challenge is a consequence of limited availability of representative historic data relevant for new and emerging systems. Besides, people are not well calibrated at estimating probabilities [START_REF] Shanteau | Why study expert decision making? Some historical perspectives and comments[END_REF]. Furthermore, to improve the efficiency of the identification process, there is a need to identify issues that are key to risk discovery, and avoid activities that shed little or no light on potential problem areas. The Conflicting Incentives Risk Analysis (CIRA) [START_REF] Rajbhandari | Intended Actions: Risk Is Conflicting Incentives[END_REF] method addresses these issues by using an alternative notion of risk. In CIRA, risk is modeled in terms of conflicting incentives between the risk owner and other stakeholders in regards to the execution of actions. However, little evidence exists to suggest that CIRA is feasible to analyze risk in non-trivial settings.
In this paper, we explore to what extent CIRA is feasible for analyzing risk in non-trivial settings. We look into the feasibility of CIRA for analyzing privacy risks in a case study of an identity management system. Privacy is "too complicated a concept to be boiled down to a single essence" [START_REF] Danile | A Taxonomy of Privacy[END_REF]. We agree with the view of Solove [START_REF] Danile | A Taxonomy of Privacy[END_REF] that it is important to understand the socially recognized activities that cause privacy problems to an individual in order to protect it. As the data collected using CIRA will be sensitive and confidential, data is collected through Case Study Role Play (CSRP). CSRP is developed from the integration of case study [START_REF] Yin | of Applied Social Research Method Series[END_REF], persona [START_REF] Cooper | The Inmates Are Running the Asylum[END_REF] and role play [START_REF] Krysia | Role play: theory and practice[END_REF]. Personas are "hypothetical archetypes of actual users" and embody their goals [START_REF] Cooper | The Inmates Are Running the Asylum[END_REF]. Each role as described in the persona is played by a real person. Using CSRP, data is collected from the individuals playing the role of fictitious characters rather than from an operational setting. In this paper, we have extended the previous work on CIRA by (1) improving the data collection and analysis phase, and (2) showing that it is feasible to use CIRA in non-trivial settings. Our work has contributed to the development of CIRA and helped to identify practical problems that can be addressed in future research.
The rest of the paper is organized as follows. Related work is given in Sect. 2, followed by a description of the case in Sect. 3. In Sect. 4, we present the analysis of the case. We further present and discuss the results of our analysis in Sect. 5. Sect. 6 concludes the paper.
Related Work
There are many classical risk management approaches and guidelines. Usually, in these approaches, risk is specified as a combination of likelihood and consequence. The ISO/IEC 27005 [START_REF]Information technology -Security techniques -Information security risk management. ISO/IEC[END_REF] standard (its new version ISO/IEC 27005:2011), the ISO 31000 [START_REF]Risk Management -Principles and Guidelines[END_REF] standard (that supersedes AS/NZS 4360:2004 [START_REF]Risk management[END_REF]) and NIST 800-39 [16] provide the guidance on the entire risk management process. NIST 800-39 [16] supersedes NIST SP 800-30 [START_REF] Stoneburner | Risk Management Guide for Information Technology[END_REF]; its revised version NIST 800-30 Rev. 1 [START_REF]Guide for Conducting Risk Assessments[END_REF] is a supporting document to NIST 800-39. CORAS [START_REF] Mass | A Guided Tour of the CORAS Method[END_REF] is a model based method that uses Unified Modeling Language (UML) for security risk analysis. ISRAM [START_REF] Karabacak | ISRAM: information security risk analysis method[END_REF] is a survey based model to analyze risk in information security; surveys are conducted for gathering probability and consequence. In Risk IT [START_REF]Rolling Meadows. The Risk IT Framework[END_REF] framework (which is integrated into COBIT 5 [START_REF]COBIT 5: A Business Framework for the Governance and Management of Enterprise IT[END_REF]), risk is estimated as the combination of frequency (rate by which an event occurs over a given period of time) and magnitude of IT risk scenarios. In RAMCAP [START_REF]RAMCAP(Risk Analysis and Management for Critical Asset Protection) Framework[END_REF] (its updated version RAMCAP Plus), risk is estimated as the combination of threat, vulnerability and consequence. Cox has shown the limitations of estimating risk as the combination of threat, vulnerability and consequence [START_REF] Jr | Some limitations of "Risk = Threat x Vulnerability x Consequence" for risk analysis of terrorist attacks[END_REF].
There are several methods that specifically look into privacy risks, and are usually called Privacy Impact Assessment (PIA). For instance, there are Privacy Impact Guidelines of the Treasury Board of Canada Secretariat [START_REF]Privacy Impact Assessment Guidelines: A Framework to Manage Privacy Risks Guidelines[END_REF] and PIA of the Information Commissioner's Office, United Kingdom [START_REF]Privacy Impact Assessment Handbook[END_REF]. PIA is a "systematic process for evaluating the potential effects on privacy of a project, initiative, or proposed system or scheme" [START_REF] Wright | Should privacy impact assessments be mandatory?[END_REF]. It helps to identify and manage privacy risks for an organization that deals with personal data of its stakeholders. However, these methods usually do not attribute the events to people. Wright [START_REF] Wright | Should privacy impact assessments be mandatory?[END_REF] states that PIA should be integrated into risk management along with other strategic planning tools.
The CIRA Method [START_REF] Rajbhandari | Intended Actions: Risk Is Conflicting Incentives[END_REF] identifies stakeholders, their actions and the consequences of actions in terms of perceived value changes to the utility factors that characterize the risk situation. The idea is that risk is the combination of the strength of the force that motivates the stakeholder who is in a position to trigger an action that sends the risk owner to an undesirable state, and the magnitude of this undesirability. Risk magnitude is related to the degree of change in perceived utility caused by potential state changes.
Case Description: NorgID Identity Management System
The case description is fictitious, but the design of the system is inspired by MinID [START_REF] Difi | Direktoratet for forvaltning og IKT[END_REF]. The Identity Management System (IdMS) helps to manage the partial identities of end-users. An IdMS usually consists of three classes of stakeholders: the end-user, the Identity Provider (IdP) and the Service Provider (SP). The IdP is the organization that issues the credentials/electronic identity to the end-user. The SP is the organization that provides services to end-users after verifying their identities.

A-SOLUTIONS is an organization with 20 employees that manages a federated IdMS. It developed an authentication system called NorgID and a portal (ID-Portal). Their goal is to provide secure access to digital public services. NorgID is one of the IdPs providing authentication for logging on to a federation called 'ID-portal', as shown in Fig. 1. It provides the end-user with cross-domain Single Sign-On (SSO), i.e., the end-user needs to authenticate only once and can gain access to many services via the portal, such as tax, health care, pension, labor and other eGovernment services. The end-user can log on to the ID-portal using NorgID by providing his personal ID, a password and a one-time PIN code. NorgID uses two databases: (a) for storing personal data about the users and (b) for storing logs containing the usage of the IdMS for each user (the details regarding the collected information are not mentioned in the privacy policy). The personal information collected includes the social security number, PIN codes, password, email address, telephone number and address. NorgID has been quickly and widely adopted because of its easy access and features that have convinced enough people to use the application.
Analyzing Privacy Risks Using CIRA
In this section, we first provide the assumptions and considerations, along with the scoping for the risk analysis activity. We provide a brief summary of the method along with the steps for data collection (1-9) and analysis (10-13). We then apply the procedure to the given case of an IdMS. The analysis focuses on the risks faced by an end-user.
Assumptions and Considerations
For investigating the case, we used the Case Study Role Play (CSRP) method. We developed personas of the stakeholders based on empirical data collected for the representative stakeholders. However, for instance, in the case of a hacker, as the empirical data might not be easily elicitable, we used assumption persona [START_REF] Atzeni | Here's Johnny: a Methodology for Developing Attacker Personas[END_REF]. According to Atzeni et al. [START_REF] Atzeni | Here's Johnny: a Methodology for Developing Attacker Personas[END_REF], the assumptions may be derived from different sources of data for the type of individuals that are known to attack the systems. The scenarios were written to provide background information of the role to the participants. We assumed that the participants are honest when interacting with the risk analyst. During the data collection phase, the participants were presented with a set consisting of 3 relevant utility factors. We also asked the participants to provide other factors that they valued or gave them perceived benefit. However, for the simplification of the case we have not considered those factors.
Scoping
Scoping consists of the activities used to determine the boundary for the risk analysis activity. We (as the risk analyst) assumed that the system is in a certain initial state. Moreover, we focused on privacy risk events that are caused by the intentional behavior of a stakeholder.
Summary of CIRA
CIRA identifies stakeholders, actions and perceived expected consequences that characterize the risk situation. In CIRA, a stakeholder is an individual (i.e. a physical person) that has some interest in the outcome of actions that are taking place within the scope of significance. There are two classes of stakeholders: the strategy owner and the risk owner. The strategy owner is the stakeholder who is capable of triggering an action to increase his perceived benefit. Typically, each stakeholder has an associated collection of actions that he owns. The risk owner is the stakeholder whose perspective we consider when performing the risk analysis, i.e., he is the stakeholder at risk. By utility, we mean the benefit as perceived by the corresponding stakeholder. Utility is composed of utility factors. Chulef et al. [START_REF] Chulef | A Hierarchical Taxonomy of Human Goals[END_REF] identify the utility factors relevant for our work. Each factor captures a specific aspect of utility, e.g. prospect of wealth, reputation, social relationship. The procedure is given in Table 1 along with the approximate time required for each of the steps when implementing the NorgID case study (the required time will be further explained in Sect. 5).
Table 1. Procedure in CIRA with approximate time required for each step when implementing NorgID IdMS.
Steps and approximate time (mins):
1. Identify the risk owner (includes development of persona): 30
2. Identify the risk owners' key utility factors: 30
3. Given an intuition of the scope/system, identify the kind of strategies/operations which can potentially influence the above utility factors: 30
4. Identify roles/functions that may have the opportunities and capabilities to perform these operations: 60
5. Identify the named strategy owner(s) that can take on this role (includes development of persona): 90
6. Identify the utility factors of interest to this strategy owner(s): 90
7. Determine how the utility factors can be operationalized: 240
8. Determine how the utility factors are weighted by each of the stakeholders: 120
9. Determine how the various operations result in changes to the utility factors for each of the stakeholders: 280
10. Estimate the utility for each stakeholder: 20

The application of CIRA to the NorgID IdMS is presented below.
1. Identify the risk owner. At first we need to determine the risk owner. The user (Bob) is the risk owner. We assume he represents the general users of NorgID. The persona of Bob is given in Table 2.
2. Identify the risk owners' key utility factors. This step consists of determining the key utility factors for the risk owner. We presented Bob with three utility factors: privacy, satisfaction from the service and usability, along with an explanation of each. We collected his opinion on whether he thought (as a user of NorgID) these factors were important and would give him perceived benefit.
3. Given an intuition of the scope/system, identify the kind/classes of operations/strategies which can potentially influence the above utility factors. For determining the strategies, we look into the taxonomy of activities that cause privacy problems as provided by Solove [START_REF] Danile | A Taxonomy of Privacy[END_REF]. The strategies that we considered are:
-Secondary use of Bob's information (SecUse): using Bob's information for a purpose other than the one mentioned in the policy, without getting his consent.
-Breach of confidentiality of Bob's information: "breaking a promise to keep a person's information confidential" [START_REF] Danile | A Taxonomy of Privacy[END_REF]. We consider two strategies that can lead to a breach of confidentiality: Sharing credentials (ShareCred) and Stealing Information (StealInfo).
4. Identify the roles/ functions that may have the opportunities and capabilities to perform these operations. There can be many strategy owners capable of executing these strategies. However, for this paper we consider only three stakeholders as the objective is to show the feasibility of the CIRA method.
The stakeholders are CEO and System Administrator of A-SOLUTIONS, and a hacker capable of executing SecUse, ShareCred and StealInfo operations respectively.
5. Identify the named strategy owner(s) that can take on this role. In this step, we pinpoint the strategy owner(s) that are in the position of executing the above strategies. We consider the stakeholders: John (CEO), Nora (System Admin) and X (Hacker). Their personas are provided in Table 2.
6. Identify the utility factors of interest to this strategy owner(s).
In CIRA, as we consider the perception of an individual, each relevant stakeholder is an expert. Like before, we provided a list of utility factors for John, Nora and Hacker X to choose from. For the hacker, we identified his utility factors from the existing literature [START_REF]The Honeynet Project[END_REF]. The identified utility factors are, for John (CEO): privacy reputation, wealth for business continuity and compliance; for Nora (System Admin): availability, trust and free time; and for X (Hacker): wealth, status and ego.
7. Determine how the utility factors can be operationalized. For each identified utility factor, we determine the scale, measurement procedure and semantics of values, and explain the underlying assumptions, if any. The brief explanations of the metrics presented in Table 3 and Table 4 give a flavor of the metrics we used in the analysis for the stakeholders Bob (User) and John (CEO). It is to be noted that different flavors of the metrics exist and can be used according to the context. Due to space constraints, we leave out the details of the metrics for the utility factors of Nora (System Admin) and X (Hacker).
8. Determine how the utility factors are weighted by each of the stakeholders. We asked Bob to rank the utility factors based on its importance. Then, for collecting the weights for the utility factors the following question was asked-"Given that you have assigned a weight of 100 to utility factor #1, how much would you assign to utility factor #2, #3 and so on (on a scale of 0-99)?". Bob ranked and assigned weights of 100, 80, 70 to the utility factors privacy, satisfaction and usability respectively as given in Table 5.
Similarly, the weights of the utility factors according to their ranking for each of the strategy owners were also collected. John (CEO) assigned weights of 100, 80 and 50 to the utility factors compliance, privacy reputation and wealth respectively. Nora (System Admin) assigned weights of 100, 80 and 78 to the utility factors service availability, free time and trust respectively. X (Hacker) assigned weights of 100, 90 and 85 to the utility factors wealth, ego and status respectively.
9. Determine how the various operations result in changes to the utility factors for each of the stakeholders (start with risk owner). We assume the system/environment to be in a fixed initial state and that all the players are utility optimizing.

Table 3. Metrics for the utility factors of the risk owner Bob (User).
Utility factor: Privacy (%)
Definition: It refers to the extent to which you have control over your personal information.
Measurement procedure: Defined by

1/(1 + N) (1)

where N is the expected/projected number of incidents per month. N is obtained from the analysis of the scenario directly or indirectly caused by the events triggered by various stakeholders [START_REF] Rajbhandari | Intended Actions: Risk Is Conflicting Incentives[END_REF]. If N = 0, the value of privacy is 100%; if N = 1, the value of privacy decreases to 50%, and so on. That is, with an increasing number of incidents, the value of privacy decreases.

Utility factor: Satisfaction (%)
Definition: It refers to the extent to which you perceive the continuance usage of the portal to access services based on your experience.
Measurement procedure: Modeled as expectation fulfillment relating to: service availability, support (responsiveness (scale: %), effectiveness (scale: %)) and service completeness. Service availability is the number of interactions with a response time of less than 1 second divided by the total number of interactions. Responsiveness is given as

1/(1 + t) (2)

where t is the average time in mins required to 'solve' a problem reported by the user. Effectiveness is the 'extent' to which the problem is solved. Service completeness relates to the number of features that the service actually delivers divided by the number of features that the user could reasonably expect (see [START_REF] Rajbhandari | Intended Actions: Risk Is Conflicting Incentives[END_REF]).

Utility factor: Usability (%)
Definition: It refers to the extent to which a user perceives the ease of interaction with the portal.
Measurement procedure: Modeled as the user's past experience with using the service. The value can be obtained by doing a survey. A scale of 0 to 100% is used: a value of 0 denotes that it takes more than 30 mins to get acquainted with the service; 25% denotes it can be done within 20-30 mins; 50% denotes it takes 10-20 mins; 75% that it takes less than 10 mins; and 100% denotes it takes less than 5 mins.
Table 4. Metrics for the utility factors of the strategy owner John (CEO).
Utility factor: Privacy Reputation (%)
Definition: It refers to the reputation of the company with respect to privacy incidents (e.g. loss, misuse or breach of personal information).
Measurement procedure: Modeled as the user's expectation relating to the future behavior of the company in terms of the experience of others and own experience, both defined by

1/(1 + P) (3)

where P is the number of privacy incidents. P is obtained from the survey. If P = 0, the value of reputation is 100%; if P = 1, the value decreases to 50%, and so on. That is, with an increasing number of incidents, the value of reputation decreases (see [START_REF] Rajbhandari | Intended Actions: Risk Is Conflicting Incentives[END_REF]).

Utility factor: Wealth (Million €)
Definition: The unit for wealth is currency units. The weight for wealth will then specify how much utility each currency unit gives.
Measurement procedure: It is obtained from the investigation of the entity by the risk analyst.

Utility factor: Compliance (%)
Definition: It refers to the extent to which you think the company would benefit by following the rules and regulations. This demonstrates the willingness of the company to take the necessary steps to protect the personal information of its stakeholders.
Measurement procedure: Modeled as the percentage of compliance with legislation (e.g. Data Protection Act, EU directive). At first the risk analyst needs to gather the rules that need to be followed by the company. A value of 0 means that no rules are followed; 25% means that 1/4 of those rules are followed; 50% means that half of those rules are followed; 75% means 3/4 of the rules are followed and 100% means all rules are followed.

Returning to Step 9: by utility optimizing, we mean that the players are optimizing their behavior relative to the weighted sum of the elements in their utility factor vector. For each of the identified utility factors, we determine the initial and final values after the strategies of the players are executed (for the utility factors' valuation, we utilize the metrics explained above). We use the additive utility function of MAUT to estimate the utility. The additive utility function for a given player is defined to be the weighted average of its individual utility functions [START_REF] Clemen | Making Hard Decision: An Introduction to Decision Analysis[END_REF], given as:
U = Σ_{k=1}^{m} w_k · u(a_k) (4)
where m is the number of utility factors of the player, w_k is the assigned weight of utility factor a_k with Σ_{k=1}^{m} w_k = 1, and u(a_k) is the utility function for the utility factor a_k. For our case study, Table 6 depicts the normalized weights (for the assigned weights in Step 8) for the utility factors, their initial values (IV) and their final values, if the strategies of the stakeholders were to be executed. For the other elements comprising the utility factors, we make the assumption that the stakeholders perceive each of these to be equally important. The values for the metrics are obtained either based on our investigation or by conducting interviews/surveys with the participants. Usually, the individual utility functions (i.e. utility factors in our case) are assigned values in the interval of 0 (worst) to 1 (best) when using MAUT. In our case, we could easily compress wealth to the interval 0 to 1; however, this would not be particularly helpful as most of the values would be clustered right at the end. Thus, it is more intuitive to utilize the given scales for the utility factors' valuation. Moreover, the units of the weights are such that the utility is unit-less. Next, the values for each of the stakeholders are determined.
For Bob (User). We determine the values of the first two utility factors for Bob from our investigation and the last one (usability) is based on the survey. To determine the value of privacy to the user, we investigated the number of privacy incidents at each state. Our findings are based on several studies on issues such as how secondary usage of data and breach of confidentiality will impact the end-user. Based on our study, N = 0 per month at the initial state. N = 11, N = 5 and N = 20 when John, Nora and Hacker X use their respective strategies. By instantiating (1) with the value of N, we obtain the IV of privacy as 100% and its final values as 8%, 17% and 5% respectively.
Note that the values for satisfaction are obtained using the techniques borrowed from MAUT and from our investigation. For support (an element of satisfaction), the values for the responsiveness are obtained after instantiating (2) with t = 6 at the initial state and t = 5 when the other strategies of the stakeholders are executed. Thus, responsiveness increased from the IV of 14% to 17% for all three strategies. Besides, it was determined that effectiveness also increased from 90% to 92% when the three strategies of the stakeholders are executed. We then evaluate the values for support instantiating (4) with the obtained values of responsiveness and effectiveness: for the IV as 0.50*14+0.50*90 = 52%. Similarly, the final values for the three strategies are evaluated as 55%. The following values were determined for the other elements of satisfaction: availability increases from 85% to 87% and service completeness increases from 80% to 82% after the three strategies are executed. Thus, using (4) and the values determined for the other elements comprising our satisfaction utility factor, the obtained IV is 72% and the final values for the other strategies are evaluated as 74%. The value of usability as obtained from Bob was 80% for all cases.
Due to lack of space, we leave out the details of the computations of changes to the utility factors belonging to the other stakeholders. The results can be found in Table 6.
10. Estimate the utility. We again use the techniques from MAUT to estimate the utility for each of the strategies for each player using (4). We make the simplifying assumption that utility is linear. For our case study, we use (4) to compute the utilities for the stakeholders with the values given in Table 6. In the initial state, the utilities are given as follows:
For Bob (User): 0.40 • 100 + 0.32 • 72 + 0.28 • 80 = 85. For John (CEO): 0.43 • 80 + 0.35 • 67 + 0.22 • 5 = 59. Similarly, for the other stakeholders, the utilities are obtained as given in Table 7. A small sketch of this computation is given below.
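To make the arithmetic behind these utilities easy to reproduce, the following minimal Python sketch recomputes Bob's initial utility and his utility after John plays SecUse, using the normalized weights from Step 8 and the values reported above; the script and its variable names are our own illustration, not part of the CIRA tooling.

```python
# Recomputes Bob's utilities from eq. (4); numbers mirror the text above,
# but the code itself is only an illustrative sketch.
weights = {"privacy": 100, "satisfaction": 80, "usability": 70}   # Step 8 weights
initial = {"privacy": 100, "satisfaction": 72, "usability": 80}   # initial values
after_secuse = {"privacy": 8, "satisfaction": 74, "usability": 80}

total = sum(weights.values())
norm = {k: w / total for k, w in weights.items()}                 # 0.40, 0.32, 0.28

def utility(values):
    # eq. (4): weighted sum of the utility-factor values
    return sum(norm[k] * values[k] for k in values)

iv = utility(initial)                       # ~85
u_after = utility(after_secuse)             # ~49
print(round(iv), round(u_after), round(u_after - iv))   # 85 49 -36
```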
11. Compute the incentives. We need to compute the incentives (i.e. changes in utilities) for each of the strategies for each player. The change in utility ∆ is the difference between the utility of the player in the state resulting from strategy use and the initial state. In our case study, from Table 7, when John uses the SecUse option, ∆ for Bob and himself are -36 and -22 respectively. When Nora uses the ShareCred option, the ∆ for Bob and herself are -32 and 22 respectively. In addition, when the hacker uses the StealInfo operation, the ∆ for Bob and himself are -37 and 47 respectively.
12. Determine risk. This can be achieved by investigating each of the strategies with respect to the sign and magnitude of the changes determined in the previous step. In our case study, when John uses the SecUse option, it results in a negative change in utility for both players (it falls in the third quadrant of the incentive graph shown in Fig. 2). Thus, we know it is an undesirable situation for both players and they both want to move out of this quadrant; this might result in co-operation. Nora's degree of desirability to play ShareCred, however, is slightly higher, as it leads her to a better position with a gain of 22. In this case, 22 is the strength of the force that motivates Nora to send Bob to an undesirable state, -32 is the magnitude of this undesirability, and the combination of the two is the risk (-32, 22). Similarly, it is clear that Hacker X's degree of desirability to play StealInfo is high, as it leads him to a better position with a gain of 47, while -37 is the magnitude of the undesirability faced by Bob, which results in the risk (-37, 47). The small sketch below summarizes this classification.
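The sketch below (our own illustration, not part of CIRA) classifies each strategy's risk pair by the signs of the two incentive changes, mirroring the quadrant reading of the incentive graph.

```python
# Each strategy yields a pair (C, I): the consequence for the risk owner (Bob)
# and the strategy owner's incentive. The sign pattern selects the quadrant.
deltas = {                      # (delta for Bob, delta for the strategy owner)
    "SecUse (John)":    (-36, -22),
    "ShareCred (Nora)": (-32,  22),
    "StealInfo (X)":    (-37,  47),
}

for strategy, (c, i) in deltas.items():
    if c < 0 and i > 0:
        verdict = "risk: the risk owner is harmed while the strategy owner gains"
    elif c < 0 and i < 0:
        verdict = "undesirable for both players; co-operation is plausible"
    else:
        verdict = "no incentive-driven risk for the risk owner"
    print(f"{strategy}: (C={c}, I={i}) -> {verdict}")
```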
13. Evaluate risk. We identify the risk acceptance and rejection criteria for the risk owner to determine whether a specified level of risk is acceptable or not. In our model, we make the simplifying assumption that all strategy owners will need the same time to act if they have the same magnitude of incentive. The incentive graph in Fig. 2 plots the risk pairs, among them (-36, -22), (-32, 22) and (-37, 47), obtained when the strategy owners execute the strategies SecUse, ShareCred and StealInfo respectively. The risk pairs are represented by (C, I_i), where C is the consequence for the risk owner and I_i refers to how strong strategy owner i's incentive to make the first move is, i.e. the magnitude of the incentive. For instance, for the risk pair 'b', Bob gets a value of C of -11 when the final values for privacy, satisfaction and usability under the execution of any of the strategies would be 95%, 70% and 50% respectively (keeping the weights of the utility factors and their initial values as obtained before). Note that this is one of several possible combinations that gives Bob the consequence of -11. Nora has an incentive of 6 when the final values for availability, free time and trust are 90%, 10% and 53% respectively. Similarly, the possible combinations for the other stakeholders can be determined.
To determine the risk acceptance criteria, we asked Bob (User): 'How strong a temptation is it acceptable to give a strategy owner to execute the strategy, so as to cause him (i.e. Bob) a given loss?'. From the above risk pairs, he accepted the risk pairs a and b (represented by the light gray square) as shown in Fig. 2. However, for the risk pair c, he was willing to accept the risk only if Nora was in the position of executing the strategy (represented by the triangle) and was unsure in case other strategy owners executed their strategy. The remaining risk pairs were not acceptable to him (represented by the black square).
Our findings can be grouped into the following categories: (1) application of CIRA to the NorgID IdMS, (2) feasibility of CIRA in terms of its complexity and the risk analyst effort required, (3) improvements made and (4) some limitations of CIRA that require further work. The application of CIRA to the NorgID IdMS resulted in the determination of the risks faced by the risk owner. We were further able to represent acceptable/unacceptable risk events by means of an incentive graph, which was easy to communicate to the risk owner.
Assuming we have n stakeholders, and each stakeholder owns s strategies and has u utility factors that go into the computation of his utility, the effort of the various tasks can be estimated as follows. The total number of strategies to be considered will be n * s. The total number of utility factors to be considered will be n * u. However, in practice, it is expected that utility factors will be taken from a limited set. To determine the risk acceptance criteria, it will suffice to ask the risk owner n * s yes/no (i.e. accept/reject) questions. Thus the complexity of CIRA in terms of human effort will be in the order of n * (u + s).
By instantiating n * (u + s) with the values n = 4, s = 1 and u = 3, as in the NorgID case study, we obtain an estimate of the complexity of 16. Furthermore, the effort in terms of the total amount of time spent in doing the case study was determined to be approximately 27 hrs (which includes the time given in Table 1 along with the time for initial preparation (1 hr), scenario construction to provide the background information of the roles to the participants (2 hrs), role play selection and guidance (2 hrs) and documentation (1.5 hrs)). The given hours are approximate values; the values were jotted down only after the actual process was completed. It is clear that the steps for determining the changes to the utility factors with respect to the operations (Step 9) and the operationalization of the utility factors (Step 7) required the highest amounts of time, i.e. approximately 280 and 240 mins respectively. When the problem space grows, for instance to the values n = 8, s = 10 and u = 5, we would expect that the risk analyst would have to spend in the order of 200 hours to complete the analysis (a rough extrapolation is sketched below). Note that the elapsed time may be longer. CIRA is still in a development phase and the steps will be optimized. For example, a comprehensive library of utility factors will be developed; it is expected that this library will speed up the data collection phase. Moreover, tools will be developed to support the risk analyst.
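The rough extrapolation mentioned above can be reproduced as follows; the assumption that analyst time scales linearly with the number of n * (u + s) units is ours.

```python
# Rough effort extrapolation for CIRA, assuming effort scales with n * (u + s).
def complexity(n, s, u):
    return n * (u + s)            # human effort is in the order of n * (u + s)

observed_units = complexity(4, 1, 3)        # 16 units in the NorgID study
observed_hours = 27.0                       # total analyst time reported
hours_per_unit = observed_hours / observed_units

larger_case = complexity(8, 10, 5)          # 120 units
print(hours_per_unit * larger_case)         # 202.5, i.e. in the order of 200 hours
```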
Learning from the case study, we discovered the following issues that resulted in improvements: the procedure was updated to ease the data collection process, and a data collection manual was developed for the risk analyst. Interview/survey responses indicated that it is essential that the risk analyst and the participants have the same understanding of the concepts (e.g. utility factors) used during the data collection phase. Thus, even though a lot of resources were required, for instance in the operationalization of the metrics for the utility factors and in determining their values, we focused on these key issues in order to improve data quality.
The following limitations of CIRA were identified: (1) We have assumed that all the participants are honest when interacting with the risk analyst. However, the fact that they might be reluctant to provide information or give wrong information during the interview/survey needs further investigation. (2) As metrics have always been a challenge in information security, for some of the utility factors it was difficult to formulate the metrics. Hence, we need to collect definitions of utility factors and perform their validation. (3) Determining whether an obtained set of utility factors represents the complete set for a particular stakeholder in a given context requires further work. (4) More work is also needed in capturing the uncertainties in relation to estimates, using interval arithmetic or bounded probabilities instead of point values. (5) When assigning weights, the same scale is used for all the stakeholders; the mapping of the scale of one stakeholder to that of another also needs further investigation. (6) Finally, tool support for CIRA needs to be developed.
Conclusion
In this paper, we have explored the feasibility of CIRA to analyze risk in a non-trivial setting. The CIRA method is still at an early stage of development. However, the results from our case study suggest that it is possible to use CIRA in such settings, and that the method helps the analyst to get a better understanding of the risks. Our work has contributed to the development of CIRA and helped to identify practical problems that can be addressed in future research.
Fig. 1. NorgID Identity Management System.
Fig. 2. The Incentive graph.
Table 2. Personas of risk owner and strategy owners.

Role: End-user; Name: Bob; Description: 30 years old, local school teacher, regular user of NorgID with general IT knowledge; aware of some privacy issues mainly due to the media coverage of data breaches (associated with services such as social networking and health care).

Role: CEO; Name: John; Description: 50 years old, ensures the overall development and relationship with its stakeholders; has motivation to increase the company's service delivery capacity.

Role: System Admin; Name: Nora; Description: 29 years old, known for her friendly behavior and highly trusts her co-workers; ensures both the NorgID and ID-Portal are functioning properly and secure; manages the access permission for internal staff to the server; in her absence, to assure that coworkers get proper system function, she usually lets them access servers and even shares important credentials to the server.

Role: Hacker; Name: X; Description: 28 years old, skilled in computing and interested in new challenges; to pursue his interest he left his job a year ago and now completely spends his time by gathering knowledge through firsthand experience; wants to earn money and also build status for himself in the so-called hackers' community.
Table 5. Utility factors for Bob (User).
Rank Utility factors Weights
1 Privacy 100
2 Satisfaction 80
3 Usability 70
Table 6. Final Values of the Utility Factors after the Strategy of the Strategy Owners are Executed.
Table 7. Matrix of Utilities and Change in Utilities w.r.t. Strategy of the Strategy Owners.

Stakeholders      IV   SecUse  ShareCred  StealInfo | ∆ SecUse  ∆ ShareCred  ∆ StealInfo
Bob (User)        85   49      53         48        |   -36        -32           -37
John (CEO)        59   37      -          -         |   -22         -             -
Nora (Sys Admin)  48   -       70         -         |    -          22            -
X (Hacker)        29   -       -          76        |    -           -            47
Acknowledgement. The work reported in this paper is part of the PETweb II project sponsored by The Research Council of Norway under grant 193030/S10. We would like to thank the anonymous reviewers for their valuable comments.
"1001107",
"1001108"
] | [
"50838",
"50838"
] |
01463837 | en | [
"info"
Chris B Simmons
email: [email protected]
Sajjan G Shiva
email: [email protected]
Harkeerat Bedi
email: [email protected]
Vivek Shandilya
email: [email protected]
ADAPT: A Game Inspired Attack-Defense And Performance Metric Taxonomy
Keywords: Game Theory, Taxonomy, Security Management
Game theory has been researched extensively in network security, demonstrating an advantage of modeling the interactions between attackers and defenders. Game theoretic defense solutions have continuously evolved in recent years. One of the pressing issues in composing a game theoretic defense system is the development of consistent, quantifiable metrics to select the best game theoretic defense model. We survey existing game theoretic defense, information assurance, and risk assessment frameworks that provide metrics for information and network security and performance assessment. Coupling these frameworks, we propose a game theoretic approach to an attack-defense and performance metric taxonomy (ADAPT). ADAPT uses three classifications of metrics: (i) Attacker, (ii) Defender, (iii) Performance. We proffer ADAPT as an aid to defining game theoretic performance metrics. We further propose a game decision system (GDS) that uses ADAPT to compare competing game models. We demonstrate our approach using a distributed denial of service (DDoS) attack scenario.
INTRODUCTION
Game theory has received increased attention from network security researchers investigating defense solutions. The game theoretic approach has the advantage of modeling the interactions between attackers and defenders, where each player has the ability to analyze the other player's behavior. This may enable an administrator to develop better strategic defenses for the system. For instance, when there are many actions available to the attacker and defender, it becomes difficult to develop solution strategies. Hamilton et al. [START_REF] Hamilton | The role of game theory in information warfare[END_REF] outlined the areas of game theory which are relevant to information warfare, using courses of action with predicted outcomes and what-if scenarios. Jiang et al. [START_REF] Jiang | A Stochastic Game Theoretic Approach to Attack Prediction and Optimal Active Defense Strategy Decision[END_REF] proposed an attack-defense stochastic game model to predict the next actions of an attacker using the interactions between an attacker and defender. It is therefore vital to give a network administrator the capability to compare multiple strategies using the appropriate metrics to optimize the network.
In this work we consider various metrics for game theoretic models. Bellovin [START_REF] Bellovin | On the Brittleness of Software and the Infeasibility of Security Metrics[END_REF] inferred that designing proper metrics for security measurement is a tough problem that should not be underestimated. Current research is lacking in terms of providing information a system administrator can use in determining metrics to quantify performance of diverse game theoretic defense models. One of the problems faced by research pertaining to security games is how to evaluate different network security game models, in terms of performance, accuracy, and effectiveness. The Institute for Information and Infrastructure Protection (I3P) has identified security metrics as priority for current research and development [START_REF]Challenges Related to Economics, Physical Infrastructure and Human Behavior: An Industry, Academic and Government Perspective, The Institute for Information Infrastructure Protection (I3P)[END_REF]. We extend this notion to provide a comprehensive taxonomy to aid in assessing the overall performance and quality of a game theoretic model. Prior game theoretic research mainly focused on classifying metrics based on a distribution of games across various game types and models. Further, the game theoretic defense mechanisms in literature are arbitrary and ad hoc in nature. This makes game theoretic defense models very complex and designed towards application specific scenarios [START_REF] Gopalakrishnan | An architectural view of game theoretic control[END_REF]. We propose an alternative real world approach by classifying our metrics based on a real world distributed denial of service (DDoS) scenario.
In this paper, we attempt to address limitations in research through the proposed game theoretic attack-defense and performance metric taxonomy (ADAPT), which is a taxonomy of game related metrics. We define a game as the interaction between two players with conflicting goals. In our case these players are the attacker (hacker) and the system administrator (defender). Game metrics are a set of tools which are used to measure the various kinds of impact a game model has on each of its players. We classify these game metrics based on their impact on the attacker, the defender, and the performance of the game model on the system on which it is run. Prior research has shown, with the use of game theory, how the interaction should take place based on the strategy selected from the game model. In this traditional scenario one game model is assessed relative to a particular attack. He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] proposed a Game Theoretical Attack-Defense Model (GTADM), similar to ADAPT, that quantifies the probability of threats in constructing a risk assessment framework. We extend the general game theory steps and concepts proposed in He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] with the use of ADAPT, which is able to assess competing game models and select the game model which is suitable for defense. This provides a defender with a preliminary view of multiple game models associated with a particular attack. This research is composed of attack attributes and associated metrics that can be used to assess and compare competing game models. Thus, ADAPT provides a metric-centric approach to selecting the optimal game model. A game model is used to evaluate the security level, performance, and quality of a system, which will aid in selecting the appropriate game defense model at a specific time in the game. These metrics belong to different game theoretic defense models and to information assurance and risk assessment frameworks. Prior work towards developing a security metric taxonomy focuses on three core classes of metric classifications: organizational, operational, and technical [START_REF] Bryant | Developing a framework for evaluating organizational information assurance metrics programs[END_REF][START_REF] Savola | A Novel Security Metrics Taxonomy for R&D Organizations[END_REF][START_REF] Wesner | Winning with quality: Applying quality principles in product development[END_REF]. In proposing ADAPT, we focus on metrics with a technical association.
This paper is organized as follows: In section 2 we provide a motivating scenario, and in section 3 we define characteristics of good security metrics, followed by our proposed metric taxonomy. In section 4 we define the metrics used in the game inspired attack-defense and performance metric taxonomy. In section 5 we introduce a game model comparison system based on ADAPT and the methodology used to map metrics within ADAPT, followed by ADAPT applied within the Game Inspired Defense Architecture (GIDA). In section 6 we provide a brief literature review on performance and security metrics. In section 7, we conclude our paper and highlight future work.
MOTIVATING SCENARIO
In this section we start with a brief overview of game theory concepts and provide a motivating example, which highlights the relationship to the proposed metrics that will assess game defense models. There are four basic concepts of game theory: (i) A player is the basic entity of a game who decides to perform a particular action. (ii) A game is a precise description of the strategic interaction that includes the constraints of, and payoffs for, actions that the players can take, but does not correspond to actual actions taken. (iii) A strategy for a player is a complete plan of actions in all possible situations throughout the game. (iv) A Nash equilibrium is a solution concept that describes a steady state condition of the game; no player would prefer to change his strategy, as that would lower his payoffs given that all other players are adhering to the prescribed strategy. Roy et al. [START_REF] Roy | A Survey of Game Theory as Applied to Network Security[END_REF] surveyed existing game theoretic solutions designed to enhance network security. They emphasized that game theory has the advantage of treating explicitly intelligent decision makers having divergent interests. Now, let us consider a scenario in which a DDoS attack is taking place. There are multiple game models to choose from for defense, but the defender is unsure which model has performed the best historically to make a determination. The defender can view the strategy spaces of all the games associated with the DDoS attack; however, it will take the defender a significant amount of time to select the best game available. In modeling such player strategies, the DDoS attack presents a challenging scenario, which has increased in sophistication [START_REF] Mirkovic | A Taxonomy of DDoS Attack and DDoS Defense Mechanisms[END_REF] and motivates our research in this paper. Although research has evolved relative to the DDoS attack, it is continuously a scenario that deserves much attention due to the simplicity and dominant nature of coordinated botnet use [START_REF] Li | Automating analysis of large-scale botnet probing events[END_REF] to cause an enormous amount of damage. Moreover, the punishment relative to a DDoS attack is minimal to non-existent. Typically, when a DDoS attack takes place in the real world, attackers lease nodes to conduct an attack against a target, or set of targets. Once the attack is complete, the leased nodes are returned to the pool, where another party will lease those nodes, allowing a constant change in IP addresses. Due to the nature of the DDoS attack, the most common defense against DDoS attacks is to block nodes. Parameswaran et al. [START_REF] Parameswaran | A game theoretic model and empirical analysis of spammer strategies[END_REF] utilized a blocklist as a defense mechanism in a spammer's game theoretic model. The majority of DDoS attacks are simply blocked, which does not impose a punitive cost, and punishment by legal action is rare.
Therefore, in this work the DDoS example is considered by and large a static one-shot game to provide an intuitive example of how the proposed taxonomy can be implemented within a system. When we look at network attacks in general, there are fundamental components that are likely present in a DDoS attack. Mirkovic and Reiher [START_REF] Mirkovic | A Taxonomy of DDoS Attack and DDoS Defense Mechanisms[END_REF] echoed this point by placing emphasis on crucial features of an attack to comprehend the detailed differences. Hence, we believe the network has some tangible attack components that will allow experiential knowledge mapping to ADAPT metrics. The goal is to produce a summary of metrics, which will in turn be used to determine the best game model pursuant to the metrics selected within the ADAPT framework. This answers the question posed by Mirkovic and Reiher [START_REF] Mirkovic | A Taxonomy of DDoS Attack and DDoS Defense Mechanisms[END_REF]: how would two different defense models perform under a given attack? We represent a generalization of how each attribute will be mapped to the attacker, the defender, and the performance of the target system. The scope of this work investigates metrics selected based on experiential knowledge, as opposed to metrics autonomously selected by the system.
Continuing our scenario, an attacker initiating a DDoS attack acquires a number of nodes to conduct the attack. This increases the amount of bandwidth consumed by the attacker and increases the attacker's probability of being caught by the defender. By generalizing attack components and associating them with game inspired metrics, we are able to provide an overview of game model performance. This enables the defender to select the optimal game model for defense. We further illustrate our scenario in section 5.
CHARACTERISTICS OF GAME INSPIRED METRICS
We use characteristics of security metrics to further assist with evaluating metrics for game theoretic defense models. A performance study requires a set of metrics to be chosen for analysis [START_REF] Bryant | Developing a framework for evaluating organizational information assurance metrics programs[END_REF]. Performance analysis requires comparing two or more systems and finding the best among them [START_REF] Bryant | Developing a framework for evaluating organizational information assurance metrics programs[END_REF]. We extend this to game theoretic defense models, where the network administrator has the ability to select the best game suitable for optimal defense at a specific time. A dynamic selection process for the best game permits a network administrator to systematically choose an applicable defense solution. The game selection is based on the knowledge of how well a game model represents the considered security situation. Our methodology for game model selection is highlighted in section 5.
There is increased research involving the development of taxonomies for security metrics, where characteristics are provided to ensure organizations understand the metrics when quantifying and evaluating security. Understanding the metrics requires a distinction between metric and measurement. Metrics are the result of a comparison of two or more baseline measurements over time, whereas a measurement is a single point-in-time view of specific factors [START_REF] Payne | A Guide to Security Metrics[END_REF]. Swanson [START_REF] Swanson | NIST Special[END_REF] defined metrics as tools designed to facilitate the appropriate decision for a specific situation and to improve performance and accountability through the collection, analysis, and reporting of pertinent performance information.
In the Federal Plan for Cyber Security and Information Assurance Research and Development of 2006, the National Science and Technology Council (NSTC) recommended developing information assurance metrics as a priority in federal agencies [START_REF]Federal plan for cyber security and information assurance research and development[END_REF]. Vaughn et al. [START_REF] Vaughn | Information Assurance Measures and Metrics: State of Practice and Proposed Taxonomy[END_REF] described that one of the pressing issues involving security engineering is the adoption of measures or metrics that can reliably depict hardware and software system assurance. Research has suggested the characteristics of good metrics [START_REF] Bryant | Developing a framework for evaluating organizational information assurance metrics programs[END_REF][START_REF] Savola | A Novel Security Metrics Taxonomy for R&D Organizations[END_REF][START_REF] Payne | A Guide to Security Metrics[END_REF][START_REF] Swanson | NIST Special[END_REF][START_REF] Vaughn | Information Assurance Measures and Metrics: State of Practice and Proposed Taxonomy[END_REF]. We compile a list of metric characteristics from the literature that provides a foundation to develop a comprehensive game theoretic defense taxonomy. Wesner [START_REF] Wesner | Winning with quality: Applying quality principles in product development[END_REF] introduced the concept of a metric being S.M.A.R.T. (specific, measurable, actionable, relevant, timely). Manadhata and Wing [START_REF] Manadhata | An attack surface metric[END_REF] described that a system action can potentially be part of an attack, and hence contributes to the attack surface, which also includes the contribution of system resources. We use this notion for validation of our game theoretic defense architecture, to measure which game provides a higher level of security compared to another.
Applying relevant metric characteristics from research illustrates our proposed game inspired approach to an attack-defense and performance metric taxonomy, ADAPT (Figure 1). As mentioned earlier, it utilizes three classifications of metrics: attacker, defender, and performance. ADAPT enables a network administrator to view and apply pertinent metrics to evaluate performance in multiple security games.
ADAPT: ATTACK-DEFENSE AND PERFORMANCE METRIC TAXONOMY
As seen in Figure 1, ADAPT produces relevant metrics to assign values to the components of the attack-defense cost and benefit as well as the performance. These metrics and their calculations are determined based on a review of the literature. We utilize metrics from literature in the same domain, whose relevance is closely related to cyber security. An information security measurement standard provides insight into how well a system is performing and allows analyzing whether investments in information security are beneficial. Potential benefits include increasing information security performance and providing quantifiable inputs for investment. We identify, in ADAPT, the following three classifiers: (i) Attacker, (ii) Defender, (iii) Performance. We assume that these metrics are generic and not specific to a particular game. The attacker and defender metrics relate to the game models. The performance metrics are used separately from the defender metrics, mainly because the performance metrics are associated with the performance of the game model as a whole. Furthermore, the performance metrics relate to the performance of the system on which the game model is run. This classification provides additional information associated with the game that will assist a defender in selecting the optimal game model among competing ones for defense. A minimal sketch of how these three classes could be organized is given below.
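As a purely illustrative sketch (an assumption on our part, not the authors' implementation), the three ADAPT classes could be carried as a simple record per candidate game model, which a game decision system could then compare:

```python
# Hypothetical representation of ADAPT's three metric classes per game model.
from dataclasses import dataclass, field

@dataclass
class MetricSet:
    attacker: dict = field(default_factory=dict)      # e.g. {"COLA": ..., "PABD": ...}
    defender: dict = field(default_factory=dict)       # e.g. {"RC": ..., "SLE": ...}
    performance: dict = field(default_factory=dict)    # e.g. {"NORRE": ..., "OGQ": ...}

# Two candidate game models with assumed performance numbers.
model_a = MetricSet(performance={"NORRE": 6, "OGQ": 0.15})
model_b = MetricSet(performance={"NORRE": 9, "OGQ": 0.11})

# A naive comparison: pick the model with the higher overall game quality.
best = max((model_a, model_b), key=lambda m: m.performance["OGQ"])
print(best.performance)
```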
Attacker Metrics
In this section we provide insight into the metrics selected regarding the cost and benefit from the perspective of the attacker.
Cost of Attacker.
The cost of an attacker to attack a specific target can be divided into the following metrics. He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] used cost of launching an attack and punishment to the attacker as metrics to define the cost of attack.
• Cost of launching attack (COLA): Consists of the money and time that an attacker has to pay in order to launch an attack against a target.
• Punishment after being detected (PABD): Consists of the legal loss of the attacker, and is one of the metrics used to define the cost of an attacker.
He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] used four instances in game scenarios involving a non-cooperative non-zero-sum static game with complete information, where the relations between strategy profile and attacker cost are as follows (a short sketch of these cases is given after the list):
o When the attacker and defender both take actions:
Cost of attacker = COLA + P × PABD (1)
P is the detection rate of attacks.
o When the attacker takes an action and the defender does not:
Cost of attacker = COLA (2)
o When the attacker does not take an action and the defender takes an action:
Cost of attacker = 0 (3)
o When the attacker and defender do not take an action:
Cost of attacker = 0 (4)
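A compact way to read the four cases is as a single function of who acts; the following Python sketch, with made-up COLA, PABD and detection-rate values, is only an illustration of eqs. (1)-(4):

```python
# Hypothetical sketch of He et al.'s four attacker-cost cases, eqs. (1)-(4).
def cost_of_attacker(attacker_acts, defender_acts, cola, pabd, p):
    if attacker_acts and defender_acts:
        return cola + p * pabd      # eq. (1): expected punishment is added
    if attacker_acts and not defender_acts:
        return cola                 # eq. (2)
    return 0.0                      # eqs. (3) and (4): no attack, no cost

# Example with assumed values: COLA = 10, PABD = 100, detection rate P = 0.3
print(cost_of_attacker(True, True, 10, 100, 0.3))   # 40.0
print(cost_of_attacker(True, False, 10, 100, 0.3))  # 10
```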
Carin et al. [START_REF] Carin | Quantitative Evaluation of Risk for Investment Efficient Strategies in Cyber security: The QuERIES Methodology[END_REF] proposed the following metrics for cyber risk assessment, evaluating the Attack/Protect model. These metrics are based on generating a probability distribution for the cost, in terms of time, of successfully defeating the protections applied to critical intellectual property (IP). A small numerical sketch of both metrics follows below.
• Expected cost of defeating a protection (ECDP): Involves the cost in man-hours an attacker would expend to successfully defeat the protection. The probability distribution Pr(i) is based on historical data of successfully attacking the IP, and the cost of the i-th man-hour in the attack is denoted by c_i.

ECDP = Σ_{i=1}^{n} c_i Pr(i) (5)

• Expected time to defeat the protection (ETDP): Involves the hours an attacker contributes to successfully defeat the protection. The probability distribution Pr(i) is based on historical data of successfully attacking the IP.

ETDP = Σ_{i=1}^{n} i Pr(i) (6)
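Assuming an illustrative probability distribution over the number of man-hours needed to defeat the protection (the numbers below are invented), eqs. (5) and (6) can be evaluated as:

```python
# A minimal sketch of eqs. (5) and (6) with an assumed distribution Pr(i)
# over man-hours i and an assumed per-hour cost of 80 cost units.
hours = [10, 20, 40]
pr    = [0.2, 0.5, 0.3]                         # Pr(i), sums to 1
c     = [80.0 * h for h in hours]               # c_i: assumed cumulative man-hour cost

ecdp = sum(ci * pi for ci, pi in zip(c, pr))    # eq. (5): expected cost, 1920.0
etdp = sum(hi * pi for hi, pi in zip(hours, pr))  # eq. (6): expected time, 24.0 hours
print(ecdp, etdp)
```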
Benefit of Attacker. Benefit of attacker entails the benefit the attacker receives when implementing an attack against a specific target (i.e. Fame or Monetary Value). Below we provide various metrics from literature assessing benefit of attacker.
Lye [START_REF] Lye | Game strategies in network security[END_REF] divided the benefit of an attacker into the following metrics. Although the parameters are used to calculate the benefit, they can be illustrated with an example (e.g. the damage can involve the reduced bandwidth of a system due to a DoS attack, whereas the recovery effort is the amount of time a network administrator puts forth to bring the system to its original state prior to the attack).
• Damage of the attack (DOA): Consists of the degree of damage which the attacker is able to cause on the target system.
• Recovery effort (time) required by defender (RERD): Involves the time it takes for a defender to bring the system to a safe state of execution.
• Expected income by the attacker (EIBA): Involves the monetary value received by the attacker when an attack is successful. This value can be computed using the amount of effort exhibited by the defender, in terms of time, to bring the system to a safe state prior to the attack.
He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] indicated that the benefit of an attacker is based on the loss of defending a system. The damage to the defender when the attack action is undetected by the IDS (SD) is defined as:

SD = Con_D × Con_E + Int_D × Int_E + Ava_D × Ava_E (7)

Con_D, Int_D and Ava_D are the damage degrees the attack action has made on the attack object in Confidentiality, Integrity and Availability respectively. Con_E, Int_E and Ava_E are the object's assets in Confidentiality, Integrity and Availability. These values are not constants, and they can be set by the network administrator. The damage when the attack is detected (FD) is defined as:

FD = (Con_D × Con_E + Int_D × Int_E + Ava_D × Ava_E) - Restore (8)

Restore is the recovery on the attack action:

Restore = Con_D' × Con_E + Int_D' × Int_E + Ava_D' × Ava_E (9)

A small numerical sketch of SD, FD and Restore is given below.
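With assumed CIA damage degrees, recovery degrees and asset values (all numbers below are illustrative choices by the analyst), eqs. (7)-(9) can be computed as:

```python
# Hypothetical sketch of eqs. (7)-(9); all numbers are assumptions.
damage   = {"con": 0.8, "int": 0.5, "ava": 0.9}    # damage degrees of the attack
recovery = {"con": 0.2, "int": 0.1, "ava": 0.3}    # degrees recovered by the response
asset    = {"con": 100, "int": 80,  "ava": 120}    # asset values in C, I, A

sd = sum(damage[k] * asset[k] for k in asset)            # eq. (7): attack undetected
restore = sum(recovery[k] * asset[k] for k in asset)     # eq. (9)
fd = sd - restore                                        # eq. (8): attack detected
print(sd, restore, fd)                                   # 228.0 64.0 164.0
```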
As with the benefit of the attacker, He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] use four instances in the case of a non-cooperative non-zero-sum static game with complete information; the relations between strategy profile and attacker benefit are:
o When the attacker and defender take an action:
Benefit of attacker = SD × (1 - P) + FD × P (10)
o When the attacker takes an action and the defender does not:
Benefit of attacker = SD (11)
o When the attacker fails to take an action and the defender takes an action:
Benefit of attacker = 0 (12)
o When the attacker and defender do not take an action:
Benefit of attacker = 0 (13)
Plainly stated, the benefit of the attacker is based on the loss of defending the system:
Benefit of attacker = -Benefit of defender (14)
Cremonini and Nizovtsev [START_REF] Cremonini | Understanding and Influencing Attackers Decisions: Implications for Security Investment Strategies[END_REF] defined the benefit of the attacker in terms of the amount of effort, measured by time, put by an attacker into an attack. They provide the calculation below (an illustrative sketch follows).
Benefit of attacker = E[B(x)] (15)
x: The amount of effort placed in the attack.
E[B(x)] = π(x) × G (16)
π(x): Probability of success of the attack given the amount of effort put into the attack.
G: One-time payoff the attacker receives in the case of a successful attack.
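Since the text does not fix a particular success-probability curve π(x), the sketch below assumes a simple saturating curve purely to illustrate eqs. (15)-(16):

```python
# Illustrative sketch of E[B(x)] = pi(x) * G; the curve pi(x) is an assumption.
import math

def pi_of_x(x):
    # assumed: success probability grows with effort x and saturates towards 1
    return 1.0 - math.exp(-0.1 * x)

G = 1000.0                                   # assumed one-time payoff of a successful attack
for effort in (5, 20, 50):
    print(effort, round(pi_of_x(effort) * G, 1))   # expected benefit per effort level
```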
Defender Metrics
In this section we provide insight into the metrics selected involving the cost and benefit from the perspective of the defender.
Cost of Defender.
The cost of defender involves the cost of a defender to defend a system against an attack. Below we incorporate literature applying cost of defense.
He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] indicated that the cost of a defender consists of Operational Cost, Response Cost and Response Negative Cost.
• Operational Cost (OC): Can be derived from the risk assessment knowledge library.
• Response Negative Cost (RNC): Can be derived using the following formula:
RNC = -P_T × Ava_E (17)
P_T is in [0, 1], being the damage degree to the availability of the system caused by response actions.
• Response Cost (RC): Involves the values derived from the Attack-defense Knowledge Library.
He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] also provided four instances for the relations between strategy profile and defender cost in the case of a non-cooperative non-zero-sum static game with complete information, which are:
o When the attacker and defender take an action:
Cost of defender = -(RC + P_T × Ava_E) × P (18)
o When the attacker takes an action and the defender decides not to defend:
Cost of defender = 0 (19)
o When the attacker does not take an action and the defender takes an action:
Cost of defender = -(RC + P_T × Ava_E) × P_U (20)
o When neither the attacker nor the defender takes an action:
Cost of defender = 0 (21)
P_U: False detection rate of the IDS.
You and Shiyong [START_REF] You | A kind of network security behavior model based on game theory[END_REF] provided metrics that help compute the cost and payoff of an attacker and defender. Using the performance metrics of exposure factor and average rate of occurrence, we compute the single loss expectancy and the annual loss expectancy.
• Single Loss Expectancy (SLE): Involves the dollar amount associated with a single asset, which is computed using the asset value (a dollar amount assigned by the network administrator) and the exposure factor (retrieved from a performance metric).
SLE = Asset Value × Exposure Factor (22)
• Annual Loss Expectancy (ALE): Involves the dollar amount or time associated with an asset over a particular period of time. The single loss expectancy above and the average rate of occurrence (retrieved from a performance metric) are used to compute the ALE (a worked example follows below).
ALE = SLE × ARO (23)
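A worked example of eqs. (22)-(23) with assumed figures (asset value, exposure factor and ARO are all invented for illustration):

```python
# Worked sketch of SLE and ALE with assumed numbers.
asset_value = 200_000.0
exposure_factor = 0.25          # EF from the performance metrics, eq. (31)
aro = 4.0                       # average rate of occurrence per year, eq. (32)

sle = asset_value * exposure_factor   # eq. (22): single loss expectancy = 50,000
ale = sle * aro                       # eq. (23): annual loss expectancy = 200,000
print(sle, ale)
```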
Benefit of Defender. Benefit of defender involves the benefit of a defender to defend a system against an attack, either prior to or following an attack. Below we provide research assessing benefit of defense.
• Recovery by Restore (RBR): Involves the ability of the defender to recover a target system to its original state after an attack action.
• Resources used by the attacker (RUBA): Involves quantitatively reflecting the number of nodes used by the attacker, which is m.
RUBA = m (24)
He et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] defined the benefit of a defender based on the damage to the defender when the attack is successful (SD), the damage to the defender when the attack is detected (FD) and Restore, as explained in the previous section on the benefit of the attacker.
In the case of non-cooperative non-zero-sum static game with complete information, He, et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] uses four instances to describe the relations between Strategy Profile and defender benefit as:
o When the attacker and defender both take actions:
Benefit of defender = SD × (1 - P) + FD × P (25)
o When the attacker takes an action and the defender does not:
Benefit of defender = -SD (26)
o When the attacker does not take an action and the defender takes an action:
Benefit of defender = 0 (27)
o When the attacker and defender do not take an action:
Benefit of defender = 0 (28)
• Loss When Attack is Successful (LWAS): Involves the degree of damage which the attacker is able to cause on the target system. This metric is a negative benefit to the attacker, capturing the historical data to improve a defender's incentive to defend.
• Loss When Attack is Detected (LWAD): Involves the ability of the defender to recover a target system to a non-compromising state from an attack action. This metric is a positive benefit to the attacker, capturing the historical data to improve a defender's incentive to defend.
Performance Metrics
Performance metrics entail the assessment of the system performance and the evaluation of differing game theoretic defense models. Typically, the payoff metrics in game models are used to gauge the cost-benefit analysis between the attacker and defender. This alone is not sufficient to measure and validate a particular game model. Therefore, the attacker and defender metrics represent the game, whereas the additional metrics provided under the performance classification represent asset performance towards selecting the best of the competing game models for defense. The premise of the performance metrics is to give further insight into the knowledge of the attack relative to the asset. Performance metrics use a cost-benefit assessment of attack and defense, risk assessment, and a game theoretic approach to construct an assessment of performance. This will support a network administrator in viewing appropriate metrics when analyzing and selecting a particular game theoretic defense model. Initially the performance metrics are computed using the attack information received, then updated with each attack instance using ADAPT and the defending system. For instance, items such as the false positive (FP) rate or the mean time to incident discovery (MTTID) are initially set to zero; once computed from the initial attack, these values are updated to provide asset performance relative to the game models. This performance assessment relative to game models provides a contribution to existing taxonomies.
In this section we list various performance metrics from the literature that can be applied to game theoretic defense models and used for model assessment. (A short code sketch illustrating how several of these metrics can be computed from logged incident data follows the list.)
• Number of rounds to reach Nash Equilibrium (NORRE): Burke [START_REF] Burke | Towards a game theory model of information warfare[END_REF] proposed a metric that provides the number of rounds needed to reach a Nash equilibrium, in order to evaluate a game theory model of information warfare based upon the repeated games of incomplete information model. Burke [START_REF] Burke | Towards a game theory model of information warfare[END_REF] stated that equilibrium provides the ability to analyze a game theory model's predictive power, which is evaluated in terms of accuracy and performance.
NORRE = Count(actions played until Nash equilibrium) - 1    (29)
• Overall Game Quality (OGQ): Jansen [START_REF] Jansen | Directions in Security Metrics Research[END_REF] stated that qualitative assignments can be used to represent quantitative measures of security properties (e.g., vulnerabilities found). We define a metric, overall game quality, where the game model is assessed based on the availability of the system (e.g., percentage of available bandwidth), the performance of the game (e.g., average NORRE), and the quality of the system (e.g., false positive rate). This metric is based on the overall equipment effectiveness, where game theory parameters are applied to measure the efficiency of various games [START_REF] Alpcan | A game theoretic analysis of intrusion detection in access control systems[END_REF]. Other works utilized the false positive rate as a part of the actual game model [START_REF] Shiva | An Imperfect Information Stochastic Game Model for Cyber Security[END_REF]. This metric is resilient to both options for the false positive rate when determining the overall game quality.
OGQ = Availability × Performance × Quality    (30)
• Exposure Factor (EF): The exposure factor represents the percentage of loss a threat may cause on a particular asset. The exposure factor, in combination with other metrics, provides insight into the level of importance a system may have in the event of an attack.
EF = Asset Loss / Total Asset Level    (31)

• Average Rate of Occurrence (ARO): The average rate of occurrence is an estimate of the frequency of attack occurrence. It can assist with determining defense strategies for a specific asset. Minimizing the ARO provides insight into how well a game theoretic defense solution is performing.
ARO = Count(Occurrences) / Time Interval    (32)
• Loss of Availability (LOA): Loss of availability refers to resources that are currently unavailable to the legitimate requesting processes. A higher value of this metric indicates a greater loss.
LOA = Count(Unavailable Resources) / Total No. of Legitimate Requesting Processes    (33)
• Incident Rate (IR): The incident rate indicates the number of detected security breaches a system or asset experienced during an allotted time period. The incident rate, in combination with other metrics, can indicate the level of threats, the effectiveness of security controls, or attack detection capabilities [START_REF]The CIS Security Metrics[END_REF].
IR = Count(Incidents)    (34)
• Mean Time to Incident Discovery (MTTID): Mean-Time-To-Incident-Discovery characterizes the efficiency of detecting attacks, by computing the average elapsed time between the initial occurrence of an incident and its subsequent discovery. The MTTID metric also serves as a leading indicator of flexibility in system or administrator's ability to defend as it measures detection of attacks from known and unknown vectors [START_REF]The CIS Security Metrics[END_REF].
MTTID = (Date of Discovery - Date of Occurrence) / Count(Incidents)    (35)
• Mean Time to Incident Recovery (MTTIR): Mean Time to Incident Recovery measures the effectiveness of recovering from an attack. The more responsive a system or administrator is, the less impact the attack may have on the asset [START_REF]The CIS Security Metrics[END_REF].
MTTIR = (Date of Recovery - Date of Occurrence) / Count(Incidents)    (36)
• Mean Time to Mitigate Vulnerability (MTTMV): Mean time to mitigate vulnerabilities measures the average time taken to mitigate identified vulnerabilities in a particular asset. This metric indicates a system or administrator's ability to patch and/or mitigate a known vulnerability to reduce exploitation risk [START_REF]The CIS Security Metrics[END_REF].
MTTMV = (Date of Mitigation - Date of Detection) / Count(Mitigated Vulnerabilities)    (37)
• False Negative Rate (FNR): The frequency with which the system fails to report malicious activity that occurs. It involves the number of incidents present within the system that are not detected [START_REF] Mcgraw | Software Security: Building Security In[END_REF].
FNR = (Missed Incidents) / Count(Incidents)    (38)
• False Positive Rate (FPR): The frequency with which the system reports malicious activity in error. It involves the number of detected incidents that, upon further investigation, turn out to be false [START_REF] Roy | A Survey of Game Theory as Applied to Network Security[END_REF].
FPR = (False Positives) / Count(Incidents)    (39)
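As a quick illustration of how several of these performance metrics could be computed from logged incident data, the following Python sketch implements MTTID, MTTIR, FNR, FPR, and ARO directly from the formulas above; the incident records and their field names are invented for the example.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incident log: occurrence, discovery, and recovery timestamps.
incidents = [
    {"occurred": datetime(2013, 1, 1, 8), "discovered": datetime(2013, 1, 1, 9),
     "recovered": datetime(2013, 1, 1, 12)},
    {"occurred": datetime(2013, 1, 5, 14), "discovered": datetime(2013, 1, 5, 20),
     "recovered": datetime(2013, 1, 6, 2)},
]
missed_incidents = 1      # malicious activity that was never reported (false negatives)
false_positives = 3       # alerts that turned out to be benign
window = timedelta(days=30)

def mttid(recs):          # eq. (35), in seconds
    return mean((r["discovered"] - r["occurred"]).total_seconds() for r in recs)

def mttir(recs):          # eq. (36), in seconds
    return mean((r["recovered"] - r["occurred"]).total_seconds() for r in recs)

def fnr(missed, recs):    # eq. (38)
    return missed / len(recs)

def fpr(false_pos, recs): # eq. (39)
    return false_pos / len(recs)

def aro(recs, window):    # eq. (32), occurrences per day
    return len(recs) / (window.total_seconds() / 86400.0)

print(mttid(incidents) / 3600.0, "hours to discover on average")
print("FNR:", fnr(missed_incidents, incidents), "FPR:", fpr(false_positives, incidents))
print("ARO:", aro(incidents, window), "incidents per day")
```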
A GAME MODEL COMPARING SYSTEM BASED ON ADAPT
In this section we describe the process in which ADAPT will be used to compare game models followed by a scenario of its application using a distributed denial of service (DDoS) attack. Lastly, we highlight ADAPT's application to the Game Inspired Defense Architecture (GIDA), wherein a game decision system (GDS) uses ADAPT to compare competing game models. The GDS facilitates selecting the optimal game theoretic defense model.
Methodology
In this section we present the method for comparing the candidate game models relevant to an identified attack using the metrics in ADAPT. The identified attack is resolved into attack vectors, which are used to locate the relevant metrics within ADAPT. Using these metrics, the game models are compared to select the game model most suitable for defense. In a given attack scenario a certain set of anomalies is identified. Those anomalies are used to identify the attack using the AVOIDIT taxonomy proposed in Simmons, et al. [START_REF] Simmons | AVOIDIT: A cyber attack taxonomy[END_REF]. The identified attack is resolved into "attack components". These attack components are parameters indicating some aspect of the system (e.g., a malfunction or failure) affected by the attack. They are composed of various anomalies observed by sensors such as firewalls and IDSs, and their values indicate severity. Using these attack components, a set of metrics that fittingly quantify the system's current security state is identified in ADAPT. Using these metrics, the game models whose components correspond to the selected metrics in their interaction modeling (in terms of the actions and payoffs of the players) are selected. These models are compared with each other to pick the one that maps best to the selected metrics.
The present experiment considered a simple case. To achieve the above flow we used the following five steps.
1. Given an attack, A, and a target system T, we identify a set of attack components AC.
2. We map each attack component AC_i to its respective ADAPT metric AM_i.
3. Given the game model and the game model components, we assign a Boolean value (0 or 1) to all the metrics. If a game model component corresponds to a selected metric, then the component gets the value 1, else 0. This is done for all the game model components of each of the competing game models.
4. All values associated with the game model components of a game model are summed to give a total evaluation score for each competing game model.
5. The game model with the highest score is selected as being the most relevant for defense and is appropriate for instantiation. (A minimal sketch of this scoring and selection procedure is shown below.)
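The following Python sketch illustrates steps 3-5; the game model names and their component-to-metric correspondences are invented purely for illustration.

```python
# Metrics selected from ADAPT for the identified attack (steps 1-2).
selected_metrics = {"SLE", "EIBA", "EF", "RNC", "DOA", "LOA", "RUBA", "COLA", "IR"}

# Competing game models and the ADAPT metrics their components correspond to
# (hypothetical knowledge-base content).
game_models = {
    "GM3": {"SLE", "EIBA", "EF", "DOA", "LOA"},
    "GM4": {"SLE", "EIBA", "RUBA", "COLA", "IR", "LOA"},
}

def model_score(model_metrics, selected):
    # Step 3: each component gets 1 if it corresponds to a selected metric, else 0;
    # step 4: the values are summed into a total score.
    return sum(1 for metric in model_metrics if metric in selected)

scores = {name: model_score(metrics, selected_metrics)
          for name, metrics in game_models.items()}
best_model = max(scores, key=scores.get)     # step 5
print(scores, "->", best_model)
```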
In a given model, temporal considerations are not parameterized separately. The actions available at a given state of the game, and how and when the game transits between states, are used to keep track of time. For more complex scenarios, time must be taken into consideration in more explicit ways in the modeling. In future work we intend to incorporate temporal considerations and to improve the evaluation by using weighted values rather than binary 0/1 scores, for greater precision.
The ADAPT taxonomy is constructed to evaluate a holistic view of a game model along with its respective system. Some resources are required to instantiate each game model and run a game. The metrics in the performance branch evaluate the overhead of instantiating a game model. The attacker/defender branch metrics evaluate the parameters which affect the attacker's/defender's payoff. The next section illustrates the ADAPT methodology using a zero-sum game scenario in which the game model components that correspond to a benefit to the defender correspond to a cost to the attacker, and vice versa. Due to space constraints, the reader is encouraged to refer to Bedi, et al. [START_REF] Bedi | A Game Inspired Defense Architecture[END_REF] for an elaborate discussion.
A Case Study: DDoS Attack Scenario
We continue our example from Section 2, wherein we analyze a DDoS attack and ADAPT's applicability to discern the main features of the attack. This offers the framework for game model selection with a relevant set of metrics. We focus on bandwidth reduction, where multiple attacking nodes attempt to push their packets to exhaust the limited bandwidth of a link in the network. The attacker's strategy is to maximize either the botnet size or the sending rate to flood the pipe. We call this a flood strategy by the attacker, as he is not concerned with detection but with overwhelming the target. The defender's strategy, in contrast, is to implement the optimal firewall setting which allows legitimate flows and blocks malicious flows. This defense strategy is simply to defend or not defend. Experiential knowledge is used to evaluate the crucial features of our DDoS attack example in order to capture the appropriate attack components for analysis. We illustrate these components with an example based on prior work in this domain [START_REF] Wu | On Modeling and Simulation of Game Theory-based Defense Mechanisms against DoS and DDoS Attacks[END_REF][START_REF] Bedi | Game Theory-based Defense Mechanisms against DDoS Attacks on TCP/TCP-friendly Flows[END_REF].
Attack Components
In our example scenario, the attack is a network based DDoS and it consists of the following attack components:
• v_q: Average bandwidth used by the attacker,
• v_l: Ratio of the number of lost legitimate users to the total number of legitimate users,
• v_n: Number of nodes used by the attacker to launch an attack.
The values of these components define the impact of the attack on a target system. In this example, the attacker's goal is to increase v_q and v_l, which are the rewards. The attacker's cost v_n is assumed to be linearly proportional to the number of attacking nodes employed, i.e., v_n = m.
Continuing our DDoS example, the IDS captures a fixed number of properties to begin facilitating situational analysis for decision making, whereas the firewall has a default drop threshold set. Various sources provide input properties used by ADAPT. It is assumed the mapping is preset, which is initially performed manually via expert knowledge and keywords. The initial input properties to ADAPT for the DDoS example are: (a) total bit rate, (b) total number of flows, (c) drop rate, and (d) number of flows dropped.
A legitimate flow is one in which the network bandwidth is used in a fair manner, that is, the flow per node is less than or equal to the ratio of the total bandwidth to the number of nodes. The loss of legitimate flows is used in this example to determine whether a flow is negatively impacted. This provides a way to distinguish attacker flows.
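To illustrate this fairness check, here is a small Python sketch with invented flow numbers; it flags flows exceeding the per-node fair share and derives the average attacker bandwidth (v_q) and the number of attacking nodes (v_n) from them. The ratio of lost legitimate users (v_l) would additionally require drop statistics, which are not modeled here.

```python
# Hypothetical per-IP bit rates (Mbps) observed on the protected link.
flows = {"10.0.0.1": 2.0, "10.0.0.2": 1.5, "192.168.1.7": 40.0, "192.168.1.8": 38.5}
total_bandwidth = 100.0                     # link capacity in Mbps

fair_share = total_bandwidth / len(flows)   # equal share per node

legitimate = {ip: r for ip, r in flows.items() if r <= fair_share}
suspected = {ip: r for ip, r in flows.items() if r > fair_share}

v_q = sum(suspected.values()) / len(suspected) if suspected else 0.0  # avg attacker bandwidth
v_n = len(suspected)                                                  # attacking nodes

print("fair share per node:", fair_share, "Mbps")
print("suspected attacker flows:", suspected)
print("v_q =", v_q, "Mbps, v_n =", v_n)
```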
The sum of bit rates is computed per IP address. The IPs which consume more bandwidth than their predefined share are considered malicious nodes. For example, for the attacker to break the initial threshold set by the defender, a minimum number of unfriendly nodes is required to drop at least one friendly (legitimate) node. If the defender initiates a response to the attack, the incurred cost to the defender is accounted for in terms of resources. Similarly, the following attack components, which are used in our example, are mapped to corresponding attacker, defender, and performance metrics.
The first component, v_q, the average bandwidth used by the attacker, maps to the following metrics:
(a) The SLE metric in ADAPT is classified under the cost of the defender. It captures the dollar amount associated with a single asset, which is computed using the asset value and the exposure factor. In our scenario, the asset is the bandwidth of the pipe and its value can be determined by the network administrator. We associate the exposure factor with the ability of the attacker to access and exploit the asset.
(b) The EIBA metric in ADAPT is classified under the benefit of the attacker. In our DDoS example, it is associated with the zero-sum game to express the attacker's monetary success.
(c) The EF metric in ADAPT is classified under performance. In our example, this metric is associated with the percentage of loss of the bandwidth.
The second component, v_n, the number of nodes used by the attacker to launch an attack, maps to the following metrics:
(a) The RNC metric in ADAPT is classified under the cost of the defender. In our example, this metric is associated with the damage the attack was able to accomplish considering the defender's response.
(b) The DOA metric in ADAPT is classified under the benefit of the attacker. It involves the monetary value received by the attacker when an attack is successful.
(c) The LOA metric in ADAPT is classified under performance. It represents the percentage of loss a threat may have on a particular asset.
The third component, v_l, the ratio of the number of lost legitimate users to the total number of legitimate users, maps to the following metrics:
(a) The RUBA metric in ADAPT is classified under the cost of the defender. It relates to quantitatively reflecting the number of nodes used by the attacker.
(b) The COLA metric in ADAPT is classified under the benefit of the attacker. It involves the cost incurred by the attacker when an attack is launched.
(c) The IR metric in ADAPT is classified under performance. It represents the incident rate associated with the target system.
Table 1 gives a visual representation of the attack components we use to map metrics with ADAPT. The column titled "ADAPT Metrics" contains the metrics mapped using the attacker, defender, and performance classifiers. Each component gets mapped to either cost or benefit (but not both) for each of the players, attacker and defender. Also, a component corresponding to the cost (or benefit) of the defender cannot correspond to the cost (or benefit) of the attacker.
Table 1. Attack Components Correlation with ADAPT Metrics

DDoS Attack Component | Defender Metric | Attacker Metric | Performance Metric
v_q                   | SLE             | EIBA            | EF
v_n                   | RNC             | DOA             | LOA
v_l                   | RUBA            | COLA            | IR
Table 1 illustrates the ADAPT metrics, depicting the player-related and performance-related metrics used for mapping. Using the described game scenario, the defender is able to use ADAPT to systematically retrieve potential game models suitable for defense based on the attack components received and their metric mapping. The scenario has to be evaluated with respect to these three factors, which the metrics in ADAPT capture. Once this is done, the relationships between the quantified components and the governing equations of the game, as discussed in the example, are evaluated. The game model that best depicts the obtained attack components is chosen as the game model that best suits the present scenario. The metrics in ADAPT quantify the parameters of the scenario. Using these values, the correlation of a model can be evaluated using a suitable algorithm as described in Bedi, et al. [START_REF] Bedi | A Game Inspired Defense Architecture[END_REF]. As with any sensor, there are instances where false positives occur, in which case human intervention is required to improve those sensors. For the purpose of this paper, we assume the attack has a relevant game model in the repository, where human intervention and expert knowledge are required to update the repository for increased accuracy of an ADAPT-based system. In future work, we are developing a framework for constructing game models, which facilitates dynamic analysis of imperfect information and responds with changes in the strategies dynamically for optimal response in real-world scenarios. This future work is based on our prior work [START_REF] Shiva | An Imperfect Information Stochastic Game Model for Cyber Security[END_REF] where we recommend game theoretic defense strategies for network security problems assuming imperfect sensory information.
In our example the strategies of the attacker and defender do not change. For the sake of discussion, let us consider an instance in which the strategy of the attacker changes, by increasing or decreasing the number of nodes used in the DDoS attack. Let us also consider that the defender is able to change its strategy as well. In both cases, ADAPT is resilient to the change, as the generalized metrics remain mapped within the taxonomy. Paruchuri, et al. [START_REF] Bedi | A Game Inspired Defense Architecture[END_REF] proposed an efficient heuristic approach for security against multiple adversaries where the attacker is unknown to the defender. This work is in line with our DDoS example, due to the unknown nature of the true attacker.
ADAPT in the Game Inspired Defense Architecture
The Game Inspired Defense Architecture (GIDA) is foreseen as a holistic approach designed to counter cyber-attacks [START_REF] Shiva | An Imperfect Information Stochastic Game Model for Cyber Security[END_REF][START_REF] Bedi | A Game Inspired Defense Architecture[END_REF][START_REF] Shiva | Game Theory for Cyber Security[END_REF][START_REF] Shiva | Holistic Game Inspired Defense Architecture[END_REF]. GIDA (Figure 2) focuses on the concept of offering defense strategies against probable and committed attacks by modeling situations as multi-player game scenarios. The attack-defense analysis is done by ADAPT. GIDA provides security by operating in the following fashion: Identification of attack, Extraction of game models relevant to the identified attack, and Assessment of candidate game models and execution of the one which is most relevant to present attack. GIDA consists of three components, namely, ADAPT (our taxonomy), a Knowledge Base (KB), and a Game Decision System (GDS). The GDS is a preventative system, within the GIDA framework, to collect input from various sources for continuous attack information updates relative to game models. The knowledge base (KB) consists of game models mapped to the types of attacks identified and additional attack related data. The GDS operates in a preventative fashion through the assessment of candidate game models respective to a particular attack and executes the game model which is best among them. ADAPT provides the metrics to be mapped to the components, and evaluate them in terms of different aspects of the player's payoffs, and the game's performance. This gives the GDS the specific set of game metrics defining the ongoing attack. The GDS acts as the brain with provisions to process input information and take the appropriate action.
One implementation of our proposed defense architecture is depicted (Figure 2). Our network topology consists of a Target Network which our architecture aims to protect. This network is connected to the Internet through a series of Sensors and Actuators. Currently, GIDA uses an intrusion detection system (IDS) as the sensor and a firewall as the actuator. The topology also includes a honeynet, which is a network of honeypots. The honeypot is primarily used as a virtual implementation of the target network for analyzing traffic and gathering additional information from the attacker.
Once an attack is identified against a target, the sensors feed information to the GDS. The GDS contains an attack identification mechanism, which forwards the suspected attack to the KB. The KB is searched for additional attack related information and candidate game models which can defend against the identified attack. In this present case (Figure 2), the knowledge base provides two game models: GM 3 and GM 4. These suggested game models are then sent to ADAPT to assess the attack, defender, and performance metrics for selection of the optimal game model.
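As a rough sketch of this control flow, the following Python fragment mimics a GDS receiving an identified attack, querying a knowledge base for candidate models, and using an ADAPT-style score to pick one. All class names, model names, and metric sets here are invented for illustration and do not correspond to an existing implementation.

```python
# Invented sketch of the GDS control flow; not an existing GIDA API.
class KnowledgeBase:
    def candidate_models(self, attack):
        # In GIDA this would be populated from attack data and sources such as NVD/CVE.
        return ["GM3", "GM4"] if attack == "DDoS" else []

class AdaptScorer:
    MODEL_METRICS = {"GM3": {"SLE", "EIBA", "EF", "DOA"},
                     "GM4": {"SLE", "EIBA", "RUBA", "COLA", "IR", "LOA"}}

    def metrics_for(self, attack):
        return {"SLE", "EIBA", "EF", "RNC", "DOA", "LOA", "RUBA", "COLA", "IR"}

    def score(self, model, selected):
        return sum(1 for m in self.MODEL_METRICS.get(model, ()) if m in selected)

class GameDecisionSystem:
    def __init__(self, kb, adapt):
        self.kb, self.adapt = kb, adapt

    def handle_alert(self, attack):
        candidates = self.kb.candidate_models(attack)
        selected = self.adapt.metrics_for(attack)
        scores = {m: self.adapt.score(m, selected) for m in candidates}
        best = max(scores, key=scores.get)
        # Executing the model would translate into firewall/actuator rules here.
        print(f"attack={attack} scores={scores} -> executing {best}")
        return best

GameDecisionSystem(KnowledgeBase(), AdaptScorer()).handle_alert("DDoS")
```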
The depiction of ADAPT (Figure 2) highlights how ADAPT uses its knowledge of the two game models to classify each component of an attack with the game metrics. Due to space constraints, we provide a single example of a component's selection process using the tree structure of ADAPT (Figure 2). ADAPT navigates its tree for each component of the attack to capture the metrics from the identified attack for analysis. These metrics are used to evaluate the computed cumulative score of the selected game models. The GDS uses ADAPT to select the model which possesses greater relevance to the present observed attack based on each attack components impact to the attacker, defender, and the performance of the system during the game. Once a game model is selected, the GDS executes the game model by sending the proposed defense actions to the respective sensor or actuator. Updated information is obtained via the KB's ability to access vulnerability databases such as National Vulnerability Database (NVD), MITRE Corporation's Common Vulnerabilities and Exposures (CVE) list, etc.
We envision this process of attack identification and defense to be iterative in nature where sensors like IDS constantly provide input to GIDA. Based on these inputs, the GDS, ADAPT, and the KB reevaluates their findings to further improve the proposed defense measures. This process continues until the attack is subdued. It should be noted that GIDA has an option of playing a selected game. Simple games such as firewall setting changes may be performed automatically, however defender interaction may be required for complex games. Nagaraja and Anderson [START_REF] Nagaraja | The topology of covert conflict[END_REF] provided insight into discovering the effectiveness of iterated attack and defense operations through a proposed framework using evolutionary game theory.
Moreover, there are various types of plausible attacks on any given target system. GIDA uses the GDS to address attacks before they reach fruition, observing and attempting to decide on the optimal game model for defense. This also gives the GDS the ability to operate in a reactive manner when the attacker initiates an attack. We anticipate certain attacks to be continuous in nature, and the intention is to impede any further damage to the respective target; hence the GIDA framework is proactive in preventing damage on a monitored network.
RELATED WORK
There are several recent efforts which consider security games evaluation, involving performance and security metrics. In this section we provide an overview of literature relative to game theory defense models and performance metrics. He, et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF] proposed a novel Game Theoretical Attack-Defense Model (GTADM) which quantifies the probability of threats in order to construct a risk assessment framework. They focus on the computation of the attack probability according to the cost-benefit of the attacker and the defender, and defined relevant metrics to quantify the payoff matrix.
Alpcan and Basar [START_REF] Alpcan | A game theoretic analysis of intrusion detection in access control systems[END_REF] proposed a game theoretic analysis of intrusion detection in an access control environment. They provided several common metrics that were used to help identify the performance of the Intrusion Detection System IDS. Using the metrics they provided, simulation was used to determine the costs and actions of the attacker and IDS.
Bloem, et al. [START_REF] Bloem | Intrusion response as a resource allocation problem[END_REF] proposed an intrusion response as a resource allocation problem, where the resources being used were the IDS and network administrator. They provided insightful metrics regarding the response time of an IDS and its ability to respond without the administrator's involvement. Also, they used an administrator response time metric to determine the time of effort used to compute administrator involvement after an alert from the IDS. This metric can prove beneficial in determining how well a system is able to successfully respond against attacks while minimizing the administrator's involvement.
Liu, et al. [START_REF] Liu | Incentive-based modeling and inference of attacker intent, objectives, and strategies[END_REF] proposed an incentive based modeling and inference of attacker intent, objectives, and strategies. They provided several examples that compute the bandwidth before, during, and after an attack. They specified metrics to compute the absolute impact and relative availability to determine the system degradation. These metrics are used to distinguish how well the system was able to capitalize on the attack, as well as how well the attacker was able to succeed in reducing the bandwidth.
You and Shiyoung [START_REF] You | A kind of network security behavior model based on game theory[END_REF] proposed a network security behavior model based on game theory. They provide a framework for assessing security using the Nash equilibrium. In assessing the security, they also provide metrics used to analyze the payoff and cost of an attacker and defender using the exposure factor, average rate of occurrence, single loss expectancy, and annual loss expectancy.
Savola [START_REF] Savola | A Novel Security Metrics Taxonomy for R&D Organizations[END_REF] surveyed emerging security metrics approaches in various organizations and provided a taxonomy of metrics as applicable to information security. His taxonomy provided a high level approach to classifying security metrics for security management involving organization, operational, and technical aspects. He also included high level classification for metrics involving security, dependability, and trust for products, systems, and services. The metrics provided are all high level, with a lack of specific metrics used for each category, but he provides a good starting point to organizations needing to begin analyzing various security metrics within their organization.
Fink et al. [START_REF] Fink | A metrics-based approach to intrusion detection system evaluation for distributed real-time systems[END_REF] proposed a metrics-based approach to IDS evaluation for distributed real-time systems. They provided a set of metrics to aid administrators of distributed real-time systems to select the best IDS system for their particular organization. They presented valuable information needed to gather the requirements of an organization in order to capture the importance, and use the requirements to successfully measure the performance according to requirements imposed by the organization.
CONCLUSION AND FUTURE WORK
Game theoretic models continue to provide information and analysis that help a network administrator initiate defense solutions against an attack. This paper is an attempt to provide an intuitive game theoretic metric taxonomy that a defender can use to assess how well a particular game model is performing in a network. We assume the collected metrics are generic and can be used regardless of the type of game theoretic model used for defense. We believe providing a list of metrics for a game inspired defense architecture will give an administrator the appropriate information to make an intelligent decision in game theoretic defense analysis. This assumption has not yet been validated through real-world experience.
Creative metrics are necessary to enhance a network administrator's ability to compare various defense schemes. We propose a game theory inspired Attack-Defense And Performance metric Taxonomy (ADAPT) to help a network administrator view pertinent metrics during a game theoretic model analysis. Although this work provides game related model selection, alternative solutions of ADAPT can be used without a game theoretic aspect.
Future work involves demonstrating the usefulness of ADAPT through the implementation of the game decision system (GDS), which assists a game inspired defense architecture with model selection. We are currently developing the game decision system based on ADAPT, using an open source knowledge base to store metrics associated with particular attacks and game models. The game strategies will be assessed using a weighted score ranking between models, which will assist with selecting the game with the most relevance to the identified attack. The use of ADAPT in this system will rely on knowledge of the attack and its target to assess the proposed game decision strategies to defend against the attack. In the event an attack is not mapped, we will construct game models to handle such scenarios. We intend to implement the model described in He, et al. [START_REF] He | A Game Theoretical Attack-Defense Model Oriented to Network Security Risk Assessment[END_REF], as well as others, to compare results with an ADAPT-based system. Furthermore, an enhancement to the taxonomy may be considered with an additional game theoretic defense model classification distinguishing the various game models. We foresee using an ADAPT-based system as a comprehensive solution to optimal game selection.
Fig. 1. Attack-Defense And Performance metric Taxonomy (ADAPT)
Fig. 2. Game Inspired Defense Architecture
| 61,180 | [
"1001111",
"1001112",
"1001113",
"1001114"
] | [
"130291",
"130291",
"130291",
"130291"
] |
01463840 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01463840/file/978-3-642-39218-4_29_Chapter.pdf | Ben Lotfi
Harold Othmane
Rohit Weffers
Pelin Ranchal
email: [email protected]
Bharat Angin
email: [email protected]
Mohd Bhargava
Mohamad Murtadha
email: [email protected]
Lotfi Ben Othmane
email: [email protected]
Harold Weffers
email: [email protected]
Rohit Ranchal
Pelin Angin
Bharat Bhargava
Mohd Murtadha Mohamad
A Case for Societal Digital Security Culture
Keywords: Information Security, Security Culture, Security Usability
Introduction
We commonly use pervasive computing systems, such as remote vehicle control systems [START_REF] Ben Othmane | A Survey of Security and Privacy in Connected Vehicles[END_REF], remote healthcare monitoring systems, and home automation systems to improve our life quality; public information systems [START_REF] Sundgren | What is a public information system[END_REF], such as online banking for personal business; and Internet telephony applications, such as Skype for personal communication. However, these systems have security threats-circumstances and events with the potential to harm an Information System (IS) through unauthorized access, destruction, disclosure, modification of data, and Denial of Service (DoS) [START_REF] Gilgor | A note on the denial-of-service problem[END_REF].
Attackers exploit technical vulnerabilities and security policy violations to trigger security threats and compromise the system's assets. Technical vulnerabilities are weaknesses and flaws in a system's design, implementation, or operation and management [START_REF] Kissel | Glossary of key information security terms[END_REF]. For example, sending data through networks without assuring confidentiality and integrity [START_REF] Kissel | Glossary of key information security terms[END_REF] is a weakness of the system that manages them. Policy violations are faults in applying and enforcing security policies that provide attackers with confidential information or technical weaknesses which allow them to compromise assets of the system. For example, an attacker could use social engineering [START_REF] Anderson | Security Engineering: A Guide to Building Dependable Distributed Systems[END_REF] to get the secret password of an individual for online banking (e.g., when he/she gets drunk), which enables him/her to withdraw money from the victim's bank account. Table 1 provides an overview of the impacts of major security threats to information systems.

Table 1. Impacts of security threats to systems.

Impact: Safety. Example: Attacker controlling the brakes of a vehicle [START_REF] Bailey | iSEC partners presents: The hacked and the furious[END_REF] through remote access compromise of the in-vehicle network of the vehicle using a mobile phone.
Impact: Financial loss. Example: Attacker installing a key logger on the mobile device of a user to capture credentials for performing financial operations on his behalf [START_REF] Grebennikov | Keyloggers: How they work and how to detect them (part 1)[END_REF].
Impact: Privacy violation. Example: Use of information on an Online Social Network (OSN) for purposes they were not intended, as in the case of a teacher in training being denied her teaching degree due to her photos posted on an OSN [START_REF] Rosen | The web means the end of forgetting[END_REF].
Impact: Operational interruption. Example: Attacker continuously sending messages to a vehicle to prevent it from sending e-call messages to a service center in case of an accident [START_REF] Society | ecall: Time saved = lives saved[END_REF].
Figure 1 shows that the security threats for information systems we use fall into several categories: physical security violations, technical attacks, security policy violations, and errors caused by limited human knowledge. Technical security measures attempt to address these threats, but fall short in providing comprehensive security solutions in most cases. This paper investigates two main questions: What are the limitations of technical security solutions used in pervasive systems, social networks, and public information systems? And, how can technical security solutions be supported to reduce the risks of security threats to these systems? We answer the first question through analyzing the efficacy of technical security mechanisms for two case studies: connected vehicle and online banking. The analysis shows that technical security solutions, alone, cannot protect individuals and their assets from attacks. Therefore, we propose to extend the technical solutions with Societal Digital Security Culture (SDSC), which answers the second question.
Digital Security Culture (DSC) in organizations is well investigated, e.g. [START_REF] Williams | What does security culture look like for small organizations?[END_REF], [START_REF] Dodge | Phishing for user security awareness[END_REF], and [START_REF] Schlienger | Information security culture: The socio-cultural dimension in information security management[END_REF]. However, to the best of our knowledge, Colella and Colombini [START_REF] Colella | Security paradigm in ubiquitous computing[END_REF] are the only authors who have briefly discussed security awareness to address threats related to pervasive computing. There is currently no work on SDSC. The main contributions of this paper are to: (1) demonstrate that technical security mechanisms, alone, cannot sufficiently protect individuals and their assets from attacks on systems they use, (2) propose to extend technical security mechanisms through SDSC, and (3) suggest approaches for improving SDSC.
The rest of the paper is organized as follows: Section 2 describes the limitations of efficacy of technical security solutions. Section 3 provides an overview of "usable security" and its limitations. Section 4 defines and describes SDSC. Section 5 suggests some approaches for improving SDSC, Section 6 presents an example for reducing the risk of security threats through improving SDSC, and Section 7 concludes the paper.

Fig. 1. Security environment for everyday information systems. (Image references clockwise from top right corner: [START_REF] Bosch | Bosch health buddy[END_REF], [START_REF] Sgrenovation | [END_REF], [START_REF] Mybanktracker | [END_REF], [START_REF]Twitter logo[END_REF], [START_REF]Playstore logo[END_REF], [START_REF]Linkedin logo[END_REF], [START_REF]Android logo[END_REF], [START_REF]Facebook: Facebook logo[END_REF], [START_REF]Skype logo[END_REF].)
Limitations of Efficacy of Technical Security Solutions
Overview of the limitations of technical solutions
Companies which develop systems and applications for public use implement technical security solutions, which cannot alone prevent and protect the user of the systems or applications from attacks (even if they were certified to assure the security of the user). The main limitations of the technical solutions are:
L1. Policy violation. Technical security solutions often rely on the user to comply with some security policies, e.g., not disclose a password. However, a user may violate the policy, e.g., provide his/her password to other individuals.
L2. Weak mechanisms. Companies often implement ineffective security solutions for protecting users' assets, so they preserve low product cost. For example, pacemakers and implantable cardiac defibrillators have weak security mechanisms although they are widely used [START_REF] Halperin | Pacemakers and implantable cardiac defibrillators: Software radio attacks and zero-power defenses[END_REF].
L3. New attack scenarios. Companies implement security mechanisms for known attacks. However, attackers attack where they are least expected; they discover new vulnerabilities and exploit them.
Demonstration of the limitations of technical security mechanisms
This subsection presents two applications, describes their related digital attacks, and demonstrates the limitations of the technical security solutions for them.

Case 1: Connected vehicles. Figure 2 shows a scenario for remote access to connected vehicles.
In the last decade, several threat analyses, security solutions, and security and privacy architectures have been proposed for assuring secure communication in in-vehicle networks, between vehicles, between vehicles and personal devices, between vehicles and service centers, as well as detecting malicious data, protection against wormhole attacks, secure data aggregation for VANets, use of devices that include a hardware security module, over-the-air firmware update, protection against denial of service attacks, and access control to applications [START_REF] Ben Othmane | A Survey of Security and Privacy in Connected Vehicles[END_REF].
Car manufacturers implement security solutions to address the threats. However, there are reports that the security mechanisms they implement are subverted. For instance, Checkoway et al. [START_REF] Checkoway | Comprehensive experimental analyses of automotive attack surfaces[END_REF] performed a set of attacks on a vehicle (a sedan) including the following:
A1. Exploit a weakness and a flaw in the authentication program of the aqLink protocol implementation, namely, short (8-bit) random numbers and a buffer overflow vulnerability, to upload and run arbitrary code.
A2. Use a trojan horse for Android-based smart phones to exploit a buffer overflow vulnerability in the car's hands-free application that uses the Bluetooth protocol. (The attack requires the smart phone to be paired with the car's Bluetooth device.)
A3. Call the car and play a well-crafted "song" from an iPad that exploits a logic flaw and a buffer overflow vulnerability in the authentication of the aqLink protocol implementation to upload and run arbitrary code.
These attacks show the limitations of technical security solutions for connected vehicles. For instance, attack scenario A1 exploits an implementation weakness: random numbers are of 8-bits (limitation L2), which allows the attacker to upload an arbitrary program into the embedded system. The code may provide the attacker with the ability to inject messages into the in-vehicle network of the vehicle, such as increasing speed or disabling the brake. The other attack scenarios exploit source code vulnerabilities that the researchers found in the programs of the device: they are new attack scenarios (limitation L3).
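To see why an 8-bit challenge is so weak, a toy calculation (this is only an illustration of search-space size, not the actual aqLink protocol or exploit) compares the number of values an attacker would have to try on average for different challenge lengths:

```python
# Toy comparison of challenge search spaces; purely illustrative.
for bits in (8, 32, 128):
    space = 2 ** bits
    avg_guesses = space / 2          # expected brute-force attempts
    print(f"{bits:>3}-bit challenge: {space:.3e} possible values, "
          f"~{avg_guesses:.3e} guesses on average")
```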
Case 2: Online banking. Hackers exploit online banking Web application vulnerabilities and user faults through means like social engineering. Social engineering, e.g., phishing attacks, exploits human cognitive biases (flaws in human logic arising from the different ways humans perceive reality) to trick humans into performing actions, such as disclosing sensitive information. Phishing attacks are conducted through (a) presenting illegitimate digital information that attempts to fraudulently acquire sensitive information, such as login credentials, personal information, or financial information, or (b) masquerading as a trustworthy entity, e.g., a well-known organization or an acquaintance.
The phishing information is usually distributed through emails that contain an attachment or a web link. Figure 3 shows a phishing email masquerading as HDFC bank. The attack scenarios posed by phishing emails include:
B1. Fool online banking users into sending the hacker their sensitive information, such as Personally Identifiable Information (PII) and financial information, which could be used for identity theft and financial fraud.
B3. Deceive users to install malicious software on their computers, which may give the hacker access to the users' computers and other computers accessible from the users' computers or capture their login credentials and personal data and send them to the hacker for malicious use.
Technical and usable security solutions are not sufficient to mitigate attacks B1, B2, and B3. For instance, attack scenario B1 succeeds for users who violate the policy (limitation L1): Banks do not request PII and financial information through emails, so users should not reply to emails requesting such information; attack scenario B2 exploits weak mechanisms (limitation L2) that do not detect Website spoofing; and attack scenario B3 often uses new techniques (limitation L3) to bypass anti-malware software.
Usable Security
Whitten and Tygar [START_REF] Whitten | Why Johnny can't encrypt: a usability evaluation of pgp 5.0[END_REF] have identified the weakest link property: attackers need to exploit only a single error, and human frailty provides this error: humans are, frequently, the "weakest link" in the security chain. Whitten and Tygar [START_REF] Whitten | Why Johnny can't encrypt: a usability evaluation of pgp 5.0[END_REF] pointed out that users do not apply security mechanisms, although they know them, simply because the mechanisms are not usable enough. A security software is usable [START_REF] Whitten | Why Johnny can't encrypt: a usability evaluation of pgp 5.0[END_REF] if the people who are expected to use it: (1) are reliably made aware of the security tasks they need to perform, (2) are able to figure out how to successfully perform those tasks, (3) don't make dangerous errors, and (4) are sufficiently comfortable with the interface to continue using it.
Security usability addresses the question: why users can't apply security mechanisms. The techniques for usable security aim to reduce the complexity of security mechanisms, improve the knowledge of users, and reduce the cost of applying them in terms of efforts and money. However, making security usable and changing users' knowledge doesn't enforce change in their behavior [START_REF] Sasse | Usable Security: What is it? How do we get it? In: Security and Usability: Designing secure systems that people can use[END_REF]. Sasse and Flechais find that security culture, based on a shared understanding of the importance of security, is the key to achieving desired behavior [START_REF] Sasse | Usable Security: What is it? How do we get it? In: Security and Usability: Designing secure systems that people can use[END_REF].
Overview of Societal Digital Security Culture
Members of the society need to gain knowledge and experience sufficient to avoid the consequences of the limitations of technical solutions. Security limitations have been addressed for the case of organizations using DSC, which extends (usable) technical security solutions [START_REF] Schlienger | Information security culture: The socio-cultural dimension in information security management[END_REF]. The most common definition of DSC-that we adopt in this paper-is the collective knowledge, common practices, and intuitive common behavior about digital security (cf. [START_REF] Williams | What does security culture look like for small organizations?[END_REF]). This definition identifies knowledge and behavior (which includes practices) as the main levels of DSC.
Table 2 shows the differences between technical security solutions, usable security solutions, DSC, and SDSC. It shows that technical security solutions, usable security solutions, and SDSC complement each other, and that SDSC extends DSC from organizations to the society. SDSC is similar, in principle, to DSC in organizations; it helps individuals use pervasive computing systems, social networks, and public applications while protecting themselves and their assets from digital security threats. Since the limitations of the (usable) technical solutions affect the members of the society in general and an effort at the level of the society should be made to address them, we consider this challenge societal; that is, it does not only concern individuals who happen to be the victims. A second reason for considering the issue societal is the fact that people imitate each others' behaviors.
SDSC and DSC have several differences including the following.
-Organizations decide on the ISs they use and can control the threats they are exposed to. In contrast, it is difficult, if not impossible, for the society to limit the ISs used by its members.
-Organizations control the selection of their members, so it is possible to select only individuals who share certain values. In contrast, the society has limited control over the selection of its citizens.
-Organizations set the policies for using their ISs. In contrast, the security policies in the society are set in response to events related to using ISs.
-Organizations can set efficient measures for enforcing desired behaviors. In contrast, setting efficient measures for enforcing desired behaviors in the society requires important resources and a long time.
-Organizations can easily set measures for detecting violations. In contrast, setting such measures in the society may cause privacy violations. (Recall that members of the society use ISs, in most cases, for private business.)
The SDSC of a group has a level which ranges between weak and strong. An example of an indicator of weak SDSC is the willingness of the members of the group to use pervasive systems without checking the associated security risks: potential threats with their occurrence and impacts [START_REF] Shirey | Internet Security Glossary, Version 2[END_REF]. An example of an indicator of strong SDSC is the importance members of the group give to evaluating the risks associated with a system they intend to use.
A survey conducted in USA in 2012 by the National Cyber Security Alliance (NCSA) and McAfee [27] reveals the weak SDSC in USA. For instance, Table 3 shows that 30% of the interviewees either never or do not recall they ever changed their online banking password (and more than 50% for the case of OSN) and Table 4 shows that about 70% of interviewees are either not sure or did not install a security software for their smart phones.
Approaches for Improving Societal Digital Security Culture
This section proposes three approaches for improving SDSC: instituting security policies, spreading knowledge, and behavioral improvement; these approaches are complementary. The approaches are borrowed from DSC in organizations and adapted for society. Table 5 lists the three approaches and the methods that implement these approaches. It specifies for each method whether it affects knowledge and attitude, behavior, or both.
Institute digital security policies
A digital security policy specifies acceptable and unacceptable behavior in relation to security practices. A collection of security policies specifies, indirectly, the target SDSC: DSC that the society wants to "live in." The objective of a security policy is to influence and to direct the behavior of individuals on protecting their own digital assets and themselves (cf. [START_REF] Lim | Embedding information security culture emerging concerns and challenges[END_REF]) from security threats to the systems they use and to discourage compromising the security of others.
In order to be a deterrent for attackers and those justifying the abusive use of people's personal information with loopholes in the system, politicians, citizens, and security experts should collaborate to create SDSC in the form of laws. As evident from the aforementioned case study, instituting security policies will not be sufficient for a complete SDSC. The policies need to be (a) disseminated to individuals and (b) enforced through incentives and punishment by laws.
Spread the knowledge about security threats
This subsection discusses security awareness programs and leadership support as methods that enable spread of knowledge about digital security threats. Security awareness programs. They aim to improve the awareness of individuals about security risks [START_REF]A users' guide: How to raise information security awareness[END_REF]. They are used to change (and improve) the knowledge and attitude of individuals towards digital security threats. These programs may use (1) promotional methods, such as mugs and screen savers; (2) improving methods, such as rewarding mechanisms; (3) educational and interactive methods, such as demonstrations and training; and (4) informative methods, such as emails and newsletters [START_REF] Johnson | Security awareness: switch to a better programme[END_REF]. Another means of raising security awareness is using OSNs to provide an effective way for information dissemination, especially for educating the public about policies and attacks.
Existing security awareness programs, although successful in changing the attitude toward security risks, are not effective in changing users' habits and intuitive behavior to respond as necessary to security threats [START_REF] Kruger | A prototype for assessing information security awareness[END_REF]. Kruger and Kearney [START_REF] Kruger | A prototype for assessing information security awareness[END_REF] report that trainees exhibit a good level of awareness attitudes and knowledge, but exhibit poor security behavior. They report that awareness behavior is as low as 18% when it comes to adhering to the security policies. This shows the limitation of security awareness programs in effectively improving the intuitive behavior towards security risks, which further supports the use of the suggested approaches to improve SDSC. Dodge et al. investigated the response of military cadets in the USA to phishing attacks [START_REF] Dodge | Phishing for user security awareness[END_REF]. They sent phishing attacks to the students without previous announcement of the exercise, evaluated the responses, and alerted students about the result of the test. The experiments showed that senior students had a better security culture than junior students, which shows the difference between security culture and security awareness. Leadership support. Leaders' support and commitment are crucial to changing SDSC (cf. [START_REF] Lapke | Power relationships in information systems security policy formulation and implementation[END_REF], [START_REF] Lim | Embedding information security culture emerging concerns and challenges[END_REF]). Leaders need to embody the security best practices; they should behave according to the policies, be engaged, and live up to the security policies they set. The commitment and support of leaders to SDSC change helps disseminate the knowledge because their activities are visible to the society members, which encourages them to also practice the policy.
Improve intuitive behavior towards security threats
This subsection describes four methods for behavior improvement: use of incentives, use of games, use of certification, and use of courses. Use of personal incentives. Personal incentives motivate individuals to change their behavior. They can be categorized in three classes:
-Material or morals rewards: Offering small rewards, e.g., money and praise by peers, to the users to keep them interested in the training program. Thus, over time, they undergo behavioral changes towards perceiving and reacting to the attack scenarios. -Moral or material sanctions: The fear of embarrassment and punishment, e.g., penalty and blame, forces users to behave appropriately. -Responsibilities and accountability for complying with policies [START_REF] Lim | Embedding information security culture emerging concerns and challenges[END_REF]: Influence the users to be responsible in following the policies. For instance, nondisclosure agreements help preventing leakage of sensitive information.
The effectiveness of rewards and sanctions depends on the satisfaction of the receiving individual [START_REF] Pahnila | Employees' behavior towards is security policy compliance[END_REF]. For example, (we expect) a small monetary reward may motivate a poor but not a rich individual. Use of games. Games are competitive interactions involving chance and imaginary setting and are bound by rules to achieve specified goals that depend on the player skills. By nature, games are competitive; users like to play the games and get better scores. Games could simulate attacks and protection mechanisms.
We propose to exploit the characteristics of games for creating competitiveness to improve the SDSC of individuals. Games are already being used in security awareness programs to help employees gain skills to discover threats and develop reactions to them [START_REF] Cone | A video game for cyber security training and awareness[END_REF]. Users could play a game in which they are required to discover the threats and protect themselves. The games help users understand how to discover threats, know what protection mechanisms are and how they work, and how to identify attacks and react to them. They transform the behavior of users towards security threats.
Figure 4 shows how the proposed approaches for improving SDSC should be integrated to achieve a high level of security for any information system. As seen in the figure, while there is a logical time ordering relation between most of the proposed methods, leadership support should come into play at every stage of the SDSC improvement process.
Example on Reducing Risks of Security Threats Using Societal Digital Security Culture
This section shows through an example how to improve the SDSC to address phishing attacks. We assume that the online banking system implements usable technical mechanisms and we develop a program that integrates coherently a set of approaches to improve the security culture of a society.
The first phase of the plan is to create two policies (P1.1): (1) no PII should be disclosed through email, and (2) two-step-authentication mechanisms should be required for accounts that use sensitive information (The second step could be providing a secret answer to a personal question in the case that the first step, the login, was performed at a host unregistered by the user). The first policy aims to prevent users from providing sensitive information to hackers, who pretend to be the bank. The second policy aims to prevent users from using a spoofed bank web page requesting login credentials, as the second step of authentication being unique to every user will not match, making the user aware of the phishing attack. The policies-and possibly other policies-should constitute objects of law, created by a government agency, which regulates instituting the policies. The government should enforce the policies.
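A minimal sketch of the second policy could look as follows; the user records, host registry, and secret-question storage are invented for illustration (a real deployment would, at minimum, hash secrets and bind hosts more robustly than by IP address):

```python
# Invented data for illustration only.
registered_hosts = {"alice": {"203.0.113.5"}}
secret_answers = {"alice": "rex"}                 # answer to the personal question

def second_step_required(user, host_ip):
    # Second step only when the login comes from a host the user has not registered.
    return host_ip not in registered_hosts.get(user, set())

def authenticate(user, host_ip, password_ok, answer=None):
    if not password_ok:                           # first step: the regular login
        return False
    if second_step_required(user, host_ip):      # second step: personal question
        return answer == secret_answers.get(user)
    return True

print(authenticate("alice", "203.0.113.5", True))          # registered host -> True
print(authenticate("alice", "198.51.100.9", True, "rex"))  # new host, correct answer -> True
print(authenticate("alice", "198.51.100.9", True, "bob"))  # new host, wrong answer -> False
```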
The next phase is to communicate the policies to members of the society through security awareness programs (P2.1). Users become aware of policies and threats, learn the proper usage of systems and handling of information, and develop the behavior to avoid attacks and to act when they occur (as they do in the case of a fire, for example). The banks could motivate their users by, e.g., offering loyalty reward points (P3.1) for successful completion of training programs and for reporting phishing attacks. The incentives change the behavior of users towards the attack: they would learn to distinguish emails coming from a generic mail service (e.g., Gmail) from emails coming from a bank, and to recognize phishing emails by their characteristics, such as a generic greeting, a fake sender address, a false sense of urgency, and fake and deceptive web links.
Periodic knowledge checks through renewable training and certification (P3.3) keep the users updated about new policies and new threats.
Conclusion
The use of pervasive computing systems, social networks, and public information systems exposes individuals to the impacts of security threats to these systems. This paper demonstrates that technical security solutions alone cannot effectively protect individuals and their assets from attacks on the systems they use, and proposes to complement (usable) technical solutions with SDSC: the collective knowledge, common practices, and intuitive common behavior about digital security that the members of a society share. It also suggests a set of approaches, borrowed from organizational DSC, for improving SDSC. This work is a first step in investigating SDSC. Our future work includes developing surveys for assessing the security culture, conducting case studies for improving SDSC (e.g., improving the security culture related to connected vehicles), evaluating the effectiveness of approaches for improving security cultures, and investigating how to develop a coherent plan for improving the security culture in a society.
Fig. 2. Remote access to a connected vehicle.
Fig. 3. Phishing example.
Fig. 4. The SDSC process.
Table 2. Difference between the digital security approaches.
| | Technical security solutions | Usable security solutions | DSC | SDSC |
| Target entity | information systems | human-computer interactions | employees in organizations | members of the society |
| Protection target | information systems and their users | information systems and their users | information systems of organizations | users of the society |
| Beneficiary | individual | individual | organizations | society |
| Liability | information system operators | information system operators or distributors | organizations | members of the society, organizations, and law makers |
| Preparation for unknown attacks | low | low | moderate | moderate |
| Technical knowledge requirement | high | low | low | low |
Table 3. Password change habits in the society [27].
| | weekly | monthly | twice/year | once/year | never | not sure |
| How often do you change passwords for your banking account(s)? | 8% | 16% | 19% | 18% | 28% | 12% |
| How often do you change passwords for your social media account(s)? | 6% | 11% | 13% | 19% | 42% | 10% |
Table 4. Generic interest in security [27]. Response options: yes / no / not sure.
Table 5. Approaches for digital security culture enforcement.
| Approach | Knowledge and Attitude | Behavior |
| Institute security policies | | |
| Develop security policies (P1.1) | x | |
| Spread the Knowledge | | |
| Security awareness programs (P2.1) | x | |
| Leadership support (P2.2) | x | |
| Behavioral improvement | | |
| Use of personal incentives (P3.1) | | x |
| Use of games (P3.2) | | x |
| Use of certification (P3.3) | x | x |
| Education of children (P3.4) | x | x |
HDFC bank is a fictive name.
In this section we often use "confer" (cf.) because in the references the ideas apply to organizations; we adapt these ideas to individuals/members of society.
Acknowledgment
This work is supported partially by the Dutch national HTAS innovation program, HTAS being an acronym for High Tech Automotive Systems. Any opinions expressed in this paper are those of the authors and do not necessarily reflect those of the Dutch national HTAS innovation program.
The authors thank Drs. Reinier Post and Joost Gabriels from LaQuSo, Eindhoven University of Technology for their valuable comments. | 31,586 | [
"1001121",
"1001122",
"1001123",
"1001124",
"1001125",
"1001126"
] | [
"4629",
"4629",
"147250",
"147250",
"147250",
"303395"
] |
01463841 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01463841/file/978-3-642-39218-4_2_Chapter.pdf | Suyeon Lee
email: [email protected]
Jehyun Lee
Heejo Lee
Screening Smartphone Applications using Behavioral Signatures
Keywords: Smartphone security, Android, Malware, Runtime semantic signature
The sharp increase in smartphone malware has become one of the most serious security problems, and the most significant part of the growth is variants of existing malwares. The legacy approach, signature matching, is efficient in the temporal dimension, but it is not practical because it lacks robustness against variants. The counter approach, behavior analysis, handles the variant issue but takes too much time and too many resources. We propose a variant detection mechanism using a runtime semantic signature. Our key idea is to reduce the control and data flow analysis overhead by using, as a signature, binary patterns for the control and data flow of critical actions. Flow information is a significant part of behavior analysis but incurs high analysis overhead. In contrast to previous behavioral signatures, the runtime semantic signature achieves higher family classification accuracy without the flow analysis overhead, because the binary patterns of the flow parts are hardly shared by out-of-family members. Using the proposed signature, we detect new variants of known malwares by static matching, efficiently and accurately. We evaluated our mechanism with 1,759 randomly collected real-world Android applications, including 79 variants of 4 malware families. Our mechanism showed 99.89% accuracy on variant detection, and its time complexity is linear in the number of target applications. It is fully practical and outperforms previous work in both accuracy and efficiency.
Introduction
Smart devices are now facing a serious threat posed by surging malware. The smartphone has become the most popular target for malware writers since it contains a great deal of user information and has mobile billing capability. The majority of smartphone malwares leak user information and perform unwanted billing by abusing the smartphone's original functionality. Recently, smartphone malwares have adopted several obfuscation techniques, such as metamorphism, to avoid detection. According to F-Secure's report [START_REF] Lee | Mobile threat report q2 2012[END_REF], about 82% of newly discovered mobile malware is revealed to be a variant of a known malware family. This trend is especially remarkable on Google Android. Overall, smartphone malwares can cause more direct invasion of privacy and credential damage to users compared with desktop malwares. However, the flood of metamorphic malwares on smartphones impedes efficient handling of malware attacks. Accordingly, a mechanism that efficiently filters variants of known malwares is needed to preserve smartphone security and user privacy.
Previous approaches to variant detection based on behavior analysis are not well suited to identifying the malware family to which a detected variant belongs. Those approaches detect variants by estimating the similarity of behavior, such as API call frequency or API call sequence [START_REF] Kwon | Bingraph: Discovering mutant malware using hierarchical semantic signatures[END_REF], with a known malware. Extracting and comparing behaviors from large numbers of target executables incurs heavy computing overhead. Detection based on behavioral similarity is helpful for detecting unknown malware, but it provides no evidence that certain variants are derived from the same malware.
In contrast, the representative signatures usually used by AV vendors are effective for defining and detecting a malware family, and they are efficient compared with behavior-based approaches in terms of time and space complexity. However, such signatures not only have narrow detection coverage over a malware family, due to their lack of semantics, but are also easily defeated by malwares that adopt code obfuscation such as metamorphism. In conclusion, re-investigating the entire code area for behavior analysis, or creating an additional signature for each slight modification of a malware, is inefficient compared with the little effort consumed to make a variant.
In this paper, we propose a variant detection mechanism that filters new variants of known malware families. Since most Android malwares are repackaged versions of legitimate executable files, the malwares in the same family retain their semantics unchanged. Using this feature, we detect variants efficiently and accurately by analyzing these semantics statically. The proposed mechanism uses a runtime semantic signature of known malwares: a malware family signature that includes family-representative binary patterns for control and data flow instructions and character strings, as well as API calls. Including flow instructions and family-representative patterns in the signature yields accurate variant detection and family classification while reducing analysis overhead.
In our experimental evaluation, the mechanism shows high detection performance and practically low time consumption for variant detection. We evaluated it with 1,759 real-world Android applications, including 79 variants of 4 malware families: DroidDream, Geinimi, KMIN, and PjApps. The experimental set was randomly collected by an automated crawler during the period September 2011 to December 2011. For the performance evaluation, we first created runtime semantic signatures for the 4 malware families; our mechanism showed over 99% detection accuracy and nearly 97% recall, on average, in 10-fold cross-validation. Second, for unknown detection performance, we compared the four family signatures with 100 unidentified malwares, and our mechanism detected 56 of them. This result shows that our mechanism detects code-level variants that keep the same semantics, while legacy signature-based approaches are generally not able to detect such variants. Finally, in the scalability evaluation, our mechanism screens a thousand applications against a thousand signatures within 23 seconds. Considering its efficiency and accuracy in variant detection, our mechanism is applicable to screening Android applications before they are uploaded to public app markets.
Our contributions are two folds:
-We propose a runtime semantic signature that is used for detecting variants of a known malware family accurately and efficiently. The runtime signature is a signature for a malware family that captures its shared API calls and representative parts of code carrying control and data flow semantics. By combining malware semantics with sequence information, it solves an existing malware detection issue and makes it possible to detect malware, including variants, even when a variant adopts an evasion technique such as metamorphism.
-We reduce the number of signatures. The runtime signature represents a malware family as a single signature set and covers many family members, including newly appearing variants. It is based on the sequence of API calls shared among malwares that have similar behaviors, while the adoption of family-representative binary patterns enables detection and classification of malware families with practical accuracy. This makes it possible to respond efficiently to the exponentially increasing number of malwares.
The rest of this paper is organized as follows. Section 2 describes the details and assumptions of the proposed mechanism. Section 3 presents the experimental results of our system on Android. Section 4 distinguishes our work from previous work, and Section 5 discusses limitations and future work and concludes the paper.
2 Malicious Application Detection using Behavioral Signature
Mechanism Overview
Our detection mechanism is an advanced form of signature-based approach. On smartphones, the number of inspection targets (i.e., applications) is increasing sharply, and malware variants take the majority share of new malwares.
A legacy signature-based approach can scan a large number of applications in a timely manner. However, since the performance of a signature-based approach depends heavily on its signatures, it is not robust against the many malware variants. Therefore, we add runtime semantics to complement this weakness of the signature-based approach.
To detect malware variants belonging to a family with a single robust signature set, we basically use the malicious behaviors shared by family members, represented by API calls and the control and data flow between the calls. The API call sequence is a well-known behavior-based approach for detecting malware variants and reducing the volume of signatures. However, it suffers from false detections in the smartphone environment, because legitimate smartphone applications behave much more similarly to malicious applications than in the legacy PC environment. Due to this problem, variant detection on smartphones should rely on more critical evidence of family membership. We overcome this challenge by adding to the behavioral signature the binary patterns of the instructions between API calls that perform control and data processing. In legacy PC environments, the binary patterns of control and data processing have too many variations and are not very useful because of the large number of different APIs. However, the executables running on a smartphone use a small enough variety of APIs and instructions for such patterns to serve as a behavioral signature. The binary patterns of the control and data flow, the runtime semantics, are general enough to detect the variants of a malware generated by code and class reuse or repackaging, but they are rarely shared with other families' members or with benign applications. In our evaluation, the proposed detection mechanism using the behavioral signatures shows detection performance that is practical in both efficiency and accuracy.
The key features of the proposed behavioral signature structure are twofold. First, the signature contains binary patterns of API call instructions in an executable file; in the Android environment, an application carries its executable code as a Dalvik Executable (DEX) file. Second, the signature contains runtime semantics for reducing control and data flow analysis time and for classifying malware families. The runtime semantics are also bytecode patterns, namely those used for the data and control flow between API calls. While analyzing known malwares, we monitor the taint flow of sensitive APIs and associate the flow between APIs with three relationship types: flow, call, and condition. Figure 1 shows the overall architecture of the proposed mechanism. To efficiently and reliably winnow new malwares from target applications, our mechanism conducts two analysis phases, each of which is quite straightforward. We first construct the behavioral signatures from known malware binaries and their analysis reports, and then match the signatures against target DEX files. These phases produce a set of similarities between signatures and target applications as well as a security report for each target application. In the remainder of this section, we detail each phase.
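As an illustration of how such runtime semantics could be represented, the following minimal Python sketch (our illustration, not code from the paper; the API names and byte values are assumed placeholders) pairs two API calls with one of the three relationship types and the bytecode glue between them.

```python
from dataclasses import dataclass
from enum import Enum

class Relation(Enum):
    FLOW = "flow"            # data produced by src_api is consumed by dst_api
    CALL = "call"            # the method around src_api directly invokes dst_api
    CONDITION = "condition"  # dst_api executes only under a branch guarded by src_api

@dataclass(frozen=True)
class SemanticEdge:
    src_api: str
    dst_api: str
    relation: Relation
    glue_pattern: bytes      # constant opcode bytes of the instructions between the calls

# Hypothetical edge: a device-ID read whose result flows into a network write.
edge = SemanticEdge(
    src_api="TelephonyManager.getDeviceId",
    dst_api="HttpURLConnection.getOutputStream",
    relation=Relation.FLOW,
    glue_pattern=bytes.fromhex("6e20"),   # placeholder bytes, not a real signature
)
print(edge.relation.value)
```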
Signature Construction
In this section, we explain the proposed behavioral signature, starting from the definition of malicious behavior. Signature construction begins with detecting malicious behaviors by dynamic analysis. The construction process then extracts binary patterns and character strings from known malwares and estimates the weight of each pattern and string according to the family it belongs to.
Malicious Behavior Definition Before scanning applications, we need to clarify which behaviors are considered malicious. Following the 'malicious payload' classification of Y. Zhou et al. [START_REF] Zhou | Dissecting android malware: Characterization and evolution[END_REF], we selected four severe behaviors for malware detection and detail the definition of each behavior below.
-Privilege Escalation Since the Android platform consists of more than 90 open-source libraries as well as the Linux kernel, flaws in such libraries naturally lead to vulnerabilities of the whole Android system. At the time of this research, seven exploits had been reported that make it possible to gain root privilege on an Android device. By adopting these exploits, an application becomes able to perform kernel-level control of the device without any user notification. The most risky and widespread malwares, such as DroidDream and DroidKungfu, initially contain exploits to perform high-risk malicious activities surreptitiously. Since exploitation is the most serious threat for users, we detect known privilege escalation exploits that are either contained in known malwares or can be found on Internet forums.
-Remote Control Over 90% of currently reported Android malwares have remote control capability. Specifically, Android malwares with remote control capability follow commands from a designated C&C server via HTTP web requests or SMS messages. More recently, malware authors obfuscate the C&C server IP address or the commands to make malware analysis more laborious. We identified three specific behaviors for detecting remote control: (1) establishing an Internet connection and registering a broadcast intent for SMS messages, (2) receiving Internet packets or SMS messages, and (3) application- or kernel-level runtime execution of the received data.
-Financial Charge Financial charging is the most profitable avenue for malware authors. Since SMS messages can be sent surreptitiously, i.e., without any user notification, on Android, many malwares are designed to send premium-rate SMS messages or place premium-rate phone calls. In early times, malwares carried hard-coded numbers to charge users. However, recent malwares with financial charging capability have become more complicated by changing the way they obtain phone numbers, such as runtime push-down from a C&C server or reading them from an encrypted file inside the package. To detect financial charging capability, we monitor APIs that send messages (e.g., sendTextMessage) to hard-coded numbers.
-Information Collection Since a smartphone is one of the most trusted and user-friendly devices, it contains a great deal of information that is deeply related to the owner's social life and credentials. Therefore, malware authors try to obtain a wide range of information, from device-specific information (e.g., IMEI, IMSI, and phone number) to the owner's information (e.g., contact book, SMS messages, call log, and credentials). The exfiltration of such information affects not only the user but also the people around him/her, both directly and indirectly. We identify the APIs that provide such information and the network transfer APIs that may send the information to a remote server.
Malicious Behavior Detection
We identify each behavior as a set of APIs. However, defining the APIs that form each behavior is difficult because they vary with the kind of malware. For efficient and reliable analysis, we dynamically analyze known malware samples and extract the APIs associated with malicious behaviors based on the corresponding malware reports. Additionally, to classify the family once a target application has been identified as malware, we extract characteristics that can represent the whole malware family. In practice, variants of many Android malwares share common strings, constants, or even classes or methods. We extract the characteristics that appear only in each family; strings and constants common across families are not considered proper characteristics.
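A minimal sketch of such a behavior-to-API mapping is given below (Python; the API sets are illustrative assumptions, since the paper derives its actual sets from dynamic analysis and malware reports rather than from a fixed table).

```python
# Illustrative mapping only; the paper's actual API sets come from dynamic
# analysis of known malware samples and their published analysis reports.
BEHAVIOR_APIS = {
    "privilege_escalation": {"Runtime.exec", "ProcessBuilder.start"},
    "remote_control": {"registerReceiver", "HttpURLConnection.connect"},
    "financial_charge": {"SmsManager.sendTextMessage"},
    "information_collection": {"TelephonyManager.getDeviceId",
                               "TelephonyManager.getSubscriberId",
                               "ContentResolver.query"},
}

def behaviors_present(called_apis):
    """Return the behavior classes for which at least one indicative API appears."""
    return {name for name, apis in BEHAVIOR_APIS.items() if apis & called_apis}

print(behaviors_present({"SmsManager.sendTextMessage", "ContentResolver.query"}))
```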
Signature Construction Our signature is devised to provide a knowledge base for further investigation. To achieve reliable and efficient malware detection, we design our signature structure to meet the four requirements below.
Basically, a signature should contain (1) behavioral semantics for basic detection capability, and, if a target application is identified as a variant of known malware, it should be (2) identifiable as a member of a certain malware family. Also, to overcome the pitfalls of the legacy signature-based approach, the signature should be (3) reliable against different evasion techniques while maintaining (4) the efficiency of a signature-based approach. Figure 2 illustrates the structure of a behavioral signature with an example. A signature represents a malware family; in other words, a single matching can tell whether a target application is a new member of a known family. The signature consists of three main elements. The first is malicious APIs and their runtime semantics for control and data flow. We extract the APIs that make up the malicious behaviors described in Malicious Behavior Detection; when extracting them, runtime semantics such as the repetition count of an API or the relationship between former and latter APIs are also extracted to infuse semantics into the signature. The second is family characteristics. Since Android applications share a broad range of behaviors even when they belong to different malware families, identifying information for each family is necessary. Malwares in the same family tend to share the same strings, constants, methods, or even classes; based on this tendency, we extract family-common strings, constants, methods, and classes as family characteristics for family identification. The third is the weight of each behavior within the family. Note that the signature contains sets of APIs and semantics, so that it represents every behavior appearing in every variant of the family; more frequently used APIs take greater weight.
In conclusion, using the signature, our mechanism detects malicious behaviors semantically through APIs and their runtime semantics, and simultaneously identifies the family of a newly detected malware through API weights and family characteristics.
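To make the three-element structure concrete, the following minimal Python sketch (ours, not the paper's data format) models a family signature; the weight and pattern values are placeholders, and the characteristic strings are taken from the DroidDream example of Figure 2.

```python
from dataclasses import dataclass, field

@dataclass
class FamilySignature:
    family: str
    # masked bytecode patterns (API calls plus runtime semantics), each weighted
    # by how frequently it appears across the family's known variants
    weighted_patterns: dict = field(default_factory=dict)
    # strings, constants, methods, or class names seen only in this family
    characteristics: set = field(default_factory=set)

droiddream = FamilySignature(
    family="DroidDream",
    weighted_patterns={bytes.fromhex("6e20"): 0.8},          # placeholder entry
    characteristics={"rageagainstthecage", "adbRoot", "runExploid"},
)
```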
In terms of signature matching, since a signature represents one malware family, only a small number of signatures is needed to cover a large number of malwares. Our mechanism matches the signatures against target applications efficiently in conjunction with a static matching algorithm, which we describe in detail in the following section.
Malware Detection
Similarity Measurement The similarity calculator compares the DEX file of a target application with the behavioral signatures and estimates their similarity. Each signature assigns a weight to each behavior based on its discriminative power for malware family identification. The similarity calculator first scans all potentially malicious behaviors contained in a DEX file, based on the APIs and semantics stored in each signature; specifically, a behavior is represented by the names and DEX bytecode patterns of API calls for faster scanning (Figure 2 shows example entries for the DroidDream family, such as adbRoot: runExploid: "rageagainstthecage" and Setting: postUrl: "POST"). However, DEX bytecodes differ between applications even when they contain the same APIs, so directly matching bytecode patterns gathered from a known malware against a target application is implausible. Instead, we separate the constant area and the variable area of the DEX bytecodes. For example, the Dalvik instruction invoke-virtual (arg1, arg2, arg3, arg4, arg5) methodA corresponds to 6e 35c (4bit, 4bit, 4bit, 4bit, 4bit, 4bit) 16bit in hexadecimal code; here, 6e 35c is the constant part that does not differ between applications, while (4bit, 4bit, 4bit, 4bit, 4bit, 4bit) 16bit is the variable part that does. Second, the similarity calculator estimates the similarity S(T, A) between a target DEX T and a signature A as follows:
S(T, A) = \frac{\sum_{j=1}^{n} W(b_j \mid b_j \in T \cap A)}{\sum_{i=1}^{m} W(b_i \mid b_i \in A)}
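The sketch below (Python; our illustration, not the paper's implementation) shows one way to combine the constant/variable separation with the weighted ratio above: each signature pattern carries a byte mask whose 0xFF bytes mark the constant part, and the per-pattern weights play the role of W(b).

```python
def mask_match(pattern, mask, window):
    """True if the constant bytes of `pattern` (mask byte 0xFF) equal `window`."""
    return len(window) == len(pattern) and all(
        (w & m) == (p & m) for p, m, w in zip(pattern, mask, window))

def contains_pattern(dex, pattern, mask):
    """Naive linear scan; the paper's implementation uses a hash-tree instead."""
    return any(mask_match(pattern, mask, dex[i:i + len(pattern)])
               for i in range(len(dex) - len(pattern) + 1))

def similarity(dex, signature):
    """S(T, A): weight of signature patterns found in the DEX over the total weight.

    `signature` maps pattern bytes -> (mask bytes, weight)."""
    total = sum(w for _, w in signature.values())
    found = sum(w for pat, (mask, w) in signature.items()
                if contains_pattern(dex, pat, mask))
    return found / total if total else 0.0

# Toy example with made-up bytes and weights.
sig = {bytes([0x6e, 0x20]): (bytes([0xff, 0xf0]), 0.7),
       bytes([0x22, 0x00]): (bytes([0xff, 0x00]), 0.3)}
print(similarity(bytes([0x01, 0x6e, 0x25, 0x13]), sig))   # -> 0.7
```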
Security Analysis The security analyzer decides whether a target application is malicious and determines the malware family most similar to it. The decision method is straightforward: among the resulting similarities for each family signature, the family with the highest similarity is chosen as the original family of the target application. If the application has approximately equal similarities with multiple malware families, the family characteristics are additionally used for more accurate identification.
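A minimal sketch of this decision step follows (illustrative; the 25% threshold mirrors the value used later in the unknown-malware experiment, and near-ties would additionally be resolved with the family characteristics).

```python
def classify(scores, threshold=0.25):
    """Pick the family with the highest similarity, or 'unknown' below the threshold."""
    if not scores:
        return "unknown"
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

# Made-up similarities for one target application:
print(classify({"DroidDream": 0.12, "Geinimi": 0.41, "KMIN": 0.05, "PjApps": 0.0}))
# -> Geinimi
```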
Performance Evaluation
To evaluate the practicality of our malware detection mechanism, we experimentally measured its time efficiency and detection performance. The experiments were performed on a desktop PC with a 2.8 GHz Intel dual-core CPU, 2 GB of RAM, and Microsoft Windows XP SP3 as the OS. Our self-developed experimentation program, written in C++, measures time consumption and detection accuracy for malware variant detection.
Data Set
For the performance evaluation, we gathered 79 variants of four malicious Android applications and 1,680 legitimate applications published in the real world. In detail, the variant set consists of 11 variants of DroidDream, 12 of Geinimi, 40 of KMIN, and 16 of PjApps. The malwares were gathered from public malware databases on the Internet. On average, the DEX files of the malware variants are 340 KB in size and those of the legitimate applications are 260 KB.
Signature Set
Behavioral signatures of known malwares are constructed from pre-analyzed and published malicious behaviors. From the DEX files we extracted class names, method instruction bodies, and internal strings of the methods that perform the malicious activities. In our experiments, the signatures extracted from a randomly chosen training set are approximately 10 KB per malware family, while the white signatures trained from over a thousand sample legitimate applications are about 3 KB.
Experimental Result
Variant Detection Performance The proposed system first detects and identifies a new malware as a variant of a known malware. The major part of variant detection is similarity calculation: in contrast to the legacy signature matching method, our detection method investigates how similar a new application is to known malicious ones, and an application's similarity to a malware family is determined by the ratio of shared signatures, as explained in the Similarity Measurement section. If the application is determined to be unknown in this step, it is either legitimate or belongs to a new malware family that does not correspond to any known family, and it needs to be analyzed in the dynamic analysis phase.
For evaluating variant detection and identification performance, we performed 10-fold cross-validation with the real-world malware samples. We made ten groups per malware family, took one group for signature extraction and the remaining nine groups as detection targets, and repeated the test ten times with randomly generated group configurations. The variant detection system shows reliable performance: in the best threshold configuration it achieves 99.89% accuracy and a 98.73% F-measure in the 10-fold cross-validation. Even though there are far more legitimate samples than malware samples, the performance shown in Table 2 is remarkable compared with previous approaches.
The experimental results illustrated on the left side of Figure 3 show a recall rate of more than 90% even at detection thresholds above 30% similarity, which produce no false positives. The recall rate is the rate of detecting variants as members of the corresponding family. Although several variants of Geinimi and DroidDream share some methods and strings and show over 30% similarity, there are no family-mismatched detections. When families share many of the same malicious functionalities, their signatures are likely to share the same API calls and methods; in such cases, malwares belonging to similar families can receive false negative decisions. Finally, the false detection of legitimate applications is discussed together with the effect of the white signature.
The malware family with the lowest detection performance is DroidDream. According to our manual investigation using decompilation, the major reason for the lower similarity is the difference in included classes and methods: several DroidDream variants contain only the rooting and self-alarm event methods among the known malicious functionalities of DroidDream. One possible explanation is that these malwares share only a small part of their code with DroidDream, such as the popular rooting exploit rageagainstthecage.
The detection system is also effective at detecting unknown malware that has not yet been analyzed. Because several malicious actions and their code are commonly reused across different kinds of malware, the trained signatures can detect functionally similar unknown malwares. Table 1 shows the detection results for one hundred non-labeled Android malwares using the four signature sets and a 25% similarity threshold. As a result, 56% of the malwares are detected as variants of the four known families, meaning that they contain the same names, strings, or methods whose definitions are identical to those of the four families. The remaining 44% contain new methods and classes that are not included in any of the four signatures. The unknown malware detection rate could be increased to a practical level if the knowledge base contained sufficiently many and varied signature sets.
Time Efficiency Our malware detection system has an advantage in time efficiency. A variant of a known malware whose behavioral signature is in the database is detected as such before any detailed and heavy inspection, such as source-level analysis or run-time testing in a sandbox. In our experimental implementation, the behavioral signature matching uses a hash-tree-based matching algorithm that takes constant time [START_REF]Microsoft: Atl collection classes[END_REF]. As illustrated in Figure 4, the number of signatures has little effect on time consumption; consequently, the mechanism's time consumption increases linearly with the number of target applications. This means that this front-line variant detection mechanism greatly reduces the number of target applications that must be analyzed further, at a reasonable time overhead.
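The constant-time property can be approximated with an ordinary hash set over masked, fixed-length bytecode windows; the sketch below (ours, not the paper's hash-tree implementation) illustrates why the per-window cost does not depend on how many signature patterns are indexed.

```python
def masked_windows(code, n, mask):
    """Yield fixed-length windows of `code` with the variable bytes zeroed out."""
    for i in range(len(code) - n + 1):
        yield bytes(b & m for b, m in zip(code[i:i + n], mask))

def build_index(masked_patterns):
    # Patterns are assumed to be stored already masked to length n.
    return set(masked_patterns)

def count_hits(dex, index, n, mask):
    # Each membership test is O(1) on average, so the scan stays linear in the
    # DEX size no matter how many signature patterns are indexed.
    return sum(1 for w in masked_windows(dex, n, mask) if w in index)

index = build_index([bytes([0x6e, 0x20])])
print(count_hits(bytes([0x6e, 0x25, 0x6e, 0x2a]), index, 2, bytes([0xff, 0xf0])))  # -> 2
```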
Effect of White Signature and Redundant Removal
The proposed system reduces false detections by adopting white signatures. By giving negative points to applications that contain anti-malicious methods and classes, the detection system avoids falsely detecting critical but safe applications.
DEX bytecode patterns and strings that appear in both malicious and legitimate applications are considered redundant; the redundant patterns are removed from the signatures and ignored during detection. Bytecode patterns and strings that appear only in legitimate applications and are representative of them are considered a white signature. In contrast to redundant patterns, white signatures are rarely obtained, because a white signature must occur only in authenticated legitimate applications and not be shared with any malware, even though a repackaged malware also carries legitimate code in its DEX file. To evaluate the effect of the white signature and redundant removal, we performed static variant detection on the 1,680 legitimate Android applications using the signature sets from the variant detection experiment. The right side of Figure 3 shows the effect of adopting white signatures on false positive reduction: across all ranges of false positive occurrence, the rates are lower when the white signatures are applied.
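A minimal sketch of the white-signature adjustment is given below; the penalty value is our own illustrative assumption, since the paper only states that white-signature matches contribute negative points.

```python
def adjusted_similarity(raw_similarity, dex_patterns, white_patterns, penalty=0.05):
    """Subtract a fixed penalty for every white-signature pattern found in the DEX."""
    hits = len(set(dex_patterns) & set(white_patterns))
    return max(0.0, raw_similarity - penalty * hits)

# A repackaged-looking app whose DEX also carries two known-benign patterns:
print(adjusted_similarity(0.32, {b"\x10", b"\x20", b"\x30"}, {b"\x20", b"\x30"}))
# -> approximately 0.22
```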
For evaluating the effect of white signature and redundant removal, we performed the static variant detection to 1,680 of legitimate Android applications using the signature sets which are used at the variant detection experiment. The right side of Figure 3 shows the effect of adopting white signatures in false positive reduction. Among all the ranges of false positive occurrence, the rates are In the comparison with the static analysis study of Schmidt et al. [START_REF] Schmidt | Static analysis of executables for collaborative malware detection on android[END_REF] on classifying Android executables, our approach shows better performance on the correctly classified instances rate keeping non-false positive rate. In comparison with the study of Shabtai et al. [START_REF] Shabtai | Automated static code analysis for classifying android applications using machine learning[END_REF] applying machine learning using hundreds of features extracted from DEX and XML, our classifying result shows better performance than the accuracy of their best configuration.
Related Work
Previous work on Android malware has mainly focused on the behavior and trainable features of source code and executable files. Studies taking a machine learning approach [START_REF] Shabtai | Automated static code analysis for classifying android applications using machine learning[END_REF][START_REF] Burguera | Crowdroid: behavior-based malware detection system for android[END_REF] attempt to distinguish malware from legitimate applications using characteristic features of malware. Such classification approaches are robust to small changes in malware variants. However, malware behavior and capability differ considerably across families, and the results of these works rarely provide information or evidence for detection. Furthermore, the chosen baseline of legitimate applications significantly affects their function and API call statistics. In contrast, our approach classifies malware into individual malware families and gives detailed information about their behavior.
Behavior analysis approaches can be classified into static approaches [START_REF] Kwon | Bingraph: Discovering mutant malware using hierarchical semantic signatures[END_REF][START_REF] Schmidt | Static analysis of executables for collaborative malware detection on android[END_REF][START_REF] Lee | Detecting metamorphic malwares using code graphs[END_REF] and dynamic approaches [START_REF] Enck | Taintdroid: an information-flow tracking system for realtime privacy monitoring on smartphones[END_REF][START_REF] Gilbert | Vision: automated security validation of mobile apps at app markets[END_REF]. Static approaches, not limited to behavior-based ones, are lightweight and scalable, but their accuracy is limited because it is hard to track exact behavior even when the target application can be decompiled. In contrast, dynamic analysis approaches using taint analysis and API monitoring can track behavior accurately at run time, but they suffer from efficiency problems because of the time and resources they require, including a virtual environment and test execution. In terms of efficiency, our work takes a matching approach that is as fast as static analysis and even more efficient against large numbers of variants thanks to the behavioral signature, and the proposed behavioral signature proves pre-investigated behaviors with exact evidence.
Conclusion
In this paper, we proposed a scalable and accurate combined approach to Android malware detection that overcomes the trade-off between efficiency and accuracy. It avoids the efficiency problem of dynamic analysis, which stems from the virtual environment and test execution, by adopting a faster and lighter signature-based static analysis, and it solves the accuracy problem of static analysis, caused by a lack of robustness, by using a behavioral signature. The proposed behavioral signature, the runtime semantic signature, adds binary patterns for entity names and for control and data flow instructions to the legacy API calls for malicious activities. We showed experimentally that the runtime semantic signature improves accuracy compared with previous static approaches. In addition, the static analysis for malware variant detection has practical time consumption, needing only tens of seconds to investigate a thousand targets, and the time consumption increases linearly with the number of targets. In conclusion, the proposed system has sufficient investigation performance to respond to the rapidly growing number of Android applications, and it therefore helps protect users from information leakage and economic damage caused by malware on their smartphones.
Fig. 1. Overview of the proposed variant detection mechanism.
Fig. 2. An example of a malware family signature, DroidDream.
Fig. 3. Average recall (left) and false positive rate (right) in the variant detection experiment.
Fig. 4. Time consumption for detection as the number of target applications and signatures.
Table 1. Investigation result for non-labeled malwares.
| DroidDream | Geinimi | KMIN | PjApps | Total |
| 4% | 23% | 29% | 0% | 56% |
Table 2. Detection Performance Comparison.
| Method | Accuracy | Recall | Precision | F-measure |
| Androguard | 93.04% | 49.58% | 99.16% | 66.11% |
| DroidMat | 97.87% | 87.39% | 96.74% | 91.83% |
| Proposed | 99.89% | 97.73% | 99.74% | 98.73% |
Acknowledgment
This research was supported by the public welfare & safety research program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology (2012M3A2A1051118 2012051118) and the KCC(Korea Communications Commission), Korea, under the R&D program supervised by the KCA(Korea Communications Agency)(KCA-2012-12-911-01-111). | 34,108 | [
"1001127",
"1001128",
"989242"
] | [
"466797",
"466797",
"466797"
] |
01463842 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01463842/file/978-3-642-39218-4_30_Chapter.pdf | Gurpreet Dhillon
email: [email protected]
Romilla Chowdhuri
Filipe De Sá-Soares
Secure Outsourcing: an investigation of the fit between clients and providers
Keywords: Secure outsourcing, congruence, client vendor fit, Delphi study
In this paper we present an analysis of top security issues related to IT outsourcing. Identification of top issues is important since there is a limited understanding of security in outsourcing relationships. Such an analysis will help decision makers in appropriate strategic planning for secure outsourcing. Our analysis is conducted through a two-phase approach. First, a Delphi study is undertaken to identify the top issues. Second, an intensive study of phase one results is undertaken to better understand the reasons for the different perceptions.
Introduction
Information security is a significant sticking point in establishing a relationship between Information Technology (IT) outsourcing vendors and clients. While statistics related to outsourcing risks and failures abound, there has been limited emphasis on understanding the information security related reasons for outsourcing problems. We believe that many of the problems stem from a lack of fit between what IT outsourcing vendors consider to be the key success factors and what outsourcing clients perceive to be critical for the success of the relationship. It is important to undertake such an investigation for two primary reasons. First, a majority of IT outsourcing projects fail because of a lack of appreciation of what matters to the clients and the vendors [START_REF] Barthelemy | The hidden costs of IT outsourcing[END_REF], [START_REF] Kaiser | Evolution of offshore software development: From outsourcing to cosourcing[END_REF]. Second, several IT outsourcing projects fall victim to security breaches because of a range of issues, including broken processes and failure to appreciate client requirements [START_REF] Earl | The risks of outsourcing IT[END_REF], among others. If strategic alignment between IT outsourcing vendors and clients is maintained, many of the security challenges could be overcome.
A first step in ensuring a strategic fit with respect to information security is to identify what is important to the vendors and the clients respectively. In this paper we undertake an extensive Delphi study to identify information security issues relevant to both vendors and clients, followed by an intensive analysis of the issues through in-depth interviews with several client and vendor firms.
2 Informing Literature
In recent years there have been several security breaches where the privacy and confidentiality of data have been compromised, largely because there was a lack of control over the remote sites. In 2011 an Irish hospital reported a breach of patient information related to transcription services in the Philippines. Recently a US Government Accountability Office survey reported that at least 40 percent of federal contractors and state Medicare agencies experienced a privacy breach (see GAO-06-676) 1 . While it is mandatory for the contractor to report breaches, there is limited oversight. Given these challenges, many corporations have begun implementing a range of technical controls to ensure the security of their own infrastructures rather than rely on the vendors.
In addressing the security challenges in outsourcing relationships, or for that matter any kind of risk, management of the client-vendor relationship has been argued to be important. Earlier studies on outsourcing have mainly discussed different phases of client-vendor relationships and the relevant issues in each of the phases [START_REF] Dibbern | Information systems outsourcing: a survey and analysis of the literature[END_REF]. For example, Relationship Structuring involves issues deemed important when the outsourcing contract is being prepared, Relationship Building involves issues that contribute to the strengthening of the relationship between client and vendor, and Relationship Management involves issues that are relevant to driving the relationship in the right direction. Another study lists 25 independent variables that can impact the relationship between outsourcing client and vendor [START_REF] Lacity | A review of the IT outsourcing empirical literature and future research directions[END_REF]. The most cited factors include effective knowledge sharing, cultural distance, trust, prior relationship status, and communication.
Studies related to secure outsourcing have been few and far between. In the majority of cases the emphasis has been on contractual aspects of the relationship between the client and the vendor, and many researchers have called for clarity in contracts as well as selective outsourcing [START_REF] Lacity | An Empirical Investigation of Information Technology Sourcing Practices: Lessons from Experience[END_REF]. Managing the IT function as a value center [START_REF] Venkatraman | Beyond outsourcing: managing IT resources as a value center[END_REF] has also been proposed as a way of ensuring the success of outsourcing arrangements. There is no doubt that prior research has made a significant contribution to the manner in which advantages can be achieved from outsourcing relationships; however, it has contributed little with respect to the management of security and privacy.
Internet security has been considered one of the technological risks [START_REF] Kern | Application service provision: Risk assessment and mitigation[END_REF], with data confidentiality, integrity, and availability as the topmost concerns in an outsourcing arrangement [START_REF] Khalfan | Information security considerations in IS/IT outsourcing projects: a descriptive case study of two sectors[END_REF]. While a few surveys report computer networks, regulations, and personnel as the highest security threats to organizations [START_REF] Chang | On security preparations against possible IS threats across industries[END_REF], others recognize that not only technical but also non-technical threats can be detrimental to an engagement [START_REF] Chang | On security preparations against possible IS threats across industries[END_REF], [START_REF] Dlamini | Information security: The moving target[END_REF], and [START_REF] Pai | Offshore technology outsourcing: overview of management and legal issues[END_REF]. However, most of the work cited under the domain of IS outsourcing risks is generic and has a very limited focus on security [START_REF] Earl | The risks of outsourcing IT[END_REF], and [START_REF] Sakthivel | Managing risk in offshore systems development[END_REF]. Several researchers have provided frameworks to identify organizational assets at risk and to use financial metrics to determine the priority of assets that need protection [START_REF] Bojanc | An economic modelling approach to information security risk management[END_REF], and [START_REF] Osei-Bryson | Managing risks in information systems outsourcing: an approach to analyzing outsourcing risks and structuring incentive contracts[END_REF]. Research on security threats prevalent in an outsourcing or offshore environment and on risk management models has also been undertaken [START_REF] Colwill | Creating an effective security risk model for outsourcing decisions[END_REF]. The political, cultural, and legal differences between the supplier and provider environments are considered to make the environment less favorable for operators. A multi-layer security model to mitigate the security risks in outsourcing domains, at both the technical and non-technical levels, is presented by Doomun [START_REF] Doomun | Multi-level information system security in outsourcing domain[END_REF], where eleven steps in an outsourcing arrangement are divided across three layers of security: identification, monitoring and improvement, and measurement.
Wei and Blake [START_REF] Wei | Service-oriented computing and cloud computing: Challenges and opportunities[END_REF] provide a comprehensive list of information security risk factors and corresponding safeguards for IT offshore outsourcing. More recently, Nassimbeni et al. [START_REF] Nassimbeni | Security risks in service offshoring and outsourcing[END_REF] categorized the security risks into three phases: strategic planning, supplier selection and contracting, and implementation and monitoring. In both studies the issues have mainly been borrowed from the existing literature. Some researchers have also classified the risks as external versus internal threats to an organization, and as human versus non-human risks. Non-technical concerns such as employees, regulations, and trust have emerged as more severe than technological risks [START_REF] Loch | Threats to information systems: today's reality, yesterday's understanding[END_REF], [START_REF] Posthumus | A framework for the governance of information security[END_REF], [START_REF] Tickle | Data integrity assurance in a layered security strategy[END_REF], and [START_REF] Tran | Security of personal data across national borders[END_REF]. Only a few studies are concerned with a specific type of security concern, such as policies [START_REF] Fulford | The application of information security policies in large UK-based organizations: an exploratory investigation[END_REF].
While the prevalent IT outsourcing research has certainly helped in better understanding client-vendor relationships, an aspect that has largely remained unexplored is that of organizational fit. In the IT strategy domain, organizational fit has been explored in terms of the alignment between IT strategy and business strategy [START_REF] Henderson | Strategic alignment: leveraging information technology for transforming organisations[END_REF]. In the strategy literature it has been studied in terms of the fit between an organization's structure and its strategy. Even though Livari [START_REF] Livari | The organizational fit of information systems[END_REF] made a call for understanding the organizational fit of information systems with the environment, little progress has been made to date.
With respect to IT outsourcing, the notion of the fit between a client and a vendor has also not been well studied. It is suggested that fit can be understood through the elements of congruence theory, which explains the interactions among organizational environment, values, structure, process, and reaction-adjustment [START_REF] Nightingale | Toward a multilevel congruence theory of organization[END_REF]. Based on congruence theory, an outsourcing environment can be defined as the existence of any condition, such as culture, regulations, provider/supplier capabilities, security, and competence, that can determine the success of an outsourcing arrangement. Organizational values determine acceptable and unacceptable behavior; in this respect, factors such as trust, transparency, and ethics fall under the value system of an organization. The structure of an outsourcing arrangement defines factors such as the reporting hierarchy, ownership, and processes for communication. Additionally, reaction-adjustments are required, which entail the feedback and outcomes of an engagement and the related modification of strategy in response to the reactions of clients, for a better strategic fit and alliance between outsourcing clients and vendors.
Clearly the existing literature on the identification and mitigation of security risks is rich. The security risks at the technical, human, and regulatory levels are well identified, and many studies highlight that nontechnical risks are more severe than technical ones. However, the literature is short of two perspectives: first, a gap analysis of how outsourcing clients and outsourcing vendors perceive the security risks; second, the existing literature does not discuss much about the congruence among different concepts in an outsourcing arrangement, particularly in the security domain. Hence, to determine a fit between vendors and clients, we need to understand what security issues are important to each of them and then establish a basis for their congruence.
Research Methodology
Given that the purpose of this study was to identify security concerns amongst outsourcing clients and vendors, a two-phased approach was adopted. In the first instance, a Delphi study was undertaken, which helped us identify the major security issues as perceived by the clients and the vendors. In the second phase, an in-depth analysis of clients and vendors was undertaken, which helped us understand the reasons for significant differences in their perceptions.
Phase 1 -A Delphi Study
To ensure a reliable and validated list of issues of concern to organizations, from both the client and vendor perspectives, a process for eliciting the divergent opinions of different experts was provisioned. A ranking method based on Schmidt's Delphi methodology, designed to elicit the opinions of a panel of experts through controlled inquiry and feedback, was employed [START_REF] Schmidt | Managing Delphi surveys using nonparametric statistical techniques[END_REF]. The Delphi study allowed the factors to converge to those that really matter in secure outsourcing.
Panel Demographics
To account for the varying experiences and roles of experts, both outsourcing vendors (providers) and outsourcing clients (suppliers) were chosen as the target panelists. A total of 11 panelists were drawn from a pool of 21 prospective participants. We identified senior IS executives from major corporations and asked them to identify the most useful and experienced people to participate in the survey. The participants were divided into two groups, Outsourcing Providers (5) and Outsourcing Suppliers (6). The panelists had impressive and varied experience in IT outsourcing and management. The number of panelists satisfies the requirement of eliciting diverse opinions while preventing the panelists from being intimidated by the volume of feedback [START_REF] Schmidt | Managing Delphi surveys using nonparametric statistical techniques[END_REF]. Moreover, the comparative size of the two panels is irrelevant since it does not have any impact on the response analysis. For detecting statistically significant results, the group size depends on the group dynamics rather than the number of participants; therefore, 10 to 11 experts is a good sample size [START_REF] Okoli | The Delphi method as a research tool: an example, design considerations and applications[END_REF].
Data Collection
The data collection phase is informed by Schmidt's method, which divides the study into three major phases [START_REF] Schmidt | Managing Delphi surveys using nonparametric statistical techniques[END_REF]. The first round, a brainstorming or blank sheet round, was conducted to elicit as many issues as possible from each panelist; each participant was asked to provide at least 6 issues along with a short description. The authors collated the issues by removing duplicates. The combined list was sent to the panelists with an explanation of why certain items were removed, and the panelists were further asked for their opinion on the integrity and uniformity of the list. In the second round we asked each panelist to pare down the list to the most important issues. A total of 26 issues were identified, which were sent to the panelists for further evaluation, addition, deletion, and/or verification; this ensured that a common set of issues was provided for the panelists to rank in subsequent rounds. Ranking of the final 26 issues was done in phase 3. During this phase each panelist was required to rank the issues in order of importance, with 1 being the most important security issue in outsourcing and 26 the least important. The panelists were restricted from assigning ties between two or more issues.
Multiple ranking rounds were conducted until a consensus was achieved. To avoid bias, a randomly ordered set of issues was sent to each panelist in the first ranking round; for the subsequent rounds, the lists were ordered by average ranks. In this study we used Kendall's Coefficient of Concordance W to evaluate the level of agreement among respondents' opinions in a given round. According to Schmidt [START_REF] Schmidt | Managing Delphi surveys using nonparametric statistical techniques[END_REF], W can range between 0.1 (very weak agreement) and 0.9 (unusually strong agreement). Moreover, Spearman's Rank Correlation Coefficient rho is used to evaluate the stability of the panel's opinion between two successive rounds and between two different groups of respondents in a given round; the value of rho can range between -1 (perfect negative correlation) and 1 (perfect positive correlation). Subsequent ranking rounds were stopped either when Kendall's Coefficient of Concordance W indicated a strong consensus (>0.7) or when the level of consensus leveled off in two successive rounds.
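For illustration, the following minimal Python sketch shows how these two statistics could be computed for a judges-by-issues rank matrix; the toy data is invented and the Kendall's W formula assumes no tied ranks.

```python
import numpy as np
from scipy.stats import spearmanr

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for a (judges x issues) rank matrix,
    assuming each judge assigns the ranks 1..n with no ties."""
    k, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (k ** 2 * (n ** 3 - n))

# Toy example: 3 panelists ranking 4 issues (1 = most important).
ranks = np.array([[1, 2, 3, 4],
                  [2, 1, 3, 4],
                  [1, 3, 2, 4]])
print(kendalls_w(ranks))                          # agreement within one round
rho, p = spearmanr([1, 2, 3, 4], [2, 1, 3, 4])    # stability between two rank vectors
print(rho, p)
```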
At the end of every ranking round, five important pieces of feedback were sent to the panelists: (1) the mean rank for each issue; (2) the level of agreement in terms of Kendall's W; (3) the Spearman correlation rho; (4) the p-value; and (5) relevant comments by the panelists.
Data Analysis
The analysis of the results was performed in two parts. First, an aggregated Delphi analysis treats all respondents as a global panel and presents the unified ranking results. Second, a partitioned Delphi analysis presents the ranking results by respondent group, i.e., outsourcing providers and outsourcing clients.
Phase 2 -Probing for Congruence
The second round of data collection was based on two workshops with representatives from Fortune 500 companies. There were 11 individuals with an average of 8 years of work experience who participated in these workshops. The workshops were conducted from May 2012 to July 2012. In the first workshop, each participant was required to answer three questions for all 26 issues. Suitable probes were added following each question. This helped in developing a rich insight. The probes were:
1. What do you think about the issue? 2. Why do you think it is important for the outsourcing provider? 3. Why do you think it is important for the outsourcing client?
The second workshop focused on achieving congruence between outsourcing clients and outsourcing providers. The different ranks assigned by clients and vendors to particular issues were highlighted, and the participants were asked to answer two questions to elicit their opinions on the gaps identified in the rankings sought by clients and providers: (1) Explain what you think is the reason for outsourcing clients and outsourcing providers assigning different ranks. (2) Explain what can be done to resolve the difference in order to seek a common ground of understanding between clients and providers.

For phase one, the results were analyzed from a global (aggregated) view and a partitioned (client vs. vendor) view. The global panel reached a weak consensus by the third ranking round (see Table 1). In the partitioned view, by the third ranking round clients had fair agreement whereas vendors had very weak agreement. Moreover, a weak positive correlation exists between round 2 and round 3 in the global ranking, as well as between clients and providers by round 3 (see Table 2). The weak consensus in the global ranking clearly suggests that outsourcing clients and outsourcing vendors have a conflict of interest, and the weak consensus within vendors indicates that not all vendors perceive the importance of security at the same level. Finally, the difference between the ranks assigned to each issue by clients and vendors further highlights the conflict of interest between the two. Table 3 presents a comparison of the ranks from the client and vendor perspectives and shows a significant divide between the two groups. The issues are sorted compositely; however, given the significant difference for most of the issues, the composite rank is irrelevant. For this paper we assume a difference of more than three between the ranks assigned by clients and vendors to be significant. Thereby, a total of 16 issues out of 26 show a significant difference between the rankings of the two groups.
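As a concrete reading of this screening rule, a minimal sketch is shown below; the issue labels are shortened paraphrases of three rows of Table 3, and only the two rank columns are needed.

```python
# Screening rule: flag an issue when the client-vendor rank gap exceeds three.
client_rank = {"Trust in vendor controls": 1,
               "Diversity of jurisdictions and laws": 4,
               "Vendor staff quality": 18}
vendor_rank = {"Trust in vendor controls": 20,
               "Diversity of jurisdictions and laws": 17,
               "Vendor staff quality": 6}

significant = {issue for issue in client_rank
               if abs(client_rank[issue] - vendor_rank[issue]) > 3}
print(significant)   # all three sample issues show a significant gap
```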
Reviewing Congruence Amongst Issues
It is interesting to note that there is a significant difference between the client and vendor perspectives on the top secure outsourcing issues. In this section we explore these issues further to develop a better understanding. In terms of managing the security of outsourcing, it makes sense to develop a fit between what the clients and the vendors consider important. Two issues that seem to be of significant concern for both the clients and the vendors are the diversity of laws and the legal and judicial framework of the vendor's environment. Both these concerns are indeed noteworthy. Our discussions with the CIO of a major bank in the US, which has outsourced a significant amount of IT services to India, suggest that jurisdictional issues are a major concern. The CIO noted: I can say with absolute certainty that our outsourcing experience has been very positive. We found significantly high level of competence in our vendor. However there are constant challenges of dealing with the regulatory environment. Laws in the US are rather strict in terms of disclosure and we feel that to be an impediment to getting our work done.
The literature has reported similar concerns, albeit with respect to mainstream outsourcing issues rather than security. It has been argued that there are issues of conformance and contractual violations, which can have a detrimental impact on outsourcing relationships [START_REF] Pai | Offshore technology outsourcing: overview of management and legal issues[END_REF]. It is interesting to note though that both issues 10 and 14 rank higher amongst the clients than the vendors. It seems that regulatory compliance and the prevalence of a judicial framework are more of a concern to the outsourcing clients than to the vendors. Another IT manager in our study commented:
Increased transparency regarding the laws governing the vendor may mitigate the risk for the client. However, the burden is on the vendor to reassure the client that the risk is minimal. Therefore the vendor should be supplying as much information to reassure the client that they are working under the same legal context and that their legal agreements are mutually beneficial.

In the literature several calls have been made that suggest clarity of legal and regulatory frameworks (e.g. [START_REF] Raghu | Cyber-security policies and legal frameworks governing Business Process and IT Outsourcing arrangements[END_REF]). Beyond clarity, however, there is a need to work on aligning the legal and regulatory frameworks at a national level. Country-specific institutions will play a critical role in ensuring such alignment (e.g. NASSCOM in India). To better mitigate the risks and to ensure that the interests of both parties are secure, increased transparency in the legal structure is required. The burden lies on the provider, though. Therefore the vendor should make available as much information as is needed to reassure the client that they are working under the same legal context and that their legal agreements are mutually beneficial. As a principle we therefore propose:
Principle 1 -Reducing the diversity of laws and ensuring congruence of legislative controls ensure security in outsourcing.
Another issue, the dissipation of the outsourcing vendor's knowledge, emerged as significant. While this issue seems more critical for the vendors, there are some significant implications for client firms as well. Vendors believe that, because of the untoward need to comply with the whims and fancies of their clients, there is usually a dissipation of knowledge over a period of time. One of the members of our intensive study was the country head of a large Indian outsourcing vendor. When asked to comment on this issue, he said:
The outsourcing industry has a serious problem. While we have our own business processes, we usually have to recreate or reconfigure them based on our client needs and wants. We are usually rather happy to do so. However in the process we lose our tacit knowledge. From our perspective it is important to ensure protection of this knowledge. Many of our security and privacy concerns would be managed if we get a little better in knowledge management.
Perhaps Willcocks et al [START_REF] Willcocks | IT and business process outsourcing: The knowledge potential[END_REF] are among the few researchers who have studied the importance of protection of intellectual property. Most of the emphasis, however, has been on protecting against the loss of intellectual property - largely that of the client firm. Management of knowledge to protect tacit knowledge has also been studied in the literature (e.g. see [START_REF] Arora | Contracting for tacit knowledge: the provision of technical services in technology licensing contracts[END_REF], [START_REF] Norman | Protecting knowledge in strategic alliances: Resource and relational characteristics[END_REF]), though rarely in connection with outsourcing.
It goes without saying that poor knowledge management structures will diminish the prospects of procuring new contracts. In comparison, the clients seem either to assume that the provider has a sustainable structure that prevents or minimizes the loss of intellectual capital and ensures confidentiality, or to be ready to bear the risk for the perceived potential benefits. Clients expect skilled resources as a contractual requirement. As the risk for clients is minimal, they rank this as less important than the vendor does. The existing literature mentions that for the better management of expectations, both clients and suppliers need to understand the utility of knowledge management, the implications of loss and the structural requirements [START_REF] Willcocks | IT and business process outsourcing: The knowledge potential[END_REF]. This is also reflected in the comments of one security assurance manager: Suppliers need to minimize staff turnover and find ways to ensure staff retention and knowledge sharing. There are many methods to achieve this; such as better wages, benefits, flex time, encouragement, knowledge repositories, education opportunities, etc. They should pair veteran staff members with new staff members to improve their understanding of confidentiality, integrity and availability.
As a principle we therefore propose: Principle 2 - Tacit knowledge management, and ensuring the integrity of vendor business processes, is a pre-requisite for good and secure outsourcing.
Our research also found the information security competency of the outsourcing vendor to be a significant issue. Many scholars have commented on the importance of vendor competence [START_REF] Goles | The Impact of Client-Vendor Relationship on Outsourcing Success[END_REF], [START_REF] Levina | From the vendor's perspective: exploring the value proposition in information technology outsourcing[END_REF], [START_REF] Willcocks | Relationships in IT Outsourcing: A Stakholder Perspective[END_REF]. It is argued that a value-based outsourcing outcome should be generated and transferred from the vendor to the client [START_REF] Levina | From the vendor's perspective: exploring the value proposition in information technology outsourcing[END_REF]. However, as our study indicates, clients and vendors differ in their opinions on what is most important when selecting and promoting outsourcing security services.
While vendors often believe that a large list of certifications, awards, and a large clientele is important for proving their competency, the client's perspective is geared towards the application and utilization of supplier competency. One of the IT managers from a bank noted:
The vendor is expected to be competent in their area of expertise, so the client needs to make clear to the vendor that a basic expectation should not be at the top of their list as there are more important factors that will be used to differentiate the vendors from one another.
As is rightly pointed out by the IT Manager, the issue with managing competence is not to present a baseline of what the vendor knows (i.e. the skill set), but a demonstration of the know-that (see [START_REF] Dhillon | Organizational competence in harnessing IT: a case study[END_REF]). Assessment of competence is outwardly driven and hence a presentation of some sort of maturity in security management is essential (e.g. ISO 21827). As a principle we propose:
Principle 3 -A competence in ensuring secure outsourcing is to develop an ability to define individual know-how and know-that.
A process is a formalized sequence of actions guided "informally" by the organization's structure and its value system. There is enough evidence in the literature about the impact of process standardization on outsourcing success [START_REF] Wüllenweber | The impact of process standardization on business process outsourcing success[END_REF]. However, the variation in the ranks of one of the issues identified - the ability of the outsourcing vendor to comply with the client's security policies, standards and processes - is a cause of concern. The issue here is indicative of the need to facilitate the communication and coordination required for the alignment of the policies, standards and processes guiding information security in an outsourcing engagement. Clients certainly place high importance on their own policies and processes, giving this issue a higher rank. Meanwhile, providers view their policies, procedures, and standards as being best-in-class. Clearly the vendors seem to be ignorant of the fact that having a process framework that is not customizable to the individual requirements of different clients can be a potential hindrance. As one of the clients notes:
It is great that a company can claim they are competent in providing outsourced information security but it means nothing to the client unless the client perceives their specific policies as being effectively applied by the provider.
To eliminate the gap, processes and policies need to be comprehensive enough, and the contracts need to emphasize the implications of non-compliance. For the sake of a continued alliance, the responsibility lies more on the vendor to ensure process compliance and governance. Another manager from a client organization commented:
Clients are usually outsourcing to relieve their workload and performing a comprehensive analysis is viewed as adding to the existing workload they are trying to relieve. The more a potential supplier is willing to be an active partner and point out the pros and cons of their own proposals as well as the others, the smaller the gap will be.
As a principle, we propose: Principle 4 -Establishing congruence between client and vendor security policies ensures protection of information resources and a good working arrangement between the client and the vendor.
If leveraging the core competency of suppliers is the main motive to outsource security operations, the lower ranking by clients for the issue - audit of outsourced information security operations - is justified. Clients expect the competency of the outsourcing vendor to be in place. However, clients also seem to lack consensus on the need for continued monitoring and governance procedures. Auditing is one of the means for the client to verify whether the vendor is adhering to the security policies. Vendors, by virtue of providing a higher rank in comparison to clients, appear to be aware of the importance of proving continued compliance with agreements. Providing audited or auditable information relating to the client's data and processes is a must for establishing trust. Much of the research in IS outsourcing has focused on different dimensions of governance procedures, including contractual and non-contractual mechanisms of trust building [START_REF] Miranda | Moments of governance in IS outsourcing: conceptualizing effects of contracts on value capture and creation[END_REF]. Auditing and third-party assurance, which lead to increased trust (see issues 4 and 9 in our study), typically do not seem to be touched upon.
A related issue (and also connected to Principle 4 above) is that of a competence audit. Any audit of vendor operations must include several aspects, including overall competence in information security (issue 15 in our study) and the quality of vendor staff (issue 13). Our research subjects reported several instances where there was a general loss of competence over a period of time. This usually occurs either when the vendor organization gets too entrenched with one client and hence overlooks the needs of the others, or when internal processes are patched and reconfigured in a reactive manner to ensure compliance with the expectations of a given client (refer to issue 5 in our study). One Chief Information Security Officer from a healthcare organization commented:
There seems to be this half-life of a security competence. I have seen that after a contract has been signed, there is a somewhat exponential decay in quality.
In the literature there is some mention of such decay in quality, although not directly in relation to outsourcing (e.g. see [START_REF] Sterman | Unanticipated side effects of successful quality programs: Exploring a paradox of organizational improvement[END_REF]). It has been found that many quality improvement initiatives can interact with prevailing systems and routines to undercut commitment to continuous improvement. While our research does not suggest this to be the case in terms of secure outsourcing, the difference in opinions between the clients and the vendors seems indicative. As a principle we therefore propose:
Principle 5 -An internal audit of both client and vendor operations is critical to understand current weaknesses and potential problems there might be with respect to information security structures, procedures and capabilities.
Based on our research, two major constructs seem to emerge -strategic context of secure outsourcing and organizational capability in outsourcing (Fig. 1). The strategic context is defined by legal/regulatory congruence and security policy alignment. In our research organizational capability is a function of knowledge management, competence and audit. Combined together, our constructs define security congruence. The level of congruence however can only be assessed through outcome measures (e.g. secure outsourcing). Such outcome measures could include reduced incidents of security breaches, high ranks from external vetting organizations etc.
Fig. 1. Modeling security congruence
A central theme in the organization strategy literature is that of "fit". Findings from our research seem to be in resonance with that body of work. For instance, and as noted previously, Nightingale and Toulouse [START_REF] Nightingale | Toward a multilevel congruence theory of organization[END_REF] comment that the mutual interaction amongst values, structure, process, reaction-adjustment and environment leads to the congruent organization.
In the context of the security of information resources, the need to develop a fit between outsourcing partners seems appropriate. Significant variations in the rankings on the part of vendors raise some doubts: whether they value the sensitivity of client data; whether they ensure adequate protection of the assets; and whether the vendor is aware of the vulnerabilities in their processes. All these issues would also raise concern about the attitude of the client, particularly in relation to shunning responsibilities. This can indeed be a classic example of strife between the factions of affordability and availability.
In order to achieve congruence between clients and vendors, the discussion so far leads to the emergence of one main theme - managing expectations. In the purview of congruence theory this requires the elimination of gaps between the two parties and eventually aligning the two organizations (in our case, around strategy and capability as per Fig. 1). Fig. 1 provides a conceptual design of such an aligned organization. For better management of expectations, the client and vendor organizations need to communicate and coordinate their respective operations.
Both organizations align along the required dimensions and, in effect, over time the two organizations involved in an outsourcing contract appear to be one "virtual" organization, which has just one goal - delivering services in a secure manner (i.e. secure outsourcing). As long as a gap exists in processes, structure or values between the two organizations, the alignment is questionable. The time taken by the two organizations to align - the alignment latency - would be a critical success factor of a secure outsourcing engagement.
Conclusion
In this paper we have presented an in-depth study of secure outsourcing. We argued that while several scholars have studied the relative success and failure of IT outsourcing, the emergent security issues have not been addressed adequately. Considering this gap in the literature, we conducted a Delphi study to identify the top security outsourcing issues from both the clients' and the vendors' perspectives. Finally, we engaged in an intensive study to understand why there was a significant difference in the ranking of the issues by the vendors and the clients. This in-depth understanding led us to propose five principles that organizations should adhere to in order to ensure the security of outsourcing relations. A model for security congruence is also proposed. While we believe there should be a positive correlation amongst the proposed constructs, clearly further research is necessary in this regard. Secure outsourcing is an important aspiration for organizations to pursue. There is no doubt that many businesses thrive on getting part of their operations taken care of by a vendor. It not only makes business sense to do so, but it also allows enterprises to tap into expertise that may reside elsewhere. Security then is simply a means to ensure the smooth running of the business. And defining the pertinent issues allows us to strategically plan secure outsourcing relationships.
4 Findings from the Delphi Study
Table 1 - Global Consensus

Round | W (Clients + Providers) | Rho
1 | 0.342 (p<0.001) | -
2 | 0.279 (p<0.001) | 0.568 (p<0.01)
3 | 0.102 (p<0.727) | 0.497 (p=0.01)
Table 2 - Client and Vendor Consensus

Round | Clients' W | Providers' W | Rho
1 | 0.349 (p=0.0121) | 0.522 (p<0.001) | 0.374
2 | 0.486 (p<0.001) | 0.266 (p=0.0297) | 0.479
3 | 0.569 (p=0.287) | 0.100 (p=0.94) | 0.119
Table 3 - Comparison of Client and Vendor ranks (only significant issues are presented)

Rank of the issue | Issue Description | Client Rank | Vendor Rank
2 | Comprehensiveness of information security outsourcing decision analysis | 7 | 2
3 | Information security competency of outsourcing vendor | 8 | 1
5 | Ability of outsourcing vendor to comply with client's security policies, standards and processes | 2 | 10
7 | Dissipation of outsourcing vendor's knowledge | 10 | 3
8 | Technical complexity of outsourcing client's information security operations | 13 | 5
9 | Trust that outsourcing vendor applies appropriate security controls | 1 | 20
10 | Diversity of jurisdictions and laws | 4 | 17
12 | Information security credibility of outsourcing vendor | 15 | 9
13 | Quality of outsourcing vendor's staff | 18 | 6
14 | Legal and judicial framework of outsourcing vendor's environment | 9 | 16
15 | Inability to redevelop competencies on information security | 19 | 11
17 | Audit of outsourcing vendor staffing process | 20 | 12
18 | Inability to change information security requirements | 12 | 22
20 | Transparency of outsourcing vendor billing | 14 | 24
21 | Audit of outsourced information security operations | 25 | 14
| 40,367 | [
"989216",
"989431",
"1001129"
] | [
"147329",
"147329",
"300854"
] |
01463848 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01463848/file/978-3-642-39218-4_8_Chapter.pdf | Sarah Abughazalah
email: [email protected]
Konstantinos Markantonakis
email: [email protected]
Keith Mayes
email: [email protected]
A Vulnerability in the Song Authentication Protocol for Low-Cost RFID Tags
Keywords: RFID, mutual authentication, protocol, security, privacy
In this paper, we describe a vulnerability against one of the most efficient authentication protocols for low-cost RFID tags proposed by Song. The protocol defines a weak attacker as an intruder which can manipulate the communication between a reader and tag without accessing the internal data of a tag. It has been claimed that the Song protocol is able to resist weak attacks, such as denial of service (DoS) attack; however, we found that a weak attacker is able to desynchronise a tag, which is one kind of DoS attack. Moreover, the database in the Song protocol must use a brute force search to retrieve the tag's records affecting the operational performance of the server. Finally, we propose an improved protocol which can prevent the security problems in Song protocol and enhance the server's scalability performance.
Introduction
Radio frequency identification (RFID) technology is an identification technology that uses radio waves to identify objects such as products. An RFID system consists of three components, namely a tag, reader and server (database). An RFID tag is an identification device composed of an integrated circuit and antenna. It is designed to receive a radio signal and automatically transmit a reply to the reader. A passive RFID reader is a device that broadcasts a radio frequnecy (RF) signal through its antenna to power, communicate and receive data from tags. It is connected to the server to retrieve data associated with the connected tags. An RFID server is a database containing data related to the associated tags which it manages [START_REF] Weis | Security and privacy in Radio Frequency Identification devices[END_REF].
The major concerns of designing an RFID system are privacy and security [START_REF] Avoine | Cryptography in Radio Frequency Identification and fair exchange protocols[END_REF]. Insecure communication between the reader and tag is inherently vulnerable to interception, modification, fabrication and replay attacks [START_REF] Avoine | Cryptography in Radio Frequency Identification and fair exchange protocols[END_REF]. One of the problems that is encountered in designing an RFID system is a denial of service (DoS) attack. In a desynchronisation attack, which is one kind of DoS attack, the attacker tries to prevent both parties from receiving messages. For example, the attacker can block the exchanged message(s) from reaching the target causing the tag and the server to be unable to update their information synchronously. Thus, the tag and back-end server cannot recognise each other in subsequent transactions [START_REF] Habibi | Practical attacks on a RFID authentication protocol conforming to EPC Class 1 Generation 2 standard[END_REF].
Song et al. [START_REF] Song | RFID authentication protocol for low-cost tags[END_REF] proposed an efficient RFID authentication protocol for low-cost tags. This protocol uses the hash functions, message authentication code (MAC) and PRNG functions for authentication and updating purposes. Each tag stores only the hash of a secret namely (t), and the server stores the old and new values of the secret (s new , s old ), the hashed secret (t new , t old ) and the tag's information (D). This scheme uses a challenge-response protocol, where the server and tag generate random numbers to avoid replay attacks. However, Cai et al. [START_REF] Cai | Attacks and improvements to an RIFD mutual authentication protocol and its extensions[END_REF] presented a paper showing that Song et al.'s protocol does not provide protection against a tag impersonation attack. Moreover, Rizomiliotis et al. [START_REF] Rizomiliotis | Security analysis of the Song-Mitchell authentication protocol for low-cost RFID tags[END_REF] found that an attacker can impersonate the server even without accessing the internal data of a tag and launch a DoS attack.
As a result, a new version has been proposed in [START_REF] Song | RFID Authentication Protocols using Symmetric Cryptography[END_REF] (referred to here as the Song protocol). The Song protocol uses the same data and processes except that the construction of the exchanged message (M2 and M3) has been changed. In the new version of the Song protocol, Song claim that the proposed protocol resists DoS attack by storing the old and new values of the secret and the hashed secret, thus when the attacker blocks the transmitted message, the server still can use the recent old values to resynchronise with the tag.
In this paper, we focus on examining the new version of the Song protocol [START_REF] Song | RFID Authentication Protocols using Symmetric Cryptography[END_REF]. We discover that an attacker is able to desynchronise a tag without even compromising the internal data stored in the tag. Furthermore, this protocol is not scalable, as the server needs to perform a brute force search to retrieve the tag's records, which in turn affects the server performance, especially if it has to handle a large population of tags. After analysing the weaknesses of this protocol, we propose a revised protocol to eliminate these attacks with comparable computational requirements.
The rest of this paper is organised as follows: in Section 2, we present the Song protocol process in detail. In Section 3, the weaknesses of the Song protocol are illustrated. In Section 4, the revised protocol is presented. In Section 5, we analyse the proposed protocols with respect to informal analysis. In Section 6, we conclude and summarise the paper's contribution.
Review of the Song Protocol
This section reviews the Song protocol as shown in the original protocol [START_REF] Song | RFID Authentication Protocols using Symmetric Cryptography[END_REF]. Notation used in this paper are defined as follows:
- h: A hash function, h : {0, 1}* → {0, 1}^l
- f k : A keyed hash function (a MAC algorithm), f k : {0, 1}* × {0, 1}^l → {0, 1}^l
- N: The number of tags
- l: The bit-length of a tag identifier
- T i : The i-th tag (1 ≤ i ≤ N)
- D i : The detailed information associated with tag T i
- s i : A string of l bits assigned to the i-th tag T i
- t i : T i 's identifier of l bits, which equals h(s i )
- x new : The new (refreshed) value of x
- x old : The most recent value of x
- r: A random string of l bits
- ε: Error message
- ⊕: XOR operator
- Concatenation operator
- ←: Substitution operator
- x ≫ k: Right circular shift operator, which rotates all bits of x to the right by k bits, as if the left and right ends of x were joined.
- x ≪ k: Left circular shift operator, which rotates all bits of x to the left by k bits, as if the left and right ends of x were joined.
-∈ R : The random choice operator, which randomly selects an element from a finite set using a uniform probability distribution
The Song protocol consists of two processes, the initialisation process and the authentication process, which are summarised below:
Initialisation Process
This stage only occurs during manufacturing when the manufacturer assigns the initial values in the server and tag. The initialisation process is summarised below:
-An initiator (e.g. the tag manufacturer) assigns a string s i of l bits to each tag T i , computes t i = h(s i ), and stores t i in the tag, where l should be large enough so that an exhaustive search to find the l-bit values t i and s i is computationally infeasible. -The initiator stores the entries [(s i , t i ) new , (s i , t i ) old , D i ] for every tag that it manages in the server. D i is for the tag information (e.g., price, date, etc.). Initially (s i , t i ) new is assigned the initial values of s i and t i , and (s i , t i ) old is set to null.
Authentication Process
The authentication process is shown in Table 1 as presented in the new version of the protocol [START_REF] Song | RFID Authentication Protocols using Symmetric Cryptography[END_REF]:
Table 1: The authentication process of the Song protocol 1. Reader: A reader generates a random bit-string r1 ∈ R {0, 1} l and sends it to the tag T i . 2. Tag: The tag T i generates a random bit-string r2 ∈ R {0, 1} l as a temporary secret for the session, and computes M1 = t i ⊕ r2 and M2 = f ti (r1 r2), then sends M1 and M2 to the reader. 3. Reader: The reader transmits M1, M2 and r1 to the server. 4. Server:
(a) The server searches its database using M1, M2 and r1 as follows. i. It chooses t i from amongst the values t i ( new ) or t i ( old ) stored in the database.
ii. It computes M'2 =f ti (r1 (M1 ⊕ t i )). iii. If M'2 = M2, then it has identified and authenticated T i . It then goes to step (b). Otherwise, it returns to step (i). If no match is found, the server sends ε to the reader and stops the session. (b) The server computes M3 = s i ⊕ f ti (r2 r1) and sends it with D i to the reader.
(c) The server updates:
s i ( old ) ← s i ( new ) s i ( new ) ← (s i l/4) ⊕ (t i l/4) ⊕ r1 ⊕ r2 t i ( old ) ← t i ( new ) t i ( new ) ← h(s i ( new ))
5. Reader: The reader forwards M3 to the tag T i . 6. Tag: The tag T i computes s i = M3 ⊕ f ti (r2 r1) and checks that h(s i ) = t i . If the check fails, the tag keeps the current value of t i unchanged. If the check succeeds, the tag has authenticated the server, and sets:
t i ← h((s i l/4) ⊕ (t i l/4) ⊕ r1 ⊕ r2)
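For illustration, a minimal sketch of one round of the message computations described above is given below, assuming h is instantiated with SHA-256 and f k with HMAC-SHA-256; these concrete choices are illustrative only, since the protocol merely requires a hash function and a MAC.

```python
# Illustrative trace of one Song-protocol round (h = SHA-256, f_k = HMAC-SHA-256).
import hashlib, hmac, os

L = 32  # byte length of identifiers (l = 256 bits)
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def f(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Initialisation: server stores (s, t = h(s)); tag stores t.
s = os.urandom(L)
t = h(s)

# Authentication round
r1 = os.urandom(L)                 # reader challenge
r2 = os.urandom(L)                 # tag's temporary session secret
M1 = xor(t, r2)
M2 = f(t, r1 + r2)

# Server side: recover r2 from M1, verify M2, answer with M3.
r2_server = xor(M1, t)
assert f(t, r1 + r2_server) == M2  # tag authenticated
M3 = xor(s, f(t, r2_server + r1))

# Tag side: recover s from M3 and check that it hashes to the stored t.
assert h(xor(M3, f(t, r2 + r1))) == t   # server authenticated
```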
3 Weaknesses of the Song Protocol
This section shows that the Song protocol suffers from DoS attack and database overloading.
DoS Attack
The Song protocol aims to meet some of the main security and privacy features. Resistance to DoS attack is one of the main security features. This is achieved by keeping the old values of the tag's secret (s old ) and hashed secret (t old ) in the server database just once; they are then renewed continuously once authentication is achieved. However, the Song protocol does not provide resistance to DoS attacks. Without knowing the secret value (t i ) which is stored in the tag, an adversary can easily cause synchronisation failure by twice intercepting the communication between the reader and the tag. The protocol will fail if the attacker intercepts the communication in this way; if the server's message (M3) is intercepted, tampered or blocked up to twice, the server database will have no matching data to complete the mutual authentication, causing the DoS attack. For example, in the first access of the tag, the server's values (s old , t old ) are set to null, while (s new , t new ) values are set to specific values where (t new ) is equal to the tag's value (t i ). If the authentication succeeds, then (t new ) and (t i ) will be updated to the same value and (s old , t old ) will take the previous values of (s new , t new ). However, if the attacker blocks M3 from reaching the tag, then the server will update the server's data and the tag will be unable to update (t i ). In this situation, the value (t i in the tag will have to match the value (t old ) in the database and mutual authentication can still be achieved. Now we suppose that the attacker blocks M3 for the second time; then the tag will also not update (t i ), while at that moment, (s old , t old ) in the database have been renewed. As a result, the tag's data will not match the server's data, causing an authentication failure.
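A toy model of this desynchronisation argument is sketched below; it is purely illustrative and models only the synchronisation state, not the cryptography.

```python
# Toy model: the server always shifts new -> old after a round, so two
# blocked M3 messages leave the tag matching neither stored copy.
server = {"t_new": "t1", "t_old": None}
tag = "t1"

def run_round(m3_delivered: bool):
    global tag
    # Server authenticates the tag against t_new or t_old, then updates.
    assert tag in (server["t_new"], server["t_old"])
    next_t = server["t_new"] + "'"          # stands for h(s_new')
    server["t_old"], server["t_new"] = server["t_new"], next_t
    if m3_delivered:
        tag = next_t                        # tag updates only if M3 arrives

run_round(m3_delivered=False)   # first blocked M3: tag still matches t_old
run_round(m3_delivered=False)   # second blocked M3: no stored copy matches now
try:
    run_round(m3_delivered=True)
except AssertionError:
    print("tag and server are desynchronised")
```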
Database Overloading
The Song protocol claims that the server should be able to handle a large tag population without exhausting the server in identifying the tags. However, as shown in [START_REF] Song | RFID Authentication Protocols using Symmetric Cryptography[END_REF], the server needs to perform [(k+2)*F] computations to authenticate the connected tag, where F is a relatively computationally complex function (such as a MAC or hash function) and k is an integer satisfying 1 ≤ k ≤ 2n, where n is the number of tags. Hence, in every tag access, the server database has to run [k*F] computations on all its records to find the matching record, thereby exhausting the server in the searching process and affecting operational performance.
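A minimal sketch of the identification step just described is shown below (illustrative code, not the authors' implementation): the server recomputes the MAC for the stored candidates until one verifies, so the cost grows with the number of stored records.

```python
# Brute-force identification in the Song protocol (illustrative sketch).
# Up to 2n MAC evaluations for n tags, since t_new and t_old are candidates.
import hashlib, hmac

def f(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def identify(records, r1: bytes, M1: bytes, M2: bytes):
    """records: iterable of (s, t, D) tuples holding t_new and t_old candidates."""
    for s, t, D in records:                  # O(N) scan over stored records
        if f(t, r1 + xor(M1, t)) == M2:      # M'2 = f_t(r1 || (M1 xor t))
            return s, t, D                   # tag identified and authenticated
    return None                              # no match: send error message ε
```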
Revised Protocol
We propose an improvement to the Song protocol by eliminating the two issues discussed in Section 3. In the Song protocol, if the authentication is achieved, the server's data will be updated even if the matching record is found in (s old ) and (t old ). In the revised protocol, we propose that the updating process should only take place when the authentication is achieved and the matching record is found in (s new ) and (t new ); otherwise, the data remains the same. The solution is based on Yeh et al.'s protocol [START_REF] Yeh | Securing RFID systems conforming to EPC Class 1 Generation 2 Standard[END_REF] which was designed to avoid a DoS attack found in Chien et al.'s protocol [START_REF] Chien | Mutual authentication protocol for RFID conforming to EPC Class 1 Generation 2 Standards[END_REF].
In order to reduce the number of computations required by the server to authenticate the tag, we use the notion of indexing. This requires the server and tag to store another value to serve as an index. The server stores a new index (I new ) and an old index (I old ), where the tag stores an index value (I i ). The value of the index is assigned during manufacturing. In addition, the tag stores a flag value, which is kept as either 0 or 1 to show whether the tag has been authenticated by the server or not. Moreover, for calculating the index the server and tag need a new value (k) stored by both parties. We assume all the operations in the tag are atomic i.e. either all of the commands or none are processed.
In the revised protocol, we use the same notation as presented in the Song protocol. The initialisation and authentication processes are as follows:
Initialisation Process
This stage only occurs during manufacturing when the manufacturer assigns the initial values in the server and tag. The initialisation process is summarised below:
-The server assigns random values of L bits for each tag it manages to (s new , t new , k new , I new ) in the server and (t i , k i , I i ) in the tag. -Initially, (s old , t old , k old , I old ) in the server is set to null.
-The Flag value in the tag is set to zero.
Authentication Process
The authentication process is summarised below:
-Reader: A reader generates a random bit-string r1 ∈ R {0, 1} l and sends it to the tag T i .
-Tag: A tag T i generates a random bit-string r2 ∈ R {0, 1} l as a temporary secret for the session, and computes M1 = t i ⊕ r2 and M2 = f ti (r1 r2). The tag then checks the value of the Flag: 1. If Flag=0, which means the tag was authenticated successfully, the tag will use the new updated index which is equal to the server's value (I new ), and sends I i , M1 and M2 to the reader. Finally, the tag sets Flag=1, and recomputes the value of an index I i = h(k i ⊕ r2). 2. If Flag=1, which means the tag has not been authenticated, the tag will use the value of the index computed in the former transaction (after setting Flag=1) which is equal to the server's value (I old ), then the tag transfers I i , M1, and M2 to the reader. Finally, the tag sets Flag=1, and recomputes the value of an index I i = h(k i ⊕ r2).
-Reader: The reader transmits M1, M2, I i and r1 to the server.
-Server:
1. The server searches the received value of (I i ) in (I new ) and (I old ) to find a match and retrieves the attached tag data. If there is a match in I new , it retrieves (s new , t new , k new ) associated to (I new ). Then the server sets r2 ← M1 ⊕ t new , and computes M'2 =f tnew (r1 r2) to authenticate the tag. Then it marks x=new. 2. If there is a match in I old , the server retrieves the associated data (s old , t old , k old ), and computes M1 ⊕ t old to obtain r2. The server computes M'2 =f told (r1 r2). If M'2 = M2, then it has identified and authenticated T i . Then it marks x=old. 3. The server computes M3 = s x ⊕ f tx (r2 r1) and sends it with D i to the reader. 4. In case the index is found in I new , the server sets:
s old ← s new s new ← (s new l/4) ⊕ (t new l/4) ⊕ r1 ⊕ r2 t old ← t new t new ← h(s new ) k old ← k new k new ← h(t new ) I old ←h(k old ⊕ r2) I new ←h(k new ⊕ r2)
Otherwise, if I i is found in I old , the server keeps the data the same without any update except for:
I old ←h(k old ⊕ r2) I new ←h(k new ⊕ r2)
-Reader: The reader forwards M3 to the tag T i .
-Tag: The tag T i computes s i = M3 ⊕ f ti (r2 r1) and checks that h(s i ) = t i . If the check fails, the tag keeps the current values unchanged. If the check succeeds, the tag has authenticated the server, and sets:
t i ← h((s i l/4) ⊕ (t i l/4) ⊕ r1 ⊕ r2) k i ← h(t i ) I i ← h(k i ⊕ r2) Flag ← 0
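To see why the index removes the exhaustive search, the following minimal sketch keys the server's records by the current and previous index values so that the matching record is retrieved with a single lookup before the usual MAC check; the dictionary layout is an illustrative assumption, not something prescribed by the protocol.

```python
# Index-based record retrieval in the revised protocol (illustrative sketch).
index_table = {}   # index value I -> ("new" or "old", s, t, k)

def register(I_new, rec_new, I_old=None, rec_old=None):
    index_table[I_new] = ("new",) + rec_new      # rec_* = (s, t, k)
    if I_old is not None:
        index_table[I_old] = ("old",) + rec_old

def lookup(I_received):
    entry = index_table.get(I_received)          # one lookup instead of O(N) MACs
    if entry is None:
        return None                              # unknown index: reject session
    x, s, t, k = entry                           # x marks which copy matched
    return x, s, t, k                            # caller then verifies M2 = f_t(r1 || r2)
```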
Analysis
Due to the fact that the server updates its data after each successful authentication, the Song protocol cannot achieve resistance to a DoS attack. In this section, we analyse our revised protocol and show that it can provide immunity to several attacks including the DoS attack and at the same time improve the server performance. Although, the tag's storage, communication and computation costs will be higher than the Song protocol, but the revised protocol appears to meet stronger privacy and security requirements. 2 demonstrates that the Song protocol needs to perform MAC functions on all the stored hashed secrets (t new , t old ) until it finds the matched tag's record and authenticates the connected tag; in the improved protocol, on the other hand, the server can retrieve the associated tag's record directly according to the received value of index (I i ) and apply the MAC function only on the retrieved data. -Tag location tracking: To prevent tracking the location of the tag's holder, the server's and tag's responses should be anonymous. In the proposed protocol, the server and tag update their data after each successful communication, so the exchanged values are changing continuously. Moreover, in the case the authentication failed, the attacker will still not be able to track the location. -Tag impersonation attack: To impersonate the tag, the attacker must be able to compute a valid response (I i , M1, M2) to a server query. However, it is hard to compute such responses without the knowledge of (t i , k i , r2). Moreover, the current values of M1, M2 and I i are independent from the values sent previously due to the existence of fresh random numbers. -Replay attack: The proposed protocol resists replay attack because it utilises challenge-response scheme. In each session the protocol uses a new pair of fresh random numbers (r1, r2), thus the messages cannot be reused in other sessions. -Server impersonation attack: To impersonate the server, the attacker must be able to compute a valid response (M3). However, it is hard to compute such responses without knowledge of s i , ID i and r2. -Traceability: All the messages transmitted by the tag are not static, they change continuously due to the existence of random numbers and the stored data are updated after each successful authentication. In addition, after the unsuccessful authentication, the tag's data will not change, however, M1 and M2 values still will be different in every session due to the existence of random numbers (r2 and r2).
Furthermore, the index of the tag is changed in both cases (successful authentication and unsuccessful authentication).
Conclusion
This paper showed that the Song protocol has a security problem and a performance issue, specifically a DoS attack and database overloading. To improve the Song protocol, we presented a revised protocol which can prevent the desynchronisation issues without violating any other security properties. Moreover, the newly proposed protocol enhances the overall performance, since it is based on using index values for retrieving the data associated to the connected tags.
Table 1 (summary of the message flow):
1. Reader → Tag: r1 ∈R {0, 1} l
2. Tag → Reader: r2 ∈R {0, 1} l , M1 = ti ⊕ r2 and M2 = fti (r1 r2)
3. Reader → Server: r1, M1 = ti ⊕ r2 and M2 = fti (r1 r2)
4. Server → Reader: M3 = si ⊕ fti (r2 r1) and Di
5. Reader → Tag: M3 = si ⊕ fti (r2 r1)
Table 2: Computational requirements

Tag | The Song protocol [7] | Our improved protocol (Section 4)
Sending | MAC | MAC
Authenticating | MAC + H | MAC + H
Updating | H | 3H
Total | 2MAC + 2H | 2MAC + 4H

Server | The Song protocol [7] | Improved (if x=new) | Improved (if x=old)
Sending | MAC | MAC | MAC
Authenticating | k*MAC | MAC | MAC
Updating | H | 4H | 2H
Total | (k+1)*MAC + H | 2MAC + 4H | 2MAC + 2H

n: the number of tags; k: an integer satisfying 1 ≤ k ≤ 2n; x: kept as either new or old to show whether the tag uses the old or new values of the tag's record; H: hash function; MAC: message authentication code.

- DoS attack: We tend to use the old and new values of (s new , s old , t new , t old ), as pointed out in the Song protocol, to avoid the DoS attack caused by M3 being intercepted. Moreover, in the proposed improved protocol, the server can still use (s old , t old , I old ) to identify a tag, even when the attacker blocks the message (M3) more than once, and thus synchronisation can still be reached.
-
-Database overloading: Table | 21,270 | [
"1001138",
"994012",
"994013"
] | [
"300800",
"300800",
"300800"
] |
01463860 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01463860/file/978-3-642-39218-4_21_Chapter.pdf | Padmanabhan Krishnan
email: [email protected]
Kostyantyn Vorobyov
email: [email protected]
Enforcement of Privacy Requirements
Keywords: Privacy protection, Access control, Formal analysis
niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Introduction
Organisations collect, store and share information with individuals and other organisations. They need to respect the privacy of the entities they interact with and comply with legislative requirements. Privacy policies are used to specify how organisations handle the data in their interactions. These policies can also be shown to be compliant with legislation.
Privacy, especially privacy enhancing technologies (PETs), is focused on protecting an individual's information. Thus, issues such as identity management, user consent, data anonymisation and retention have been the focus of PETs. These issues have implications for enterprises, as they collect data from individuals and use it for various purposes. While access control can limit who can obtain the information, it is not clear (especially to an individual) how an enterprise restricts the use of data. This affects both, individuals (who may be reluctant to transact with an enterprise) and enterprises (which may be inadvertently breaching various privacy guarantees).
While technologies, such as encryption, access control and authorisation can be used to implement a policy, it is important to capture the privacy requirements. The policy then has to be developed from the requirements and finally one can develop enforcement mechanisms.
Policy authoring and enforcement are challenging issues. As it is not possible to anticipate all possible uses of data, it is difficult for designers to indicate the appropriate policy. Thus, policies are often changed when inappropriate usage (i.e., a breach) is detected. Consequently, it is easier to specify what happens when a breach occurs [START_REF] Ayres | Standardizing breach incident reporting: Introduction of a key for hierarchical classification[END_REF]. The difficulty of writing privacy policies is increased as privacy does not have a standard meaning. Each person is likely to have a different interpretation, which could also depend on the application domain [START_REF] Cate | The limits of notice and choice[END_REF]. Additionally, privacy is context dependent and would depend on the user and also the queries handled by the system [START_REF] Farnan | Don't reveal my intension: Protecting user privacy using declarative preferences during distributed query processing[END_REF]. Finally, it is important for policies not to impede normal behaviour [START_REF] Antón | Precluding incongruous behavior by aligning software requirements with security and privacy policies[END_REF].
The purpose of data access [START_REF] Kabir | A conditional purpose-based access control model with dynamic roles[END_REF][START_REF] Jafari | Towards defining semantic foundations for purpose-based privacy policies[END_REF] has attracted attention, especially as there are often conflicting issues between organisations and individuals. For instance, in health systems the importance of surveillance indicates that not all personal information may be private. In general, the purpose for which data is used is important in privacy. Users give permission to enterprises for specific tasks (and they assume that the data will not be used for other tasks). For example, Facebook's privacy policy states that they can use the information they receive for any services they provide including making suggestions of new connections. This is a very broad policy, as anything can be viewed as a service. Amazon allows users to opt out of receiving promotional offers. However, it is not clear if the user's information is not used in creating such offers. Amazon also states that they will not share personally identifiable information to third party providers. But what is personally identifiable is not clearly stated.
Personally identifiable information could include name, date-of-birth, address and national identity number. The chances of identifying an individual from a collection depends on the data. For example, a commonly occurring name or a specific date-of-birth in a census data is unlikely to identify an individual. However, by combining various data types personal information can be identified. Thus, it is important to control the collection of accesses rather than only a single data access.
The main contribution of this paper is a formal approach to determining if access control policies actually implement privacy requirements given a behaviour. We show how a dynamic access control mechanism is not sufficient to enforce privacy requirements. We need to extend the access control mechanism with some monitoring capability.
A prototype implementation that supports this approach is also described. The usefulness of this approach is demonstrated via various examples. In Section 2 we present the formal details. In Section 3 we give some simple examples that illustrate our approach and in Section 4 we give a description of a proofof-concept implementation of our technique. Section 5 presents a review of the related work. Finally, we draw some conclusions and describe future directions in Section 6.
Framework
A specific system in our framework consists of an automaton that represents the behaviour (such as gathering and using the gathered information) and an automaton that represents a controller (including access control). For the behaviour automaton we do not separate individuals into separate automata. We have a single automaton where the label on the transition has the action as well as the individual (and role) who performs the action. The controller can observe actions exhibited by the behaviour automaton, but can also prevent certain actions. This is achieved using dynamic role based access control (RBAC) [START_REF] Ferraiolo | Role-based access controls[END_REF] and a simple semantics of purpose.
Before we describe the formal details, especially of the access control part, we present a motivating example. Assume that Alice releases some personal information. This information can be used for internal purposes, but cannot be used for marketing purposes. Assume Bob can access this information and can decide how to use it. But Bob has to be prevented from using it for marketing purposes. One way is to force Bob to assume different roles for each use. This, however, could increase the number of roles. Also, then there is no difference between purpose (which is a semantic concept) and roles (which is an enforcement concept). Furthermore, the access control mechanism will have to permit and then withdraw the role being assumed. Therefore, it is better to tag certain actions with predicates that represent purposes. The access control entity now either permits or disallows actions. This can be viewed as a mixture of access control and workflow transition enabling.
Thus the control automaton's alphabet will consist of normal actions, access control actions (i.e., permitting and withdrawing roles) and purpose related actions.
We define a composition operator that combines the behaviour automaton with the controller. The composition is based on synchronisation on common actions. However, the access control automaton cannot prevent behaviour purely via the synchronisation requirements. Hence, the composition operator allows actions to occur when the access control automaton cannot exhibit an action.
Formal Details
The main focus of the formalism is to describe the interactions between behaviour, access control and privacy policies.
We assume a set of atomic access control actions indicated by A. These actions correspond to operations on data elements. We also assume a set of individuals I and a set of roles R.
The set of behavioural actions (say Λ) that are performed by individuals assuming a particular role is defined by the set A × I × R. A typical element of this set is indicated by α or by a, i, r where a is an action (element of A), i an individual (element of I) and r is a role (element of R). We define projection functions act, indiv and role which identify the action, the individual and the role respectively. That is, act( a, i, r ) = a, indiv( a, i, r ) = i and role( a, i, r ) = r.
The dynamic access control system uses the set of behavovioral actions of the form a, i, r , i, +r or i, -r . The access control uses actions belonging to Λ to observe the evolution of behaviour. The access control process can keep track of the behaviour and change the permission accordingly. For instance, if an individual has accessed a data item, the access control process can withdraw access to other data items so that the privacy requirements are met.
The action i, +r indicates that user i can assume role r while the action i, -r indicates that user i can no longer assume role r. This will be extended with actions related to the semantics of purposes.
To capture the semantics of purposes, we assume P to be a set of atomic predicates (where a typical element is denoted by p). That is, each element of P represents a specific purpose. We use subsets of P to mark behaviours as a particular behaviour could correspond to many purposes.
We use finite state automata to describe the possible behaviours and access control actions. A behaviour automaton (denoted by A B ) is of the form (Q B , Λ, -→ B , q B0 ), while an access control automaton (denoted by A C ) is of the form (Q C , Λ C , -→ C , q C0 ).
Here Q B and Q C are the sets of states, Λ and Λ C the alphabets (or transition labels), -→ B and -→ C the transition relations and q B0 and q C0 the initial states of the respective automata. We do not have any notion of accepting states as behaviours are valid. For the sake of simplicity, we will assume that the automata are deterministic and hence the transition relations are functions.
Given a behaviour automaton, a purpose map is a function from the transition function to a subset of P . We let M P = {f f :-→ B → P(P )} be the set of all possible purpose maps. Functions in M P mark each transition in the behavioural automaton with a set of purposes.
Formally the labels of the control automaton are drawn from the set (which was denoted by Λ C ) Λ ∪ (I × {+, -} × R) ∪ P(P ). That is, it can observe actions of the behaviour automaton, can change role permissions and allow or deny purpose related action. We will use β to indicate a typical element of this set.
To define the semantics of how the access control process influences or controls the exhibited behaviour, we need to keep track of the roles that can be assumed by the individuals. That is, we need to track the potential role assignments that are currently permitted. This set of possibilities is denoted by the set S which is the set of all functions from individuals to a subset of roles (i.e., I → P(R)). We use ρ to represent a typical element of S.
Given a specific role assignment, a behavioural action α is permitted only if the individual performing the action can assume the required role. The formal definition is given below. Definition 1. The predicate permit(ρ, α) is true if and only if role(α) ∈ ρ(indiv(α)) is true.
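A small illustrative encoding of the permit check of Definition 1 is given below; it is one possible representation, not the prototype implementation referred to elsewhere in the paper.

```python
# permit(rho, alpha): rho maps each individual to the roles they may assume.
from typing import Dict, Set, Tuple

Action = Tuple[str, str, str]   # (a, i, r): action, individual, role

def permit(rho: Dict[str, Set[str]], alpha: Action) -> bool:
    a, i, r = alpha
    return r in rho.get(i, set())

rho = {"bob": {"employee"}}
print(permit(rho, ("read_blog", "bob", "employee")))     # True
print(permit(rho, ("read_blog", "bob", "interviewer")))  # False
```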
We define the ready set of the access control automaton in a given state as the set of actions it can potentially exhibit at the state. The standard definition is given below. Definition 2. For a state q c belonging to Q c , we define ready(q c ) as follows.
ready(q c ) = {β exists q ′ c such that q c β -→ C q ′ c }
In order to ensure that the access control system can indeed control the behaviour, we introduce a notion of stability. Essentially we want a system to evolve only after the access control process has finished making all the access control changes. Definition 3. An access control automaton is stable in state q c , written as stable(q c ), if and only if ready(q c ) ⊆ Λ.
An access control automaton is stable when it can only observe behaviour actions and cannot exhibit any action that can change the role assignment.
This implies that a state that has both observable and access control transitions is not stable.
To define the semantics of the joint behaviour of the behavioural and access control automata, we define the set of possible states of the overall computation. Definition 4. The set of possible system states is the set S × Q B × Q C .
A particular state of the computation is represented by a triple denoted by ⌈ρ, q B , q C ⌉ where ρ ∈ S, q B ∈ Q B and q c ∈ Q C
The transition relation of the automaton obtained by composing the behaviour and access control automata indicated by is defined as follows. For this, we assume a specific purpose map m.
Definition 5. A B A C = (S × Q B × Q C , Λ C , (f ∅ , q B 0 , q C 0 ), -→)
where -→ is defined by the following rules.
1. ⌈ρ, q b , q c ⌉ α -→ ⌈ρ, q ′ b , q ′ c ⌉ if q b α -→ q ′ b , q c α -→ q ′ c provided stable(q c ), permit(ρ, α) and m(q b α -→ q ′ b ) = ∅. 2. ⌈ρ, q b , q c ⌉ α -→ ⌈ρ, q ′ b , q c ⌉ if q b α -→ q ′ b provided stable(q c ), α ∈ ready(q c ), permit(ρ, α) and m(q b α -→ q ′ b ) = ∅ 3. ⌈ρ, q b , q c ⌉ ǫ -→ ⌈ρ ′ , q b , q ′ c ⌉ if q c i,+r -→ q ′ c where ρ ′ (j) = ρ(j) if i = j and ρ ′ (i) = ρ(i) ∪ {r} otherwise. 4. ⌈ρ, q b , q c ⌉ ǫ -→ ⌈ρ ′ , q b , q ′ c ⌉ if q c i,-r -→ q ′ c where ρ ′ (j) = ρ(j) if i = j and ρ ′ (i) = ρ(i) \ {r} otherwise. 5. ⌈ρ, q b , q c ⌉ α -→ ⌈ρ, q ′ b , q ′ c ⌉ if q b α -→ q ′ b , q c ps -→ q ′ c provided stable(q c ) and m(q b α -→ q ′ b ) ⊂ ps.
The first two rules specify permitted behaviour. This requires the action to be permitted in the current state. Furthermore, the access control automaton must be in a stable state and the transition has no specific purpose. Note, that if an action has the right permissions, it cannot be prevented by the access control automaton. That is, there is no need for the behaviour automaton to synchronise on all common actions. The third and fourth rules describe the access control automaton changing the current permissions. Thus, the behaviour automaton does not change its local state. The last rule enforces the required semantics of purpose. If the transition of the behaviour automaton has a purpose related marking, it is permitted only if the controller allows all the purposes present in the marking.
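To make Definition 5 concrete, the following is a minimal Python sketch of the successor (step) relation of the composed system. It is only an illustration of the five rules above, not the paper's implementation: the data layout (label tuples, frozensets of purposes), the reading of stability as "no role-changing action enabled", and the fact that permit is checked for all three behaviour rules are assumptions made for the sketch.

# Minimal sketch of one step of the composed relation of Definition 5.
# Transitions are (source, label, target) triples; behaviour labels are
# tuples (action, individual, role); control labels are either such tuples,
# role changes (individual, '+', role) / (individual, '-', role), or
# frozensets of purposes.  Names are illustrative, not the paper's tool.

def permit(rho, alpha):
    _action, indiv, role = alpha
    return role in rho.get(indiv, set())

def ready(qc, control_trans):
    return [lbl for (src, lbl, _tgt) in control_trans if src == qc]

def is_role_change(lbl):
    return isinstance(lbl, tuple) and len(lbl) == 3 and lbl[1] in ('+', '-')

def stable(qc, control_trans):
    # stable(q_c): no role-changing action is enabled at q_c
    return not any(is_role_change(lbl) for lbl in ready(qc, control_trans))

def step(state, behav_trans, control_trans, m):
    rho, qb, qc = state
    succ = []
    # Rules 3 and 4: internal controller moves that change role permissions.
    for (src, lbl, tgt) in control_trans:
        if src == qc and is_role_change(lbl):
            i, sign, r = lbl
            new_rho = {k: set(v) for k, v in rho.items()}
            roles = new_rho.setdefault(i, set())
            if sign == '+':
                roles.add(r)
            else:
                roles.discard(r)
            succ.append(('eps', (new_rho, qb, tgt)))
    if not stable(qc, control_trans):
        return succ  # rules 1, 2 and 5 additionally require a stable controller
    for (src, alpha, qb2) in behav_trans:
        if src != qb or not permit(rho, alpha):
            continue  # permit is required here for every behaviour rule
        marks = m.get((src, alpha, qb2), frozenset())
        for (csrc, lbl, qc2) in control_trans:
            if csrc != qc:
                continue
            if lbl == alpha and not marks:                             # rule 1
                succ.append((alpha, (rho, qb2, qc2)))
            if isinstance(lbl, frozenset) and marks and marks <= lbl:  # rule 5
                succ.append((alpha, (rho, qb2, qc2)))
        if alpha not in ready(qc, control_trans) and not marks:        # rule 2
            succ.append((alpha, (rho, qb2, qc)))
    return succ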
We write ⌈ρ, q_b, q_c⌉ =α⇒ ⌈ρ', q'_b, q'_c⌉ if there is a sequence of transitions ⌈ρ, q_b, q_c⌉ −ε→ ⌈ρ_1, q_b^1, q_c^1⌉ −ε→ · · · ⌈ρ_k, q_b^k, q_c^k⌉ −α→ ⌈ρ_{k+1}, q_b^{k+1}, q_c^{k+1}⌉ −ε→ · · · ⌈ρ', q'_b, q'_c⌉.
That is, there is a sequence of "internal" moves (indicated by the ε transitions) around a transition exhibiting α. We write this as the triple (σ, α, σ') where σ is ⌈ρ, q_b, q_c⌉ and σ' is ⌈ρ', q'_b, q'_c⌉. Before we present a few simple examples, we make some observations about the structure of the access control automaton.
At any state if the automaton has a transition of the form a, i, r and j, +r or j, -r , the transition with the label a, i, r can be removed without affecting the overall semantics. This is because of the definition of stability; the transition with the label a, i, r will never be taken. Similarly, if there is no transition of the form i, +r from the initial state of the access control automaton, the joint behaviour will not exhibit any action.
For the behaviour automaton, any state that has transitions of the form a, i, r and b, i, r can exhibit neither or both actions unless there is a purpose that distinguishes the two transitions.
Privacy Requirements
We use linear time temporal logic (LTL) to encode the requirements, including privacy, on the behaviour of the composed system. We define two types of atomic predicates. The first is occurs( a, i, r ) which is true at a given state if there is a transition with the label a, i, r . We also define abbreviations where we can leave one of the fields blank. This implicitly implies universal quantification. For instance, occurs( a, -, r ) is an abbreviation for ∀i ∈ I : occurs( a, i, r ).
The second is occurs(p) where p is a purpose and is true if there is a transition marked with a set that contains p. As usual we take runs of the composed automata to define satisfaction. More precisely, (σ 0 , α 0 , σ 1 ), (σ 1 , α 1 , σ 2 ), • • • , i |= occurs(α) iff α i = α. Similarly, (σ 0 , α 0 , σ 1 ), (σ 1 , α 1 , σ 2 ), • • • , i |= occurs(p) iff one of the transitions in (σ i , α i , σ i+1 ) has a marking that contains p.
To express LTL properties we use standard logical and LTL operators, such as ∨, ∧, ¬, → (implication), U (until), □ (always) and ◇ (eventually).
Examples
In this section we present some simple examples that illustrate our approach.
The first example is of a user, say Alice, who writes a blog and also applies for a job. An interviewer (which is a role) is allowed to read the job application, but once they have read the application they cannot read the blog.
To model the behaviour we use the following abbreviations: α_1 = ⟨blogWrite, Alice, user⟩, α_2 = ⟨apply, Alice, user⟩, α_3 = ⟨readApplication, Bob, interviewer⟩ and α_4 = ⟨readBlog, Bob, interviewer⟩. The requirement can be written as occurs(⟨readApplication, −, interviewer⟩) → □¬occurs(⟨readBlog, −, interviewer⟩). The behaviour automaton has a self-loop labelled α_1 at the initial state q_0 and transitions q_0 −α_2→ q_1 −α_3→ q_2 −α_4→ q_3; the enforcing access control automaton (states r_0, . . . , r_4) grants Alice the user role and Bob the interviewer role, observes α_3, and then revokes Bob's interviewer role. The requirement is enforced because after Bob has read Alice's application (α_3), he cannot assume the role of an interviewer. Note that the stability requirement on the access control automaton means that α_4 is not enabled until the transition from r_3 to r_4 is executed. But once this transition is executed, α_4 cannot be exhibited as the permit predicate will evaluate to false.
The second example is when Alice generates some data item and Bob can access it only after it is made anonymous. To model this we let α_1 be ⟨dataWrite, Alice, generator⟩, α_2 be ⟨dataAnonimise, Alice, generator⟩ and α_3 be ⟨dataAccess, Bob, accessor⟩.
The privacy requirement is captured by the formula
¬ occurs(α 3 ) U occurs(α 2 )
The behaviour can be represented by the automaton with transitions q_0 −α_1→ q_1, q_1 −α_2→ q_2, q_1 −α_3→ q_4 and q_2 −α_3→ q_3, where reaching q_4 corresponds to Bob accessing the data before it has been anonymised. In this case the access control automaton gives Bob the permission to assume the role of accessor only after observing α_2. Hence, the behaviour automaton cannot exhibit α_3 in state q_1. This is very similar to classical discrete event control systems where the controller can observe certain actions before enabling other actions.
Our final example is from [START_REF] Leblanc | Physiotherapists' privacy requirements in Ontario[END_REF]. Sometimes a patient needs to be transferred to another unit. This is normally permitted unless the patient opts out. Also a patient's treatment can be used for training purpose. This, however, requires explicit permission from the patient.
For the sake of simplicity we assume there is only one patient P at who can assume the patient's role (pat) and one doctor Doc who can assume the doctor's role doc.
The abbreviation α_1 stands for ⟨optOut, Pat, pat⟩, which indicates that the patient wants to opt out of the transfer scheme; the abbreviation α_2 denotes ⟨signPerm, Pat, pat⟩, which indicates that the patient is happy for the treatment information to be used for training. The actions α_3 : ⟨diagnosis, Doc, doc⟩ and α_4 : ⟨treat, Doc, doc⟩ are part of the normal medical process. The action α_5 : ⟨move, Doc, doc⟩ indicates the patient being transferred, while the action α_6 : ⟨useTrain, Doc, doc⟩ indicates the treatment being used in the training process.
We use the predicates forTraining and transfer as the set of purposes. The privacy requirements are:
(¬occurs(α_6) U occurs(forTraining)) ∨ □(¬occurs(forTraining)), and □(occurs(α_1) → □(¬occurs(transfer)))
Consider the following behaviour.
The behaviour automaton has self-loops labelled α_1 and α_2 at the initial state q_0, transitions q_0 −α_3→ q_1 −α_4→ q_2, a self-loop q_2 −α_6→ q_2 and a transition q_2 −α_5→ q_3.
Let the transition q_2 −α_6→ q_2 be marked with the purpose forTraining and the transition q_2 −α_5→ q_3 be marked with the purpose transfer.
Consider the access control automaton for this example: starting from r_0 it allows the normal medical actions and the transfer purpose, withdraws the transfer purpose after observing α_1, and grants the forTraining purpose (a state r_5 labelled {forTraining}) only after observing α_2. The joint behaviour ensures that whenever the patient chooses to opt out (indicated by the action α_1), the access control removes the option of the transfer purpose (in this case the ability to exhibit α_5). When the patient gives permission (indicated by the action α_2), the action α_6 can be exhibited. Note that all these actions are executed by the medical staff (indicated by the role doc) and hence dynamic RBAC by itself cannot enforce such requirements.
Prototype Implementation
We now describe the proof-of-concept implementation of our approach.
The prototype implementation consists of two parts: the front-end (a graphical user interface (Figure 1)) and the back-end (a code generator).
The graphical interface allows a user to specify atomic elements of a system, which includes individuals, access control actions, roles, predicates and states. Using these elements one can construct two labelled transition systems (LTS) that describe behavioural and enforcement automata. Additionally, a user is provided with the interface for specification of properties in LTL.
LTS specified via the front-end are used to generate a specification in the SAL [START_REF] De Moura | SAL 2[END_REF] language. The generated specification consists of two modules that represent behaviour and enforcement automata and SAL properties, which, depending on the user's intent, should or should not hold in a system with enforced privacy requirements.
We now explain how we generate SAL specifications. At the SAL level we remove the notion of users, roles and access control actions, replacing behavioural actions with predicates. Actions observed by enforcement automaton (i.e. a, i, r , a ∈ A, i ∈ I, r ∈ R) are mapped to boolean variables, which indicate whether a particular behavioural action was observed (true) or not (false). Similarly, the set of roles, permitted or forbidden for individuals, is mapped to the set of boolean variables, such that a user can assume a particular role, if the respective variable is set to true and can not otherwise. Finally, a label in the behavioural LTS may be associated with a set of predicates which represent purposes. Values of purposes are specified by the user. Generated modules are composed into an asynchronous system synchronised as follows:
- A transition in an enforcement LTS with a label of the form ⟨i, ⊕, r⟩, i ∈ I, ⊕ ∈ {+, −}, r ∈ R is executed unconditionally and sets a boolean variable (say p1) that allows or forbids an individual i to assume some role r to true or false respectively.
- A transition in a behaviour LTS with a label of the form ⟨a, i, r⟩, a ∈ A, i ∈ I, r ∈ R is executed if and only if permitted by p1 (i.e., the value of p1 is true, so the individual i can assume role r) set by the enforcement automaton. That is, a given user can perform an action assuming a particular role only if that is permitted by the privacy requirements. This, in turn, sets to true a boolean variable (say a1), which indicates that the action a, performed by the individual i assuming the role r, has been exhibited and can be observed by the enforcement LTS.
- A transition in an enforcement LTS with a label of the form ⟨a, i, r⟩, a ∈ A, i ∈ I, r ∈ R, is executed if and only if permitted by a1 (set by the behaviour LTS). This models the enforcement LTS observing that the action a, performed by the individual i assuming the role r, has been exhibited by the behaviour LTS.
Code listings 1 and 2 depict the SAL representation of the enforcement and behaviour LTS based on example 2. Variables enforcement_state and behaviour_state represent the enforcement and behavioural automata states, booleans AccessorBob and GeneratorAlice forbid or allow individuals to assume roles, and boolean variables a1, a2, a3 represent the observed actions α_1, α_2 and α_3 respectively. States r0 ... r3 of the enforcement transition system and q0 ... q4 of the behaviour LTS refer to states r_0, ..., r_3 and q_0, ..., q_4 of the enforcement and behaviour automata in example 2.
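As a rough illustration of the mapping just described (a simplified stand-in, not the tool's actual back-end code), the following Python sketch emits the behaviour module's guarded commands in the style of Listing 1; the role/flag naming convention is copied from the listings, everything else is assumed.

# Simplified stand-in for the SAL back-end: one guarded command per behaviour
# transition <a, i, r> : q -> q', guarded by the role boolean RoleIndividual
# and setting the corresponding observation flag (cf. Listings 1 and 2).
def sal_behaviour_transitions(transitions, action_flag):
    # transitions: list of (state, (action, individual, role), next_state)
    # action_flag: maps (action, individual, role) -> flag name, e.g. "a1"
    cmds = []
    for q, (a, i, r), q_next in transitions:
        guard = f"behaviour_state = {q} AND {r.capitalize()}{i} = true"
        update = f"behaviour_state' = {q_next}; {action_flag[(a, i, r)]}' = true;"
        cmds.append(f"{guard} -->\n    {update}")
    return "TRANSITION\n[ " + "\n[] ".join(cmds) + "\n]"

# Example 2 (cf. Listing 1):
trans = [("q0", ("dataWrite", "Alice", "generator"), "q1"),
         ("q1", ("dataAnonimise", "Alice", "generator"), "q2"),
         ("q1", ("dataAccess", "Bob", "accessor"), "q4"),
         ("q2", ("dataAccess", "Bob", "accessor"), "q3")]
flags = {("dataWrite", "Alice", "generator"): "a1",
         ("dataAnonimise", "Alice", "generator"): "a2",
         ("dataAccess", "Bob", "accessor"): "a3"}
print(sal_behaviour_transitions(trans, flags))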
Note how the enforcement automaton prevents a privacy violation (i.e., a transition from q_1 to q_4). The transition is executed only if Bob can assume the role of accessor (i.e., AccessorBob is true), which is set to true only when action α_2 is observed (i.e., a2 is true), which is only possible if the data has been made anonymous. That is, the transition q_1 → q_4 is eliminated by the enforcement LTS, which prevents a privacy violation.
Finally, the generated SAL specification can be checked with sal-smc (the SAL symbolic model checker) using LTL properties specified by the user via the front-end. For example, one can check whether privacy requirements are indeed enforced, or whether the enforcement of privacy requirements does not impede the system's behaviour, i.e., whether a particular state in the behaviour transition system is reached or a particular action is executed. For instance, in the above example, one can specify a property G(not q4) (state q_4, which constitutes a privacy violation, is never reached) to verify that the generated system does prevent the violation of privacy.
Related Work
Our semantic framework is based on parallel composition of finite state automata. Our composition operator is derived from the classical controller [21] for discrete event systems and synchronisation on common actions [22].
The precise semantics of the different uses of purposes in the privacy policies are not clear. The data-purpose algebra [START_REF] Hanson | Datapurpose algebra: Modeling data usage policies[END_REF] shows how data can be used at each stage in the computation. They use a set of atomic values to indicate purpose. These atomic values are associated with data items indicating if the data can be used for a specific purpose. The semantics in [START_REF] Tschantz | On the semantics of purpose requirements in privacy policies[END_REF] is to support automatic auditing. It is also based on Markov decision processes. Conditional purpose using a hierarchical structure and compliance is presented in [START_REF] Kabir | A conditional purpose-based access control model with dynamic roles[END_REF]. The meaning of purpose via an action is presented in [START_REF] Jafari | Towards defining semantic foundations for purpose-based privacy policies[END_REF]. Semantics of intention [START_REF] Kagal | Preserving privacy based on semantic policy tools[END_REF] provides another look at purpose. Johnson et al. [START_REF] Johnson | Optimizing a policy authoring framework for security and privacy policies[END_REF] present the concepts of template author, policy authors and policy implementers. But it is more about managing privacy policies rather than semantics of the policies themselves.
RBAC [START_REF] Jha | Towards formal verification of role-based access control policies[END_REF] and its extensions [START_REF] Fong | Relationship-based access control policies and their policy languages[END_REF] are very common forms of access control. They can be used to specify who has access to data and also what role they need to assume. One can verify if an implementation technique actually satisfies the policies specified in RBAC. The link between access control and workflow [START_REF] Barletta | Workflow and access control reloaded: a declarative specification framework for the automated analysis of web services[END_REF] is used to verify designs. The formalism is based on Petri nets. Our access control automaton describes a much simpler semantics. However, our requirements are also limited.
There is also a need to model dynamic behaviour. Deontic logic [20] and modal logic [7] have been used to give a semantics to purpose. The authors of [13] also develop a notion of privacy where portions of data can be protected.
Conclusions
In this paper we have described a formal approach to determining whether access control policies implement privacy requirements given a system's behaviour. This is achieved by extending the dynamic role-based access control mechanism with a monitoring capability. We represent a specific system using two automata: the first, the behaviour automaton, represents the behaviour (e.g. gathering and using the gathered data), while the second, the controller automaton, captures the privacy requirements of the system (including access control). Enforcement of privacy requirements is achieved via a synchronised composition of the two, such that the controller grants access permissions, observes actions exhibited by the behaviour automaton and prevents actions which may violate privacy. We show how access control may fail to detect privacy violations and demonstrate the applicability of our approach using various examples. We have implemented our approach in a prototype tool, which provides a simple interface for the specification of the system's behaviour and privacy requirements and can automatically generate a specification in the SAL language. One can then model check the generated specification against an arbitrary set of LTL properties using the SAL symbolic model checker.
Listing 1 (Example 2, behaviour LTS):
TRANSITION
[ behaviour_state = q0 AND GeneratorAlice = true --> behaviour_state' = q1; a1' = true;
[] behaviour_state = q1 AND GeneratorAlice = true --> behaviour_state' = q2; a2' = true;
[] behaviour_state = q1 AND AccessorBob = true --> behaviour_state' = q4; a3' = true;
[] behaviour_state = q2 AND AccessorBob = true --> behaviour_state' = q3; a3' = true;
]
Listing 2 (Example 2, enforcement LTS):
TRANSITION
[ enforcement_state = r0 --> enforcement_state' = r1;
[] enforcement_state = r1 --> enforcement_state' = r2; GeneratorAlice' = true;
[] enforcement_state = r2 AND a2 = true --> enforcement_state' = r3;
[] enforcement_state = r3 --> enforcement_state' = r3; AccessorBob' = true;
]
Fig. 1. User Interface of Prototype Implementation
α_1 = ⟨blogWrite, Alice, user⟩, α_2 = ⟨apply, Alice, user⟩, α_3 = ⟨readApplication, Bob, interviewer⟩, α_4 = ⟨readBlog, Bob, interviewer⟩
This requirement can be written as
occurs(⟨readApplication, −, interviewer⟩) → □¬(occurs(⟨readBlog, −, interviewer⟩)).
If the behaviour automaton had the following structure,
namely a self-loop labelled α_1 at the initial state q_0 and transitions q_0 −α_2→ q_1 −α_3→ q_2 −α_4→ q_3,
the following automaton can enforce the above requirement:
r_0 −⟨Alice, +user⟩→ r_1 −⟨Bob, +interviewer⟩→ r_2 −α_3→ r_3 −⟨Bob, −interviewer⟩→ r_4
Acknowledgements
The first author was affiliated with Bond University where most of this work was done. He is currently affiliated with Oracle Labs. The second author was supported by a VC grant from Bond University. | 31,369 | [
"1001146",
"1001147"
] | [
"239213",
"484452"
] |
01464172 | en | [
"chim"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01464172/file/2017%20PROCI%20hexanal.pdf | Anne Rodriguez
Olivier Herbinet
Frédérique Battin-Leclerc
A study of the low-temperature oxidation of a long chain aldehyde: n-hexanal
Keywords: Oxidation, Aldehyde, n-Hexanal, Jet-stirred reactor, Modeling
Large aldehydes are important species among the oxygenated products formed during the oxidation of fuels (e.g. alkanes or alkenes) and biofuels (e.g. long chain alcohols), but their oxidation chemistry has been scarcely studied up to now. In this work, a study of the oxidation of hexanal has been performed in a jet-stirred reactor over the temperature range 475-1100 K, at a residence time of 2 s, a pressure of 106.7 kPa, an inlet fuel mole fraction of 0.005 and at three equivalence ratios (0.25, 1 and 2). Reaction products were quantified using two analytical methods: gas chromatography and cw-cavity ring-down spectroscopy. In addition to species usually measured during oxidation study (such as CO, CO2, water, H2O2 and C1-C3 olefins, aldehydes and ketones), specific low temperature oxidation species were also observed: C6 species, such as δ-and γ-caprolactones and hexanoic acid, but also C5 species, such as 2-methyltetrahydrofuran and pentanal which are typical reaction products detected during the oxidation of n-pentane. A detailed kinetic model has been developed using an automatic generation software. This model was used to highlight the specificities of the oxidation chemistry of long chain aldehydes and to understand the formation routes of the C5 and C6 low-temperature oxidation products. Simulations indicate that the low energy of the aldehyde C-H bond has a significant effect on the fuel reactivity and on the distribution of reaction products.
Introduction
The oxidation chemistry of long chain aldehydes (defined here as species with more than two carbon atoms) is of utmost importance because they are typical intermediates formed during the oxidation of conventional fuels and biofuels. As an example, the formation of significant amounts of long chain aldehydes was observed during the low-temperature oxidation of olefins via the Waddington mechanism [1], as well as during the low-temperature oxidation of long chain alcohols [2]. To a lesser extent, aldehydes are also typical low-temperature oxidation products of alkanes. Note also that new types of fuels, such as bio-oils produced from biomass pyrolysis, contain many molecules which include aldehyde functions [3]. Yet there are still very few data in the literature about aldehyde oxidation chemistry, especially at low temperature. To our knowledge, no data are available for aldehydes larger than n-pentanal.
A literature review shows that the studies of the oxidation and pyrolysis of propanal, n-butanal and n-pentanal were mainly performed using shock tubes [4][5][6][7][8] or burners [9] in the high temperature region. To our knowledge, the only low-temperature oxidation experiments were carried out by Veloo et al. in a flow tube [10,11] over the temperature range 500-1200 K. The species investigated were propanal, n- and iso-butanal. The kinetic analysis of the model at 655 K showed that propanal was mainly consumed by H-atom abstraction involving the H-atom of the aldehyde group, forming a radical which decomposed to CO and an ethyl radical through α-scission [10]. The oxidation of the ethyl radical was then responsible for the low-temperature reactivity of propanal. A larger low-temperature reactivity was observed for n-butanal than for propanal, and no low-temperature reactivity for iso-butanal [11]. According to a kinetic analysis performed at 645 K, n-butanal was mainly consumed via H-atom abstraction involving the H-atom of the aldehyde group (65% of the consumption of the fuel). This radical reacted in two ways: via an α-scission to CO and the n-propyl radical (30%) and an addition to O2 forming a peroxy radical (70%). This peroxy radical and the n-propyl radical can undergo low-temperature oxidation chemistry, leading to the formation of branching agents through the formation of peroxy-butanoic acid and n-propyl-hydroperoxide. No specific low-temperature oxidation products such as cyclic ethers with the same skeleton as the fuel were detected during these studies.
The goal of the present work was to investigate the oxidation chemistry of a larger aldehyde, n-hexanal, in a jet-stirred reactor in order to better understand the oxidation chemistry of this type of reactant and to highlight the possible effect of the chain length on the reactivity and the nature of reaction products.
Experimental section
The experimental apparatus and analytical techniques used in the present work have already been described in previous papers and only the main features are discussed in the manuscript (a more detailed description is given as supplementary material).
Experiments were carried out in a fused silica jet-stirred reactor. It is a continuous stirred tank reactor operated at steady state. It was designed to have homogeneous concentrations and temperatures for residence times between 0.5 and 5 s [START_REF] Herbinet | Clean Combustion[END_REF]. The homogeneity is ensured by turbulent jets exiting four nozzles located in the center of the spherical reactor. The reactor is preceded by an annular preheater (made of two concentric tubes) which helps maintain a homogeneous temperature. Both preheater and reactor were heated up to the reaction temperature using Thermocoax resistances with temperature control using type K thermocouples. The reaction temperature was measured with an independent type K thermocouple located in the intra annular part of the preheater with the extremity close to the four nozzles in the center of the reactor (temperature uncertainty of ± 5 K). The liquid fuel flow rate was controlled using a Coriolis flow controller, mixed with the carrier gas (helium) and evaporated in a heat exchanger. Oxygen flow was added at the inlet of the reactor to minimize possible reactions with the fuel before entering the reactor. Helium and oxygen flow rates were controlled using mass flow controllers. The relative uncertainty in flow rates were ± 0.5% according to the manufacturer, which resulted in a relative uncertainty of about ± 0.5% in the residence time of the gas in the reactor. The fuel was provided by Sigma-Aldrich (purity > 98%). Gases (oxygen and helium) were provided by Messer (purities of 99.995% and 99.999%, respectively). Two diagnostics were used to quantify the reaction products. Most of the species were analyzed using gas chromatography (GC) with a flame ionization detector (FID) for the detection of carbon containing species. The use of a methanizer (hydrogenation on nickel catalyst) enabled the quantification of CO and CO2. Formaldehyde was also detected by GC but the quantification was difficult and not precise as the peak had a long tail and a retention time close to that of other species (methanol, acetaldehyde and oxirane). Relative uncertainties in mole fractions were estimated to be ± 5% for species which were calibrated using standards and ± 10% for species which were calibrated using the effective carbon number method and for species whose peaks are co-eluted (as formaldehyde).
Formaldehyde, as well as water and hydrogen peroxide, were quantified using a spectroscopic technique: continuous wave cavity ring-down spectroscopy. This technique relies on the absorption of light (in the near infrared range: 6620-6644 cm -1 in the present case) by species in a spectroscopic cell working at low pressure (10 Torr). A sonic probe was used to couple the reactor to the cell in order to have a sufficient pressure drop between the reactor working at 800 Torr and the cell. Absorption lines used for the quantification of formaldehyde, water and hydrogen peroxide are the same as those used in [START_REF] Rodriguez | [END_REF]. The average relative uncertainty in mole fraction was estimated to be ± 15% depending on the absorption line used for the quantification and the concentration of the species. Absorption lines and cross sections used for the quantification of formaldehyde, water and hydrogen peroxide are given as Supplementary Material.
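As a rough illustration of how mole fractions are retrieved from cw-CRDS measurements (this is the textbook ring-down relation, not a description of the authors' exact data treatment, and the numbers below are placeholders), the absorption coefficient follows from the ring-down times with and without absorber, and the mole fraction from the cross section and the gas number density:

# Textbook cw-CRDS retrieval (illustrative; placeholder values only).
C = 2.998e10          # speed of light, cm/s
K_B = 1.380649e-16    # Boltzmann constant, erg/K

def mole_fraction(tau, tau0, sigma, p_torr, t_kelvin, rl=1.0):
    """tau, tau0: ring-down times (s) with/without absorber;
    sigma: absorption cross section (cm^2); rl: ratio of cavity length
    to absorption path length (1 if the cell is filled uniformly)."""
    alpha = (rl / C) * (1.0 / tau - 1.0 / tau0)    # absorption coefficient, cm^-1
    n_tot = (p_torr * 1333.22) / (K_B * t_kelvin)  # total number density, cm^-3
    return alpha / (sigma * n_tot)

# Hypothetical example at the 10 Torr cell pressure mentioned above:
# mole_fraction(40e-6, 42e-6, 3e-21, 10, 300)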
Experimental results
Experiments were carried out over the temperature range 475-1100 K (with a step of 25 K), at a residence time of 2 s, a pressure of 106.7 kPa, an inlet fuel mole fraction of 0.005 and at three equivalence ratios (0.25, 1 and 2).
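For orientation, the inlet oxygen mole fractions corresponding to these equivalence ratios can be recovered from the stoichiometry of complete hexanal combustion (C6H12O + 8.5 O2 -> 6 CO2 + 6 H2O); the short sketch below is only a reader's aid, not part of the original data processing.

# Inlet O2 mole fraction from the equivalence ratio, for hexanal (C6H12O).
# Complete combustion: C6H12O + 8.5 O2 -> 6 CO2 + 6 H2O.
STOICH_O2 = 8.5

def x_O2_inlet(x_fuel, phi):
    # phi = (x_fuel / x_O2) / (x_fuel / x_O2)_stoich
    return STOICH_O2 * x_fuel / phi

for phi in (0.25, 1.0, 2.0):
    print(phi, x_O2_inlet(0.005, phi))   # 0.17, 0.0425, 0.02125 (balance: helium)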
Fuel mole fraction profiles are displayed in Figure 1 for the three equivalence ratios investigated. As for n-alkanes, the leaner the mixture, the larger the reactivity between 550 and 800 K (with the inlet fuel mole fraction kept constant for all φ). A maximum conversion of 45% is reached under rich conditions in the range 600-625 K. The maximum conversion reaches about 75% in the same range for the stoichiometric and lean conditions. Hexanal displays a notable reactivity during its oxidation under the conditions of this study. This is demonstrated by some reactant conversion observed at temperatures as low as 475 K under lean conditions. Mole fraction profiles show a strong negative temperature coefficient (NTC) behavior but a significant reactivity can still be spotted at the end of the NTC even under rich conditions. Many reaction products were detected during the oxidation of hexanal (see Figures 1 and 2 and Figure S1 in Supplementary Material). Reaction products can be sorted into two groups. The first group includes species which are usually observed during oxidation studies: CO, CO2, water, H2O2, small C1-C3 hydrocarbons, aldehydes and ketones. The second group concerns specific low-temperature oxidation products. As for n-alkanes, some of these specific products had the same skeleton as the reactant: 5-ethyl-dihydrofuranone (γ-caprolactone), 6-methyltetrahydropyranone (δ-caprolactone) and hexanoic acid (see Figure 3 for structures). But here it should be noted that some C5 low-temperature oxidation products with significant mole fractions were also observed: 2-methyl-tetrahydrofuran and pentanal. As discussed later, these species are formed from reactions of the n-pentyl radical obtained from the α-scission of the hexanoyl radical. A particularity of these C5-C6 low-temperature products is that their mole fraction profiles exhibit only one large peak whereas two peaks with a NTC area in between (one at low and one at high temperature) are observed for smaller species. The effect of the equivalence ratio differs from one species to another. An interesting point is that the formation of some oxygenated species is favored at low temperature under rich conditions whereas the opposite trend is observed for most products as shown in Figure 2. This unusual promoting effect of φ under rich conditions is clearly visible for 2-methyl-tetrahydrofuran. This particularity is due to the decomposition reaction of the hexanoyl radical (yielding the n-pentyl radical, which is the precursor of 2-methyl-tetrahydrofuran), which is favored under rich conditions to the detriment of the reaction of addition to O2. The formation routes of hexanoic acid are likely complex as suggested by the mole fraction profiles obtained in experiments. The formation of this species seems to be favored in the NTC region under rich conditions with no obvious explanation. Another point is the sensitivity of the mole fractions of hydrogen peroxide and formaldehyde to the equivalence ratio. The maximum mole fraction of hydrogen peroxide at low temperature obtained under rich conditions is one order of magnitude lower than that measured under lean conditions. For formaldehyde, the maximum mole fraction under lean conditions is 4 times larger than that under rich conditions.
The reactivity of hexanal was compared with that of two fuels including the same number of carbon atoms but belonging to different fuel families: 1-hexanol and n-hexane. Experimental fuel mole fractions obtained at 𝜑 = 1 are displayed in Figure 4. Studies were performed under the same JSR and mixture conditions. Large differences were observed between the reactivity of the three fuels. Some reactivity can already be spotted from 525 K for hexanal, whereas the start of the fuel consumption can only be observed from 600 K and 625 K for n-hexane and 1-hexanol, respectively. The most reactive species at 625 K was hexanal (with 70% of conversion). It was followed by n-hexane (conversion of 40%) and 1-hexanol (conversion of 15%). Another major difference was the absence of NTC behavior for 1-hexanol, whereas a well-marked one was observed for both hexanal and n-hexane. As n-hexane was less reactive than hexanal, the conversion was very low in the range 725-800 K. Note however that above 750 K, 1-hexanol was more reactive than the two other fuels. This comparison highlights the significant effect of the presence of an oxygen atom in the fuel on the reactivity of alcohols and aldehydes, even if the two oxygenated fuels react in very different ways. This reactivity notably depends on how the oxygen atom is linked to the rest of the molecule. An important result is that n-alkanes cannot be used as surrogates for long chain alcohols and aldehydes (whereas they have a reactivity somewhat similar to that of methyl esters [14]).
Modeling
A detailed kinetic model was developed for the oxidation of hexanal. The model was generated using the Exgas software. Exgas is a tool for the automatic generation of detailed kinetic models for the oxidation of fuel components such as alkanes [15], saturated esters [16] and alkyl-aromatics [17].
A virtual blend of hexanal and n-pentane was considered during the generation to better account for the chemistry of the n-pentyl radical which played an important role in the oxidation of hexanal as suggested by experimental observations. In addition to a C0-C2 reaction base including pressure dependent reactions, the mechanism generated by Exgas for hexanal and npentane considers all the primary and secondary reaction classes usually considered during alkane oxidation [15]. This involves especially all the reactions of the alkylperoxy radicals, including isomerizations and formation of cyclic ethers and ketohydroperoxides.
However, some reactions were missing from the generated model because the presence of the aldehyde group gives rise to reactions that are not covered by the standard alkane reaction classes. As an example, the only reaction written for the hexanoyl radical (R29C6H11OK) was the decomposition through α-scission to CO and the n-pentyl radical. The low-temperature oxidation chemistry of the hexanoyl radical was not considered in the generated model and the missing chemistry set was written by hand (see Figure S2 in Supplementary Material). The new reaction considered for the hexanoyl radical was the addition of this radical to O2 leading to the formation of a peroxy radical. Reactions considered for this peroxy radical were isomerizations to hydroperoxy-alkyl radicals and a reaction specific to aldehydes, namely the combination with HO2 radicals to form hexanoic acid or peroxy hexanoic acid as proposed by Le Crâne et al. [18] (see Figure 5). Kinetic constants for H-atom abstractions by OH and HO2 are those from Exgas except for the specific H-atoms of the aldehyde group and those in the beta position, which were taken from the study by Pelucchi et al. [8]. A comparison of Exgas kinetic data with literature data is given in Supplementary Material. It shows a good agreement between Veloo et al. [11] and Exgas data. The addition of OH radicals to the carbon atom of the carbonyl function has also been written but this does not contribute significantly to the formation of hexanoic acid. Reactions considered for hydroperoxy-alkyl radicals are the second addition to O2 and reactions of decomposition to cyclic ethers or unsaturated species. Note that these reactions are responsible for the formation of γ-caprolactone and δ-caprolactone that were detected in experiments. Kinetic parameters used for the added low-temperature reactions are those already used in Exgas for alkanes [15] (except the reactions for hexanoic acid or peroxy hexanoic acid, for which the kinetic parameters are those proposed by Le Crâne et al. [18]). The activation energy of the reaction of α-scission of the hexanoyl radical was increased from 17.2 to 20.0 kcal mol-1 to obtain a better agreement between computed and experimental data. Note that the kinetic constant proposed in [8] for the α-scission was tested but appeared to be too fast. The model described in this paper contains 482 species involved in 2590 reactions. It is given in Chemkin format in Supplementary Material. Simulations were performed using the PSR module in the Chemkin II package [19].
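As a rough back-of-the-envelope check (a reader's aid only, assuming a simple Arrhenius form with an unchanged pre-exponential factor), raising the activation energy of the α-scission from 17.2 to 20.0 kcal mol-1 slows that channel by roughly a factor of ten at 625 K:

import math

R = 1.987e-3   # kcal mol^-1 K^-1
T = 625.0      # K

# Ratio k(Ea = 17.2) / k(Ea = 20.0) at a fixed A-factor
ratio = math.exp((20.0 - 17.2) / (R * T))
print(ratio)   # ~9.5, i.e. roughly a tenfold slower alpha-scission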
Figure 5: Formation routes of hexanoic acid at 625 K. Percentages correspond to relative formation flux of hexanoic acid.
Discussion
Mole fractions computed using the model are displayed in Figures 1 and 2. As far as the reactivity is concerned, the model predicts well the NTC behavior at all equivalence ratios, but simulations over-predict the fuel consumption under rich conditions between 600 and 700 K. The reactivity over-estimation under rich conditions causes an over-prediction of product mole fractions between 600 and 700 K. Under lean and stoichiometric conditions, the agreement is overall satisfactory for water, CO, formaldehyde and acetaldehyde. The agreement is also acceptable for specific C5 and C6 products, but the discrepancies which can be spotted in the graphs of Figure 2 show that refinements are still needed to better account for the chemistry of the oxidation of hexanal. A kinetic analysis of the model was performed to identify sensitive reactions and to propose possible improvements of the oxidation chemistry of large aldehydes. According to the rate of production analysis performed at 625 K for the stoichiometric mixture (see Figures S3 and S4), the fuel is mainly consumed via H-atom abstractions (with OH radicals). Note that the formation of the hexanoyl radical (R29 in the model) is favored due to the lower C-H bond energy (35% of the consumption of hexanal) but that other H-atom abstractions are not negligible (between 5 and 24% according to the nature of the abstracted H-atom). An important point is that all radicals but one (that with the radical center in position 3) finally isomerize yielding the hexanoyl radical (these isomerizations preferentially involve five and six membered ring transition states) and that overall 80% of the consumed fuel leads to this C5H9CO radical. The hexanoyl radical reacts through two competitive pathways: 1) it decomposes via an α-scission to the n-pentyl radical and CO (67%), 2) it adds to O2 to form a peroxy-aldehyde radical (33%). Veloo et al. observed an opposite ratio for the α-scission and the addition to O2 of the butanoyl radical in their butanal study [11]. A test with their kinetic constants led to large discrepancies in the prediction of the mole fractions of C5 and C6 low-temperature oxidation products. The n-pentyl radical obtained via the α-scission then undergoes the typical low-temperature oxidation chemistry of alkanes (which is included in the chemistry of oxidation of n-pentane virtually considered in the list of reactants for the automatic generation). This chemistry is responsible for the formation of 2-methyl-tetrahydrofuran and pentanal. The peroxy-aldehyde radical (R290 in the flux diagrams in Figure S3) mainly reacts via isomerizations to yield new hydroperoxy radicals which can add to O2 (leading to branching) or decompose to cyclic ethers (such as the two caprolactones observed in experiments). The second consumption route for the peroxy-aldehyde radical is the combination with HO2 radicals yielding hexanoic acid and peracid (1.7 and 6.5%, respectively). Hexanoic acid is mainly formed from the reaction of combination of R290 with HO2 (99%), whereas the more direct route starting with the addition of OH to the carbonyl group represents less than 1% of the formation of this acid.
Figure 6 displays the sensitivity analysis on the fuel mole fraction performed at 625 K and φ=1 (sensitivity analyses at the three φ are given in Supplementary Material). The sensitive reactions promoting the reactivity are H-atom abstractions (especially that abstracting the H-atom of the carbonyl group, forming the hexanoyl radical) and the decomposition of the hexanoyl radical to CO and the n-pentyl radical. H-atom abstractions are likely not responsible for the discrepancies observed at low temperature for φ = 0.5 and 2 as their effect is similar whatever the value of φ. The most sensitive reactions inhibiting the reactivity are concurrent reactions consuming OH radicals, such as H-atom abstractions involving acetaldehyde and formaldehyde, the reaction of addition of the hexanoyl radical to O2 (forming the peroxy radical R290) which competes with its decomposition, and the isomerization of R290 to the hydroperoxy radical R292. This last reaction is important because R292 decomposes to γ-caprolactone + OH (an inhibiting propagation pathway). Its inhibiting effect is more important under lean conditions. Under rich conditions, reactions involved in the low-temperature chemistry of the n-pentyl radical are likely responsible for the observed discrepancies (they have a promoting effect). The main lesson to be learnt from the kinetic analysis is that a more accurate set of kinetic parameters would be required for carbonyl-containing species to better account for the chemistry of oxidation of long chain aldehydes.
Conclusion
The study of the oxidation of hexanal was performed in a jet-stirred reactor with reaction products quantified using gas chromatography and continuous wave cavity ring-down spectroscopy. This study highlighted the high reactivity of long chain aldehydes compared to their alkane homologues, as well as the formation of specific reaction products with the same skeleton as the reactant (hexanoic acid, γ- and δ-caprolactones), but also C5 ones coming from the n-pentyl radical oxidation (e.g., 2-methyl-tetrahydrofuran). Modeling efforts have been carried out to develop a mechanism accounting for the specific chemistry of the oxidation of hexanal with acceptable agreement for the reactivity and main product mole fractions. The kinetic analysis performed at 625 K highlighted the determining role played by the hexanoyl radical during the oxidation of hexanal. Because of preferential isomerizations due to the lower C-H bond energy, almost 80% of the fuel is transformed to the hexanoyl radical. The two main consumption pathways of the hexanoyl radical (decomposition via an α-scission and addition to O2) are then responsible for the selectivity of the C5 and C6 specific low-temperature reaction products which were experimentally observed. More experimental and theoretical studies on the low-temperature oxidation of aldehydes are necessary to improve the chemistry in detailed kinetic models.
Figure 1: Mole fractions of the fuel and reaction products (up to C3) obtained in the oxidation of hexanal (τ = 2 s, P = 106.7 kPa, x_fuel,inlet = 0.005). Symbols are for experiments and lines for simulation.
Figure 2: Mole fractions of C5-C6 specific low-temperature reaction products obtained in the oxidation of hexanal (τ = 2 s, P = 106.7 kPa, x_fuel,inlet = 0.005). Symbols are for experiments and lines for simulation.
Figure 3: Structures of the specific low-temperature reaction products.
Figure 4: Comparison of experimental fuel mole fractions obtained in the oxidation of n-hexane, hexanal and 1-hexanol under the same conditions (φ = 1).
Figure 6: Sensitivity analysis on the fuel mole fraction performed at 625 K (see Figure S2 in Supplementary Material for the structures of the species).
Supplementary Materials
Mechanism file (Chemkin II format); supplementary file (Word document).
The sensitivity coefficient of the mole fraction of a species i with respect to the rate constant of reaction j is given by S_ij = ∂log(x_i)/∂log(k_j). | 23,965 | [
"771950",
"16635"
] | [
"211875",
"211875"
] |
01464521 | en | [
"math"
] | 2024/03/04 23:41:44 | 2018 | https://hal.science/hal-01464521/file/diffusionKA_final.pdf | O Blondel
C Toninelli
email: [email protected]
KINETICALLY CONSTRAINED LATTICE GASES: TAGGED PARTICLE DIFFUSION
Keywords: Kawasaki dynamics, tagged particle, kinetically constrained models
Kinetically constrained lattice gases (KCLG) are interacting particle systems on the integer lattice Z d with hard core exclusion and Kawasaki type dynamics. Their peculiarity is that jumps are allowed only if the configuration satisfies a constraint which asks for enough empty sites in a certain local neighborhood. KCLG have been introduced and extensively studied in physics literature as models of glassy dynamics. We focus on the most studied class of KCLG, the Kob Andersen (KA) models. We analyze the behavior of a tracer (i.e. a tagged particle) at equilibrium. We prove that for all dimensions d ≥ 2 and for any equilibrium particle density, under diffusive rescaling the motion of the tracer converges to a d-dimensional Brownian motion with non-degenerate diffusion matrix. Therefore we disprove the occurrence of a diffusive/non diffusive transition which had been conjectured in physics literature. Our technique is flexible enough and can be extended to analyse the tracer behavior for other choices of constraints.
Introduction
Kinetically constrained lattice gases (KCLG) are interacting particle systems on the integer lattice Z d with hard core exclusion, i.e. with the constraint that on each site there is at most one particle. A configuration is therefore defined by giving for each site x ∈ Z d the occupation variable η(x) ∈ {0, 1}, which represents an empty or occupied site respectively. The dynamics is given by a continuous time Markov process of Kawasaki type, which allows the exchange of the occupation variables across a bond e = (x, y) of neighboring sites x and y with a rate c x,y (η) depending on the configuration η. The simplest case is the simple symmetric exclusion process (SSEP) in which a jump of a particle to a neighboring empty site occurs at rate one, namely c SSEP x,y (η) = (1 -η(x))η(y) + η(x)(1 -η(y)). Instead, for KCLG the jump to a neighboring empty site can occur only if the configuration satisfies a certain local constraint which involves the occupation variables on other sites besides the initial and final position of the particle. More precisely c x,y (η) is of the form c SSEP x,y r x,y (η) where r x,y (η) degenerates to zero for certain choices of {η(z)} z∈Z d \{x,y} . Furthermore r x,y does not depend on the value of η(x) and η(y) and therefore detailed balance w.r.t. ρ-Bernoulli product measure µ ρ is verified for any ρ ∈ [0, 1]. Therefore µ ρ is an invariant reversible measure for the process. However, at variance with the simple symmetric exclusion process, KCLG have several other invariant measures. This is related to the fact that due to the degeneracy of r x,y (η) there exist blocked This work has been supported by the ERC Starting Grant 680275 MALIG . configurations, namely configurations for which all exchange rates are equal to zero. KCLG have been introduced in physics literature (see [START_REF] Ritort | Glassy dynamics of kinetically constrained models[END_REF][START_REF] Garrahan | Dynamical heterogeneities in glasses, colloids, and granular media[END_REF] for a review) to model the liquid/glass transition that occurs when a liquid is suddenly cooled. In particular they were devised to mimic the fact that the motion of a molecule in a low temperature (dense) liquid can be inhibited by the geometrical constraints created by the surrounding molecules. Since the exchange rates are devised to encode this local caging mechanism, they require a minimal number of empty sites in a certain neighborhood of e = (x, y) in order for the exchange at e to be allowed. There exists also a non-conservative version of KCLG, the so called Kinetically Constrained Spin Models, which feature a Glauber type dynamics and have been recently studied in several works (see e.g. [START_REF] Cancrini | Kinetically constrained spin models[END_REF][START_REF] Blondel | Oriane Tracer diffusion at low temperature in kinetically constrained models[END_REF] and references therein).
Let us start by recalling some fundamental issues which, due to the fact that the jump to a neighboring empty site is not always allowed, require for KCLG different techniques from those used to study SSEP. A first basic question is whether the infinite volume process is ergodic, namely whether zero is a simple eigenvalue for the generator of the Markov process in L 2 (µ ρ ). This would in turn imply relaxation to µ ρ in the L 2 (µ ρ ) sense. Since the constraints require a minimal number of empty sites, it is possible that the process undergoes a transition from an ergodic to a non ergodic regime at ρ c with 0 < ρ c < 1. The next natural issue is to establish the large time behavior of the infinite volume process in the ergodic regime, when we start from equilibrium measure at time zero. This in turn is related to the scaling with the system size of the spectral gap and of the inverse of the log Sobolev constant on a finite volume. Recall that for SSEP decay to equilibrium occurs as 1/t d/2 and both the spectral gap and the inverse of the log Sobolev constant decay as 1/L 2 uniformly in the density ρ [START_REF] Quastel | Jeremy Diffusion of color in the simple exclusion process[END_REF][START_REF] Yau | Logarithmic Sobolev inequality for generalized simple exclusion processes[END_REF], where L is the linear size of the finite volume. Numerical simulations for some KCLG suggest the possibility of an anomalous slowing down at high density [START_REF] Kob | Kinetic lattice-gas model of cage effects in high-density liquids and a test of mode-coupling theory of the ideal-glass transition[END_REF][START_REF] Marinari | Spatial correlations in the relaxation of the Kob-Andersen model[END_REF] which could correspond to a scaling of the spectral gap and of the log Sobolev constant different from SSEP. Two other natural issues are the evolution of macroscopic density profiles, namely the study of the hydrodynamic limit, and the large time behavior of a tracer particle under a diffusive rescaling. For SSEP and d ≥ 2 the tracer particle converges to a Brownian motion [START_REF] Spohn | Tracer diffusion in lattice gases[END_REF], more precisely the rescaled position of the tracer at time ε -2 t converges as ε → 0, to a d-dimensional Brownian motion with non-degenerate diffusion matrix. Instead, for some KCLG it has been conjectured that a diffusive/non-diffusive transition occurs at a finite critical density ρ c < 1: the self-diffusion matrix would be non-degenerate only for ρ < ρ c [START_REF] Kob | Kinetic lattice-gas model of cage effects in high-density liquids and a test of mode-coupling theory of the ideal-glass transition[END_REF][START_REF] Kurchan | Aging in lattice-gas models with constrained dynamics[END_REF]. Concerning the hydrodynamic limit, the following holds for SSEP: starting from an initial condition that has a density profile and under a diffusive rescaling, there is a density profile at later times and it can be obtained from the initial one by solving the heat equation [START_REF] Spohn | Large scale dynamics of interacting particles[END_REF]. For KCLG a natural candidate for the hydrodynamic limit is a parabolic equation of porous media type degenerating when the density approaches one. Establishing this result in presence of constraints is particularly challenging.
In order to recall the previous results on KCLG and to explain the novelty of our results, we should distinguish among cooperative and non-cooperative KCLG. A model is said to be non-cooperative if its constraints are such that it is possible to construct a proper finite group of vacancies, the mobile cluster, with the following two properties: (i) for any configuration it is possible to move the mobile cluster to any other position in the lattice by a sequence of allowed exchanges; (ii) any nearest neighbor exchange is allowed if the mobile cluster is in a proper position in its vicinity. All models which are not non-cooperative are said to be cooperative. From the point of view of the modelization of the liquid/glass transition, cooperative models are the most relevant ones. Indeed, very roughly speaking, non cooperative models are expected to behave like a rescaled SSEP with the mobile cluster playing the role of a single vacancy and are less suitable to describe the rich behavior of glassy dynamics. Furthermore, from a mathematical point of view, cooperative models are much more challenging. Indeed, for non-cooperative models the existence of finite mobile clusters simplifies the analysis and allows the application of some standard techniques (e.g. paths arguments) already developed for SSEP.
We can now recall the existing mathematical results for KCLG. Non-cooperative models. Ergodicity in infinite volume at any ρ < 1 easily follows from the fact that with probability one there exists a mobile cluster and using path arguments (see for example [START_REF] Bertini | Cristina Exclusion processes with degenerate rates: convergence to equilibrium and tagged particle[END_REF]). In [START_REF] Bertini | Cristina Exclusion processes with degenerate rates: convergence to equilibrium and tagged particle[END_REF] it is proven in certain cases that both the inverse of the spectral gap and the log Sobolev constant in finite volume of linear size L with boundary sources 1 scale as O(L 2 ). Furthermore for the same models the self-diffusion matrix of the tagged particle is proved to be non-degenerate [START_REF] Bertini | Cristina Exclusion processes with degenerate rates: convergence to equilibrium and tagged particle[END_REF]. The diffusive scaling of the spectral gap has been proved also for some models without boundary sources in [START_REF] Nagahata | Lower bound estimate of the spectral gap for simple exclusion process with degenerate rates[END_REF]. Finally, the hydrodynamic limit has been successfully analyzed for a special class constraints in [START_REF] Gonalves | Hydrodynamic limit for a particle system with degenerate rates[END_REF]. In all these cases the macroscopic density evolves under diffusive rescaling according to a porous medium equation of the type ∂ t ρ(t, u) = ∇(D∇ρ) with D(ρ) = (1 -ρ) m and m an integer parameter.
Cooperative models. The class of cooperative models which has been most studied in physics literature are the so-called Kob Andersen (KA) models [START_REF] Kob | Kinetic lattice-gas model of cage effects in high-density liquids and a test of mode-coupling theory of the ideal-glass transition[END_REF]. KA actually denotes a class of models on Z d characterized by an integer parameter s with s ∈ [2, d]. The nearest neighbor exchange rates are defined as follows: c x,y = c SSEP x,y r x,y (η) with r x,y = 1 if at least s -1 neighbors of x different from y are empty and at least s -1 neighbors of y different from x are empty too, r x,y = 0 otherwise. In other words, a particle is allowed to jump to a neighboring empty site iff it has at least s empty neighbors both in its initial and final position. Hence s is called the facilitation parameter. The choices s = 1 and s > d are discarded for the following reasons: s = 1 coincides with SSEP, while for s > d at any density the model is not ergodic 2 . It is immediate to verify that KA is a cooperative model for all s ∈ [2, d]. For example if s = d = 2 a fully occupied double column which spans the lattice can never be destroyed. Thus no finite cluster of vacancies can be mobile since it cannot overcome the double column. In [START_REF] Toninelli | Cooperative behavior of kinetically constrained lattice gas models of glassy dynamics[END_REF] it has been proven that for all s ∈ [2, d] the infinite volume process is ergodic at any finite density, namely ρ c = 1, thus disproving previous conjectures [START_REF] Kob | Kinetic lattice-gas model of cage effects in high-density liquids and a test of mode-coupling theory of the ideal-glass transition[END_REF][START_REF] Kurchan | Aging in lattice-gas models with constrained dynamics[END_REF][START_REF] Franz | A nonstandard mechanism for the glassy transition[END_REF] on the occurrence of an ergodicity breaking transition. In [START_REF] Cancrini | Kinetically constrained lattice gases[END_REF] a technique has been devised to analyze the spectral gap of cooperative KCLG on finite volume with boundary sources. In particular, for KA model with d = s = 2 it has been proved that in a box of linear size L 1 Namely with the addition of Glauber birth/death terms at the boundary 2 This follows from the fact that if s > d there exists finite clusters of particles which are blocked. For example for s = 3, d = 2 if there is a 2 × 2 square fully occupied by particles all these particles can never jump to their neighboring empty position.
with boundary sources, the spectral gap scales as 1/L 2 (apart from logarithmic corrections) at any density. By using this result it is proved that, again for the choice d = s = 2, the infinite volume time auto-correlation of local functions decays as 1/t (modulo logarithmic corrections) [START_REF] Cancrini | Kinetically constrained lattice gases[END_REF]. The technique of [START_REF] Cancrini | Kinetically constrained lattice gases[END_REF] can be extended to prove for all choices of d and s ∈ [2, d] a diffusive scaling for the spectral gap and a decay of the correlation at least as 1/t. A lower bound as 1/t d/2 follows by comparison with SSEP.
In the present paper we analyze the behavior of a tracer (also called tagged particle) for KA models at equilibrium, namely when the infinite volume system is initialized with ρ-Bernoulli measure. We prove (Theorem 2.2) that for all d, for any choice of s ∈ [2, d] and for any ρ < 1, under diffusive scaling the motion of the tracer converges to a d-dimensional Brownian motion with non-degenerate diffusion matrix. Our result disproves the occurrence of a diffusive/non diffusive transition which had been conjectured in physics literature on the basis of numerical simulations [START_REF] Kob | Kinetic lattice-gas model of cage effects in high-density liquids and a test of mode-coupling theory of the ideal-glass transition[END_REF][START_REF] Kurchan | Aging in lattice-gas models with constrained dynamics[END_REF]. Positivity of the self-diffusion matrix at any ρ < 1 had been later claimed in [START_REF] Toninelli | Dynamical arrest, tracer diffusion and kinetically constrained lattice gases[END_REF]. However, the results in [START_REF] Toninelli | Dynamical arrest, tracer diffusion and kinetically constrained lattice gases[END_REF] do not provide a full and rigorous proof of the positivity of the self-diffusion matrix. Indeed, they rely on a comparison with the behavior of certain random walks in a random environment which is not exact. We follow here a novel route, different from the heuristic arguments sketched in [START_REF] Toninelli | Dynamical arrest, tracer diffusion and kinetically constrained lattice gases[END_REF], which allows us to obtain the first rigorous proof of positivity of the self diffusion matrix for a cooperative KCLG. In particular we prove that positivity holds for any ρ < 1 for all KA models. Our technique is flexible enough and can be extended to analyze other cooperative models in the ergodic regime.
The plan of the paper follows. In Section 2, after setting the relevant notation, we introduce KA models and state our main result (Theorem 2.2). In Section 3 we recall some basic properties of KA models: ergodicity at any ρ < 1 (Proposition 3.1), and the existence of a finite critical scale above which, with large probability, a configuration on finite volume can be connected to a framed configuration, namely a configuration with empty boundary (Lemma 3.3). In Section 4 we introduce an auxiliary diffusion process which corresponds to a random walk on the infinite component of a certain percolation cluster. Then we prove that this auxiliary process has a non-degenerate diffusion matrix (Proposition 4.2). In Section 5 we prove via path arguments that the diffusion matrix of KA is lower bounded by the one for the auxiliary process (Theorem 5.1). This allows us to conclude that the self-diffusion matrix for the KA model is non-degenerate.
Model and results
The models considered here are defined on the integer lattice Z d with sites x = (x 1 , . . . , x d ) and basis vectors e 1 = (1, . . . , 0), e 2 = (0, 1, . . . , 0), . . . , e d = (0, . . . , 1). Given x and y in Z d we write x ∼ y if they are nearest neighbors, namely d(x, y) = 1 where d(•, •) is the distance associated with the Euclidean norm. Also, given a finite set Λ ⊂ Z d we define its neighborhood ∂Λ as the set of sites outside Λ at distance one and its interior neighborhood ∂ -Λ as the set of sites inside Λ at distance one from Λ c , namely
$$\partial\Lambda := \{x \in \Lambda^c : \exists\, y \in \Lambda \ \text{s.t.}\ d(x, y) = 1\}, \qquad \partial^-\Lambda := \{x \in \Lambda : \exists\, y \in \Lambda^c \ \text{s.t.}\ d(x, y) = 1\}.$$
We denote by Ω the configuration space, Ω = {0, 1} Z d and by the greek letters η, ξ the configurations. Given η ∈ Ω we let η(x) ∈ {0, 1} be the occupation variable at site x. We fix a parameter ρ ∈ [0, 1] and we denote by µ the ρ-Bernoulli product measure. Finally, given η ∈ Ω for any bond e = (x, y) we denote by η xy the configuration obtained from η by exchanging the occupation variables at x and y, namely
$$\eta^{xy}(z) := \begin{cases} \eta(z) & \text{if } z \notin \{x, y\},\\ \eta(x) & \text{if } z = y,\\ \eta(y) & \text{if } z = x.\end{cases}$$
The Kob-Andersen (KA) models are interacting particle systems with Kawasaki type (i.e. conservative) dynamics on the lattice Z d depending on a parameter s ≤ d (the facilitation parameter) with s ∈ [2, d]. They are Markov processes defined through the generator which acts on local functions f : Ω → R as
$$\mathcal{L}^{env} f(\xi) = \sum_{x \in \mathbb{Z}^d} \sum_{y \sim x} c_{xy}(\xi)\,\big[f(\xi^{xy}) - f(\xi)\big], \qquad (2.1)$$
where
$$c_{xy}(\xi) = \begin{cases} 1 & \text{if } \xi(x)=1,\ \xi(y)=0,\ \sum_{z\sim y}\big(1-\xi(z)\big) \ge s-1 \ \text{and}\ \sum_{z\sim x}\big(1-\xi(z)\big) \ge s,\\ 0 & \text{else,}\end{cases} \qquad (2.2)$$
where here and in the following, with a slight abuse of notation, we let the sum over z ∼ y denote the sum over sites z ∈ Z^d with z ∼ y. In words, each couple of neighboring sites (x, y) waits an independent mean one exponential time and then the values η(x) and η(y) are exchanged provided: either (i) there is a particle at x, an empty site at y, at least s-1 empty nearest neighbors of y and at least s empty nearest neighbors of x; or (ii) there is a particle at y, an empty site at x, at least s empty nearest neighbors of y and at least s-1 empty nearest neighbors of x. We call the jump of a particle from x to y allowed if c_{xy}(ξ) = 1. For any ρ ∈ (0, 1), the process is reversible w.r.t. µ, the product Bernoulli measure of parameter ρ.
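To make the kinetic constraint (2.2) concrete, here is a short Python sketch (our own illustration, not part of the paper; the function names are ours) that checks whether the exchange across a given bond is allowed on a finite box, with sites outside the box counted as occupied.

```python
import numpy as np

def empty_neighbors(eta, x):
    """Number of empty nearest neighbours of site x (sites outside the box count as occupied)."""
    d = eta.ndim
    n = 0
    for k in range(d):
        for step in (-1, 1):
            z = list(x); z[k] += step
            if all(0 <= z[j] < eta.shape[j] for j in range(d)):
                n += 1 - int(eta[tuple(z)])
    return n

def c_xy(eta, x, y, s):
    """Rate c_xy(eta) of (2.2): exchange allowed iff x is occupied, y is empty,
    y has >= s-1 empty neighbours and x has >= s empty neighbours (y itself counts)."""
    if eta[x] != 1 or eta[y] != 0:
        return 0
    return int(empty_neighbors(eta, y) >= s - 1 and empty_neighbors(eta, x) >= s)

# toy check in d = 2 with s = 2 on a 4x4 box at density ~0.7
rng = np.random.default_rng(0)
eta = (rng.random((4, 4)) < 0.7).astype(int)
print(eta)
print("exchange (1,1) <-> (1,2) allowed:", c_xy(eta, (1, 1), (1, 2), s=2))
```

For the reverse move (particle at y, vacancy at x) one simply evaluates c_xy with the arguments swapped.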
We consider a tagged particle in a KA system at equilibrium. More precisely, we consider the joint process (X t , ξ t ) t≥0 on
Z^d × {0, 1}^{Z^d} with generator
$$\mathcal{L} f(X, \xi) = \sum_{y \in \mathbb{Z}^d \setminus \{X\}} \sum_{z \sim y} c_{yz}(\xi)\big[f(X, \xi^{yz}) - f(X, \xi)\big] \qquad (2.3)$$
$$\phantom{\mathcal{L} f(X, \xi) =} + \sum_{y \sim X} c_{Xy}(\xi)\big[f(y, \xi^{Xy}) - f(X, \xi)\big] \qquad (2.4)$$
and initial distribution ξ_0 ∼ µ_0 := µ(·|ξ(0) = 1), X_0 = 0.
Here and in the rest of the paper, we denote for simplicity by 0 the origin, namely site x ∈ Z d with e i • x = 0 ∀i ∈ {1, . . . , d}.
In order to study the position of the tagged particle, (X t ) t≥0 , it is convenient to define the process of the environment seen from the tagged particle (η t ) t≥0 := (τ Xt ξ t ) t≥0 , where (τ x ξ)(y) = ξ(x + y). This process is Markovian, has generator
$$\mathcal{L} f(\eta) = \sum_{y \in \mathbb{Z}^d \setminus \{0\}} \sum_{z \sim y} c_{yz}(\eta)\big[f(\eta^{yz}) - f(\eta)\big] + \sum_{y \sim 0} c_{0y}(\eta)\big[f(\tau_y(\eta^{0y})) - f(\eta)\big] \qquad (2.5)$$
and is reversible w.r.t. µ 0 . We still say that the jump of a particle from x to y is an allowed move if c xy (η) = 1. In the case x = 0, this jump in fact turns η into τ y (η 0y ). By using the fact that the process seen from the tagged particle is ergodic at any ρ < 1 (see Proposition 3.2) we can apply a classic result [START_REF] Spohn | Tracer diffusion in lattice gases[END_REF] and obtain the following.
Proposition 2.1 ([START_REF] Spohn | Tracer diffusion in lattice gases[END_REF]; the result is proved there for exclusion processes on Z^d, but the proof also works in our setting). For any ρ ∈ (0, 1), there exists a non-negative d × d matrix D(ρ) such that
$$\varepsilon X_{\varepsilon^{-2} t} \xrightarrow[\varepsilon \to 0]{} \sqrt{2 D(\rho)}\, B_t, \qquad (2.6)$$
where B is a standard d-dimensional Brownian motion and the convergence holds in the sense of weak convergence of path measures on D([0, ∞), R d ). Moreover, the matrix D(ρ) is characterized by
$$u \cdot D(\rho)\, u = \inf_{f} \Big\{ \sum_{y \in \mathbb{Z}^d \setminus \{0\}} \sum_{z \sim y} \mu_0\Big( c_{yz}(\eta)\big[f(\eta^{yz}) - f(\eta)\big]^2 \Big) + \sum_{y \sim 0} \mu_0\Big( c_{0y}(\eta)\big[u \cdot y + f(\tau_y(\eta^{0y})) - f(\eta)\big]^2 \Big) \Big\} \qquad (2.7)$$
for any u ∈ R d , where the infimum is taken over local functions f on {0, 1} Z d .
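The diffusive behaviour of the tagged particle described in Proposition 2.1 (and in Theorem 2.2 below) can be explored numerically with the crude discrete-time caricature sketched here. This is our own illustration, not the continuous-time process of the paper: random sequential exchange attempts on a periodic box with d = s = 2, tracking the unwrapped displacement of one tagged particle.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, s, steps = 20, 0.6, 2, 200_000

eta = (rng.random((n, n)) < rho).astype(int)      # occupation variables on a torus
occ = np.argwhere(eta == 1)
tag = tuple(occ[rng.integers(len(occ))])           # position of the tagged particle
disp = np.zeros(2)                                 # unwrapped displacement of the tracer

def empty_nbrs(site):
    x, y = site
    return sum(1 - eta[(x + dx) % n, (y + dy) % n]
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(steps):
    x = (rng.integers(n), rng.integers(n))
    dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
    y = ((x[0] + dx) % n, (x[1] + dy) % n)
    # constraint of (2.2): particle at x, vacancy at y, enough empty neighbours
    if eta[x] == 1 and eta[y] == 0 and empty_nbrs(y) >= s - 1 and empty_nbrs(x) >= s:
        eta[x], eta[y] = 0, 1
        if x == tag:
            tag = y
            disp += (dx, dy)

print("squared displacement after", steps, "attempts:", float(disp @ disp))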
Our main result is the following.
Theorem 2.2. Fix an integer d and s ∈ [2, d] and consider the KA model on Z^d with facilitation parameter s. Then, for any ρ ∈ (0, 1) and any i = 1, . . . , d, we have e_i · D(ρ) e_i > 0. In other words, the matrix D(ρ) is non-degenerate at any density.
Remark 2.3. Since the constraints are monotone in s (the facilitation parameter), it is enough to prove the above result for s = d. From now on we assume s = d.
Ergodicity, frameability and characteristic lengthscale
In this section we recall some key results for KA dynamics. In [START_REF] Cancrini | Kinetically constrained lattice gases[END_REF], following the arguments of [START_REF] Toninelli | Cooperative behavior of kinetically constrained lattice gas models of glassy dynamics[END_REF], it was proved that KA models are ergodic for any ρ < 1. More precisely we have the following. Proposition 3.1 (Theorem 3.5 of [START_REF] Cancrini | Kinetically constrained lattice gases[END_REF]). Fix an integer d and s ∈ [2, d] and consider the KA model on Z d with facilitation parameter s. Fix ρ ∈ (0, 1) and let µ be the ρ-Bernoulli product measure. Then 0 is a simple eigenvalue of the generator L env defined by formula (2.1) considered on L 2 (µ).
Along the same lines one can prove that the process of the environment seen from the tagged particle is ergodic on L^2(µ_0); namely, recalling that µ_0 := µ(·|ξ(0) = 1), the following holds. Proposition 3.2. 0 is a simple eigenvalue of the generator L defined by formula (2.5) considered on L^2(µ_0). Definition 3.1 (Allowed paths). Given Λ ⊂ Z^d and two configurations η, σ ∈ Ω, a sequence of configurations P_{η,σ} = (η^{(1)}, η^{(2)}, . . . , η^{(n)})
starting at η (1) = η and ending at η (n) = σ is an allowed path from η to σ inside Λ if for any i = 1, . . . , n-1 there exists a bond (x i , y i ), namely a couple of neighboring sites, with η (i+1) = (η (i) ) xiyi and c xiyi (η (i) ) = 1. We also require that paths do not go through the same configuration twice, namely for all i, j ∈ [2, n] with i = j it holds η (i) = η (j) . We say that n is the length of the path. Of course the notion of allowed path depends on the choice of the facilitation parameter s which enters in the definition of c xy . It is also useful to define allowed paths for the process seen from the tagged particle. The paths are defined as before, with the only difference that for any i = 1, . . . , n -1 there exists a bond (x i , y i ), namely a couple of neighboring sites, with c xiyi (η (i) ) = 1 and
• either x_i = 0 and η^{(i+1)} = τ_{y_i}\big((η^{(i)})^{0 y_i}\big),
• or x_i ≠ 0 and η^{(i+1)} = (η^{(i)})^{x_i y_i}.
Following the terminology of [START_REF] Toninelli | Cooperative behavior of kinetically constrained lattice gas models of glassy dynamics[END_REF] we introduce the notion of frameable and framed configurations. Definition 3.2 (Framed and frameable configurations). Fix a set Λ ⊂ Z d and a configuration ω ∈ Ω. We say that ω is Λ-framed if ω(x) = 0 for any x ∈ ∂ -Λ. Let ω (Λ) be the configuration equal to ω Λ inside Λ and equal to 1 outside Λ. We say that ω is Λ-frameable if there exist a Λ-framed configuration σ (Λ) with at least one allowed configuration path P ω (Λ) →σ (Λ) inside Λ (by definition any framed configuration is also frameable). Sometimes, when from the context it is clear to which geometric set Λ we are referring, we will drop Λ in the names and just say framed and frameable configurations. Of course the notion of frameable configurations depends on the choice of the facilitation parameter s.
The following result, proved in [START_REF] Toninelli | Cooperative behavior of kinetically constrained lattice gas models of glassy dynamics[END_REF][START_REF] Cancrini | Kinetically constrained lattice gases[END_REF], shows that on a sufficiently large lengthscale frameable configurations are typical.
Lemma 3.3 ([4, Lemma 3.4]). For any dimension d, any ρ < 1 and any ε > 0, there exists Ξ = Ξ(ρ, ε, d) < ∞ such that, for the KA process in Z^d with facilitation parameter d and for L ≥ Ξ, it holds
$$\mu\big(\xi \text{ is } \Lambda_L\text{-frameable}\big) \ge 1 - \varepsilon, \qquad (3.1)$$
where we set Λ L = [0, L] d .
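Frameability can be explored by brute force on very small boxes. The Python sketch below is our own illustration (it reads Definition 3.1 as allowing exchanges between neighbouring sites of Λ, with the constraint evaluated in the configuration that is fully occupied outside Λ); it estimates, for d = s = 2, the probability that a random configuration on a small square is frameable.

```python
import random
from collections import deque
from itertools import product

L, rho, s, samples = 4, 0.2, 2, 200              # d = 2, facilitation parameter s = d = 2
random.seed(0)
sites = list(product(range(L), repeat=2))
site_set = set(sites)
boundary = [p for p in sites if 0 in p or L - 1 in p]

def nbrs(p):
    x, y = p
    return ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))

def empty_nbrs(vac, p):
    # sites outside the box count as occupied (the configuration is all 1 outside Lambda)
    return sum(1 for q in nbrs(p) if q in vac)

def frameable(vac0):
    """BFS over configurations reachable by allowed exchanges inside the box."""
    seen, todo = {vac0}, deque([vac0])
    while todo:
        vac = todo.popleft()
        if all(p in vac for p in boundary):      # framed: interior boundary is empty
            return True
        for y in vac:                            # y is a vacancy
            for x in nbrs(y):                    # candidate occupied site inside the box
                if x not in site_set or x in vac:
                    continue
                if empty_nbrs(vac, y) >= s - 1 and empty_nbrs(vac, x) >= s:
                    new = (vac - {y}) | {x}      # particle jumps from x to y
                    if new not in seen:
                        seen.add(new)
                        todo.append(new)
    return False

hits = sum(frameable(frozenset(p for p in sites if random.random() > rho))
           for _ in range(samples))
print(f"L={L}, rho={rho}: estimated frameability probability {hits / samples:.2f}")
```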
An auxiliary diffusion
In this section we will introduce a bond percolation process on a properly renormalized lattice and an auxiliary diffusion which corresponds to a random walk on the infinite component of this percolation. Then we will prove that this auxiliary process has a non-degenerate diffusion matrix (Proposition 4.2). This result will be the key starting point of the next section, where we will prove our main Theorem 2.2 by comparing the diffusion matrix of the KA model with the diffusion matrix of the auxiliary process (Theorem 5.1).
In order to introduce our bond percolation process we need some auxiliary notation. Fix a parameter L ∈ N and consider the renormalized lattice (L + 2)Z^d. For n ∈ {0, . . . , d}, let B^{(n)} := {0, 1}^{d-n} × {0, . . . , L-1}^n. We say that B^{(n)} is the elementary block of L-dimension n, and we denote by B^{(n)}_i, i = 1, . . . , (d choose n), the blocks obtained from B^{(n)} by permutations of the coordinates. Notice that one can write the cube of side length L + 2 as a disjoint union of such blocks in the following way (see Figure 1):
$$\Lambda_{L+2} := \{0, \dots, L+1\}^d = B^{(0)} \sqcup \bigsqcup_{n=1}^{d} \bigsqcup_{i=1}^{\binom{d}{n}} \Big(B^{(n)}_i + 2e_{j_{i1}} + \dots + 2e_{j_{in}}\Big), \qquad (4.1)$$
where the block B^{(n)}_i has length L in the directions e_{j_{i1}}, . . . , e_{j_{in}} and length 2 in the other directions. By first decomposing Z^d in blocks of linear size L + 2 and then using this decomposition, we finally get a paving of Z^d by blocks with side lengths in {2, L}. We will speak of liaison tubes, or just tubes, for blocks of L-dimension 1, and of facilitating blocks for blocks of L-dimension 2 or larger. We will also call faces of B^{(n)}_i the 2^{d-n} (disjoint) regions of the form x_{j_{i1}} ∈ [0, L-1], . . . , x_{j_{in}} ∈ [0, L-1] and x_j = c_j with c_j ∈ {0, 1} for all j ∉ {j_{i1}, . . . , j_{in}}. Finally, for x ∈ (L + 2)Z^d and i = 1, . . . , d, we define the block neighborhood N_{x,i} of (x, x + (L + 2)e_i) recursively in the L-dimension of the blocks (see also Figure 2):
• B (0) + x and B (0) + x + (L + 2)e i belong to N x,i , • each tube adjacent to B (0) + x or B (0) + x + (L + 2)e i belongs to N x,i , • recursively, each block of L-dimension n + 1 adjacent to some block of L-dimension n in N x,i is also in N x,i .
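The decomposition (4.1) can be sanity-checked by counting: a block of L-dimension n has volume 2^{d-n} L^n and there are (d choose n) of them, so the volumes must add up to (L + 2)^d. A two-line numerical check (ours, purely illustrative):

```python
from math import comb

for d in (2, 3, 4):
    for L in (3, 5, 10):
        total = sum(comb(d, n) * 2 ** (d - n) * L ** n for n in range(d + 1))
        assert total == (L + 2) ** d   # binomial theorem: sum_n C(d,n) 2^(d-n) L^n = (L+2)^d
print("volumes of the blocks in (4.1) add up to (L+2)^d")
```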
We are now ready to define our bond percolation process. Let E((L + 2)Z^d) be the set of bonds of (L + 2)Z^d. Given a configuration η ∈ {0, 1}^{Z^d}, the corresponding configuration on the bonds η ∈ {0, 1}^{E((L+2)Z^d)} is defined by η_{x, x+(L+2)e_i} = 1 iff (1) each tube in N_{x,i} contains at least one zero, and (2) for all n = 2, . . . , d and for every block B of L-dimension n in N_{x,i}, with faces Λ_{B,j}, j ∈ [1, 2^{d-n}], the configuration is Λ_{B,j}-frameable for the KA process with parameter n, for every j.
In other words, the edge (x, x + (L + 2)e_i) is open if (1) and (2) are satisfied, and closed otherwise. See Figure 2 for an example of an open bond. Note that conditions (1) and (2) do not ask anything of the configuration inside B^{(0)} + x and B^{(0)} + x + (L + 2)e_i. As a consequence, the distribution of η is the same for η ∼ µ as for η ∼ µ_0. We denote it by μ.
Lemma 4.1. μ is a bond percolation process with finite dependence range (depending only on d and L), and its parameter μ(η_{x, x+(L+2)e_i} = 1) tends to 1 as L → ∞.
Proof. To bound the dependence range, it is enough to check that for x, y ∈ (L + 2)Z^d at distance at least (L + 2)(d + 2), N_{x,i} and N_{y,j} are disjoint for any i, j = 1, . . . , d.
We now show that the percolation parameter goes to 1 with L. First, the number of blocks in N x,i depends only on d and the configurations inside the different blocks in N x,i are independent, so we just need to show that the probability for each block to satisfy condition (1) or (2) (depending on its L-dimension) goes to one. This is clearly true for condition [START_REF] Bertini | Cristina Exclusion processes with degenerate rates: convergence to equilibrium and tagged particle[END_REF], since the probability that a given tube contains a zero is 1 -ρ 2 d-1 L . For condition (2), consider a block of L-dimension n with n ≥ 2, and notice that under either µ or µ 0 , the configurations inside the 2 d-n different n-dimensional faces of the block are independent since the faces are disjoint. The conclusion therefore follows from Lemma 3.3. Now we can define the auxiliary process (Y t ) t≥0 , which lives on (L + 2)Z d and whose diffusion coefficient we will compare with D(ρ). Fix L(ρ) so that under μ the open cluster percolates. Y is the simple random walk on the infinite percolation cluster. More precisely, let μ * := μ(•|0 ↔ ∞), where as usual we write "0 ↔ ∞" for "0 belongs to the infinite percolation cluster". Y 0 := 0 and from x ∈ (L + 2)Z d , Y jumps to x ± (L + 2)e i at rate ηx,x±(L+2)ei . We write P aux μ * for the distribution of this random walk. Proposition 4.2. For L(ρ) large enough, there exists a positive (non-degenerate)
d × d matrix D_aux(ρ) such that, under P^{aux}_{μ_*},
$$\varepsilon Y_{\varepsilon^{-2} t} \xrightarrow[\varepsilon \to 0]{} \sqrt{2 D_{aux}(\rho)}\, B_t, \qquad (4.2)$$
where B is a standard d-dimensional Brownian motion and the convergence holds in the sense of weak convergence of path measures on D([0, ∞), R d ). The matrix
D_aux(ρ) is characterized by
$$u \cdot D_{aux}(\rho)\, u = \inf_{f} \sum_{y \sim_{(L+2)\mathbb{Z}^d} 0} \mu_*\Big( \eta_{0,y}\,\big[u \cdot y + f(\tau_y(\eta)) - f(\eta)\big]^2 \Big) > 0 \qquad (4.3)$$
for any u ∈ R d , where the infimum is taken over local functions f on {0, 1} E((L+2)Z d ) .
Proof. The convergence to Brownian motion (formula 4.2) was proved in [START_REF] De Masi | An invariance principle for reversible Markov processes. Applications to random motions in random environments[END_REF] in the case of independent bond percolation and the variational formula (the equality in (4.3) was established in [START_REF] Spohn | Tracer diffusion in lattice gases[END_REF]). As pointed out in Remark 4.16 of [START_REF] De Masi | An invariance principle for reversible Markov processes. Applications to random motions in random environments[END_REF], independence is only needed to show positivity of the diffusion coefficient. This property indeed relies on the fact that the effective conductivity in a box of size N is bounded away from 0 as N → ∞. Therefore to prove the positive lower bound of formula 4.3 we just need to show that this property holds under μ if L is large enough. Notice that we just need to prove the result in dimension d = 2. In fact, in order to prove that e 1 • D aux e 1 > 0, we just need to find a lower bound on the number of disjoint open paths from left to right in [1, (L + 2)N ] 2 ([5, Proposition 3.2]). The other directions are similar. More precisely, we only need to show that for L large enough there exists λ > 0 such that for N large enough,
μ(at least λN disjoint left-right open paths in [1, (L + 2)N ] 2 ) ≥ 1 -e -λN . (4.4)
To this aim, we embed open paths in μ into open paths of yet another percolation process built from µ. Let μ be the independent site percolation process defined as follows. The underlying graph is Ẑ2 = 3(L + 2)Z × 2(L + 2)Z. For x ∈ Z 2 , we let x = (L + 2)(3x 1 , 2x 2 ) and Nx be the union of N x,1 and the tubes just above and to the right (Figure 3). We say that x is ∧ -open if each tube (resp. each block) in Nx satisfies condition (1) (resp. ( 2)). This defines a probability measure μ on {0, 1} Ẑ2 , which is an independent site percolation process since Nx
∩ N_{x'} = ∅ if x ≠ x'. Moreover, μ(0 is ∧-open) → 1 as L → ∞, similarly to what we proved in Lemma 4.1. We now use [START_REF] Kesten | Harry Percolation theory for mathematicians[END_REF], Theorem 11.1, to say that (4.4) holds with μ replaced by this site percolation process and "open" by "∧-open". In order to deduce (4.4), notice that the two processes are transparently coupled since they are both constructed from µ. Moreover, it is clear that for x ∈ Ẑ^2 the following holds: • x is ∧-open implies that (x, x + (L + 2)e_1) is open (for our dependent percolation process); • x and x + 3(L + 2)e_1 both ∧-open implies that (x, x + (L + 2)e_1), (x + (L + 2)e_1, x + 2(L + 2)e_1) and (x + 2(L + 2)e_1, x + 3(L + 2)e_1) are open; • x and x + 2(L + 2)e_2 both ∧-open implies that (x, x + (L + 2)e_2) and (x + (L + 2)e_2, x + 2(L + 2)e_2) are open. Therefore, for the natural coupling between the two processes, existence of disjoint ∧-open paths implies existence of disjoint open paths and (4.4) follows.
Comparison of the diffusion coefficients and proof of Theorem 2.2
The main result of this section is the following Theorem, which states that the self diffusion matrix for KA is lower bounded by the self diffusion matrix for the auxiliary model introduced in the previous section. This result will be proved by using the variational characterisation of the diffusion matrices and via path arguments. More precisely, for any move (x, ξ) → (x , ξ ) which has rate > 0 for the auxiliary process we construct (in Lemmata 5.3, 5.4, 5.5, 5.6, 5.7) a path of moves, each having positive rate for the KA process and connecting (x, ξ) to (x , ξ ). Once Theorem 5.1 is proved, our main result follows.
Proof of Theorem 2.2. The result follows by using Proposition 4.2 and Theorem 5.1.
We are therefore left with the proof of Theorem 5.1. Let us start by establishing some key Lemmata. Let A = {ξ(0) = 1, ξ(x) = 0 ∀x ∈ {0, 1}^d \ 0}. Define µ_A = µ(·|A) and denote by η^{xy,□} the configuration obtained from η by exchanging the contents of the boxes x + {0, 1}^d and y + {0, 1}^d. Then the following holds.
Lemma 5.2.
$$e_i \cdot D_{aux}\, e_i \le \mu(0 \leftrightarrow \infty)^{-1} \inf_{f} \sum_{y \sim_{(L+2)\mathbb{Z}^d} 0} \mu_A\Big( \eta_{0,y}\,\big[y_i + f(\tau_y(\eta^{0y,\square})) - f(\eta)\big]^2 \Big), \qquad (5.1)$$
where the infimum is taken over local functions f on {0, 1}^{Z^d}.
Proof. Let f be a local function on {0, 1}^{Z^d}. We associate with it a local function f̂ on {0, 1}^{E((L+2)Z^d)}, defined by f̂(η̂) = µ_A(f | η̂), where η̂ denotes the bond configuration associated with η. Then, for y ∼_{(L+2)Z^d} 0, since η̂ does not depend on the configuration η inside {0, 1}^d and y + {0, 1}^d, we can bound
$$\mu_A\Big(\eta_{0,y}\big[y_i + f(\tau_y(\eta^{0y,\square})) - f(\eta)\big]^2\Big) = \mu_A\Big(\eta_{0,y}\, \mu_A\big(\big[y_i + f(\tau_y(\eta^{0y,\square})) - f(\eta)\big]^2 \,\big|\, \hat\eta\big)\Big) \ge \mu\Big(\eta_{0,y}\big[y_i + \hat f(\tau_y(\hat\eta)) - \hat f(\hat\eta)\big]^2\Big) \ge \mu_*\Big(\eta_{0,y}\big[y_i + \hat f(\tau_y(\hat\eta)) - \hat f(\hat\eta)\big]^2\Big)\, \mu(0 \leftrightarrow \infty).$$
Therefore the result follows by (4.3).
The next sequence of Lemmata will show that for all η, y such that η0,y = 1 and η ∈ A, there exists an allowed path from η to τ y η 0y, of finite length. In order to avoid heavy notations, we will sometimes adopt an informal description of the allowed paths in the proofs. For simplicity, we state the results in the case y = (L + 2)e 1 , but the process would be the same in any direction. In the following, c(d) denotes a constant depending only on d which may change from line to line.
Lemma 5.3. Let η ∈ {0, 1} Z d such that η0,(L+2)e1 = 1. Choose a block of L- dimension n ∈ [2, d] inside N 0,1 , call it Λ.
Then, using at most c(d)2 L d allowed moves, one can empty every site on its interior boundary ∂ -Λ (see Figure 2). Proof. For the blocks of L-dimension d, this follows from the condition on η which implies frameability of these blocks. The number of necessary moves is bounded by the number of configurations inside one block times the number of involved blocks. Then we deal with the blocks of L-dimension d -1, . . . 2 iteratively. Note that the frameability condition given by the definition of η is such that a block of L-dimension k ∈ {2, . . . , d -1} is frameable (in the sense of the KA process in dimension d) as soon as the neighboring blocks of L-dimension k + 1 are framed. Indeed, for k < d, every site x in a block of L-dimension k is adjacent to a point in the interior boundary of d -k different blocks of dimension k + 1 (which belong to N 0,1 by construction). Therefore the path allowed by KA model with parameter k in order to frame the configuration (which exists thanks to condition (2)), is also allowed by KA model with parameter d (since the missing d -k empty sites are found in the interior boundary of the neighboring framed block). After this step, the tubes in N 0,1 are wrapped by zeros, namely for any site x inside a tube, any neighbor of x that belongs to a facilitating block (i.e. to a block of L-dimension ≥ 2) is empty. Next we notice that inside a tube wrapped by zeros, the jump of a particle to a neighboring empty site is always allowed (since the wrapping guarantees an additional zero in the initial and in the final position of the particle). More precisely the following holds Proof. Due to Lemmata 5.3, 5.4, 5.5, 5.6, 5.7, we know that for all η such that η0,y = 1 and η ∈ A, there exists an allowed path from η to τ y η 0y, of length upper bounded by C 2 L d for some finite constant C . In Figure 4, we give the main steps in the construction of such a path. In particular,
$$\sum_{k=0}^{N-1} \mathbf{1}_{x^{(k)} = 0}\, y^{(k)}_i = y_i.$$
Then we can write
$$y_i + f(\tau_y(\eta^{0y,\square})) - f(\eta) = \sum_{k=0}^{N-1} \Big( \mathbf{1}_{x^{(k)} = 0}\, y^{(k)}_i + f(\eta^{(k+1)}) - f(\eta^{(k)}) \Big). \qquad (5.3)$$
Figure 1. The covering of {0, . . . , L + 1}^d by blocks B^{(n)}_i for d = 2, 3.
Figure 2. A configuration in which the block neighborhood N_{x,1} is such that η_{x, x+(L+2)e_1} = 1, in dimension 2. We represent the frameable blocks as already framed.
Figure 3. N_x in a configuration such that x is ∧-open.
Lemma 5.4. Fix i ∈ [1, . . . , d] and choose any configuration ξ such that B^{(1)}_i is wrapped by zeros. Fix x ∼ y with x ∈ B^{(1)}_i and y ∈ B^{(1)}_i. Then c_{x,y}(ξ) = 1. Therefore, if ξ, ξ' are two configurations that are both empty on ∂B^{(1)}_i and have the same number of zeros inside B^{(1)}_i, then there is an allowed path of length c(d)L from ξ to ξ'. Moreover, if x, x' ∈ B^{(1)}_i, and ξ, ξ' have the same positive number of zeros inside the tube and the tracer respectively at x, x', it takes at most c(d)L allowed moves inside the tube to change ξ into ξ' and take the tracer from x to x'.
Figure 4. An example of the main steps in the construction of the path from η (in line 1) to τ_y(η^{0y,□}) (in line 10) when y = (L + 2)e_1. Only the liaison tube is represented. From line 1 to 2 we use Lemma 5.5; line 2 to 3: Lemmata 5.6 and 5.4 twice; line 3 to 4: Lemmata 5.6 and 5.4; line 4 to 5: Lemmata 5.7 and 5.6; line 5 to 6: Lemmata 5.6 and 5.4; line 6 to 7: Lemmata 5.4, 5.6 and 5.7; line 7 to 8: Lemmata 5.6 and 5.4; line 8 to 9: Lemmata 5.7 and 5.6; from 9 to 10: Lemmata 5.4, 5.6 and 5.5.
3 ) 2 . 2 ≤ 2 ≤ ( 1 k=0 1 xk=0 1 x 1 k=0 1 x 2 ≤ 3 x
32221111123 By Cauchy-Schwarz inequality, we deduce that[y i + f (τ y (η 0y, )) -f (η)] 2 ≤ C 2 L d N -1 k=0 c x (k) y (k) (η (k) ) 1 x (k) =0 y (k) i + f (η (k+1) ) -f (η k) ) (5.4) Therefore, µ A η0,y [y i + f (τ y (η 0y, )) -f (η)] 2 = (1-ρ) 1-2 d µ 0 1 A η0,y [y i + f (τ y (η 0y, )) -f (η)] (1-ρ) 1-2 d C 2 L d µ 0 η0,y N -1 k=0 c x (k) y (k) (η (k) ) 1 x (k) =0 y (k) i + f (η (k+1) ) -f (η k) ) (k) =x,y (k) =z,η (k) =η c xz (η ) [f (η xz ) -f (η )] (k) =0,y (k) =z,η (k) =η c xz (η ) [z i + f (τ z η xz ) -f (η )] the sums are taken over x ∼ z inside N 0,i , η, η ∈ {0, 1} N0,i with the same number of zeros, and the equality η(k) = η actually means η (k) = τ Y k η , where Y k = k-1 m=0 y (j) . Since η and η have the same number of zeros and the tracer at zero by construction, µ 0 (η) = µ 0 (η ) and we can bound η0,yN -(k) =0,y (k) =z,η (k) =η by N ≤ C 2 L d to obtain µ A η0,y [y i + f (τ y (η 0y, )) -f (η)] (1 -ρ) 1-2 d (C 2 L d ) =0 z∼x µ 0 c xz (η)[f (η xz ) -f (η)] 2 + z∼0 µ 0 c 0z (η)[z i + f (τ z (η 0z )) -f (η)] 2 . (5.6)Finally, we can conclude.Proof of Theorem 5.1. The result follows from Lemma 5.2, Lemma 5.8 and the variational formula for D in Proposition 2.1.
Proof. One just needs to notice that the wrapping ensures the satisfaction of the constraint for any such exchange.
Lemma 5.5. Fix a configuration such that: there is at least one zero in each tube inside N_{0,1}; each such tube is wrapped by zeros; the tracer is at zero and the remaining sites of {0, 1}^d are empty. Then the tracer can be moved to any position in B^{(1)}_1 + 2e_1, namely for any y ∈ B^{(1)}_1 + 2e_1 there is an allowed path from (0, η) to (y, η') for at least one configuration η'.
Proof. It is clear that the tracer can get to e 1 and we can bring a zero to 2e 1 thanks to Lemma 5.4. Then we can exchange the configuration in e 1 and 2e 1 , take the zero in e 1 + e 2 inside the tube (namely exchange the configuration in e 1 + e 2 and 2e 1 + e 2 thanks to the empty site in 2e 1 + 2e 2 guaranteed by the wrapping), use Lemma 5.4 again to get the to the desired position, and take the zero back to e 1 + e 2 (if the desired position is 2e 1 + e 2 there is no need to take the zero in e 1 + e 2 inside the tube).
Lemma 5.6. Fix a configuration such that all tubes adjacent to B (0) are wrapped by zeros and they all contain a zero except possibly B 1 + 2e 1 and therefore contain a zero that can be brought to a site adjacent to x using Lemma 5.4. The constraint for the exchange is then satisfied if the configurations differ at x, x + e 1 (else the exchange is pointless).
Lemma 5.7. Fix a configuration such that: all tubes adjacent to B^{(0)} are wrapped by zeros and they all contain a zero, except possibly B^{(1)}_1 + 2e_1; either the slice {0} × {0, 1}^{d-1} or the slice {1} × {0, 1}^{d-1} is completely empty. Then we can exchange the configurations in {0} × {0, 1}^{d-1} and {1} × {0, 1}^{d-1} in at most c(d)L allowed moves.
Proof. We describe the case {0} × {0, 1} d-1 empty. Order arbitrarily the zeros in positions x ∈ {0} × {0, 1} d-1 and move them one by one to x + 1. When attempting to move the i-th zero, initially in position x, a certain number N i of its neighbors in slice {0}×{0, 1} d-1 have not been touched and are still empty. The other d-1-N i zeros are now in neighboring positions of x + e 1 . Moreover, there are d -1 tubes adjacent to both x and x + e 1 . In N i of those, we take the zero to the position adjacent to x + e 1 , and in the other d -1 -N i to the position adjacent to x. Now the condition to exchange the variables at x, x + e 1 is satisfied.
We are now ready to prove the following key result | 42,540 | [
"8017"
] | [
"441569",
"193738",
"521755",
"441569",
"1004954"
] |
00145537 | en | [
"sdv",
"info"
] | 2024/03/04 23:41:44 | 2007 | https://inria.hal.science/inria-00145537v2/file/RR-6190.pdf | Keywords: Regulatory networks, microarray experiments, qualitative modeling Réseaux de régulation, puces à ADN, modélisation qualitative
We proposed in previous articles a qualitative approach to check the compatibility between a model of interactions and gene expression data. The purpose of the present work is to validate this methodology on a real-size setting. We study the response of the Escherichia coli regulatory network to nutritional stress, and compare it to publicly available DNA microarray experiments. We show how the incompatibilities we found reveal missing interactions in the network, as well as observations in contradiction with available literature.
Introduction
There exists a wide range of techniques for the analysis of gene expression data. Following a review by Slonim [START_REF] Slonim | From patterns to pathways: gene expression data analysis comes of age[END_REF], we may classify them according to the particular output they compute: 1. list of significantly over/under-expressed genes under a particular condition, 2. dimension reduction of expression profiles for visualization, 3. clustering of co-expressed genes, 4. classification algorithms for protein function, tissue categorization, disease outcome, 5. inferred regulatory networks.
The last category may be extended to all model-based approaches, where experimental measurements are used to build, verify or refine a model of the system under study. Following this line of research, we showed in previous papers (see [START_REF] Radulescu | Topology and static response of interaction networks in molecular biology[END_REF], [START_REF] Siegel | Qualitative analysis of the relation between DNA microarray data and behavioral models of regulation networks[END_REF] and [START_REF] Veber | Complex qualitative models in biology: A new approach[END_REF]) how to define and to check consistency between experimental measurements and a graphical regulatory model formalized as an interaction graph. The purpose of the present work is to validate this methodology on a real-size setting. More precisely, we show 1. that the algorithms we proposed in [START_REF] Veber | Complex qualitative models in biology: A new approach[END_REF] are able to handle models with thousands of genes and reactions, 2. that our methodology is an effective strategy to extract biologically relevant information from gene expression data. For this we built an interaction graph for the regulatory network of E. coli K12, mainly relying on the highly accurate database RegulonDB [START_REF] Salgado | RegulonDB (version 5.0): Escherichia coli K-12 transcriptional regulatory network, operon organization, and growth conditions[END_REF], [START_REF] Salgado | The comprehensive updated regulatory network of Escherichia coli K-12[END_REF]. Then we compared the predictions of our model with three independant microarray experiments. Incompatibilities between experimental data and our model revealed: either expression data that is not consistent with results showed in literaturei.e. there is at least one publication which contradicts the experimental measurement, either missing interactions in the model We are not the first to address this issue. Actually, in the work of Gutierrez-Rios and coworkers [START_REF] Rosa | Regulatory network of Escherichia coli: consistency between literature knowledge and microarray profiles[END_REF], an evaluation of the consistency between literature and microarray experiments of E. coli K12 was presented. The authors designed on-purpose microarray experiments in order to measure gene expression profiles of the bacteria under different conditions. They evaluate the consistency of their experimental results first with those reported in the literature, second with a rule-based formalism they propose. Our main contribution is the use of algorithmic tools that allow inference/prediction of gene expression of a big percentage of the network, and diagnosis in the case of inconsistency between a model and expression data.
Mathematical framework
Introductory example
We choose as an illustration a model for the lactose metabolism in the bacterium E.Coli (lactose operon). The interaction graph corresponding to the model is presented in Fig. 1. This is a common representation for biochemical systems where arrows show activation RR n ¡ 6190 or inhibition. Basically, an arrow between A and B means that an increase of A tends to increase or decrease B depending on the shape of the arrow head. Common sense and simple biological intuition can be used to say that an increase of allolactose (node A on Figure 1) should result in a decrease of LacI protein. However, if both LacI and cAM P -CRP increase, then nothing can be said about the variation of LacY .
The aim of this section is first, to provide a formal interpretation for the graphical notation used in Figure 1; second, to derive constraints on experimental measurements, which justify our small scale common sense reasoning; finally apply these constraints to the scale of data produced by high throughput experimental techniques. For this, we resort to qualitative modeling ( [START_REF] Kuipers | Qualitative reasoning[END_REF]), which may be seen as a principled way to derive a discrete system from a continuous one.
Equilibrium shift of a differential system
Let us consider a network of n interacting cellular constituents (mRNA, protein, metabolite). We denote by X i the concentration of the i th species, and by X the vector of concentrations (whose components are X i ). We assume that the system can be adequately described by a system of differential equations of the form dX dt = F(X, P), where P denotes a set of control parameters (inputs to the system). A steady state of the system is a solution of the system of equations F(X, P) = 0 for fixed P.
A typical experiment consists in applying a perturbation (change P) to the system in a given initial steady state condition eq1, wait long enough for a new steady state eq2, and record the changes of X i . Thus, we shall interpret the sign of DNA chips differential data as the sign of the variations X eq2 i -X eq1 i . The particular form of vector function F is unknown in general, but this will not be needed as we are interested only in the signs of the variations. Indeed, the only information we need about F is the sign of its partial derivatives ∂Fi ∂Xj . We call interaction graph the graph whose nodes are the constituents {1, . . . , n}, and where there is an edge j → i iff ∂Fi ∂Xj = 0 (an arrow j → i means that the rate of production of i depends on X j ). As soon as F is non linear, ∂Fi ∂Xj may depend on the actual state X. In the following, we will assume that the sign of ∂Fi ∂Xj is constant, that is, that the interaction graph is independent of the state. This rather strong hypothesis, can be replaced by a milder one specified in [START_REF] Radulescu | Topology and static response of interaction networks in molecular biology[END_REF][START_REF] Siegel | Qualitative analysis of the relation between DNA microarray data and behavioral models of regulation networks[END_REF] meaning essentially that the sign of the interactions do not change on a path of intermediate states connecting the initial and the final steady states.
Qualitative constraints
In the following, we introduce an equation that relates the sign of variation of a species to that of its predecessors in the interaction graph. To state this result with full rigor, we need to introduce the following algebra on signs.
We call sign algebra the set {+, -, ?} (where ? stands for indeterminate), endowed with addition, multiplication and qualitative equality, defined as:
Addition: + + + = +;  - + - = -;  + + - = ?;  ? + + = ?;  ? + - = ?;  ? + ? = ?.
Multiplication: + × + = +;  - × - = +;  + × - = -;  ? × + = ?;  ? × - = ?;  ? × ? = ?.
Qualitative equality (≈): + ≈ + and - ≈ - are true; + ≈ - is false; ? ≈ s is true for every sign s.
Some particularities of this algebra deserve to be mentioned: the sum of + and - is indeterminate, as is the sum of anything with indeterminate; qualitative equality is reflexive and symmetric but not transitive, because ? is qualitatively equal to anything; this last property is an obstacle against the application of classical elimination methods for solving linear systems.
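The sign algebra is easy to implement directly; the short Python sketch below (our own illustration, with '+', '-', '?' encoded as strings) reproduces the addition, multiplication and qualitative-equality rules above.

```python
def qadd(a, b):
    # addition: indeterminate absorbs everything, and '+' plus '-' is indeterminate
    if '?' in (a, b) or {a, b} == {'+', '-'}:
        return '?'
    return a                      # here a == b

def qmul(a, b):
    if '?' in (a, b):
        return '?'
    return '+' if a == b else '-'

def qeq(a, b):
    # qualitative equality: '?' is compatible with every sign
    return a == b or '?' in (a, b)

assert qadd('+', '-') == '?' and qadd('-', '-') == '-'
assert qmul('-', '-') == '+' and qmul('?', '+') == '?'
assert qeq('?', '+') and not qeq('+', '-')
```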
To summarize, we consider experiments that can be modelled as an equilibrium shift of a differential system under a change of its control parameters. In this setting, DNA chips provide the sign of variation in concentration of many (but not necessarily all) species in the network. We consider the signs s(X eq2 i -X eq1 i ) of the variation of some species i between the initial state X eq1 and the final state X eq2 . Both states are stationary and unknown.
In [START_REF] Radulescu | Topology and static response of interaction networks in molecular biology[END_REF], we proved that under some reasonable assumptions, in particular if the sign of ∂Fi ∂Xj is constant in states along a path connecting eq1 and eq2, then the following relation holds in sign algebra for all species i:
$$s\big(X_i^{eq2} - X_i^{eq1}\big) \;\approx\; \sum_{j \in \mathrm{pred}(i)} s\Big(\frac{\partial F_i}{\partial X_j}\Big)\, s\big(X_j^{eq2} - X_j^{eq1}\big) \qquad (1)$$
where s : R → {+, -} is the sign function, and where pred(i) stands for the set of predecessors of species i in the interaction graph. This relation is similar to a linearization of the system F(X, P) = 0. Note however that, as we only consider signs and not quantities, this relation is valid even for large perturbations (see [START_REF] Radulescu | Topology and static response of interaction networks in molecular biology[END_REF] for a complete proof).
Analyzing a network: a simple example
Let us now describe a practical use of these results. Given an interaction graph, say for instance the graph illustrated in Figure 1, we use Equation 1 at each node of the graph to build a qualitative system of constraints. The variables of this model are the signs of variation for each species. The qualitative system associated to our lactose operon model is proposed in the right side of Figure 1. In order to take into account observations, measured variables should be replaced by their sign values. A solution of the qualitative system is defined as a valuation of its variables, which does not contain any "?" (otherwise, the constraints would have a trivial solution with all variables set to "?") and that, according to the qualitative equality algebra, will satisfy all qualitative constraints in the system. If the model is correct and if data is accurate, then the qualitative system must posses at least one solution.
A first step then is to check the self-consistency of the graph, that is, to find out whether the qualitative system without observations has at least one solution.
LacI ≈ -A (1);  A ≈ LacZ (2);  LacZ ≈ cAMP - LacI (3);  Li ≈ Le + LacY - LacZ (4);  G ≈ Li + LacZ (5);  cAMP ≈ -G (6);  LacY ≈ cAMP - LacI (7)
Figure 1: Interaction graph for the lactose operon and its associated qualitative system. In the graph, arrows ending with ">" or "-|" imply that the initial product activates or represses the production of the product of arrival, respectively.
Checking consistency between experimental measurements and an interaction graph boils down to instantiating the variables which are measured with their experimental values, and seeing if the resulting system still has a solution. If this is the case, then it is possible to determine whether the model predicts some variations. Namely, it happens that a given variable has the same value in all solutions of the system. We call such a variable a hard component. The values of the hard components are the predictions of the model.
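For the small system of Figure 1 this check can be done by brute force. The sketch below is our own illustration, not the PYQUALI implementation: it hard-codes equations (1)-(7), enumerates all sign assignments, keeps those satisfying every constraint in the qualitative-equality sense, and reports the hard components once an observation (here, an increase of allolactose A, as in the introductory example) is fixed.

```python
from itertools import product

def qmul(a, b): return '?' if '?' in (a, b) else ('+' if a == b else '-')
def qadd(a, b): return '?' if '?' in (a, b) or {a, b} == {'+', '-'} else a
def qsum(vs):
    t = vs[0]
    for v in vs[1:]:
        t = qadd(t, v)
    return t
def qeq(a, b):  return a == b or '?' in (a, b)

variables = ['LacI', 'A', 'LacZ', 'cAMP', 'Li', 'Le', 'G', 'LacY']
system = {                      # equations (1)-(7) of Figure 1: node ~ signed sum of predecessors
    'LacI': [('-', 'A')],
    'A':    [('+', 'LacZ')],
    'LacZ': [('+', 'cAMP'), ('-', 'LacI')],
    'Li':   [('+', 'Le'), ('+', 'LacY'), ('-', 'LacZ')],
    'G':    [('+', 'Li'), ('+', 'LacZ')],
    'cAMP': [('-', 'G')],
    'LacY': [('+', 'cAMP'), ('-', 'LacI')],
}
observations = {'A': '+'}       # allolactose increases (illustrative choice)

sols = []
for signs in product('+-', repeat=len(variables)):   # solutions may not contain '?'
    val = dict(zip(variables, signs))
    if any(val[g] != s for g, s in observations.items()):
        continue
    if all(qeq(val[n], qsum([qmul(s, val[p]) for s, p in preds]))
           for n, preds in system.items()):
        sols.append(val)

print(len(sols), "solutions")
if sols:
    hard = {v: sols[0][v] for v in variables if all(t[v] == sols[0][v] for t in sols)}
    print("hard components:", hard)
```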
Whenever the system has no solution, a simple strategy to diagnose the problem is to isolate a minimal set of inconsistent equations. In our experiments, a greedy approach was enough to solve all inconsistencies (see next section). Note that in our setting isolating a subset of the equations is equivalent to isolating a subgraph of the interaction graph. The combination of the diagnosis algorithm and a visualization tool is particularly useful for model refinement.
Finally, let us mention that we provided in [START_REF] Veber | Complex qualitative models in biology: A new approach[END_REF] an efficient representation of qualitative systems, leading to effective algorithms, some of them could be used to get further insights into the model under study. We shall see in the next section, that these algorithms are able to deal with large scale networks.
Results
Construction of the Escherichia coli regulatory network
For building the E. coli regulatory network we relied on the transcriptional regulation information provided by RegulonDB ([START_REF] Salgado | RegulonDB (version 5.0): Escherichia coli K-12 transcriptional regulatory network, operon organization, and growth conditions[END_REF], [START_REF] Salgado | The comprehensive updated regulatory network of Escherichia coli K-12[END_REF]) on March 2006. From the file containing transcription factor to gene interactions we have built the regulatory network of E. coli as a set of interactions of the form A → B sign, where sign denotes the value of the interaction: +, -, ? (expressed, repressed, undetermined), and A and B can be considered as genes or proteins, depending on the following situations:
The interaction genA → genB was created when both genA and genB are notified by RegulonDB, and when the protein A, synthesized by genA, is among the transcriptional factors that regulate genB. See Figure 2 A. The interaction T F → genB was created when we found TF as an heterodimer protein (protein-complex formed by the union of 2 proteins) that regulates genB. See Figure 2 B. In E.coli transcriptional network we have found 4 protein-complexes which are: IHF, HU, RcsB, and GatR.
The interaction genA → T F was created when we found the transcriptional factor TF as an heterodimer protein and genA synthesizes one of the proteins that form TF. See Figure 2 B.
Adding sigma factors to obtain self-consistency
Using the methods and the algorithms described with detail in [START_REF] Veber | Complex qualitative models in biology: A new approach[END_REF] we built a qualitative system of equations for the interaction graph obtained from E.coli network. For solving qualitative equations we have used our own tool, the PYTHON module PYQUALI. The system was not found to be self-consistent and we used a procedure available in PYQUALI library to isolate a minimal inconsistent subgraph (see Figure 3). A careful reading of the available literature led us to consider the regulations involving sigma factors which were initially absent from the network. Once added to complete the network, we obtained a network of 3883 interactions and 1529 components (genes, protein-complexes, and sigmafactors). This final network (global network) was found to be self-consistent.
Compatibility of a network with a set of observations
A compatible network can be tested with different sets of observations of varied stresses: thermal, nutritional, hypoxic, etc. An observation is a pair of values of the form gene = sign, where sign can be + or -, indicating that the gene is expressed or, respectively, repressed under a given condition. The set of 40 observations of the stationary phase was found to be inconsistent with the global network of E. coli. We found a direct inconsistency in the system of equations caused by the values fixed by the observations given to ihfA and ihfB: {ihfA = -, ihfB = -}, implying repression of these genes under stationary phase. This mathematical incompatibility agreed with the literature related to ihfA and ihfB expression under the stationary growth phase. Studies [START_REF] Ali Azam | Growth Phase-Dependent Variation in Protein Composition of the Escherichia coli Nucleoid[END_REF], [2], [4], [START_REF] Weglenska | Transcriptional pattern of Escherichia coli ihfB (himD) gene expression[END_REF] agree that transcription of ihfA and ihfB increases during stationary phase. Supported by this information, we have modified the observations of ihfA and ihfB and the compatibility test of the global network of E. coli was successful.
[Figure 3 content: nodes IHF, ihfA, ihfB (left) and IHF, ihfA, ihfB, rpoD, rpoS (right), with the associated equations IHF ≈ ihfA + ihfB (1), ihfA ≈ -IHF + rpoS + rpoD (2), ihfB ≈ -IHF + rpoS + rpoD (3).]
Predictions over a compatible network from a set of observations
As mentioned earlier, a regulatory network is said to be consistent with a given set of observations when the associated qualitative system has at least one solution. If a variable is fixed to the same value in all solutions, then mathematically we are talking about a hard component, which is a prediction or inference for this set of observations. We have mentioned that the regulatory network including sigma factors is consistent with the set of 40 observations for stationary phase, after some correction. Actually there are about 2, 66 • 10 16 solutions of the qualitative system which are consistent with the 40 observations of stationary phase. Furthermore, in all these solutions, 381 variables of the system have always the same value (they are hard components, see Figure 4). In other words, we were able to predict the variation: expressed (+) or repressed ( ) of 381 components of our network (25% of the products of the network). We provide a subset of these predictions in Table 2.
Validation of the predicted genes
In order to verify whether the 381 predictions obtained from stationary phase data were valid, we have compared them with three sets of microarray data related to the expression of genes of E.Coli during stationary phase. The result obtained is showed in Table 3. The number of compared genes corresponds to the common genes, the validated genes are those genes which variation in the prediction is the same as in the microarray data set. From the sets of microarray data provided by GEO (Gene Expression Omnibus) for stationary phase measured after 20 and 60 minutes, we have taken into account gene expressions whose absolute value is above a specific threshold and compared only these expression data with the 381 predictions. The percentage of validation obtained for different values of thresholds is illustrated in Figure 5. This percentage increases with the threshold, which is normal because stronger variations are more reliable.
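The comparison behind Figure 5 amounts to a simple filtering step: keep the microarray genes whose absolute expression value exceeds a threshold, intersect with the predicted genes, and count agreements in sign. A schematic version follows (our own illustration; the gene names and log-ratio values below are made up purely for the example).

```python
predictions = {'cpxR': '+', 'crp': '+', 'ilvC': '+', 'sdaA': '-'}    # toy subset of sign predictions
microarray  = {'cpxR': 1.4, 'crp': 0.3, 'ilvC': -2.1, 'sdaA': -0.9}  # toy log-ratios

def validation_rate(threshold):
    kept = {g: ('+' if v > 0 else '-')
            for g, v in microarray.items()
            if abs(v) >= threshold and g in predictions}
    if not kept:
        return None, 0
    agree = sum(predictions[g] == s for g, s in kept.items())
    return agree / len(kept), len(kept)

for th in (0.0, 0.5, 1.0):
    rate, n = validation_rate(th)
    print(f"threshold {th}: {n} genes compared, agreement {rate}")
```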
The percentage of our predictions that does not agree with the microarray results is due to: Erroneous microarray indications for certain genes. The genes xthA, cf a, cpxA, cpxR, gor are predicted as expressed by our model and as repressed by the microarray data [START_REF] Rosa | Regulatory network of Escherichia coli: consistency between literature knowledge and microarray profiles[END_REF]. Nevertheless, there is strong evidence that they are expressed during the stationary phase (see [START_REF] Ishihama | Functional modulation of Escherichia coli RNA polymerase[END_REF][START_REF] Weichart | Identification and characterization of stationary phase-inducible genes in Escherichia coli[END_REF]).
Incompleteness of our network model. Our model predicts that the gene ilvC is expressed, which contradicts microarray data. More careful studies [START_REF] Weichart | Global role for ClpPcontaining proteases in stationary-phase adaptation of Escherichia coli[END_REF] document the decrease of the protein IlvC due to an interaction with clpP which is absent in our model. Indeed, under the introduction of a negative interaction between these species, ilvC is no longer a hard component, which lifts the conflict with data.
Conclusions
Given an interaction graph of a thousand products, such as E.coli regulatory network, we were able to test its self-consistency and its consistency with respect to observations. We have used mathematical methods first exposed in [START_REF] Radulescu | Topology and static response of interaction networks in molecular biology[END_REF][START_REF] Siegel | Qualitative analysis of the relation between DNA microarray data and behavioral models of regulation networks[END_REF][START_REF] Veber | Complex qualitative models in biology: A new approach[END_REF].
We have found that the E.coli transcriptional regulatory network, obtained from Reg-ulonDB site [START_REF] Salgado | RegulonDB (version 5.0): Escherichia coli K-12 transcriptional regulatory network, operon organization, and growth conditions[END_REF], [START_REF] Salgado | The comprehensive updated regulatory network of Escherichia coli K-12[END_REF] is not self consistent, but can be made self-consistent by adding to it sigma-factors which are transcription initiation factors. The self-consistent network (including sigma-factors) is not consistent with data provided by RegulonDB for the stationary growth phase of E.coli. Sources of inconsistency were mistaken observations. Finally, a step of inference/prediction was achieved being able to infer 381 new variations of products (25% of the total products of the network) from E.coli global network (transcriptional plus sigma-factors interactions). This inference was validated with microarray results, obtaining in the best case that 40% of the inferred variations were consistent (37% were not consistent and 23% of them could not be associated to a microarray measure). We have used our approach to spot several imprecisions in the microarray data and missing interactions in our model.
This approach can be used in order to increase the consistency between network models and data, which is important for model refinement. Also, it may serve to increase the reliability of the data sets. We plan to use this approach to test different experimental conditions over E.coli network in order to complete its interaction network model. It should be also interesting to test it with different (signed and oriented) regulatory networks. All the tools provided to arrive to these results were packaged in a Python library called PYQUALI which will be soon publicly available. All scripts and data used in this article are available upon simple request to the authors.
Figure 2 :
2 Figure 2: Representation of genetical interactions. (A) Negative regulation (repression) of gene f iu by the transcription factor F ur represented as f ur → f iu -. (B) Biological interaction of genes ihf A and ihf B forming the protein-complex IHF represented as ihf A → IHF + and ihf B → IHF +, positive regulation of gene aceA by the protein complex IHF represented by IHF → aceA +
Figure 3 :
3 Figure 3: (Left) A minimal inconsistent subgraph, isolated from the E.coli regulatory network using PYQUALI. (Right) Correction proposed after careful reading of available literature on ihfA and ihfB regulation.
Figure 5 :
5 Figure 5: (Left) Percentage of validation of the 381 predicted variations of genes with microarray data sets from GEO (Gene Expression Omnibus) for stationary phase after 20 and 60 minutes. For both experiments we validate the 381 predictions with different sets of microarray observations considering only those genes which absolute value of expression is above certain value (threshold). (Right) Number of genes considered for the validation for the different used thresholds of both microarray data sets.
Table 1 :
1 Table of the 40 variations of products observed under stationary growth phase condition.
Source: RegulonDB March 2006
gene variation gene variation gene variation gene variation gene variation
acnA + csiE + gadC + osmB + recF +
acrA + cspD + hmp + osmE + rob +
adhE + dnaN + hns + osmY + sdaA -
appB + dppA + hyaA + otsA + sohB -
appC + fic + ihfA - otsB + treA +
appY + gabP + ihfB - polA + yeiL +
blc + gadA + lrp + proP + yfiD +
bolA + gadB + mpl + proX + yihI -
To test the global network of E. coli, we have chosen a set of 40 observations for the stationary phase condition provided by RegulonDB (Table 1).
Global E.coli regulatory network with transcriptional and sigma-factors interactions (3883 interactions and 1529 products). Blue and red interactions represent activation or, respectively, repression. Green and blue nodes correspond to positive and negative observations (40). Red nodes (381) are the total inferred variations of products under stationary growth phase condition.
tsr cheA cheW motA motB tar trg aer flgK flgN flgM fliD fliS fliT flxA fliL fliM fliN fliO fliP fliQ ycgR yhjH fliC fliY fliZ fliR
Figure 4:
Table 2: Table of the 42 products inferred under stationary phase condition. All 42 inferred products have variation "+":
IHF, ada, agaR, alsR, araC, argP, argR, baeR, cadC, cpxR, crp, cusR, cynR, cysB, cytR, dnaA, dsdC, evgA, fucR, fur, galR, gcvA, glcC, gntR, ilvY, iscR, lexA, lysR, melR, mngR, oxyR, phoB, prpR, rbsR, rhaR, rpoD, rpoS, soxR, soxS, srlR, trpR, tyrR.
Table 3: Validation of the prediction with microarray data sets

  Source of microarray data                                              Compared genes   Validated genes (%)
  Gutierrez-Rios and co-workers [6], stationary phase                    249              34%
  Gene Expression Omnibus ([3],[5]), stationary phase after 20 minutes   292              51.71%
  Gene Expression Omnibus ([3],[5]), stationary phase after 60 minutes   281              51.2%
| 35,688 | [
"839901",
"169547",
"833591",
"740482"
] | [
"2535",
"2535",
"2535",
"2535"
] |
01465218 | en | [
"math"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01465218/file/Motivic%20HRC%20generalized%20Kummers.pdf | Lie Fu
email: [email protected]
Zhiyu Tian
email: [email protected]
Charles Vial
email: [email protected]
MOTIVIC HYPERK ÄHLER RESOLUTION CONJECTURE FOR GENERALIZED KUMMER VARIETIES
, we define the orbifold motive (or Chen-Ruan motive) of the quotient stack [M/G] as an algebra object in the category of Chow motives. Inspired by Ruan [46], one can formulate a motivic version of his Cohomological HyperKähler Resolution Conjecture (CHRC). We prove this conjecture in two situations related to an abelian surface A and a positive integer n. Case(A) concerns Hilbert schemes of points of A: the Chow motive of A [n] is isomorphic as algebra objects, up to a suitable sign change, to the orbifold motive of the quotient stack [A n /S n ]. Case (B) for generalized Kummer varieties: the Chow motive of the generalized Kummer variety K n (A) is isomorphic as algebra objects, up to a suitable sign change, to the orbifold motive of the quotient stack [A n+1 0 /S n+1 ], where A n+1 0 is the kernel abelian variety of the summation map A n+1 → A. In particular, these results give complete descriptions of the Chow motive algebras (resp. Chow rings) of A [n] and K n (A) in terms of h 1 (A) the first Chow motive of A (resp. CH * (A) the Chow ring of A). As a byproduct, we prove the Cohomological HyperKähler Resolution Conjecture for generalized Kummer varieties. As an application, we provide multiplicative Chow-K ünneth decompositions for Hilbert schemes of abelian surfaces and for generalized Kummer varieties. In particular, we have a multiplicative direct sum decomposition of their Chow rings with rational coefficients, which are expected to be the splitting of the conjectural Bloch-Beilinson-Murre filtration. The existence of such a splitting for holomorphic symplectic varieties is conjectured by Beauville [10]. Finally, as another application analogous to Voisin's result in [54], we prove that over a non-empty Zariski open subset of the base, there exists a decomposition isomorphism Rπ * Q ⊕R i π * Q[-i] in D b c (B) which is compatible with the cup-products on both sides, where π : K n (A) → B is the relative generalized Kummer variety associated to a (smooth) family of abelian surfaces A → B.
1. Introduction 1.1. Motivation 1 : Ruan's hyperKähler resolution conjectures. In [START_REF]A new cohomology theory of orbifold[END_REF], Chen and Ruan construct the orbifold cohomology ring H * orb (X) for any complex orbifold X. It is defined to be the cohomology of its inertia variety H * (IX) as Q-vector space (with degree shifted by some rational numbers called age), but is endowed with a highly non-trivial ring structure coming from moduli spaces of curves mapping to X. An algebro-geometric treatment is contained in Abramovich-Graber-Vistoli's work [START_REF] Abramovich | Gromov-Witten theory of Deligne-Mumford stacks[END_REF], based on the construction of moduli stack of twisted stable maps in [START_REF] Abramovich | Compactifying the space of stable maps[END_REF]. In the global quotient case 1 , some equivalent definitions are available : see for example [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF], [START_REF] Jarvis | Stringy K-theory and the Chern character[END_REF], [START_REF] Kimura | Orbifold cohomology reloaded, Toric topology[END_REF] and §2.
Originating from the topological string theory of orbifolds in [START_REF] Dixon | Strings on orbifolds[END_REF], [START_REF]Strings on orbifolds[END_REF], one observes that the stringy topological invariants of an orbifold, e.g. the orbifold Euler number and the orbifold Hodge numbers, should be related to the corresponding invariants of a crepant resolution ( [START_REF] Victor | Stringy Hodge numbers of varieties with Gorenstein canonical singularities, Integrable systems and algebraic geometry[END_REF], [START_REF] Victor | Strong McKay correspondence, string-theoretic Hodge numbers and mirror symmetry[END_REF], [START_REF] Lupercio | The global McKay-Ruan correspondence via motivic integration[END_REF]). A much deeper relation was brought forward by Ruan, who made, among others, the following Cohomological HyperKähler Resolution Conjecture (CHRC) in [START_REF] Ruan | Stringy geometry and topology of orbifolds[END_REF]. For more general and sophisticated versions of this conjecture, see [START_REF]The cohomology ring of crepant resolutions of orbifolds[END_REF], [START_REF] Bryan | The crepant resolution conjecture, Algebraic geometry-Seattle[END_REF], [START_REF] Coates | Quantum cohomology and crepant resolutions: a conjecture[END_REF]. Conjecture 1.1 (Ruan's CHRC). Let X be a compact complex orbifold with underlying variety X being Gorenstein. If there is a crepant resolution Y → X with Y being hyperKähler, then we have an isomorphism of graded commutative C-algebras : H * (Y, C) H * orb (X, C).
As the construction of orbifold product can be expressed using algebraic correspondences (cf. [START_REF] Abramovich | Gromov-Witten theory of Deligne-Mumford stacks[END_REF] and §2), one has the analogous definition of the orbifold Chow ring CH orb (X), or more generally the orbifold Chow motive h orb (X) (see Definitions 2.6 and 2.5 for the global quotient case) of a smooth proper Deligne-Mumford stack X. We propose to study the following motivic version of Conjecture 1.1. Let CHM C be the category of Chow motives with complex coefficients and h be the (contravariant) functor that associates to a smooth projective variety its Chow motive. Conjecture 1.2 (Motivic HyperKähler Resolution Conjecture). Let X be a smooth proper complex Deligne-Mumford stack with underlying coarse moduli space X being a (singular) symplectic variety. If there is a symplectic resolution Y → X, then we have an isomorphism h(Y) h orb (X) as commutative algebra objects in CHM C , hence in particular an isomorphism of graded C-algebras : CH * (Y) C CH * orb (X) C .
See Definition 3.1 for generalities on symplectic singularities and symplectic resolutions. See also Conjecture 3.2 for a more precise statement which would contain all situations considered in this paper.
From now on, we will restrict ourselves to the case where the Deligne-Mumford stack in question is of the form of a global quotient X = [M/G], where M is a smooth projective variety with an action of a finite group G. In this case, the definition of the orbifold motive of [M/G] as a (commutative) algebra object in the category of Chow motives with rational coefficients 2 is particularly down-to-earth ; it is the G-invariant sub-algebra object of some explicit algebra object : 1 In this paper, by 'global quotient', we always mean the quotient of a smooth projective variety by a finite group. 2 Strictly speaking, the orbifold Chow motive of [M/G] in general lives in the larger category of Chow motives with where for each ∈ G, M is the fixed subvariety of and the orbifold product orb is defined by using natural inclusions and Chern classes of normal bundles of various fixed loci ; see Definition 2.5 (or (2) below) for the precise formula of orb as well as the Tate twists by age (2.3) and the G-action. The orbifold Chow ring 3 is then defined as the following commutative algebra
$$h_{\mathrm{orb}}([M/G]) := \Big( \bigoplus_{g\in G} h(M^g)\big(-\mathrm{age}(g)\big),\ \star_{\mathrm{orb}} \Big)^{G},$$
$$\mathrm{CH}^*_{\mathrm{orb}}([M/G]) := \bigoplus_{i} \operatorname{Hom}_{\mathrm{CHM}}\big(\mathbb{1}(-i),\ h_{\mathrm{orb}}([M/G])\big),$$
or equivalently more explicitly :
$$(1)\qquad \mathrm{CH}^*_{\mathrm{orb}}([M/G]) := \Big( \bigoplus_{g\in G} \mathrm{CH}^{*-\mathrm{age}(g)}(M^g),\ \star_{\mathrm{orb}} \Big)^{G},$$
where $\star_{\mathrm{orb}}$ is as follows : for two elements $g, h \in G$ and $\alpha \in \mathrm{CH}^{i-\mathrm{age}(g)}(M^g)$, $\beta \in \mathrm{CH}^{j-\mathrm{age}(h)}(M^h)$, their orbifold product is the following element in $\mathrm{CH}^{i+j-\mathrm{age}(gh)}(M^{gh})$ :
$$(2)\qquad \alpha \star_{\mathrm{orb}} \beta := \iota_*\Big( \alpha|_{M^{\langle g,h\rangle}} \cdot \beta|_{M^{\langle g,h\rangle}} \cdot c_{\mathrm{top}}(F_{g,h}) \Big),$$
where $\iota : M^{\langle g,h\rangle} \to M^{gh}$ is the natural inclusion and $F_{g,h}$ is the obstruction bundle. This construction is completely parallel to the construction of orbifold cohomology due to Fantechi-Göttsche [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF] which is further simplified in Jarvis-Kaufmann-Kimura [START_REF] Jarvis | Stringy K-theory and the Chern character[END_REF].
Interesting examples of symplectic resolutions appear when considering the Hilbert-Chow morphism of a smooth projective surface. More precisely, in his fundamental paper [START_REF]Variétés Kähleriennes dont la première classe de Chern est nulle[END_REF], Beauville provides such examples :
Example 1. Let S be a complex projective K3 surface or an abelian surface. Its Hilbert scheme of length-n subschemes, denoted by S [n], is a symplectic crepant resolution of the symmetric product S (n) via the Hilbert-Chow morphism. The corresponding Cohomological HyperKähler Resolution Conjecture was proved independently by Fantechi-Göttsche in [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF] and Uribe in [START_REF] Uribe | Orbifold cohomology of the symmetric product[END_REF] making use of Lehn-Sorger's work [START_REF] Lehn | The cup product of Hilbert schemes for K3 surfaces[END_REF] computing the ring structure of H * (S [n]). The Motivic HyperKähler Resolution Conjecture 1.2 in the case of K3 surfaces will be treated in [START_REF] Fu | The motivic hypekähler resolution conjecture and Chow rings of Hilbert schemes of K3 surfaces[END_REF] and the case of abelian surfaces is the following theorem.
Theorem 1.3 (MHRC for A [n]). Let A be an abelian surface and A [n] be its Hilbert scheme as before. Then we have an isomorphism of commutative algebra objects in the category CHM of Chow motives with rational coefficients :
$$h\big(A^{[n]}\big) \;\simeq\; h_{\mathrm{orb,dt}}\big([A^{n}/\mathfrak{S}_{n}]\big),$$
where on the left hand side, the product structure is given by the small diagonal of A [n] × A [n] × A [n] while on the right hand side, the product structure is given by the orbifold product orb with a suitable sign change, called discrete torsion, in 3.4. In particular, we have an isomorphism of commutative graded Q-algebras :
$$(3)\qquad \mathrm{CH}^*\big(A^{[n]}\big)_{\mathbb{Q}} \;\simeq\; \mathrm{CH}^*_{\mathrm{orb,dt}}\big([A^{n}/\mathfrak{S}_{n}]\big).$$
Example 2
Let A be a complex abelian surface. The composition of the Hilbert-Chow morphism followed by the sum map A [n+1] → A (n+1) → A is an isotrivial fibration. The generalized Kummer variety K n (A) is by definition the fiber of this morphism over the origin of A. It is a hyperKähler resolution of the quotient A n+1 0 / Sn+1, where A n+1 0 is the kernel abelian variety of the sum map A n+1 → A. The main result of the paper is the following theorem confirming the Motivic HyperKähler Resolution Conjecture 1.2 in this situation.
Theorem 1.4 (MHRC for K n (A)). Let K n (A) be the 2n-dimensional generalized Kummer variety associated to an abelian surface A. Let A n+1 0 := Ker + : A n+1 → A endowed with the natural Sn+1-action. Then we have an isomorphism of commutative algebra objects in the category CHM of Chow motives with rational coefficients :
$$h\big(K_n(A)\big) \;\simeq\; h_{\mathrm{orb,dt}}\big([A^{n+1}_0/\mathfrak{S}_{n+1}]\big),$$
where on the left hand side, the product structure is given by the small diagonal while on the right hand side, the product structure is given by the orbifold product $\star_{\mathrm{orb}}$ with the sign change given by discrete torsion in 3.4. In particular, we have an isomorphism of commutative graded Q-algebras :
$$(4)\qquad \mathrm{CH}^*\big(K_n(A)\big)_{\mathbb{Q}} \;\simeq\; \mathrm{CH}^*_{\mathrm{orb,dt}}\big([A^{n+1}_0/\mathfrak{S}_{n+1}]\big).$$
Taking the Betti cohomological realization, we confirm Ruan's original Cohomological Hy-perKähler Resolution Conjecture 1.1 in this case : Theorem 1.5 (CHRC for K n (A)). Let notation be as in Theorem 1.4. We have an isomorphism of graded commutative Q-algebras :
$$H^*\big(K_n(A), \mathbb{Q}\big) \;\simeq\; H^*_{\mathrm{orb,dt}}\big([A^{n+1}_0/\mathfrak{S}_{n+1}]\big).$$
The CHRC has never been checked in the case of generalized Kummer varieties in the literature. Closely related work on the CHRC in this case are Nieper-Wisskirchen's description of the cohomology ring H * (K n (A), C) in [START_REF] Marc | Twisted cohomology of the Hilbert schemes of points on surfaces[END_REF], which plays an important r ôle in our proof ; and Britze's thesis [START_REF] Britze | On the cohomology of generalized kummer varieties[END_REF] comparing H * (A × K n (A), C) and the computation of the orbifold cohomology ring of [A × A n+1 0 / Sn+1] in Fantechi-G öttsche [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF]. See however Remark 6.16.
On explicit description of Chow rings.
Let us make some remarks on the way we understand Theorem 1.3 and Theorem 1.4. For each of them, the seemingly fancy right hand side of ( 3) and (4) given by orbifold Chow ring is actually very concrete (see [START_REF] Abramovich | Gromov-Witten theory of Deligne-Mumford stacks[END_REF]) : as groups, since all fixed loci are just various diagonals, they are the Chow groups of products of the abelian surface A, which can be handled by Beauville's decomposition of Chow rings of abelian varieties [START_REF]Sur l'anneau de Chow d'une variété abélienne[END_REF] ; while the ring structures are given by the orbifold product which is extremely simplified in our cases (see [START_REF] Abramovich | Compactifying the space of stable maps[END_REF]) : all obstruction bundles F ,h are trivial and hence the orbifold products are either the intersection product pushed forward by inclusions or simply zero.
In short, given an abelian surface A, Theorem 1.3 and Theorem 1.4 provide an explicit description of the Chow rings of A [n] and of K n (A) in terms of Chow rings of products of A (together with some combinatoric rules specified by the orbifold product). To illustrate how explicit it is, we work out two simple examples in §3.2 : the Chow ring of the Hilbert square of a K3 surface or an abelian surface and the Chow ring of the Kummer K3 surface associated to an abelian surface. 1.3. Motivation 2 : Beauville's splitting principle. The original motivation for the authors to study the Motivic HyperKähler Resolution Conjecture 1.2 was to understand the (rational) Chow rings, or more generally the Chow motives, of smooth projective holomorphic symplectic varieties, that is, an even-dimensional projective manifold carrying a holomorphic 2-form which is symplectic (i.e. non-degenerate at each point). As an attempt to unify his work on algebraic cycles on abelian varieties [START_REF]Sur l'anneau de Chow d'une variété abélienne[END_REF] and his result with Voisin on Chow rings of K3 surfaces [START_REF] Beauville | On the Chow ring of a K3 surface[END_REF], Beauville conjectured in [START_REF] Beauville | On the splitting of the Bloch-Beilinson filtration[END_REF], under the name of the splitting principle, that for a smooth projective holomorphic symplectic variety X, there exists a canonical multiplicative splitting of the conjectural Bloch-Beilinson-Murre filtration of the rational Chow ring (see Conjecture 7.1 for the precise statement). In this paper, we will understand the splitting principle as in the following motivic version (see Definition 7.2 and Conjecture 7.4) : Conjecture 1.6 (Beauville's Splitting Principle : motives). Let X be a smooth projective holomorphic symplectic variety of dimension 2n. Then we have a canonical multiplicative Chow-K ünneth decomposition of h(X) of Bloch-Beilinson type, that is, a direct sum decomposition in the category of rational Chow motives :
$$(5)\qquad h(X) = \bigoplus_{i=0}^{4n} h^i(X)$$
satisfying the following properties :
(1) (Chow-Künneth) The cohomology realization of the decomposition gives the Künneth decomposition : for each 0 ≤ i ≤ 4n, H^*(h^i(X)) = H^i(X).
(2) (Multiplicativity) The product µ : h(X) ⊗ h(X) → h(X) given by the small diagonal δ_X ⊂ X × X × X respects the decomposition : the restriction of µ on the summand h^i(X) ⊗ h^j(X) factorizes through h^{i+j}(X).
(3) (Bloch-Beilinson-Murre) for any i, j ∈ N,
- CH^i(h^j(X)) = 0 if j < i ;
- CH^i(h^j(X)) = 0 if j > 2i ;
- the realization induces an injective map $\operatorname{Hom}_{\mathrm{CHM}}\big(\mathbb{1}(-i), h^{2i}(X)\big) \to \operatorname{Hom}_{\mathbb{Q}\text{-HS}}\big(\mathbb{Q}(-i), H^{2i}(X)\big)$.
Such a decomposition naturally induces a (multiplicative) bigrading on the Chow ring CH * (X) = ⊕ i,s CH i (X) s by setting :
CH i (X) s := Hom CHM 1(-i), h 2i-s (X) ,
which is the original splitting that Beauville envisaged.
Our main results Theorem 1.3 and Theorem 1.4 allow us, for X being a Hilbert scheme of an abelian surface or a generalized Kummer variety, to achieve in Theorem 1.7 below partially the goal Conjecture 1.6 : we construct the candidate direct sum decomposition (5) satisfying the first two conditions (1) and (2) in Conjecture 1.6, namely a multiplicative Chow-K ünneth decomposition (see Definition 7.2, cf. [START_REF] Shen | The Fourier transform for certain hyperkähler fourfolds[END_REF]). The remaining Condition (3) on Bloch-Beilinson-Murre properties is very much related to Beauville's Weak Splitting Property, which has already been proved in [START_REF] Fu | Beauville-Voisin conjecture for generalized Kummer varieties[END_REF] for the case of generalized Kummer varieties considered in this paper ; see [START_REF] Beauville | On the splitting of the Bloch-Beilinson filtration[END_REF], [START_REF]On the Chow ring of certain algebraic hyper-Kähler manifolds[END_REF], [START_REF] Yin | Finite-dimensionality and cycles on powers of K3 surfaces[END_REF], [START_REF] Riess | On Beauville's conjectural weak splitting property, to appear in IMRN[END_REF] for the complete story and more details. Theorem 1.7 (=Theorem 7.9 + Proposition 7.13). Let A be an abelian surface and n be a positive integer. Let X be the corresponding 2n-dimensional Hilbert scheme A [n] or generalized Kummer variety K n (A). Then X has a canonical multiplicative Chow-K ünneth decomposition
$$h(X) = \bigoplus_{i=0}^{4n} h^i(X),$$
where in the two respective cases we have
$$h^i\big(A^{[n]}\big) := \Big(\bigoplus_{g\in \mathfrak{S}_n} h^{\,i-2\,\mathrm{age}(g)}\big((A^n)^g\big)\big(-\mathrm{age}(g)\big)\Big)^{\mathfrak{S}_n} ; \qquad h^i\big(K_n(A)\big) := \Big(\bigoplus_{g\in \mathfrak{S}_{n+1}} h^{\,i-2\,\mathrm{age}(g)}\big((A^{n+1}_0)^g\big)\big(-\mathrm{age}(g)\big)\Big)^{\mathfrak{S}_{n+1}}.$$
In particular, we have a canonical multiplicative bigrading on the (rational) Chow ring given by
CH i (X) s := CH i (h 2i-s (X)).
Moreover, the i-th Chern class of X is in CH i (X) 0 for any i.
The associated filtration F j CH i (X) := ⊕ s≥j CH i (X) s is supposed to satisfy the Bloch-Beilinson-Murre conjecture (see Conjecture 7.11). We point out in Remark 7.12 that Beauville's Conjecture 7.5 on abelian varieties implies for X in our two cases some Bloch-Beilinson-Murre properties : CH * (X) s = 0 for s < 0 and the cycle class map restricted to CH * (X) 0 is injective. See Remark 7.10 for previous related results. 1.4. Cup products vs. decomposition theorem. For a smooth projective morphism π : X → B Deligne shows in [START_REF] Deligne | Théorème de Lefschetz et critères de dégénérescence de suites spectrales[END_REF] that one has an isomorphism
$$R\pi_*\mathbb{Q} \;\simeq\; \bigoplus_i R^i\pi_*\mathbb{Q}[-i],$$
in the derived category of sheaves of Q-vector spaces on B. Voisin [START_REF]Chow rings and decomposition theorems for families of K3 surfaces and Calabi-Yau hypersurfaces[END_REF] remarks that this isomorphism cannot be made compatible with the product structures on both sides even after shrinking B to a Zariski open subset and shows that it can be made so if π is a smooth family of projective K3 surfaces. Her result is extended in [START_REF] Vial | On the motive of some hyperkaehler varieties[END_REF] to relative Hilbert schemes of finite lengths of a smooth family of projective K3 surfaces or abelian surfaces. As a by-product of our main result in this paper, we can similarly prove the case of generalized Kummer varieties. Theorem 1.8 (=Corollary 8.4). Let A → B be an abelian surface over B. Consider π : K n (A) → B the relative generalized Kummer variety. Then there exist a decomposition isomorphism
$$(6)\qquad R\pi_*\mathbb{Q} \;\simeq\; \bigoplus_i R^i\pi_*\mathbb{Q}[-i],$$
and a nonempty Zariski open subset U of B, such that this decomposition becomes multiplicative for the restricted family over U.
Convention and notation.
Throughout the paper, all varieties are defined over the field of complex numbers.
• The notation CH (resp. CH C ) means Chow groups with rational (resp. complex) coefficients. CHM is the category of Chow motives over the complex numbers with rational coefficients. • For a variety X, its small diagonal, always denoted by
δ X , is {(x, x, x) | x ∈ X} ⊂ X × X × X.
• For a smooth surface X, its Hilbert scheme of length-n subschemes is always denoted by X [n] , which is smooth of dimension 2n by [START_REF] Fogarty | Algebraic families on an algebraic surface[END_REF]. • An (even) dimensional smooth projective variety is holomorphic symplectic if it has a holomorphic symplectic (i.e. non-degenerate at each point) 2-form. When talking about resolutions, we tend to use the word hyperKähler as its synonym, which usually (but not in this paper) requires also the 'irreducibility', that is, the simple connectedness of the variety and the uniqueness up to scalars of the holomorphic symplectic 2-form. In particular, punctual Hilbert schemes of abelian surfaces are examples of holomorphic symplectic varieties. • An abelian variety is always supposed to be connected. Its non-connected generalization causes extra difficulty and is dealt with in §6.2. • When working with 0-cycles on an abelian variety A, to avoid confusion, for a collection of points x 1 , . . . , x m ∈ A, we will write
[x 1 ] + • • • + [x m ]
for the 0-cycle of degree m (or equivalently, a point in A (m) , the m-th symmetric product of A) and x 1 + • • • + x m will stand for the usual sum using the group law of A, which is therefore a point in A.
Definition 2.1 (Chow motives with fractional Tate twists). The category of Chow motives with fractional Tate twists with rational coefficients, denoted by CHM, has as objects finite direct sums of triples of the form (X, p, n) with X a connected smooth projective variety, p ∈ CH dim X (X × X) a projector and n ∈ Q a rational number. Given two objects (X, p, n) and (Y, q, m), the morphism space between them consists of correspondences :
$$\operatorname{Hom}_{\mathrm{CHM}}\big((X, p, n), (Y, q, m)\big) := q \circ \mathrm{CH}^{\dim X+m-n}(X \times Y) \circ p,$$
where we simply impose that all Chow groups of a variety with non-integer codimension are zero. The composition law of correspondences is the usual one. Identifying (X, p, n) ⊕ (Y, q, n) with (X Y, p q, n) makes CHM a Q-linear category. Moreover, CHM is a rigid symmetric monoïdal category with unit 1 := (Spec C, Spec C, 0), tensor product defined by (X, p, n)⊗(Y, q, m) := (X×Y, p× q, n+m) and duality given by (X, p, n) ∨ := X, t p, dim Xn . There is a natural contravariant functor h : SmProj op → CHM sending a smooth projective variety X to its Chow motive h(X) = (X, ∆ X , 0) and a morphism f :
X → Y to its transposed graph t Γ f ∈ CH dim Y (Y × X) = Hom CHM (h(Y), h(X)).
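For instance, with these conventions one checks directly (a routine illustration of the definitions above) that
$$h(\mathbb{P}^1) \simeq \mathbb{1} \oplus \mathbb{1}(-1), \qquad \mathbb{1}(-\tfrac{1}{2}) \otimes \mathbb{1}(-\tfrac{1}{2}) \simeq \mathbb{1}(-1), \qquad \operatorname{Hom}_{\mathrm{CHM}}\big(\mathbb{1}(-\tfrac{1}{2}), h(X)\big) = \mathrm{CH}^{1/2}(X) = 0,$$
the last equality because Chow groups in non-integer codimension are declared to be zero.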
Remarks 2.2.
(1) The category CHM C of Chow motives with fractional Tate twists with complex coefficients is defined similarly by replacing all Chow groups with rational coefficients CH by Chow groups with complex coefficients CH C in the above definition.
(2) The usual category of Chow motives with rational (resp. complex) coefficients CHM (resp. CHM C , cf. [START_REF] André | Une introduction aux motifs (motifs purs, motifs mixtes, périodes), Panoramas et Synthèses [Panoramas and Syntheses[END_REF]) is identified with the full subcategory of CHM (resp. CHM C ) consisting of objects (X, p, n) with n ∈ Z.
(3) The above construction works for any adequate equivalence relation and gives corresponding categories of pure motives (with fractional Tate twists) by replacing CH by the group of algebraic cycles modulo the chosen adequate equivalence relation (cf. [START_REF] André | Une introduction aux motifs (motifs purs, motifs mixtes, périodes), Panoramas et Synthèses [Panoramas and Syntheses[END_REF]). In particular, we can talk about the category of numerical motives NumM and homological motives 4 HomM with rational or complex coefficients as well as their variants with fractional Tate twists, etc..
Let M be an m-dimensional smooth projective complex variety equipped with an action of a finite group G. We adapt the constructions in [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF] and [START_REF] Jarvis | Stringy K-theory and the Chern character[END_REF] to define the orbifold motive of the smooth proper Deligne-Mumford stack [M/G]. For any ∈ G, M := x ∈ M | x = x is the fixed locus of the automorphism , which is a smooth subvariety of M. The following notion is due to Reid (see [START_REF] Reid | La correspondance de McKay[END_REF]).
Definition 2.3 (Age). Given an element g ∈ G, let r ∈ N be its order. The age of g, denoted by age(g), is the locally constant Q≥0-valued function on M^g defined as follows. Let Z be a connected component of M^g. Choosing any point x ∈ Z, we have the induced automorphism g^* ∈ GL(T_xM), whose eigenvalues, repeated according to multiplicities, are
$$e^{\frac{2\pi\sqrt{-1}\,\alpha_1}{r}}, \cdots, e^{\frac{2\pi\sqrt{-1}\,\alpha_m}{r}}, \qquad \text{with } 0 \le \alpha_i \le r-1.$$
One defines $\mathrm{age}(g)|_{Z} := \frac{1}{r}\sum_{i=1}^{m}\alpha_i$.
Example 2.4. Let S be a smooth projective variety of dimension d and let M = S^n be equipped with the natural permutation action of G = 𝔖_n. For a permutation g ∈ 𝔖_n one computes $\mathrm{age}(g) = \frac{d}{2}\big(n - |O(g)|\big)$, where O(g) is the set of orbits of g as a permutation of {1, . . . , n}. For example, when S is a surface (i.e., d = 2), the age in this case is always a non-negative integer and we have age(id) = 0, age((12 ⋯ r)) = r − 1, age((12)(345)) = 3, etc..
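The value age((12 ⋯ r)) = r − 1 in the surface case can be recovered directly from Definition 2.3 (a quick sanity check): at a point of the fixed locus, the cycle (12 ⋯ r) permutes the r copies of the 2-dimensional tangent space of S cyclically, so each r-th root of unity $e^{2\pi\sqrt{-1}j/r}$, j = 0, …, r − 1, occurs with multiplicity 2, and therefore
$$\mathrm{age}\big((12\cdots r)\big) = \frac{1}{r}\cdot 2\sum_{j=0}^{r-1} j = \frac{2}{r}\cdot\frac{r(r-1)}{2} = r - 1.$$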
Recall that an algebra object in a symmetric monoïdal category (M, ⊗, 1) (for example, CHM, CHM etc.) is an object A ∈ Obj M together with a morphism µ : A ⊗ A → A in M, called the multiplication or product structure, satisfying the associativity axiom µ
• (µ ⊗ id) = µ • (id ⊗µ). An algebra object A in M is called commutative if µ • ι = µ, where ι : A ⊗ A → A ⊗ A is the structural symmetry isomorphism of M.
For each smooth projective variety X, its Chow motive h(X) is naturally a commutative algebra object in CHM (hence in CHM, CHM C , etc.) whose multiplication is given by the small diagonal δ X ∈ CH 2 dim X (X × X × X) = Hom CHM (h(X) ⊗ h(X), h(X)). Definition 2.5 (Orbifold Chow motive). We define first of all an auxiliary (in general noncommutative) algebra object h(M, G) of CHM in several steps :
(1) As a Chow motive with fractional twists, h(M, G) is defined to be the direct sum over G, of the motives of fixed loci twisted à la Tate byage :
$$h(M, G) := \bigoplus_{g\in G} h(M^g)\big(-\mathrm{age}(g)\big).$$
(2) h(M, G) is equipped with a natural G-action : each element h ∈ G induces for each ∈ G an isomorphism h : M → M h h -1 by sending x to h.x, hence an isomorphism between the direct factors h(M )(-age( )) and h(M h h -1 )(-age(h h -1 )) by the conjugation invariance of the age function.
(3) For any ∈ G, let r be its order. We have a natural automorphism * of the vector bundle TM| M . Consider its eigen-subbundle decomposition :
$$TM|_{M^g} = \bigoplus_{j=0}^{r-1} W_{g,j},$$
where $W_{g,j}$ is the subbundle associated to the eigenvalue $e^{\frac{2\pi\sqrt{-1}\,j}{r}}$. Define
$$S_g := \sum_{j=0}^{r-1} \frac{j}{r}\,[W_{g,j}] \in K_0(M^g)_{\mathbb{Q}}.$$
Note that the virtual rank of $S_g$ is nothing but age(g) by Definition 2.3.
(4) For any $g_1, g_2 \in G$, let $M^{\langle g_1,g_2\rangle} = M^{g_1} \cap M^{g_2}$ and $g_3 = g_2^{-1}g_1^{-1}$. Define the following element in $K_0(M^{\langle g_1,g_2\rangle})_{\mathbb{Q}}$ :
$$F_{g_1,g_2} := S_{g_1}\big|_{M^{\langle g_1,g_2\rangle}} + S_{g_2}\big|_{M^{\langle g_1,g_2\rangle}} + S_{g_3}\big|_{M^{\langle g_1,g_2\rangle}} + TM^{\langle g_1,g_2\rangle} - TM\big|_{M^{\langle g_1,g_2\rangle}}.$$
Note that its virtual rank is
$$(7)\qquad \operatorname{rk} F_{g_1,g_2} = \mathrm{age}(g_1) + \mathrm{age}(g_2) - \mathrm{age}(g_1g_2) - \operatorname{codim}\big(M^{\langle g_1,g_2\rangle} \subset M^{g_1g_2}\big).$$
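To see the rank formula (7) at work in the simplest relevant case (the case underlying the toy example of §3.2.1 below): take M = S² for a surface S, G = 𝔖₂ and g₁ = g₂ = (12). Then age(g₁) = age(g₂) = 1, g₁g₂ = id has age 0, and M^{⟨g₁,g₂⟩} = Δ_S has codimension 2 in M^{id} = S², so
$$\operatorname{rk} F_{(12),(12)} = 1 + 1 - 0 - 2 = 0,$$
which is consistent with the obstruction bundles being trivial (in the Grothendieck group) in that example.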
In fact, this class in the Grothendieck group is represented by a genuine obstruction vector bundle constructed in [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF] (cf. [START_REF] Jarvis | Stringy K-theory and the Chern character[END_REF]). In particular, age(g_1) + age(g_2) − age(g_1g_2) is always an integer.
(5) The product structure $\star_{\mathrm{orb}}$ on h(M, G) is defined to be multiplicative with respect to the G-grading and for each $g_1, g_2 \in G$, the orbifold product
$$\star_{\mathrm{orb}} : h(M^{g_1})\big(-\mathrm{age}(g_1)\big) \otimes h(M^{g_2})\big(-\mathrm{age}(g_2)\big) \to h(M^{g_1g_2})\big(-\mathrm{age}(g_1g_2)\big)$$
is the correspondence determined by the algebraic cycle
$$\delta_*\big(c_{\mathrm{top}}(F_{g_1,g_2})\big) \in \mathrm{CH}^{\dim M^{g_1}+\dim M^{g_2}+\mathrm{age}(g_1)+\mathrm{age}(g_2)-\mathrm{age}(g_1g_2)}\big(M^{g_1} \times M^{g_2} \times M^{g_1g_2}\big),$$
where $\delta : M^{\langle g_1,g_2\rangle} \to M^{g_1} \times M^{g_2} \times M^{g_1g_2}$
is the natural morphism sending x to (x, x, x) and c top means the top Chern class of F 1 , 2 . One can check easily that the product structure orb is invariant under the action of G. (6) The associativity of orb is non-trivial. The proof in [START_REF] Jarvis | Stringy K-theory and the Chern character[END_REF]Lemma 5.4] is completely algebraic hence also works in our motivic case.
(7) Finally, the orbifold Chow motive of [M/G], denoted by h orb ([M/G]), is the G-invariant subalgebra object 5 of h(M, G), which turns out to be a commutative algebra object in CHM :
$$(8)\qquad h_{\mathrm{orb}}([M/G]) := h(M, G)^G = \Big( \bigoplus_{g\in G} h(M^g)\big(-\mathrm{age}(g)\big),\ \star_{\mathrm{orb}} \Big)^{G}.$$
We still use orb to denote the orbifold product on this sub-algebra object h orb ([M/G]).
By replacing the rational equivalence relation by another adequate equivalence relation, the same construction gives the orbifold homological motives, orbifold numerical motives, etc. associated to a global quotient smooth proper Deligne-Mumford stack as algebra objects in the corresponding categories of pure motives (with fractional Tate twists).
The definition of the orbifold Chow ring then follows in the standard way and agrees with the one in [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF], [START_REF] Jarvis | Stringy K-theory and the Chern character[END_REF] and [START_REF] Abramovich | Gromov-Witten theory of Deligne-Mumford stacks[END_REF].
Definition 2.6 (Orbifold Chow ring). The orbifold Chow ring
of [M/G] is the commutative Q ≥0 - graded Q-algebra CH * orb ([M/G]) := i∈Q ≥0 CH i orb ([M/G]) with (9) CH i orb ([M/G]) := Hom CHM (1(-i), h orb ([M/G]))
The ring structure on CH * orb ([M/G]), called orbifold product, denoted again by orb , is determined by the product structure orb :
$h_{\mathrm{orb}}([M/G]) \otimes h_{\mathrm{orb}}([M/G]) \to h_{\mathrm{orb}}([M/G])$ in Definition 2.5. More concretely, $\mathrm{CH}^*_{\mathrm{orb}}([M/G])$ is the G-invariant sub-Q-algebra of an auxiliary (non-commutative) finitely Q≥0-graded Q-algebra CH*(M, G), which is defined by
$$\mathrm{CH}^*(M, G) := \Big( \bigoplus_{g\in G} \mathrm{CH}^{*-\mathrm{age}(g)}(M^g),\ \star_{\mathrm{orb}} \Big),$$
where for two elements $g, h \in G$ and $\alpha \in \mathrm{CH}^{i-\mathrm{age}(g)}(M^g)$, $\beta \in \mathrm{CH}^{j-\mathrm{age}(h)}(M^h)$, their orbifold product is the following element in $\mathrm{CH}^{i+j-\mathrm{age}(gh)}(M^{gh})$ :
$$(10)\qquad \alpha \star_{\mathrm{orb}} \beta := \iota_*\Big( \alpha|_{M^{\langle g,h\rangle}} \cdot \beta|_{M^{\langle g,h\rangle}} \cdot c_{\mathrm{top}}(F_{g,h}) \Big),$$
where $\iota : M^{\langle g,h\rangle} \to M^{gh}$ is the natural inclusion.
Remark 2.7. The main interest of the paper lies in the situation when the underlying singular variety of the orbifold has at worst Gorenstein singularities. Recall that an algebraic variety X is Gorenstein if it is Cohen-Macaulay and the dualizing sheaf is a line bundle, denoted ω X . In the case of a global quotient M/G, being Gorenstein is the same thing as the local G-triviality of the canonical bundle ω M , which is again equivalent to the condition that the stabilizer of each point x ∈ M is contained in SL(T x M). In this case, it is straightforward to check that the Gorenstein condition implies that the age function actually takes values in the integers Z and therefore the orbifold motive lies in the usual category of pure motives (without fractional twists) CHM, the orbifold Chow ring and orbifold cohomology ring are Z-graded. Example 2.4 shows a typical situation that we would like to study. See also the remark after Conjecture 3.2. 5 Here we use the fact that the category CHM is Q-linear and pseudo-abelian to define the G-invariant part A G of a G-object A as the image of the projector 1
|G| ∈G
∈ End(A).
Motivic HyperK ähler Resolution Conjecture
A motivic version of the Cohomological HyperKähler Resolution Conjecture.
In [START_REF] Ruan | Stringy geometry and topology of orbifolds[END_REF], as part of the broader picture of stringy geometry and topology of orbifolds, Yongbin Ruan proposed the Cohomological HyperKähler Resolution Conjecture (CHRC) which says that the orbifold cohomology ring of a compact Gorenstein orbifold is isomorphic to the Betti cohomology ring of a hyperKähler crepant resolution of the underlying singular variety if one takes C as coefficients ; see Conjecture 1.1 in the introduction for the statement. As explained in Ruan [START_REF]The cohomology ring of crepant resolutions of orbifolds[END_REF], the plausibility of CHRC is justified by some considerations from theoretical physics as follows. Topological string theory predicts that the quantum cohomology theory of an orbifold should be equivalent to the quantum cohomology theory of a/any crepant resolution of (possibly some deformation of) the underlying singular variety. On the one hand, the orbifold cohomology ring constructed by Chen-Ruan [START_REF]A new cohomology theory of orbifold[END_REF] is the classical part (genus zero with three marked points) of the quantum cohomology ring of the orbifold (see [START_REF] Chen | Orbifolds in mathematics and physics[END_REF]) ; on the other hand, the classical limit of the quantum cohomology of the resolution is the so-called quantum corrected cohomology ring ( [START_REF]The cohomology ring of crepant resolutions of orbifolds[END_REF]). However if the crepant resolution has a hyperKähler structure, then all its Gromov-Witten invariants as well as the quantum corrections vanish and one expects therefore an equivalence, i.e. an isomorphism of C-algebras, between the orbifold cohomology of the orbifold and the usual Betti cohomology of the hyperKähler crepant resolution.
Before moving on to a more algebro-geometric study, we have to recall some standard definitions and facts on (possibly singular) symplectic varieties (cf. [START_REF] Beauville | Symplectic singularities[END_REF], [START_REF] Namikawa | Extension of 2-forms and symplectic varieties[END_REF]) : Definition 3.1.
• A symplectic form on a smooth complex algebraic variety is a closed holomorphic 2-form that is non-degenerate at each point. A smooth variety is called holomorphic symplectic or just symplectic if it admits a symplectic form. Projective examples include deformations of Hilbert schemes of K3 surfaces and abelian surfaces and generalized Kummer varieties etc.. A typical non-projective example is provided by the cotangent bundle of a smooth variety.
• A (possibly singular) symplectic variety is a normal complex algebraic variety such that its smooth part admits a symplectic form whose pull-back to a/any resolution extends to a holomorphic 2-form. A germ of such a variety is called a symplectic singularity. Such singularities are necessarily rational Gorenstein [START_REF] Beauville | Symplectic singularities[END_REF] and conversely, by a result of Namikawa [START_REF] Namikawa | Extension of 2-forms and symplectic varieties[END_REF], a normal variety is symplectic if and only if it has rational Gorenstein singularities and its smooth part admits a symplectic form. The main examples that we are dealing with are of the form of a quotient by a finite group of symplectic automorphisms of a smooth symplectic variety, e.g., the symmetric products S (n) = S n / Sn of smooth algebraic surfaces S with trivial canonical bundle. • Given a singular symplectic variety X, a symplectic resolution or hyperKähler resolution is a resolution f : Y → X such that the pull-back of a symplectic form on the smooth part X re extends to a symplectic form on Y. Note that a resolution is symplectic if and only if it is crepant :
f * ω X = ω Y .
The definition is independent of the choice of symplectic form on X re . A symplectic resolution is always semi-small. The existence of symplectic resolutions and the relations between them form a highly attractive topic in holomorphic symplectic geometry. An interesting situation, which will not be touched upon in this paper however, is the normalization of the closure of a nilpotent orbit in a complex semi-simple Lie algebra, whose symplectic resolutions are extensively studied in the literature (see [START_REF] Fu | Symplectic resolutions for nilpotent orbits[END_REF], [START_REF] Brion | Symplectic resolutions for conical symplectic varieties[END_REF]). For examples relevant to this paper, see Examples 3.3.
Returning to the story of the HyperKähler Resolution Conjecture, in order to study algebraic cycles and motives of holomorphic symplectic varieties, especially with a view towards Beauville's splitting principle conjecture [START_REF] Beauville | On the splitting of the Bloch-Beilinson filtration[END_REF] (see §7), we would like to propose the motivic version of the CHRC ; see Conjecture 1.2 in the introduction for the general statement. As we are dealing exclusively with the global quotient case in this paper, we prefer to formulate the following more precise statement in this more restricted case. Conjecture 3.2 (MHRC : global quotient case). Let M be a smooth projective holomorphic symplectic variety equipped with an action of a finite group G by symplectic automorphisms of M. If Y is a symplectic resolution of the quotient variety M/G, then we have an isomorphism of (commutative) algebra objects in the category of Chow motives with complex coefficients :
h(Y) h orb ([M/G]) in CHM C .
In particular, we have an isomorphism of graded C-algebras
CH * (Y) C CH * orb ([M/G]) C .
Since G preserves a symplectic form (hence a canonical form) of M, the quotient variety M/G has at worst Gorenstein singularities. As is pointed out in Remark 2.7, this implies that the age functions take values in Z, the orbifold motive h orb ([M/G]) is in CHM, the usual category of (rational) Chow motives and the orbifold Chow ring CH * orb ([M/G]) is integrally graded. Examples 3.3. All examples studied in this paper are in the following situation : let M and G be as in Conjecture 3.2 and Y be (the principal component of) the G-Hilbert scheme G-Hilb(M) of G-clusters of M, that is, a 0-dimensional G-invariant subscheme of M whose global functions form the regular G-representation (cf. [START_REF] Ito | McKay correspondence and Hilbert schemes[END_REF], [START_REF] Nakamura | Hilbert schemes of abelian group orbits[END_REF]). In some interesting cases, Y gives a symplectic resolution of M/G :
• Let S be a smooth algebraic surface and G = Sn act on M = S n by permutation. By the result of Haiman [START_REF] Haiman | Hilbert schemes, polygraphs and the Macdonald positivity conjecture[END_REF], Y = Sn-Hilb(S n ) is isomorphic to the n-th punctual Hilbert scheme S [n] , which is a crepant resolution, hence symplectic resolution if S has trivial canonical bundle, of M/G = S (n) , the n-th symmetric product. • Let A be an abelian surface, M be the kernel of the sum map s : A n+1 → A and G = Sn+1 acts on M by permutations, then Y = G-Hilb(M) is isomorphic to the generalized Kummer variety K n (A) and is a symplectic resolution of M/G.
Although both sides of the isomorphism in Conjecture 3.2 are in the category CHM of motives with rational coefficients, it is in general necessary to make use of roots of unity to realize such an isomorphism of algebra objects. However, in some situation, it is possible to stay in CHM by making some sign change, which is related to the notion of discrete torsion in theoretical physics :
Definition 3.4 (Discrete torsion). For any $g, h \in G$, let
$$(11)\qquad \epsilon(g, h) := (-1)^{\frac{1}{2}\left(\mathrm{age}(g) + \mathrm{age}(h) - \mathrm{age}(gh)\right)}.$$
It is easy to check that
$$(12)\qquad \epsilon(g_1, g_2)\,\epsilon(g_1g_2, g_3) = \epsilon(g_1, g_2g_3)\,\epsilon(g_2, g_3).$$
In the case when $\epsilon(g, h)$ is an integer for all $g, h \in G$, we can define the orbifold Chow motive with discrete torsion of a global quotient stack [M/G], denoted by $h_{\mathrm{orb,dt}}([M/G])$, by the following simple change of sign in Step (5) of Definition 2.5 : the orbifold product with discrete torsion
$$\star_{\mathrm{orb,dt}} : h(M^{g_1})\big(-\mathrm{age}(g_1)\big) \otimes h(M^{g_2})\big(-\mathrm{age}(g_2)\big) \to h(M^{g_1g_2})\big(-\mathrm{age}(g_1g_2)\big)$$
is the correspondence determined by the algebraic cycle
$$\epsilon(g_1, g_2)\cdot\delta_*\big(c_{\mathrm{top}}(F_{g_1,g_2})\big) \in \mathrm{CH}^{\dim M^{g_1}+\dim M^{g_2}+\mathrm{age}(g_1)+\mathrm{age}(g_2)-\mathrm{age}(g_1g_2)}\big(M^{g_1} \times M^{g_2} \times M^{g_1g_2}\big).$$
Thanks to (12), $\star_{\mathrm{orb,dt}}$ is still associative. Similarly, the orbifold Chow ring with discrete torsion of [M/G] is obtained by replacing Equation (10) in Definition 2.6 by
$$(13)\qquad \alpha \star_{\mathrm{orb,dt}} \beta := \epsilon(g, h) \cdot \iota_*\Big( \alpha|_{M^{\langle g,h\rangle}} \cdot \beta|_{M^{\langle g,h\rangle}} \cdot c_{\mathrm{top}}(F_{g,h}) \Big),$$
which is again associative by (12).
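As a sanity check of the sign convention, using only the age values from Example 2.4: for G = 𝔖₂ acting on the square of a surface one finds
$$\epsilon\big((12),(12)\big) = (-1)^{\frac{1}{2}(1+1-0)} = -1,$$
which is exactly the sign change $\alpha \star_{\mathrm{orb,dt}} \beta = -\Delta_*(\alpha\cdot\beta)$ appearing in the proof of Proposition 3.6 below.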
Thanks to the notion of discrete torsions, we can have the following version of Motivic HyperKähler Resolution Conjecture, which takes place in the category of rational Chow motives and involves only rational Chow groups. Conjecture 3.5 (MHRC : global quotient case with discrete torsion). In the same situation as Conjecture 3.2, suppose that ( , h) of Definition 3.4 is an integer for all , h ∈ G. Then we have an isomorphism of (commutative) algebra objects in the category of Chow motives with rational coefficients :
h(Y) h orb,dt ([M/G]) in CHM .
In particular, we have an isomorphism of graded Q-algebras
CH * (Y) CH * orb,dt ([M/G]).
It is easy to see that Conjecture 3.5 implies Conjecture 3.2 : to get rid of the discrete torsion sign change ( , h), it suffices to multiply the isomorphism to each factor h(M )(-age( )), or CH(M ) by √ -1 age( ) , which involves of course the complex numbers (roots of unities at least).
Toy examples.
To better illustrate the conjecture as well as the proof in the next section, we present in this subsection the explicit computation for two simplest nontrivial cases of MHRC.
3.2.1. Hilbert squares of K3 surfaces. Let S be a K3 surface or an abelian surface. Consider the involution f on S × S flipping the two factors. The relevant DM stack is [S 2 / f ] ; its underlying singular symplectic variety is the second symmetric product S (2) , and S [2] is its symplectic resolution. Let S 2 be the blowup of S 2 along its diagonal ∆ S :
$$\begin{array}{ccc} E & \stackrel{j}{\longrightarrow} & \widetilde{S^2} \\ {\scriptstyle\pi}\downarrow & & \downarrow \\ \Delta_S & \stackrel{i}{\longrightarrow} & S\times S \end{array}$$
Then f lifts to a natural involution on S 2 and the quotient is q : S 2 S [2] .
On the one hand, CH * (S [2] ) is identified, via q * , with the invariant part of CH * ( S 2 ) ; on the other hand, by Definition 2.6, CH * orb ([S 2 / S2]) = CH * (S 2 , S2) inv . Therefore to check the MHRC 3.2 or 3.5 in this case, we only have to show the following Proposition 3.6. We have an isomorphism of C-algebras : CH * ( S 2 ) C CH * (S 2 , S2)C. If one makes a sign change in the orbifold product on the right hand side, there is an isomorphism of Q-algebras of these two Chow rings with rational coefficients.
Proof. A straightforward computation using (3) and (4) of Definition 2.5 shows that all obstruction bundles are trivial (at least in the Grothendieck group). Hence by Definition 2.6,
CH * (S 2 , S2) = CH * (S 2 ) ⊕ CH * -1 (∆ S )
whose ring structure is explicitly given by
• For any α ∈ CH i (S 2 ), β ∈ CH j (S 2 ), α orb β = α • β ∈ CH i+j (S 2 ) ; • For any α ∈ CH i (S 2 ), β ∈ CH j (∆ S ), α orb β = α| ∆ • β ∈ CH i+j (∆ S ) ; • For any α ∈ CH i (∆ S ), β ∈ CH j (∆ S ), α orb β = ∆ * (α • β) ∈ CH i+j+2 (S 2 ).
The blow-up formula (cf. for example, [START_REF] Voisin | Hodge theory and complex algebraic geometry[END_REF]Theorem 9.27]) provides an a priori only additive isomorphism
( * , j * π * ) : CH * (S 2 ) ⊕ CH * -1 (∆ S ) -→ CH * ( S 2 ),
whose inverse is given by ( * , -π * j * ). With everything given explicitly as above, it is straightforward to check that this isomorphism respects also the multiplication up to a sign change :
• For any α ∈ CH i (S 2 ), β ∈ CH j (S 2 ), one has * (α orb β) = * (α • β) = * (α) • * (β) ; • For any α ∈ CH i (S 2 ), β ∈ CH j (∆ S ), the projection formula yields j * π * (α orb β) = j * π * (α| ∆ • β) = j * j * * (α) • π * β = * (α) • j * π * (β) ;
• For any α ∈ CH i (∆ S ), β ∈ CH j (∆ S ), we make a sign change : α orb,dt β = -∆ * (α • β) and we get
j * π * (α) • j * π * (β) = j * j * j * π * α • π * β = j * c 1 (N E/ S 2 ) • π * α • π * β = - * ∆ * (α • β) = * (α orb,dt β),
where in the last but one equality one uses the excess intersection formula for the blowup diagram together with the fact that N E/ S 2 = O π (-1) while the excess normal bundle is
π * T S /O π (-1) T π ⊗ O π (-1) O π (1),
where one uses the assumption that K S = 0 to deduce that T π O π (2).
As the sign change is exactly the one given by discrete torsion in Definition 3.4, we have an isomorphism of Q-algebras CH * (S [2] ) CH * orb,dt ([S 2 / S2]). Without making any sign change, the above computation shows that
( * , √ -1 • j * π * ) : CH * (S 2 ) C ⊕ CH * -1 (∆ S ) C -→ CH * ( S 2 ) C
is an isomorphism of C-algebras, whose inverse is given by ( * , √ -1•π * j * ). Hence the isomorphism of C-algebras : CH * (S [2] )
C CH * orb ([S 2 / S2])C.
Kummer K3 surfaces.
Let A be an abelian surface. We always identify A 2 0 := Ker A × A + -→ A with A by (x, -x) → x, under which the associated Kummer K3 surface S := K 1 (A) is a hyperKähler crepant resolution of the symplectic quotient A/ f , where f is the involution of multiplication by -1 on A. Consider the blow-up of A along the fixed locus F which is the set of 2-torsion points of A :
$$\begin{array}{ccc} E & \stackrel{j}{\longrightarrow} & \widetilde{A} \\ {\scriptstyle\pi}\downarrow & & \downarrow \\ F & \stackrel{i}{\longrightarrow} & A. \end{array}$$
Then S is the quotient of A by f , the lifting of the involution f . As in the previous toy example, the MHRC in the present situation is reduced to the following Proposition 3.7. We have an isomorphism of C-algebras : CH * ( A) C CH * (A, S2)C. If one makes a sign change in the orbifold product on the right hand side, there is an isomorphism of Q-algebras of these two Chow rings with rational coefficients.
Proof. As the computation is quite similar to that of Proposition 3.6, we only give a sketch. By Definition 2.6, age(id) = 0, age( f ) = 1 and CH * (A, S2) = CH * (A) ⊕ CH * -1 (F) whose ring structure is given by
• For any α ∈ CH i (A), β ∈ CH j (A), α orb β = α • β ∈ CH i+j (A) ; • For any α ∈ CH i (A), β ∈ CH 0 (F), α orb β = α| F • β ∈ CH i (F) ; • For any α ∈ CH 0 (F), β ∈ CH 0 (F), α orb β = i * (α • β) ∈ CH 2 (A).
Again by the blow-up formula, we have an isomorphism
( * , j * π * ) : CH * (A) ⊕ CH * -1 (F) -→ CH * ( A),
whose inverse is given by ( * , -π * j * ). It is now straightforward to check that they are moreover ring isomorphisms with the left-hand side equipped with the orbifold product. The sign change comes from the negativity of the self-intersection of (the components of) the exceptional divisor.
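Concretely (a remark spelling out the additive identification in this case): the fixed locus F consists of the 16 two-torsion points of A, so
$$\mathrm{CH}^*(A, \mathfrak{S}_2) = \mathrm{CH}^*(A) \oplus \mathrm{CH}^{*-1}(F) \simeq \mathrm{CH}^*(A) \oplus \mathbb{Q}^{16}\ \text{(the latter placed in degree 1)},$$
and on the resolution side the 16 extra classes correspond, under $j_*\pi^*$, to the classes of the 16 exceptional (−2)-curves of the Kummer K3 surface S lying over these points.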
Main results and steps of the proofs
The main results of the paper are the verification of Conjecture 3.5, hence Conjecture 3.2 in the following two cases (A) and (B). See Theorem 1.3 and Theorem 1.4 in the introduction for the precise statements. These two theorems are proved in §5 and §6 respectively. In this section, we explain the main steps of their proofs. Let A be an abelian surface and n be a positive integer.
Case (A) (Hilbert schemes of abelian surfaces)
M = A n endowed with the natural action of G = Sn. The symmetric product A (n) = M/G is a singular symplectic variety and the Hilbert-Chow morphism
ρ : Y = A [n] → A (n)
gives a symplectic resolution.
Case (B) (Generalized Kummer varieties)
M = A n+1 0 := Ker A n+1 s -→ A endowed with the natural action of G = Sn+1. The quotient A n+1 0 / Sn+1 = M/G is a singular symplectic variety. Recall that the generalized Kummer variety K n (A) is the fiber over O A of the isotrivial fibration
A [n+1] → A (n+1) s - → A.
The restriction of the Hilbert-Chow morphism
$$Y = K_n(A) \to A^{n+1}_0/\mathfrak{S}_{n+1}$$
gives a symplectic resolution.
For both cases, the proof proceeds in three steps. For each step, Case (A) is quite straightforward and Case (B) requires more subtle and technical arguments.
Step (i)
Recall the notation h(M, G) := ⊕ ∈G h(M )(-age( )). Denote by
ι : h (M, G) G → h (M, G) and p : h (M, G) h (M, G) G
the inclusion of and the projection onto the G-invariant part h(M, G)^G, which is a direct factor of h(M, G) inside CHM. We will firstly establish an a priori just additive G-equivariant morphism of Chow motives h(Y) → h(M, G), given by some correspondences $\big((-1)^{\mathrm{age}(g)}\, U_g \in \mathrm{CH}(Y \times M^g)\big)_{g\in G}$, inducing an (additive) isomorphism
$$\phi := p \circ \sum_{g\in G} (-1)^{\mathrm{age}(g)}\, U_g : h(Y) \xrightarrow{\;\simeq\;} h_{\mathrm{orb}}([M/G]) = h(M, G)^G.$$
The isomorphism φ will have the property that its inverse is ψ := ( 1 |G| t U ) • ι (see Proposition 5.2 and Proposition 6.4 for Case (A) and (B) respectively). Our goal is then to prove that these morphisms are moreover multiplicative (after the sign change by discrete torsion), i.e. the following diagram is commutative:
$$(14)\qquad \begin{array}{ccc} h(Y)^{\otimes 2} & \xrightarrow{\;\delta_Y\;} & h(Y) \\ {\scriptstyle \phi^{\otimes 2}}\downarrow & & \downarrow{\scriptstyle \phi} \\ h_{\mathrm{orb}}([M/G])^{\otimes 2} & \xrightarrow{\;\star_{\mathrm{orb,dt}}\;} & h_{\mathrm{orb}}([M/G]) \end{array}$$
The main theorem will then be deduced from the following
Proposition 4.1. The symmetrization of the cycle $W := \big( \frac{1}{|G|}\sum_{g} U_g \times \frac{1}{|G|}\sum_{g} U_g \times \sum_{g}(-1)^{\mathrm{age}(g)} U_g \big)_* \delta_Y$ is equal to the symmetrization of the cycle Z determined by
$$Z\big|_{M^{g_1}\times M^{g_2}\times M^{g_3}} = \begin{cases} 0 & \text{if } g_3 \neq g_1g_2 ; \\ \epsilon(g_1,g_2)\cdot\delta_*\,c_{\mathrm{top}}(F_{g_1,g_2}) & \text{if } g_3 = g_1g_2 . \end{cases}$$
Here the symmetrization of a cycle in $\big(\coprod_{g\in G} M^{g}\big)^{3}$ is the operation
$$\gamma \mapsto \frac{1}{|G|^3}\sum_{h_1, h_2, h_3 \in G} (h_1, h_2, h_3)\,.\,\gamma .$$
$$\begin{array}{ccc} h(Y)^{\otimes 2} & \xrightarrow{\;\delta_Y\;} & h(Y) \\ {\scriptstyle \psi^{\otimes 2}}\uparrow & & \downarrow{\scriptstyle \phi} \\ h_{\mathrm{orb}}([M/G])^{\otimes 2} & \xrightarrow{\;\star_{\mathrm{orb,dt}}\;} & h_{\mathrm{orb}}([M/G]) \end{array}$$
By the definition of φ and ψ, we need to show that the following diagram is commutative :
$$(15)\qquad \begin{array}{ccc} h(Y)^{\otimes 2} & \xrightarrow{\;\delta_Y\;} & h(Y) \\ {\scriptstyle \big(\sum_{g}\frac{1}{|G|}{}^tU_g\big)^{\otimes 2}}\uparrow & & \downarrow{\scriptstyle \sum_{g}(-1)^{\mathrm{age}(g)}U_g} \\ h(M,G)^{\otimes 2} & & h(M,G) \\ {\scriptstyle \iota^{\otimes 2}}\uparrow & & \downarrow{\scriptstyle p} \\ h_{\mathrm{orb}}([M/G])^{\otimes 2} & \xrightarrow{\;\star_{\mathrm{orb,dt}}\;} & h_{\mathrm{orb}}([M/G]) \end{array}$$
It is elementary to see that the composition $\big(\sum_{g}(-1)^{\mathrm{age}(g)}U_g\big) \circ \delta_Y \circ \big(\sum_{g}\tfrac{1}{|G|}\,{}^tU_g\big)^{\otimes 2}$ is the correspondence given by the cycle W of Proposition 4.1, while the bottom row is given by the cycle Z. One is therefore reduced to show Proposition 4.1 in both cases (A) and (B).
Step (ii)
We prove that W on the one hand and Z on the other hand, as well as their symmetrizations, are both symmetrically distinguished in the sense of O'Sullivan [START_REF] Peter | Algebraic cycles on an abelian variety[END_REF] (see Definition 5.4). In Case (B) concerning the generalized Kummer varieties, we have to generalize a little bit the category of abelian varieties and the corresponding notion of symmetrically distinguished cycles, in order to deal with algebraic cycles on 'non-connected abelian varieties' in a canonical way. By the result of O'Sullivan [START_REF] Peter | Algebraic cycles on an abelian variety[END_REF] (see Theorem 5.5 and Theorem 5.6), it suffices for us to check that the symmetrizations of W and Z are numerically equivalent.
Step (iii)
Finally, in Case (A), explicit computations of the cohomological realization of φ show that the induced (iso-)morphism φ :
H * (Y) → H * orb ([M/G]
) is the same as the one constructed in [START_REF] Lehn | The cup product of Hilbert schemes for K3 surfaces[END_REF]. While in Case (B), based on the result of [START_REF] Marc | Twisted cohomology of the Hilbert schemes of points on surfaces[END_REF], one can prove that the cohomological realization of φ satisfies Ruan's original Cohomological HyperKähler Resolution Conjecture. Therefore the symmetrizations of W and Z are homologically equivalent, which finishes the proof by Step (ii).
Case (A) : Hilbert schemes of abelian surfaces
We prove Theorem 1.3 in this section. Notations are as before : M := A n with the action of G := Sn and the quotient X := A (n) := M/G. Then the Hilbert-Chow morphism
ρ : A [n] =: Y → A (n)
gives a symplectic resolution.
5.1.
Step (i) -Additive isomorphisms. In this subsection, we establish an isomorphism between h(Y) and h orb ([M/G]) by using results in [START_REF] Andrea | The Chow groups and the motive of the Hilbert scheme of points on a surface[END_REF]. First we construct some correspondences similar to the ones used in loc.cit. . For each ∈ G = Sn, let O( ) be the set of orbits of as a permutation of
{1, 2, • • • n}. It is computed in Example 2.4 that age( ) = n -|O( )|.
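For instance, for n = 3 the three conjugacy classes of 𝔖₃ give (a direct application of this formula):
$$\mathrm{age}(\mathrm{id}) = 3 - 3 = 0, \qquad \mathrm{age}\big((12)\big) = 3 - 2 = 1, \qquad \mathrm{age}\big((123)\big) = 3 - 1 = 2.$$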
As in [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF], we make the natural identification
(A n ) A O( ) .
Let
$$(16)\qquad U_g := \big(A^{[n]} \times_{A^{(n)}} (A^n)^g\big)_{\mathrm{red}} = \Big\{ (z, x_1, \cdots, x_n) \in A^{[n]} \times (A^n)^g \ \Big|\ \rho(z) = [x_1] + \cdots + [x_n] \Big\}$$
be the incidence variety, where $\rho : A^{[n]} \to A^{(n)}$ is the Hilbert-Chow morphism. As the notation suggests, $U_g$ is the fixed locus of the induced automorphism $g$ on the isospectral Hilbert scheme
$$U := U_{\mathrm{id}} = A^{[n]} \times_{A^{(n)}} A^n = \Big\{ (z, x_1, \cdots, x_n) \in A^{[n]} \times A^n \ \Big|\ \rho(z) = [x_1] + \cdots + [x_n] \Big\}.$$
Note that $\dim U_g = n + |O(g)| = 2n - \mathrm{age}(g)$ and $\dim\big(A^{[n]} \times (A^n)^g\big) = 2\dim U_g$. We consider the following correspondence for each $g \in G$,
$$(17)\qquad \Gamma_g := (-1)^{\mathrm{age}(g)}\, U_g \in \mathrm{CH}_{2n-\mathrm{age}(g)}\big(A^{[n]} \times (A^n)^g\big),$$
which defines a morphism of Chow motives :
$$(18)\qquad \Gamma := \sum_{g\in G} \Gamma_g : h\big(A^{[n]}\big) \to \bigoplus_{g\in G} h\big((A^n)^g\big)\big(-\mathrm{age}(g)\big) =: h(A^n, \mathfrak{S}_n),$$
here we used the notation from Definition 2.5.
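For n = 2 these objects can be made completely explicit (a brief unpacking of the definitions, using the identification $(A^2)^{(12)} = \Delta_A \simeq A$): $U_{\mathrm{id}}$ is the isospectral Hilbert scheme $A^{[2]} \times_{A^{(2)}} A^2$, which is the blow-up of $A^2$ along its diagonal, while $U_{(12)} = \{(z, x) \mid \rho(z) = 2[x]\}$ is the exceptional divisor of the Hilbert-Chow morphism, so that
$$\Gamma_{(12)} = -\,U_{(12)} \in \mathrm{CH}_{3}\big(A^{[2]} \times A\big), \qquad \dim U_{(12)} = 2\cdot 2 - \mathrm{age}\big((12)\big) = 3 .$$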
Lemma 5.1. The algebraic cycle Γ in ( 18) defines an Sn-equivariant morphism with respect to the trivial action on A [n] and the action on h(A n , Sn) of Definition 2.5.
Proof. For each , h ∈ G, as the age function is invariant under conjugation, it suffices to show that the following composition is equal to
$$\Gamma_{hgh^{-1}} : h\big(A^{[n]}\big) \xrightarrow{\;\Gamma_g\;} h\big((A^n)^{g}\big)\big(-\mathrm{age}(g)\big) \xrightarrow{\;h\;} h\big((A^n)^{hgh^{-1}}\big)\big(-\mathrm{age}(hgh^{-1})\big),$$
which follows from the commutative diagram :
$$\begin{array}{ccccc} A^{[n]} & \longleftarrow & U_g & \xrightarrow{\;h\;} & U_{hgh^{-1}} \\ & & \downarrow & & \downarrow \\ & & (A^n)^{g} & \xrightarrow{\;h\;} & (A^n)^{hgh^{-1}} . \end{array}$$
As before, $\iota : h(A^n, G)^G \hookrightarrow h(A^n, G)$ and $p : h(A^n, G) \twoheadrightarrow h(A^n, G)^G$ are the inclusion of and the projection onto the G-invariant part. Thanks to Lemma 5.1, we obtain the desired morphism
$$(19)\qquad \phi := p \circ \Gamma : h\big(A^{[n]}\big) \to h_{\mathrm{orb}}([A^n/G]) = h(A^n, G)^G, \qquad \text{which satisfies } \Gamma = \iota \circ \phi.$$
Now one can reformulate the result of de Cataldo-Migliorini [START_REF] Andrea | The Chow groups and the motive of the Hilbert scheme of points on a surface[END_REF], which actually works for all surfaces, as follows :
Proposition 5.2. The morphism φ is an isomorphism, whose inverse is given by $\psi := \frac{1}{n!}\sum_{g\in G} {}^tU_g \circ \iota$, where ${}^tU_g : h\big((A^n)^g\big)\big(-\mathrm{age}(g)\big) \to h\big(A^{[n]}\big)$ is the transposed correspondence of $U_g$.
Proof. We start by a recollection of some notation from [START_REF] Andrea | The Chow groups and the motive of the Hilbert scheme of points on a surface[END_REF]. Let P(n) be the set of partitions of n. Given such a partition λ
$= (\lambda_1 \geq \cdots \geq \lambda_l) = (1^{a_1}\cdots n^{a_n})$, where $l := |\lambda|$ is the length of $\lambda$ and $a_i = |\{\, j \mid 1 \leq j \leq n ;\ \lambda_j = i \,\}|$, we define $\mathfrak{S}_\lambda := \mathfrak{S}_{a_1} \times \cdots \times \mathfrak{S}_{a_n}$.
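For example (illustrating the notation only), for n = 4 and λ = (2, 1, 1) = (1² 2¹) one has l = |λ| = 3, a₁ = 2, a₂ = 1, a₃ = a₄ = 0, so that
$$\mathfrak{S}_\lambda = \mathfrak{S}_2 \times \mathfrak{S}_1 \simeq \mathbb{Z}/2 .$$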
Let A λ be A l , equipped with the natural action of Sλ and with the natural morphism to A (n) by sending (x 1 , • • • , x l ) to l j=1 λ j [x j ]. Define the incidence subvariety U λ := (A [n] × A (n) A λ ) red . Denote the quotient A (λ) := A λ / Sλ and U (λ) := U λ / Sλ, where the latter is also regarded as a correspondence between A [n] and A (λ) . The main theorem in [START_REF] Andrea | The Chow groups and the motive of the Hilbert scheme of points on a surface[END_REF] asserts that the following correspondence is an isomorphism :
φ := λ∈P(n) U (λ) : h(A [n] ) -→ λ∈P(n) h(A (λ) )(|λ| -n) ;
moreover, the inverse of φ is given by
ψ := λ∈P(n) 1 m λ • t U (λ) : λ∈P(n) h(A (λ) )(|λ| -n) -→ h(A [n] ),
where m λ = (-1) n-|λ| |λ| j=i λ j is a non-zero constant. To relate our morphism φ to the above isomorphism φ as well as their inverses, one uses the following elementary Lemma 5.3. One has a canonical isomorphism :
∈S n h ((A n ) ) (-age( )) Sn -→ λ∈P(n) h A (λ) (|λ| -n).
Proof. By $g \in \lambda$, we mean that the partition determined by the permutation $g \in S_n$ is $\lambda \in P(n)$. For each partition $\lambda$ of length $l$ and any $g \in \lambda$, the stabilizer of $g$ for the conjugation action of $S_n$ is isomorphic to the semi-direct product $(\mathbb{Z}/\lambda_1 \times \dots \times \mathbb{Z}/\lambda_l) \rtimes S_\lambda$. Thus
$\Big(\bigoplus_{g \in \lambda} h\big((A^n)^g\big)\Big)^{S_n} \cong h\big((A^n)^g\big)^{(\mathbb{Z}/\lambda_1 \times \dots \times \mathbb{Z}/\lambda_l) \rtimes S_\lambda} \cong h\big((A^n)^g\big)^{S_\lambda} \cong h\big(A^{(\lambda)}\big).$
One concludes by taking the direct sum of this isomorphism over all $\lambda \in P(n)$.
Now it is easy to conclude the proof of Proposition 5.2. For any $g \in \lambda$, the isomorphism between $(A^n)^g$ and $A^\lambda$ identifies $U^g$ with $U_\lambda$. Hence the composition of $\phi$ with the isomorphism in Lemma 5.3 is equal to $\bigoplus_{\lambda \in P(n)} \frac{n!}{m_\lambda}\, U^{(\lambda)}$, which is an isomorphism since $\phi' = \sum_{\lambda \in P(n)} U^{(\lambda)}$ is. As a consequence, $\phi$ itself is an isomorphism and its inverse is the composition of the isomorphism in Lemma 5.3 followed by $\bigoplus_{\lambda \in P(n)} \frac{1}{n!} \cdot {}^t U^{(\lambda)}$, which is $\frac{1}{n!} \sum_{g \in G} {}^t U^g \circ \iota =: \psi$.
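For concreteness (an illustration only, not used in the sequel), here is the case $n = 2$ of the isomorphism of Proposition 5.2, spelled out with the constants defined above:

% Illustration: n = 2, P(2) = {(1,1), (2)}. Proposition 5.2 then reads
\begin{align*}
h(A^{[2]}) \;&\cong\; \Big(h(A^2)\oplus h(A)(-1)\Big)^{S_2}
           \;=\; h(A^{(2)}) \oplus h(A)(-1),\\
m_{(1,1)} &= (-1)^{0}\cdot 1\cdot 1 = 1, \qquad m_{(2)} = (-1)^{1}\cdot 2 = -2 .
\end{align*}
% In degree 2 this recovers b_2(A^{[2]}) = 12 + 1 = 13, the extra class
% coming from the exceptional divisor of the Hilbert--Chow morphism.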
Then to show Theorem 1.3, it suffices to prove Proposition 4.1 in this situation, which will be done in the next two steps.
Step (ii) -Symmetrically distinguished cycles on abelian varieties.
The following definition is due to O'Sullivan [START_REF] Peter | Algebraic cycles on an abelian variety[END_REF]. Recall that all Chow groups are with rational coefficients. As in loc. cit., we denote in this section by $\overline{\mathrm{CH}}$ the Q-vector space of algebraic cycles modulo the numerical equivalence relation.
Definition 5.4 (Symmetrically distinguished cycles [START_REF] Peter | Algebraic cycles on an abelian variety[END_REF]). Let $A$ be an abelian variety and $\alpha \in \mathrm{CH}^i(A)$. For each integer $m \geq 0$, denote by $V_m(\alpha)$ the Q-vector subspace of $\mathrm{CH}(A^m)$ generated by elements of the form
$p_*\big(\alpha^{r_1} \times \alpha^{r_2} \times \dots \times \alpha^{r_n}\big),$
where $n \leq m$, the $r_j \geq 0$ are integers, and $p : A^n \to A^m$ is a closed immersion with each component $A^n \to A$ being either a projection or the composite of a projection with $[-1] : A \to A$. Then $\alpha$ is called symmetrically distinguished if for every $m$ the restriction of the projection $\mathrm{CH}(A^m) \to \overline{\mathrm{CH}}(A^m)$ to $V_m(\alpha)$ is injective.
Despite its seemingly complicated definition, symmetrically distinguished cycles behave very well. More precisely, we have
Theorem 5.5 (O'Sullivan [START_REF] Peter | Algebraic cycles on an abelian variety[END_REF]). Let $A$ be an abelian variety.
(1) The symmetrically distinguished cycles in $\mathrm{CH}^i(A)$ form a sub-Q-vector space.
(2) The fundamental class of $A$ is symmetrically distinguished and the intersection product of two symmetrically distinguished cycles is symmetrically distinguished. They form therefore a graded sub-Q-algebra of $\mathrm{CH}^*(A)$.
(3) Let $f : A \to B$ be a morphism of abelian varieties ; then $f_* : \mathrm{CH}(A) \to \mathrm{CH}(B)$ and $f^* : \mathrm{CH}(B) \to \mathrm{CH}(A)$ preserve symmetrically distinguished cycles.
The reason why this notion is very useful in practice is that it allows us to conclude an equality of algebraic cycles modulo rational equivalence from an equality modulo numerical equivalence (or, a fortiori, modulo homological equivalence) :
Theorem 5.6 (O'Sullivan [START_REF] Peter | Algebraic cycles on an abelian variety[END_REF]). The composition $\mathrm{CH}(A)_{sd} \hookrightarrow \mathrm{CH}(A) \twoheadrightarrow \overline{\mathrm{CH}}(A)$ is an isomorphism of Q-algebras, where $\mathrm{CH}(A)_{sd}$ is the sub-algebra of symmetrically distinguished cycles. In other words, in each numerical class of algebraic cycles on $A$, there exists a unique symmetrically distinguished algebraic cycle modulo rational equivalence. In particular, a (polynomial of) symmetrically distinguished cycles is trivial in $\mathrm{CH}(A)$ if and only if it is numerically trivial.
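As a simple illustration of how Theorem 5.5 is used below (for instance in the proof of Proposition 5.7), note the following:

% The diagonal is symmetrically distinguished: the fundamental class [A]
% is symmetrically distinguished by Theorem 5.5(2), and the diagonal
% embedding is a homomorphism of abelian varieties, so by Theorem 5.5(3)
\[
\Delta_A \;=\; \delta_{*}[A] \;\in\; \mathrm{CH}(A\times A)_{sd},
\qquad \delta\colon A\hookrightarrow A\times A,\ x\mapsto (x,x).
\]
% Likewise every big diagonal in A^N, being the push-forward of a
% fundamental class along a homomorphism of abelian varieties, is
% symmetrically distinguished.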
Returning to the proof of Theorem 1.3, it remains to prove Proposition 4.1. Keeping the same notation as in Step (i), we first prove that in our situation the two cycles in Proposition 4.1 are symmetrically distinguished.
Proposition 5.7. The following two algebraic cycles, as well as their symmetrizations, are symmetrically distinguished in $\mathrm{CH}\Big(\big(\coprod_{g \in G} (A^n)^g\big)^3\Big)$ :
• $W := \Big(\frac{1}{|G|}\, U^{g_1} \times \frac{1}{|G|}\, U^{g_2} \times (-1)^{\operatorname{age}(g_3)}\, U^{g_3}\Big)_* \,\delta_{A^{[n]}}$ ;
• the algebraic cycle $Z$ determining the orbifold product (Definition 2.5(5)) with the sign change by discrete torsion (Definition 3.4) :
$Z|_{M^{g_1} \times M^{g_2} \times M^{g_3}} = \begin{cases} 0 & \text{if } g_3 \neq g_1 g_2,\\ \epsilon(g_1, g_2) \cdot \delta_*\, c_{\mathrm{top}}(F_{g_1, g_2}) & \text{if } g_3 = g_1 g_2. \end{cases}$
Proof. For $W$, it amounts to showing that for any $g_1, g_2, g_3 \in G$, the cycle $(U^{g_1} \times U^{g_2} \times U^{g_3})_*(\delta_{A^{[n]}})$ is symmetrically distinguished in $\mathrm{CH}\big((A^n)^{g_1} \times (A^n)^{g_2} \times (A^n)^{g_3}\big)$. Indeed, by [55, Proposition 5.6], $(U^{g_1} \times U^{g_2} \times U^{g_3})_*(\delta_{A^{[n]}})$ is a polynomial of big diagonals of $(A^n)^{g_1} \times (A^n)^{g_2} \times (A^n)^{g_3} =: A^N$.
However, all big diagonals of $A^N$ are clearly symmetrically distinguished since $\Delta_A \in \mathrm{CH}(A \times A)$ is. By Theorem 5.5, $W$ is symmetrically distinguished. As for $Z$, for any fixed $g_1, g_2 \in G$, the obstruction bundle $F_{g_1, g_2}$ is easily seen to always be a trivial vector bundle, at least virtually, hence its top Chern class is either $0$ or $1$ (the fundamental class), which is of course symmetrically distinguished. Also recall that (Definition 2.5)
$\delta : (A^n)^{\langle g_1, g_2 \rangle} \to (A^n)^{g_1} \times (A^n)^{g_2} \times (A^n)^{g_1 g_2},$
which is a (partial) diagonal inclusion, in particular a morphism of abelian varieties. Therefore $\delta_*\big(c_{\mathrm{top}}(F_{g_1, g_2})\big)$ is symmetrically distinguished by Theorem 5.5, hence so is $Z$.
Finally, since any automorphism in $G \times G \times G$ preserves symmetrically distinguished cycles, the symmetrizations of $Z$ and $W$ remain symmetrically distinguished.
By Theorem 5.6, in order to show Proposition 4.1, it suffices to show on the one hand that the symmetrizations of Z and W are both symmetrically distinguished, and on the other hand that they are numerically equivalent. The first part is exactly the previous Proposition 5.7 and we now turn to an a priori stronger version of the second part in the following final step.
5.3.
Step (iii) – Cohomological realizations. We will show in this subsection that the symmetrizations of the algebraic cycles $W$ and $Z$ have the same (rational) cohomology class. To this end, it is enough to show the following
Proposition 5.8. The cohomology realization of the (additive) isomorphism $\phi : h(A^{[n]}) \to \big(\bigoplus_{g \in G} h\big((A^n)^g\big)(-\operatorname{age}(g))\big)^{S_n}$ is an isomorphism of Q-algebras
$\phi : H^*(A^{[n]}) \longrightarrow H^*_{orb, dt}\big([A^n / S_n]\big) = \Big(\bigoplus_{g \in S_n} H^{*-2\operatorname{age}(g)}\big((A^n)^g\big),\ \star_{orb, dt}\Big)^{S_n}.$
In other words, Sym(W) and Sym(Z) are homologically equivalent.
Proof. The existence of an isomorphism of Q-algebras between $H^*(A^{[n]})$ and $H^*_{orb,dt}([A^n/S_n])$ is established by Fantechi and Göttsche [25, Theorem 3.10], based on the work of Lehn and Sorger [START_REF] Lehn | The cup product of Hilbert schemes for K3 surfaces[END_REF]. Therefore, by the definition of $\phi$ in Step (i), it suffices to show that the cohomological correspondence
$\Gamma_* := \sum_{g \in S_n} (-1)^{\operatorname{age}(g)}\, U^g_* : H^*(A^{[n]}) \to \bigoplus_{g \in S_n} H^{*-2\operatorname{age}(g)}\big((A^n)^g\big)$
coincides with the following inverse of the isomorphism $\Psi$ used in Fantechi–Göttsche [25, Theorem 3.10] :
$\Phi : H^*(A^{[n]}) \to \bigoplus_{g \in S_n} H^{*-2\operatorname{age}(g)}\big((A^n)^g\big), \qquad p_{\lambda_1}(\alpha_1) \cdots p_{\lambda_l}(\alpha_l)\, \mathbf{1} \mapsto n! \cdot \operatorname{Sym}(\alpha_1 \times \dots \times \alpha_l).$
Let us explain the notation from [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF] in the above formula : $\alpha_1, \dots, \alpha_l \in H^*(A)$ ; $\times$ stands for the exterior product $\prod \mathrm{pr}_i^*(-)$ ; $p$ is the Nakajima operator (cf. [START_REF] Nakajima | Lectures on Hilbert schemes of points on surfaces[END_REF]) ; $\mathbf{1} \in H^0(A^{[0]}) \cong \mathbb{Q}$ is the fundamental class of the point ; $\lambda = (\lambda_1, \dots, \lambda_l)$ is a partition of $n$ ; $g \in S_n$ is a permutation of type $\lambda$ with a numbering of the orbits of $g$ (as a permutation) chosen, $\{1, \dots, l\} \xrightarrow{\sim} O(g)$, such that $\lambda_j$ is the length of the $j$-th orbit ; then the class $\alpha_1 \times \dots \times \alpha_l$ is placed in the direct summand indexed by $g$, and $\operatorname{Sym}$ means the symmetrization operation $\frac{1}{n!} \sum_{h \in S_n} h$. Note that $\operatorname{Sym}(\alpha_1 \times \dots \times \alpha_l)$ is independent of the choice of $g$, numbering, etc.
Recall that the Nakajima operator $p_k(\alpha) : H^*(A^{[l]}) \to H^*(A^{[l+k]})$ is, for any $l$, defined by $\beta \mapsto I_{l,l+k,*}(\beta \times \alpha) = q_*\, p^*(\beta \times \alpha)$, using the following correspondence :
$I_{l,l+k} := \big\{ (\xi', x, \xi) \in A^{[l+k]} \times A \times A^{[l]} \ \big|\ \xi \subset \xi'\ ;\ \rho(\xi') = \rho(\xi) + k[x] \big\},$
equipped with its projections $q : I_{l,l+k} \to A^{[l+k]}$ and $p : I_{l,l+k} \to A^{[l]} \times A$.
Here and in the sequel, $\rho$ is always the Hilbert–Chow morphism. By a repeated (but straightforward) use of the projection formula, one has
$p_{\lambda_1}(\alpha_1) \cdots p_{\lambda_l}(\alpha_l)\, \mathbf{1} = I_{\lambda,*}(\alpha_1 \times \dots \times \alpha_l) = q_*\, p^*(\alpha_1 \times \dots \times \alpha_l)$
using the following correspondence :
$I_\lambda := \big\{ (x_1, \dots, x_l, \xi_1, \dots, \xi_l)\ \big|\ x_i \in A\ ;\ \xi_1 \subset \dots \subset \xi_l\ ;\ \rho(\xi_i) = \rho(\xi_{i-1}) + \lambda_i [x_i] \big\},$
with projections $q : I_\lambda \to A^{[n]}$ and $p : I_\lambda \to A^\lambda := A^l$, where $(x_1, \dots, x_l, \xi_1, \dots, \xi_l)$ is sent to $(x_1, \dots, x_l)$ by $p$ and to $\xi_l \in A^{[n]}$ by $q$. It is easy to see that the natural morphism
$I_\lambda \to U_\lambda = \big\{ (\xi, x_1, \dots, x_l) \in A^{[n]} \times A^l \ \big|\ \rho(\xi) = \textstyle\sum_{i=1}^l \lambda_i [x_i] \big\},$
forgetting the subschemes $\xi_1, \dots, \xi_{l-1}$, is a birational morphism. Therefore
$p_{\lambda_1}(\alpha_1) \cdots p_{\lambda_l}(\alpha_l)\, \mathbf{1} = U_{\lambda,*}(\alpha_1 \times \dots \times \alpha_l)$
and one only has to show that
(20) $\sum_{g \in S_n} (-1)^{\operatorname{age}(g)}\, U^g_*\, U_{\lambda,*}(\alpha_1 \times \dots \times \alpha_l) = n! \cdot \operatorname{Sym}(\alpha_1 \times \dots \times \alpha_l).$
Indeed, for a given $g \in G$, if $g$ is not of type $\lambda$, then by [START_REF] Andrea | The Chow groups and the motive of the Hilbert scheme of points on a surface[END_REF], Proposition 5.1.3], we know that $U^g_* \circ U_{\lambda,*} = 0$. For any $g \in G$ of type $\lambda$, fix a numbering $\varphi : \{1, \dots, l\} \xrightarrow{\sim} O(g)$ such that $|\varphi(j)| = \lambda_j$, and let $\varphi : A^\lambda = A^l \to A^{O(g)}$ be the induced isomorphism. Then, denoting by $q : A^\lambda \twoheadrightarrow A^{(\lambda)}$ the quotient map by $S_\lambda$, the computation of [START_REF] Andrea | The Chow groups and the motive of the Hilbert scheme of points on a surface[END_REF], Proposition 5.1.4] implies that for such $g \in \lambda$,
$U^g_* \circ U_{\lambda,*}(\alpha_1 \times \dots \times \alpha_l) = \varphi_* \circ {}^t U_{\lambda *} \circ U_{\lambda *}(\alpha_1 \times \dots \times \alpha_l) = (-1)^{n-|\lambda|} \prod_{i=1}^{|\lambda|} \lambda_i \cdot \varphi_* \circ q^* \circ q_*(\alpha_1 \times \dots \times \alpha_l) = (-1)^{n-|\lambda|} \prod_{i=1}^{|\lambda|} \lambda_i \cdot \deg(q) \cdot \operatorname{Sym}(\alpha_1 \times \dots \times \alpha_l) = (-1)^{n-|\lambda|} \prod_{i=1}^{|\lambda|} \lambda_i \cdot |S_\lambda| \cdot \operatorname{Sym}(\alpha_1 \times \dots \times \alpha_l).$
Putting those together, we have
$\sum_{g \in S_n} (-1)^{\operatorname{age}(g)}\, U^g_*\, U_{\lambda,*}(\alpha_1 \times \dots \times \alpha_l) = \sum_{g \in \lambda} (-1)^{n-|\lambda|}\, U^g_*\, U_{\lambda,*}(\alpha_1 \times \dots \times \alpha_l) = \sum_{g \in \lambda} \prod_{i=1}^{|\lambda|} \lambda_i \cdot |S_\lambda| \cdot \operatorname{Sym}(\alpha_1 \times \dots \times \alpha_l) = n! \cdot \operatorname{Sym}(\alpha_1 \times \dots \times \alpha_l),$
where the last equality uses the simple counting of conjugacy classes : the number of permutations of type $\lambda$ is
$\frac{n!}{\prod_{i=1}^{|\lambda|} \lambda_i \cdot |S_\lambda|}.$
The desired equality (20), hence also the Proposition, is proved.
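For the reader's convenience, the counting formula used in the last step can be checked in a small case (illustration only):

% Number of permutations in S_n of cycle type lambda:  n! / ( prod_i lambda_i * |S_lambda| ).
% For n = 3:
%   lambda = (2,1):   3!/(2*1*1) = 3   (the three transpositions)
%   lambda = (3):     3!/(3*1)   = 2   (the two 3-cycles)
%   lambda = (1,1,1): 3!/(1*3!)  = 1   (the identity)
\[
3 + 2 + 1 \;=\; 3! \;=\; |S_3|.
\]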
As explained in §4, the proof of Theorem 1.3 is now complete : Proposition 5.7 and Proposition 5.8 together imply that Sym(W) and Sym(Z) are rationally equivalent using Theorem 5.6. Therefore Proposition 4.1 holds in our situation Case (A), which means exactly that the isomorphism φ in Proposition 5.2 (defined in [START_REF] Andrea | The Chow groups and the motive of the Hilbert scheme of points on a surface[END_REF]) is also multiplicative with respect to the product structure on h(A [n] ) given by the small diagonal and the orbifold product with sign change by discrete torsion on h(A n , Sn) Sn .
Case (B) : Generalized Kummer varieties
We prove Theorem 1.4 in this section. Notation is as in the beginning of §4 :
$M = A^{n+1}_0 := \operatorname{Ker}\big( A^{n+1} \xrightarrow{\ +\ } A \big),$
which is non-canonically isomorphic to $A^n$, with the action of $G = S_{n+1}$ and the quotient $X := A^{(n+1)}_0 := M/G$. Then the restriction of the Hilbert–Chow morphism to the generalized Kummer variety
$K_n(A) =: Y \xrightarrow{\ f\ } A^{(n+1)}_0$
is a symplectic resolution.
6.1.
Step (i) -Additive isomorphisms. We use the result in [START_REF] Andrea | The Chow motive of semismall resolutions[END_REF] to establish an additive isomor-
phism h(Y) -→ h orb ([M/G]).
Recall that a morphism f : Y → X is called semi-small if for all integer k ≥ 0, the codimension of the locus x ∈ X | dim f -1 (x) ≥ k is at least 2k. In particular, f is generically finite. Consider a (finite) Whitney stratification X = a X a by connected strata, such that for any a, the restriction f | f -1 (X a ) : f -1 (X a ) → X a is a topological fiber bundle of fiber dimension d a . Then the semismallness condition says that codim X a ≥ 2d a for any a. In that case, a stratum X a is said to be relevant if the equality holds : codim X a = 2d a . Here is the key result we are using : Theorem 6.1 (de Cataldo -Migliorini [START_REF] Andrea | The Chow motive of semismall resolutions[END_REF]). Let f : Y → X be a semi-small morphism of complex projective varieties with Y being smooth. For each connected relevant stratum X a of codimension 2d a (and fiber dimension d a ), let Z a → X a be the (not necessarily connected) étale cover corresponding to the π 1 (X a )-set of maximal (=d a ) dimensional irreducible components of fibers. Assume Z a → X a is a projective compactification with Z a admitting a stratification with strata being finite group quotients of smooth varieties. Then (the closure of) the incidence subvarieties between Z a and Y induce an isomorphism of Chow motives :
a h(Z a )(-d a ) h(Y).
Moreover, the inverse isomorphism is again given by the incidence subvarieties but with different non-zero coefficients. Remark 6.2.
• The statement about the correspondence inducing isomorphisms as well as the (non-zero) coefficients of the inverse correspondence is contained in [20, §2.5].
• Since any symplectic resolution of a (singular) symplectic variety is semi-small, the previous theorem applies to the situation of Conjecture 3.2 and 3.5. • Note that the correspondence in [START_REF] Andrea | The Chow groups and the motive of the Hilbert scheme of points on a surface[END_REF] which is used in §5 for Case (A) is a special case of Theorem 6.1.
Let us start by making precise a Whitney stratification for the (semi-small) symplectic resolution $Y = K_n(A) \to X = A^{(n+1)}_0$, as follows. The notation is as in the proof of Proposition 5.2. Let $P(n+1)$ be the set of partitions of $n+1$ ; then
$X = \coprod_{\lambda \in P(n+1)} X_\lambda,$
where the locally closed strata are defined by
$X_\lambda := \Big\{ \sum_{i=1}^{|\lambda|} \lambda_i [x_i] \in A^{(n+1)} \ \Big|\ \sum_{i=1}^{|\lambda|} \lambda_i x_i = 0\ ;\ x_i \text{ distinct} \Big\},$
with normalization of the closure being $Z_\lambda = \overline{X_\lambda}^{\,\mathrm{norm}} = A^{(\lambda)}_0 := A^\lambda_0 / S_\lambda$, where
(21) $A^\lambda_0 = \Big\{ (x_1, \dots, x_{|\lambda|}) \in A^\lambda \ \Big|\ \sum_{i=1}^{|\lambda|} \lambda_i x_i = 0 \Big\}.$
It is easy to see that $\dim X_\lambda = \dim A^\lambda_0 = 2(|\lambda| - 1)$, while the fibers over $X_\lambda$ are isomorphic to a product of Briançon varieties ([START_REF] Brianc ¸on | Description de Hilb n C{x, y}[END_REF]) $\prod_{i=1}^{|\lambda|} B_{\lambda_i}$, which is irreducible of dimension $\sum_{i=1}^{|\lambda|} (\lambda_i - 1) = n + 1 - |\lambda| = \frac{1}{2} \operatorname{codim} X_\lambda$.
In conclusion, f :
K n (A) → A (n+1) 0
is a semi-small morphism with all strata being relevant and all fibers over strata being irreducible. One can therefore apply Theorem 6.1 to get the following
Corollary 6.3. For each $\lambda \in P(n+1)$, let
$V_\lambda := \Big\{ (\xi, x_1, \dots, x_{|\lambda|}) \ \Big|\ \rho(\xi) = \sum_{i=1}^{|\lambda|} \lambda_i [x_i]\ ;\ \sum_{i=1}^{|\lambda|} \lambda_i x_i = 0 \Big\} \subset K_n(A) \times A^\lambda_0$
be the incidence subvariety, whose dimension is $n - 1 + |\lambda|$. Then the quotients $V^{(\lambda)} := V_\lambda / S_\lambda \subset K_n(A) \times A^{(\lambda)}_0$
induce an isomorphism of rational Chow motives :
$\phi : h(K_n(A)) \longrightarrow \bigoplus_{\lambda \in P(n+1)} h\big(A^{(\lambda)}_0\big)(|\lambda| - n - 1).$
Moreover, the inverse $\psi := \phi^{-1}$ is induced by $\sum_{\lambda \in P(n+1)} \frac{1}{m_\lambda}\, V^{(\lambda)}$, where $m_\lambda = (-1)^{n+1-|\lambda|} \prod_{i=1}^{|\lambda|} \lambda_i$ is a non-zero constant.
Similarly to Proposition 5.2 in Case (A), the previous Corollary 6.3 allows us to establish an additive isomorphism between $h(K_n(A))$ and $h_{orb}([A^{n+1}_0 / S_{n+1}])$ :
Proposition 6.4. Let $M = A^{n+1}_0$ with the action of $G = S_{n+1}$. Let $p$ and $\iota$ denote the projection onto and the inclusion of the $G$-invariant part of $h(M, G)$. For each $g \in G$, let
(22) $V^g := \big( K_n(A) \times_{A^{(n+1)}_0} M^g \big)_{\mathrm{red}} \subset K_n(A) \times M^g$
be the incidence subvariety. Then they induce an isomorphism of rational Chow motives :
$\phi := p \circ \sum_{g \in G} (-1)^{\operatorname{age}(g)}\, V^g : h(K_n(A)) \longrightarrow \Big( \bigoplus_{g \in G} h(M^g)(-\operatorname{age}(g)) \Big)^G.$
Moreover, its inverse $\psi$ is given by $\frac{1}{(n+1)!} \cdot \sum_{g \in G} {}^t V^g \circ \iota$.
Proof. The proof goes exactly as for Proposition 5.2, with Lemma 5.3 replaced by the following canonical isomorphism :
$\Big( \bigoplus_{g \in S_{n+1}} h\big((A^{n+1}_0)^g\big)(-\operatorname{age}(g)) \Big)^{S_{n+1}} \longrightarrow \bigoplus_{\lambda \in P(n+1)} h\big(A^{(\lambda)}_0\big)(|\lambda| - n - 1).$
Indeed, let $\lambda$ be the partition determined by $g$ ; then it is easy to compute $\operatorname{age}(g) = n + 1 - |O(g)| = n + 1 - |\lambda|$, and moreover the quotient of $(A^{n+1}_0)^g$ by the centralizer of $g$, which is $\big( \prod_{i=1}^{|\lambda|} \mathbb{Z}/\lambda_i \mathbb{Z} \big) \rtimes S_\lambda$, is exactly $A^{(\lambda)}_0$.
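As an illustration of the age computation just used (not needed in the sequel), take $g$ an $(n+1)$-cycle in $S_{n+1}$:

% g an (n+1)-cycle, so lambda = (n+1) and |O(g)| = 1:
\begin{align*}
(A^{n+1}_0)^{g} &= \{(x,\dots,x) : (n+1)x = 0\} \;\cong\; A[n+1],\\
\operatorname{age}(g) &= n+1-|O(g)| = n, \qquad
A^{(\lambda)}_0 = A^{\lambda}_0 = \{x\in A : (n+1)x=0\},
\end{align*}
% so this conjugacy class contributes the summand h(A[n+1])(-n), i.e.
% (n+1)^4 copies of the twisted unit motive 1(-n), to the right-hand
% side of the isomorphism of Proposition 6.4.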
To show Theorem 1.4, it remains to show Proposition 4.1 in this situation (where all cycles U are actually V of Proposition 6.4).
Step (ii) -Symmetrically distinguished cycles on abelian torsors with torsion structures.
Observe that we have the extra technical difficulty that (A n+1 0 ) is in general an extension of a finite abelian group by an abelian variety, thus non-connected. To deal with algebraic cycles on not necessarily connected 'abelian varieties' in a canonical way as well as the property of being symmetrically distinguished, we would like to introduce the following category. Roughly speaking, this is the category of abelian varieties with origin fixed only up to torsion. It is between the notion of abelian varieties (with origin fixed) and the notion of abelian torsors (i.e. a variety isomorphic to an abelian variety thus without a chosen origin). Definition 6.5 (Abelian torsors with torsion structure). One defines the following category A . An object of A , called an abelian torsor with torsion structure, or an a.t.t.s. , is a pair (X, Q X ) where X is a connected smooth projective variety and Q X is a subset of X such that there exists an isomorphism, as complex algebraic varieties, f : X → A from X to an abelian variety A which induces a bijection between Q X and Tor(A), the set of all torsion points of A. The point here is that the isomorphism f , called a marking, usually being non-canonical in practice, is not part of the data. A morphism between two objects (X, Q X ) and (Y, Q Y ) is a morphism of complex algebraic varieties φ : X → Y such that φ(Q X ) ⊂ Q Y . Compositions of morphisms are defined in the natural way. Note that by choosing markings, a morphism between two objects in A is essentially the composition of a morphism between two abelian varieties followed by a torsion translation. Denote by A V the category of abelian varieties. Then there is a natural functor A V → A sending an abelian variety A to (A, Tor(A)).
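A minimal example of (non-)morphisms in the category A, given here purely for illustration:

% For an abelian variety A with its canonical a.t.t.s. structure (A, Tor(A)):
%  - translation by a torsion point tau is an isomorphism in A, since it
%    preserves the torsion structure Tor(A);
%  - translation by a non-torsion point is an isomorphism of varieties
%    but NOT a morphism in A, since it does not preserve Tor(A).
\[
t_\tau\colon (A,\mathrm{Tor}(A)) \longrightarrow (A,\mathrm{Tor}(A)),
\qquad x\mapsto x+\tau,\qquad \tau\in \mathrm{Tor}(A).
\]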
The following elementary lemma provides the kind of examples that we will be considering :
Lemma 6.6 (Constructing a.t.t.s. and compatibility). Let $A$ be an abelian variety. Let $f : \Lambda \to \Lambda'$ be a morphism of lattices6 and $f_A : A \otimes_{\mathbb{Z}} \Lambda \to A \otimes_{\mathbb{Z}} \Lambda'$ be the induced morphism of abelian varieties.
(1) Then $\operatorname{Ker}(f_A)$ is canonically a disjoint union of a.t.t.s. such that $Q_{\operatorname{Ker}(f_A)} = \operatorname{Ker}(f_A) \cap \operatorname{Tor}(A \otimes_{\mathbb{Z}} \Lambda)$.
(2) If one has another morphism of lattices $f' : \Lambda' \to \Lambda''$, inducing a morphism of abelian varieties $f'_A : A \otimes_{\mathbb{Z}} \Lambda' \to A \otimes_{\mathbb{Z}} \Lambda''$, then the natural inclusion $\operatorname{Ker}(f_A) \hookrightarrow \operatorname{Ker}(f'_A \circ f_A)$ is a morphism of a.t.t.s. (on each component).
Proof. For (1), we have the following two short exact sequences of abelian groups :
$0 \to \operatorname{Ker}(f) \to \Lambda \xrightarrow{\ \pi\ } \operatorname{Im}(f) \to 0\ ; \qquad 0 \to \operatorname{Im}(f) \to \Lambda' \to \operatorname{Coker}(f) \to 0,$
with $\operatorname{Ker}(f)$ and $\operatorname{Im}(f)$ being lattices. Tensoring them with $A$, one has exact sequences
$0 \to A \otimes_{\mathbb{Z}} \operatorname{Ker}(f) \to A \otimes_{\mathbb{Z}} \Lambda \xrightarrow{\ \pi_A\ } A \otimes_{\mathbb{Z}} \operatorname{Im}(f) \to 0\ ; \qquad 0 \to \operatorname{Tor}_{\mathbb{Z}}\big(A, \operatorname{Coker}(f)\big) =: T \to A \otimes_{\mathbb{Z}} \operatorname{Im}(f) \to A \otimes_{\mathbb{Z}} \Lambda',$
where $T = \operatorname{Tor}_{\mathbb{Z}}\big(A, \operatorname{Coker}(f)\big)$ is a finite abelian group consisting of some torsion points of $A \otimes_{\mathbb{Z}} \operatorname{Im}(f)$. Then $\operatorname{Ker}(f_A) = \pi_A^{-1}(T)$ is an extension of the finite abelian group $T$ by the abelian variety $A \otimes_{\mathbb{Z}} \operatorname{Ker}(f)$. Choosing a section of $\pi$ makes $A \otimes_{\mathbb{Z}} \Lambda$ the product of $A \otimes_{\mathbb{Z}} \operatorname{Ker}(f)$ and $A \otimes_{\mathbb{Z}} \operatorname{Im}(f)$, inside of which $\operatorname{Ker}(f_A)$ is the product of $A \otimes_{\mathbb{Z}} \operatorname{Ker}(f)$ and the finite subgroup $T$ of $A \otimes_{\mathbb{Z}} \operatorname{Im}(f)$. This shows that $Q_{\operatorname{Ker}(f_A)} := \operatorname{Ker}(f_A) \cap \operatorname{Tor}(A \otimes_{\mathbb{Z}} \Lambda)$, which is independent of the choice of the section, makes $\operatorname{Ker}(f_A)$ an a.t.t.s. With (1) being proved, (2) is immediate : the torsion structures on $\operatorname{Ker}(f_A)$ and on $\operatorname{Ker}(f'_A \circ f_A)$ are both defined by declaring a point to be torsion if it is a torsion point in $A \otimes_{\mathbb{Z}} \Lambda$.
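The instance of Lemma 6.6 used below is the following, spelled out here for convenience (cf. (21) and (25)):

% Take Lambda = Z^l, Lambda' = Z and f(a_1,...,a_l) = lambda_1 a_1 + ... + lambda_l a_l
% for a partition lambda of n+1; then f_A : A^l -> A is the weighted sum map and
\[
\operatorname{Ker}(f_A) \;=\; A^{\lambda}_0
\;=\;\Big\{(x_1,\dots,x_l)\in A^{l} \ \Big|\ \textstyle\sum_i \lambda_i x_i = 0\Big\},
\]
% with Coker(f) = Z/dZ for d = gcd(lambda), so T = Tor_Z(A, Coker(f)) = A[d]
% and A^lambda_0 has |A[d]| = d^4 connected components, each an a.t.t.s.;
% this recovers the decomposition (25) below.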
Before generalizing the notion of symmetrically distinguished cycles to the new category A , we have to first prove the following well-known fact. Lemma 6.7. Let A be an abelian variety, x ∈ Tor(A) be a torsion point. Then the corresponding torsion translation t x : A → A y → x + y acts trivially on CH(A).
Proof. We follow the proof in [START_REF] Jiang | On the Chow ring of certain cohomology tori[END_REF]Lemma 2.1]. Let m be the order of x. Let Γ t x be the graph of t x , then one has m * (Γ
t x ) = m * (∆ A ) in CH(A × A)
, where m is the multiplication by m map of A × A. However, m * is an isomorphism of CH(A × A) by Beauville's decomposition [START_REF]Sur l'anneau de Chow d'une variété abélienne[END_REF]. We conclude that Γ t x = ∆ A , hence the induced correspondences are the same, which are t * x and the identity respectively. Definition 6.8 (Symmetrically distinguished cycles in A ). Given an a.t.t.s. (X, Q X ) ∈ A (see Definition 6.5), an algebraic cycle γ ∈ CH(X) is called symmetrically distinguished, if for a marking f : X → A, the cycle f * (γ) ∈ CH(A) is symmetrically distinguished in the sense of O'Sullivan (Definition 5.4). By Lemma 6.7, this definition is independent of the choice of marking. An algebraic cycle on a disjoint union of a.t.t.s. is symmetrically distinguished if it is so on each components. Let CH(X) sd be the subspace of symmetrically distinguished cycles.
The following proposition is clear from Theorem 5.5 and Theorem 5.6.
Proposition 6.9. Let $(X, Q_X) \in \operatorname{Obj}(A)$ be an a.t.t.s.
(1) The space of symmetrically distinguished cycles $\mathrm{CH}^*(X)_{sd}$ is a graded sub-Q-algebra of $\mathrm{CH}^*(X)$.
(2) Let $f : (X, Q_X) \to (Y, Q_Y)$ be a morphism in A ; then $f_* : \mathrm{CH}(X) \to \mathrm{CH}(Y)$ and $f^* : \mathrm{CH}(Y) \to \mathrm{CH}(X)$ preserve symmetrically distinguished cycles.
(3) The composition $\mathrm{CH}(X)_{sd} \hookrightarrow \mathrm{CH}(X) \twoheadrightarrow \overline{\mathrm{CH}}(X)$ is an isomorphism. In particular, a (polynomial of) symmetrically distinguished cycles is trivial in $\mathrm{CH}(X)$ if and only if it is numerically trivial.
We will need the following easy fact to prove that some cycles on an a.t.t.s. are symmetrically distinguished by checking it in an ambient abelian variety.
Lemma 6.10. Let $i : B \to A$ be a morphism of a.t.t.s. which is a closed immersion. Let $\gamma \in \mathrm{CH}(B)$ be an algebraic cycle. Then $\gamma$ is symmetrically distinguished in $B$ if and only if $i_*(\gamma)$ is so in $A$.
Proof. One implication is clear from Proposition 6.9. For the other one, assume $i_*(\gamma)$ is symmetrically distinguished in $A$. By choosing markings, one can suppose that $A$ is an abelian variety and $B$ is a translation, by some $\tau \in \operatorname{Tor}(A)$, of a sub-abelian variety of $A$. Thanks to Lemma 6.7, changing the origin of $A$ to $\tau$ does not change the cycle class $i_*(\gamma) \in \mathrm{CH}(A)$, hence one can further assume that $B$ is a sub-abelian variety of $A$. By Poincaré reducibility, there is a sub-abelian variety $C \subset A$ such that the natural morphism $\pi : B \times C \to A$ is an isogeny. Denoting by $j : B \to B \times C$ the natural inclusion (so that $\mathrm{pr}_1 \circ j = \mathrm{id}_B$ and $\pi \circ j = i$), and using that $\pi^* : \mathrm{CH}(A) \to \mathrm{CH}(B \times C)$ is an isomorphism with inverse $\frac{1}{\deg(\pi)} \pi_*$, we have
$\gamma = \mathrm{pr}_{1*} \circ j_*(\gamma) = \mathrm{pr}_{1*} \circ \pi^* \circ \tfrac{1}{\deg(\pi)}\, \pi_* \circ j_*(\gamma) = \tfrac{1}{\deg(\pi)}\, \mathrm{pr}_{1*} \circ \pi^* \circ i_*(\gamma).$
Since $\pi$ and $\mathrm{pr}_1$ are morphisms of abelian varieties, the hypothesis that $i_*(\gamma)$ is symmetrically distinguished implies that $\gamma$ is also symmetrically distinguished, by Proposition 6.9.
We now turn to the proof of Proposition 4.1 in Case (B), which takes the following form. As explained in §4, with Step (i) being done (Proposition 6.4), this will finish the proof of Theorem 1.4.
Proposition 6.11 (= Proposition 4.1 in Case (B)). In $\mathrm{CH}\Big(\big(\coprod_{g \in G} M^g\big)^3\Big)$, the symmetrizations of the following two algebraic cycles are rationally equivalent :
• $W := \Big(\frac{1}{|G|}\, V^{g_1} \times \frac{1}{|G|}\, V^{g_2} \times (-1)^{\operatorname{age}(g_3)}\, V^{g_3}\Big)_*\, \delta_{K_n(A)}$ ;
• $Z$ is the cycle determining the orbifold product (Definition 2.5(5)) with the sign change by discrete torsion (Definition 3.4) :
$Z|_{M^{g_1} \times M^{g_2} \times M^{g_3}} = \begin{cases} 0 & \text{if } g_3 \neq g_1 g_2,\\ \epsilon(g_1, g_2) \cdot \delta_*\, c_{\mathrm{top}}(F_{g_1, g_2}) & \text{if } g_3 = g_1 g_2. \end{cases}$
To this end, we apply Proposition 6.9(3) by proving in this subsection that they are both symmetrically distinguished (Proposition 6.12) and then verifying in the next one, §6.3, that they are homologically equivalent (Proposition 6.13).
Let $M$ be the abelian variety
$A^{n+1}_0 = \big\{ (x_1, \dots, x_{n+1}) \in A^{n+1} \mid \textstyle\sum_i x_i = 0 \big\}$
as before. For any $g \in G$, the fixed locus
$M^g = \big\{ (x_1, \dots, x_{n+1}) \in A^{n+1} \mid \textstyle\sum_i x_i = 0\ ;\ x_i = x_{g.i}\ \forall i \big\}$
has the following decomposition into connected components :
(23) $M^g = \coprod_{\tau \in A[d]} M^g_\tau,$
where $d := \gcd(g)$ is the greatest common divisor of the lengths of the orbits of the permutation $g$, $A[d]$ is the set of $d$-torsion points, and the connected component $M^g_\tau$ is described as follows.
Let $\lambda \in P(n+1)$ be the partition determined by $g$ and $l := |\lambda|$ be its length. Choose a numbering $\varphi : \{1, \dots, l\} \xrightarrow{\sim} O(g)$ of the orbits such that $|\varphi(i)| = \lambda_i$. Then $d = \gcd(\lambda_1, \dots, \lambda_l)$ and $\varphi$ induces an isomorphism
(24) $\varphi : A^\lambda_0 \xrightarrow{\ \sim\ } M^g,$
sending $(x_1, \dots, x_l)$ to $(y_1, \dots, y_{n+1})$ with $y_j = x_i$ if $j \in \varphi(i)$. Here $A^\lambda_0$ is defined in (21), and has obviously the following decomposition into connected components :
(25) $A^\lambda_0 = \coprod_{\tau \in A[d]} A^{\lambda/d}_\tau,$
where
$A^{\lambda/d}_\tau = \Big\{ (x_1, \dots, x_l) \in A^\lambda \ \Big|\ \sum_{i=1}^l \frac{\lambda_i}{d}\, x_i = \tau \Big\}$
is connected (non-canonically isomorphic to $A^{l-1}$ as varieties) and is equipped with a canonical a.t.t.s. (Definition 6.5) structure, namely, a point of $A^{\lambda/d}_\tau$ is defined to be of torsion (i.e. in $Q_{A^{\lambda/d}_\tau}$) if and only if it is a torsion point (in the usual sense) in the abelian variety $A^\lambda$. The decomposition (23) of $M^g$ is the transportation of the decomposition (25) of $A^\lambda_0$ via the isomorphism (24) : $A^{\lambda/d}_\tau \xrightarrow{\ \sim\ } M^g_\tau$. The component $M^g_\tau$ hence acquires a canonical structure of a.t.t.s. It is clear that the decomposition (23) and the a.t.t.s. structure on the components are both independent of the choice of $\varphi$. One can also define the a.t.t.s. structure on $M^g$ by using Lemma 6.6.
Proposition 6.13. The cohomology realization of the (a priori additive) isomorphism in Proposition 6.4, $\phi : h(K_n(A)) \to \big( \bigoplus_{g \in G} h\big((A^{n+1}_0)^g\big)(-\operatorname{age}(g)) \big)^{S_{n+1}}$, is an isomorphism of Q-algebras
$\phi : H^*(K_n(A)) \longrightarrow H^*_{orb, dt}\big([A^{n+1}_0 / S_{n+1}]\big) = \Big( \bigoplus_{g \in S_{n+1}} H^{*-2\operatorname{age}(g)}\big((A^{n+1}_0)^g\big),\ \star_{orb, dt} \Big)^{S_{n+1}}.$
In other words, $\operatorname{Sym}(W)$ and $\operatorname{Sym}(Z)$ are homologically equivalent.
Proof. We use Nieper-Wisskirchen's description [START_REF] Marc | Twisted cohomology of the Hilbert schemes of points on surfaces[END_REF] of the cohomology ring $H^*(K_n(A), \mathbb{C})$. Let $s : A^{[n+1]} \to A$ be the composition of the Hilbert–Chow morphism followed by the summation map. Recall that $s$ is an isotrivial fibration. In the sequel, if not specified, all cohomology groups are with complex coefficients. We have a commutative diagram :
$\begin{array}{ccc} H^*(A) & \xrightarrow{\ s^*\ } & H^*(A^{[n+1]}) \\ \downarrow & & \downarrow {\scriptstyle \mathrm{restr.}} \\ \mathbb{C} & \longrightarrow & H^*(K_n(A)) \end{array}$
where the upper arrow $s^*$ is the pull-back by $s$, the lower arrow is the unit map sending $1$ to the fundamental class $1_{K_n(A)}$, the left arrow is the quotient by the ideal consisting of elements of strictly positive degree, and the right arrow is the restriction map. The commutativity comes from the fact that $K_n(A) = s^{-1}(O_A)$ is a fiber. Thus one has a ring homomorphism $R : H^*(A^{[n+1]}) \otimes_{H^*(A)} \mathbb{C} \to H^*(K_n(A))$.
Then [START_REF] Marc | Twisted cohomology of the Hilbert schemes of points on surfaces[END_REF]Theorem 1.7] asserts that this is an isomorphism of C-algebras. Now consider the following diagram :
(27) H * (A [n+1] ) ⊗ H * (A) C R G G Φ H * (K n (A)) φ ⊕ ∈S n+1 H * -2 age( ) ((A n+1 ) ) Sn+1 ⊗ H * (A) C r G G ⊕ ∈S n+1 H * -2 age( ) ((A n+1 0 ) ) Sn+1 ,
• As just stated, the upper arrow is an isomorphism of C-algebras, by Nieper-Wisskirchen [START_REF] Marc | Twisted cohomology of the Hilbert schemes of points on surfaces[END_REF]Theorem 1.7]. • The left arrow Φ comes from the ring isomorphism (which is exactly CHRC 1.1 for Case (A), see §5.3) :
H * (A [n+1] ) -→ ⊕ ∈S n+1 H * -2 age( ) ((A n+1 ) ) Sn+1 ,
established in [START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF] based on [START_REF] Lehn | The cup product of Hilbert schemes for K3 surfaces[END_REF]. By (the proof of) Proposition 5.8, this isomorphism is actually induced by (-1) age( ) • U * : H(A [n+1] ) → ⊕ H((A n+1 ) ) with U the incidence subvariety defined in [START_REF] Chen | Orbifolds in mathematics and physics[END_REF]. Note that on the lower-left term of the diagram, the ring homomorphism H * (A) → ⊕ ∈S n+1 H * -2 age( ) ((A n+1 ) ) Sn+1 lands in the summand indexed by = id, and the map H * (A) → H * (A n+1 ) Sn+1 is simply the pull-back by the summation map A (n+1) → A.
• The right arrow is the morphism φ in question. It is already shown in Step (i) Proposition 6.4 to be an isomorphism of vector spaces. The goal is to show that it is also multiplicative. Now the proof of Theorem 1.4 is complete : by Proposition 6.12 and Proposition 6.13, we know that, thanks to Proposition 6.9 (3), the symmetrizations of Z and W in Proposition 6.11 are rationally equivalent, which proves Proposition 4.1 in Case (B). Hence the isomorphism φ in Proposition 6.4 is an isomorphism of algebra objects between the motive of the generalized Kummer variety h(K n (A)) and the orbifold Chow motive h orb ([A n+1 0 / Sn+1]).
We would like to note the following corollary obtained by applying the cohomological realization functor to Theorem 1.4. Corollary 6.15 (CHRC : Kummer case). The Cohomological HyperKähler Resolution Conjecture is true for Case (B), namely, one has an isomorphism of Q-algebras : [START_REF] Chen | Orbifolds in mathematics and physics[END_REF]. For some reason unclear to the authors, this result has never appeared before in the literature. It is presumably not hard to check CHRC in the case of generalized Kummer varieties directly based on the cohomology result of Nieper-Wisskirchen [START_REF] Marc | Twisted cohomology of the Hilbert schemes of points on surfaces[END_REF], which is of course one of the key ingredients used in our proof. It is also generally believed that the main result of Britze's Ph.D. thesis [START_REF] Britze | On the cohomology of generalized kummer varieties[END_REF] should also imply this result. However, the proof of its main result [START_REF] Britze | On the cohomology of generalized kummer varieties[END_REF]Theorem 40] seems to be flawed : the linear map Θ constructed in the last line of Page 60, which is claimed to be the desired ring isomorphism, is actually the zero map. Nevertheless, the authors believe that it is feasible to check CHRC in this case with the very explicit description of the ring structure of H * (K n (A) × A) obtained in [START_REF] Britze | On the cohomology of generalized kummer varieties[END_REF].
H * (K n (A), Q) H * orb,dt [A n+1 0 / Sn+1] . Remark 6.
Application 1 : Towards Beauville's splitting principle
In this section, a holomorphic symplectic variety is always assumed to be smooth projective unless stated otherwise and we require neither the simple connectedness nor the uniqueness up to scalar of the holomorphic symplectic 2-form. Hence examples of holomorphic symplectic varieties include projective deformations of Hilbert schemes of K3 or abelian surfaces, generalized Kummer varieties etc..
Beauville's Splitting Principle.
Based on [START_REF]Sur l'anneau de Chow d'une variété abélienne[END_REF] and [START_REF] Beauville | On the Chow ring of a K3 surface[END_REF], Beauville envisages in [START_REF] Beauville | On the splitting of the Bloch-Beilinson filtration[END_REF] the following Splitting Principle for all holomorphic symplectic varieties.
Conjecture 7.1 (Splitting Principle : Chow rings). Let $X$ be a holomorphic symplectic variety of dimension $2n$. Then one has a canonical bigrading of the rational Chow ring $\mathrm{CH}^*(X)$, called multiplicative splitting of $\mathrm{CH}^*(X)$ of Bloch–Beilinson type : for any $0 \leq i \leq 4n$,
(28) $\mathrm{CH}^i(X) = \bigoplus_{s=0}^{i} \mathrm{CH}^i(X)_s,$
which satisfies :
• (Multiplicativity) $\mathrm{CH}^i(X)_s \cdot \mathrm{CH}^{i'}(X)_{s'} \subset \mathrm{CH}^{i+i'}(X)_{s+s'}$ ;
• (Bloch–Beilinson) The associated filtration $F^j \mathrm{CH}^i(X) := \bigoplus_{s \geq j} \mathrm{CH}^i(X)_s$ satisfies the Bloch–Beilinson conjecture (cf. [START_REF] Voisin | Hodge theory and complex algebraic geometry[END_REF], Conjecture 11.21] for example). In particular :
– ($F^1 = \mathrm{CH}_{hom}$) The restriction of the cycle class map $cl : \bigoplus_{s > 0} \mathrm{CH}^i(X)_s \to H^{2i}(X, \mathbb{Q})$ is zero ;
– (Injectivity) The restriction of the cycle class map $cl : \mathrm{CH}^i(X)_0 \to H^{2i}(X, \mathbb{Q})$ is injective.
We would like to reformulate (and slightly strengthen) Conjecture 7.1 by using the language of Chow motives as follows, which is, we believe, more fundamental. Let us first of all introduce the following (weaker) notion, studied in detail in [START_REF] Shen | The Fourier transform for certain hyperkähler fourfolds[END_REF].
Definition 7.2 (Multiplicative Chow–Künneth decomposition). Given a smooth projective variety $X$ of dimension $n$, a multiplicative Chow–Künneth decomposition is a direct sum decomposition in the category CHM of Chow motives with rational coefficients :
(29) $h(X) = \bigoplus_{i=0}^{2n} h^i(X)$
satisfying the following two properties :
• (Chow–Künneth) The cohomology realization of the decomposition gives the Künneth decomposition : for each $0 \leq i \leq 2n$, $H^*(h^i(X)) = H^i(X)$.
• (Multiplicativity) The product $\mu : h(X) \otimes h(X) \to h(X)$ given by the small diagonal $\delta_X \subset X \times X \times X$ respects the decomposition : the restriction of $\mu$ to the summand $h^i(X) \otimes h^j(X)$ factorizes through $h^{i+j}(X)$.
Such a decomposition induces a (multiplicative) bigrading of the rational Chow ring $\mathrm{CH}^*(X) = \bigoplus_{i,s} \mathrm{CH}^i(X)_s$ by setting :
(30) $\mathrm{CH}^i(X)_s := \mathrm{CH}^i\big(h^{2i-s}(X)\big) := \operatorname{Hom}_{\mathrm{CHM}}\big(\mathbb{1}(-i), h^{2i-s}(X)\big).$
By the definition of motives (cf. 2.1), a multiplicative Chow–Künneth decomposition is equivalent to a collection of auto-correspondences $\pi^0, \dots, \pi^{2\dim X}$, where $\pi^i \in \mathrm{CH}^{\dim X}(X \times X)$, satisfying
• $\pi^i \circ \pi^i = \pi^i$, $\forall i$ ;
• $\pi^i \circ \pi^j = 0$, $\forall i \neq j$ ;
• $\pi^0 + \dots + \pi^{2\dim X} = \Delta_X$ ;
• $\operatorname{Im}\big(\pi^i_* : H^*(X) \to H^*(X)\big) = H^i(X)$ ;
• $\pi^k \circ \delta_X \circ (\pi^i \otimes \pi^j) = 0$, $\forall k \neq i + j$.
The induced multiplicative bigrading on the rational Chow ring $\mathrm{CH}^*(X)$ is given by
$\mathrm{CH}^i(X)_s := \operatorname{Im}\big( \pi^{2i-s}_* : \mathrm{CH}^i(X) \to \mathrm{CH}^i(X) \big).$
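For orientation, the simplest nontrivial instance of such a system of projectors (a standard example, recalled here only as an illustration) is an elliptic curve $E$ with origin $0$; it is the one-dimensional case of the Deninger–Murre decomposition recalled in §7.2:

% Chow--Kuenneth projectors for an elliptic curve (E, 0):
\begin{align*}
\pi^0 &= [\{0\}\times E], &
\pi^2 &= [E\times\{0\}], &
\pi^1 &= \Delta_E - \pi^0 - \pi^2 .
\end{align*}
% pi^i acts as the identity on H^i(E) and as zero on H^j(E) for j != i;
% with the origin as base point this decomposition is multiplicative
% (the one-dimensional case of the Deninger--Murre decomposition for
% abelian varieties).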
For later use, we need to generalize the previous notion for Chow motive algebras : Definition 7.3. Let h be an (associative but not-necessarily commutative) algebra object in the category CHM of rational Chow motives. Denote by µ : h ⊗ h → h its multiplication structure. A multiplicative Chow-K ünneth decomposition of h is a direct sum decomposition h = i∈Z h i , such that • (Chow-K ünneth) the cohomology realization gives the K ünneth decomposition : H i (h) = H * (h i ) for all i ∈ Z ; • (Multiplicativity) the restriction of µ to h i ⊗ h j factorizes through h i+j . Now one can enhance Conjecture 7.1 to the following :
We collect some facts about the Beauville–Deninger–Murre decomposition (33) for the proof of Theorem 7.9 in the next subsection. By choosing markings for a.t.t.s.'s, thanks to Lemma 6.7, we see that a.t.t.s.'s can be endowed with multiplicative Chow–Künneth decompositions consisting of Chow–Künneth projectors that are symmetrically distinguished, and enjoying the properties embodied in the two following lemmas. Their proofs reduce immediately to the case of abelian varieties, where they are certainly well known.
Lemma 7.7 (Künneth). Let $B$ and $B'$ be two abelian varieties (or more generally a.t.t.s.'s). Then the natural isomorphism $h(B) \otimes h(B') \cong h(B \times B')$ identifies the summand $h^i(B) \otimes h^j(B')$ as a direct summand of $h^{i+j}(B \times B')$ for any $i, j \in \mathbb{N}$.
Lemma 7.8. Let $f : B \to B'$ be a morphism of abelian varieties (or more generally a.t.t.s.'s) of dimension $d$ and $d'$ respectively.
• The pull-back $f^* := {}^t\Gamma_f : h(B') \to h(B)$ sends $h^i(B')$ to $h^i(B)$ ;
• The push-forward $f_* := \Gamma_f : h(B) \to h(B')$ sends $h^i(B)$ to $h^{i + 2d' - 2d}(B')$.
Candidate decompositions in Case (A) and (B).
In the sequel, let A be an abelian surface. We want to do the similar thing as Beauville and Deninger-Murre did for abelian varieties ( §7.2) : for the holomorphic symplectic variety X being A [n] or K n (A), we construct the candidate multiplicative Chow-K ünneth decomposition for Conjecture 7.4 thus the candidate bigrading on CH * (X) for Conjecture 7.1, then formulate the remaining Bloch-Beilinson condition into a conjecture on the motive of A, which will be investigated upon, especially its relation with Beauville's Conjecture 7.5 on abelian varieties.
Let us start by the existence of multiplicative Chow-K ünneth decomposition : Theorem 7.9. Given an abelian surface A, let X be Case (A) : the 2n-dimensional Hilbert scheme A [n] ; or Case (B) : the n-th generalized Kummer variety K n (A).
Then $X$ has a canonical multiplicative Chow–Künneth decomposition
$h(X) = \bigoplus_{i=0}^{4n} h^i(X),$
where in Case (A) and Case (B) respectively
(35) $h^i(A^{[n]}) := \Big( \bigoplus_{g \in S_n} h^{i - 2\operatorname{age}(g)}\big((A^n)^g\big)(-\operatorname{age}(g)) \Big)^{S_n}$ ;
(36) $h^i(K_n(A)) := \Big( \bigoplus_{g \in S_{n+1}} h^{i - 2\operatorname{age}(g)}\big((A^{n+1}_0)^g\big)(-\operatorname{age}(g)) \Big)^{S_{n+1}}.$
In particular, we get a canonical multiplicative bigrading on the (rational) Chow ring, given by $\mathrm{CH}^i(X)_s := \mathrm{CH}^i\big(h^{2i-s}(X)\big)$.
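As a cohomological sanity check of (36) (illustration only), take $n = 1$, so that $K_1(A)$ is the Kummer K3 surface associated with $A$:

% n = 1: G = S_2 acts on A_0^2 = {(x,-x)} ~ A via x -> -x.
%   g = id:    fixed locus A,                    age 0;
%   g = (12):  fixed locus A[2] (16 points),     age 1.
\begin{align*}
H^2(K_1(A)) &\cong \big(H^2(A)\big)^{S_2}\ \oplus\ \bigoplus_{A[2]} H^0(\mathrm{pt})(-1),\\
b_2(K_1(A)) &= 6 + 16 = 22,
\end{align*}
% as expected for a K3 surface: 6 classes from H^2(A), on which (-1)^*
% acts trivially, plus the 16 exceptional classes over the 2-torsion points.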
Remark 7.10. The existence of a multiplicative Chow-K ünneth decomposition of A [n] is not new : it was previously obtained by Vial in [START_REF] Vial | On the motive of some hyperkaehler varieties[END_REF]. As for the generalized Kummer varieties, if one ignores the multiplicativity of the Chow-K ünneth decomposition, which is of course the key point here, then it follows rather directly from De Cataldo and Migliorini's result [START_REF] Andrea | The Chow motive of semismall resolutions[END_REF] as explained in §6.1 (see Corollary 6.3) and is explicitly written down by Z. Xu [START_REF] Xu | Algebraic cycles on a generalized Kummer variety[END_REF].
Proof. The following proof works for both cases. Let $M := A^n$, $G := S_n$, $X := A^{[n]}$ in Case (A), and $M := A^{n+1}_0$, $G := S_{n+1}$, $X := K_n(A)$ in Case (B). Thanks to Theorem 1.3 and Theorem 1.4, we have an isomorphism of motive algebras :
$h(X) \cong \Big( \bigoplus_{g \in G} h(M^g)(-\operatorname{age}(g)),\ \star_{orb, dt} \Big)^G,$
and it suffices to prove that the motive algebra $h := \bigoplus_{g \in G} h(M^g)(-\operatorname{age}(g))$, with $\star_{orb, dt}$ as the product, has a multiplicative Chow–Künneth decomposition in the sense of Definition 7.3. To this end, for each $g \in G$, an application of Deninger–Murre's decomposition (33) to $M^g$, which is an abelian variety in Case (A) and a disjoint union of a.t.t.s. in Case (B), gives us a multiplicative Chow–Künneth decomposition $h(M^g) = \bigoplus_{i=0}^{2\dim M^g} h^i(M^g)$. Now we define for each $i \in \mathbb{N}$,
(37) $h^i := \bigoplus_{g \in G} h^{i - 2\operatorname{age}(g)}(M^g)(-\operatorname{age}(g)).$
Here by convention, $h^j(M^g) = 0$ for $j < 0$ ; hence in (37), $h^i = 0$ if $i - 2\operatorname{age}(g) > 2\dim(M^g)$ for every $g \in G$, that is, when $i > \max_{g \in G}\{4n - 2\operatorname{age}(g)\} = 4n$.
Then obviously, as a direct sum of Chow–Künneth decompositions, $h = \bigoplus_{i=0}^{4n} h^i$ is a Chow–Künneth decomposition. It remains to show the multiplicativity condition that $\mu : h^i \otimes h^j \to h$ factorizes through $h^{i+j}$, which is equivalent to saying that for any $i, j \in \mathbb{N}$ and $g, h \in G$, the orbifold product $\star_{orb}$ (discrete torsion only changes a sign, thus is irrelevant here) restricted to the summand $h^{i-2\operatorname{age}(g)}(M^g)(-\operatorname{age}(g)) \otimes h^{j-2\operatorname{age}(h)}(M^h)(-\operatorname{age}(h))$ factorizes through $h^{i+j-2\operatorname{age}(gh)}(M^{gh})(-\operatorname{age}(gh))$.
Thanks to the fact that the obstruction bundle F ,h is always a trivial vector bundle in both of our cases, we know that (see Definition 2.5) orb is either zero when rk(F ,h ) 0 ; or when rk(F ,h ) = 0, is defined as the correspondence from M × M h to M h given by the following composition ( 38)
h(M ) ⊗ h(M h ) -→ h(M × M h ) ι * 1 -→ h(M < ,h> ) ι 2 * --→ h(M h )(codim(ι 2 )),
where M h M < ,h> ? _
ι 2 o o ι 1 G G M × M h
are morphisms of abelian varieties in Case (A) and morphisms of a.t.t.s. 's in Case (B). Therefore, one can suppose further that rk(F ,h ) = 0, which implies by using (7) that the Tate twists match : codim(ι 2 )age( )age(h) =age( h). Now Lemma 7.7 applied to the first isomorphism in [START_REF] Lupercio | The global McKay-Ruan correspondence via motivic integration[END_REF] and Lemma 7.8 applied to the last two morphisms in [START_REF] Lupercio | The global McKay-Ruan correspondence via motivic integration[END_REF] show that, omitting the Tate twists, the summand h i-2 age( ) (M ) ⊗ h j-2 age(h) (M h ) is sent by µ inside the summand h k (M h ), with the index k = i + j -2 age( ) -2 age(h) + 2 dim(M h ) -2 dim(M < ,h> ) = i + j -2 age( h),
where the last equality is by equation ( 7) together with the assumption rk(F ,h ) = 0.
In conclusion, we get a multiplicative Chow-K ünneth decomposition h = 4n i=0 h i with h i given in [START_REF] Lehn | The cup product of Hilbert schemes for K3 surfaces[END_REF] ; hence a multiplicative Chow-K ünneth decomposition for its G-invariant part of the submotive algebra h(X).
The decomposition in Theorem 7.9 is supposed to be Beauville's splitting of the Bloch-Beilinson-Murre filtration on the rational Chow ring of X. In particular, Conjecture 7.11. (Bloch-Beilinson for X) Notation is as in Theorem 7.9, then for all i ∈ N,
• CH i (X) s = 0 for s < 0 ;
• The restriction of the cycle class map cl : CH i (X) 0 → H 2i (X, Q) is injective.
As a first step towards this conjecture, let us make the following Remark 7.12. Beauville's conjecture 7.5 on abelian varieties implies Conjecture 7.11. Indeed, keep the same notation as before. From [START_REF] Kimura | Orbifold cohomology reloaded, Toric topology[END_REF] [START_REF] Kings | Higher regulators, Hilbert modular surfaces, and special values of L-functions[END_REF], we obtain CH i (A [n] ) s = CH i (h 2i-s (A [n] )) =
∈S n CH i-age( ) (h 2i-s-2 age( ) (A O( ) )) Sn = λ∈P(n) CH i+|λ|-n (A λ ) Sλ s ; CH i (K n A) s = CH i (h 2i-s (K n A)) = ∈S n+1
CH i-age( ) (h 2i-s-2 age( ) (A O( ) 0
))
Sn+1 = λ∈P(n+1)
CH i+|λ|-n-1 (A λ 0 ) Sλ s , in two cases respectively, whose vanishing (s < 0) and injectivity into cohomology by cycle class map (s = 0) follow directly from those of A λ or A λ 0 . In fact, [START_REF]Remarks on motives of abelian type[END_REF]Theorem 3] proves more generally that the second point of Conjecture 7.5 (the injectivity of the cycle class map cl : CH i (B) 0 → H 2i (B, Q) for all complex abelian varieties) implies Conjecture 7.11 for all smooth projective complex varieties X whose Chow motive is of abelian type, which is the case for a generalized Kummer variety by Proposition 6.4. Of course, one has to check that our definition of CH i (X) 0 here coincides with the one in [START_REF]Remarks on motives of abelian type[END_REF], which is quite straightforward.
The Chern classes of a (smooth) holomorphic symplectic variety X are also supposed to be in CH i (X) 0 with respect to Beauville's conjectural splitting. We can indeed check this in both cases considered here : Proposition 7.13. Set-up as in Theorem 7.9. The Chern class c i (X) belongs to CH i (X) 0 for all i.
Proof. In Case (A), that is, in the case where X is the Hilbert scheme A [n] , this is proved in [START_REF] Vial | On the motive of some hyperkaehler varieties[END_REF]. Let us now focus on Case (B), that is, on the case where X is the generalized Kummer variety K n (A). Let {π i : 0 ≤ i ≤ 2n} be the Chow-K ünneth decomposition of K n (A) given by [START_REF] Kings | Higher regulators, Hilbert modular surfaces, and special values of L-functions[END_REF]. We have to show that c i (K n (A)) = (π 2i ) * c i (K n (A)), or equivalently that (π j ) * c i (K n (A)) = 0 as soon as (π j ) * c i (K n (A)) is homologically trivial. By Proposition 6.4, it suffices to show that for any ∈ G (π j M ) * ((V ) * c i (K n (A))) = 0 as soon as (π j M ) * ((V ) * c i (K n (A))) is homologically trivial. Here, recall that (23) makes M a disjoint union of a.t.t.s. and that π j M is a Chow-K ünneth projector on M which is symmetrically distinguished on each component of M . By Proposition 6.9, it is enough to show that (V ) * (c i (K n (A)) is symmetrically distinguished on each component of M . As in the proof of Theorem 8.3. Let π : X → B be a smooth projective family, and assume that the generic fiber X of π admits a multiplicative Chow-K ünneth decomposition. Then there exist a decomposition isomorphism as in [START_REF] Nakamura | Hilbert schemes of abelian group orbits[END_REF] and a nonempty Zariski open subset U of B, such that this decomposition becomes multiplicative for the restricted family π| U : X| U → U.
Proof. By spreading out a multiplicative Chow-K ünneth decomposition of X, there exist a sufficiently small but nonempty Zariski open subset U of B and relative correspondences Π i ∈ CH dim B X (X| U × U X| U ), 0 ≤ i ≤ 2 dim B X, forming a relative Chow-K ünneth decomposition, meaning that ∆ X| U /U = i Π i , Π i • Π i = Π i , Π i • Π j = 0 for i j, and Π i acts as the identity on R i (π [n] | U ) * Q and as zero on R j (π [n] | U ) * Q for j i. By [54, Lemma 2.1], the relative idempotents Π i induce a decomposition in the derived category
R(π| U ) * Q 4n i=0 H i (R(π| U ) * Q)[-i] = 4n i=0 R i (π| U ) * Q[-i]
with the property that Π i acts as the identity on the summand H i (R(π| U ) * Q)[-i] and acts as zero on the summands H j (R(π| U ) * Q)[-j] for j i. In order to establish the existence of a decomposition as in [START_REF] Nakamura | Hilbert schemes of abelian group orbits[END_REF] that is multiplicative and hence to conclude the proof of the theorem, we thus have to show that Π k • δ • (Π i × Π j ) acts as zero on R(π| U ) * Q ⊗ R(π| U ) * Q, after possibly further shrinking U, whenever k i + j. But more is true : being generically multiplicative, the relative Chow-K ünneth decomposition {Π i } is multiplicative, that is, Π k • δ • (Π i × Π j ) = 0 whenever k i + j, after further shrinking U if necessary. The theorem is now proved.
As a corollary, we can extend Theorem 8.2 to families of generalized Kummer varieties :
Corollary 8.4. Let $\pi : A \to B$ be an abelian surface over $B$. Consider Case (A) : $A^{[n]} \to B$, the relative Hilbert scheme of length-$n$ subschemes on $A \to B$ ; or Case (B) : $K_n(A) \to B$, the relative generalized Kummer variety. Then, in both cases, there exist a decomposition isomorphism as in (40) and a nonempty Zariski open subset $U$ of $B$, such that this decomposition becomes multiplicative for the restricted family over $U$.
Proof. The generic fiber of $A^{[n]} \to B$ (resp. $K_n(A) \to B$) is the $2n$-dimensional Hilbert scheme (resp. generalized Kummer variety) attached to the abelian surface that is the generic fiber of $\pi$. By Theorem 7.9, it admits a multiplicative Chow–Künneth decomposition. (Strictly speaking, we only established Theorem 7.9 for Hilbert schemes of abelian surfaces and generalized Kummer varieties over the complex numbers ; however, the proof carries over to any field of characteristic zero.) We conclude by invoking Theorem 8.3.
Proposition 4 . 1 . 3 :• W := 1 |G| U × 1 |G|U
41311 Notation being as before, the following two algebraic cycles have the same symmetrization in CH ∈G M × (-1) age( ) U * (δ Y ) ; • The algebraic cycle Z determining the orbifold product (Definition 2.5(5)) with the sign change by discrete torsion (Definition 3.4) :
Proposition 4 .
4 1 implies Theorem 1.3 and 1.4. The only thing to show is the commutativity of (14), which is of course equivalent to the commutativity of the diagram
3 .
3 the morphism (or correspondence) induced by the cycle W in Proposition 4.1. On the other hand, orb,dt for h orb ([M/G]) is by definition p • Z • ι ⊗2 . Therefore, the desired commutativity, hence also the main results, amounts to the equality p • W • ι ⊗2 = p • Z • ι ⊗2 , which says exactly that the symmetrizations of W and of Z are equal in CH ∈G M
Similar to Proposition 5.7, here is the main result of this subsection :
Proposition 6.12. Notation is as in Proposition 6.11. W and Z, as well as their symmetrizations, are
symmetrically distinguished in CH ∈G M 3 , where M is viewed as a disjoint union of a.t.t.s. as in
(23) and symmetrical distinguishedness is in the sense of Definition 6.8.
The definition of the orbifold Chow ring has already appeared in Page 211 of Fantechi-G öttsche[START_REF] Fantechi | Orbifold cohomology for global quotients[END_REF] and proved to be equivalent to Abramovich-Grabber-Vistoli's construction in[START_REF] Abramovich | Gromov-Witten theory of Deligne-Mumford stacks[END_REF] by Jarvis-Kaufmann-Kimura in[START_REF] Jarvis | Stringy K-theory and the Chern character[END_REF] .
One has to choose a Weil cohomology theory when talking about homological motives. In this paper, however, we always use the Betti cohomology and make the choice implicit.
A lattice is a free abelian group of finite rank.
Acknowledgements. We would like to thank Samuel Boissière, Alessandro Chiodo, Julien Grivaux, Bruno Kahn, Robert Laterveer, Manfred Lehn, Marc Nieper-Wißkirchen, Yongbin Ruan, Claire Voisin, and Qizheng Yin for helpful discussions and email correspondences. The project was initiated when we were members of the I.A.S. for the special year Topology and Algebraic Geometry in 2014-15 (L.F. and Z.T. were funded by the NSF and C.V. by the Fund for Mathematics). We thank the Institute for the exceptional working conditions.
Orbifold motives and orbifold Chow rings
To fix the notation, we start by a brief reminder of the construction of pure motives (cf. [3]). In order to work with Tate twists by age functions (2.3), we have to extend slightly the usual notion of pure motives by allowing twists by a rational number.
Lie Fu is supported by the Agence Nationale de la Recherche (ANR) through ECOVA within the program Jeunes Chercheurs and LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program Investissements d'Avenir (ANR-11-IDEX-0007). Lie Fu and Zhiyu Tian are supported by Projet Exploratoire Premier Soutien (PEPS) Jeunes chercheur-e-s 2016 operated by Insmi and Projet Inter-Laboratoire 2016 by Fédération de Recherche en Mathématiques Rh ône-Alpes/Auvergne CNRS 3490. Charles Vial is supported by EPSRC Early Career Fellowship EP/K005545/1.
1 fractional Tate twists. However, in the case that interests us, namely when there exists a crepant resolution, for the word 'crepant resolution' to make sense we understand that the underlying singular variety M/G is at least Gorenstein, in which case all age shiftings are integers and we stay in the usual category of Chow motives.
Proof. For W, it is enough to show that for any 1 , 2 , 3 ∈ G, q * • p * • δ * (1 K n (A) ) is symmetrically distinguished, where the notation is explained in the following commutative diagram, whose squares are all cartesian and without excess intersections. [START_REF] Fogarty | Algebraic families on an algebraic surface[END_REF] (
where the incidence subvarieties U 's are defined in §5.1 [START_REF] Chen | Orbifolds in mathematics and physics[END_REF] (with n replaced by n + 1) ; all fiber products in the second row are over A ; the second row is the base change by the inclusion of small diagonal A → A 3 of the first row ; the third row is the base change by O A → A of the second the row ; finally, δ, δ , δ are various (absolute or relative) small diagonals.
Observe that the two inclusions i and j are in the situation of Lemma 6.6 : let
which admits a natural morphism u to Λ := Z ⊕ Z ⊕ Z by weighted sum on each factor (with weights being the lengths of orbits). Let v :
Then it is clear that i and j are identified with the following inclusions
By Lemma 6.6, (A n+1 ) 1 × A (A n+1 ) 2 × A (A n+1 ) 3 and M 1 × M 2 × M 3 are naturally disjoint unions of a.t.t.s. and the inclusions i and j are morphisms of a.t.t.s. on each component. Now by functorialities and the base change formula (cf. [30, Theorem 6.2]), we have
which is a polynomial of big diagonals of A |O( 1 )|+|O( 2 )|+|O( 3 )| by Voisin's result [START_REF]Some new results on modified diagonals[END_REF]Proposition 5.6], thus symmetrically distinguished in particular. By Lemma 6.10,
Again by functorialities and the base change formula, we have
Since i is a morphism of a.t.t.s. on each component (Lemma 6.6), one concludes that q * •p * •δ * (1 K n (A) ) is symmetrically distinguished on each component. Hence W, being a linear combination of such cycles, is also symmetrically distinguished. For Z, as in the Case (A), it is easy to see that all the obstruction bundles F 1 , 2 are (at least virtually) trivial vector bundles because according to Definition 2.5, there are only tangent/normal bundles of/between abelian varieties involved. Therefore the only non-zero case is the push-forward of the fundamental class of M < 1 , 2 > by the inclusion into M 1 × M 2 × M 1 2 , which is obviously symmetrically distinguished.
6.3.
Step (iii) -Cohomological realizations. We keep the notation as before. To finish the proof of Proposition 6.11, hence Theorem 1.4, it remains to show that the cohomology classes of the symmetrizations of W and Z are the same. In other words, they have the same realization for Betti cohomology.
• The lower arrow r is defined as follows. On the one hand, let the image of the unit 1 ∈ C be the fundamental class of A (n+1) 0 in the summand indexed by = id. On the other hand, for any ∈ Sn+1, we have a natural restriction map H * -2 age( ) ((A n+1 ) ) → H * -2 age( ) ((A n+1 0 ) ). They will induce a ring homomorphism H * (A n+1 , Sn+1)C → H * (A n+1 0 , Sn+1)C by Lemma 6.14 below, which is easily seen to be compatible with the Sn+1-action and the ring homomorphisms from H * (A), hence r is a well-defined homomorphism of C-algebras.
• To show the commutativity of the diagram [START_REF] Fu | Symplectic resolutions for nilpotent orbits[END_REF], the case for the unit 1 ∈ C is easy to check.
For the case of H * (A [n+1] ), it suffices to remark that for any the following diagram is commutative
) ) where V is the incidence subvariety defined in [START_REF] Deninger | Motivic decomposition of abelian schemes and the Fourier transform[END_REF].
In conclusion, since in the commutative diagram [START_REF] Fu | Symplectic resolutions for nilpotent orbits[END_REF], Φ, R are isomorphisms of C-algebras, r is a homomorphism of C-algebra and φ is an isomorphism of vector spaces, we know that they are all isomorphisms of algebras. Thus Proposition 6.13 is proved assuming the following : Lemma 6.14. The natural restriction maps H * -2 age( ) ((A n+1 ) ) → H * -2 age( ) ((A n+1 0 ) ) for all ∈ Sn+1 induce a ring homomorphism H * (A n+1 , Sn+1) → H * (A n+1 0 , Sn+1), where their product structures are given by the orbifold product (see Definition 2.5 or 2.6).
Proof. This is straightforward by definition. Indeed, for any 1 , 2 ∈ Sn+1 together with α ∈ H((A n+1 ) 1 ) and β ∈ H((A n+1 ) 2 ), since the obstruction bundle F 1 , 2 is a trivial vector bundle, we have
where i : 1 2 is the natural inclusion. Therefore by the base change for the cartesian diagram without excess intersection :
we have :
which means that the restriction map is a ring homomorphism.
The proof of Proposition 6.13 is finished.
Conjecture 7.4 (Motivic Splitting Principle = Conjecture 1.6). Let X be a holomorphic symplectic variety of dimension 2n. Then we have a canonical multiplicative Chow-K ünneth decomposition of h(X) :
which is moreover of Bloch-Beilinson-Murre type, that is, for any i, j ∈ N,
(1) CH i (h j (X)) = 0 if j < i ;
(2) CH i (h j (X)) = 0 if j > 2i ;
(3) the realization induces an injective map
One can deduce Conjecture 7.1 from Conjecture 7.4 via [START_REF] Fulton | Intersection theory[END_REF]. Note that the range of s in (28) follows from the first two Bloch-Beilinson-Murre conditions in Conjecture 7.4.
Splitting Principle for abelian varieties.
Recall that for an abelian variety B of dimension , using Fourier transform [START_REF] Beauville | Quelques remarques sur la transformation de Fourier dans l'anneau de Chow d'une variété abélienne[END_REF], Beauville [START_REF]Sur l'anneau de Chow d'une variété abélienne[END_REF] constructs a multiplicative bigrading on CH * (B) :
is the simultaneous eigenspace for all m : B → B, the multiplication by m ∈ Z map.
Using similar idea as in loc.cit. , Deninger and Murre [START_REF] Deninger | Motivic decomposition of abelian schemes and the Fourier transform[END_REF] constructed a multiplicative Chow-K ünneth decomposition (Definition 7.2)
with (by [START_REF] Kings | Higher regulators, Hilbert modular surfaces, and special values of L-functions[END_REF])
Moreover, one may choose such a multiplicative Chow-K ünneth decomposition to be symmetrically distinguished ; see [START_REF] Shen | The Fourier transform for certain hyperkähler fourfolds[END_REF]Ch. 7]. This Chow-K ünneth decomposition is the candidate decomposition for the analogous Conjecture 7.4 in the case of abelian varieties and induces, via (30), Beauville's bigrading [START_REF] Ito | McKay correspondence and Hilbert schemes[END_REF] ; the remaining Bloch-Beilinson condition becomes the following conjecture of Beauville [START_REF] Beauville | Quelques remarques sur la transformation de Fourier dans l'anneau de Chow d'une variété abélienne[END_REF] on CH * (B) , which is still largely open.
Conjecture 7.5 (Beauville's conjecture on abelian varieties). Notation is as above. Then CH^i_s(B) = 0 for all s < 0.
Remark 7.6. As torsion translations act trivially on the Chow rings of abelian varieties (Lemma 6.7), the Beauville-Deninger-Murre decompositions (32) and (33) naturally extend to the slightly broader context of abelian torsors with torsion structure (see Definition 6.5).

By Proposition 6.12, we have for any g ∈ G the following commutative diagram, whose squares are cartesian and without excess intersections:
where the incidence subvariety U is defined in §5.1 [START_REF] Chen | Orbifolds in mathematics and physics[END_REF] (with n replaced by n + 1) and the bottom row is the base change by O_A → A of the top row. Note that c_i(K_n(A)) = c_i(A^{[n+1]})|_{K_n(A)}, since the tangent bundle of A is trivial. Therefore, by functorialities and the base change formula (cf. [30, Theorem 6.2]), we have
By Voisin's result [55, Theorem 5.12], q_* ∘ p^*(c_i(A^{[n+1]})) is a polynomial of big diagonals of A^{|O( )|}, thus symmetrically distinguished in particular. It follows from Proposition 6.9 that (V)_*(c_i(K_n(A))) is symmetrically distinguished on each component of M. This concludes the proof of the proposition.
Application 2 : Multiplicative decomposition theorem of rational cohomology
Deligne's decomposition theorem states the following :
Theorem 8.1 (Deligne [21]). Let π : X → B be a smooth projective morphism. In the derived category of sheaves of Q-vector spaces on B, there is a decomposition (which is non-canonical in general)

Rπ_* Q ≅ ⊕_i R^i π_* Q [-i].    (40)
Both sides of (40) carry a cup-product: on the right-hand side the cup-product is the direct sum of the usual cup-products R^i π_* Q ⊗ R^j π_* Q → R^{i+j} π_* Q defined on local systems, while on the left-hand side the derived cup-product Rπ_* Q ⊗ Rπ_* Q → Rπ_* Q is induced by the (derived) action of the relative small diagonal δ ⊂ X ×_B X ×_B X seen as a relative correspondence from X ×_B X to X. As explained in [START_REF]Chow rings and decomposition theorems for families of K3 surfaces and Calabi-Yau hypersurfaces[END_REF], the isomorphism (40) does not respect the cup-product in general. Given a family of smooth projective varieties π : X → B, Voisin [54, Question 0.2] asked if there exists a decomposition as in (40) which is multiplicative, i.e., which is compatible with cup-product, maybe over a nonempty Zariski open subset of B. By Deninger-Murre [START_REF] Deninger | Motivic decomposition of abelian schemes and the Fourier transform[END_REF], there does exist such a decomposition for an abelian scheme π : A → B. The main result of [START_REF]Chow rings and decomposition theorems for families of K3 surfaces and Calabi-Yau hypersurfaces[END_REF] is: Theorem 8.2 (Voisin [54]). For any smooth projective family π : X → B of K3 surfaces, there exist a decomposition isomorphism as in (40) and a nonempty Zariski open subset U of B, such that this decomposition becomes multiplicative for the restricted family π|_U : X|_U → U.
As implicitly noted in [50, Section 4], Voisin's Theorem 8.2 holds more generally for any smooth projective family π : X → B whose generic fiber admits a multiplicative Chow-Künneth decomposition (K3 surfaces do have a multiplicative Chow-Künneth decomposition; this follows by suitably reinterpreting, as in [START_REF] Shen | The Fourier transform for certain hyperkähler fourfolds[END_REF], Proposition 8.14, the vanishing of the modified diagonal cycle of Beauville-Voisin [START_REF] Beauville | On the Chow ring of a K3 surface[END_REF] as the multiplicativity of the Beauville-Voisin Chow-Künneth decomposition.):

| 116,720 | ["12432"] | ["521750", "21", "237843"] |
01004832 | en | ["spi"] | 2024/03/04 23:41:44 | 2010 | https://hal.science/hal-01004832/file/GGC.pdf | J A García
email: [email protected]
Ll Gascón
F Chinesca
A FLUX LIMITER STRATEGY FOR SOLVING THE SATURATION EQUATION IN RESIN TRANSFER MOLDING PROCESS SIMULATION
Keywords: Resin Transfer Molding (RTM), fixed mesh simulation, transport problems, flux limiter
The main aim of this work is to propose a numerical procedure for solving the saturation equation in RTM process simulation. In order to analyze in more detail the progressive impregnation of a fibrous preform by a fluid resin, the numerical model proposed here considers the flow through a partially saturated medium, including the dependence of permeability on the saturation degree. The model consists of an elliptic equation governing the pressure distribution and a transport hyperbolic equation describing the evolution of the saturation in RTM. A global flux limiter fixed mesh strategy is proposed for solving the transport equation with a source term. The flux limiter method has the ability to limit the extra numerical diffusion introduced by standard first-order schemes. This formulation can lead to improvements of existing RTM flow simulation codes and optimize the injection process. Preliminary numerical results are presented to validate this new approach.
INTRODUCTION
Liquid Composites Molding (LCM) processes are based on a proper resin impregnation of the reinforcement. Modeling and simulation play an important role in the development and optimization of molds production and in devising appropriate resin injection strategies. Minimization of mold filling time without losing part quality is an important issue in Resin Transfer Molding (RTM). Inadequate injection strategies tend to create macro and microvoids in the part, the formation of which depends on fluid flow velocity.
Over the last decades, different numerical techniques have been used to model the mold filling process in RTM. In general, RTM process simulation requires an accurate treatment of the advection equation which governs the evolution of different fluid properties: fluid presence function, incubation time, temperature, concentration of reacting components, degree of saturation, etc. In some of our former works [START_REF] García | A Fixed Mesh Numerical Method for Modelling the Flow in Liquid Composites Moulding Processes Using a Volume of Fluid Technique[END_REF][START_REF] Sánchez | Towards an efficient numerical treatment of the transport problems in the Resin Transfer Molding Simulation[END_REF], a new procedure was proposed to integrate accurately the transport of the different above mentioned fluid properties. In this work, the numerical procedure is extended for solving the transport equation governing the evolution of saturation in RTM during mold filling. In this model the permeability is assumed to be a function of saturation [START_REF] Breard | Numerical Simulation of Void Formation in LCM[END_REF], and then the continuity equation that governs the pressure distribution includes a source term that depends on saturation. The presence of this source term explains the quadratic spatial pressure distribution in an unsaturated porous medium noticed in unidirectional flows. To derive a closed model the saturation equation is considered whose source term depends on the micro and macro voids content.
The technique here used is based on a flux limiter fixed mesh strategy for solving the transport equation which governs the evolution of degree of saturation. It is well-known that spurious oscillations in the computed solution can be avoided by using first-order upwind discretization schemes, but unfortunately these first order schemes introduce an excessive numerical diffusion. One alternative for avoiding non-physical oscillations, reducing the extra numerical diffusion, lies in the use of second-order numerical fluxes in the regions where the solution is smooth enough that reduce to first-order in presence of discontinuities. This technique was successfully used for computing the volume fraction evolution in [START_REF] García | A Fixed Mesh Numerical Method for Modelling the Flow in Liquid Composites Moulding Processes Using a Volume of Fluid Technique[END_REF].
GOVERNING EQUATIONS
The model which describes the mould filling process is given by Darcy's law

v = -(K/μ) ∇p    (1)

combined with the continuity equation, which should account for the loss of resin due to the preform impregnation. This can be done by incorporating a source term in the mass balance equation

∇·v = -φ ∂S/∂t    (2)
where v is the Darcy velocity, K is the preform permeability tensor, μ is the fluid viscosity, φ is the porosity of the fibrous reinforcement, p is the resin pressure and S the degree of saturation. In order to evaluate the permeability, we make use of the model proposed by Breard et al. [START_REF] Breard | Numerical Simulation of Void Formation in LCM[END_REF] to predict void formation in LCM. Then the permeability in Darcy's law should be considered as the product of the geometrical permeability, K_sat, and a relative permeability K_rel(S),

K_unsat(S) = K_rel(S) K_sat    (3)

where the relative permeability depends on the saturation degree

K_rel(S) = [ R_S^{1/β} + (1 - R_S^{1/β}) S ]^β,    R_S ≤ K_rel(S) ≤ 1,    (4)

and β is a fitting factor whose usual values range in the interval [0.4, 0.8] [3]. Combining Eqns. (1) and (2) yields

∇·( (K_unsat(S)/μ) ∇p ) = φ ∂S/∂t    (5)
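As an illustration only (a minimal sketch we add here; the function names are our own, and the parameter values are merely the ones quoted later in the numerical section), the saturation-dependent permeability of Eqs. (3)-(4) can be evaluated as follows:

def k_rel(S, R_S=0.8, beta=0.6):
    """Relative permeability K_rel(S) of Eq. (4); satisfies R_S <= K_rel <= 1."""
    return (R_S ** (1.0 / beta) + (1.0 - R_S ** (1.0 / beta)) * S) ** beta

def k_unsat(S, K_sat=5e-11, R_S=0.8, beta=0.6):
    """Unsaturated permeability K_unsat(S) = K_rel(S) * K_sat of Eq. (3), in m^2."""
    return k_rel(S, R_S, beta) * K_sat

# Sanity checks: K_rel(0) = R_S (dry preform) and K_rel(1) = 1 (fully saturated).
assert abs(k_rel(0.0) - 0.8) < 1e-12
assert abs(k_rel(1.0) - 1.0) < 1e-12

Here beta = 0.6 is simply an assumed value inside the interval [0.4, 0.8] quoted above.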
Finally, to close the problem, we assume that saturation is governed by the following transport equation [START_REF] Breard | Numerical Simulation of Void Formation in LCM[END_REF][START_REF] Trochu | New Approaches to Accelerate Calculations and Improve Accuracy of Numerical Simulations in Liquid Composite Molding[END_REF]:
φ ∂S/∂t + v·∇S = C    (6)
where the source term can take different forms. According to the model proposed by Breard et al. [START_REF] Breard | Numerical Simulation of Void Formation in LCM[END_REF], the source term reads:

C = v ( α ∇²S - Q_S )    (7)

or

C = v [ (α_M + α_m/v) ∇²S - Q_S ]    (8)

as in the model of Ruiz et al. [START_REF] Trochu | New Approaches to Accelerate Calculations and Improve Accuracy of Numerical Simulations in Liquid Composite Molding[END_REF]. In this latter approach, α_M and α_m represent the dispersion coefficients of the macro and micro voids, respectively, and Q_S is a function of the modified capillary number [START_REF] Breard | Numerical Simulation of Void Formation in LCM[END_REF].
The saturation S takes a unit value in the saturated domain, a zero value in the empty region and varies between 0 and 1 in the partially saturated region. The whole domain has been designated by Ω and its boundary by ∂Ω. Initially, we assume the condition

S(x, t = 0) = 0 for x ∈ Ω,   S(x, t = 0) = 1 for x ∈ ∂Ω⁻,    (9)

where the saturation function on the inflow boundary ∂Ω⁻ is assumed unchanged during the filling process, that is:

S(x ∈ ∂Ω⁻, t) = 1.    (10)
The filling process simulation involves at each time step:

1. The calculation of the saturation-dependent permeability as well as the calculation of the source term of Eqn. (5).
2. The calculation of the pressure distribution by applying a standard finite element discretization of Eqn. (5).
3. The calculation of the velocity field from Darcy's law (1).
4. The updating of the saturation by integrating Eqn. (6) using a flux limiter technique.
The boundary conditions are given by:
• The pressure gradient in the normal direction to the mold walls vanishes.
• The pressure or the flow rate is specified on the inflow boundary (injection nozzle).
• The pressure is zero in the empty part of mold.
THE ADVECTION EQUATION
The saturation equation has a hyperbolic character, requiring appropriate stabilization. In this section we propose a flux-limiter technique for its solution. For the sake of simplicity, from now on we only consider one-dimensional models.
In the one-dimensional case Eqn. ( 6) reads:
φ ∂S/∂t + v ∂S/∂x = C    (11)
and defining the flux as F = (v/φ) S, Eqn. (11) can be integrated by applying a second-order upwind scheme preserving the TVD property [START_REF] Sweby | High Resolution Schemes Using Flux Limiters for Hyperbolic Conservation Laws[END_REF], whose discrete form writes:

S_i^{n+1} = S_i^n - λ ( F̂_{i+1/2}^{SW} - F̂_{i-1/2}^{SW} ) + Δt C_i    (12)

where λ = Δt/h and

F̂_{i+1/2}^{SW} = F̂_{i+1/2}^{UP} + (1/2) χ(r_{i+1/2}) [ sign(v_{i+1/2}) - (Δt/h) v_{i+1/2} ] (F_{i+1} - F_i),
F̂_{i+1/2}^{UP} = (1/2)(F_i + F_{i+1}) - (1/2) sign(v_{i+1/2}) (F_{i+1} - F_i).    (13)
Here h represents the mesh size, Δt the time step and χ(r) is the flux limiter function. For the approximation of the source term, and considering the model (7),

C_i = v_i ( α ∂²S/∂x²|_i - Q_S ) ≈ v_i ( α (S_{i+1} - 2S_i + S_{i-1})/h² - Q_S )    (14)

or, using the expression of the source term given by (8),

C_i = v_i [ (α_M + α_m/v_i) ∂²S/∂x²|_i - Q_S ] ≈ v_i [ (α_M + α_m/v_i) (S_{i+1} - 2S_i + S_{i-1})/h² - Q_S ]    (15)
The superscript UP is associated with the first-order upwind scheme and the superscript SW with the second-order scheme using a modified flux limiter technique (in our case the Sweby flux limiter). To define the coefficient r_{i+1/2} in Eqn. (13), we propose to define its value by comparing consecutive variations of the approximate numerical solution with respect to the flow direction [START_REF] Sweby | High Resolution Schemes Using Flux Limiters for Hyperbolic Conservation Laws[END_REF]:

r_{i+1/2} = (S_i - S_{i-1}) / (S_{i+1} - S_i)   if v_{i+1/2} ≥ 0,
r_{i+1/2} = (S_{i+2} - S_{i+1}) / (S_{i+1} - S_i)   if v_{i+1/2} ≤ 0.    (16)

The Sweby flux limiter reads [START_REF] Sweby | High Resolution Schemes Using Flux Limiters for Hyperbolic Conservation Laws[END_REF]:

χ_SB(r) = max{ 0, min{2r, 1}, min{r, 2} }    (17)
In general, χ(r) = 0 if r ≤ 0, which guarantees that the scheme will be of first order in the neighbourhood of a discontinuity, since r ≤ 0 implies that the slopes of the solution have opposite signs. On the other hand, it is necessary that χ(1) = 1 to obtain a second-order scheme in smooth regions of the solution. The limiter function should be chosen verifying both conditions and maximizing the antidiffusive flux. We remark in Eqn. (13) that the choice χ(r) = 0, ∀r, gives rise to the original first-order upwind method, whereas the choice χ(r) = 1, ∀r, defines a fully second-order scheme. Hence, the proposed hybrid scheme, with 0 ≤ χ(r) ≤ 1 in Eqn. (13), is of second order in smooth regions of the solution but it is a first-order method when a discontinuity is present. This allows the spurious numerical oscillations associated with conventional second-order methods in the presence of discontinuities to be avoided, and reduces the excessive numerical diffusion introduced by first-order upwind schemes.
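The following self-contained sketch (our own illustrative code, with hypothetical function names, a uniform grid and a constant positive velocity assumed for simplicity) implements the update (12)-(13) with the ratio (16) and the limiter (17):

import numpy as np

def sweby_limiter(r):
    """Flux limiter of Eq. (17): chi(r) = max(0, min(2r, 1), min(r, 2))."""
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)))

def tvd_step(S, v, phi, h, dt, C_source):
    """One explicit step of Eq. (12) for a constant, positive velocity v."""
    F = (v / phi) * S                      # flux F = (v/phi) S
    n = S.size
    F_half = np.empty(n - 1)               # interface fluxes F_{i+1/2}
    lam = dt / h
    for i in range(n - 1):
        dF = F[i + 1] - F[i]
        F_up = F[i]                        # first-order upwind flux of Eq. (13) when sign(v) = +1
        dS = S[i + 1] - S[i]
        r = (S[i] - S[i - 1]) / dS if (i > 0 and dS != 0.0) else 0.0   # Eq. (16), v > 0 branch
        F_half[i] = F_up + 0.5 * sweby_limiter(r) * (1.0 - lam * v) * dF
    S_new = S.copy()
    # interior update of Eq. (12); boundary values are left to the caller (inflow condition (10))
    S_new[1:-1] = S[1:-1] - lam * (F_half[1:] - F_half[:-1]) + dt * C_source[1:-1]
    return S_new

Setting sweby_limiter to return 0 everywhere recovers the first-order upwind scheme used for comparison in the figures below.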
NUMERICAL SIMULATIONS
In order to analyze the accuracy and efficiency of the proposed numerical scheme for the discretization of the transport equation governing the evolution of the saturation, where the source term is given by Eqn. (8), we analyze some one-dimensional test cases described in [START_REF] Breard | Numerical Simulation of Void Formation in LCM[END_REF]. A mold of 0.5 m length is considered. A constant injection pressure is prescribed (10^5 Pa) with a saturated permeability K_sat = 5·10^-11 m² and a resin viscosity of 0.1 Pa·s.

For the numerical simulation, we consider a time step of 0.1 seconds with α_M = α_m = 10^-10 and a constant value R_S = 0.8. The domain is assumed initially empty, except the first element, which represents the injection nozzle and is assumed fully filled.
To analyze the influence of mesh size on the simulated results, three meshes with different nodal distributions are considered, consisting of 30, 60 and 150 nodes, respectively. The associated numerical solutions are depicted in Fig. 1 for a filling time of 700 seconds, using the first-order scheme (Eqns. (12)-(13) with χ(r) = 0) and using the Sweby flux limiter scheme (Eqns. (12)-(13) with χ(r) defined by Eqn. (17)). We notice that the convergence is faster when the flux limiter second-order scheme is considered. Fig. 1 also proves that the diffusivity of the scheme decreases with the mesh refinement. It can be noticed that the use of a first-order scheme introduces a significant numerical overdiffusion. Obviously, there are two terms that contribute to the front smoothing, the one related to the source term, and the one purely numerical introduced by the discretization scheme. The latter can be reduced by using higher-order schemes and fine enough meshes.
Fig. 2 depicts the saturation profile every 100 seconds by applying both the first-order and the Sweby flux limiter scheme when 150 nodes are used in the space discretization. We can appreciate the significant impact of the discretization scheme on the computed solution. It can also be appreciated that the diffusion is lower, as expected, when flux limiter techniques are employed. To quantify the convergence, different computations were performed using different mesh sizes (30, 60, 90, 120 and 150 nodes, respectively). The computed results illustrated in Fig. 3 represent the ratio of the partially filled domain to the total length of the mold for both discretization schemes. The pressure distribution varies linearly within each saturated element and is continuous between elements. The saturation and pressure distributions for a filling time of 700 seconds are represented in Fig. 4. One can notice the presence of three different behaviors: full impregnation, unsaturated and empty domains. In the fully saturated regions the model reduces to the usual Darcy model, leading to a linear pressure distribution. In the partially saturated regions, the pressure distribution becomes parabolic, as announced in [START_REF] Breard | Numerical Simulation of Void Formation in LCM[END_REF]. Finally, in the empty domain the pressure vanishes. Fig. 4 Pressure and saturation at t = 700 seconds (left); pressure distributions (right).
CONCLUSIONS
A new numerical procedure for simulating LCM processes has been presented. The numerical model is based on the consideration of partially saturated flows. For this purpose, the advection-diffusion equation describing the evolution of the saturation is solved by using a flux limiter upwind scheme. Numerical results confirm that first-order schemes exhibit an excessive and unrealistic diffusion due to the numerical approximation of the advective term, while the flux-limiter scheme shows less extra-diffusive effects. Thus, the proposed flux limiter significantly improves the results (with respect to the first-order solutions). The results suggest that the present numerical method has a satisfactory capability of simulating the LCM process, but further convergence analysis must be carried out.
Fig. 1 Numerical results for saturation with the first-order scheme (left) and the Sweby flux limiter (right).
Fig. 2 Saturation profiles every 100 seconds, using the first-order scheme (left) and the Sweby flux limiter (right).
Fig. 3 Convergence analysis.
ACKNOWLEDGEMENTS
This research work is supported by a grant from the Ministerio de Educación y Ciencia (MEC), project DPI2007-66723-C02-01.

| 15,398 | ["15516"] | ["300772", "300772", "59131"] |
01004970 | en | ["spi"] | 2024/03/04 23:41:44 | 2014 | https://hal.science/hal-01004970/file/SFDJ.pdf | Bun Eang Sar
Peter Davies
Frédéric Jacquemin
Accounting for differential swelling in the multi-physics modelling of the diffusive behaviour of polymers
Keywords: moisture absorption, thermodynamical approach, unsymmetrical loading of moisture
ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Introduction
Polymers and polymeric composites absorb moisture when exposed to ambient humidity or immersed in liquid. Polymeric matrix composites differ from other materials in the sense that low-molecular weight substances such as water may easily migrate even at room temperature, generating a variation of the material's structure, morphology, and composition. Moreover, many authors have reported that hygro-thermal ageing could induce a loss of the mechanical stiffness and/or strength of organic matrix composites [START_REF] Patel | Hygrothermal effects on the structural behaviour of thick composite laminates using higher-order theory[END_REF][START_REF] Selzer | Mechanical properties and failure behavior of carbon fiber-reinforced polymer composites under the influence of moisture[END_REF]. It is probable that the factors described above will also affect the moisture sorption behavior of polymer matrix composites. In order to predict the time-dependent evolution of the moisture content of composite structures, various models have been developed in the literature. Among them, some are based on the classical Fickian diffusion model [START_REF] Gigliotti | Transient and cyclical hygrothermoelastic stress in laminated composite plates: Modelling and experimental assessment[END_REF][START_REF] Jacquemin | Modelling of the moisture concentration field due to cyclical hygrothermal conditions in thick laminated pipes[END_REF][START_REF] Jedidi | Design of accelerated hygrothermal cycles on polymer matrix composites in the case of a supersonic aircraft[END_REF][START_REF] Shen | Moisture absorption and desorption of composite materials[END_REF]. More recently, Fick's model has successfully been combined with scale transition models such as the Eshelby-Kröner self-consistent model for predicting multi-scale distribution of the internal mechanical states during the transient step of the moisture diffusion process experienced by polymer composites [START_REF] Fréour | On an analytical self-consistent model for internal stress prediction in fiber-reinforced composites submitted to hygroelastic load[END_REF][START_REF] Jacquemin | Prediction of local hygroscopic stresses for composite structures -analytical and numerical micromechanical approaches[END_REF].
Nevertheless, some experimental data demonstrate that the moisture sorption in composite structures could differ from the typical Fickian uptake [START_REF] Cai | Non-Fickian moisture diffusion in polymeric composite[END_REF][START_REF] Rao | Moisture behavior of T300-914C laminates[END_REF]. As a consequence, some researchers have developed models in order to reproduce the anomalous sorption curves observed in practice [START_REF] Carter | Langmuir-type model for anomalous moisture diffusion in composite resins[END_REF][START_REF] Verpoest | Moisture absorption characteristics of aramid-epoxy composites[END_REF]. Among the proposed methods, [START_REF] Roy | Modeling of anomalous diffusion in polymer composites: A finite element approach[END_REF] documented a multi-physics approach to the diffusion mechanisms, compatible with the thermodynamics. The approach is similar to that presented by Larché and Cahn or Aifantis and Gerberich for predicting the diffusion of gases through elastic solids [START_REF] Aifantis | Gaseous diffusion in a stressed-thermoelastic solid. Part II: The thermomechanical formulation[END_REF][START_REF] Aifantis | Gaseous diffusion in a stressed-thermoelastic solid. Part II: Thermodynamic structure and transport theory[END_REF][START_REF] Larché | A linear theory of thermochemical equilibrium of solids under stress[END_REF]. The multiphysics thermodynamic model proposed by Larché and Cahn was later implemented by Neogi et al. who achieved the successful fitting of experimental results obtained on thin polymer membranes [START_REF] Neogi | Diffusion in solids under strain, with emphasis on polymer membranes[END_REF]. Nevertheless, in these pioneering works, the differential swelling was treated owing to simplifying assumptions relating the deformation field to the existing penetrant concentration [START_REF] Larché | The effect of self-stress on diffusion in solid[END_REF].
More recently, other mutliphysics model coupling the mechanical states to mass-transport process were developed in the case that linear viscoelastic solids were considered [START_REF] Carbonell | Coupled deformation and mass-transport processes in solid polymers[END_REF]. An important feature of that formulation, although limited to the one-dimensional case, is that the expressions used for the chemical potential and the stress constitutive equations are thermodynamically consistent, since they come from the equation describing the Helmholtz free energy [START_REF] Carbonell | Coupled deformation and mass-transport processes in solid polymers[END_REF].
In recent works [START_REF] Derrien | The effect of moisture-induced swelling on the absorption capacity of transversely isotropic elastic polymer-matrix composites[END_REF][START_REF] Sar | Coupling moisture diffusion and internal mechanical states in polymers -A thermodynamical approach[END_REF], other models, focused on the description of anomalous diffusion, were also developed which were compatible with the thermodynamics. Nevertheless, the mathematical formalism presented in both references [START_REF] Derrien | The effect of moisture-induced swelling on the absorption capacity of transversely isotropic elastic polymer-matrix composites[END_REF][START_REF] Sar | Coupling moisture diffusion and internal mechanical states in polymers -A thermodynamical approach[END_REF] does not enable the effects on the moisture kinetics induced by the presence of an in-depth heterogeneous profile of the hygroelastic strain to be accounted for. The present work will present a possible way to address this issue. The developments detailed in this paper will also extend the formalism, so that an unsymmetrical hygroscopic load can be considered, whereas only symmetrical cases could be modeled using the historical version of the model [START_REF] Derrien | The effect of moisture-induced swelling on the absorption capacity of transversely isotropic elastic polymer-matrix composites[END_REF][START_REF] Sar | Coupling moisture diffusion and internal mechanical states in polymers -A thermodynamical approach[END_REF], as well as according to the original pioneering papers published in this very field of research [START_REF] Aifantis | Gaseous diffusion in a stressed-thermoelastic solid. Part II: The thermomechanical formulation[END_REF][START_REF] Aifantis | Gaseous diffusion in a stressed-thermoelastic solid. Part II: Thermodynamic structure and transport theory[END_REF][START_REF] Larché | A linear theory of thermochemical equilibrium of solids under stress[END_REF].
Hygroscopic pressure
Moisture absorption induces swelling strains that actually correspond to the existence of a hygroscopic pressure within the material. The in-depth, time-dependent hygroscopic pressure profile occurring during the transient stage of the diffusion process is determined according to the three following equations: (1) Hygro-elastic Hooke's law, (2) Equilibrium equations, and (3) Compatibility equations.
ε_il = ((1 + ν)/E) σ_il - (ν/E) δ_il tr σ + η C δ_il,    (1)

σ_il,l = 0,    (2)

ε_il,jk + ε_jk,il - ε_jl,ik - ε_ik,jl = 0,    (3)

where ν is Poisson's ratio, E the Young's modulus and η the coefficient of moisture expansion of the polymer (CME). C denotes the moisture content (assuming the material to be initially dry) while δ_il stands for the Kronecker symbol, i.e.

δ_il = 1 if i = l, and δ_il = 0 if i ≠ l.
For a given set of indices (i, l) in (1)-(2), we use the replacement rule j = k = 1, 2, 3 in Eq. (3), the summation of which yields:

Δε_il + ε_kk,il - ε_ik,lk - ε_lk,ik = 0.    (4)
Accounting for the hygro-elastic Hooke's law (1), the sum ε_ik,lk + ε_lk,ik appearing above in relation (4) actually satisfies the following equation

ε_ik,lk + ε_lk,ik = ((1 + ν)/E)(σ_ik,lk + σ_lk,ik) - (ν/E)(σ_kk,lk δ_ik + σ_kk,ik δ_lk) + η (C_,lk δ_ik + C_,ik δ_lk).    (5)

Since σ_ik,lk = σ_lk,ik = 0, σ_kk,lk δ_ik = σ_kk,ik δ_lk = σ_kk,il and C_,lk δ_ik = C_,ik δ_lk = C_,il, many terms cancel in Eq. (5), which can be written in the following simplified form:

ε_ik,lk + ε_lk,ik = -(2ν/E) σ_kk,il + 2η C_,il.    (6)

Substituting Eq. (6) into Eq. (4) yields

Δε_il + ε_kk,il + (2ν/E) σ_kk,il - 2η C_,il = 0.    (7)
Considering the replacement rule i = l in (7) yields

Δε_ll + ε_kk,ll + (2ν/E) σ_kk,ll - 2η C_,ll = 0.    (8)

Actually, the Laplacian of the moisture content is written as C_,ll = ΔC. Moreover, Δε_ll = Δε_kk = ε_kk,ll. As a result, the relation can be simplified as follows

Δε_kk + (ν/E) σ_kk,ll - η ΔC = 0.    (9)
From Eq. (1), the second derivative of the hygro-elastic strain trace, ε_kk,il, featured in Eq. (7), satisfies:

ε_kk,il = ((1 - 2ν)/E) σ_kk,il + 3η C_,il.    (10)

Putting i = l into Eq. (10) provides the following expression for the Laplacian of the trace of the hygro-elastic strain, Δε_kk:

ε_kk,ll = Δε_kk = ((1 - 2ν)/E) Δσ_kk + 3η ΔC.    (11)

Combining (9) to (11) yields

((1 - 2ν)/E) Δσ_kk + 3η ΔC + (ν/E) σ_kk,ll - η ΔC = 0  ⇔  ((1 - 2ν + ν)/E) Δσ_kk + 2η ΔC = 0,    (12)

Δσ_kk = -(2η E/(1 - ν)) ΔC.    (13)
In the present work, the trace of the stress tensor σ_kk is considered to correspond to the following sum of an external mechanical load, P_ex, and a hygroscopic pressure, P_is, so that σ_kk = -3 (P_ex + P_is), where P_ex is a constant parameter. Thus, ΔP_ex = 0 and Eq. (13) can be reduced to

ΔP_is = (2E/(3(1 - ν))) η ΔC = (α/A_0) η ΔC,    (14)

where the constants α and A_0 have already been defined in previous works [START_REF] Sar | Coupling moisture diffusion and internal mechanical states in polymers -A thermodynamical approach[END_REF], as

α/A_0 = 2E/(3(1 - ν)),    (15)

A_0 = 3 ω_w/(R T ρ_0),    (16)

where ρ_0 is the density of the polymer resin in the free-strain state, whereas ω_w stands for the molar mass of water, T the temperature and R the ideal gas constant. We consider a plate whose lateral dimensions are large compared to the thickness. As a consequence, the diffusion is considered to occur along the direction x only. The unidirectional solution of Eq. (14) satisfies the following general form

P_is(x, t) = (α/A_0) η C(x, t) + k_1(t) x + k_2(t).    (17)
The constants k_1(t) and k_2(t) are deduced from the equilibrium conditions, in which L stands for the thickness of the sample:

∫_0^L P_is(x, t) dx = 0,    ∫_0^L P_is(x, t) x dx = 0.    (18)

The solutions satisfying the system of Eqs. (18) are

k_1(t) = (6/L³)(α/A_0) η [L² C̄(t) - 2I],    (19)

k_2(t) = (2/L²)(α/A_0) η [3I - 2L² C̄(t)],    (20)

where

C̄(t) = (1/L) ∫_0^L C(x, t) dx,    (21)

I = ∫_0^L C(x, t) x dx.    (22)

Introducing (19)-(20) in the general expression (17) for the internal pressure yields

P_is(x, t) = (α/A_0) η [C(x, t) - 4 C̄(t)] + (6/L³)(α/A_0) η x [L² C̄(t) - 2I] + (6/L²)(α/A_0) η I.    (23)
In the case where a symmetrical hygroscopic load is applied on the boundaries of the structure, C(x, t) = C(L - x, t). As a result, the integral I is equal to L² C̄(t)/2. Hence, the corresponding hygroscopic pressure is given by the simplified form

P_is(x, t) = (α/A_0) η [C(x, t) - C̄(t)].
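As a cross-check of Eqs. (17)-(23), the sketch below (our own illustrative code; the quadrature rule, the variable names and the test profile are our choices) evaluates the hygroscopic pressure from a discretized moisture content profile and verifies the two equilibrium conditions (18):

import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature (kept local for NumPy-version independence)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def hygroscopic_pressure(C, x, alpha_over_A0, eta):
    """P_is(x, t) of Eqs. (17)-(23) for a discretized profile C(x) on [0, L]."""
    L = x[-1]
    C_bar = trapz(C, x) / L                                                # Eq. (21)
    I = trapz(C * x, x)                                                    # Eq. (22)
    k1 = 6.0 / L**3 * alpha_over_A0 * eta * (L**2 * C_bar - 2.0 * I)       # Eq. (19)
    k2 = 2.0 / L**2 * alpha_over_A0 * eta * (3.0 * I - 2.0 * L**2 * C_bar) # Eq. (20)
    return alpha_over_A0 * eta * C + k1 * x + k2                           # Eq. (17)

# Illustrative check on a 4 mm plate with an arbitrary non-symmetric profile;
# alpha/A0 = 2E/(3(1 - nu)) ~ 3.8e9 Pa for E = 3.65 GPa and nu = 0.36, Eq. (15).
x = np.linspace(0.0, 4e-3, 201)
C = 0.05 * (1.0 - x / 4e-3) ** 2
P_is = hygroscopic_pressure(C, x, alpha_over_A0=3.8e9, eta=0.6)
print(trapz(P_is, x), trapz(P_is * x, x))   # both integrals of Eq. (18) vanish up to quadrature error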
Chemical potential
The chemical potential of water μ̃_w is defined as the partial derivative of the Helmholtz free energy, F = F_0 + n f_w(C) + V_ε W, with respect to the amount of water n_w, where F_0 is the free energy of the dry stress-free polymer, f_w(C) is the variation of the free energy per mole of dry polymer due to the addition of water when the polymer is free to swell, n and V_ε are respectively the amount of polymer and its volume at any stage, whereas W denotes the hygro-elastic strain energy [START_REF] Derrien | The effect of moisture-induced swelling on the absorption capacity of transversely isotropic elastic polymer-matrix composites[END_REF]:

μ̃_w(C) = ∂F/∂n_w = (∂F/∂C)(∂C/∂n_w).    (24)

The moisture content in the polymer is calculated through ∂C/∂n_w = ω_w/(n ω). The hygro-elastic strain energy, written as a function of both the bulk modulus k and the shear modulus G, is defined by
W = (1/2) σ : ε_el = (k/2)(tr ε - 3ηC)² + G e : e,    (25)
where ε el is the elastic strain, ε being the total strain, whereas e is the deviatoric strain tensor.
Introducing f′_w(C) = ∂[f_w(C)]/∂C, one obtains the following expression for the derivative of the Helmholtz free energy with respect to the moisture content

∂F/∂C = ∂F_0/∂C + n f′_w(C) + W ∂V_ε/∂C + V_ε ∂W/∂C.    (26)
During the moisture diffusion process, we take into account the evolution of the volume occupied by the polymer, and the resulting variation of its density, through:

V_ε/V_0 = ρ_0/ρ_ε = tr ε + 1,    (27)

where V_ε, V_0, ρ_0, ρ_ε stand respectively for the polymer volume and its density at the present (strained) and initial (strain-free) states, so that

∂V_ε/∂C = (∂V_ε/∂ tr ε)(∂ tr ε/∂C) = V_0 ∂ tr ε/∂C = (n ω/ρ_0) ∂ tr ε/∂C.    (28)
Let us consider (27), as well as the equation e : e = 0 (which comes from the specific case, considered here, of a macroscopically isotropic polymer submitted to a hydrostatic pressure), in the expression of the hygro-elastic strain energy (25). As a result, the partial derivative of the Helmholtz free energy F with respect to the moisture content (26) transforms as follows

∂F/∂C = n f′_w(C) + V_ε k (tr ε - 3ηC) [∂(tr ε - 3ηC)/∂C] + (k/2)(tr ε - 3ηC)² (n ω/ρ_0) ∂ tr ε/∂C.    (29)
Accounting for Eq. (29), the chemical potential (24) eventually satisfies

μ̃_w(C, tr ε) = (ω_w/ω) f′_w(C) + k (ω_w/ρ_0)(tr ε - 3ηC)[∂ tr ε/∂C - 3η (tr ε + 1)] + (ω_w/ρ_0)(k/2)(tr ε - 3ηC)² ∂ tr ε/∂C.    (30)
Besides, the trace of the strain tensor can be expressed as a function of the total pressure as follows

tr ε = σ_kk/(3k) + 3ηC = -P/k + 3ηC = -(P_ex + P_is)/k + 3ηC.    (31)
Taking into account the expression (23) obtained for the internal pressure induced notably by the differential swelling, the derivative of relation (31) with respect to the moisture content satisfies

∂ tr ε/∂C = η (3 A_0 k - α)/(A_0 k).    (32)
According to Eq. (32), and considering that (ω_w/ω) f′_w(C) = μ_0 + RT ln(C/C_0), the relation previously obtained for the expression of the chemical potential (30) can be developed as follows

μ̃_w(C, tr ε) = μ_0 + RT ln(C/C_0) - (3η ω_w k/ρ_0)(tr ε - 3ηC)(tr ε + 1) + (η ω_w/(A_0 ρ_0))(3A_0 k - α)(tr ε - 3ηC)(tr ε + 1) + (η ω_w/(A_0 ρ_0))((3A_0 k - α)/2)(tr ε - 3ηC)².    (33)
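To make the structure of Eq. (33) explicit, the short sketch below (our own illustrative code; the function name, argument list and any default values are our choices) evaluates the generalized chemical potential for a given moisture content and volume strain:

import math

R = 8.314   # ideal gas constant, J/(mol K)

def mu_w(C, tr_eps, T, k, eta, A0, alpha, omega_w, rho0, mu0=0.0, C0=1.0):
    """Generalized chemical potential of Eq. (33), per mole of water."""
    d = tr_eps - 3.0 * eta * C          # recurring factor (tr eps - 3 eta C)
    e = tr_eps + 1.0                    # recurring factor (tr eps + 1)
    return (mu0 + R * T * math.log(C / C0)
            - 3.0 * eta * omega_w * k / rho0 * d * e
            + eta * omega_w / (A0 * rho0) * (3.0 * A0 * k - alpha) * d * e
            + eta * omega_w / (A0 * rho0) * (3.0 * A0 * k - alpha) / 2.0 * d**2)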
Equation of model
Generally, the diffusion equation is deduced from the conservation of mass equation [START_REF] Larché | The effect of self-stress on diffusion in solid[END_REF], in which the first derivative of the moisture content with respect to time, Ċ, relates to the diffusion flux of moisture, J_i, as follows

Ċ + J_i,i = 0.    (34)
In the present work, the diffusion flux of moisture is written in terms of the chemical potential μ̃_w as proposed, for instance, in [14]:

J_i = -(D C/(R T)) grad μ̃_w,    (35)

where D is the diffusion coefficient in [mm²/s], R is the gas constant in [J/(mol·K)], and T the absolute temperature [K].
We obtain the constitutive equation by using the mass conservation equation (34), in which the chemical potential of water has been written as a function of both the trace of the strains and the moisture content:

Ċ = (D/(R T)) div[ C grad μ̃_w(C, tr ε) ].    (36)
The gradient of the composite function μ̃_w(C, tr ε) satisfies the following chain rule

grad μ̃_w(C, tr ε) = (∂μ̃_w/∂C) grad C + (∂μ̃_w/∂ tr ε) grad tr ε.    (37)

Introducing the development (37) in (36) yields

Ċ = (D/(R T)) div[ C ( (∂μ̃_w/∂C) grad C + (∂μ̃_w/∂ tr ε) grad tr ε ) ].    (38)
The partial derivatives of the chemical potential with respect to either the moisture content C or the strain trace tr ε can respectively be written as

∂μ̃_w(C, tr ε)/∂C = R T/C + (9η² ω_w k/ρ_0)(tr ε + 1) - (3η² ω_w/(A_0 ρ_0))(3A_0 k - α)(2 tr ε - 3ηC + 1),    (39)

∂μ̃_w(C, tr ε)/∂ tr ε = -(3η ω_w k/ρ_0)(2 tr ε - 3ηC + 1) + (η ω_w/(A_0 ρ_0))(3A_0 k - α)(3 tr ε - 6ηC + 1).    (40)
Combining Eqs. (39) and (40) with the diffusion equation (38) leads to

Ċ = (D/(R T)) div{ C [ R T/C + (9η² k ω_w/ρ_0)(tr ε + 1) ] grad C - C (3η ω_w k/ρ_0)(2 tr ε - 3ηC + 1) grad tr ε }
  + (D/(R T)) div{ -C (3η² ω_w/(A_0 ρ_0))(3A_0 k - α)(2 tr ε - 3ηC + 1) grad C }
  + (D/(R T)) div{ C (η ω_w/(A_0 ρ_0))(3A_0 k - α)(3 tr ε - 6ηC + 1) grad tr ε }.    (41)

Further simplifications applied to the previous form (41) enable us to write

Ċ = (D/(R T)) div{ C [ R T/C + (9η² k ω_w/ρ_0)(tr ε + 1) ] grad C - C (3η ω_w k/ρ_0)(2 tr ε - 3ηC + 1) grad tr ε } + g,    (42)

where

g = (D/(R T)) div{ -C (3η² ω_w/(A_0 ρ_0))(3A_0 k - α)(2 tr ε - 3ηC + 1) grad C } + (D/(R T)) div{ C (η ω_w/(A_0 ρ_0))(3A_0 k - α)(3 tr ε - 6ηC + 1) grad tr ε }.    (43)

Equation (43) yields a developed expression (Eq. (44)) in terms of C ΔC, C Δ(tr ε) and the gradients of C and tr ε; finally, the factor g could be written as follows

g = D ξ [ z_1 ∂²C/∂x² + z_2 (∂C/∂x)² + z_3 ∂C/∂x + 3C ( (6/L³)(α/(A_0 k)) η (L² C̄(t) - 2I) )² ],    (45)
where

z_1 = -3η (2 tr ε - 3ηC + 1) C + (3 tr ε - 6ηC + 1) C ((3A_0 k - α)/(A_0 k)) η,    (46)

z_2 = -3η (2 tr ε - 6ηC + 1) + (3 tr ε - 18ηC + 1)((3A_0 k - α)/(A_0 k)) η + 3C ((3A_0 k - α)/(A_0 k)) η²,    (47)

z_3 = -(6/L³)(α/(A_0 k)) η (L² C̄(t) - 2I)(3 tr ε - 18ηC + 1) - (36 C/L³)(α/(A_0 k))(L² C̄(t) - 2I)((3A_0 k - α)/(A_0 k)) η²,    (48)

ξ = ((3A_0 k - α)/3) η.    (49)
Using the same method, the first term of the right-hand side of Eq. (42) was developed, and then simplified. The resulting time-dependent diffusive behavior for a polymer plate subjected to an unsymmetrical humid ambient load is given by

Ċ = D { [1 + V_1 η² C + V_2 η³ C²] ∂²C/∂x² + η² (V_3 + V_4 C)(∂C/∂x)² - (6/L³)(α/(A_0 k)) η (L² C̄(t) - 2I)(V_5 + V_6 η² C) ∂C/∂x - (72/L⁶)(α²/(A_0 k)) η³ (L² C̄(t) - 2I)² C } + g,    (50)

where

V_1 = -3A_0 k tr ε + 2α tr ε + α,  V_2 = 9A_0 k - 3α,  V_3 = -3A_0 k tr ε + 2α tr ε + α,  V_4 = η V_2 - 2η α²/(A_0 k),  V_5 = 2η A_0 tr ε + A_0 η,  V_6 = 3A_0 - 4α/k.
Significant simplifications of Eq. (50) can be made when the polymer structure is subjected to symmetrical moisture conditions. This requires that the equation L² C̄(t) - 2I = 0 be satisfied. The resulting behavior law then respects the following form

Ċ = D { [1 + V_1 η² C + V_2 η³ C² + ξ z_1] ∂²C/∂x² + [η² (V_3 + V_4 C) + ξ z_2] (∂C/∂x)² }.    (51)
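For the symmetrical case, Eq. (51) has the generic quasi-linear form Ċ = D[A(C) ∂²C/∂x² + B(C)(∂C/∂x)²]. The explicit finite-difference sketch below (our own illustrative code; the coefficient functions A and B are to be supplied from the definitions of V_1...V_4, ξ, z_1 and z_2 above, and the simple Dirichlet treatment of the boundaries is an assumption made here for brevity) advances such an equation in time:

import numpy as np

def step_quasilinear(C, D, A, B, h, dt, C_left, C_right):
    """One explicit (forward Euler) step of dC/dt = D*(A(C)*C_xx + B(C)*C_x**2)."""
    C_xx = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / h**2     # centred second derivative
    C_x = (C[2:] - C[:-2]) / (2.0 * h)                 # centred first derivative
    C_new = C.copy()
    C_new[1:-1] = C[1:-1] + dt * D * (A(C[1:-1]) * C_xx + B(C[1:-1]) * C_x**2)
    C_new[0], C_new[-1] = C_left, C_right              # boundary moisture contents, cf. Eq. (55)
    return C_new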
Boundary conditions
The boundary condition is obtained by equating the chemical potential of water in humid air, μ̂_w = μ̂_0 + RT ln(p_w/p_0) (where μ̂_0 is the chemical potential of water in humid air at the reference pressure p_0, the partial pressure of water being p_w), with the generalized chemical potential of the polymer, μ̃_w, the expression for which is given by Eq. (33) in the present work. This statement yields the following equation

μ̃_w(C, tr ε) = μ_0 + RT ln(C/C_0) - (3η ω_w k/ρ_0)(tr ε - 3ηC)(tr ε + 1) + (η ω_w/(A_0 ρ_0))(3A_0 k - α)(tr ε - 3ηC)(tr ε + 1) + (η ω_w/(A_0 ρ_0))((3A_0 k - α)/2)(tr ε - 3ηC)².    (52)

The boundary condition is obviously only satisfied at the specific positions x_b denoting the boundaries between the ambient fluid and the polymer. The equalization between the chemical potential of water in humid air and the generalized chemical potential of water in the system leads to the following moisture condition at the structure boundaries

C(x_b, t) = (p_w/p_0) C_0 exp{ (μ̂_0 - μ_0)/(RT) + η k A_0 (tr ε - 3ηC)(tr ε + 1) - (η/3)(3A_0 k - α)(tr ε - 3ηC)(tr ε + 1) - (η/6)(3A_0 k - α)(tr ε - 3ηC)² }.    (53)
Equation (53) could also be written as a function of the total pressure P instead of tr ε, owing to their relation as expressed by (31). One can then write:

C(x_b, t) = (C_0/p_0) p_w exp{ (μ̂_0 - μ_0)/(RT) } exp{ η k A_0 (-P/k)(-P/k + 3ηC + 1) - (η/3)(3A_0 k - α)(-P/k)(-P/k + 3ηC + 1) - (η/6)(3A_0 k - α)(-P/k)² }.    (54)

Introducing Henry's law, S = (C_0/p_0) exp{ (μ̂_0 - μ_0)/(RT) }, into Eq. (54), the boundary condition for the moisture content becomes

C(x_b, t) = S p_w exp{ [ η A_0/k - (η/(3k²))(3A_0 k - α) ](P² - 3ηC k P - k P) - (η/(6k²))(3A_0 k - α) P² }.    (55)
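Because C appears on both sides of Eq. (55), the boundary value has to be solved for; a simple fixed-point iteration is usually sufficient since the exponential correction is mild. The sketch below is our own illustrative code, with parameter names of our choosing:

import math

def boundary_moisture(S_henry, p_w, P, k, eta, A0, alpha, tol=1e-12, max_iter=100):
    """Solve Eq. (55) for C(x_b, t) by fixed-point iteration."""
    a = eta * A0 / k - eta / (3.0 * k**2) * (3.0 * A0 * k - alpha)
    b = -eta / (6.0 * k**2) * (3.0 * A0 * k - alpha)
    C = S_henry * p_w                     # Henry's-law value used as starting guess
    for _ in range(max_iter):
        C_new = S_henry * p_w * math.exp(a * (P**2 - 3.0 * eta * C * k * P - k * P) + b * P**2)
        if abs(C_new - C) < tol:
            break
        C = C_new
    return C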
Numerical results
The numerical simulations correspond to a 4 mm thick plate made of an epoxy resin whose Young's modulus is 3.65 GPa and Poisson's ratio is 0.36. The polymer plate is subjected to moisture diffusion while experiencing a hydrostatic pressure load. We would like to simulate the moisture absorption within the above-described material in the cases where either a symmetrical or an unsymmetrical moisture condition takes place at the opposite edges of the plate.
Symmetrical moisture load
The opposite surfaces of the plate are assumed to be submitted to the same relative humidity, which corresponds to a reference moisture content level C_0 of 5% (in the case where the multiphysics effects are neglected). The mathematical equation governing the diffusion corresponds to Eq. (51), whereas the boundary condition is obtained owing to expression (55).
Figure 1 shows the time-dependent evolution of the macroscopic (average) moisture content, as a function of the CME: η = 0; η = 0.6 or η = 1, at an imposed pressure of 1 MPa. Increasing CME reduces the maximum moisture absorption capacity of the polymer as indicated by the evolution of the average moisture content in the steady state.
According to Fig. 1, the multiphysics model predicts a Fickian diffusion process in the case where the coefficient of moisture expansion of the polymer is assumed to be null. Discrepancies from the typical Fickian kinetics are predicted by the model when η ≠ 0. In particular, the apparent moisture diffusion coefficient of the polymer plate (i.e. the slope of the curves drawn in Fig. 1) varies at the beginning of the diffusion process (i.e. at the initial stage, when t tends towards 0, the slope of the curves is no longer independent of the ratio √t/e). Thus, a sort of delay time is predicted, during which the instantaneous moisture diffusion coefficient varies. This short period of time is followed by a pseudo-Fickian diffusion regime with a constant apparent diffusion coefficient. These discrepancies significantly increase with the coefficient of moisture expansion. Eventually, the coefficient of moisture expansion affects both the transient and permanent stages of the diffusion process predicted according to the multiphysics model.

Unsymmetrical moisture load

Let us consider the case when the opposite surfaces of the plate are submitted to different relative humidity levels. The environmental conditions correspond to a reference moisture content level C_0 of 5% on the first side of the plate, whereas it is either equal to 0%, 2.5% or 5% on the second side.

The moisture diffusion process is computed through Eq. (50), assuming the polymer to present a typical (non-zero) coefficient of moisture expansion η.

According to Fig. 2, the moisture uptake in the polymer resin decreases with the reduction of the relative humidity on the second side of the plate. Non-linearities, similar to those observed in Fig. 1, occur in the presently considered cases also. The previously mentioned "delay time" before the establishment of a Fickian-like diffusion process can be observed as well. The dependence of the apparent diffusion coefficient on time increases with the deviation between the environmental conditions applied to the opposite boundaries of the plate. Besides, the maximum moisture content attained in the permanent regime clearly does not vary linearly with the boundary condition applied to the second surface of the plate.

Fig. 2 Average moisture content predicted in a polymer plate submitted to unsymmetrical humid conditions. Cs2 stands for the moisture content reference level assumed to be applied on the second side of the plate.
Conclusions
This work is focused on developing an enhanced version of the model describing the diffusion of moisture in polymers based on the so-called thermodynamical approach first introduced by Derrien and Gilormini [START_REF] Derrien | The effect of moisture-induced swelling on the absorption capacity of transversely isotropic elastic polymer-matrix composites[END_REF], and then improved by Sar et al. [START_REF] Sar | Coupling moisture diffusion and internal mechanical states in polymers -A thermodynamical approach[END_REF]. For the first time, in contrast to both those references, the present paper handles the differential swelling experienced by the polymer during the moisture diffusion process. The effects induced by the through-thickness differential swelling on the time-dependent diffusion are properly taken into account in the mathematical development of the model, through additional terms involving partial derivatives of the volume strain with respect to the moisture content. Obviously, the resulting multiphysics kinetics law changes by comparison with the original (simplified) version of the model. The expressions satisfied by the boundary conditions for the moisture have been determined both for the cases where the material is considered as subjected to symmetrical moisture loads, and in cases when heterogeneous humid conditions are experienced by the polymer structure. Some preliminary results obtained through computations demonstrate that the developed model enables the prediction of anomalous (i.e. non-Fickian) moisture uptakes. The anomalies of diffusion mostly take place at both the very beginning of the diffusion process and the permanent regime. Non-linearities of the weight gain are thus predicted when the moisture sorption starts. After a short time, these non-linearities vanish, so that a pseudo-Fickian moisture uptake follows. This pseudo-Fickian regime corresponds to an instantaneous moisture diffusion coefficient independent of the √t/e ratio.
At the end of the process, the permanent regime is characterized by a maximum moisture absorption capacity, the value of which depends on material properties such as the coefficient of moisture expansion. Further work will be dedicated to a thorough investigation of this new version of the model through extensive numerical tests.
Future work will focus on further enhancements, such as accounting for reversible plasticization effects experienced by the polymer during the moisture diffusion process (i.e. the reduction of the material stiffness induced by the presence of water).
Here ω_w and ω stand respectively for the molar mass of water and of the polymer.
The obtained results for the volume average of the moisture content are shown in Figs. 1 and 2 as a function of the classical √t/e ratio.
Fig. 1 Effect of CME on moisture sorption (P_ex = 1 MPa).

| 27,350 | ["957714", "850332"] | ["10921", "10921", "300022", "10921"] |
01465278 | en | ["math"] | 2024/03/04 23:41:44 | 2017 | https://inria.hal.science/hal-01465278/file/napier400introduction.pdf | Denis Roegel
What did Napier invent?
The tercentenary of the publication of Napier's Descriptio took place in 1914. At that time, logarithms were everywhere and new tables were still being constructed by hand. It will suffice to mention the great tables of Bauschinger-Peters (1910)(1911) and Andoyer (1911). Machine-assisted computations made their debuts, but few were able to anticipate the dramatic changes that the 20th century would bring in computation.
But for all these groundbreaking efforts, the vast majority of the tables of logarithms were rooted several centuries before. Most of the tables then in use were not newly computed tables, but tables collated from other tables, carefully checked and compared, and which eventually derived from the first large tables of logarithms computed by Briggs (1624Briggs ( , 1633) ) and Vlacq (1628Vlacq ( , 1633)).
In 2014, tables of logarithms have long fallen into oblivion, although the digital age has made it possible to resurrect many of them. Anybody who wishes to use a table of logarithms can now print one and get his/her hands on the calculation methods which were commonplace for engineers, accountants and other professions until the 1970s. This is not that far away.
Nowadays, tables of logarithms are of course no longer needed. They have been replaced by handheld calculators, and even by the virtual calculators of our smartphones. Logarithms were used to simplify multiplications and divisions, but we can now multiply and divide without logarithms. It would therefore seem that commemorating Napier's work is now a bit off-topic. But is it?
In fact, the logarithmic function is alive and well. True, tables of logarithms are gone, but logarithms and exponentials appear everywhere as soon as one goes a little bit beyond elementary computations. Logarithms are not a function of the past, but the function has just become easier to compute than in the past.
Still, there is a veil of confusion surrounding the invention of logarithms. In order to get a better understanding of what were Napier's innovations, we need to consider the context. Napier did not appear out of nothing, and it is important to examine the roots of his work, and also how it compares to similar and contemporary work. It is interesting to ponder the topics of the tercentenary and quatercentenary meetings. In 1914, we had communications on the invention of logarithms, on Napier, on the computation of logarithms, on the law of exponents, on alleged prior inventions of logarithms, on the change to Briggs's logarithms, on Napierian logarithms calculated before Napier, on Edward Sang's tables, on fundamental trigonometrical and logarithmic tables, and on a number of other subjects [START_REF] Cargill | Napier tercentenary memorial volume[END_REF]. Topics in 2014 included the methods used by Napier, Napier's other calculating devices, how tables of logarithms pervaded Europe and beyond, their publication in Austria, Babbage's table of logarithms, Bauschinger and Peters's table published shortly before the 1914 meeting, and the work of Scottish mathematicians on logarithms. These are all important topics, showing the many ramifications of an apparently simple object.
What exactly happened around 1614 also needs a careful examination for the very reason that not all was new. This does explain why there have been claims for prior inventions of logarithms, sometimes attributed to Jost Bürgi (1552-1632), to Michael Stifel (1487-1567), or even to Archimedes (3rd century BC). Such priority claims are not new, and some of them had already been considered in 1914. But if we hope to settle the matter and make everybody happy, we ought of course to be accurate in our wordings and be clear not only about what are "logarithms," but also about what it means to be the inventor of logarithms. We cannot answer such questions without first defining what we mean by these terms. For instance, if we focus on the law of exponents (a^n · a^m = a^{n+m}), then the invention of logarithms has been brought back by some to Archimedes. But should we focus only on the law of exponents?
Before trying to define what we understand by "logarithm," let us do a little historical sketch and consider a number of significant early developments related to logarithms.
The law of exponents
One of the earliest expressions of the law of exponents actually even goes beyond Archimedes (c287 BC-c212 BC), since it is found in Euclid's writings, slightly before Archimedes. Euclid flourished around 300 BC, and in book IX of the Elements, he states the following proposition: "If numbers in any amount are continuously in proportion from the unit, the smallest one measures the largest one according to one of those among the numbers in proportion." (our translation from [3, p. 424]) Heath translated it as follows:
"If as many numbers as we please beginning from an unit be in continued proportion, the less measures the greater according to some one of the numbers which have place among the proportional numbers." [6, p. 395] This may appear somewhat opaque, but considering that "measuring" is taken in the sense that "2 cm measures 10 cm five times," all what Euclid means is that a n /a = a i with 1 ≤ i ≤ n. Euclid adds the porism: "And it is obvious that the place that the measuring number has from the unit is the same as the one that the number according to which we measure, has from the measured number, towards the preceding one." (our translation from [3, pp. 424-425]) that Heath translated as "And it is manifest that, whatever place the measuring number has, reckoned from the unit, the same place also has the number according to which it measures, reckoned from the number measured, in the direction of the number before it." [6, p. 396] This is to be understood as follows. Let a r be the "measuring number" at place r. This number measures another number a n which is at place n. At place nr, there is the number which measures equally a n . In other words, a r = a n /a n-r , or in modern terms a n = a r • a n-r .
The law of exponents may have been buried in words, but it was known to Euclid. In fact it was certainly pretty obvious when considering a sequence of numbers in proportion.
Archimedes was somewhat more explicit. In the Sand-reckoner, a treatise on the definition and use of very large numbers, he wrote the following: "If numbers are in continuous proportion from the unit and if some of them are multiplied together, the product will, in the same progression, be removed from the greatest of the multiplied numbers by as many numbers as the smallest multiplied number is removed from unit in that progression, and removed from unit by the sum minus one of the numbers of which the multiplied numbers are removed from the unit." (our translation from [17, p. 366])
In 1897, Heath [5, pp. 229-230] gave a more modernized translation: "If there be any number of terms of a series in continued proportion, say A_1, A_2, A_3, ... A_m, ... A_n, ... A_{m+n-1}, ... of which A_1 = 1, A_2 = 10, and if any two terms as A_m, A_n be taken and multiplied, the product A_m · A_n will be a term in the same series and will be as many terms distant from A_n as A_m is distant from A_1; also it will be distant from A_1 by a number of terms less by one than the sum of the numbers of terms by which A_m and A_n respectively are distant from A_1."
In other words, in this passage, Archimedes considers a sequence of numbers a, b, c, etc., such that the ratio between a and b is the same as between b and c, and so on. a is assumed to be equal to 1. Archimedes also uses a notion of distance from the unit, a being at distance 1 from a, b at distance 2, c at distance 3, and so forth. In modern terms, given the sequence a_1 = 1, a_2, a_3, ..., with a_{i+1} = r a_i, Archimedes merely says that if a_k = a_i × a_j, and i < j, then k - j + 1 = i and k = i + j - 1. If we write a_i = r^{i-1}, Archimedes in fact expresses that r^{i-1} × r^{j-1} = r^{(i+j-1)-1}.
We can see that Archimedes was manipulating simultaneously ratios and distances, or ratios and indices, and he noticed a simple property of indices. However noticing this property of indices does not mean that the indices were thought to be "functions" of the numbers. They were much more functions of the sequences.
A great breakthrough in the law of exponents seems to have been made by Nicole Oresme (c1320-1382). In his Algorismus proportionum, Oresme of course knows about the law of exponents with integers, but he also does come up with fractional exponents. In one of his examples, he shows that (2/1)^{1/2} × (3/1)^{1/3} is equal to (72/1)^{1/6} [4, p. 338]. One should however be cautious and not hastily conclude that Oresme's notion of ratios was identical with the modern one, as recently highlighted by Rommevaux [15, p. 32].
The 15th century then witnessed the first "tables" of correspondences between arithmetic and integer sequences. Nicolas Chuquet (c1440-c1500), for instance, in his Le triparty en la science des nombres (c1484), considers the sequence 1, 2, 4, 8, 16, etc., and puts it in correspondence with the integers 0, 1, 2, etc. Chuquet then shows how the "denominations" are added when the numbers of the first sequence are multiplied [9, pp. 155-156].
Half a century later, it was the turn of Michael Stifel (c1486-1567). In his Arithmetica integra [16, folio 249], published in 1544, Stifel shows a correspondence between the integers from -3 to 6 and the corresponding powers of 2. In order to multiply 1/8 by 64, Stifel says that the indices (-3 and 6) can be added, yielding 3, and the answer is therefore the power of 2 corresponding to the index 3, namely 8. The use of negative integers allows Stifel to deal with powers of 2 smaller than unity, but Stifel did not use fractional exponents.
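Stifel's correspondence is easy to reproduce today; the few lines below (an illustrative toy computation of ours) rebuild his little table and his worked example:

indices = range(-3, 7)                       # Stifel's integers from -3 to 6
table = {i: 2.0 ** i for i in indices}       # ... and the corresponding powers of 2

# Multiplying 1/8 by 64 by adding the indices -3 and 6:
i, j = -3, 6
assert table[i] * table[j] == table[i + j]   # 1/8 * 64 = 2**3 = 8
print(table[i + j])                          # -> 8.0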
The law of exponents, in various forms, was no doubt rediscovered by many others. Mention should be made of the Algerian Ibn Hamza who published in 1591 a treatise of arithmetic in which he basically expressed the law of exponents. This would be a footnote, except that a commentary published in 1913 interpreted Ibn Hamza's discovery as a capacity to discover logarithms, and this in turn generated a vast literature [START_REF] Ageron | Ibn Hamza a-t-il découvert les logarithmes ?[END_REF].
Anticipating logarithms
Logarithms have a strong connection with powers and the law of exponents, and a notion of logarithms was therefore latent in any work making use of this law. But in addition to the works displaying a knowledge of the law of exponents, there have also been cases where the solution to some problems could have benefited from logarithms.
In his Problèmes numériques faisant suite et servant d'application au Triparty en la Science des nombres, Chuquet does for instance consider a problem in which a barrel of wine loses a tenth of its content each day [START_REF] Marre | Problèmes numériques faisant suite et servant d'application au Triparty en la Science des nombres de Nicolas Chuquet Parisien[END_REF]p. 29,problem XCIV]. Chuquet wants to find out when the barrel will be half empty. This is a problem where logarithms prove handy, but Chuquet didn't have them. Chuquet's first answer to his problem is 6 days and 31441/531441 fractions of a day. Given that 0.9 6 = 0.531441 and 0.9 7 = 0.4782969, Chuquet's answer is perhaps a typo for the linear interpolation 31441/(531441 -478297), an approximation of 314410/(5314410 -4782969) = 0.591617 . . .. In any case, Chuquet says that many will be happy with this answer, but that the real value is a certain number still unknown. We can therefore presume that Chuquet had an understanding of the inadequacy of a mere linear interpolation. The correct answer is 6.578813. . . days, and the linear interpolation is in fact not that bad.
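The barrel problem is easy to check with a few lines of code. The following Python sketch, given purely as an illustration, recovers both the exact answer and the linear interpolation discussed above.

```python
import math

# Each day the barrel keeps 9/10 of its content; after t days the fraction left is 0.9**t.
# Chuquet asks for the t at which 0.9**t = 1/2.
t_exact = math.log(0.5) / math.log(0.9)
print(t_exact)                       # about 6.578813 days

# Linear interpolation between day 6 and day 7, as discussed above.
f6, f7 = 0.9 ** 6, 0.9 ** 7          # 0.531441 and 0.4782969
t_interp = 6 + (f6 - 0.5) / (f6 - f7)
print(t_interp)                      # about 6.5916, i.e. the fraction 0.591617...
```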
Luca Pacioli (c1445-1517) on the other hand, considered a similar problem, but provided a solution which nowadays would be obtained using logarithms. In his Summa de Arithmetica published in 1494 [START_REF] Pacioli | Summa de Arithmetica, Geometria, Proportioni & Proportionalita[END_REF], he considered a capital increasing with an annual rate of r percent, and he wanted to know how long it would take to double that capital. Pacioli wrote that the number of years for doubling the capital is obtained by dividing 72 by the annual rate [8, p. 163], [12, folio 181]. Assuming the rate is small, the number of years can indeed be approximated by ln(2) × 100/r = 69.3/r, which is not too far from 72/r. In fact, Pacioli must have found that when the rate is small, there is almost an inverse linear relationship between the rate and the number of years. This result can be established knowing the properties of logarithms, but that is of course very far from sufficient to conclude that Pacioli anticipated logarithms.
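A similar check can be made for the rule of 72. The sketch below compares, for a few illustrative rates, the exact doubling time with 72/r and with ln(2) × 100/r.

```python
import math

# Doubling time of a capital growing at r percent per year: (1 + r/100)**n = 2.
for r in (3, 6, 12):
    n_exact = math.log(2) / math.log(1 + r / 100)
    print(r, round(n_exact, 2), round(72 / r, 2), round(69.3 / r, 2))
# For small rates the exact value is close to ln(2)*100/r = 69.3/r,
# and 72/r overestimates it only slightly.
```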
The origins of functions
Nowadays, the logarithm is viewed as a function. It is an object which takes a value and yields another one. The function establishes a correspondence between two sets. It is therefore important to have some idea about the emergence of this notion.
Among the earliest manifestations of functions were certainly the cases where curves were thought to be "generated" by some kind of motion, so-called kinematic curves. To different times corresponded different positions. This was true for celestial motions, but also for some mathematical curves, such as Hippias's trisectrix used around 420 BC [2, p. 56]. In that curve, two points are defined as having uniform motions, one along a segment, another along an arc, and at each moment these two points are in a configuration which is used to define a third point, the one which is part of the new curve. Alternatively, one can also consider that one segment has a uniform translation motion, and that the other rotates uniformly. Not only is there a correspondence between a time and a new position, but also between three points throughout time.
When Napier defined his logarithms, he also created two motions, one uniform, the other not, but such that two sequences of points were defined, and such that these points could be put in correspondence. Like in the quadratrix, the correspondence gave birth to a function. And these functions were implicitly continuous, each point going through all the positions of a given curve [11, pp. 82-83]. Napier's kinematic construction, then, should also be put in the context of the Oxford Calculators of the 14th century, who used such techniques [START_REF] Thomas | And John Napier created logarithms[END_REF].
Continuity
These considerations naturally lead to the important notion of continuity, which is essential for the modern notion of logarithms. If we return to the consideration of powers and ratios, we can see that at the beginning, exponents were discrete. Ratios were applied as a whole. Exponents were discrete just as a sequence was considered as something being countable, having a first element, then a second one, a third one, and so on. This led to gaps, and at the same time, whenever a correspondence was defined, such as those between integers and powers, this raised the question of what happened in between. If 2 corresponds to 4 and 3 to 8, to what does 2 and a half correspond? Such questions may be irrelevant if one works only with integers, but if one considers that the integers represent time, or even a position, then intermediate values have a clear meaning, and the question makes sense.
The underlying notion of continuity was probably almost intuitive, as it is so closely connected with that of motion. When an object moves from one place to another, it is natural to consider the transition as smooth and to assume that the object has occupied every intermediate position. But from an abstract and more mathematical point of view, continuity was not something totally obvious. It seemed intuitive, but defining it was not so easy. Moreover, there have been competing views. For Aristotle, for instance, something continuous was something that could be divided infinitely, a never-ending division process. A continuous line could therefore not be made of indivisible points. For Archimedes, on the other hand, continuity meant to be made of an infinity of indivisible elements. For instance, a plane can be thought as made of an infinity of lines, and a line as an infinity of points. Archimedes's approach is much closer to the modern one than Aristotle's.
The first practical tables
Napier and Bürgi are the authors of the first practical tables for simplifying calculations based on the replacement of multiplications by additions. The theory underlying both tables is the same. Bürgi's system appears much simpler and much easier to understand than Napier's. The very construction of Bürgi's table is so straightforward that practically anybody could recompute his table, although it would take some time. Earlier authors had some of Napier and Bürgi's ideas, but they did not construct extensive tables, and don't seem to have contemplated doing so. Now, what distinguishes Napier and Bürgi?
Bürgi
Jost Bürgi is a lesser known figure than Napier, in particular because no mathematical concept bears his name. But Bürgi was a very skilled mechanician, an astronomer, as well as a mathematician. A recently rediscovered trigonometry manuscript from the 1580s shows how much Bürgi was ahead of his time. That he constructed a table to simplify multiplications is therefore not a surprise. But how does this table compare to Napier's?
Whereas Stifel used powers of 2, and did not provide an extensive table, Bürgi understood that to make the process useable, all of the numbers should be reachable, and that this could be obtained to some extent by taking the powers of a number close to 1. By filling the interval 10^8 to 10^9 (or between 1 and 10 for simplification) using powers of a number close to 1, Bürgi provided a means to replace multiplications by additions, as an extension of Stifel's scheme. Bürgi neither invented the law of exponents, nor fractional exponents, but to a number of integers, he associated values between 1 and 10, and he extended this correspondence to several non integer indices between 230270 and 230270.022 in order to close the gap with 10. This necessarily meant that 0.022 was a measure of a (multiplicative) fraction of the ratio 1.0001, the one needed to go from 999999779 to 10^9. It doesn't matter that much at this point as to how Bürgi computed this association, be it through an interpolation, or differently. But he had 1 → 1.0001 and 0.022 → 10^9/999999779. This of course denotes some kind of understanding of fractional powers.
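These numbers are easy to reproduce. The sketch below recovers the fractional index 230270.022 and the value 999999779 mentioned above, and shows that the residual ratio corresponds to 0.022 of an index step.

```python
import math

# Buergi's table pairs the index 10*n with the "black number" 10**8 * 1.0001**n.
n_for_ten = math.log(10) / math.log(1.0001)     # exponent needed to reach 10**9
print(10 * n_for_ten)                           # about 230270.022

last_value = 1e8 * 1.0001 ** 23027              # value at the last integer index, 230270
print(last_value)                               # just under 10**9 (the 999999779 above)
print(1e9 / last_value, 1.0001 ** 0.0022)       # the residual ratio matches 0.022 of an index step
```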
Admittedly, Bürgi necessarily also had the converse understanding, that associating an index to a number, because this is exactly how his table was used: a value was looked up, and its index was found, and used in an addition. So, in a way, Bürgi's table is used to find the logarithm of a number, and later the antilogarithm of another.
But this idea of associating an index to a value and conversely can not be ascribed to Bürgi. Stifel and others had this understanding, at least for the values of their discrete sequences, and Oresme, even earlier, had an understanding that some ratio was obtainable as the power of another ratio. Still, Bürgi was one of those who made this idea useable for ordinary calculations.
Napier's abstract definition of a function
Napier on the other hand started with an abstract definition of a relation, which can be viewed as a specification. He then wanted to take measurements on the motions that he had specified. And he needed continuity and interpolation methods. Each of his two motions were made dependent on time (or at least on a sequence of integers meant to represent uniform time), but the relationship between the two motions did not depend on time anymore. It only depended on the positions. Time was only a tool used for constructing a relationship. This is like constructing a square in order to measure the ratio of its diagonal to its side. The dimensions of the square have to be specified, but the ratio will be independent of these dimensions.
Napier's purpose was chiefly to replace ratios by differences, and to provide a table for that effect. His aim was more specialized than Bürgi's, in that he was more interested in trigonometrical calculations than into mere multiplications. He realized that it was possible to combine tables of (pure) logarithms and trigonometrical tables. Napier was certainly right, and facilitating trigonometrical calculations was badly needed, but "pure" multiplications were also useful, and in fact the first table of pure (non trigonometrical) logarithms appeared in 1617, only three years after Napier's Descriptio.
Interval arithmetic and the origins of accuracy
Napier's construction was admittedly complex. He defined several sequences of numbers, using different ratios, and he needed to associate these numbers n to others L(n), which are their logarithms. Given the complex structure of his sequences, there was a strong need to be able to estimate the accuracy of the values Napier obtained, as an incorrect value would not be conspicuously wrong.
In order to control the accuracy of his logarithms, Napier made perhaps the first use of interval arithmetic. Instead of manipulating mere values, Napier did manipulate pairs of values. Taking a simple example, √ 2 could be represented as the pair (1.41, 1.42), and then one might deduce that 2 √ 2 can be represented by (2.82, 2.84). Interval arithmetic was made easier by the geometric series employed by Napier. In his definitions, Napier took L(10 7 ) = 0 and he found that 1 < L(9999999) < 1.00000010000001... Napier chose the average of the two surrounding values as an approximation of the real value. He knew therefore that the error of the logarithm was less than 10 -7 and presumed that the logarithm was close to 1 + 5 • 10 -8 . The actual error on this value is in fact smaller than 10 -14 , but Napier didn't know it. Napier would have had his first logarithm equal to 1 if he had taken a slightly larger ratio than 0.9999999 in his first sequence, but this would have been extremely impractical. In any case, once Napier had an approximation of the first logarithm, he used it to obtain interval approximations of the next values in his sequences.
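Napier's bounds can be checked against the modern closed form NapLog(x) = 10^7 ln(10^7/x), which is the usual modern reading of his definition. The sketch below, given as an illustration, recovers the two bounds and the size of the error made by taking their average.

```python
from decimal import Decimal, getcontext
getcontext().prec = 30

# Modern closed form of Napier's logarithm: NapLog(x) = 10**7 * ln(10**7 / x).
ten7 = Decimal(10) ** 7
x = Decimal(9999999)
nap = ten7 * (ten7 / x).ln()
print(nap)                          # 1.0000000500000033...

lower, upper = Decimal(1), ten7 / x # Napier's two bounds for L(9999999)
print(upper)                        # 1.0000001000000100...
print((lower + upper) / 2 - nap)    # the average misses the true value by far less than 1e-14
```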
Napier's purpose was that his three construction (base) tables be as accurate as possible. It was on these tables that the actual canon was based. Interval arithmetic was actually confined to the construction, but could have been extended to the computation of actual logarithms beyond those of the construction tables. However, in order to use interval arithmetic on the values in the canon, something should have been known on the accuracy of the sine values.
Napier knew that a given logarithm was precisely defined by his kinematic construction. Bürgi, instead, adopted a different construction. He too could ensure the accuracy of his table, by various checks. Using various multiplications, Bürgi was certainly able to avoid the propagation of errors, and hence to bound his errors. That may also have been done by Napier, but only on his numbers, not on his logarithms. Bürgi only had integer indexes, and therefore didn't need to approximate their values.
Since Bürgi started with integer indices (by increments of 10), it may not have been totally clear to which index a given number was associated. All we know is that some interpolation may have been used to find which index produces 10 (or 10 9 ). For instance, given the values for 4000 and 4010, to what number does index 4001 correspond? How does Bürgi answer that question? If we do a linear interpolation between the values for 4000 (104080869) and 4010 (104091277), we obtain 104081909.80 . . . An exact calculation gives 104081910.03 . . . which is pretty good. But the accuracy depends on what values one uses, and there is no expression of that accuracy.
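The interpolation question can be checked directly. The sketch below reproduces the table values for the indices 4000 and 4010 and compares the linear interpolation with a direct evaluation at index 4001.

```python
def buergi(index):
    # Index 10*n corresponds to 10**8 * 1.0001**n in Buergi's table.
    return 1e8 * 1.0001 ** (index / 10)

t4000, t4010 = round(buergi(4000)), round(buergi(4010))
print(t4000, t4010)                      # 104080869 and 104091277

linear = t4000 + (t4010 - t4000) / 10    # linear interpolation for index 4001
print(linear)                            # 104081909.8
print(buergi(4001))                      # about 104081910.0, the direct evaluation
```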
Of course, these considerations of accuracy could be taken into account in Bürgi's table, but they are nowhere apparent in the manual of the table and there does therefore appear a clear gap between the groundbreaking work of Napier, and Bürgi's extension of earlier works. This distinction was already made twenty years ago by Whiteside, and recently published in a posthumous article [START_REF] Thomas | And John Napier created logarithms[END_REF]
Defining the invention
Now that we have a better understanding of each contribution, we may attempt to define properly what it could mean to have invented logarithms. Of course, such a definition cannot purely focus on the name coined by Napier. Inventing logarithms also cannot merely be about the law of exponents, or else Euclid, Archimedes, and others will come first, and we will be in trouble defining what Napier (or Bürgi) did.
Bürgi and Napier both introduced tables for simplifying calculations. But we feel that tables are not necessary for defining the invention of logarithms, as one could obviously write down their theory without providing tables. Tables certainly do complement a theoretical basis. One could also provide a table, without a firm theoretical basis, as Bürgi did. Sometimes, a theoretical basis may have existed, but if it can't be produced, we cannot assume that it did exist. In the case of Bürgi, constructing his table is rather easy, and a theoretical basis is hardly needed.
One problem that needs to be examined is how Bürgi's work departs from those that came before, such as Stifel's work. Stifel and others showed how the indices in a progression could be used to multiply two terms. Bürgi's innovation was to make this property practical. He did so by using a ratio very close to 1, and at the same time he divided the interval [1, 10] so that every number could be located in this interval. In a way, of course, Bürgi is even closer than Napier to modern tables of logarithms. Bürgi already had 10 play a fundamental role, and he had a means to obtain the index of any number, possibly using interpolation. But Bürgi's correspondence also had problems. Different numbers had the same index. For instance, 64570 is the index of 190726011, but also of 1.90726011, 19.0726011, etc. This, admittedly, is also true of modern tables of logarithms, where one would look up in the same place, whether one is searching for log 11, log 110, or log 1100. Napier, instead, really defined a function having different values for different arguments. There were not two numbers having the same logarithm. Napier's table was much closer to the theoretical logarithm than was Bürgi's table.
However Napier went beyond, or rather, he started with an abstract view of logarithms, which he tried to make practical. This step is totally new and it anticipates the development of calculus. Without a proper notation for functions, Napier defined a function, and constructed a procedure to evaluate the values of this function, in view of constructing a table of fundamental values, itself in order to use it for a table of logarithmic sines. This, and also Napier's interval arithmetic, does not seem to have any equivalent in Bürgi's work, even though Bürgi's tables were very accurate. Bürgi may have produced a much simpler table, and he did in some way anticipate modern tables of logarithms, but he did not display a grasp of an abstract function, in particular because he did not need any.
And this is exactly what we believe defines the invention of logarithms. Logarithms did no longer appear backstage, they did not have a mere implicit appearance any longer. For the first time in 1614, the function was put in the front and from its properties, tables were derived, using an array of both ancient and new techniques.
It is as if Napier was climbing Everest, making several simultaneous breakthroughs, like climbing several sides: the abstract definition of a function, the separate implementation of this function, a high concern for the accuracy of his fundamental table, an early use of interval arithmetic, a fundamental table which could be used for arithmetic calculations, and an applied trigonometrical table built upon the fundamental table. Bürgi's table on the other hand serves the same purpose as Napier's construction tables, but anticipated the pure arithmetic tables of logarithms of which Briggs' 1617 table was the first vanguard.
01465316 | en | [ "spi" ] | 2024/03/04 23:41:44 | 2016 | https://hal.science/hal-01465316/file/final%20ALCOSP16_0061_FI.pdf
A A R Al Tahir
email: [email protected]
R Lajouad
email: [email protected]
F Z Chaoui
Ramdane Tami
F Giri
T Ahmed-Ali
A Novel Observer Design for Sensorless Sampled Output Measurement: Application of Variable Speed Doubly Fed Induction Generator
Keywords: DFIG, High -gain observer, Sampled -data, Sensorless measurements, ISS stabilization.
INTRODUCTION
It is widely recognized that the induction machine has become one of the main actuators for industrial use. Indeed, as compared to the DC machine, it provides a better power/mass ratio, simpler maintenance (as it includes no mechanical commutators), and a relatively lower cost. It is largely agreed that these machines have promising perspectives in the industrial actuator held. This has motivated an intensive research activity on induction machine control, especially over the last 15 years [START_REF] Giri | AC Electric motors control: advanced design techniques and applications[END_REF]El Fadili et al., 2013). Doubly-fed induction machines (DFIM) have recently entered into common use. This is due almost exclusively to the advent of wind power technologies for electricity generation.
Doubly-fed induction generator (DFIG) are by far the most widely used type of doubly-fed electric machine, and are one of the most common types of generator used to produce electricity in wind turbines. Doubly-fed induction generators have a number of advantages over other types of generators when used in wind turbines. The primary advantage is that they allow the amplitude and frequency of their output voltage to be maintained at a constant value, no matter the speed of the wind blowing on the wind turbine rotor. Because of this, doubly-fed induction generator can be directly connected to the ac power network and remain synchronized at all times with the ac power network. Other advantages include the ability to control the power factor (e. g., to maintain the power factor at unity), while keeping the power electronics devices in the wind turbine at a moderate size (Boldea andNasar, 2010, Lajouad et al., 2014). Measuring mechanical quantities is always a challenge for control and visualization of the states of the system under studies. In fact the synthesis of an observer for the system is still beneficial to measure inaccessible magnitudes and quantities or requiring high-end sensors as claimed by [START_REF] Bastin | Stable adaptive observers for nonlinear time-varying systems[END_REF], Marino and Tomei, 1996[START_REF] Gildas | Nonlinear observers and applications[END_REF], Zhang, 2002[START_REF] Besancon | On adaptive observers for state affine systems[END_REF]. Much research work has been done around the synthesis observers for doubly fed induction machine. For example in (Li et al., 2010) the author suggests an adaptive estimate of rotor currents of the machine. This estimate depends on the time varying of the machine parameters.This estimate has applications where measurement of rotor currents is practecally difficult. In [START_REF] Lascu | A class of flux observers for doubly-fed induction generators used in small power wind generation systems[END_REF] the paper investigates a family of stator and rotor flux observers of doubly-fed induction generators (DFIG). Four stator flux observer topologies are described and compared. All proposed schemes use the voltage and current models connected in parallel or in series. In this structure no mechanical quantity is estimated. In [START_REF] Beltran | DFIG -based wind turbine robust control using high-order sliding modes and a high gain observer[END_REF], the paper deals with the sensorless control of a doubly-fed induction generator (DFIG) based wind turbine. The sensorless control scheme is based on a high-order sliding mode (HOSM) observer to estimate only the DFIG rotational speed. All these design methods provide continuous-time observers that need discretization for practical implementation purpose (Laila et al, 2006). The point is that exact discretization is a highly complex issue due to the strong nonlinearity of the observer. On the other hand, there is no guarantee that approximate discrete-time versions can preserve the performances of the original continuous-time adaptive observers. This explains why quite a few studies have, so far, focused on designing sampled-data adaptive observers that apply to nonlinear systems subject to parametric uncertainty. In [START_REF] Deza | High gain estimation for nonlinear systems[END_REF], discrete-continuous time observers have been designed on the basis of the system continuoustime model. 
The proposed observers include in effect two parts: an open-loop state estimator (with zero innovation term) operating between two successive sampling times and a feedback state estimator operating at the sampling times. The output estimation error is shown to be exponentially vanishing under ad hoc assumptions.
Another approach has been proposed by [START_REF] Raff | Observer with sample-and-hold updating for Lipschitz nonlinear systems with nonuniformly sampled measurements[END_REF] that consisted in using a single hybrid continuous-discrete observer with a ZOH sampled innovation term. The observer is applicable to a class of systems with Lipschitz nonlinearity in the state and linear matrix inequalities (LMIs) are established to meet global stability. In (Kravaris, 2013) a hybrid continuous-discrete observer involving an intersample output predictor has been proposed. Only the output predictor is reinitialized at each sampling time, while the state estimate is continuously updated by a standard structure observer where the (unavailable) inter-sample output measurement is replaced by the output prediction. The observer is applicable to triangular globally Lipschitz systems and features exponential convergence of the state observation. This rest of this paper is organized as follows: the reduced model of the DFIG system is described in Section 2. In Section 3 we deal with the candidate observer of the designated system. The fundamental theorem describing the proposed observer dynamic performances is presented in Section 4; all results are validated by numerical simulation throught MATLAB/SIMULINK environment, as given in Section 5. Finally, conclusions and remarks is given in Sections 6.
REDUCED MODEL OF THE DFIG
It is important to initiate with a good and appropriate model of the DFIG when designing the observer. The model of doubly fed induction generator, in d-q coordinates, is defined by the following equations given in (El Fadili et al., 2013[START_REF] Fadili | Adaptive nonlinear control of induction motors through ac/dc/ac converters[END_REF]:
v sd = R s i sd + ϕ ̇sd -ω s ϕ sq (1a) v sq = R s i sq + ϕ ̇sq -ω s ϕ sd (1b) v rd = R r i rd + ϕ ̇rd -(ω s -pω)ϕ rq (1c) v rq = R r i rq + ϕ ̇rq -(ω s -pω)ϕ rd (1d) [ ϕ sd ϕ sq ϕ rd ϕ rq] = [ 𝐿 𝑠 0 𝑀 𝑠𝑟 0 0 𝐿 𝑠 0 𝑀 𝑠𝑟 𝑀 𝑠𝑟 0 𝐿 𝑟 0 0 𝑀 𝑠𝑟 0 𝐿 𝑟 ] [ i sd i sq i rd i rq] (1e)
ω̇= (𝑇 𝑒𝑚 -𝑇 𝑔 -f v ω)/ J (1f) where R s , L s , R r and L r are, respectively the stator and rotor resistances and self-inductances, and M sr is the mutual inductance between the stator and the rotor windings. ϕ sd , ϕ sq , ϕ rd and ϕ rq denote the rotor and stator flux components. (i sd , i sq , i rd , i rq ) and (v sd , v sq , v rd , v rq ) are the stator and rotor components of the current and voltage respectively. p is the number of pole-pair, ω s is the stator angular frequency, ω represents the generator speed. f v , J and T g are, respectively the viscuous friction coefficient, the total moment of inertia for lumped mass model (rotor blades, hub, and generator), and generator torque. Equation (1a-1f) can be re-written as follows:
v = [ R s I 2 O 2 O 2 R r I 2 ] i + dΦ dt + [ ω s J 2 O 2 O 2 (ω s -pω)J 2 ] Φ (2a) Φ = [ L s I 2 M sr I 2 M sr I 2 L r I 2 ] i (2b) dω dt = 1 J (𝑇 𝑒𝑚 -𝑇 𝑔 -f v ω) ( 2c
)
𝑇 𝑒𝑚 is the electromagnetic torque represented by: 𝑇 𝑒𝑚 = pM sr (i rd i sq -i rq i sd ) = pM sr i T T 0 i (3a)
T 0 = [ O 2 J 2 O 2 O 2 ] (3b) i = [i sd i sq i rd i rq ] T , v = [v sd v sq v rd v rq ]
T and Φ =
[ϕ sd ϕ sq ϕ rd ϕ rq ] T denote, respectively the state vector of stator and rotor current, voltage and flux in (d-q) reference coordinate, where
J 2 = [ 0 -1 1 0 ] , I 2 = [ 1 0 0 1 ] and O 2 = [ 0 0 0 0 ]
Then the model of the DFIG in the (d, q) coordinate system (El Fadili et al., 2013[START_REF] Fadili | Adaptive nonlinear control of induction motors through ac/dc/ac converters[END_REF]) can be given by the following equations:
i̇ = γM_1 v + M_23 i − pγM_4 ωi (4a)
ω̇ = (pM_sr i^T T_0 i − T_g − f_v ω)/J (4b)
Ṫ_g = 0 (4c)
Equation (4c) is motivated by the fact that, in numerous applications, the generator torque T_g is assumed bounded, derivable and its derivative is also bounded. Indeed, the generator torque T_g actually changes infrequently, that is, it takes values that vary only slowly.
Here γ = 1/(ϱ L_s L_r), ϱ = 1 − M_sr²/(L_s L_r), and M_1, M_2, M_3, M_23 and M_4 are constant matrices that can be represented as follows:
𝑀 1 = ( 𝐿 𝑟 𝐼 2 -𝑀 𝑠𝑟 𝐼 2 -𝑀 𝑠𝑟 𝐼 2 𝐿 𝑠 𝐼 2 ) (5a) 𝑀 2 = ( 𝑅 𝑠 𝐿 𝑟 𝐼 2 -𝑀 𝑠𝑟 𝑅 𝑟 𝐼 2 -𝑀 𝑠𝑟 𝑅 𝑠 𝐼 2 𝑅 𝑟 𝐿 𝑠 𝐼 2 ) (5b) 𝑀 3 = ( 𝐽 2 𝑂 2 𝑂 2 𝐽 2 ) (5c) 𝑀 23 = -𝑀 3 𝜔 𝑠 -𝛾𝑀 2 (5d) 𝑀 4 = ( 𝑀 𝑠𝑟 2 𝐽 2 𝑀 𝑠𝑟 𝐿 𝑟 𝐽 2 -𝑀 𝑠𝑟 𝐿 𝑠 𝐽 2 -𝐿 𝑠 𝐿 𝑟 𝐽 2 ) (5e)
Now, let us introduce the following state variables representation: 𝑥 11 = 𝑖 𝑠𝑑 , 𝑥 12 = 𝑖 𝑠𝑞 , 𝑥 13 = 𝑖 𝑟𝑑 , 𝑥 14 = 𝑖 𝑟𝑞 , 𝑥 2 = 𝜔, 𝑥 3 = 𝑇 𝑔 and let's 𝑥 1 = [ 𝑥 11 𝑥 12 𝑥 13 𝑥 14 ] 𝑇 .Then the system (4) can be re -written in the reduced form as :
𝑥̇= 𝑓(𝑣, 𝑥) (6a) 𝑦 = ℎ(𝑥) = 𝑥 1 (6b)
ℎ(𝑥) designs a measured states variables of the system (6), and 𝑓(𝑣, 𝑥) = [𝑓 1 , 𝑓 2 , 𝑓 3 , 𝑓 4 , 𝑓 5 , 𝑓 6 ](𝑣, 𝑥), with:
[𝑓 1 , 𝑓 2 , 𝑓 3 , 𝑓 4 ] 𝑇 (𝑣, 𝑥) = 𝛾𝑀 1 𝑣 + 𝑀 23 𝑥 1 -𝑝𝛾𝑀 4 𝑥 2 𝑥 1 (7a) 𝑓 5 (𝑣, 𝑥) = 1 𝐽 (𝑝𝑀 𝑠𝑟 𝑥 1 𝑇 𝑇 0 𝑥 1 -𝑥 3 -𝑓 𝑣 𝑥 2 ) (7b) 𝑓 6 (𝑣, 𝑥) = 0 (7c)
Recall that the electromagnetic torque, 𝑇 𝑒𝑚 expresses in terms of the current vector as follows:
𝑇 𝑒𝑚 = 𝑝𝑀 𝑠𝑟 𝑖 𝑇 𝑇 0 𝑖 ≝ ℎ(𝑖) (8) Using (4a), it is readily checked that 𝑇 ̇𝑒𝑚 undergoes the following equation:
𝑇 ̇𝑒𝑚 = 𝑝𝑀 𝑠𝑟 𝑖 𝑇 (𝑇 0 + 𝑇 0 𝑇 ) 𝑑𝑖 𝑑𝑡 = 𝑆 1 (𝑖, 𝑣) -𝑆 2 (𝑖)𝜔 (9)
with:
𝑆 1 (𝑖, 𝑣) = 2𝑝𝑀 𝑠𝑟 𝑖 𝑇 𝑇 0 (𝛾𝑀 1 𝑣 + 𝑀 23 𝑖) (10a) 𝑆 2 (𝑖) = 2𝑝 2 𝛾𝑀 𝑠𝑟 𝑖 𝑇 𝑇 0 𝑀 4 𝑖 (10b)
by combining with (4b-c) constitutes a suitable mathematical state -space representation for the estimation of the generator rotor speed 𝜔. For writing convenience, the system representation is re-written as:
𝑇 ̇𝑒𝑚 = 𝑆 1 (𝑖, 𝑣) -𝑆 2 (𝑖)𝜔 (11a) 𝜔̇= - 1 𝐽 𝑇 𝑒𝑚 + 1 𝐽 𝑇 𝑔 - 𝑓 𝑣 𝐽 𝜔 (11b)
𝑇 ̇𝑔 = 0 (11c) Throughout this paper, to avoid the mistakes and confusion between the notations of full order system variables and the corresponding reduced model, the following more compact form of the reduced model with new notations of state variables is given: 𝜉 ̇= 𝛶 1 (𝑖)𝜉 + 𝛶 2 (𝑖, 𝜉) (12a) 𝑦 = 𝐶𝜉 (12b) where:
𝜉 ≝ [𝜉 1 𝜉 2 𝜉 3 ] 𝑇 = [𝑇 𝑒𝑚 𝜔 𝑇 𝑔 ] 𝑇 (13a) 𝛶 1 (𝑖) = ( 0 -𝑆 2 (𝑖) 0 0 0 -1 𝐽 0 0 0 ) (13b) 𝛶 2 (𝑖, 𝜉) = ( 𝑆 1 (𝑖, 𝑣) 𝜉 1 𝐽 - 𝑓 𝑣 𝐽 𝜉 2 0 ) (13c) 𝐶 = [1 0 0] (13d)
From a physical point of view, and within the normal operating domain, it is supposed that all physical state variables are bounded in the domain of interest, as stated in A_2. This assumption, which is practically reasonable (Khalil, 2002[START_REF] Khalil | Semiglobal stabilization of a class of nonlinear systems using output feedback[END_REF]), rules out finite-time blow-up of the state variables and restricts the initial peaking phenomenon. It follows that the state vector ξ(t) is bounded, i.e. ‖ξ(t)‖ ≤ ρ_ξ for all t, for some constant 0 < ρ_ξ < ∞.
System (11) satisfies the observability rank condition if the Jacobian of the vector formed by the Lie derivative terms [ξ_1, −S_2(i)ξ_2, −(S_2(i)/J)ξ_3]^T has full structural rank. This condition is verified at all times because S_2²(i) is never zero.
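As an illustration of the quantities involved, the following Python sketch codes the scalar functions S_1 and S_2 of eqs. (10a)-(10b) and the reduced dynamics (11a)-(11c), using the machine parameters listed in Table 1 further below; the current vector i and the voltage vector v are left as inputs to be supplied, and the sketch is illustrative only.

```python
import numpy as np

# DFIG parameters (Table 1 below); omega_s is the 50 Hz grid pulsation.
Rs, Rr, Ls, Lr, Msr = 0.163, 0.140, 0.309, 0.035, 0.103
p_pairs, J, fv = 2, 2.2, 0.004
omega_s = 2 * np.pi * 50

rho = 1 - Msr ** 2 / (Ls * Lr)
gamma = 1 / (rho * Ls * Lr)

I2, O2 = np.eye(2), np.zeros((2, 2))
J2 = np.array([[0.0, -1.0], [1.0, 0.0]])
M1 = np.block([[Lr * I2, -Msr * I2], [-Msr * I2, Ls * I2]])
M2 = np.block([[Rs * Lr * I2, -Msr * Rr * I2], [-Msr * Rs * I2, Rr * Ls * I2]])
M3 = np.block([[J2, O2], [O2, J2]])
M4 = np.block([[Msr ** 2 * J2, Msr * Lr * J2], [-Msr * Ls * J2, -Ls * Lr * J2]])
M23 = -M3 * omega_s - gamma * M2
T0 = np.block([[O2, J2], [O2, O2]])

def S1(i, v):                       # eq. (10a)
    return 2 * p_pairs * Msr * i @ T0 @ (gamma * M1 @ v + M23 @ i)

def S2(i):                          # eq. (10b)
    return 2 * p_pairs ** 2 * gamma * Msr * i @ T0 @ M4 @ i

def xi_dot(xi, i, v):               # reduced dynamics (11a)-(11c), xi = [T_em, omega, T_g]
    T_em, omega, T_g = xi
    return np.array([S1(i, v) - S2(i) * omega,
                     (-T_em + T_g - fv * omega) / J,
                     0.0])
```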
PROPOSED OBSERVER DESIGN
In the case where the current vector 𝑖(𝑡) is accessible to measurements for all 𝑡 ≥ 0, the system model given by ( 11) almost fits the observable canonical form for which the standard (continuous-time) high-gain observer applies (see e.g. [START_REF] Gauthier | Observability and observers for nonlinear systems[END_REF]. As a matter of fact, the system model in (11) differs from the canonical form in that the first state variable, denoted by, 𝑇 𝑒𝑚 , is not (directly) measurable. This can only be computed using the (supposedly available) current measurements 𝑖(𝑡) using the relation ( 8). One difficulty is that relation ( 8) is not output injective. Another difficulty is that the current vector 𝑖(𝑡) is not accessible to measurements all the time, 𝑡 ≥ 0. Only sampled-data measurements 𝑖(𝑡 𝑘 ), (𝑘 = 0,1,2 … ) are presently available at sampling instant. Therefore, the following (non-standard) high-gain sampled-output nonlinear observer is proposed:
ξ̂̇ = Υ_1(σ(z)) ξ̂ + Υ_2(σ(z), ξ̂) − θ Λ^(-1) K (C ξ̂ − h(σ(z)))   (14a)
ż = γM_1 v + M_23 z − pγM_4 ξ̂_2 σ(z),   for all t_k < t < t_{k+1}, k = 0, 1, 2, …   (14b)
z(t_k) = i(t_k)   (14c)
Λ = diag[1, 1/θ, 1/θ²]   (14d)
with K ∈ ℝ³ being any fixed vector gain such that the following inequality holds, for some scalar μ > 0 and some positive definite matrix P = P^T:
(Υ_1(σ(z)) − KC)^T P + P(Υ_1(σ(z)) − KC) ≤ −μI_3   (15)
with, 𝑥 ̂(0) is arbitrarily initial condition chosen in space.
A_1: σ(z) denotes the saturation of the variable z at the level i_M, applied componentwise, which can be defined as
σ(z) = sign(z) · min(|z|, i_M)   (16)
Knowing that 𝑖 𝑀 is any upper bound of the current i.e.
𝑖 𝑀 ≥ ‖𝑖(𝑡)‖ 0≤𝑡<∞ 𝑠𝑢𝑝 (17)
Note that the knowledge of 𝑖 𝑀 is not an issue because the maximum amplitude of the current vector 𝑖 in DFIG is a priori known in practice.
𝑨 𝟐 : The DFIG stays in domain working principle, it can be denoted by ℧ which defined as follows: ℧ = { 𝑋 ∈ ℝ 6 such that: ‖ϕ sd ‖ , ‖ϕ sq ‖, ‖ϕ rd ‖, ‖ϕ rq ‖ ≤ ϕ Max , and ‖i rd ‖ , ‖i rq ‖ ≤ I Max with, ‖ 𝜔 𝑔 ‖ ≤ 𝜔 Max , and ‖𝑇 𝑔 ‖ ≤ 𝑇 Max .
Knowing that the functions , 𝜓 𝑛𝑜𝑚 , 𝐼 𝑛𝑜𝑚 , 𝜔 𝑛𝑜𝑚 and 𝑇 𝑛𝑜𝑚 are the nominal models and upper bound of the actual state variables such as rotor flux , generator current, generator rotor speed and mechanical torque can be physically obtained and satisfied.
The above observer presentation is completed by a number of relevant remarks:
a) The existence of a gain 𝐾 ∈ ℝ 3 satisfying ( 15) is a consequence of Lemma 4.0 in [START_REF] Gauthier | Observability and observers for nonlinear systems[END_REF].
b) Comparing ( 14b)-( 14c) and (4a), it is readily seen that the variable 𝑧(𝑡) undergoes, between two sampling instants, the same differential equation as the current 𝑖(𝑡). Furthermore, 𝑧(𝑡)is reinitialized at each sampling instant, it is set to the value of the current at those instants. It turns out that 𝑧(𝑡)represents a prediction of the current 𝑖(𝑡) over each interval (𝑡 𝑘 , 𝑡 𝑘+1 ). In view of ( 8), it turns out that ℎ(𝑧(𝑡)) is a prediction of the electromagnetic torque , 𝑇 𝑒𝑚 . c) Notice that, the current prediction 𝑧(𝑡) is replaced by its saturated version 𝜎(𝑧(𝑡)) everywhere on the right side of the observer (14a)-( 14c). The introduction of the saturation function in state observers is a recent usual practice (see e.g. references). Presently, the saturation process will prove to be crucial to ensure exponential convergence of the observer (because the function 𝛶 2 (𝑧, 𝜉) is not Lipschitz in 𝑧 (see equations ( 10a) and ( 13c)). On the other hand, since the saturated value 𝜎(𝑧(𝑡)) is closer to the current 𝑖(𝑡) than 𝑧(𝑡), the observer (14a)-(14c) will necessarily perform better (it will be more speedier) than the basic version not involving saturation.
d) Clearly, the sampled-output observer (14a)-( 14c) does not involve a ZOH innovation term (as those in e.g. [START_REF] Dabroom | Output feedback sampled-data control of nonlinear systems using highgain observers[END_REF]. The present observer includes inter -ample prediction innovation correction term. But, it differs from previous sampled observers of this type (e.g. [START_REF] Ahmed-Ali | Continuous -discrete adaptive observers for state affine systems[END_REF] in that the first state variable 𝑇 𝑒𝑚 in the model ( 11) is related to the measurable generator current 𝑖 through a noninjective relation, namely (8).
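Continuing the sketch above, the observer (14a)-(14c) can be written as a simple recursion. The forward-Euler discretisation, the step size and the current bound i_max used below are assumptions of this illustration and not part of the original design; the values θ = 175 and K = [7; 25; 30]^T are those reported in the simulation section.

```python
def h(i):                                    # electromagnetic torque from the currents, eq. (8)
    return p_pairs * Msr * i @ T0 @ i

def sat(z, i_max):                           # componentwise saturation at the known current bound
    return np.clip(z, -i_max, i_max)

def observer_step(xi_hat, z, i_sample, v, dt, new_sample, theta, K, i_max):
    """One Euler step of the sampled-output observer (14a)-(14c), for illustration."""
    if new_sample:                           # eq. (14c): reset the inter-sample current predictor
        z = i_sample.copy()
    sz = sat(z, i_max)
    Lam_inv = np.diag([1.0, theta, theta ** 2])          # inverse of Lambda in eq. (14d)
    correction = theta * Lam_inv @ K * (xi_hat[0] - h(sz))
    # copy dynamics evaluated at the saturated prediction, plus the high-gain correction
    xi_hat = xi_hat + dt * (xi_dot(xi_hat, sz, v) - correction)
    z = z + dt * (gamma * M1 @ v + M23 @ z - p_pairs * gamma * M4 @ (xi_hat[1] * sz))
    return xi_hat, z

theta, K, i_max = 175.0, np.array([7.0, 25.0, 30.0]), 50.0   # i_max is an assumed current bound
```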
OBSERVER ANALYSIS
Theorem (Main result): Consider the nonlinear system in the compact form (12), subject to assumptions A_1 and A_2, together with the high-gain sampled-output observer coupled with the inter-sample output predictor given by (14), with gain vector K = [k_1, k_2, k_3]^T acting on the measurement error. Let 0 < θ* < ∞ be sufficiently large so that, for all θ > θ*,
θ*(μ − ρ_h‖K‖‖P‖/ζ) − ‖P‖(2ρ_G + (1/ζ)(ρ_ξ ρ_F + ρ_G)) > 0   (18)
where P = P^T > 0 is the solution of the algebraic Lyapunov equation, the constant scalars 0 < ρ_h, ρ_ξ, ρ_F, ρ_G < ∞ exist, and the free design parameter ζ is selected such that μ − ρ_h‖K‖‖P‖/ζ > 0. To ensure exponential convergence of the observation error between estimated and measured states towards zero, there exists a real positive bound 0 < t − t_k < τ, with τ = sup_{0≤k<∞}(t_k − t_{k−1}); the estimation error is ultimately bounded and the corresponding bound can be made as small as possible by choosing θ high enough.
Moreover, there exist real positives (𝑀 1 ; 𝛽 1 ; 𝛽 2 ), such that the closed loop observation error 𝜉 ̃(𝑡) = 𝜉 ̂(𝑡) -𝜉(𝑡) satisfies the following inequality
for all t ∈ [t_k, t_{k+1}), k ∈ ℕ:   ‖ξ̄‖ e^(αt/2) ≤ M_1 (1 − β_1 τ e^(ατ/2)) / (1 − (β_1 + νβ_2) τ e^(ατ/2)),   (19)
So, the whole system is globally exponentially stable (GES) based on ISS stabilization where 𝜉 ̂(t) is the estimate trajectory given by (14a) associated with (𝑣, 𝑦) for whatever initial conditions (𝜉 0 , 𝜉 ̂0) ∈ 𝜉 2 .
Proof of Theorem 1: The proof of Theorem (main result) has been removed due to the limitation of the number of pages.
SIMULATIONS RESULTS
To implement this formulation of the observer, the system of equations ( 4) are used with the numerical values given in the Table .1.
These values issued from the technical documentation of a doubly fed induction generator. We shall estimate rotational speed and generator torque, without using mechanical sensors, such as position encoder which are most costly and unreliable.
To validate the robustness of the proposed observer, the test benchmark include, some limit cases: Mechanical torque is zero (the wind speed is less than the start-up speed of the wind turbine). the mechanical torque is greater than the nominal torque, Slow variation of mechanical torque (usual case), Abrupt variation of mechanical torque (theoretical case).
Therefore, the reference of the generator torque T_g will take the form shown in Fig. 2. Fig. 3 shows the generator torque and its estimate T̂_g, with a zoom over a short period of time around t = 5 s, for one complete cycle of 20 s.
With the test conditions mentioned above, the simulation was implemented in the MATLAB/SIMULINK environment with the high-gain tracking parameter θ = 175, the gain matrix corresponding to the measurement error K = [7; 25; 30]^T and the sampling interval T_s = 2 s, i.e. f_s = 0.5 Hz. In each figure, the solid blue line shows the variable measured at the system output, and the dashed line shows the corresponding variable at the output of the observer. Fig. 3 shows that the estimated generator torque quickly reaches the real torque; the response time is less than 0.2 s. Similarly, one notes that the observed electromagnetic torque joins the actual electromagnetic torque. Fig. 4 illustrates the electromagnetic torque and its estimate. In Fig. 5, one notices that the observed rotation speed reaches the second region of operation in less than 0.5 s. Fig. 6 shows the output state prediction error over one complete cycle of 20 s. However, as expected, the fast transients of the generator input torque momentarily affect the estimation and prediction process. This phenomenon appears as overshoots in Figs. 3, 4, 5 and 6, at times 5 s, 10 s and 15 s of a complete cycle. There is no error on the initial conditions at the entry of the unobservability region, and the provided estimates still converge to their true closed-loop trajectories. The sampled output state predictor is re-initialized at each sampling instant and remains continuous between two sampling instants through a ZOH device, which is a mathematical model of the practical signal reconstruction operating at a specific sampling interval.
The proposed observer hooks up to the system states after approximately 3-5 sampled measurements. It is observed that a low value of the observer design parameter θ (150) yields noise-free estimates, but these estimates vary slowly and are not capable of tracking the real state variations. On the other hand, a very large value of θ (200) permits good tracking of the fast state variations, but in that case the observer becomes noise sensitive, since the noise level in the provided estimates is significantly higher. A good compromise between the two cases is therefore obtained by selecting the intermediate gain parameter (175).
CONCLUSIONS AND REMARKS
This paper tackles the problem of designing a sampled-data observer for the doubly fed induction generator (DFIG). The aim is to get an online estimate of the mechanical variables based on the electrical measurements. First, the machine has been modeled in the (d-q) reference frame (4).
The proposed observer combines the advantage of a high-gain structure in terms of convergence speed with an output state predictor, which remains continuous between two adjacent sampling instants. An observer (14) has been designed and analyzed successfully with the help of Lyapunov stability tools and the ISS property. It is formally shown that all variables converge to a neighborhood of their true values. In the particular case of a constant mechanical generator torque, the estimates converge exponentially to their true values. The theoretical results are then confirmed by simulation.
An explicit expression of the continuous-time output prediction error between two consecutive sampling instants has been introduced; it depends on the sampling period, the high-gain design parameters and the system nonlinearity. The recent development of lower-cost digital systems makes possible a wide range of DFIG applications, for which fast dynamic response and robustness against external disturbances are considered the most important criteria of high-performance systems.
Fig. 1. Experimental setup of the DFIG-based WECS simulator.
Fig. 3. Electromagnetic torque.
Fig. 6. Output state prediction error e_z.
Table 1. DFIG system nominal parameters
Characteristics Symbol Value Unit
DFIG
Nominal power Pn 5.00 kW
Mutual inductor Msr 0.103 H
Stator resistor Rs 0.163 Ω
Stator cyclic inductor Ls 0.309 H
Rotor resistor Rr 0.140 Ω
Rotor cyclic inductor Lr 0.035 H
Number of pole pairs p 2 -
Rotor inertia J 2.2 Nm/rd/s 2
Viscous friction f 0.004 Nm/rd/s
Three-phase network
Voltage En 220/380 V
Network frequency fn 50 Hz
01465330 | en | [ "spi" ] | 2024/03/04 23:41:44 | 2015 | https://hal.science/hal-01465330/file/Sampled%20Data%20High%20Gain%20Observer%20Design%20Combined%20with%20Output%20State%20Predictor%20For%20Synchronous%20PMSMs.pdf
A A R Al Tahir
Fouad Giri
Tarek Ahmed-Ali
Sampled Data High Gain Observer Design Combined with State Predictor For Synchronous PMSMs
Introduction
Ali Al-Tahir, F. Giri, Tarek Ahmed-Ali (GREYC, CNRS UMR 6072, University of Caen, UCBN)
• This work presents observer with sampled data measurements and his application to a surface permanent magnet synchronous motor.
• The design of nonlinear observers with sampled measurements has received a great attention. • This interest is motivated by many applications such as Networks Communication Systems. • Recently, a hybrid samples data observer dedicated of a class of nonlinear systems has been presented.
• This approach is based on an inter -sample time predictor, which estimates the output between two sampling instants. • The advantage of this algorithm is in the fact that the state estimates remain continuous time model.
• At each sampling instant, the predictor is reset to the actual output .
• This work is devoted to Sampled Data High Gain Observer design for a class of uniformly observable systems.
Modelling and Observability Study of Synchronous PMS Machines
Because the rotor position is not supposed to be available, the PMSM model is considered in the ( αβ ) frame :
Simulation Results: Maximum Allowable Sampling Period 𝜏 MASP :
• We derive a condition on the maximum allowable sample period that ensures a convergence of the observation error.
𝜏 MASP = lim 𝜌 → 0 ( 1 𝜎 2 arctan 𝜆 -1 -arctan 𝜆 ) = 1 𝜃 [ 𝜆 2 max 𝑃 𝜆 2 max 𝐾 + 𝜃+𝜉 1 2 𝜃𝜆max 𝑄 -2𝜆max 𝑃 𝜉 1 + 𝑐𝜆min 𝑃 ]( 𝜋 2 - arctan(𝜆))
Mathematical Results:
• Then, the observer is exponentially stable whatever the initial conditions. The error vector 𝑒 𝑧 = 𝑧 (𝑡) -𝑧(𝑡) converges exponentially to a compact neighborhood of the origin and the size of this compact set can be made small by choosing the design parameters 𝜃 sufficiently large.
Theorem 1: Under assumptions (A1-A2), system (22) is a sampled-data observer for system (16) with the following property: for sufficiently large values of the parameter θ and of the gains k_i, i = 1, 2, 3, there exists a real positive bound τ_MASP such that for all τ ∈ (0, τ_MASP) the observation error is ultimately bounded, and the corresponding ultimate bound can be made as small as desired by choosing θ high enough. • The simulation results illustrate that the proposed SDHGO observer has a fast transient response, good external load torque rejection, accurate tracking and accurate recovery from external disturbances.
• The objective is to determine under what conditions all the states of the SPMSM motor, that is i_s, φ, ω_m and T_L, can be determined from the output and input measurements, namely the current and voltage measurements i_s and u, respectively.
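The αβ-frame machine equations themselves are not reproduced in this summary. As an illustration of the measurement situation assumed by the observer, the sketch below simulates a textbook surface-PMSM model with the parameters of Table I below and makes the stator currents available only at sampling instants; the drive voltage, the load torque, the interpretation of the 0.56 entry of Table II as milliseconds, and the Euler integration are assumptions of this sketch.

```python
import numpy as np

# Surface-mounted PMSM parameters (Table I below).
Rs, Ls, psi_r = 0.6, 0.0094, 0.3
J, F, p = 0.02765, 0.003819, 2

def pmsm_dot(x, u, T_L):
    """Textbook alpha-beta model of a surface PMSM; x = [i_a, i_b, omega_m, theta_m]."""
    i_a, i_b, omega, theta = x
    th_e = p * theta                                   # electrical angle
    e_a = -psi_r * p * omega * np.sin(th_e)            # back-EMF components
    e_b = psi_r * p * omega * np.cos(th_e)
    T_e = 1.5 * p * psi_r * (i_b * np.cos(th_e) - i_a * np.sin(th_e))
    return np.array([(u[0] - Rs * i_a - e_a) / Ls,
                     (u[1] - Rs * i_b - e_b) / Ls,
                     (T_e - T_L - F * omega) / J,
                     omega])

tau, dt = 0.56e-3, 1e-5                                # assumed sampling period and solver step
x, t_last, y_sample = np.zeros(4), 0.0, np.zeros(2)
for k in range(20000):
    t = k * dt
    u = 100.0 * np.array([np.cos(2 * np.pi * 50 * t), np.sin(2 * np.pi * 50 * t)])
    x = x + dt * pmsm_dot(x, u, T_L=2.0)
    if t - t_last >= tau:                              # only these samples are seen by the observer
        t_last, y_sample = t, x[:2].copy()
```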
Fig. ( I ): Single Line Diagram of Proposed Case Study of SPMSM
Table I. Mechanical and Electrical Characteristics for Surface-Mounted PMSM.
Parameter Symbol Value
DC / AC converter V dc 600 V
Modulation Frequency F m 10 Hz
Nominal Power P 3 kW
Nominal flux 𝜓 𝑛 0. 3 Wb
Stator resistance 𝑅 𝑠 0. 6 𝛺
Stator inductance 𝐿 𝑠 0.0094 H
Rotor and load viscous damping coefficient F 0.003819 SI
Moment of inertia J 0.02765 SI
Number of pole pairs p 2
θ K1 K2 K3 τ_sampling τ_max η a
64 (Med.) 30.2 241 258 0.56 0.77 ms 20 0.75
61 (Small) 30.2 241 258 0.01 (min.) 0.77 ms 20 0.75
69 (Large) 30.2 241 258 0.77 (max.) 0.77 ms 20 0.75
Table (II): Sampled Data High Gain Observer Design Parameters.
01004692 | en | [ "spi" ] | 2024/03/04 23:41:44 | 2011 | https://hal.science/hal-01004692/file/HYDHA.pdf
Wei Hu
Yin Zhenyu
Dano Christophe
Hicher
A constitutive model for granular materials considering grain breakage
Keywords: plasticity, critical state, grain size distribution
A particle breakage has a significant impact on the mechanical behavior of granular materials. In this paper, we present an elasto-plastic model with two yield surfaces to which the influence of particle breakage has been introduced. The main feature of this model is to incorporate the change in the critical state line (CSL) consequent to the grain breakage induced by isotropic and deviatoric stresses during loading. For this purpose we propose a breakage function which connects the evolution of the CSL to the energy consumed. Results from earlier studies on drained and undrained compression and extension triaxial tests were used to calibrate and validate the model. Comparison between earlier results and our simulations indicates that the model can reproduce with good accuracy the mechanical behavior of crushable granular materials and predict the evolution of the grain size distribution during loading. granular materials, grain breakage, elasto-
Introduction
A grain breakage commonly occurs when a granular material undergoes compression and shearing, especially under high confining stress (e.g., earth dams, deep well shafts). Its impact on the mechanical behavior of granular materials has been widely studied in the past decades [START_REF] Vesic | Behaviors of granular materials under high stresses[END_REF][START_REF] Hardin | Crushing of soil particles[END_REF][START_REF] Coop | The mechanics of uncemented carbonate sands[END_REF][START_REF] Biarez | Elementary Mechanics of Soil Behaviors[END_REF][START_REF] Yamamuro | Drained sand behavior in axisymmetric tests at high pressures[END_REF][START_REF] Lade | Significance of particle crushing in granular materials[END_REF][START_REF] Biarez | Influence de la granulométrie et de son évolution par ruptures de grains sur le comportement mécanique de matériaux granulaires[END_REF][START_REF] Coop | Particle breakage during shearing of a carbonate sand[END_REF][START_REF] Bopp | Relative density effects on undrained sand behavior at high pressures[END_REF][START_REF] Huang | A study of mechanical behaviour of rock-fill materials with reference to particle crushing[END_REF][START_REF] Einav | Breakage mechanics-Part I: Theory[END_REF][START_REF] Wood | Changing grading of soil: Effect on critical state[END_REF][START_REF] Mcdowell | The fractal crushing of granular materials[END_REF], and different modeling methods have been developed. Daouadji et al. [START_REF] Daouadji | An elastoplastic model for granular materials taking into account grain breakage[END_REF] connected the position of the critical state line (CSL) to the amount of energy needed for grain breakage, showing that the CSL in the e-logp' (void ratio versus mean effective stress in log scale) descends according to the evolution of gradation. Muir Wood et al. [START_REF] Wood | Changing grading of soil: Effect on critical state[END_REF] confirmed the change of the position of CSL with grain gradation. Based on this result, they connected the CSL with a grading state index I G which is a state parameter that evaluates the evolution of the gradation as a result of grain breakage. Russell et al. [START_REF] Russell | A bounding surface plasticity model for sands exhibiting particle crushing[END_REF] used a three-segment type CSL within a boundary surface constitutive model to describe the behavior of crushable granular materials. Salim et al. [START_REF] Salim | A new elasto-plastic constitutive model for coarse granular aggregates incorporating particle breakage[END_REF] formulated a ratio between the deviatoric and the mean stresses as a function of dilation and derived a new plastic flow rule from this formulation to take into account the effects of particle breakage. Sun et al. [START_REF] Sun | An elastoplastic model for granular materials exhibiting particle crushing[END_REF] and Yao et al. [START_REF] Yao | Constitutive model considering sand crushing[END_REF] modified the plastic hardening parameter of their models in order to take into account the effect of particle breakage.
The change of the CSL with gradation can also be found when fines are added to sands (Thevanayagam et al. [START_REF] Thevanayagam | Undrained fragility of clean sands, silty sands and sandy silts[END_REF]), which in turn can be evidence of the evolution of the CSL with gradation changes due to particle breakage. Therefore, the models by Daouadji et al. [START_REF] Daouadji | An elastoplastic model for granular materials taking into account grain breakage[END_REF], Muir Wood et al. [START_REF] Wood | Particle crushing and deformation behavior[END_REF], Daouadji et al. [START_REF] Daouadji | An enhanced constitutive model for crushable granular materials[END_REF] can be better justified from a physical point of view. The model proposed in this paper situates itself along this line. However, departing from the models of Daouadji et al. [START_REF] Daouadji | An elastoplastic model for granular materials taking into account grain breakage[END_REF][START_REF] Daouadji | An enhanced constitutive model for crushable granular materials[END_REF], we propose a simple two-yield surface plastic model using the evolution of the CSL with gradation, and departing from the model of Muir Wood et al. [START_REF] Wood | Particle crushing and deformation behavior[END_REF], we connect the evolution of the gradation to the plastic work during loading. Since the gradation is an important factor in the proposed model, we have also made it possible to predict its evolution at each stage of loading.
In the first part of the paper, we present an analysis of the connection between the CSL, particle breakage and energy consumption based on experimental results. Then, we formulate an elasto-plastic model within the framework of critical state soil mechanics under triaxial condition. Finally, results of numerical simulations of triaxial tests performed on Cambria Sand under different loading conditions at high confining stresses are compared to experimental data.
Analysis of breakage 2.1 Definition of breakage index
Hardin [START_REF] Hardin | Crushing of soil particles[END_REF] suggested a breakage index B r in order to quantify the amount of particle breakage. The index is based on the changes in particle size. Einav [START_REF] Einav | Breakage mechanics-Part I: Theory[END_REF] modified the definition of this factor, taking into account the changes on the overall grain size distribution and assuming a fractal rule for particle breakage [START_REF] Mcdowell | The fractal crushing of granular materials[END_REF].
B_r = B_p / B_t,   B_p = ∫_{d_m}^{d_M} [F(d) − F_0(d)] d(log d),   B_t = ∫_{d_m}^{d_M} [F_u(d) − F_0(d)] d(log d)   (1)
where B p is the area between the original and the present grain size distributions; B t is the total area between the original and the ultimate fractal grain size distributions; F 0 (d) and F u (d) represent respectively the initial gradation before grain breakage and the ultimate fractal distribution; F(d) is the present gradation; d is the present grain size; d M and d m are the maximum and minimum grain sizes of the material. The present gradation can be expressed as
F(d) = (d / d_M)^ν   (2)
where ν is a material constant. For F_0(d) the value of ν can be measured from the initial grain size distribution, for example ν = 5.5 for Cambria Sand. For F_u(d), the value ν = 0.4, proposed by Coop et al. [START_REF] Coop | The mechanics of uncemented carbonate sands[END_REF], is adopted in order to obtain the ultimate fractal grain size distribution. Thus, for a given grain size distribution, ν can be obtained by fitting the grading curve with eq. (2). Then, the breakage index B_r can be obtained by eq. (1). In turn, for a given B_r, the present grain size distribution can be determined.
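As an illustration, the breakage index of eqs. (1)-(2) can be evaluated numerically. In the sketch below, the initial, current and ultimate gradations are the power laws of eq. (2); the grain sizes and the intermediate exponent of 2.0 are illustrative assumptions rather than data from the tests.

```python
import numpy as np

def breakage_index(F_now, F_0, F_u, d_m, d_M, n=2000):
    """Breakage index B_r = B_p / B_t of eq. (1), integrating over log d."""
    logd = np.linspace(np.log(d_m), np.log(d_M), n)
    d, w = np.exp(logd), (np.log(d_M) - np.log(d_m)) / (n - 1)
    B_p = np.sum(F_now(d) - F_0(d)) * w
    B_t = np.sum(F_u(d) - F_0(d)) * w
    return B_p / B_t

d_M, d_m = 2.0, 0.01                         # illustrative maximum and minimum grain sizes (mm)
F_0 = lambda d: (d / d_M) ** 5.5             # initial gradation of Cambria sand, eq. (2)
F_u = lambda d: (d / d_M) ** 0.4             # ultimate fractal gradation
F_now = lambda d: (d / d_M) ** 2.0           # a partially crushed gradation (illustrative)
print(breakage_index(F_now, F_0, F_u, d_m, d_M))
```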
Influence of plastic work on evolution of gradation
In this section, results from drained triaxial compression tests on Cambria sand [START_REF] Yamamuro | Drained sand behavior in axisymmetric tests at high pressures[END_REF][START_REF] Lade | Significance of particle crushing in granular materials[END_REF] were used to relate the evolution of the gradation to the plastic work. A modified plastic work increment is defined as
dw_p = p′⟨dε_v^p⟩ + q dε_d^p   (3)
where p′ is the mean effective stress, p′ = (σ′_1 + 2σ′_3)/3, q is the deviatoric stress, q = σ′_1 − σ′_3, and ⟨ ⟩ are Macaulay brackets. Eq. (3) implies that the shear induced dilation (dε_v^p < 0) is not accounted for in the modified plastic work. As a result, the evolution of the gradation is not influenced by shear induced dilation based on drained triaxial tests with confining stresses less than 2.1 MPa.
The plastic strain increments were calculated from the total strain increments by subtracting the elastic strain increments, using the following elastic law:
dε_v^e = dp′ / K,   dε_d^e = dq / (3G)   (4)
where G and K are the hypo-elastic shear and bulk modulus, respectively, defined as follows (Richart et al. [START_REF] Richart | Vibration of Soils and Foundations[END_REF]):
G = G_0 (2.97 − e)² / (1 + e) · (p′ / p_at)^n   (5)
K = K_0 (2.97 − e)² / (1 + e) · (p′ / p_at)^n   (6)
where G 0 , K 0 and n are elastic parameters; p at is the atmospheric pressure used as reference pressure (p at = 101 kPa). For Cambria sand, K 0 = 26.3 MPa and n = 0.4 were determined from isotropic compression test, and G 0 = 35 MPa from the initial slope of the stress-strain curve (e.g., 1 < 0.1%) of drained triaxial compression tests.
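The sketch below evaluates eqs. (5)-(6) with the Cambria sand parameters just quoted; the void ratio and mean effective stress used in the example are arbitrary illustrative values.

```python
P_AT = 101.0                                   # atmospheric reference pressure (kPa)

def shear_modulus(e, p_eff, G0=35e3, n=0.4):   # eq. (5), stresses in kPa
    return G0 * (2.97 - e) ** 2 / (1 + e) * (p_eff / P_AT) ** n

def bulk_modulus(e, p_eff, K0=26.3e3, n=0.4):  # eq. (6)
    return K0 * (2.97 - e) ** 2 / (1 + e) * (p_eff / P_AT) ** n

# Example: a void ratio of 0.52 at p' = 2.1 MPa (illustrative values).
print(shear_modulus(0.52, 2100.0), bulk_modulus(0.52, 2100.0))
```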
The values of the breakage index B_r as well as the modified plastic work were measured for different tests. B_r is plotted versus w_p in Figure 1; the data can be fitted by a relation involving a single material constant that controls the evolution rate of the gradation with the plastic work, and for Cambria sand a value of 15000 was obtained for this constant.
Influence of gradation on the position of the CSL
One of the important elements to be considered in soil modeling is the critical state concept. The critical state void ratio e_c is a function of the mean effective stress p′, and the relationship is traditionally written in the e-log p′ plane in terms of a reference void ratio e_ref. Muir Wood et al. [START_REF] Wood | Changing grading of soil: Effect on critical state[END_REF] studied the dependence of the critical state on the grading through simulations by the discrete element method. Their results agree with the concept developed by Biarez and Hicher. However, up to now the studies on the relation between the position of the CSL and the gradation are not based on experimental results. The concept of the CSL is based on the assumption that at the critical state the material remains at a constant volume while being subjected to continuous distortion. If ever the CSL is able to move, this concept becomes invalid. This paper extends the concept of critical state by defining the position of the CSL at a given loading stage, corresponding to the present gradation.
The drained compression tests performed by Yamamuro et al. [START_REF] Yamamuro | Drained sand behavior in axisymmetric tests at high pressures[END_REF] on Cambria sand were used to investigate the evolution of the CSL with the gradation. For each drained compression test, the void ratio at failure was measured and the state e, log p was considered as the critical state corresponding to the gradation at the final stage of the test. The value of e ref representing the position of the CSL was obtained by using eq. ( 8). Based on all drained compression tests, e ref is plotted versus the breakage index B r , as shown in Figure 1(b), from which a hyperbolic relation can be derived as
e_ref = e_ref0 - (e_ref0 - e_refu) B_r / (b + B_r)   (9)
where e_ref0 and e_refu are the initial and ultimate reference critical state void ratios, respectively; b is a material constant controlling the evolution rate of the CSL with particle breakage. For the Cambria sand, e_ref0 = 0.59 and λ = 0.006 were obtained from drained triaxial compression tests under low confining stresses (less than 1 MPa) for which Yamamuro et al. [START_REF] Yamamuro | Drained sand behavior in axisymmetric tests at high pressures[END_REF] indicated that very limited grain breakage occurred. e_refu = 0.13 and b = 0.16 were obtained from Figure 1(b).
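A short numerical sketch of the breakage-CSL coupling of eqs. (7)-(9) is given below, using the Cambria sand constants quoted above and assuming the logarithm in eq. (8) is the base-10 logarithm of the e-log p' plane; the helper names are illustrative.

```python
import numpy as np

RHO, B_CONST = 15_000.0, 0.16                           # rate constants of eq. (7) and eq. (9)
E_REF0, E_REFU, LAM, P_REF = 0.59, 0.13, 0.006, 100.0   # CSL constants, p_ref in kPa

def breakage_from_work(w_p):
    """Eq. (7): B_r = w_p / (rho + w_p)."""
    return w_p / (RHO + w_p)

def e_ref_from_breakage(B_r):
    """Eq. (9): hyperbolic drop of e_ref from e_ref0 towards e_refu."""
    return E_REF0 - (E_REF0 - E_REFU) * B_r / (B_CONST + B_r)

def critical_void_ratio(p, B_r):
    """Eq. (8): e_c = e_ref - lambda * log10(p'/p_ref)."""
    return e_ref_from_breakage(B_r) - LAM * np.log10(p / P_REF)

# Example: the CSL moves down as plastic work (and hence breakage) accumulates
for w_p in (0.0, 5_000.0, 50_000.0):
    B_r = breakage_from_work(w_p)
    print(f"w_p = {w_p:8.0f}  B_r = {B_r:.2f}  e_c(1 MPa) = {critical_void_ratio(1000.0, B_r):.3f}")
```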
Constitutive model
An isotropic hypo-elasticity was assumed for the elastic part of the model (see eqs. (4) to (6)). Thus, three parameters are required for the elastic behavior: G_0, K_0 and n. For the plastic behavior, the proposed approach uses two yield surfaces, one for shear sliding and one for normal compression. Thus, the framework of the proposed approach is similar to that of the double-hardening model developed by Vermeer [START_REF] Vermeer | A double hardening model for sand[END_REF].
Shear sliding criterion
As in many models for sand [START_REF] Wood | Particle crushing and deformation behavior[END_REF][START_REF] Vermeer | A double hardening model for sand[END_REF], the shape of the yield surface for the shear component is linear in the p'-q plot, written as follows:
f_S = η - H   (10)
where η = q/p'; H is the hardening variable defined by a hyperbolic function in the H-ε_d^p plane, similar to the one proposed by Yin et al. [START_REF] Yin | Micromechanical modelling for effect of inherent anisotropy on cyclic behaviour of sand[END_REF],
H = M_p ε_d^p / (M_p p'/G_p + ε_d^p)   (11)
where G_p is used to control the initial slope of the hyperbolic η-ε_d^p curve. Eq. (11) guarantees that the stress ratio η will approach the peak value of stress ratio M_p.
In order to take into account dilation or contraction during shear sliding, a Roscoe-type stress dilatancy equation is used
dε_v^p / dε_d^p = D (M_pt - η)   (12)
where D is a soil parameter. M pt is the slope of the phase transformation line for sand as defined by Ishihara et al. [START_REF] Ishihara | Cyclic behavior of sand during rotation of principal axes[END_REF] or the characteristic line defined by Luong [START_REF] Luong | Stress-strain aspects of cohesionless soils under cyclic and transient loading[END_REF].
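A compact sketch of the shear mechanism of eqs. (10)-(12), as written above, is given below; M_p, M_pt and the dilatancy parameter D are treated as given inputs since their values are fixed elsewhere in the model (D is not tabulated here), and the names are illustrative.

```python
def hardening(eps_d_p, p, M_p, G_p=3_500.0):
    """Eq. (11): hyperbolic hardening; H tends to M_p as eps_d^p grows (p', G_p in kPa)."""
    return M_p * eps_d_p / (M_p * p / G_p + eps_d_p)

def shear_yield(q, p, eps_d_p, M_p):
    """Eq. (10): f_S = q/p' - H."""
    return q / p - hardening(eps_d_p, p, M_p)

def dilatancy(eta, M_pt, D=1.0):
    """Eq. (12): d(eps_v^p)/d(eps_d^p) = D * (M_pt - eta)."""
    return D * (M_pt - eta)

# Example: contractive (positive volumetric increment) when eta < M_pt, dilative when eta > M_pt
print(dilatancy(eta=0.8, M_pt=1.2), dilatancy(eta=1.4, M_pt=1.2))
```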
Normal Compression Criterion
In order to describe the compressible behavior of breakable granular materials, we added a second yield surface. The second yield function is assumed to be as follows
f_N = p' - p_y   (13)
where p y is the hardening variable controlling the size of the yield surface. The yield surface expands with the plastic volumetric strain. The hardening rule of the modified Cam Clay model has been adopted:
dp_y = (p_y / c_p) dε_v^p   (14)
An associated flow rule has been adopted for the normal compression. The initial value of the compression yield stress p_y0 (at ε_v^p = 0) is also needed for the model. In order to interpolate the slope of the critical state line in the p'-q plane, M, between its values M_c (for compression) and M_e (for extension) by means of the Lode angle θ (see Sheng et al. [START_REF] Sheng | Aspects of finite element implementation of critical state models[END_REF]), the following relation is adopted:
M = M_c [ 2c⁴ / (1 + c⁴ + (1 - c⁴) sin 3θ) ]^(1/4)   (15)
where c = (3 - sin φ)/(3 + sin φ), assuming the same friction angle φ for compression and extension.
Density state effect
The material's density state is defined by the ratio e_c/e, where e is the present void ratio and e_c is the critical void ratio for the present value of p' obtained by eq. (8).
According to Biarez et al. [START_REF] Biarez | Elementary Mechanics of Soil Behaviors[END_REF], the peak friction angle φ_p (related to M_p = 6 sin φ_p/(3 - sin φ_p) for triaxial compression) is linked to the intrinsic friction angle φ (related to the critical state value M = 6 sin φ/(3 - sin φ) for triaxial compression) and the density state e_c/e:
tan φ_p = (e_c/e) tan φ   (16)
M_p is then obtained through the critical state M and the density state e_c/e. Eq. (16) means that in a loose assembly the peak friction angle φ_p is smaller than φ. On the other hand, a dense state provides a higher degree of interlocking. Therefore, the peak friction angle φ_p is greater than φ. When the stress state reaches the phase transformation line, the dense assembly dilates and the degree of interlocking decreases. As a consequence, the peak friction angle is reduced, which results in a strain-softening phenomenon. M_pt is the slope of the phase transformation line for sand, which we assume to be a function of the void ratio:
tan φ_pt = (e/e_c) tan φ   (17)
Eq. ( 17) indicates that a dense packing has a smaller phase transformation angle than a loose packing, producing the same effect as in the formulation used by Muir Wood et al. [START_REF] Wood | Particle crushing and deformation behavior[END_REF].
From eqs. (7), (8) and (9), the particle breakage directly influences the position of the CSL, which results in the change of the density state e_c/e. All the terms related to e_c/e (e.g., φ_p, φ_pt, H, etc.) are then influenced by breakage, which allows us to incorporate the influence of particle breakage into the model.
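The density-state coupling of eqs. (16) and (17) is straightforward to evaluate; a sketch using the triaxial-compression relations between M and the friction angles is given below (names are illustrative).

```python
import numpy as np

def M_from_phi(phi_deg):
    """M = 6 sin(phi) / (3 - sin(phi)) for triaxial compression."""
    s = np.sin(np.radians(phi_deg))
    return 6.0 * s / (3.0 - s)

def peak_and_pt_angles(e, e_c, phi_deg=37.5):
    """Eqs. (16)-(17): tan(phi_p) = (e_c/e) tan(phi), tan(phi_pt) = (e/e_c) tan(phi)."""
    t = np.tan(np.radians(phi_deg))
    phi_p = np.degrees(np.arctan(e_c / e * t))
    phi_pt = np.degrees(np.arctan(e / e_c * t))
    return phi_p, phi_pt

# Dense state (e < e_c): phi_p > phi and phi_pt < phi; a loose state gives the opposite trend
for e in (0.45, 0.60):
    phi_p, phi_pt = peak_and_pt_angles(e, e_c=0.52)
    print(f"e = {e:.2f}  phi_p = {phi_p:.1f}  phi_pt = {phi_pt:.1f}  M_p = {M_from_phi(phi_p):.2f}")
```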
Test simulations
Determination of model parameters
The drained triaxial tests on Cambria sand performed by Yamamuro et al. [START_REF] Yamamuro | Drained sand behavior in axisymmetric tests at high pressures[END_REF] and the undrained triaxial tests on the same material by Bopp et al. [START_REF] Bopp | Relative density effects on undrained sand behavior at high pressures[END_REF] were used to calibrate and validate the model. All selected samples were isotropically consolidated with an initial void ratio of 0.52 at different stress levels before shearing. The determination of the model parameters is based on one isotropic compression test and drained triaxial tests in compression as follows.
K_0 = 26.3 MPa and n = 0.4 were calibrated from an isotropic compression test (p' < 12 MPa, see Figure 2(a)), and G_0 = 35 MPa was calibrated from the initial slope of the stress-strain curve (e.g., ε_1 < 0.1%) of drained triaxial compression tests.
p y0 = 12 MPa and c p = 0.028 were obtained by the curve fitting of the isotropic compression test (see Figure 2(a)). In Figure 2(a), simulations were also carried out for different values of c p which controls the slope of the compression line under high stress levels.
G_p = 3.5 MPa was obtained by fitting the initial slope of the q-ε_a curve (for ε_a < 1%) of the drained test at the confining stress of 26 MPa.
φ = 37.5° was determined from drained triaxial tests at lower confining stresses (2.1, 4, 5.8 MPa). e_ref0 = 0.59 and λ = 0.006 were obtained from drained triaxial compression tests under low confining stresses (less than 1 MPa) for which Yamamuro et al. [START_REF] Yamamuro | Drained sand behavior in axisymmetric tests at high pressures[END_REF] indicated very slight grain breakage.
ρ = 15000, b = 0.16 and e_refu = 0.13 were determined on the basis of the breakage analysis (see Figures 1(a), (b), 2(b) and (c)). Numerical simulations with different values of these parameters agree with the experimental results on carbonate sands presented by Coop [START_REF] Coop | The mechanics of uncemented carbonate sands[END_REF]: particle breakage increases the contraction and decreases the peak friction angle of the material.
All the determined values of the model parameters are summarized in Table 1, and are used for simulating the tests with different stress paths.
Simulations of drained triaxial tests in compression and extension
Figure 3 shows comparison between experimental results and numerical simulations for drained triaxial tests in compression with confining stresses varying from 2.1 to 52 MPa. A good agreement was achieved for all comparisons. With the model, we are able to reproduce the main features of the mechanical behavior of sand influenced by particle breakage.
(i) Under the lowest confining stress (2.1 MPa), the material exhibits a dilative behavior.
(ii) For higher confining stresses, the material becomes contractive. The disappearance of the dilation is linked to the increase of particle breakage occurring under high stresses (2.1-26 MPa). For tests under confining stresses from 2.1 to 26 MPa, the volumetric strain increases with the increase of the confining stress.
(iii) For tests under confining stresses from 26 to 52 MPa, the volumetric strain decreases with the increase of the confining stress. Yamamuro et al. [START_REF] Yamamuro | Drained sand behavior in axisymmetric tests at high pressures[END_REF] indicated that this effect is caused by increasing amounts of volumetric contraction and particle breakage during the isotropic consolidation stage, and that lower void ratios are obtained by increasing the confining stress, which in turn generates less volumetric contraction during shearing. This trend was well reproduced by the model incorporating particle breakage. The increase of particle breakage during isotropic loading results in a smaller amount of breakable grains left over during the shearing stage (see Figure 1(a) for the evolution of B r which becomes stable for high plastic work). As a result, the material becomes less contractive.
Using the method presented in the section concerning the breakage analysis, we can predict the evolution of the gradation. The test results and the model's predictions of the gradation are presented in Figure 4, which demonstrates the model's capacity to predict gradation changes during loading.
Using the same set of parameters (Table 1), the model has also been applied to simulate drained extension tests on Cambria sand with confining stresses varying from 6 to 42 MPa (Bopp et al. [START_REF] Bopp | Relative density effects on undrained sand behavior at high pressures[END_REF]). A good agreement was also achieved between experimental results and numerical simulations, as shown in Figure 5. With the model, we are capable of reproducing the stress-strain relation and the volumetric strain response for different confining stress levels. The evolution of the gradation for all the selected drained extension tests was also predicted. Again, good agreement was achieved between experimental data and simulations as shown in Figure 6.
Simulations of undrained triaxial tests in compression and extension
The model with the same set of parameters was also used to simulate the undrained compression and extension behaviors of Cambria sand under high consolidation stresses varying from 16.7 to 68.9 MPa for compression tests and from 12 to 52 MPa for extension tests. A good agreement was achieved between experimental and numerical results as presented in Figures 7 and 9.
(i) The initial slope of the stress-strain curves increases with increasing consolidation pressure.
(ii) Different from drained tests, the peak deviatoric stress is reached at a very low strain level, followed by a distinct reduction in the deviatoric stress. (iii) The axial strain corresponding to the peak deviatoric stress increases slightly with an increase of the initial consolidation pressure.
(iv) The effective confining stress decreases as a result of the rapidly increasing pore pressure, and the loading resistance of the material becomes reduced, which corresponds to an unstable state also observed in similar tests performed on loose specimens at low confining stresses.
The experimental and predicted evolutions of the gradation are presented in Figures 8 and 10, which show that the model with parameters determined from drained compression tests can predict the evolution of the gradation during undrained shearing in compression and extension.
Conclusions
We analyzed the evolution of gradation as a function of the amount of plastic work and the evolution of the position of the CSL with the gradation. Based on these analyses, we suggested two constitutive equations for the relation between the breakage index, the modified plastic work, and the reference critical state void ratio. A double-yield surface model accounting for the influence of particle breakage was developed, which includes the constitutive equations controlling grain breakage.
Triaxial tests on Cambria sand were used to calibrate and validate the model. The parameters can be easily determined from one isotropic compression test and several drained compression tests. Using the set of determined parameters, several other tests including drained tests in compression and extension and undrained tests in compression and extension were simulated. The grain size distributions at the end of each test were also predicted. All comparisons between the experimental data and the numerical simulations demonstrate that the model reproduces with good accuracy the mechanical behavior of granular materials with particle breakage along various loading paths, as well as the evolution of the grain size distribution during loading.
Figure 1 Particle crushing effect. (a) Evolution of breakage index versus modified plastic work, and (b) evolution of reference critical state void ratio versus breakage index.
Figure 2 Parametric study of particle breakage related parameters. (a) For isotropic compression test and (b)-(c) deviatoric stress and volumetric strain versus major principal strain, respectively, of drained compression test with constant confining stress 26 MPa.
Figure 3 Comparison between experimental data and numerical simulations for drained triaxial compression tests. (a)-(c) Deviatoric stress versus major principal strain; (b)-(d) volumetric strain versus major principal strain.
Figure 4 Grain size distributions for drained compression tests. (a) Experiments, and (b) simulations.
Figure 5 Comparison between experimental results and simulations for drained triaxial extension tests. (a) Deviatoric stress versus major principal strain, and (b) volumetric strain versus major principal strain.
Figure 6 Grain size distributions for drained extension tests. (a) Experiments, and (b) simulations.
Figure 7 Comparison between experimental results and simulations for undrained triaxial compression tests. (a) Deviatoric stress versus major principal strain, and (b) effective stress paths.
Figure 8 Grain size distributions for undrained compression tests. (a) Experiments, and (b) simulations.
Figure 9 Comparison between experimental results and simulations for undrained triaxial extension tests. (a) Deviatoric stress versus major principal strain, and (b) effective stress paths.
Figure 10 Grain size distributions for undrained extension tests. (a) Experiments, and (b) simulations.
Table 1 Values of model parameters for Cambria sand
Parameter: G_0 (MPa), K_0 (MPa), n, G_p (MPa), φ, p_y0 (MPa), c_p, e_ref0, λ, ρ, b, e_refu
Value: 35, 26.3, 0.4, 3.5, 37.5°, 12, 0.028, 0.59, 0.006, 15000, 0.16, 0.13
| 27,386 | [
"947413",
"171859",
"966872"
] | [
"484660",
"233961",
"390084",
"390084"
] |
01465389 | en | [
"math"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01465389/file/Mazumdar_BlowUp_Hardy-Sobolev_Feb8.pdf | Saikat Mazumdar
email: [email protected]
HARDY-SOBOLEV EQUATIONS WITH ASYMPTOTICALLY VANISHING SINGULARITY: BLOW-UP ANALYSIS FOR THE MINIMAL ENERGY
Keywords:
We study the asymptotic behavior of a sequence of positive solutions (u_ε)_{ε>0} as ε → 0 to the family of equations
∆u_ε + a(x) u_ε = u_ε^{2*(s_ε)-1}/|x|^{s_ε} in Ω, u_ε > 0 in Ω, u_ε = 0 on ∂Ω,
where (s_ε)_{ε>0} is a sequence of positive real numbers such that lim_{ε→0} s_ε = 0
and Ω ⊂ R n is a bounded smooth domain such that 0 ∈ ∂Ω. When the sequence (u ) >0 is uniformly bounded in L ∞ , then upto a subsequence it converges strongly to a minimizing solution of the stationary Schrödinger equation with critical growth. In case the sequence blows up, we obtain strong pointwise control on the blow up sequence, and then using the Pohozaev identity localize the point of singularity, which in this case can at most be one, and derive precise blow up rates. In particular when n = 3 or a ≡ 0 then blow up can occur only at an interior point of Ω or the point 0 ∈ ∂Ω.
Introduction
Let Ω be a bounded smooth oriented domain of R n , n ≥ 3, such that 0 ∈ ∂Ω. We define the Sobolev space H 2 1,0 (Ω) as the completion of the space C ∞ c (Ω), the space of compactly supported smooth functions in Ω, with respect to the norm u → u H 2 1,0 (Ω) = |∇u L 2 (Ω) . We let 2 * := 2n n-2 be the critical Sobolev exponent for the embeding H 2 1,0 (Ω) → L p (Ω). Namely, the embedding is defined and continuous for 1 ≤ p ≤ 2 * , and it is compact iff 1 ≤ p < 2 * . Let a ∈ C 1 (Ω) be such that the operator ∆ + a is coercive in Ω, that is there exists A 0 > 0 such that
Ω (|∇ϕ| 2 + aϕ 2 ) dx ≥ A 0 u 2 H 2 1,0 (Ω) for all ϕ ∈ H 2 1,0 (Ω). Solutions u ∈ C 2 (Ω) to the problem ∆u + a(x)u = u 2 * -1 in Ω u > 0 in Ω u = 0
on ∂Ω (often referred to as "Brezis-Nirenberg problem") are critical points of the functional
u → J(u) := Ω |∇u| 2 + au 2 dx Ω |u| 2 * dx 2/2 * .
Here, ∆ := -div(∇) =i ∂ ii is the Laplacian with minus sign convention. A natural way to obtain such critical points is to find minimizers to this functional, that is to prove that µ a (Ω) = inf u∈H 2 1,0 (Ω)\{0}
J(u)
is achieved. There is a huge and extensive litterature on this problem, starting with the pioneering article of Brezis-Nirenberg [START_REF] Brézis | Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents[END_REF] in which the authors completely solved the question of existence of minimizers for µ a (Ω) when a ≡constant and n ≥ 4 for any domain, and n = 3 for a ball. Their analysis took inspiration from the contributions of Aubin [START_REF] Th | Problèmes isopérimétriques et espaces de Sobolev[END_REF] in the resolution of the Yamabe problem. The case when a is arbitrary and n = 3 was solved by Druet [START_REF] Druet | Elliptic equations with critical Sobolev exponents in dimension 3[END_REF] using blowup analysis.
In [START_REF] Ghoussoub | Hardy-Sobolev critical elliptic equations with boundary singularities[END_REF], Ghoussoub-Kang suggested an alternative approach by adding a singularity in the equation as follows. For any s ∈ [0, 2), we define 2 * (s) := 2(n -s) n -2 so that 2 * = 2 * (0). Consider the weak solutions u ∈ H 2 1,0 (Ω)\{0} to the problem
∆u + a(x)u = u 2 * (s)-1 |x| s in Ω u ≥ 0 in Ω u = 0 on ∂Ω.
Note here that 0 ∈ ∂Ω is a boundary point. Such solutions can be achieved as minimizers for the problem
µ_{s,a}(Ω) = inf_{u ∈ H²_{1,0}(Ω)\{0}} ( ∫_Ω (|∇u|² + a u²) dx ) / ( ∫_Ω |u|^{2*(s)}/|x|^s dx )^{2/2*(s)}   for s ∈ (0, 2).   (1)
Consider a sequence of positive real numbers (s_ε)_{ε>0} such that lim_{ε→0} s_ε = 0. We let
(u ) >0 ∈ C 2 Ω\{0} ∩ C 1 Ω such that ∆u + au = u 2 * (s )-1 |x| s in Ω, u > 0 in Ω, u = 0 on ∂Ω. (2)
Moreover, we assume that the (u )'s are of minimal energy type in the sense that
Ω |∇u | 2 + au 2 dx Ω |u | 2 * (s ) |x| s dx 2/2 * (s ) = µ s ,a (Ω) + o(1) ≤ 1 K(n, 0) + o(1) (3)
as → 0, where K(n, 0) > 0 is the best constant in the Sobolev embedding defined in [START_REF] Caffarelli | Asymptotic symmetry and local behavior of semilinear elliptic equations with critical Sobolev growth[END_REF]. Indeed, it follows from Ghoussoub-Robert [START_REF] Ghoussoub | The effect of curvature on the best constant in the Hardy-Sobolev inequalities[END_REF][START_REF]Concentration estimates for Emden-Fowler equations with boundary singularities and critical growth[END_REF] that such a family (u ) exists if the the mean curvature of ∂Ω at 0 is negative.
In this paper we are interested in studying the asymptotic behavior of the sequence (u ) >0 as → 0. As proved in Proposition 2.2, if the weak limit u 0 of (u ) in H 2 1,0 (Ω) is nontrivial, then the convergence is indeed strong and u 0 is a minimizer of µ a (Ω). We completely deal with the case u 0 ≡ 0, which is more delicate, in which blow-up necessarily occurs. In the spirit of the C 0 -theory of Druet-Hebey-Robert [START_REF] Druet | Blow-up theory for elliptic PDEs in Riemannian geometry[END_REF], our first result is the following:
Theorem 1.
Let Ω be a bounded smooth oriented domain of R n , n ≥ 3 , such that 0 ∈ ∂Ω, and let a ∈ C 1 (Ω) be such that the operator ∆ + a is coercive in Ω. Let (s ) >0 ∈ (0, 2) be a sequence such that lim →0 s = 0. Suppose that the sequence (u ) >0 ∈ H 2 1,0 (Ω), where for each > 0, u satisfies (2) and (3), is a blowup sequence, i.e u 0 weakly in H 2 1,0 (Ω) as → 0
Then, there exists C > 0 such that for all > 0
u_ε(x) ≤ C ( µ_ε / (µ_ε² + |x - x_ε|²) )^{(n-2)/2}   for all x ∈ Ω,
where µ_ε^{-(n-2)/2} = u_ε(x_ε) = max_{x∈Ω} u_ε(x).
With this optimal pointwise control, we obtain more information on the localization of the blowup point x_0 := lim_{ε→0} x_ε and the blowup parameter (µ_ε)_ε. We let G : Ω × Ω \ {(x, x) : x ∈ Ω} → R be the Green's function of the coercive operator ∆ + a in Ω with Dirichlet boundary conditions. For any x ∈ Ω we write G_x as:
G x (y) = 1 (n -2)ω n-1 |x -y| n-2 + g x (y) for y ∈ Ω \ {x}
where ω n-1 is the area of the (n -1)-sphere. In dimension n = 3 or when a ≡ 0, one has that g x ∈ C 2 (Ω \ {x}) ∩ C 0,θ (Ω) for some 0 < θ < 1, and g x (x) is defined for all x ∈ Ω and is called the mass of the operator ∆ + a.
Theorem 2.
Let Ω be a bounded smooth oriented domain of R n , n ≥ 3 , such that 0 ∈ ∂Ω, and let a ∈ C 1 (Ω) be such that the operator ∆ + a is coercive in Ω. Let (s ) >0 ∈ (0, 2) be a sequence such that lim →0 s = 0. Suppose that the sequence
(u ) >0 ∈ H 2 1,0 (Ω)
, where for each > 0, u satisfies (2) and (3), is a blowup sequence, i.e u 0 weakly in H 2 1,0 (Ω) as → 0
We let (µ_ε)_ε ∈ (0, +∞) and (x_ε)_ε ∈ Ω be such that µ_ε^{-(n-2)/2} = u_ε(x_ε) = max_{x∈Ω} u_ε(x).
We define x 0 := lim →0 x and we assume that
x_0 ∈ Ω is an interior point. Then
lim_{ε→0} s_ε/µ_ε² = 2* K(n,0)^{2*/(2*-2)} d_n a(x_0)   for n ≥ 5,
lim_{ε→0} s_ε/(µ_ε² log(1/µ_ε)) = 256 ω_3 K(4,0)² a(x_0)   for n = 4,
lim_{ε→0} s_ε/µ_ε^{n-2} = -n b_n² K(n,0)^{n/2} g_{x_0}(x_0)   for n = 3 or a ≡ 0,
where g_{x_0}(x_0) is the mass at the point x_0 ∈ Ω for the operator ∆ + a, and
d_n = ∫_{R^n} (1 + |x|²/(n(n-2)))^{-(n-2)} dx for n ≥ 5 ; b_n = ∫_{R^n} (1 + |x|²/(n(n-2)))^{-(n+2)/2} dx   (4)
and ω 3 is the area of the 3-sphere. When x 0 ∈ ∂Ω is a boundary point, we get similar estimates:
Theorem 3.
Let Ω be a bounded smooth oriented domain of R n , n ≥ 3 , such that 0 ∈ ∂Ω, and let a ∈ C 1 (Ω) be such that the operator ∆ + a is coercive in Ω. Let (s ) >0 ∈ (0, 2) be a sequence such that lim →0 s = 0. Suppose that the sequence
(u ) >0 ∈ H 2 1,0 (Ω)
, where for each > 0, u satisfies (2) and (3), is a blowup sequence, i.e u 0 weakly in H 2 1,0 (Ω) as → 0
We let (µ_ε)_ε ∈ (0, +∞) and (x_ε)_ε ∈ Ω be such that µ_ε^{-(n-2)/2} = u_ε(x_ε) = max_{x∈Ω} u_ε(x).
Assume that lim
→0 x = x 0 ∈ ∂Ω. Then (1) If n = 3 or a ≡ 0, then as → 0 lim →0 s d(x , ∂Ω) n-2 µ n-2 = n n-1 (n -2) n-1 K(n, 0) n/2 ω n-1 2 n-2 .
Moreover, d(x , ∂Ω) = (1 + o(1))|x | as → 0. In particular x 0 = 0.
(2) If n = 4. Then as → 0
s 4 K(4, 0) -2 + o(1) - µ d(x , ∂Ω) 2 (32ω 3 + o(1)) = µ 2 log d(x , ∂Ω) µ [64ω 3 a(x 0 ) + o(1)] (3) If n ≥ 5. Then as → 0 s (n -2) 2n K(n, 0) -n/2 + o(1) - µ d(x , ∂Ω) n-2 n n-2 (n -2) n ω n-1 2 n-1 + o(1) = µ 2 [d n a(x 0 ) + o(1)]
where d n is as in (4).
Theorem 3 is a particular case of Theorem 7 proved in Section 7.
The main difficulty in our analysis is due to the natural singularity at 0 ∈ ∂Ω. Indeed, there is a balance between two facts. First, since s_ε > 0, this singularity exists and has an influence on the analysis, and in particular on the Pohozaev identity (see the statement of Theorem 2). But, second, since s_ε → 0, the singularity should cancel, at least asymptotically. In this perspective, our results are twofold. Theorem 1 asserts that the pointwise control is the same as the control of the classical problem with s = 0: however, to prove this result, we need to perform a very delicate analysis of the blowup with the perturbation s_ε > 0, even for the initial steps that are usually standard when s = 0 (these are Sections 3 and 4).
The influence and the role of s > 0 is much more striking in Theorems 2 and 3. Compared to the case s = 0, the Pohozaev identity (see Section 6) enjoys an additional term involving s that is present in the statement of Theorems 2 and 3.
Heuristically, this is due to the fact that the limiting equation ∆u = |x| -s u 2 * (s)-1 is not invariant under the action of the translations when s > 0.
Some classical references for the blow-up analysis of nonlinear critical elliptic pdes are Rey [START_REF] Rey | The role of the Green's function in a nonlinear elliptic equation involving the critical Sobolev exponent[END_REF], Adimurthi-Pacella-Yadava [START_REF] Adimurthi | Interaction between the geometry of the boundary and positive solutions of a semilinear Neumann problem with critical nonlinearity[END_REF], Han [START_REF] Han | Asymptotic approach to singular solutions for nonlinear elliptic equations involving critical Sobolev exponent[END_REF], Hebey-Vaugon [START_REF] Hebey | The best constant problem in the Sobolev embedding theorem for complete Riemannian manifolds[END_REF] and Khuri-Marques-Schoen [START_REF] Khuri | A compactness theorem for the Yamabe problem[END_REF]. In Mazumdar [START_REF] Mazumdar | GJMS-type operators on a compact Riemannian manifold: best constants and Coron-type solutions[END_REF] the usefulness of blow-up analysis techniques were illustrated by proving the existence of solution to critical growth polyharmonic problems on manifolds. The analysis of the 3 dimensional problem by Druet [START_REF] Druet | Elliptic equations with critical Sobolev exponents in dimension 3[END_REF] and the monograph [START_REF] Druet | Blow-up theory for elliptic PDEs in Riemannian geometry[END_REF] by Druet-Hebey-Robert were important sources of inspiration.
This paper is organized as follows. In Section 2 we recall general facts on Hardy-Sobolev inequalities and prove few useful general and classical statements. Section 3 is devoted to the proof of convergence to a ground state up to rescaling. In Section 4, we perform a delicate blow-up analysis to get a first pointwise control on u . The optimal control of Theorem 1 is proved in Section 5. With the pointwise control of Theorem 1, we are able to estimate the maximum of the u 's when the blowup point is in the interior of the domain (Section 6) or on the boundary (Section 7).
Acknowledgements. I would like to express my deep gratitude to Professor Frédéric Robert and Professor Dong Ye, my thesis supervisors, for their patient guidance, enthusiastic encouragement and useful critiques of this work.
2. Hardy-Sobolev inequality and the case of a nonzero weak limit
The space D 1,2 (R n ) is defined as the completion of the space C ∞ c (R n ), the space of compactly supported smooth functions in R n , with respect to the norm
u D 1,2 = ∇u L 2 (R n ) . The embedding D 1,2 (R n ) → L 2 * (R n
) is continuous, and we denote the best constant of this embedding by K(n, 0) which can be characterised as
1 K(n, 0) = inf u∈D 1,2 (R n )\{0} R n |∇u| 2 dx R n |u| 2 * dx 2/2 * (5)
Interpolating the Sobolev inequality and the Hardy inequality
(6) R n |u(x)| 2 |x| 2 dx ≤ 2 n -2 2 R n |∇u| 2 dx for u ∈ D 1,2 (R n ),
we get the so-called "Hardy-Sobolev inequality" (see [START_REF] Ghoussoub | Hardy-Sobolev critical elliptic equations with boundary singularities[END_REF] and the references therein): there exists a constant K(n, s) > 0 such that
1 K(n, s) = inf u∈D 1,2 (R n )\{0} R n |∇u| 2 dx R n |u| 2 * (s) |x| s dx 2/2 * (s) (7)
As one checks, lim s→0 K(n, s) = K(n, 0). For a domain Ω ⊂ R n , we also have:
Proposition 2.1. lim s→0 µ s,a (Ω) = µ a (Ω).
Proof. Let u ∈ H 2 1,0 (Ω)\{0}. Hölder and Hardy inequalities yield
Ω |u(x)| 2 * (s) |x| s dx 2/2 * (s) ≤ 2 n -2 2s/2 * (s) Ω |∇u(x)| 2 dx s/2 * (s) Ω |u(x)| 2 * dx 2-s 2 * (s)
then the Sobolev inequality gives that for all u ∈ H 2 1,0 (Ω)\{0} one has
Ω |∇u| 2 + au 2 dx Ω |u| 2 * dx 2/2 * ≤ Ω |∇u| 2 + au 2 dx Ω |u| 2 * (s) |x| s dx 2/2 * (s) 1 K(n, 0) 1/2 * 2 n -2 s n-2 n-s So µ a (Ω) ≤ µ s,a (Ω) 1 K(n,0) 1/2 * 2 n-2 s n-2
n-s . Passing to limits as s → 0, one obtains that µ a (Ω) ≤ lim inf s→0 µ s,a (Ω). Let u ∈ H 2 1,0 (Ω)\{0}. By Fatou's lemma one has
Ω |u(x)| 2 * dx ≤ lim inf s→0 Ω |u(x)| 2 * (s) |x| s dx ≤ lim inf s→0 1 µ s,a (Ω) Ω |∇u| 2 + au 2 dx 2 * (s)/2 , Ω |u(x)| 2 * dx 2/2 * ≤ lim inf s→0 1 µ s,a (Ω) Ω |∇u| 2 + au 2 dx
Therefore lim sup s→0 µ s,a (Ω) ≤ µ a (Ω), hence lim s→0 µ s,a (Ω) = µ a (Ω). This proves Proposition 2.1.
The following proposition is standard:
Proposition 2.2.
Let Ω be a bounded smooth oriented domain of R n , n ≥ 3 , such that 0 ∈ ∂Ω. Let a ∈ C 1 (Ω) be such that the operator ∆ + a is coercive in Ω Let (u ) >0 ∈ C 2 Ω\{0} ∩ C 1 Ω be as in (2) and (3). Then there exists u 0 ∈ H 2 1,0 (Ω) such that, up to extraction, u u 0 weakly in H 2 1,0 (Ω) as → 0.
Indeed, u 0 ∈ C 2 Ω\{0} ∩ C 1 Ω is a solution to ∆u 0 + au 0 = u 2 * -1 0 in Ω u 0 ≥ 0 in Ω, u 0 = 0 on ∂Ω If u 0 = 0, then u 0 > 0 in Ω and lim →0 u = u 0 in C 1 (Ω).
Moreover, µ a (Ω) is achieved by u 0 .
Preliminary Blow-up Analysis
We define R n -= {x ∈ R n : x 1 < 0} where x 1 is the first coordinate of a generic point in R n . This space will be the limit space in certain cases after blowup. We describe a parametrisation around a point of the boundary ∂Ω. Let p ∈ ∂Ω. Then there exists U ,V open in R n and a smooth diffeomorphism T : U -→ V such that upto a rotation of coordinates if necessary (8)
• 0 ∈ U and p ∈ V • T (0) = p • T (U ∩ {x 1 < 0}) = V ∩ Ω • T (U ∩ {x 1 = 0}) = V ∩ ∂Ω • D 0 T = I R n .
Here D x T denotes the differential of T at the point x and I R n is the identity map on R n . • D 0 T (e 1 ) = ν p where ν p denotes the outer unit normal vector to ∂Ω at the point p.
• {D 0 T (e 2 ), • • • , D 0 T (e n )} forms an orthonormal basis of T p ∂Ω.
We start with a scaling lemma which we shall employ many times in our analysis.
Lemma 1.
Let Ω be a bounded smooth oriented domain of R n , n ≥ 3 , such that 0 ∈ ∂Ω, and let a ∈ C 1 (Ω) be such that the operator ∆ + a is coercive in Ω. Let (s ) >0 ∈ (0, 2) be a sequence such that lim →0 s = 0. Consider the sequence
(u ) >0 ∈ H 2 1,0 (Ω)
, where for each > 0, u satisfies (2) and (3). Let (y ) ∈ Ω, and let (ν ) and (β ) be sequences of positive real numbers defined by
ν -n-2 2 = u (y ) β := |y | s /2 ν 2-s 2 for > 0 (9)
Suppose that lim →0 ν = 0 which then implies that lim →0 β = 0. Assume that there exists C 1 > 0 such that for any given R > 0 one has for > 0 small u (x) ≤C 1 u (y ) for all x ∈ B y (Rν ) [START_REF]Concentration estimates for Emden-Fowler equations with boundary singularities and critical growth[END_REF] Then ν = o (|y |) as → 0. Along with the above assumption also suppose that there exists C 2 > 0 such that for any given R > 0 one has for > 0 small u (x) ≤C 2 u (y ) for all x ∈ B y (Rβ ) [START_REF] Ghoussoub | Hardy-Sobolev critical elliptic equations with boundary singularities[END_REF] Then β = o (d(y , ∂Ω)) as → 0. For > 0 we then rescale and define
w (x) := u (y + β x) u (y ) for x ∈ Ω -y β (12) Then there exists w ∈ C ∞ (R n ) ∩ D 1,2 (R n ) such that w > 0 and for any η ∈ C ∞ c (R n ) ηw ηw weakly in D 1,2 (R n ) as → 0 Further, lim →0 w = w in C 1 loc (R n ) and w satisfies the equation ∆w = w 2 * -1 in R n w ≥ 0 in R n .
Proof. The proof is completed in the following steps.
Step
(x) = u • T (ν x) u (y ) for x ∈ U ν ∩ {x 1 ≤ 0} Step 1.1: For any η ∈ C ∞ c (R n ), one has that η w ∈ D 1,2 (R n -)
for > 0 sufficiently small. We claim that there exists wη
∈ D 1,2 (R n -) such that upto a subsequence η w wη weakly in D 1,2 (R n -) as → 0 η w (x) → wη (x)
a.e in R n -as → 0
We prove the claim. Let x ∈ R n -, then
∇ (η w ) (x) = w (x)∇η(x) + ν u (y ) η(x)D (ν x) T [∇u (T (ν x))]
Now for any θ > 0, there exists
C(θ) > 0 such that for any a, b > 0, (a + b) 2 ≤ C(θ)a 2 + (1 + θ)b 2 .
With this inequality we then obtain
R n - |∇ (η w )| 2 dx ≤ C(θ) R n - |∇η| 2 w2 dx + (1 + θ) ν 2 u 2 (y ) R n - η 2 D (ν x) T [∇u (T (ν x))] 2 dx Since D 0 T = I R n we have as → 0 R n - |∇ (η w )| 2 dx ≤ C(θ) R n - |∇η| 2 w2 dx + (1 + θ) (1 + O(ν )) ν 2 u 2 (y ) R n - η 2 |∇u (T (ν x))| 2 (1 + o(1))dx
With Hölder inequality and a change of variables this becomes
R n - |∇ (η w )| 2 dx ≤ C(θ) ∇η 2 L n Ω u 2 * dx n-2 n + (1 + θ) (1 + O(ν )) Ω |∇u | 2 dx (14)
Since u H 2 1,0 (Ω) = O(1) and ν → 0 as → 0, so for > 0 small enough,
η w D 1,2 (R n -) ≤ C η ,
where C η is a constant depending on the function η. The claim then follows from the reflexivity of D 1,2 (R n -).
Step 1.2: Via a diagonal argument, we get that there exists w
∈ D 1,2 (R n -) such that for any η ∈ C ∞ c (R n ), then η w η w weakly in D 1,2 (R n -) as → 0 η w (x) → η w(x) a.e x in R n -as → 0 We claim that w ∈ C 1 (R n -)
and it satisfies weakly the equation
(15) ∆ w = w2 * -1 in R n - w = 0 on {x 1 = 0}
We prove the claim. For i, j = 1, . . . , n, we let g ij = (∂ i T , ∂ j T ), the metric induced by the chart T on the domain U ∩ {x 1 < 0} and let ∆ g denote the Laplace-Beltrami operator with respect to the metric g. We let g = g (ν x)
From (2) it follows that for any > 0 and R > 0, w satisfies weakly the equation
∆ w + ν 2 (a • T (ν x)) w = w2 (s )-1 T (ν x) ν s in B 0 (R) ∩ {x 1 < 0} w = 0 on B 0 (R) ∩ {x 1 = 0} (16)
From [START_REF]Concentration estimates for Emden-Fowler equations with boundary singularities and critical growth[END_REF] and the properties of the boundary chart T it follows that there exists C 1 > 0 such that for > 0 small 0 ≤ w (x) ≤ C 1 for all x ∈ B 0 (R) ∩ {x 1 ≤ 0}, for R > 0 large. Then for any p > 1 there exists a constant C p > 0 such that
B0(R)∩{x1<0} ( w ) 2 * (s )-1 T (ν x) ν s p dx ≤ C p B0(R)∩{x1<0} 1 |x| s p dx
So the right hand side of equation ( 16) is uniformly bounded in L p for some p > n.
From standard elliptic estimates it follows that the sequence
(η R w ) >0 is bounded in C 1,α0 (B 0 (R) ∩ {x 1 ≤ 0})
for some α 0 ∈ (0, 1). So by Arzela-Ascoli's theorem and a diagonal argument, we get that w ∈ C 1,α loc (R n ∩ {x 1 ≤ 0}) for 0 < α < α 0 , and that, up to a subsequence
lim →0 w = w in C 1,α loc (R n ∩ {x 1 ≤ 0})
for 0 < α < α 0 . Passing to the limit in ( 16), we get [START_REF] Gilbarg | Elliptic partial differential equations of second order[END_REF]. This proves our claim.
Step 1.3: Let ỹ ∈ U be such that T (ỹ ) = y . From the properties (8) of the boundary chart T , we get that
|ỹ | ν = O |y | ν . Then there exists ỹ ∈ R n -such that ỹ ν → ỹ as → 0. Therefore w(ỹ) = lim →0 w (ν -1 ỹ ) = 1. Therefore ỹ ∈ R n -, and then w ∈ C 1 (R n -) is a nontrivial weak solution of the equation ∆ w = w2 * -1 in R n - w = 0 on {x 1 = 0}
which is impossible, see Struwe [START_REF] Struwe | Variational methods[END_REF] (Chapter III, Theorem 1.3) and the Liouville theorem on half space. Hence [START_REF] Hebey | Asymptotic analysis for fourth order Paneitz equations with critical growth[END_REF] holds. This completes the proof of Step 1.
Step 2: Next, arguing similarly as in Step 1 and using (13), we get that lim →0 d(y , ∂Ω) β = +∞ [START_REF] Mazumdar | GJMS-type operators on a compact Riemannian manifold: best constants and Coron-type solutions[END_REF] We define w as in [START_REF] Han | Asymptotic approach to singular solutions for nonlinear elliptic equations involving critical Sobolev exponent[END_REF]. We fix η ∈ C ∞ c (R n ). Then ηw ∈ D 1,2 (R n ) for > 0 small. Arguing as in Step 1, for any θ > 0, there exists C(θ) > 0 such that
R n |∇ (ηw )| 2 dx ≤ ν β n-2 C(θ) ∇η 2 L n R n u 2 * dx n-2 n + (1 + θ) ν β n-2 R n η x -y β 2 |∇u | 2 dx. ( 18
)
Arguing as in Step 1, (ηw ) is uniformly bounded in D 1,2 (R n ), and there exists
w ∈ D 1,2 (R n ) such that upto a subsequence ηw ηw weakly in D 1,2 (R n ) as → 0 ηw (x) → ηw(x) a.e x in R n as → 0 (19) Further w ∈ C ∞ (R n ) ∩ D 1,2 (R n ), w ≥ 0 and it satisfies weakly the equation ∆w = w 2 * -1 in R n . Moreover lim →0 w = w in C 1 loc (R n
), w(0) = 1 and w > 0. This ends Step 2 and proves Lemma 1.
We let (u ) be as in Theorem 1. We will say that blowup occurs whenever u 0 weakly in H 2 1,0 (Ω) as → 0. We describe the behaviour of such a sequence of solutions (u ). By regularity, for all , u ∈ C 0 (Ω). We let x ∈ Ω and µ > 0 be such that :
u (x ) = max Ω u (x)
and µ
-n-2 2 = u (x ) (20)
The main result of this section is the following theorem:
Theorem 4.
Let Ω be a bounded smooth oriented domain of R n , n ≥ 3 , such that 0 ∈ ∂Ω, and let a ∈ C 1 (Ω) be such that the operator ∆ + a is coercive in Ω. Let (s ) >0 ∈ (0, 2) be a sequence such that lim →0 s = 0. Suppose that the sequence
(u ) >0 ∈ H 2 1,0 (Ω)
, where for each > 0, u satisfies (2) and (3), is a blowup sequence, i.e u 0 weakly in H 2 1,0 (Ω) as → 0
We let (x ) , (µ ) be as in [START_REF] Struwe | Variational methods[END_REF]. Let k be such that
k := |x | s /2 µ 2-s 2 for > 0 (21) Then lim →0 µ = lim →0 k = 0 and lim →0 d(x , ∂Ω) µ = lim →0 d(x , ∂Ω) k = +∞.
We rescale and define
v (x) := u (x + k x) u (x ) for x ∈ Ω -x k Then there exists v ∈ C ∞ (R n ) such that v = 0 and for any η ∈ C ∞ c (R n ) ηv ηv weakly in D 1,2 (R n ) as → 0 and lim →0 v = v in C 1 loc (R n ) where for x ∈ R n , v(x) = 1 + |x| 2 n(n -2) -n-2 2 and R n |∇v| 2 dx = 1 K(n, 0) 2 * 2 * -2 (22)
Moreover, up to a subsequence, as ε → 0,
µ_ε/|x_ε|^{s_ε} → 1 and k_ε/µ_ε → 1.   (23)
Proof. The proof goes through following steps.
Step 1: We claim that: µ = o(1) as → 0.
We prove our claim. Suppose lim →0 µ = 0. Then (u ) is uniformly bounded in L ∞ , and then (|x| -s u 2 (s )-1 ) is uniformly bounded in L p (Ω) for some p > n. Then from (2), the weak convergence to 0 and standard elliptic theory, we get that u → 0 in C 1 (Ω), as → 0. From ( 2) and (3), we then get that lim →0 µ s ,a (Ω) = 0 and therefore, µ a (Ω) = 0, contradicting the coercivity. This ends Step 1.
Step 2: From Lemma 1 it follows that lim
→0 |x | µ = +∞ , lim →0 d(x , ∂Ω) k = +∞. (24) and, there exist v ∈ C 1 (R n ), v > 0, such that lim →0 v = v in C 1 loc (R n ) and it satisfies ∆v = v 2 * -1 in R n . Further we have that max x∈R n v(x) = v(0) = 1. By
Caffarelli, Gidas and Spruck [START_REF] Caffarelli | Asymptotic symmetry and local behavior of semilinear elliptic equations with critical Sobolev growth[END_REF], we then have the first assertion of (22).
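Conversely, a direct computation (with the sign convention ∆ := -div(∇)) confirms that the profile in (22) solves the limit equation. Writing v(x) = (1 + b|x|²)^{-(n-2)/2} with b := 1/(n(n-2)), one has

```latex
\begin{aligned}
\partial_i v &= -(n-2)\,b\,x_i\,(1+b|x|^2)^{-\frac{n}{2}},\\
\sum_i \partial_{ii} v &= -(n-2)\,b\,(1+b|x|^2)^{-\frac{n+2}{2}}\Big[\,n(1+b|x|^2)-n\,b\,|x|^2\,\Big]
= -\,n(n-2)\,b\,(1+b|x|^2)^{-\frac{n+2}{2}},\\
\Delta v &= -\sum_i \partial_{ii} v = (1+b|x|^2)^{-\frac{n+2}{2}} = v^{\frac{n+2}{n-2}} = v^{2^*-1},
\end{aligned}
```

since n(n-2)b = 1.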
Step 3: Arguing as in the proof of [START_REF] Rey | The role of the Green's function in a nonlinear elliptic equation involving the critical Sobolev exponent[END_REF], for any θ > 0, there exists C(θ) > 0 such that for any R > 0
R n |∇(η R v )| 2 dx ≤ C(θ) B0(2R)\B0(R) (η 2R v ) 2 * dx n-2 n + (1 + θ) µ k n-2 Ω |∇u | 2 dx (25)
Now u 0 weakly in H 2 1,0 (Ω) as → 0, where for each > 0, u satisfies ( 2) and (3). So we have
Ω |∇u | 2 dx = Ω |u (x)| 2 * (s ) |x| s dx + o(1) ≤ µ s ,a (Ω) 2 * (s ) 2 * ( )-2 + o(1) as → 0.
Using Proposition 2.1, letting → 0, then R → +∞, and the θ → 0, we obtain
R n |∇v| 2 dx ≤ lim sup →0 µ |x | s n-2 2 µ a (Ω) 2 * 2 * -2 (26)
From (24) we get lim sup Proof. We obtain by change of variables
→0 µ |x | s ≤ 1. Since µ a (Ω) ≤ 1 K(n,0) (see Aubin [2]), we get R n |∇v| 2 dx ≤ µ a (Ω) 2 * 2 * -2 ≤ 1 K(n, 0) 2 * 2 * -2 Since v ∈ D 1,2 (R n ) satisfies ∆v = v 2 * -1 ,
Ω\Bx (Rk ) |u (x)| 2 * (s ) |x| s dx = Ω |u (x)| 2 * (s ) |x| s dx - Bx (Rk ) |u (x)| 2 * (s ) |x| s dx = Ω |u (x)| 2 * (s ) |x| s dx - k n µ n-s B0(R) |v (x)| 2 * (s ) |x + k x| s dx = Ω |u (x)| 2 * (s ) |x| s dx - |x | s µ s n-2 2 B0(R) |v (x)| 2 * (s ) x |x | + k |x | x s dx
Letting → 0 and then R → +∞ one obtains the proposition using Theorem 4.
Refined Blowup Analysis I
In this section we obtain pointwise bounds on the blowup sequence (u ) >0 that will be used in next section to get the optimal bound.
Theorem 5. With the same hypothesis as in Theorem 4, we have that there exists a constant C > 0 such that for > 0
|x -x | n-2 2 u (x) + |x -x | n 2 d(x, ∂Ω) u (x) ≤ C for all x ∈ Ω.
Moreover,
lim R→+∞ lim →0 sup x∈Ω\Bx (Rk ) |x -x | n-2 2 u (x) = 0
The proof of Theorem 5 comprises the three propositions proved below.
Proposition 4.1. With the same hypothesis as in Theorem 4, we have that there exists a constant C > 0 such that for > 0
|x -x | n-2 2 u (x) ≤ C for all x ∈ Ω
Proof. We argue by contradiction and let y ∈ Ω be such that Step 3: It follows from (32) and the definitions ( 27) and (28) that for any R > 0 one has for > 0 sufficiently small u (x) ≤ 2u (y ) for all x ∈ B y (Rl ). Therefore hypothesis [START_REF] Ghoussoub | Hardy-Sobolev critical elliptic equations with boundary singularities[END_REF] of Lemma 1 is satisfied and one has lim
|y -x | n-2 2 u (y ) = sup x∈Ω |x -x | n-2 2 u (x) → +∞ as → 0. ( 27
→0 d (y , ∂Ω) l = +∞. ( 35
)
We let for > 0
w (x) = u (y + l x) u (y ) for x ∈ Ω -y l .
From Lemma 1 it follows that lim
→0 w = w in C 1 loc (R n ) where w ∈ C ∞ (R n ) ∩ D 1,2 (R n
) is such that ∆w = w 2 * -1 in R n w ≥ 0 and w(0) = 1. We obtain by a change of variable for R > 0 and > 0
B0(R) |w (x)| 2 * (s ) y |y | + l |y | x s dx = λ s |y | s n-2 2
By (Rl )
|u (x)| 2 * (s )
|x| s dx
Passing to the limit as → 0, we have for R > 0
B0(R) w 2 * dx ≤ lim sup →0 By (Rl ) |u (x)| 2 * (s )
|x| s dx and so
R n w 2 * dx = lim R→+∞ B0(R) w 2 * dx ≤ lim R→+∞ lim sup →0 By (Rl ) |u (x)| 2 * (s ) |x| s dx
Now for any R > 0, we claim that B x (Rk ) ∩ B y (Rl ) = ∅ for > 0 sufficiently small. We argue by contardiction and we assume that the intersection is nonempty, which yields y -x = O(k + l ) as → 0, up to extraction. It then follows from (32) that y -x = O(k ) as → 0, and then |y - 1) with ( 20) and (23). This contradicts (28) and proves the claim. Then by Proposition 3.1
x | n-2 2 u (y ) = O(k n-2 2 u (y )) = O(µ n-2 2 u (x )) = O(
R n w 2 * dx ≤ lim R→+∞ lim sup →0 Ω\By (Rl ) |u (x)| 2 * (s ) |x| s dx = 0
A contradiction since w(0) = 1. Hence (28) does not hold. This completes the proof of Proposition 4.1.
Having obtained the strong bound in Proposition 4.1 we show that Proposition 4.2. With the same hypothesis as in Theorem 4 we have that there exists a constant C > 0 such that for > 0
|x -x | n/2 |∇u (x)| ≤ C and |x -x | n/2 u (x) ≤ Cd(x, ∂Ω) for all x ∈ Ω
Proof. We proceed by contradiction and assume that there exists a sequence of points (y ) >0 in Ω such that
|y -x | n/2 |∇u (y )| + |y -x | n/2 u (y ) d(y , ∂Ω) -→ +∞ as → 0 (36)
We define lim →0 x = x 0 ∈ Ω and lim →0 y = y 0 ∈ Ω.
Case 1: we assume that x_0 ≠ y_0. We choose δ > 0 such that 0 < 4δ < |x_0 - y_0|. Then one has that δ < |x - x_ε| for all x ∈ B_{y_0}(2δ) ∩ Ω and Lemma 4.1 then gives us that there exists a constant C(δ) > 0 such that 0 ≤ u_ε ≤ C(δ) in B_{y_0}(2δ). Then from equation (2) and standard elliptic theory, u_ε is bounded in C¹(B_{y_0}(δ) ∩ Ω). So there exists a constant C > 0 such that |∇u_ε(x)| ≤ C and u_ε(x) ≤ C d(x, ∂Ω) for all x ∈ B_{y_0}(δ) ∩ Ω. This contradicts (36). The proposition is proved in Case 1. Coming back to the definition of ũ_ε, this contradicts (36). This ends Case 2.1.
Case 2.2:
We assume that upto a subsequence
d(x , ∂Ω) ≤ 2 |y -x |
Let T : U → V be a parametrisation of the boundary ∂Ω as in [START_REF] Druet | The Lin-Ni's problem for mean convex domains[END_REF] around the point p = x 0 . Let z ∈ ∂Ω be such that |z -x | = d(x , ∂Ω) for > 0. We let x , z ∈ U be such that T (x ) = x and T (z ) = z . Then it follows from the properties of the boundary chart T , that lim →0 x = 0 = lim →0 z , (x ) 1 < 0 and (z ) 1 = 0. For all > 0, we let
ũ (x) = α n-2 2 u • T (z + α x) for x ∈ U -z α ∩ {x 1 ≤ 0}
For any R > 0, ũ is defined in B 0 (R) ∩ {x 1 ≤ 0} for > 0 small enough. Using lemma Lemma 4.1 and the properties of the chart T , one obtains that there exists a constant C > 0 such that
|ρ -x| n-2 2 ũ (x) ≤ C for x ∈ B 0 (R) ∩ {x 1 ≤ 0}
where ρ = x -z α and there exists ρ 0 ∈ R -such that ρ → ρ 0 as → 0. Arguing again as in Step 1.3 of the proof of Lemma 1, standard elliptic theory yields
ũ C 1 (B0(R/2)\Bρ 0 (2δ)∩{x1≤0}) = O(1)
as → 0 and ũ vanishes on the boundary B 0 (R/2) \ B ρ0 (2δ) ∩ {x 1 = 0}. Let ỹ ∈ U be such that T (ỹ ) = y . It then follows that, as → 0
∇ũ ỹ -z α = O(1), ũ ỹ -z α = O(1)
and since ũ vanishes on the boundary A contradiction to (38), proving our claim. We note that then there exists c 2 > 0 such that for > 0 small
B 0 (R/2) \ B ρ0 (2δ) ∩ {x 1 = 0}, it follows that 0 ≤ ũ ỹ -z α = O (ỹ -z ) 1 α = O (ỹ ) 1 α = O d(y ,
|y -x | l = |y -x | λ λ s /2 |y | s /2 ≥ c 2 (41)
Arguing as in case 2.2 of Lemma 4.1 we see that we cannot have lim Let ρ 0 > 0 be such that upto a subsequence d(y , ∂Ω) l ≥ 2ρ 0 . Without loss of generality we can take 2ρ 0 < c 2 . Then proceeding as in step 3 of Lemma 4.1 we arrive at a contradiction. These steps complete the proof of Proposition 4.3.
Refined Blowup Analysis II
This section is devoted to the proof of Theorem 1.
Proof.
Step 1: We claim that for any α ∈ (0, n -2), there exists
C α > 0 such that for all > 0 |x -x | α µ n-2 2 -α u (x) ≤ C α for all x ∈ Ω (42)
Proof. Since the operator ∆ + a is coercive on Ω and a ∈ C(Ω), there exists U 0 ⊂ R n an open set such that Ω ⊂⊂ U 0 , and there exists
a 1 > 0, A 1 > 0 such that U0 |∇ϕ| 2 dx + U0 (a -a 1 ) ϕ 2 dx ≥ A 1 U0 ϕ 2 dx for all ϕ ∈ C ∞ c (U 0 ),
where we have continuously extended a to U 0 . In other words the operator ∆+(a-a 1 ) is coercive on U 0 . Let G : U 0 × U 0 \ {(x, x) : x ∈ U 0 } -→ R be the Green's function of the operator ∆ + (a -a 1 ) with Dirichlet boundary conditions. The G satisfies
∆ G(x, •) + (a -a 1 ) G(x, •) = δ x (43)
Since the operator ∆ + (a -a 1 ) is coercive on U 0 , G exists. See Robert [START_REF] Robert | Existence et asymptotiques optimales des fonctions de Green des opérateurs elliptiques d'ordre deux (personal notes[END_REF]. We set G (x) = G(x , x) for all x ∈ U 0 \{x } and > 0. Then there exists C > 0 such that
0 < G (x) < C |x -x | n-2 for x ∈ U 0 \{x }.
Moreover there exists δ 0 > 0 and C 0 > 0 such that for all > 0
G (x) ≥ C 0 |x -x | n-2 and |∇ G (x)| | G (x)| ≥ C 0 |x -x | for x ∈ B x (δ 0 )\{x } ⊂⊂ U 0 (44)
We define the operator
L = ∆ + a - u 2 * (s )-2 |x| s
Step 1.1: We claim that there exists ν 0 ∈ (0, 1) such that given any ν ∈ (0, ν 0 ) there exists R 1 > 0 such that for R > R 1 and > 0 sufficiently small we have
L G1-ν > 0 in Ω\B x (Rk ) (45)
We prove the claim. We choose ν 0 ∈ (0, 1) such that for any ν ∈ (0, ν 0 ) one has ν (a -a 1 ) ≥ -a1 2 in Ω. Fix ν ∈ (0, ν 0 ). Using (43) we obtain for > 0 sufficiently small
L G1-ν G1-ν =a 1 + ν(a -a 1 ) + ν(1 -ν) |∇ G | 2 | G | 2 - u 2 * (s )-2 |x| s in Ω\{x } ≥ a 1 2 + ν(1 -ν) |∇ G | 2 | G | 2 - u 2 * (s )-2 |x| s in Ω\{x } Let |x -x | ≥ δ 0 ,
|x| s = 0 in C(Ω\B x (δ 0 ))
Hence for > 0 sufficiently small we have for ν ∈ (0, ν 0 )
L G1-ν G1-ν > 0 for x ∈ Ω\B x (δ 0 )
By strong pointwise estimates, Proposition 4.3 we have that, given any ν ∈ (0, ν 0 ), there exists
R 1 > 0 such that for any R > R 1 sup Ω\Bx (Rk ) |x -x | n-2 2 u (x) ≤ ν(1 -ν) 4 C 2 0 n-2 4
Here C 0 is as in (44). And then using Lemma 4.2 we obtain for > 0 small
u 2 * (s )-2 |x| s = u 2 * (s )-2-s u |x| s ≤ ν(1 -ν) 2 C 2 0 |x -x | 2 for all x ∈ Ω\B x (Rk ). Therefore if x ∈ B x (δ 0 )\B x (Rk ) then with (44) we get L G1-ν G1-ν ≥ a 1 2 + ν(1 -ν) 2 C 2 0 |x -x | 2 > 0
for > 0 small. This proves the claim and ends Step 1.1.
Step 1.2: Let ν ∈ (0, ν 0 ) and R > R 1 . We claim that there exists C(R) > 0 such that for > 0 small
L C(R)µ n-2 2 -ν(n-2) G1-ν > L u in Ω\B x (Rk ) C(R)µ n-2 2 -ν(n-2) G1-ν > u on ∂ (Ω\B x (Rk )) (46)
We prove the claim. Since L u = 0 in Ω, so it follows from (45 44) and ( 23), we obtain for > 0 small
) that L C(R)µ n-2 2 -ν(n-2) G1-ν > L u in Ω\B x (Rk ) for R > R 1 and > 0 sufficiently small. With (
u (x) µ n-2 2 -ν(n-2) G1-ν (x) ≤ µ -n-2 2 µ n-2 2 -ν(n-2) (Rk ) (n-2)(1-ν) C 1-ν 0 ≤ (2R) (n-2)(1-ν) C 1-ν 0 for x ∈ Ω ∩ ∂B x (Rk ). So for x ∈ ∂ (Ω\B x (Rk )) one has for > 0 small u (x) µ n-2 2 -ν(n-2) G1-γ (x) ≤ C(R) for x ∈ Ω ∩ ∂B x (Rk )
This proves the claim and ends Step 1.2.
Step 1.3: Let ν ∈ (0, ν 0 ) and R > R 1 . Since G1-ν > 0 in Ω\B x (Rk ) and L G1-ν > 0 in Ω\B x (Rk ), it follows from [START_REF] Berestycki | The principal eigenvalue and maximum principle for second-order elliptic operators in general domains[END_REF] that the operator L satisfies the comparison principle. Then from (46) we have that for > 0 small
u (x) ≤ C(R)µ n-2 2 -ν(n-2) G1-ν (x) for x ∈ Ω\B x (Rk )
Then with (44) we get that
|x -x | (n-2)(1-ν) u (x) ≤ C(R)µ n-2 2 -ν(n-2) for x ∈ Ω\B x (Rk ) Taking α = (n -2)(1 -ν), we have for α close to n -2 |x -x | α µ n-2 2 -α u (x) ≤ C α for x ∈ Ω\B x (Rk ).
As easily checked, this implies (42) for all α ∈ (0, n -2). This ends Step 1.3 and also Step 1.
Next we show that one can in fact take α = n - 2 in (42).
Step 2: We claim that there exists C > 0 such that for all > 0
|x -x | n-2 u (x ) u (x) ≤ C for all x ∈ Ω (47)
Proof. The claim is equivalent to proving that for any (y ) ∈ Ω, we have that
|y -x | n-2 u (x ) u (y ) = O(1)
as → 0
We have the following two cases.
Step 2.1:
Suppose that |x -y | = O(µ ) as → 0. By definition (20) it follows that |y -x | n-2 u (x ) u (y ) ≤ |y -x | n-2 µ 2-n
. This proves (47) in this case and ends Step 2.1.
Step 2.2: Suppose that
lim →0 |x -y | µ = +∞ as → 0 (48) We let for > 0 v (x) = µ n-2 2 u (µ x + x ) for x ∈ Ω -x µ
Then from (42), it follows that for any α ∈ (0, n -2), there exists
C α > 0 such that for all > 0 v (x) ≤ C α 1 + |x| α for x ∈ Ω -x µ
Let G be the Green's function of ∆ + a with Dirichlet boundary conditions. Green's representation formula and standard estimates on the Green's function yield
u (y ) = Ω G(x, y ) u 2 * (s )-1 (x) |x| s dx ≤ C Ω 1 |x -y | n-2 u 2 * (s )-1 (x) |x| s dx for all > 0
where C > 0 is a constant. We write the above integral as follows
u (y ) ≤ C Ω u (x) |x| s 1 |x -y | n-2 u (x) 2 * (s )-1-s dx for all > 0
Using Hölder inequality and then by Hardy inequality [START_REF] Druet | Elliptic equations with critical Sobolev exponents in dimension 3[END_REF] we get that for > 0
u (y ) ≤C Ω |u (x)| 2 |x| 2 dx s /2 Ω 1 |x -y | n-2 2 2-s u (x) (2 * (s )-1-s ) 2 2-s dx 2-s 2 ≤C 2 n -2 2 Ω |∇u | 2 dx s /2 Ω 1 |x -y | n-2 2 2-s u (x) (2 * (s )-1-s ) 2 2-s dx 2-s 2
Since (u ) >0 is bounded in H 2 1,0 (Ω),there exists C > 0 such that for > 0 small u (y )
2 2-s ≤ C Ω 1 |x -y | 2(n-2) 2-s u (x) (2 * (s )-1-s ) 2 2-s dx
With a change of variables the above integral becomes
u (y ) 2 2-s ≤ C µ n µ n-2 2-s (2 * (s )-1-s ) Ω-x µ 1 |y -x -µ x| 2(n-2) 2-s v (x) (2 * (s )-1-s ) 2 2-s dx
And so we get that for > 0 small
µ -n-2 2 u (y ) 2 2-s ≤ C Ω-x µ ∩{|y -x -µ x|≥ |y -x | 2 } 1 |y -x -µ x| 2(n-2) 2-s v (x) (2 * (s )-1-s ) 2 2-s dx + C Ω-x µ ∩{|y -x -µ x|≤ |y -x | 2 } 1 |y -x -µ x| 2(n-2) 2-s v (x) (2 * (s )-1-s ) 2 2-s dx (49)
We estimate the above two integrals separately. First we have for > 0 small and α close to n -2
Ω-x µ ∩{|y -x -µ x|≥ |y -x | 2 } 1 |y -x -µ x| 2(n-2) 2-s v (x) (2 * (s )-1-s ) 2 2-s dx (50) ≤ 2 2(n-2) 2-s |y -x | 2(n-2) 2-s Ω-x µ v (x) (2 * (s )-1-s ) 2 2-s dx = O 1 |y -x | 2(n-2) 2-s
as → 0. On the other hand for > 0 small
Ω-x µ ∩{|y -x -µ x|≤ |y -x | 2 } 1 |y -x -µ x| 2(n-2) 2-s v (x) (2 * (s )-1-s ) 2 2-s dx ≤ C α Ω-x µ ∩{|y -x -µ x|≤ |y -x | 2 } 1 |y -x -µ x| 2(n-2) 2-s 1 |x| (2 * (s )-1-s ) 2α 2-s dx ≤ C α 2µ |y -x | (2 * (s )-1-s ) 2α 2-s {|y -x -µ x|≤ |y -x | 2 } 1 |y -x -µ x| 2(n-2) 2-s dx ≤ C α µ |y -x | (2 * (s )-1-s ) 2α 2-s -n 1 |y -x | n-2 2 2-s
Taking α close to (n -2), and using (48), we obtain for sufficiently small (51)
Ω-x µ ∩{|y -x -µ x|≤ |y -x | 2 } v2 * (s )-1 (x) |y -x -µ x| n-2 dx = o 1 |y -x | n-2 2 2-s
as → 0. Combining (49), ( 50) and (51) we obtain that
µ -n-2 2 u (y ) 2 2-s ≤ O 1 |y -x | 2(n-2) 2-s as → 0
This proves (47) and ends Step 2.2 and then Step 2.
Step 3: The estimate (47) and the definition (20) of µ yield Theorem 1.
Localizing the Singularity: The Interior Blow-up Case
In this section we prove Theorem 2. We assume that
x 0 ∈ Ω.
The proof goes through four steps. We first recall the Pohozaev identity. Let U be a bounded smooth domain in R n , let p 0 ∈ R n be a point and let u ∈ C 2 (U ). We have
∫_U [ (x - p_0, ∇u) + ((n-2)/2) u ] ∆u dx = ∫_{∂U} [ (x - p_0, ν) |∇u|²/2 - ( (x - p_0, ∇u) + ((n-2)/2) u ) ∂_ν u ] dσ   (52)
here ν is the outer normal to the boundary ∂U. Using the above Pohozaev identity we obtain the following identity for the Hardy-Sobolev equation: Let U_ε be a family of smooth domains such that x_ε ∈ U_ε ⊂ Ω for all ε > 0. One has for all ε > 0
∫_{U_ε} [ a + (x - x_ε, ∇a)/2 ] u_ε² dx - ( s_ε(n-2)/(2(n - s_ε)) ) ∫_{U_ε} ( u_ε^{2*(s_ε)}/|x|^{s_ε} ) ( (x, x_ε)/|x|² ) dx = ∫_{∂U_ε} (x - x_ε, ν) [ |∇u_ε|²/2 + a u_ε²/2 - (1/2*(s_ε)) u_ε^{2*(s_ε)}/|x|^{s_ε} ] dσ - ∫_{∂U_ε} [ (x - x_ε, ∇u_ε) + ((n-2)/2) u_ε ] ∂_ν u_ε dσ   (53)
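The s_ε-term on the left-hand side of (53) comes from testing the nonlinearity against (x - x_ε, ∇u_ε): since (x - x_ε, ∇u_ε) u_ε^{2*(s_ε)-1} = (x - x_ε, ∇(u_ε^{2*(s_ε)}))/2*(s_ε), an integration by parts gives

```latex
\int_{U_\epsilon} (x-x_\epsilon,\nabla u_\epsilon)\,\frac{u_\epsilon^{2^*(s_\epsilon)-1}}{|x|^{s_\epsilon}}\,dx
= -\frac{1}{2^*(s_\epsilon)}\int_{U_\epsilon}\mathrm{div}\!\Big(\frac{x-x_\epsilon}{|x|^{s_\epsilon}}\Big)\,u_\epsilon^{2^*(s_\epsilon)}\,dx
+\frac{1}{2^*(s_\epsilon)}\int_{\partial U_\epsilon}(x-x_\epsilon,\nu)\,\frac{u_\epsilon^{2^*(s_\epsilon)}}{|x|^{s_\epsilon}}\,d\sigma,
```

and since div((x - x_ε)/|x|^{s_ε}) = (n - s_ε)/|x|^{s_ε} + s_ε (x, x_ε)/|x|^{s_ε+2} with (n - s_ε)/2*(s_ε) = (n-2)/2, the first term cancels the contribution (n-2)/2 ∫_{U_ε} u_ε^{2*(s_ε)}/|x|^{s_ε} dx obtained when testing the equation against (n-2)/2 u_ε, and what remains on the left is exactly the second integral in (53).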
Since x 0 ∈ Ω, let δ > 0 be such that B x0 (3δ) ⊂ Ω. Note that then lim
→0 |x | s = 1,
and it follows from (23) that lim →0 µ s = 1. We will estimate each of the terms in the above Pohozaev identity and calculate the limit as → and δ → 0. It will depend on the dimension n.
Step 1: We prove the following convergence outside x 0 : Proposition 6.1. We have that µ
-n-2
This proves the claim. From (2), we get that ∆(µ
-n-2 2 u ) + a(x)(µ -n-2 2 u ) =µ 2-s (µ -n-2 2 u ) 2 * (s )-1 |x| s in Ω µ -n-2 2 u =0 on ∂Ω.
It follows from the pointwise estimate of Theorem 1 that µ
-n-2
2 u is uniformly bounded in L ∞ loc (Ω \ {x 0 }). It then follows from standard elliptic theory that the limit (54) holds in C 1 loc (Ω \ {x 0 }). This completes the proof of Proposition 6.1.
Step 2: Next we show that lim
→0 Bx (δ) u 2 * (s ) |x| s (x, x ) |x| 2 dx = 1 K(n, 0) 2 * 2 * -2 . ( 55
)
Proof. Recall our definition of v in Theorem 4. With a change of variable we have
Bx (δ) u 2 * (s ) |x| s (x, x ) |x| 2 dx = |x | s µ s n-2 2 B0(δ/k ) (x + k x, x ) |x + k x| 2 v (x) 2 * (s ) x |x | + k |x | x s dx
Passing to limits, and using Theorems 4 and 1 we obtain by Lebesgue dominated convergence theorem lim
→0 Bx (δ) u 2 * (s ) |x| s (x, x ) |x| 2 dx = R n v 2 * dx = 1 K(n, 0) 2 * 2 * -2 .
This proves (55) and ends Step 2.
Step 3: We define a (x) := a(x) + 1 2 (x -x , ∇a) for x ∈ Ω. We claim that
Bx (δ) a u 2 dx = O(δµ ) for n = 3 or a ≡ 0, µ 2 log 1 k [64ω 3 a(x 0 ) + o(1)] for n = 4, µ 2 [d n a(x 0 ) + o(1)]
for n ≥ 5.
(56) as → 0, where d n is as in (4).
Proof. We divide the proof in three steps. Case 3.1: We assume that n ≥ 5. Recall our definition of v in Theorem 4. With a change of variable we obtain Case 3.3: we assume that n = 3. It follows from Theorem 1 that there exists C > 0 such that µ -1/2 u (x) ≤ C|x -x | -1 for all > 0 and x ∈ Ω. Therefore
µ -2 Bx (δ) a u 2 dx = k µ 4 B0(δ/k ) a (x + k x)v 2 dx.
Bx (δ) a u 2 dx = O(µ ) Bx (δ)
|x| -2 dx = O(δµ ) as → 0.
Step 3: We prove Theorem 2 for n ≥ 4. From the Pohozaev identity (53) we have
µ -2 Bx (δ) a + (x -x , ∇a) 2 u 2 dx -µ -2 s (n -2) 2(n -s ) Bx (δ) u 2 * (s ) |x| s (x, x ) |x| 2 dx =µ n-4 ∂Bx (δ) (x -x , ν) |∇(µ -n-2 2 u )| 2 2 + a 2 (µ -n-2 2 u ) 2 - µ 2-s 2 * (s ) (µ -n-2 2 u ) 2 * (s ) |x| s dσ -µ n-4 ∂Bx (δ) (x -x , ∇(µ -n-2 2 u )) + n -2 2 (µ -n-2 2 u ) ∂ ν (µ -n-2 2 u ) dσ (57)
Passing to the limits as → 0 in (57), using (55), (56) and Theorem 6.1, we get Theorem 2 when n ≥ 4.
Step 4: We now deal with the case of dimension n = 3. Recall from the introduction that we write the Green's function G as G_x(y) = 1/(4π|x - y|) + g_x(y) for all x, y ∈ Ω, x ≠ y, with g_x ∈ C²(Ω \ {x}) ∩ C^{0,θ}(Ω) for some 0 < θ < 1. In particular, when n = 3 or a ≡ 0, g_x(x) is defined for all x ∈ Ω and is called the mass of the operator ∆ + a. For any x ∈ Ω, g_x satisfies the equation ∆g_x + a g_x = -a/(4π|x - y|) in Ω \ {x} and g_x(y) = -1/(ω_2|x - y|) on ∂Ω.
Note that for any x ∈ Ω,
lim_{r→0} sup_{y∈∂B_x(r)} |y - x| |∇g_x(y)| = 0   (58)
The proof goes as in Hebey-Robert [START_REF] Hebey | Asymptotic analysis for fourth order Paneitz equations with critical growth[END_REF]. We omit it here. From the Pohozaev identity (53), multiplying both the sides by µ -1 we obtain
Bx (δ) a + (x -x , ∇a) 2 (µ -1/2 u ) 2 dx - s 2µ (3 -s ) Bx (δ) u 2 * (s ) |x| s (x, x ) |x| 2 dx = ∂Bx (δ) (x -x , ν) |∇(µ -1/2 u )| 2 2 + a (µ -1/2 u ) 2 2 - µ 2-s 2 * (s ) (µ -1/2 u ) 2 * (s ) |x| s dσ - ∂Bx (δ) (x -x , ∇(µ -1/2 u )) + n -2 2 (µ -1/2 u ) ∂ ν (µ -1/2 u ) dσ (59)
It follows from Proposition 6.1 that lim
→0 ∂Bx (δ) (x -x , ν) |∇(µ -1/2 u )| 2 2 + a (µ -1/2 u ) 2 2 - µ 2-s 2 * (s ) (µ -1/2 u ) 2 * (s ) |x| s dσ -lim →0 ∂Bx (δ) (x -x , ∇(µ -1/2 u )) + n -2 2 (µ -1/2 u ) ∂ ν (µ -1/2 u ) dσ = b 2 3 ∂Bx 0 (δ) δ |∇G x0 | 2 2 + δa 2 (G x0 ) 2 - (x -x 0 , ∇G x0 ) 2 δ - n -2 2 (x -x 0 , ∇G x0 ) δ G x0 dσ
Using (58), we get that the right-hand-side goes to b 2 3 2 g x0 (x 0 ) as δ → 0. Putting this identity, (56) when n = 3, and (55) in (59) , we get Theorem 2 in the case n = 3. The proof is similar when a ≡ 0.
Localizing the Singularity: The Boundary Blow-up Case
This section is devoted to the proof of Theorem 3.
7.1. Convergence to Singular Harmonic Functions. Here, G is still the Green's function of the coercive operator ∆ + a in Ω with Dirichlet boundary conditions. The following result for the asymptotic analysis of the Green's function is in the spirit of Proposition 5 of [START_REF] Robert | Existence et asymptotiques optimales des fonctions de Green des opérateurs elliptiques d'ordre deux (personal notes[END_REF] and Proposition 12 of [START_REF] Druet | The Lin-Ni's problem for mean convex domains[END_REF].
Theorem 6 ( [START_REF] Druet | The Lin-Ni's problem for mean convex domains[END_REF][START_REF] Robert | Existence et asymptotiques optimales des fonctions de Green des opérateurs elliptiques d'ordre deux (personal notes[END_REF]). Let (x ) >0 ∈ Ω and let (r ) >0 ∈ (0, +∞) be such that lim →0 r = 0.
(1) Assume that lim →0 d(x ,∂Ω) r = +∞. Then for all x, y ∈ R n , x = y, we have that
lim →0 r n-2 G(x + r x, x + r y) = 1 (n -2)ω n-1 |x -y| n-2
where ω n-1 is the area of the (n -1)-sphere. Moreover for a fixed x ∈ R n , this convergence holds uniformly in C 2 loc (R n \{x}).
(
(x ) = ((x ) 1 , x ). Then for all x, y ∈ R n ∩ {x 1 ≤ 0}, x = y, we have that lim →0 r n-2 G (T ((0, x ) + r x), T ((0, x ) + r y)) = 1 (n -2)ω n-1 |x -y| n-2 - 1 (n -2)ω n-1 |π(x) -y| n-2
where π : R n → R n defined by π((x 1 , x )) → (-x 1 , x ) is the reflection across the plane {x : x 1 = 0}. Moreover for a fixed x ∈ R n -, this convergence holds uniformly in C 2 loc (R n -\{x}).
The next proposition shows that the pointwise behaviour of the blowup sequence (u ) >0 is well approximated by bubbles. Note that the following proposition holds Proof. Since D 0 T = I R n we have: d(x , ∂Ω) = (1 + o(1)) |(x ) 1 |. Let θ be as in (60). Then we have that θ 0 = lim →0 θ = (-1, 0) ∈ R n -and π(θ 0 ) = (1, 0) ∈ R n + . We fix R > 0. ṽ is defined in B 0 (R) ∩ {x 1 ≤ 0} for > 0 small. It follows from the strong upper bounds obtained in Theorem 1 that there exists a constant C > 0 such that for > 0 small we have
0 ≤ ṽ (x) ≤ C r 2 |T ((0, x ) + r x) -x | 2 n-2 2 for x ∈ B 0 (R) ∩ {x 1 < 0} For any x ∈ B 0 (R) ∩ {x 1 ≤ 0} we get from Proposition 7.1 that as → 0 ṽ (x) = (1 + o(1)) k µ n-2 2 1 k r 2 + |T ((0,x )+r x)-x | 2 n(n-2)r 2 n-2 2 - 1 k r 2 + |T ((0,x )+r x)-π -1 T (x )| 2 n(n-2)r 2 n-2 2 (61)
Fom the properties of the boundary map T , one then gets
lim →0 ṽ (x) = (n(n -2)) n-2 2 |x -(1, 0)| n-2 - (n(n -2)) n-2 2 |x + (1, 0)| n-2 for x ∈ (B 0 (R) \ {(1, 0)}) ∩ {x 1 ≤ 0} (62)
For i, j = 1, . . . , n, we let (g ) ij (x) = (∂ i T ((0, x ) + r x) , ∂ j T ((0, x ) + r x)), the induced metric on the domain B 0 (R) ∩ {x 1 < 0}, and let ∆ g denote the Laplace-Beltrami operator with respect to the metric g. From eqn (2) it follows that given any R > 0, ṽ weakly satisfies the following equation for > 0 sufficiently small Since A is constant, this latest limit and (72) yield (69). This completes Step 1.
∆ g ṽ + r 2 (a • T ((0, x ) + r x)) ṽ = µ
Step 2: Proceeding similarly as in (55) we obtain
Bx (r /2) u 2 * (s ) |x| s (x, x ) |x| 2 dx = 1 K(n, 0) 2 * 2 * -2 + o(1)
as → 0
Step 3: Arguing as in the proof of (56), we get that
ν j |∇v| 2 2 -∂ j v ∂ ν v dσ + o(1)
where v and v are as in Step 1 above. Arguing as in Step 1 above , we get that
∂B0(1/2) ν j |∇v| 2 2 -∂ j v ∂ ν v dσ = ω n-1 (n -2)(n(n -2)) n-2 2 ∂ j h(0), ( 76
)
where h is as in (73). For j = 1, taking the explicit expression of h yields Step 4.
|∇u| 2 + au 2 dx Ω |u| 2 * (s) |x| s dx 2 / 2 *
2222 (s)
1 :
1 We claim. Suppose on the contrary that |y | ν = O(1) as → 0. Then lim →+∞ |y | = 0. Let T : U → V be a parametrisation of the boundary as in (8) at the point p = 0. For all > 0, we let w
Sobolev's inequality[START_REF] Caffarelli | Asymptotic symmetry and local behavior of semilinear elliptic equations with critical Sobolev growth[END_REF] then yields the seocnd assertion of (22). Then (26) implies lim sup →0 µ |x | s ≥ 1, which yields (23). This completes the proof of Theorem 4. As a consequence of Theorem 4, we get the following concentration of energy: Proposition 3.1. Under the hypothesis of Theorem 4 one further has that lim R→+∞ lim →0 Ω\Bx (Rk ) |u (x)| 2 * (s ) |x| s dx = 0
2 :Step 1 :Step 2 :
212 = u (y ). Then µ ≤ λ , and (28It follows from the definition (27) and (29) that given any R > 0 one has for > 0 sufficiently small u (x) ≤ 2u (y ) for all x ∈ B y (Rλ ). Therefore hypothesis[START_REF]Concentration estimates for Emden-Fowler equations with boundary singularities and critical growth[END_REF] of Lemma 1 is satisfied and one has lim We claim that (32) lim →0 |y -x | l = +∞. We prove the claim. Due to (31), the claim is clear when y = O(|y -x |) as → 0. We assume that y -x = o(|y |) as → 0. We then have that |x | |y | as → 0. Therefore, there exists c 0 > 0 such that shown in (23), it follows that there exists c 1 > 0 such that λ |y | s ≥ c 1 for > 0 small enough. Therefore,
Case 2 :Case 2 . 1 : 2 2- 2 2
22122 we assume that x 0 = y 0 . Define α = |y -x |, so that lim →0 α = 0. We assume that upto a subsequence d(x , ∂Ω) ≥ 2 |y -x | For > 0 we let ũ (x) = α nu (x + α x) for x ∈ B 0 (3/2) This is well defined since B x (2α ) ⊂ Ω. Using Lemma 4.1 one obtains that there exists a constant C > 0 such that |x| nũ (x) ≤ C for x ∈ B 0 (3/2). Arguing as in Step 1.3 of the proof of Lemma 4.1, standard elliptic theory yields ũ C 1 (B0(5/4)\B0(1/2)) = O(1) as → 0 Then one then obtains as → 0 ∇ũ y -x |y -x | = O(1) and ũ y -x |y -x | = O(1).
∂Ω) α comig back to the definition of ũ this implies that as → 0 |y -x | n/2 |∇u (y )| = O(1), and |y -x | n/2 u (y ) = O(d(x , ∂Ω)), contradicting (36). This ends Case 2.2. All these cases prove Proposition 4.2. As a consequence of Proposition 4.1 and Proposition 4.2 we get the following: Corollary 4.1. Let (u ) >0 be as in Theorem 4, and let lim →0 x → x 0 ∈ Ω, then upto a subsequence lim →0 u = 0 in C 1 loc (Ω\{x 0 }). We slightly improve our estimate in Proposition 4.1 to obtain Proposition 4.3. With the same hypothesis as in Theorem 4 we have lim Suppose on the contrary there exists 0 > 0 and a sequence of points (y ) >0 ∈ Ω such that upto a subsequence |y -contradiction and we assume that lim →0 λ s |y | s = 0. Now, using (33) and (23), we get that lim →0 |x | s |y | s = 0. And in particular one has that lim
Theorem 1 5 . 3 . 2 :
1532 reads v (x) ≤ C(1 + |x| 2 ) 1-n/2. Therefore, Lebesgue's theorem and Theorem 4 yield (56) when n ≥ Case We assume that n = 4 and we argue as in Case 3.1. With the pointwise control of Theorem 1, we get that B0(δ/k ) a (x + k x)v 2 dx. = log (δ/k ) (64ω 3 a(x 0 ) + o(1)) as → 0.
)
Assume that lim →0 d(x ,∂Ω) r = ρ ∈ [0, +∞). Then lim →0 x = x 0 ∈ ∂Ω. Let T be a parametrisation of the boundary ∂Ω as in (8) around the point p = x 0 . We write T -1
1 T 7 . 2 . 2 * 2 *-2 2 |x| n- 2 +
1722222 ((0,x )+r x) r s in B 0 (R) ∩ {x 1 < 0} ṽ = 0 on B 0 (R) ∩ {x 1 = 0} (63)Arguing as in Step 1.2 of the proof of Lemma 1, we get that the convergence of ṽ holds in C 1 loc (R n -\ {θ 0 }). This completes the proof of Proposition 7.2. Estimates on the blow up rates: The Boundary Case. Suppose that the sequence of blow up points (x ) >0 converges to a point on the boundary, i.e suppose lim→0 x = x 0 ∈ ∂Ω. We let r = d(x , ∂Ω) (64)Then lim →0 r = 0 and from[START_REF] Mazumdar | GJMS-type operators on a compact Riemannian manifold: best constants and Coron-type solutions[END_REF], we have as → 0: µ = o(r ) and k = o(r ). We apply the Pohozaev identity for the Hardy Sobolev equation (53) to the domain B x (r /2). Note that since d(x ,∂Ω) r = 1 for all > 0, so B x (r /2) ⊂⊂ Ω for > 0 small. The Pohozaev identity (53) gives usBx (r /2) a + (x -x , ∇a) 2 u 2 dx -s (n -2) 2(n -s ) Bx (r /2) u (s ) |x| s (x, x ) |x| 2 dx = ∂Bx (r /2) (x -x , ν) (s ) |x| s -(x -x , ∇u ) + n -2 2 u ∂ ν u dσ (65)With the change of variable x → x + r z we obtain v dσPassing to limit as → 0 in (72) and using (71), we get(72) µ r 2-n ∂Bx (r /2) F dσ = A(1/2) + o(1) as → v dσ.Let 0 < δ < 1/2. Since ∆v = 0 in B 0 (1/2) \ B 0 (δ), applying the Pohozaev identity (52), we see that A(δ) = A(1/2) for all 0 < δ < 1/2. We write= (n(n -2)) nh(x) for x ∈ B 0 (1) \ {0}(73) where h(x) = -(n(n-2)) n-2 2 |x+(2,0)| n-2 . With the explicit expression of v we obtain lim δ→0 A(δ) = -n n-2 (n -2) n ω n-1 2 n-1
Bx (r / 2
2 a(x 0 ) + o(1)] for n = 4, µ 2 [d n a(x 0 ) + o(1)] for n ≥ 5.where d n is as in[START_REF] Brézis | Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents[END_REF]. Combining Steps 1 to 3 in the Pohozaev identity (65) yields (66), (67) and (68).
Step 5 : 4 (x ) 1 |x | 2 K( 4 ,
54124 Arguing as in Step 2 we have Bx (r /2) in Step 3, for every 1 ≤ j ≤ n we have as → 0 Bx (r /2) ∂ j a(x) u 2 (x) dx = identity (74), (68) and these estimates, noting that r = d(x , ∂Ω) = (1 + o(1))|x ,1 |, we then obtain that d(x , ∂Ω) = (1 + o(1))|x | as → 0 when n = 3 or a ≡ 0. When n = 4, then as → 0s 0) -2 + o(1) + µ 2 r 3 (32ω 3 + o(1)) = O µ 2 log r µ .Finally, when n ≥ 5, we get as → 0s (n -2) 2n (x ) 1 |x | 2 K(n, 0) -n/2 + o(1) + r -1 µ r n-2 n n-2 (n -2) n ω n-1 2 n-1 + o(1)= O µ 2
To get extra informations, we differentiate the Pohozaev identity (53) with respect to the j th variable (x ) j and get Proof. As Step 1 above, using Proposition 7.2 we have as → 0
Bx (r /2) ∂ j a 2 u 2 dx + s (n -2) 2(n -s ) Bx (r /2) u 2 * (s ) |x| s x j |x| 2 dx =
(74) ∂Bx (r /2) ν j |∇u | 2 2 + au 2 2 - 1 2 * (s ) 2 * (s ) u |x| s -∂ j u ∂ ν u dσ
Step 4: We claim that
µ 2-n r 1-n ∂Bx (r /2) ν 1 |∇u | 2 2 + au 2 2 - 1 2 * (s ) 2 * (s ) u |x| s -∂ 1 u ∂ ν u dσ
(75) = - n n-2 (n -2) n ω n-1 2 n-1 + o(1)
µ 2-n r 1-n ∂Bx (r /2) ν j |∇u | 2 2 + au 2 2 - 1 2 * (s ) 2 * (s ) u |x| s -∂ j u ∂ ν u dσ
=
∂B0(1/2)
u -→ b n G x0 in C 1 loc (Ω \ {x 0 }) as → 0,where b n is as in (4) and G is the Green's function for ∆+a with Dirichlet condition.
author, funded by "Fédération Charles Hermite" (FR3198 du CNRS) and "Région Lorraine". The author acknowledges these two institutions for their supports.
Proof. We fix y 0 ∈ Ω such that y 0 = x 0 . We first claim that lim →0 µ -n-2 2 u (y 0 ) -→ b n G x0 (y 0 ).
We prove the claim. We choose δ ∈ (0, δ) such that |x 0 -y 0 | ≥ 3δ and |x 0 | ≥ 3δ . From Green's representation formula we have
Using the bounds on u obtained in Theorem 1 and the estimates on the Green's function G we get as → 0
Recall our definition of v in Theorem 4. With a change of variable, Theorem 4 yields
Lebesgue dominated convergence theorem, Theorems 4 and 1 then yield
with x 0 ∈ Ω, in the interior or on the boundary. We omit the proof as it goes exactly like the proof of Proposition 13 in [START_REF] Druet | The Lin-Ni's problem for mean convex domains[END_REF] .
Proposition 7.1. We set for all > 0
Suppose that the sequence (u ) >0 ∈ H 2 1,0 (Ω), where for each > 0, u satisfies (2) and (3), is a blowup sequence. We let x 0 := lim →0 x . Let (y ) >0 be a sequence of points in Ω. We have
where π T = T • π • T -1 . Here, T and π are as in Theorem 6.
Using Proposition 7.1, we derive the following when the sequence of blowup points converge to a point on the boundary Proposition 7.2. Let (u ) >0 ∈ H 2 1,0 (Ω) be such that for each > 0, u satisfies (2) and (3). We assume that u 0 weakly in H 2 1,0 (Ω) as → 0. We let x 0 := lim →0 x . Let r = d(x , ∂Ω). We assume that lim →0 r = 0. Therefore, lim →0 x = x 0 ∈ ∂Ω. Let T be a parametrisation of the boundary ∂Ω as in (8) around the point p = x 0 . We write
where
and π : R n → R n defined by π((x 1 , x )) → (-x 1 , x ) is the reflection across the plane {x :
for all > 0 small. We now estimate each of the terms in the integral above. Theorem 3 will be a consequence of the following theorem:
Let Ω, a, (s ) >0 , (u ) >0 ∈ H 2 1,0 (Ω) as in Theorem 3. Assume that (64) holds and lim
and
and
where d n is as in (4) for n ≥ 5 and d 4 = 64ω 3 .
Proof. For convenience we define
as → 0 (69) Proof. We define Plugging together these estimates and (67) and (68), we get Theorem 7. | 53,868 | [
"777508"
] | [
"221718"
] |
01465753 | en | [
"sde"
] | 2024/03/04 23:41:44 | 2016 | https://hal.science/hal-01465753/file/mo2016-pub00046768.pdf | Z Hauschild
Yan Dong
Ralph K Rosenbaum
Michael Z Hauschild
Assessment of metal toxicity in marine ecosystems -Comparative Toxicity Potentials for nine cationic metals in coastal seawater
Assessment of metal toxicity in marine ecosystems: comparative toxicity potentials for nine cationic metals in coastal seawater
Introduction
Life Cycle Assessment (LCA) "quantifies all relevant emissions and resources consumed" [START_REF]International Reference Life Cycle Data System (ILCD) Handbook -General guide for Life Cycle Assessment -Detailed guidance[END_REF] associated with a good or service in a Life Cycle Inventory (LCI) and assesses "the related environment and health impacts and resource depletion issues" 1 by Life Cycle Impact Assessment (LCIA). LCA has been broadly used to support environmentally informed decisions in policymaking, product development and procurement, and consumer choices [START_REF] Hellweg | Emerging approaches, challenges and opportunities in life cycle assessment[END_REF] . It is a valuable screening tool to facilitate identifying environmental hotspots [START_REF] Hellweg | Emerging approaches, challenges and opportunities in life cycle assessment[END_REF] . The uncertainties associated with LCA results can be high due to data and simplified modelling [START_REF] Hellweg | Emerging approaches, challenges and opportunities in life cycle assessment[END_REF] . This can be partially compensated by enhancing regional detailed modelling.
Metals are often ranked at the top of toxicity concerns in Life Cycle Assessment (LCA) [START_REF] Huijbregts | Priority assessment of toxic substances in life cycle assessment. Part I: Calculation of toxicity potentials for 181 substances with the nested multi-media fate, exposure and effects model USES-LCA[END_REF] . Large quantities of metals are released from anthropogenic resources to the natural environment (up to 3×10 [START_REF]The European Pollutant Release and Transfer Register. European industrail annual pollutant release; European Environment Agency (EEA[END_REF] tons/year for selected metals, e.g. Mn) [START_REF] Pacyna | Global budget of trace metal sources[END_REF] . Waterborne emissions contribute 50-80%, and originate mainly in industrial sectors such as iron or steel production, thermal power stations, mineral oil and gas refineries etc. [START_REF]The European Pollutant Release and Transfer Register. European industrail annual pollutant release; European Environment Agency (EEA[END_REF] Waterborne metal emissions typically reach freshwater first and move towards seawater through fluvial pathways, thus potentially causing ecotoxicity in both freshwater and marine compartments [START_REF] Chester | The transport of material to the oceans: the fluvial pathway[END_REF] . Hitherto, metal toxicity in the aquatic environment has been modelled in LCIA using models developed to simulate the behaviour of organic chemicals with poor representation of the speciation behaviour of metals and bioavailability (e.g. USES-LCA 2.0 [START_REF] Van Zelm | USES-LCA 2.0-a global nested multi-media fate, exposure, and effects model[END_REF] used in ReCiPe, IMPACT 2002+ [START_REF] Jolliet | IMPACT 2002+: a new life cycle impact assessment methodology[END_REF] ). Following the principles laid out in the Apeldoorn Declaration [START_REF] Aboussouan | Declaration of Apeldoorn on LCIA of non-ferro metals. Results of a workshop by a group of LCA specialists[END_REF] and the Clearwater Consensus 10 , Gandhi et al. [START_REF] Gandhi | New method for calculating Comparative Toxicity Potential of cationic metals in freshwater: application to copper, nickel, and zinc[END_REF][START_REF] Gandhi | Implications of considering metal bioavailability in estimates of freshwater ecotoxicity: examination of two case studies[END_REF] developed a new method to calculate the toxicity potential of six metals in freshwater ecosystems (expressed as a Comparative Toxicity Potential (CTP), also known as a Characterization Factor in LCIA), including fate, bioavailability and effect of metals. Their CTP was calculated for a number of archetypical freshwater chemistries. Dong et al. [START_REF] Dong | Development of Comparative Toxicity Potentials of 14 cationic metals in freshwater[END_REF] further adapted the method, expanding its scope of metals and calculated freshwater CTP for 14 metals. The results showed that for some metals (e.g. Al, Be, Cr(III), Cu and Fe(III)), freshwater CTP was highly dependent on the speciation of metal in a certain water chemistry, thus varying by 2-6 orders of magnitude in different water archetypes. This reveals the importance of 1) including metal speciation and bioavailability in the modelling and 2) identifying spatially determined and differentiated water chemistries.
In comparison, marine CTP of metals has received less attention. Following the Apeldoorn Declaration [START_REF] Aboussouan | Declaration of Apeldoorn on LCIA of non-ferro metals. Results of a workshop by a group of LCA specialists[END_REF] , "the oceans are deficient in essential metals, and the CTP for essential metals should be set at zero for toxicity in the oceans." [START_REF] Aboussouan | Declaration of Apeldoorn on LCIA of non-ferro metals. Results of a workshop by a group of LCA specialists[END_REF] In contrast, coastal seawater receives higher anthropogenic metal emissions not just through fluvial pathways [START_REF] Chester | The transport of material to the oceans: the fluvial pathway[END_REF] , but also from airborne emission and metals resuspended from the seabed [START_REF] Bruland | Controls on trace metals in seawater[END_REF] , leading to the observable metal concentrations in the coastal zones, and even reach the mmol/l level close to wastewater discharges [START_REF] Fu | Removal of heavy metal ions from wastewaters: a review[END_REF] . This can lead to exceeding the levels where metal becomes toxic to organisms. Not all metal forms are toxic. Only bioavailable forms, often within the truly dissolved forms, can access a sensitive receptor, the biotic-ligand, and become hazardous [START_REF] Paquin | The biotic ligand model: a historical overview[END_REF][START_REF] Sunda | Trace metal interactions with marine phytoplankton[END_REF] . In addition to metal availability, also its residence time in the coastal seawater is essential for the exposure and hence its CTP. For most metals, a substantial removal happens after entering coastal zone, where complex binding to Suspended Particulate Matter (SPM) followed by removal through sedimentation is increased [START_REF] Chester | The transport of material to the oceans: the fluvial pathway[END_REF] . The fate of a metal in coastal seawater is thus strongly influenced by its tendency to adsorb to SPM, its solubility in seawater and its complexation affinity with particulate and dissolved organic matter [START_REF] Mason | Trace metal(loid)s in marine waters[END_REF] .
Until now metal marine CTP in the previous LCIA models has either not been calculated (e.g. USEtox [START_REF] Rosenbaum | USEtox-the UNEP-SETAC toxicity model: recommended characterisation factors for human toxicity and freshwater ecotoxicity in life cycle impact assessment[END_REF] , IMPACT 2002+ 8 ) or it has been derived neglecting speciation and bioavailability, and using freshwater toxicity data (e.g. USES-LCA [START_REF] Van Zelm | USES-LCA 2.0-a global nested multi-media fate, exposure, and effects model[END_REF] ), with a questionable representativeness for saltwater organisms [START_REF] Leung | Can saltwater toxicity be predicted from freshwater data?[END_REF] . Moreover, as demonstrated by Gandhi and co-workers, metal freshwater CTP is highly sensitive to water chemistry [START_REF] Gandhi | Implications of geographic variability on Comparative Toxicity Potentials of Cu, Ni and Zn in freshwaters of Canadian ecoregions[END_REF] . While water chemistry parameters such as pH, Dissolved Organic Carbon (DOC), SPM and salinity affect the speciation of metals, different seawater residence times (SRT) in different coastal zones also play a large role when determining the fate of metal in coastal compartment [START_REF] Tankere | Mass balance of trace metals in the Adriatic Sea[END_REF][START_REF] Brodie | An assessment of residence times of land-sourced contaminants in the Great Barrier Reef lagoon and the implications for management and reef recovery[END_REF] . So far, no study has given a coherent treatment of the global spatial variability of metal marine CTP, considering speciation and applying toxicity data for marine organisms. As a consequence, toxic impacts on the marine ecosystem were either not at all considered in LCA studies or they were assessed with methods of limited reliability based on questionable assumptions. These shortcomings and the strives for a coherent consideration of marine biodiversity in LCA studies set the objectives of this study.
Aiming for consistency with the methodology developed for characterizing metal toxicity in freshwater [START_REF] Gandhi | New method for calculating Comparative Toxicity Potential of cationic metals in freshwater: application to copper, nickel, and zinc[END_REF][START_REF] Dong | Development of Comparative Toxicity Potentials of 14 cationic metals in freshwater[END_REF] and applying marine ecotoxicity data availability in ECOTOX database [START_REF]ECOTOX database[END_REF] , the objective of this paper is to develop new, spatially differentiated and globally applicable marine CTPs for Cadmium (Cd), Cobalt (Co), Chromium(III) (Cr), Copper(II) (Cu), Iron(III) (Fe), Manganese(II) (Mn), Nickel (Ni), Lead (Pb) and Zinc (Zn), taking metal speciation and bioavailability into account, and investigating their variation over 64 Large Marine Ecosystems (LMEs) for emissions received in coastal seawater all over the world.
Methods
General framework
For metals, CTP i for ecosystems is expressed as the Potentially Affected Fraction of species integrated over time and space [(PAF)•day•m 3 /kg emitted ], representing the ecotoxicity potential for the total metal in compartment i. It is calculated as the product of three factors: Fate Factor (FF), Bioavailability Factor (BF) and Effect Factor (EF) as presented in Eq. 1 [START_REF] Gandhi | New method for calculating Comparative Toxicity Potential of cationic metals in freshwater: application to copper, nickel, and zinc[END_REF] .
CTP i = FF i • BF i • EF i (Eq.1)
Where: This framework can be used for any single environmental compartment (e.g. freshwater, soil).
FF i : Fate Factor [day],
When considering a multi-compartment system, the terms of eq.1 become matrices, which besides residence times also include inter-compartmental transfers [START_REF] Rosenbaum | A flexible matrix algebra framework for the multimedia multipathway modeling of emission to impacts[END_REF] . In this paper we focus on metals received from adjacent environmental compartments or directly emitted into the coastal seawater compartment. Therefore, FF represents the persistence of the metal in coastal seawater, while BF and EF represent bioavailability and metal ecotoxicity effects in coastal seawater respectively. FF is modelled for the total metal rather than dissolved metal, due to the fact that this is the entity which is reported in LCI and that the metal in the water may re-partition between particulate and dissolved forms during transportation. Note that the partitioning pattern can vary over time and with local environmental conditions. This can have an impact on the FF of metals. For the purpose of LCA temporal variations need to be averaged over a year to be compatible to the information in the life cycle inventory.
Spatial differentiation of environmental conditions and parameters
To explore the spatial variability of CTP in coastal seawater, we worked with the LMEs following Cosme et al. [START_REF] Cosme | method and characterisation and normalisation factors for ecosystem impacts of eutrophying emissions: phase 3 (report, model and factors)[END_REF] . The coastal compartment that is represented by a LME covers the marine area from the coastal line to the seaward boundary of the continental shelf and includes any estuaries. Thus defined, the coastal compartment with its adjacency to the continents receives emissions related to human activity through the influx of continental freshwater or direct discharges to the sea. 80%-90% of marine net primary production occurs in this compartment, which thus comprises the majority of species and biomass that potentially may be affected by metal emissions [START_REF] Chen | Continental margin exchanges[END_REF] . The global coastal seawater zone was divided into 64 LMEs according to "distinct bathymetry (seabed topography), hydrography, productivity and trophically dependent populations" [START_REF] Sherman | The Large Marine Ecosystem concept: research and management strategy for living marine resources[END_REF] , where each LME represents a relatively independent coastal zone. Data on SRT, seawater surface area, temperature and water chemistry were collected for each LME from literature (Table S1
Bioavailability model
BF, Kp SS, and K DOC all represent ratios between different metal species in coastal seawaters. They are thus dependent on the metal speciation in each LME. In the modelling of this speciation behaviour, we assumed that metals remained at their background concentration (Table S4 in SI) in coastal seawater before the emission. BF, Kp SS, and K DOC were then calculated for each LME with its specific water chemistry. This assumption is based on the fact that LCA assesses impacts caused by marginal changes. Nevertheless, a sensitivity analysis of the dependence of BF, Kp SS, and K DOC on background concentration change is performed in section 3.4.5.
WHAM VII [START_REF] Tipping | Humic Ion-Binding Model VII: a revised parameterisation of cation-binding by humic substances[END_REF] was used to calculate metal speciation in seawater. While originally developed for freshwater, its applicability for prediction of metal free ion activity in seawater has been validated [START_REF] Stockdale | Trace metals in the open oceans: speciation modelling based on humic-type ligands[END_REF] .
Furthermore it contains data and has a good reputation for simulating metal binding to DOC, POC, Fe oxide and Mn oxide. These two criteria favoured the choice of WHAM VII over other speciation models (e.g. Visual Minteq 37 , MINEQL+ [START_REF] Schecher | Environmental Research Software[END_REF] , PHREEQC [START_REF] Parkhurst | Description of input and examples for PHREEQC version 3-A computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations[END_REF] ).
Ecotoxicity model
Currently there are two main ecotoxicity models to explain how cationic metals cause toxicity in organisms. The Free Ion Activity Model (FIAM) assumes that the toxic compound is free metal ion represented by its activity. The Biotic Ligand Model (BLM) further includes the competition between free metal ion and other cations (e.g. Ca 2+ , H + ) for binding to biotic ligand -the receptor in the target organism where the metal binds to exert its uptake and/or toxicity. Due to lack of BLMs for metals in seawater, FIAM was chosen in this study. It has been validated to assess metal toxicity to marine organisms in saltwater [START_REF] Lorenzo | Effect of humic acids on speciation and toxicity of copper to Paracentrotus lividus larvae in seawater[END_REF][START_REF] Sunda | The relationship between free cupric ion activity and the toxicity of copper to phytoplankton[END_REF] [START_REF] Payet | Assessing toxic impacts on aquatic ecosystems in LCA[END_REF] . The purpose of LCA is to compare alternatives, where robustness is highly required.
Therefore HC 50 values calculated from EC 50 are normally applied in LCA. It can use all the available toxicity data for a metal and is a measure associated with less uncertainty than the PNEC [START_REF] Henderson | USEtox fate and ecotoxicity factors for comparative assessment of toxic emissions in life cycle analysis: sensitivity to key chemical properties[END_REF][START_REF] Larsen | Evaluation of ecotoxicity effect indicators for use in LCIA[END_REF] . Detailed descriptions of calculation methods for the PAF method and HC 50 can be found in Larsen et al. [START_REF] Larsen | Evaluation of ecotoxicity effect indicators for use in LCIA[END_REF][START_REF] Larsen | GM-troph: a low data demand ecotoxicity effect indicator for use in LCIA[END_REF] . EFs were calculated exclusively from data on chronic marine EC 50 from literature. The availability of marine ecotoxicity data in the ECOTOX database [START_REF]ECOTOX database[END_REF] allowed us to apply our model to nine cationic metals, including Cd, Co, Cr, Cu, Fe(III), Mn, Ni, Pb, and Zn (Table S5 in SI). For metals where chronic marine ecotoxicity data were insufficient, extrapolation from acute marine ecotoxicity data was performed applying an Acute-to-Chronic Ratio (ACR)
derived from the available toxicity data as described in Table S6 in SI. Total metal marine EC 50 reported in literature were translated into free ion EC 50 using WHAM VII [START_REF] Tipping | Humic Ion-Binding Model VII: a revised parameterisation of cation-binding by humic substances[END_REF] , taking into account water chemistry of the test medium in which the reported EC 50 was determined. This conversion reduced the standard deviation of the EC 50 of each metal by at least an order of magnitude (Table S5), which also justifies the use of FIAM in EF calculation.
The calculation of EF was based on the recommended principles for LCA [START_REF] Henderson | USEtox fate and ecotoxicity factors for comparative assessment of toxic emissions in life cycle analysis: sensitivity to key chemical properties[END_REF][START_REF] Larsen | GM-troph: a low data demand ecotoxicity effect indicator for use in LCIA[END_REF][START_REF] Jolliet | Establishing a framework for life cycle toxicity assessmentfindings of the Lausanne review workshop[END_REF] . For each metal at each trophic level (i.e. primary producers, primary and secondary consumers), a free ion activity HC 50-trophic was calculated as the geometric mean of the corresponding free ion EC 50 for all species with available data. The geometric mean of the resulting three HC 50-trophic represents the free ion activity HC 50 in saltwater for that specific metal . Then, for each combination of metal and LME, a truly dissolved HC 50 was calculated using WHAM VII, based on the free ion activity HC 50 and corresponding LME water chemistry. Finally, EF was calculated as 0.5/truly dissolved HC 50
43
.
Results and Discussion
In this section the results for the spatially differentiated FF, BF, EF and CTP are discussed. The results are shown for all combinations of metals and LMEs in Table S7 in SI.
Fate Factors
Cr, Cu and Fe have the highest log K DOC and log Kp SS among all metals, indicating their strong tendency of complexation with Organic Matter (OM, represented by DOC and the organic fraction of SPM (POC)) in seawater (Figure S3 in SI). This is in accordance with previous findings that Cr, Cu and Fe have high affinity for OM [START_REF] Yang | Metal complexation by humic substances in seawater[END_REF] . Compared with empirical values, Kp SS in this study were generally within an order of magnitude (Table 1).
Both log K DOC and log Kp SS vary linearly with OM concentrations and salinity (0.31<R 2 <0.93, p<0.001, Table S8) for all metals except Pb and Fe. OM in WHAM is considered as humic molecules, which are "rigid spheres, with proton-dissociating groups at the surface that can bind metal ions." [START_REF] Tipping | Humic Ion-Binding Model VI: An Improved Description of the Interactions of Protons and Metal Ions with Humic Substances[END_REF] Metal ion binding to a humic molecule can be simply expressed by the general reaction in Eq.2, which is described by the intrinsic association constant K M (Eq.3) [START_REF] Tipping | Humic Ion-Binding Model VI: An Improved Description of the Interactions of Protons and Metal Ions with Humic Substances[END_REF] .
ܴ + ܯ ௭ = ܯܴ ା௭ (Eq. 2)
ܭ ெ = ൣோெ ೋశ ൧ ሾோ ೋ ሿሾெ ሿ (Eq.3)
Here R is the humic molecule, M is metal and z is the net charge. Under similar conditions (e.g. pH value, temperature, etc.), K M stays within a comparably narrow range. Therefore increasing OM concentration leads to a higher concentration of metal-OM complex, resulting in a higher log K DOC b. Log Kp SS-D values in literature were presented as a function of other water chemistry parameters (e.g. salinity and SPM). Here we took approximate values derived from water chemistry similar to this study (e.g. SPM≈1mg/L, Salinity≈30‰-35‰, etc.) FF is largely influenced by log Kp SS and log K DOC . Metals with high log Kp SS and log K DOC (e.g. Cr, Cu and Fe) have an efficient removal, due to complex formation with OM followed by sedimentation. Therefore they have the lowest FF in all LMEs (Figure 1a). In contrast, FFs of Cd and Co are the highest across all metals, due to their low log Kp SS and log K DOC . For a given metal, FF increases with SRT across LMEs (Figure S4 in SI). For Cd, Co, Mn, Ni, Pb and Zn, FF and SRT are linearly correlated with SRT (R 2 >0.97, p<0.001, Table S8 in SI). It means that FF variation mainly depends on SRT and metal removal processes play a minor role. For the metals with high log Kp SS and log K DOC (e.g. Cr, Cu and Fe), metal removal processes show a stronger influence on FF. Thus FFs for these three metals are less strongly correlated to SRT, but rather determined by the variation of SRT, log Kp SS , and log K DOC together. Note that the metals with lower Kp SS and log K DOC (Cd, Co, Mn, Ni and Zn) can have a FF that is higher than SRT in some LMEs. The reason is that for these combinations of metal and ecosystem, the removed fraction is insignificant compared to the total input. A non-negligible fraction of the metal flows out to the ocean, from where some of it eventually recirculates back to the coastal seawater system after reaching steady state that USEtox calculates. This results in a longer FF than the water that originally carried them out. The effect is most pronounced in the LMEs with short SRTs because the inflow from the ocean is more important compared to the volume and the freshwater input for these LMEs. FF varies 2-3 orders of magnitude across LMEs for each metal. Within one LME, FF variation between different metals is within two orders of magnitude (Figure S5a in SI). It indicates that FF is slightly more sensitive to environmental parameters than to properties of metal.
We compared our FF with data from other studies. The age of water constituent models the residence time of seawater constituents in particle forms in seawater by simulating particle cycling [START_REF] Deleersnijder | The concept of age in marine modelling I. Theory and preliminary model results[END_REF] and is similar to the concept of FF in this study. The constituent age of Baltic Seawater varies between a few days and up to 40 years [START_REF] Meier | Modeling the pathways and ages of inflowing salt-and freshwater in the Baltic Sea[END_REF] , which is similar to the range of metal FF in the Baltic Sea (LME 23) in this study (3-21years). The constituent age of Kara Seawater is 1-2 years [START_REF] Ivanov | Application of inverse technique to study radioactive pollution and mixing processes in the Arctic Seas[END_REF] , which is within the range of the metal FF in the Kara Sea (LME 58) in this study (1-4 years). The constituent age of Norwegian Seawater and North Seawater combined together is 5-8 years [START_REF] Orre | A reassessment of the dispersion properties of 99Tc in the North Sea and the Norwegian Sea[END_REF] , which is slightly larger than the sum of metal FF ranges in the Norwegian Sea (LME 21) and the North Sea (LME 22) in this study ( 1-5 years).
Bioavailability Factors
Representing the fraction of total metal in coastal water that is truly dissolved, BF of Cd, Co, Mn, Ni, Pb and Zn varies less than a factor of eight across LMEs (Figure 1b). For Cr, Cu and Fe the variations in BF are much larger with 3-4 orders of magnitude, due to their large variations in log K DOC and log Kp SS across LMEs (Figure 1b). For all metals, clear correlations were observed between BF and log K DOC or log Kp SS (Figure S6). This implies that BF is largely determined by metal binding to DOC (log K DOC ) and SPM (log Kp SS ). Co has the highest BF in all LMEs across metals, due to its low log K DOC and log Kp SS . Similarly, Cr, Cu and Fe have the lowest BF across all LMEs, due to their high log K DOC and log Kp SS values (Figure 1b and Figure S6 in SI).
Effect Factors
Some nutrient metals are essential for biota growth (e.g. Co, Cu, Fe, Mn, Ni and Zn) [START_REF] Morel | Marine bioinorganic chemistry: the role of trace metals in the oceanic cycles of major nutrients[END_REF][START_REF] Rengel | Heavy metal as essential nutrients[END_REF] .
However, some of them may not reach the essential concentration to support biota growth under normal conditions in seawater, due to their low concentrations (at nmol level, Table S4 in SI).
Under such circumstances, instead of being a toxic pollutant, a metal emission is more likely to facilitate biota growth. It is meaningless to talk about contribution to ecotoxicity under these circumstances. Therefore a true zero value of coastal CTP is given for those metals, in agreement with the recommendation in the Apeldoorn declaration [START_REF] Van Zelm | USES-LCA 2.0-a global nested multi-media fate, exposure, and effects model[END_REF] . For the metals covered in this study the essentiality condition appears to be relevant only for Fe, where the essential concentration range lies above its background concentration in coastal waters. This is caused by efficient removal of Fe in the estuary (ca. 90%) via precipitation, flocculation, and sedimentation [START_REF] Chester | The transport of material to the oceans: the fluvial pathway[END_REF] . Meanwhile, fluvial pathways contribute 75% of Fe inputs to seawater [START_REF] Mason | Trace metal(loid)s in marine waters[END_REF] , which leads to a low concentration of dissolved Fe in seawater. Morel et al. [START_REF] Morel | Marine bioinorganic chemistry: the role of trace metals in the oceanic cycles of major nutrients[END_REF] reviewed the essential concentration of metals in seawater and found that for the metals Co, Cu, Mn, Ni and Zn, the background concentration in seawater is sufficient to support biota growth. This is in accordance with other studies showing that iron is the only limiting nutrient metal for algae growth in seawater [START_REF] Martin | Iron as a limiting factor in oceanic productivity[END_REF][START_REF] Sato | The only elements required by plants that are deficient in seawater are nitrogen, phosphorous and iron[END_REF][START_REF] Barsanti | Algae: Anatomy, Biochemicstry, and Biotechnology[END_REF] . Therefore, a true zero was given to the EF of Fe in all LMEs, which were excluded from the discussions in the rest of this section.
EFs show a modest variation, staying within one order of magnitude difference across all LMEs except for Cr, which shows a larger variation of three orders of magnitude (Figure 1c). Cu has the highest EFs in 90% of the LMEs, while Mn has the lowest EFs in all LMEs.
EF is influenced by temperature, pH, salinity and OM through their impacts on the speciation (the fraction of free ion activity within truly dissolved metal). In general, with increasing pH, the metal may form hydroxide or carbonate complexes, decreasing the metal free ion concentration in solution, which leads to a lower EF [START_REF] Millero | Effect of ocean acidification on the speciation of metals in seawater[END_REF] . Increases in salinity leads to a higher ionic strength, which results in lower free ion activity for a given free ion concentration, and thus a lower EF [START_REF] Deruytter | The combined effect of Dissolved Organic Carbon and salinity on the bioaccumulation of Copper in marine mussel larvae[END_REF] . When OM decreases, a fraction of metal may be released into truly dissolved forms, which leads to a higher truly dissolved HC 50 , thus lower EF.
Comparative Toxicity Potentials
The comparative toxicity potentials are calculated as the product of FF, BF and EF following Eq.1. Results are shown in Figure 1d. Due to its background concentration below essentiality levels in coastal seawater ecosystems, the effect factor of Fe was set to zero and as a consequence its CTP also becomes zero.
Spatial variability of Comparative Toxicity Potentials
Cr and Cu show the largest variation in CTP across LMEs with four orders of magnitude (Figure 1d). For Cr the variation is mainly driven by the variation in EF (R 2 =0.60, p<0.001, Figure S7 in SI), and less influenced by variation of FF and BF (R 2 <0.15). For Cu no single individual parameter shows a significant correlation with CTP.
CTPs of Cd, Co, Mn, Ni, Pb and Zn vary by three orders of magnitude across LMEs (Figure 1d). These metals have rather stable BF and EF, which vary less than one order of magnitude across LMEs. Thus CTP variations are largely caused by FF. As FF of these metals is linearly correlated with SRT, CTP is overall strongly driven by the variation in SRT (0.64<R 2 <0.96, Figure S8b in SI), with higher CTP for longer SRT.
Ranking of Comparative Toxicity Potentials
Among all metals, Cd has the highest CTP in 45% of the LMEs (Figure S5d in SI), followed by Zn (31%) and Pb (24%). These three metals have high FF, BF and middle to high EF. They are ranked among the top four CTPs in all LMEs. In contrast, Cr has the lowest CTP in all LMEs (apart from Fe, for which CTP is zero). Although its EF is in the middle range compared to the other metals, its BF and FF are constantly low in all LMEs, due to its high log K DOC and log Kp SS . Also Mn and Ni are consistently in the lower ranking of CTP in all LMEs (5 th -7 th ), due to their low EFs.
For Cd, Co, Mn, Ni, Pb and Zn, variation in CTP is significantly driven by SRT. Thus, the highest CTPs for these metals are observed in LME 5 (Gulf of Mexico), LME 26 (Mediterranean), and LME 62 (Black Sea), which have the longest residence time across LMEs (90 years). In contrast, the lowest CTP is observed in LME 35 (Golf of Thailand), which has the 2 nd shortest SRT among all LMEs (15 days).
CTPs ranking for Cr and Cu across LMEs are largely determined by SRT and by temperature through its influence on speciation. The highest CTPs are found in LME 64 (Antarctic), where the 2 nd lowest temperature (-1.20 ºC) and long SRT (11 years) are observed. In contrast, they have the lowest CTP value in LME 35 (Golf of Thailand), which has the 2 nd highest temperature and 2 nd shortest SRT.
Comparison between freshwater and coastal CTPs
Cd, Co, Cr, Mn, Ni and Zn marine CTPs show similar ranges to freshwater CTP determined by Dong et al. [START_REF] Dong | Development of Comparative Toxicity Potentials of 14 cationic metals in freshwater[END_REF] using a parallel approach (Figure 1d). These similarities hide remarkable differences in fate and effect behaviour in freshwater and coastal waters, which tend to neutralize each other in the calculation of the CTPs. For these metals, EFs are thus up to two orders of magnitude lower in seawater due to higher free ion activity HC 50 in seawater (Table S9 in SI). This is in accordance with previous research that freshwater species are more sensitive to metals than marine species [START_REF] Wheeler | Freshwater to saltwater toxicity extrapolation using species sensitivity distributions[END_REF] . In contrast, FFs are up to two orders of magnitude higher in seawater due to longer water residence times in many LMEs (the residence time of freshwater is 143 days at maximum in USEtox [START_REF] Dong | Development of Comparative Toxicity Potentials of 14 cationic metals in freshwater[END_REF] ). For the metals Cd, Co, Cr, Mn, Ni and Zn, BF in freshwater and seawater are rather similar. Cd, Co, Mn, Ni, and Zn were insensitive to variations in water chemistry in freshwater [START_REF] Dong | Development of Comparative Toxicity Potentials of 14 cationic metals in freshwater[END_REF] . Thus it may be reasonable to expect similar BF in freshwater and seawater for these metals. BF of Cr is correlated to log K DOC and log Kp SS . These two values are negatively correlated with both SPM and salinity in estuaries [START_REF] Turner | Trace-metal partitioning in estuaries: importance of salinity and particle concentration[END_REF] . From the freshwater end to seawater end, salinity increases and SPM decreases, which in combination leads to similar ranges of log K DOC and log Kp SS , and thus similar BF ranges in seawater and freshwater for Cr. In summary, a combination of similar BF in sea-and freshwater, lower EF in seawater, and higher FF in seawater results in a similar range of CTP in seawater and freshwater for Cd, Co, Cr, Mn, Ni, and Zn (Figure S9 in SI).
Cu has up to two orders of magnitude higher FF in freshwater. It has a similar BF in freshwater and seawater, for similar reasons as Cr. But its EF is 2-4 orders of magnitude lower in seawater, which results in a slightly lower CTP in seawater (Figure S9 in SI).
Pb has a FF up to three orders of magnitude higher and a slightly lower EF in seawater than in freshwater. At the same time its BF is 1-2 orders of magnitude higher in seawater, possibly due to lower SPM and OM concentrations in seawater. This results in 1-4 orders of magnitude higher CTP in coastal seawater than in freshwater (Figure S9 in SI).
CTP is expressed in potentially affected fraction of species integrated over time and space.
However, the species density varies considerably depending on location -from 7×10 -12 to 5×10 -4 species/m 3 in different freshwater ecosystems at various locations [START_REF] Azevedo | Freshwter eutrophication. In LC-Impact. A spatially differentiated life cycle impact assessment approach[END_REF] . Thus, even if two different archetypes have the same CTP, the number of affected species can in extreme cases differ up to eight orders of magnitude in freshwater. Variation would also be expected for species density in coastal marine ecosystems. Moreover, species density in freshwater is generally about three orders of magnitude higher than in seawater [START_REF] Goedkoop | A life cycle impact assessment method which comprises harmonised category indicators at the midpoint and the endoint level[END_REF] , which should be taken into account when comparing CTP values in freshwater and seawater.
Comparison of Fate Factors and Bioavailability Factors from USEtox
The current version of USEtox does not provide marine CTP and only has seawater as a fate compartment supporting FF and the eco-exposure factor (XF) calculation for seven of the metals covered in this study (Cd, Co, Cr, Cu, Ni, Pb and Zn). USEtox operates with a default SRT of one year, which is at the middle range of SRTs applied for the LMEs in this study. The default USEtox FF thus falls within the range of the new FF in this study for all the metals (Figure S9 in SI). BF in this study is similar to the concept of eco-exposure factor (XF) in USEtox. The default XF in USEtox falls within or close to the range of BF found in this study for most metals. The only exceptions are Cr and Cu, for which the USEtox XF is 1-6 orders of magnitude higher (Figure S9 in SI). This is because the default K DOC and Kp SS values in USEtox were taken from literature [START_REF] Allison | Partition coefficients for metals in surface water, soil, and waste[END_REF] , where it was defined as the ratio between absorbed metal and total dissolved metal. Recall that K DOC and Kp SS calculated in this study represent the ratios between absorbed metal and truly dissolved metal. This results in a lower K DOC and Kp SS in USEtox, which leads to a higher XF.
Sensitivity analysis
Several water chemistry parameters (DOC, POC, SPM, pH, salinity, metal background concentration and concentrations of Fe oxides, Mn oxides and Al oxides) and environmental parameters (SRT, surface area, freshwater inflow and temperature) are involved in the calculation of CTP in this study. In the following section, we will test the sensitivity of CTP to these parameters.
Salinity and pH values were extracted from a complete datasets 70 . Surface area and freshwater inflow were measured data taken from a global database [START_REF]A global database on marine fisheries and ecosystems[END_REF] . They are well established values and their uncertainty are only caused by measurement error. Thus the uncertainty is hence judged to be low (e.g. uncertainty of pH meter measurement accuracy <0.1 [START_REF] Hanna Instruments | pH meter by accuracy[END_REF] , salinity probe <3% [START_REF]Salinity sensor[END_REF] ). LME-specific land surface areas were applied in USEtox to calculate CTPs for metals in this study. Compared to the CTPs calculated by applying default land surface area in USEtox, the differences are less than 2%, caused by slightly different air deposition (which is also proportional to the land area).
The importance of the uncertainty accompanying the Fe, Mn and Al oxide concentrations was tested by changing them by a factor of 10. As a result CTPs varied less than 10% for all metals except Pb, for which the variation amounted to 1-35% across all LMEs. DOC, POC and SPM affect metal partitioning in water, and thus the CTP. These three parameters show a significant positive correlation in natural waters (Figure S10 This indicates that CTP is sensitive to DOC, POC and SPM concentrations for Cr and Cu, but less sensitive for the other metals. Note that within each LME, DOC, POC and SPM vary across locations and time. The average value of these parameters in a specific LME was applied in our study to calculate the corresponding CTP in that LME. Considering the large water volume and surface area in each LME, and the comparatively constant pH, salinity and POC values (Figure S2 in SI), the average value of DOC, POC and SPM, thus CTPs are not likely to change dramatically within one LME. However, the uncertainty associated with CTPs of Cr and Cu is still comparably larger than the other metals. This needs to be noted when comparing CTPs across metals.
SRT has a strong influence on FF for all metals and hence also on the CTP. We varied SRT by two orders of magnitude (0.1X-10X of original values) resulting in a variation in CTP by a factor of 0.05-21(Figure S12 in SI). The variations of CTP and SRT show a similar trend, indicating that CTP positively covariates with SRT. Therefore, SRT is an important parameter determining CTP when comparing metal CTP across LMEs, but it is less relevant for comparing CTP across metals within the same LME.
Temperature has influence on metal speciation, thus potentially influencing FF, BF, and EF. We calculated CTP by changing temperatures to 10ºC lower or 10ºC higher than the original values. This variation range covers the surface seawater temperature for the whole year, judging from data in the MODIS database [START_REF]Moderate Resolution Imaging Spectroradiometer (MODIS)[END_REF] . We found that CTP only varies within a factor of 0.4-2.8 (Figure S13 in SI) for all metals. For Cr, BF and EF vary up to one order of magnitude. However, BF and EF have positive and negative correlation respectively with temperature and hence partly compensate each other, which results in a moderate change of CTP. It can be concluded here that CTP is not very sensitive to temperatures. BF, Kp SS, and K DOC were calculated from metal background concentration in generic seawater, which may differ in different locations. Therefore we tested the dependence of BF, Kp SS, and K DOC on metal background concentration, by varying background concentration by a factor of 10 (0.1X-10X of original value). For the metals with higher Kpss and Kdoc values (e.g., Cr, Cu and Fe), BF can vary up to one order of magnitude and Kp SS, and K DOC can vary up to two orders of magnitude.
The variation is largely caused by metal binding with OMs. For the other metals, the variations of BF, Kp SS, and K DOC are less than 2X. This result is similar to the observation in Gandhi et al. [START_REF] Gandhi | Implications of geographic variability on Comparative Toxicity Potentials of Cu, Ni and Zn in freshwaters of Canadian ecoregions[END_REF] . It shows that in the systems with higher background concentrations, BFs thus CTPs of metals with higher Kp SS, and K DOC values may be underestimated. However, this might be offset by the adaptation of aquatic biota in those systems, which is not considered in the current effect modelling [START_REF] Gandhi | Implications of geographic variability on Comparative Toxicity Potentials of Cu, Ni and Zn in freshwaters of Canadian ecoregions[END_REF] .
Practical implications
This study is the first attempt to derive marine CTP considering speciation, bioavailability, seawater specific toxicity, and spatial differentiation. The results show that CTP for one metal can The original publication is available at http://pubs.acs.org Doi: 10.1021/acs.est.5b01625 vary 3-4 orders of magnitude across LMEs, except for Fe, for which CTP is zero due to its low background concentration and essentiality to marine biota. It was clearly demonstrated that it is of great importance to apply spatially differentiated CTP for metals in coastal seawater, as shown for all metals covered by this study except Fe. This raises the requirement for LCA practitioners to consider the emission location in the inventory. The variation of CTPs is primarily driven by SRT for most metals except Cr and Cu. If there is any updates on SRT in future research, it is strongly recommended to recalculate metal CTPs correspondingly. Due to limited ecotoxicity data for marine species and the metal coverage of the speciation model WHAM VII, it is difficult to derive marine CTP for additional metals at this point. It is recommended to look into methods to estimate marine ecotoxicity data by extrapolation from freshwater ecotoxicity data, or from known metal properties. This can potentially provide ecotoxicity data for more metals and thus allow calculation of additional marine CTPs. Where measured chronic data was missing, acute toxicity data was extrapolated to chronic EC 50 s for the EF calculation of some metals (e.g. Co, Cr, Mn, Ni, Pb and Zn, Table S5 in SI). It is recommended to revise these data when chronic data is available. The speciation model WHAM VII cannot simulate metal redox reactions and precipitation except Al and Fe hydroxide. Due to the fact that the CTP developed in this study is for metal in coastal seawater where water column depth is modest and presence of oxides are limited, the occurrence of extreme redox conditions will be rare in most LMEs. E.g., When Cr(III) is emitted to coastal seawater, its oxidation to Cr(IV) is limited and slow, unless abundant Mn dioxide and hydroxides exist [START_REF] Sadiq | Chapter 6 Chromium in marine environments[END_REF] . However, the lack of precipitation modelling in WHAM can cause some uncertainties, especially for metals which may form insoluble compounds with major anions in seawater.
Therefore, it is recommended to explore the possibility of applying other metal speciation models to complement WHAM VII (e.g. MINEQL+ [START_REF] Schecher | Environmental Research Software[END_REF], Visual Minteq 37, CHEAQS Pro 76 or PHREEQC [START_REF] Parkhurst | Description of input and examples for PHREEQC version 3-A computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations[END_REF]) covering other metals and supporting the modelling of precipitation and redox reactions where needed.
EF i : Effect Factor [(PAF)·m 3 /kg], representing the fraction of species potentially affected by the toxicity of the truly dissolved metal in compartment i.
Figure 1. Variation of Fate Factor (FF, 1a), Bioavailability Factor (BF, 1b), Effect Factor (EF, 1c) and Comparative Toxicity Potential
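For orientation, the four quantities named in the caption above are related multiplicatively in USEtox-style characterization. The defining equation is not reproduced in this excerpt, so the relation and its units below are stated as the standard convention assumed to apply here:

```latex
\mathrm{CTP}_i \;=\; \mathrm{FF}_i \times \mathrm{BF}_i \times \mathrm{EF}_i
\qquad
\left[\frac{(\mathrm{PAF})\cdot\mathrm{m}^3\cdot\mathrm{day}}{\mathrm{kg}_{\mathrm{emitted}}}\right]
```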
in SI). The values of these parameters vary among different seawaters and locations, along with the transition from fresh to marine waters. We therefore tested the sensitivity of CTP to these three parameters by varying them all together by a factor of 0.1-10. These variations cover DOC, POC, and SPM concentrations ranging from freshwater conditions to the open ocean. For these variations, the CTPs of Cr and Cu show the highest sensitivity, varying by a factor of 0.004-168. The other metals show very modest sensitivity, varying by a factor of 0.2-2.2 (Figure S11 in SI).
in Supporting Information (SI)). The values for these parameters show large variations across the 64 LMEs (Figure S1 in SI). SRT varies from 11 days to 90 years, surface area from 1.5×10 5 to 5.7×10 6 km 2 , estuary discharge rate (water flow rate from freshwater to coastal seawater) from 0 to 1.3×10 5 m 3 /s, temperature from -1°C to 29°C, pH from 7.75 to 8.35, DOC from 0.6 to 6.5 mg/l, Particulate Organic Carbon (POC) from 31 to 802 µg/l, SPM from 0.2 to 2.9 mg/l, and salinity from 6.2 to 40.3‰. For speciation modelling, salinity was translated into concentrations of the major ions (Na⁺, Mg²⁺, K⁺, Ca²⁺, SO₄²⁻, and Cl⁻) by scaling from a standard salinity (35‰) and its corresponding major ion concentrations (Table S2 in SI), assuming a fixed relationship between the major ion concentrations at different salinities 29 . For each LME, the relevant environmental parameter and water chemistry values were applied to derive a CTP value for each metal.

Note that also within one LME, environmental parameters such as pH, salinity, DOC, POC, and SPM show both spatial and temporal variation. The annual variation ranges of pH, salinity and POC within each LME are shown in Figure S2 in SI. Fe, Mn and Al oxides have been shown to be strong adsorbents for metal ions 30-32 , because of their large surface area. Due to the lack of spatially differentiated concentration data for these oxides, fixed concentrations of 0.15, 0.02 and 0.4 µg/L for Fe, Mn, and Al oxides respectively had to be assumed across all LMEs 33 .
2.3 Model and parameter selection

2.3.1 Fate model
With the intended use in LCA in mind, the multimedia fate model embedded in USEtox 19 was chosen for this study. USEtox is an LCIA model for assessing ecotoxicity and human toxicity impacts. It has been developed in a scientific consensus process involving LCIA and chemical fate modelling experts. It is the recommended characterization model for toxicity impacts in LCA [START_REF] Hauschild | Building a model based on scientific consensus for life cycle impact assessment of chemicals: The search for harmony and parsimony[END_REF]. In USEtox, the fate is calculated based on a steady-state mass balance. USEtox determines the metal FF in the coastal seawater compartment by modelling metal inflow, metal outflow and metal removal (including sedimentation and sediment burial/re-suspension). Metal inflow and outflow largely depend on the retention time of the coastal seawater. Thus the default SRT of seawater on the continental scale in USEtox was replaced by the SRT representative for each LME. To be consistent, the default surface area of continental seawater and the water flow rate from continental freshwater to continental seawater (estuary discharge rate) were also replaced by the corresponding LME-specific data. Water flow from ocean to coastal seawater is then automatically calculated from the parameters mentioned above. Details of LME-specific data and calculations are available in Table S1 in SI.

Metal removal is simulated by metal sedimentation and diffusion of metal from seawater to sediment. The former process is modelled by metal complexation with SPM, followed by SPM sedimentation. The removal largely depends on the fraction of metal adsorbing to SPM, the concentration of SPM and the SPM sedimentation velocity. Metal diffusion into sediment is determined by the dissolved fraction of metal and the metal's mass transfer coefficient between sediment and water. The metal fraction adsorbed to SPM can be calculated from a spatially differentiated adsorption coefficient Kp SS (L/kg; the ratio of metal concentration between SPM-bound metal and truly dissolved metal). The truly dissolved fraction of metals is calculated using both Kp SS and K DOC (L/kg; the ratio of metal concentration between DOC-complex-bound metal and truly dissolved metal). All parameters mentioned above vary between different LMEs. Thus Kp SS and K DOC were recalculated in WHAM VII [START_REF] Tipping | Humic Ion-Binding Model VII: a revised parameterisation of cation-binding by humic substances[END_REF] for each metal in each LME respectively, to replace the default values in USEtox.

WHAM [START_REF] Tipping | Humic Ion-Binding Model VII: a revised parameterisation of cation-binding by humic substances[END_REF] is a metal speciation modelling software. Based on the input of target metal concentration and relevant water chemistry, it can deliver the concentration of the target metal in a specific form. In WHAM's calculation of Kp SS and K DOC values, it is assumed that metals are in equilibrium with the discrete sites of DOC and the organic fraction of SPM. Here target metals have to compete with other cations (e.g. Ca²⁺, Mg²⁺, K⁺ and Na⁺) to form complexes with SPM or DOC. The ratio between the concentration of metal that is truly dissolved in water and the concentration of metal forming complexes with SPM or DOC was calculated for each LME and each metal as the specific Kp SS and K DOC value. Default DOC and SPM concentrations in USEtox were also replaced by the corresponding specific parameter values for each LME. Other landscape parameters were kept unchanged. All parameters used in the FF calculation are listed in Table S3 in SI. There were no substance parameter values for Mn and Fe in the default USEtox inorganic database; they thus had to be collected from the literature. The retrieved values are presented together with substance parameter values for the other metals in Table S4 in SI.
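As a concrete illustration of the partitioning step described above, the sketch below derives the truly dissolved, DOC-bound and SPM-bound fractions from Kp SS, K DOC and the DOC and SPM concentrations, assuming the usual equilibrium partitioning relation of USEtox-type models. This is not the authors' code, and the numerical values are illustrative only.

```python
def metal_fractions(kp_ss, k_doc, spm_mg_per_l, doc_mg_per_l):
    """Equilibrium split of dissolved metal between truly dissolved,
    DOC-complexed and SPM-bound forms.

    kp_ss, k_doc               : partition coefficients (L/kg)
    spm_mg_per_l, doc_mg_per_l : concentrations (mg/L)
    """
    spm = spm_mg_per_l * 1e-6   # mg/L -> kg/L
    doc = doc_mg_per_l * 1e-6
    denom = 1.0 + kp_ss * spm + k_doc * doc
    return {"truly_dissolved": 1.0 / denom,
            "doc_bound": k_doc * doc / denom,
            "spm_bound": kp_ss * spm / denom}

# Illustrative values: log Kp_SS = 5, log K_DOC = 4.5, SPM = 1 mg/L, DOC = 2 mg/L
print(metal_fractions(10**5.0, 10**4.5, 1.0, 2.0))
```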
As stated in the Clearwater Consensus [START_REF] Diamond | The clearwater consensus: the estimation of metal hazard in fresh water[END_REF], we calculated EF based on the truly dissolved metal, assuming that the free ion is a fraction of the truly dissolved metal and is responsible for the toxicity. In risk assessment, the Predicted No Effect Concentration (PNEC) is typically used as the effect indicator to protect the sensitive species of the ecosystem. Compared to PNEC, the geometric mean HC 50 calculated from EC 50 values, representing the Potentially Affected Fraction (PAF) of species exposed above chronic EC 50 values, is more robust but less conservative.
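The passage above contrasts PNEC with the HC50-based effect indicator; in the USEtox convention that this study appears to follow, the effect factor takes the form below, with HC50 computed as the geometric mean of chronic EC50 values expressed on a truly dissolved basis (this form is assumed here rather than quoted from the excerpt):

```latex
\mathrm{EF} \;=\; \frac{0.5}{\mathrm{HC50}},
\qquad
\mathrm{HC50} \;=\; \operatorname{geomean}\!\bigl(\mathrm{EC50}_{1},\ldots,\mathrm{EC50}_{n}\bigr)
```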
Table 1:

and log Kp SS. When salinity increases, the metal ions are in stronger competition with major cations in the seawater for the binding sites on OM, thus decreasing log K DOC and log Kp SS 49 . The exception for Pb and Fe is due to the fact that the binding of Pb and Fe to DOC and particles is not only influenced by OM concentrations and salinity, but also by other parameters (e.g. temperature and pH values). Log Kp SS (L/kg) in this study represents the calculated partitioning coefficient between metal bound to SPM and truly dissolved metal. Log Kp SS-D (L/kg) in other studies represents the partitioning coefficient between metal bound to SPM and total dissolved metal. To make the values comparable, we calculated log Kp SS-D values from the log Kp SS that we determined in
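The conversion between the two partition coefficients mentioned above follows from the equilibrium partitioning relations, assuming [DOC] is expressed in kg/L; the presumable form (a derivation under these assumptions, not quoted from the excerpt) is:

```latex
K_{p,\mathrm{SS\text{-}D}} \;=\; \frac{K_{p,\mathrm{SS}}}{1 + K_{\mathrm{DOC}}\,[\mathrm{DOC}]}
\quad\Longrightarrow\quad
\log K_{p,\mathrm{SS\text{-}D}} \;=\; \log K_{p,\mathrm{SS}} \;-\; \log\bigl(1 + K_{\mathrm{DOC}}\,[\mathrm{DOC}]\bigr)
```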
ACKNOWLEDGMENTS
The authors thank Dennis Hansell (RSMAS/MAC, University of Miami) and Reiner Schlitzer (AWI, Helmholtz center for polar and marine research) for providing DOC data. Dr. Rosenbaum gratefully acknowledges the financial support of the partners in the Industrial Chair for Life Cycle Sustainability Assessment ELSA-PACT (a research project of ELSA -Environmental Life Cycle & Sustainability Assessment): Suez Environment, Société du Canal de Provence (SCP), Compagnie d'aménagement du Bas-Rhône et du Languedoc (BRL), Val d'Orbieu -UCCOAR, EVEA, ANR, IRSTEA, Montpellier SupAgro, École des Mines d'Alès, CIRAD, ONEMA, ADEME, and the Region Languedoc -Roussillon.
Funding Sources
This research is financially supported by the EU commission within the Seventh Framework Programme Environment ENV. 2008.3.3.2.1: PROSUITE -Development and application of a standardized methodology for the PROspective SUstaInability assessment of Technologies (Grant agreement No.: 227078).
Literature has reported that eutrophication can increase metal bioavailability by up to an order of magnitude [START_REF] Li | Effects of nitrate addition and iron speciation on trace element transfer in coastal food webs under phosphate and iron enrichment[END_REF][START_REF] Wang | Effects of major nutrient additions on metal uptake in phytoplankton[END_REF]. However, this may be offset by a decreasing EF due to organism adaptation, which is not considered in this study. Compared to the 3-4 orders of magnitude variation in CTPs, the uncertainty introduced by differences in eutrophication across LMEs will not have a significant influence on the results. FIAM was used to assess EF in this study. However, unlike BLM, it does not include competition between the free metal ion and other cations for binding to biotic ligands. It is therefore recommended to estimate EFs with a marine BLM when available. This study only developed CTPs for metals in the water column of the seawater compartment. Ecotoxicity potentials in sediments were not considered here. In LCIA this is typically considered as a separate compartment (if at all) and would require a separate study.
ASSOCIATED CONTENT
Supporting Information
9 tables and 13 figures addressing additional data were presented in supporting information. This material is available free of charge via the internet at http://pubs.acs.org.
AUTHOR INFORMATION
Corresponding Author
*Email: [email protected]. Phone: +45 4525 4417
Author-produced version of the article published in Environmental Science and Technology, 2016, N°50(1), p.269-278. The original publication is available at http://pubs.acs.org, DOI: 10.1021/acs.est.5b01625. | 57,282 | [ "1275045" ] | [ "302599", "16792", "302599" ] |
01465740 | en | [ "sdv", "sdu" ] | 2024/03/04 23:41:44 | 2016 | https://hal.science/hal-01465740v2/file/doc00026368.pdf |
Eric Gaume, Hydrologist, IFSTTAR, France
Marco Borga, Hydrologist
Maria Carmen Llasat
Michel Lang
Mediterranean extreme floods and flash floods
The Mediterranean area is particularly exposed to flash floods

Floods are weather-related hazards and their patterns are likely to be significantly affected by climate change. Floods are already the most frequent and among the costliest and deadliest natural disasters worldwide (Munich RE, NatCat Service; [START_REF] Swiss Re | Sigma annual report[END_REF]). This is also true in the Mediterranean area. The EM-DAT international disaster database (http://www.emdat.be/) lists, for instance, 200 billion Euros of damage related to various disasters since 1900 in the countries surrounding the Mediterranean Sea, of which 85 billion are related to river flooding.
Disastrous flash floods 1 are much more frequent in some parts of the Mediterranean region than in the rest of Europe ([START_REF]A collation of data on European flash floods[END_REF]; Llasat et al. 2010). This is due to the local climate, which is prone to short intense bursts of rainfall. The reliefs surrounding the Mediterranean Sea force the convergence of low-level atmospheric flows and the uplift of warm wet air masses that drift from the Mediterranean Sea to the coasts, thereby creating active convection. In addition, population growth is particularly high along the Mediterranean coasts, leading to a rapid increase in urban settlements and populations exposed to flooding.
The Mediterranean region is a large area extending more than 4,000 km from west to east and 1,500 km from south to north with spatially variable climatic patterns and population densities. It is characterized by diverse climates, synoptic meteorological conditions and hydrological properties: bedrock and soil types, land use and vegetation cover. The flood regimes and the types of dominant flood generating rainfall events vary significantly along the coasts of the Mediterranean Sea [START_REF] Llasat | Flash floods trends versus convective precipitation in a Mediterranean region[END_REF]. Damaging floods are mainly produced by:

1. Short-lived (often less than 1 hour) strongly convective intense precipitation events (up to 180 mm/h in only 5 minutes) but limited total rainfall amounts (generally less than 100 mm). Such events have a limited areal extent (typically less than 100 km 2 ) and generate local flash floods of small headwater streams. A typical example of such flash floods is the catastrophic flood that occurred in Algiers in November 2001.

2. Mesoscale convective systems can produce stationary rain lasting several hours leading to rainfall amounts exceeding 200 mm in a few hours. In France, up to 700 mm of rainfall within 12 hours was locally recorded during floods in the Aude region in November 1999 and in the Gard region in September 2002. The areal extent of such events ranges from several hundred to several thousand km 2 . These events mainly occur in fall and affect the north-western coast of the Mediterranean Sea. The flash floods that occurred in Genoa, Italy, in October 1970 and in the Var region of France in June 2010 belong to this category.

3. On some occasions, heavy and sustained rainfall may be part of a large scale perturbation lasting several days. In such cases, extreme rainfall accumulation may be observed locally: 1,500 mm over four days and a record of close to 1,000 mm in 24 hours in October 1940 in the eastern Pyrenees in France. These events cover a large area. Typical examples were the October 1969 heavy rainfall and flash flood in South Tunisia (more than 300 killed) and the October 1940 event that affected both sides of the Pyrenees: the Ter river valley in Spain (90 killed) and the Tech and Têt river valleys in France (44 killed).

1 Flash floods are induced by short duration -from less than one hour to 24 hours -and heavy rainfall convective events -typically 100 mm or more rainfall accumulated over a few hours. The affected areas are often limited to a few hundred square kilometers, with rapid hydrological responses -generally less than 6 hours delay between peak rainfall intensity and the peak discharge downstream.
Total rainfall amounts as well as land use, soil and bedrock types and the initial soil moisture content influence the responses of watersheds to heavy rainfall events and especially their runoff rates: the estimated proportion of the incident rainfall contributing to the observed stream discharges. The runoff rates during flash floods are often limited to 10% to 30%. In some rare cases, when large cumulated rainfall amounts lead to saturation of the watersheds, runoff rates may reach 100% [START_REF] Marchi | Characterization of selected extreme flash floods in Europe and implications for flood risk management[END_REF]. The observed variability of flood frequencies and discharge magnitudes is therefore the result of a complex interplay between the characteristics of the generating rainfall events (spatial extent, duration, maximum intensities) and the factors that control the response of the watersheds, especially the runoff rates.
Sources of information about flood magnitudes and impacts
Information on flood characteristics and magnitudes comes from a wide range of sources (databases, the press, local technical reports) that are incomplete and often not entirely accurate. It is therefore difficult to build comprehensive databases to better understand the spatial pattern and climatology of large flood events. The number of fatalities, sometimes damage estimates, is the type of information that can be collected most easily, mainly from press reports, and is typically the type of information collected in the EM-DAT International Disaster Database (http://www.emdat.be/) or the Dartmouth global archive of large flood events (http://www.dartmouth.edu/~floods/) or in the Italian AVI flood database or the database on Mediterranean floods created in the framework of the HYMEX European research project [START_REF] Llasat | Towards a database on societal impact of Mediterranean floods within the framework of the HYMEX project[END_REF].
Re-insurance companies such the Munich Re (CAT NAT Service) and the Swiss Re (Sigma reports) also gather information on damage costs worldwide but which is nevertheless only accessible in a synthetic form. As illustrated in figure 1, such databases should be interpreted with caution. The information collation efficiency is for instance increasing with time and thanks to the improved circulation of information on
Internet and the social networks. The EM-DAT seeks to collect information on disastrous events that meet one of the following criteria: at least 10 people killed, 100 people affected, a state of emergency was declared or there was a call for international assistance. Clearly, the number of floods documented in this database has increased since its creation. Can this be interpreted as a sign of a trend? The total number of reported annual fatalities does not show the same trend, suggesting that a larger number of moderate floods have been included in this database in the recent period, but that the number of catastrophic events has not significantly changed. Llasat et al. (2013) came to the same conclusion. Moreover, the amount of damage and the number of fatalities are only indirectly related to the magnitude of the floods and are also strongly determined by local exposure and vulnerability. Both vary in space and over time. Major population and economic growth has taken place along the Mediterranean coast in the past century, with both huge densification and extension of urban settlements outside but also inside flood prone areas. This has led -and continues to lead -to a general increase in flood exposure and in economic vulnerability and costs. At the same time, newly constructed buildings are more resistant to flooding, thereby providing better shelter; warning systems and emergency management have improved and may reduce the number of fatalities. Observed spatial distributions of costs and fatalities are the result of a complex interplay between different explanatory factors. It is useful information that reveals the economic and social impact of floods on our societies, but its interpretation is questionable.

Flood discharge estimates are therefore indispensable to describe and analyze the geographical patterns of floods, detect possible trends and enable comparisons between countries and sub-regions. But major floods and especially flash floods are difficult to document, as they affect small headwater watersheds that are not monitored. Direct discharge measurements are seldom available and discharge values have to be estimated using indirect methods that may sometimes lead to large errors, especially overestimations, even if significant progress has been made in recent years. Thanks to various European (HYMEX, HYDRATE, FLASH) and national research projects, the number of documented extreme floods has increased significantly in recent years. For the purpose of this paper, we collected information to build a sample of 172 documented extreme floods (Table 1). The database covers the largest floods since 1940 reported in the countries concerned, either according to their estimated discharges or to the number of deaths: events with more than 10 deaths were selected when no discharge estimates were available (39 events), except for Greece for which a larger dataset was available.
Magnitudes and seasonality of extreme floods around the Mediterranean Sea
Analysis of deaths related to flood events

Figure 2 shows a clear contrast between the reported deaths in the eastern and western parts of the Mediterranean region. The contrast concerns both the spatial density of the events that killed more than 10 people and the average number of deaths per event. This conclusion remains valid even if the two most dramatic events, which occurred in Algiers (2001, 896 killed) and the town of Rubi near Barcelona in Spain (1962, 815 killed), are removed. Floods generally caused few casualties in the eastern part of the Mediterranean region, with the notable exception of the flood in Tripoli, Lebanon, on December 17, 1955 (around 160 deaths), induced by a rainfall event of short duration: 100 mm of rain fell on the city of Tripoli and its surroundings in two hours [START_REF] De | [END_REF]. The contrast is confirmed by analysis of the EM-DAT database (see Figure 3). The average number of people killed each year during river floods ranges between seven in France, most of them in the Mediterranean region, and 30 in Algeria, both western Mediterranean countries, whereas in eastern Mediterranean countries the number does not exceed two. These figures are likely underestimated, since moderate floods are not recorded in the EM-DAT database, but they are in the correct range of magnitude.
For instance, the average number of people killed due to floods in Greece is four based on comprehensive flood databases established for Greece (Pappagianaki et al. 2013;[START_REF] Diakakis | Flood fatalities in Greece: 1970-2010[END_REF].
Analysis of major flood peak discharges
The estimated peak discharges of the major floods confirm the contrast observed on the basis of the number of deaths, even if the information concerning eastern Mediterranean countries in the database is not complete (Figure 4). To be able to compare discharge values over a large range of watershed areas and to rate the magnitudes of the reported floods, reduced discharges are mapped in Figure 4, following the suggestion of [START_REF]A collation of data on European flash floods[END_REF]. The reduced discharge Q r is the ratio of the discharge value Q (m 3 /s) to the upstream watershed area A (km 2 ) raised to the power 0.6: Q r = Q / A 0.6 . Reduced discharges exceeding 50 have only been reported in the western part of the Mediterranean region. Such extreme discharges appear to be more frequent in the north-western Mediterranean region, more particularly in some hot spots: the Piedmont and Liguria regions in Italy, the Cevennes-Vivarais-Roussillon region in France and Catalonia in Spain. These areas are preferentially affected by longer lasting stationary rainfall events in the fall, leading to high runoff rates and hence high discharge values [START_REF]A collation of data on European flash floods[END_REF]. The spatial heterogeneity of extreme discharge magnitudes is clear in Italy and France, and the flood database is close to complete for these two countries. It needs completing for Spain and North Africa, for which only partial datasets are available at the present time.
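The reduced discharge defined above, and the envelope curve Q = 100 A 0.6 discussed later in the text, are simple enough to compute directly; the following minimal sketch (with purely illustrative numbers) shows how a reported peak discharge can be rated against them.

```python
def reduced_discharge(q_m3s, area_km2):
    """Reduced discharge Qr = Q / A**0.6, with Q in m3/s and A in km2."""
    return q_m3s / area_km2 ** 0.6

def envelope_discharge(area_km2, coeff=100.0):
    """Mediterranean envelope curve Q = coeff * A**0.6 proposed in the text."""
    return coeff * area_km2 ** 0.6

# Illustrative example: a 2,000 m3/s peak on a 500 km2 watershed.
q, a = 2000.0, 500.0
print(round(reduced_discharge(q, a), 1))   # ~48.1, close to the Qr = 50 threshold
print(q > envelope_discharge(a))           # False: well below the envelope curve
```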
Seasonal distribution of extreme flood events
Fall is the main season, but not the only one, in which extreme floods occur in the Mediterranean region (Figure 5). The same seasonal distribution is observed in the eastern part of the area, even if it is less pronounced; however, this conclusion is based on an incomplete dataset. The occurrence of summer floods is notable in the Alps and more generally in mountain areas, as illustrated by the Ourika flood in the Moroccan Atlas (August 8, 1995) or the Barranco de Aras flood in the Spanish Pyrenees (August 8, 1996). Spring floods are indeed rare but, when they do occur, they can have significant impacts, as did the Sarno flood in Italy (May 5, 1998, 147 killed) and the Var floods in France (June 6, 2010, 25 killed). Extreme floods in winter mainly occur in North Africa and Greece.
Extreme Mediterranean floods compared to world figures
Figure 6 compares the dataset of extreme peak discharges in the Mediterranean region analyzed here with values extracted from other flood catalogues. Some major events are highlighted in the figure to illustrate two facts:
1. The most fatal events (Algiers 2001 and Barcelona 1962) do not necessarily have the largest peak discharge values. This confirms that the local risk is the result of the interplay between hazard -described by the possible magnitude of the peak discharge -and local exposure and vulnerability.

2. Few reported world records exceed the proposed envelope curve for the Mediterranean (Q = 100 A 0.6 ). It should also be noted that several past records may have been overestimated [START_REF] Lumbroso | Reducing the uncertainty in indirect estimates of extreme flash flood discharges[END_REF]. Some sub-regions of the Mediterranean area, such as Liguria and Cévennes-Vivarais, appear to be exposed to floods whose magnitudes are comparable to world records.
Observed trends in flood magnitudes and frequency and projections
The numerous studies conducted, especially in Europe, report contrasted trends for extreme streamflow, with both positive and negative trends, variable statistical significance and no clear structured pattern [START_REF] Madsen | Review of trend analysis and climate change projections of extreme precipitation and floods in Europe[END_REF]. There are no clear national or large-scale regions in Europe with uniform statistically significant increases in flood discharges in recent years, although some trends appear for smaller regions. Figure 7 illustrates the type of results of trend detection studies in France: trends in annual maximum peak discharges [START_REF] Giuntoli | Floods in France. In Kundzewicz ed: Changes in Flood risk in Europe[END_REF]. The analysis was based on a rich dataset of 209 stream gauge measurements covering the period 1968-2008 and two statistical tests conducted on each individual series (local test) and on regional samples of series to gain robustness (regional test). A limited number of series or regions showed a statistically significant trend. It is interesting to note a general and unexpected decreasing tendency in southern France that is spatially consistent if not statistically significant. The main conclusion is that if some trends exist, they are almost impossible to detect due to the limitations of the available datasets, especially on extreme events. Likewise, recent streamflow projections based on Euro-Cordex climate projections led to significant forecasted changes in river flood frequencies in Europe but to less clear results for the Mediterranean region [START_REF] Alfieri | Global warming increases the frequency of river floods in Europe[END_REF].
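The local trend tests mentioned above are not named in this excerpt; in this literature they are commonly rank-based tests such as Mann-Kendall, so the sketch below shows that test applied to a synthetic annual-maximum series (no tie correction, illustrative only, not the authors' code).

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Two-sided Mann-Kendall trend test on an annual-maximum discharge series.
    Returns the S statistic and the p-value (ties not corrected for)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return float(s), float(2 * (1 - norm.cdf(abs(z))))

# Synthetic 41-year record (1968-2008) with no imposed trend.
rng = np.random.default_rng(0)
print(mann_kendall(rng.gumbel(loc=300.0, scale=80.0, size=41)))
```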
Conclusions
Our knowledge of floods in the Mediterranean area has advanced substantially in recent years thanks to the development of databases and focused research programs. A general pattern of the spatial and seasonal distribution of flood magnitudes can now be established as reported here. The main characteristics of the floods around the Mediterranean Sea are the following:
1. The magnitude and impact of extreme floods vary significantly over the Mediterranean region with a clear contrast between west and east. The western part of the area is much more exposed to high impact and high magnitude events. This is probably due to the proximity of the Atlantic Ocean and oceanic climatic influences at latitudes where eastward atmospheric flows dominate.
2. Some sub-regions, including Liguria and Piedmont in Italy, Cévennes-Vivarais-Roussillon in France, and Catalonia and the Valencian province in Spain are particularly exposed to extremely severe floods whose peak discharge values may be close to world records. This particular pattern is the result of the interplay between the dominant atmospheric low level flow circulation patterns and the relief and orientations of the northern Mediterranean coast, which force convergence and trigger convection [START_REF] Ducrocq | HYMEX-SOPI The Field Campaign Dedicated to Heavy Precipitation and Flash Flooding in the Northwestern Mediterranean[END_REF].
3. Fall is clearly the main season -but not the only season -for extreme and damaging floods. This is particularly the case of mesoscale convective systems producing long lasting and stationary rainfall events that lead to strong responses by the watersheds concerned (i.e. high runoff rates due to soil and subsoil saturation) and to extraordinary peak discharge values.
4. No significant trend was detected in the frequency and magnitude of extreme floods in the Mediterranean region to date, probably due to the limitations of the available datasets and some complex overlapping signals (e.g. decadal and inter-decadal variability). Likewise, the existing projections do not clearly point to a change in extreme flood patterns in the Mediterranean region linked to climate change. But, whatever the case may be, the risk of flooding is likely to increase due to population growth and urban development in flood prone areas in the coming years.
Figure 1. Changes in the number of damaging floods in the countries surrounding the Mediterranean Sea in the EM-DAT database (source: emdat.be)

Figure 2. Number of people reported killed in each documented flood event over the period 1940-2015

Figure 3. Average number of people reported killed per country and per year by floods over the period 1940-2015 in the EM-DAT database (source: emdat.be)

Figure 4. Estimated reduced peak discharge of the documented flood events over the period 1940-2015. The reduced discharge Q r is Q/A 0.6 , where Q is the peak discharge (m 3 /s) and A is the watershed area (km 2 ).

Figure 5. Monthly and seasonal distributions of the documented floods over the period 1940-2015

Figure 6. Reported estimated world records from different sources, proposed European envelope curve and some selected European record events since 1940

Figure 7. Trends in annual maximum streamflow in France based on local and regional tests (Giuntoli et al., 2012).
Table 1. Contents of the database of notable Mediterranean flash flood events for the period 1940-2015.
Country | Number of events | With discharge estimates | Sources
Algeria | 20 | 1 | Sardou et al. (2016), Recouvreur (2005), press
Egypt | 3 | 0 | Internet and press
France | 40 | 38 | Hydrate, recent surveys by the authors, press
Greece | 22 | 5 | Hydrate, press, Pappanagiakis et al. (2013), Diakakis et al. (2015)
Israel | 11 | 11 | Tarolli et al. (2012)
Italy | 46 | 36 | Hydrate, Anselmo (1985), Barredo (2007), recent surveys, press
Lebanon | 1 | 0 | Press
Morocco | 7 | 7 | Hymex database
Portugal | 1 | 0 | Press
Slovenia | 1 | 1 | Survey by the authors
Spain | 16 | 11 | Hydrate, Llasat et al. (2013), Barredo (2007)
Tunisia | 3 | 2 | Press and technical reports
Turkey | 1 | 0 | Press
Total | 172 | 112 |
Estimated discharges were available for 112 reported flood events. Despite the large number of values available, only one representative value -not necessarily the largest one, as it could be overestimated -was selected for each flood event on any one date in one region. Events including dam failures were not included: the Moulouya valley floods in Morocco (May 23, 1963, 170 killed), the Fréjus catastrophe in France (October 2, 1959, 423 killed), the Vajont dam failure in Italy (October 9, 1963, 2,400 killed), and the Val di Stava dam failure in Italy (July 19, 1985, 268 killed).
| 21,957 | [ "760982", "736214" ] | [ "221986", "301093", "3447", "134213", "302049", "502478" ] |
01465998 | en | [ "math" ] | 2024/03/04 23:41:44 | 2016 | https://hal.science/hal-01465998/file/1-New-Teichmuller.pdf | Athanase Papadopoulos
email: [email protected]
QUASICONFORMAL MAPPINGS, FROM PTOLEMY'S GEOGRAPHY TO THE WORK OF TEICHMÜLLER
AMS Mathematics Subject Classification: 01A55, 30C20, 53A05, 53A30, 91D20. Keywords: Quasiconformal mapping, geographical map, sphere projection, Tissot indicatrix.
Quasiconformal mappings, from Ptolemy's geography to the work of Teichmüller
Introduction
The roots of quasiconformal theory lie in geography, more precisely in the study of mappings from (subsets of) the sphere to the Euclidean plane, and the attempts to find the "best" such mappings. The word "best" refers here to mappings with the least "distortion," a notion to be defined appropriately, involving distances, angles and areas. The period we cover starts with Greek antiquity and ends with the work of Teichmüller. The latter, during his short lifetime, brought the theory of quasiconformal mappings to matureness. He made an essential use of it in his work on Riemann's moduli problem [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] [175] [START_REF] Teichmüller | M. Karbe, Complete solution of an extremal problem for the quasiconformal mapping[END_REF], on the Bieberbach coefficient problem [START_REF] Teichmüller | Ungleichungen zwischen den Koeffizienten schlichter Funktionen[END_REF], on value distribution theory [START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF] [START_REF] Teichmüller | Einfache Beispiele zur Wertverteilungslehre[END_REF] and on the type problem [START_REF] Teichmüller | Eine Anwendung quasikonformer Abbildungen auf das Typenproblem[END_REF]. The so-called Teichmüller Existence and Uniqueness Theorem concerns the existence and uniqueness of the best quasiconformal mapping between two arbitrary Riemann surfaces in a given homotopy class of homeomorphisms. This result is a wide generalization of a result of Grötzsch on mappings between rectangles [79] [80]. Teichmüller proved this theorem in the two papers [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] and [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF]. Motivated by his work on extremal quasiconformal mappings, he wrote a paper [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF] on extremal problems from a very general broad point of view. In that paper, he made analogies between algebra, Galois theory, and function theory, based on the idea of extremal mappings. He also wrote a paper on the solution of a specific problem on quasiconformal mappings [START_REF] Teichmüller | Karbe, A displacement theorem for quasiconformal mapping[END_REF], which has several natural generalizations. Regarding Teichmüller's work, Ahlfors, in his 1953 paper on the Development of the theory of conformal mapping and Riemann surfaces through a century, written at the occasion of the hundredth anniversary of Riemann's inaugural dissertation [START_REF] Ahlfors | Development of the theory of conformal mapping and Riemann surfaces through a century[END_REF], says:
In the premature death of Teichmüller, geometric function theory, like other branches of mathematics, suffered a grievous loss. He spotted the image of Grötzsch's technique, and made numerous applications of it, which it would take me too long to list. Even more important, he made systematic use of extremal quasiconformal mappings, a concept that Grötzsch had introduced in a very simple special case. Quasiconformal mappings are not only a valuable tool in questions connected with the type problem, but through the fundamental although difficult work of Teichmüller it has become clear that they are instrumental in the study of modules of closed surfaces. Incidentally, this study led to a better understanding of the role of quadratic differentials, which in somewhat mysterious fashion seem to enter in all extremal problems connected with conformal mapping. Talking also about Teichmüller, Lehto writes in [START_REF] Lehto | A historical survey of quasiconformal mappings[END_REF] (1984), p. 210:
In the late thirties, quasiconformal mappings rose to the forefront of complex analysis thanks to Teichmüller. Introducing novel ideas, Teichmüller showed the intimate interaction between quasiconformal mappings and Riemann surfaces. In the present paper, while trying to convey the main ideas and mathematical concepts related to quasiconformal mappings, we make several digressions which will help the reader including these ideas in their proper context. The largest digression concerns the history of geographical maps. We tried to show the broadness of the subject and its importance in the origin of differential geometry.
The plan of this paper is as follows:
After the present introduction, the paper is divided into 8 sections. Section 2 is an exposition of the history of quasiconformal mappings, before the name was given, and their origin in geography. We start in Greek antiquity, namely, in the work of Ptolemy on the drawing of geographical maps. Ptolemy quotes in his Geography the works of several of his predecessors on the subject, and we shall mention some of them. We then survey works of some major later mathematicians who contributed in various ways to the development of cartography. The names include Euler, Lagrange, Chebyshev, Gauss, Darboux and Beltrami. The long excursion that we take in geography and the drawing of geographical maps will also show that we find there the origin of several important ideas in differential geometry. Furthermore, the present article will show the close relationship between geometry and problems related to the measurement of the Earth, a relationship which was very strong in antiquity and which never lost its force, and which confirms the original meaning of the word geo-metria 1 (earth-measurement).
§3 concerns the work of Nicolas-Auguste Tissot. This work makes the link between old cartography and the modern notion of quasiconformal mappings. Tissot associated to a map f : S 1 → S 2 between surfaces the field of infinitesimal ellipses on S 2 that are the images by the differential of the map f of a field of infinitesimal circles on S 1 . He used the geometry of this field of ellipses as a measure of the distortion of the map. At the same time, he proved several properties of the "best map" with respect to this distortion, making use of a pair of perpendicular foliations on each of the two surfaces that are preserved by the map. He showed the existence of these pairs of foliations, and their uniqueness in the case where the map is not conformal. All these ideas were reintroduced later on as essential tools in modern quasiconformal theory. Tissot's pair of foliations are an early version of the pair of measured foliations underlying the quadratic differential that is associated to an extremal quasiconformal mapping.
The modern period starts with the works of Grötzsch, Lavrentieff and Ahlfors. We present some of their important ideas in Sections 4, 5 and 6 respectively.
Section 7 contains information on Teichmüller's writings and style. Our aim is to explain why these writings were not read. Despite their importance, some of his ideas are still very poorly known, even today.

1 Proclus (5th c. A.D.), in his Commentary on Book I of Euclid's Elements, considers that geometry was discovered by the Egyptians, for the needs of land surveying. Proclus' treatise is sometimes considered as the oldest essay on the history of geometry. It contains an overview of the subject up to the time of Euclid.
Section 8 is a survey of the main contributions of Teichmüller on the use of quasiconformal mappings in the problem of moduli of Riemann surfaces and in problems in function theory.
Our digressions also concern the lives and works of several authors we mentioned and who remain rather unknown to quasiconformal theorists. This concerns in particular Nicolas-Auguste Tissot, who had bright ideas that make the link between old problems on conformal and close-to-conformal mappings 2 that arise in cartography, and modern quasiconformal theory. The name Tissot is inexistent in the literature on quasiconformal mappings, with the exception of references to his work by Grötzsch in his early papers, see [START_REF] Grötzsch | Über die Verzerrung bei nichtkonformen schlichten Abbildungen mehrfach zusammenhängender schlichter Bereiche[END_REF], and a brief reference to him by Teichmüller in his paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF]. We also highlight the work of Mikhaïl Alekseïevitch Lavrentieff on quasiconformal mappings. This work is well known in Russia, but it is rather poorly quoted in the Western literature on this subject.
We hope that these historical notes will help the reader understand the chain of ideas and their development.
There are other historical surveys on quasiconformal mappings; cf. for instance [START_REF] Lehto | A historical survey of quasiconformal mappings[END_REF] and [START_REF] Cazacu | Foundations of quasiconformal mapping[END_REF].
Part 1. Geography
Geography and early quasiconformal mappings
We start by recalling the notion of a quasiconformal mapping, since this will be the central element in our paper.
Let us first consider a linear map L : V 1 → V 2 between two 2-dimensional vector spaces V 1 and V 2 , and let us take a circle C in V 1 centered at the origin. Its image L(C) is an ellipse in V 2 , which is a circle if and only if the linear map L is conformal (that is, angle-preserving). We take as a measure of the distortion of L, or its "deviation from conformality," the quantity

D(L) = log(b/a),

where b and a are respectively the lengths of the great and small axes of the ellipse L(C). The distortion D(L) does not depend on the choice of the circle C. Let f : S 1 → S 2 now be a map between Riemann surfaces, and assume for simplicity that f is of class C 1 . At each point x of S 1 , we consider the distortion D(df x ) of the linear map df x induced by f between the tangent spaces at x and f(x) to S 1 and S 2 respectively. The distortion of f is defined as

D(f) = sup over x in S 1 of D(df x ).

The value of D(f) might be infinite. The map f is said to be quasiconformal if its distortion is finite. It is conformal if and only if its distortion is zero.
This definition has a history, and it went through extensive developments. It also has a large number of applications in various fields (complex analysis, differential equations, metric geometry, physics, geography, etc.). This is the subject of the present paper. Our overall presentation is chronological, reflecting the fact that in mathematics the new theories and new results are built on older ones, and the solution of a problem naturally gives rise to new ones.

2 In the present paper, we use the expression "close-to-conformal" in a broad sense. Generally speaking, we mean by it a mapping between surfaces which is (close to) the best map with respect to a certain desired property. This property may involve angle-preservation (conformality in the usual sense), area-preservation, preservation of ratios of distances, etc. Usually the desired property is a combination of these. Making this idea precise is one of the recurrent themes in cartography. Note that we shall also use the notion of "close-to-conformality" in the sense of a conformal map which minimizes some other distortion parameter, for instance the magnification ratio (see the works of Lagrange, Chebyshev and others that we survey in §2 below).
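As a small numerical illustration of the distortion D defined above (not taken from the paper): for a linear map of the plane given by a 2x2 matrix, the semi-axes b and a of the image of the unit circle are the singular values of the matrix, so D(L) is the logarithm of their ratio.

```python
import numpy as np

def linear_distortion(matrix):
    """D(L) = log(b/a), where b >= a are the semi-axes of the image of the
    unit circle under L, i.e. the singular values of the 2x2 matrix."""
    b, a = np.linalg.svd(np.asarray(matrix, dtype=float), compute_uv=False)
    return float(np.log(b / a))

# A conformal map (rotation composed with scaling) has distortion ~0:
print(linear_distortion([[2.0, -1.0], [1.0, 2.0]]))   # ~0.0
# A stretch by a factor 3 in one direction has distortion log 3:
print(linear_distortion([[3.0, 0.0], [0.0, 1.0]]))    # ~1.0986
```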
Teichmüller explicitly states in his paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] (see §164, titled: Why do we study quasiconformal mappings?) that the notion of quasiconformal mapping first appeared as a generalization of that of conformal mapping in cartography, and he mentions in this respect the name of Nicolas-Auguste Tissot (1824-1897). The latter was a French cartographer and mathematics professor at Lycée Saint-Louis in Paris, 3 and he was also an examiner for the entrance exam of the École Polytechnique. The reader is referred to the paper [START_REF] Papadopoulos | A link between cartography and quasiconformal theory[END_REF] for more information on his life.
Cartography is a science which makes a link between geometry and the real world: "nature," or "physis." The Greek word "geo-graphia" means "drawing the Earth." Indeed, the classical problems of this field concern the representation on a flat surface of portions of a spherical (or spheroidal) surface. At a practical level, the sphere considered was either the Earth (which was known, since the seventeenth century, to be spheroidal rather than spherical) 4 or the heavenly sphere with its multitude of celestial bodies: planets, stars, constellations, etc. A geographical map is a representation in the Euclidean plane of a subset of one of these curved surfaces. 5 Some of the practical questions that arise in cartography led to major problems in abstract mathematics; we shall see this in the next few pages. Furthermore, 3 We note, for the reader who is not familiar with the French educational system, that it was very important to have very good teachers (sometimes they were prominent mathematicians) at the French good lycées, the reason being that these lycées prepared the pupils to the competitive entrance examination at École Normale Supérieure or École Polytechnique. 4 It became generally accepted, starting from the eighteenth century, that the Earth is spheroidal and not spherical, namely, it is slightly flat at the poles. We recall in this respect that the description of the shape of the Earth was a major controversial issue in the seventeenth and eighteenth centuries. This question opposed the English scientists, whose main representative was Newton, to the French, represented by the astronomer Jacques Cassini (1677-1756), the physicist Jean-Jacques d'Ortous de Mairan (1678-1771) and others, who pretended on the contrary that the Earth is stretched out at the poles. Newton had concluded in his Principia that the Earth is flat at the poles, due to its rotation. More precisely, he expected the flattening to be of the order of 1/230. Huygens was on the side of Newton. The controversy on the shape of the Earth is one of the most fascinating scientific issues raised in the eighteenth century science. It led to long expeditions by French and other scientists to the Lapland and Peru, in order to make precise measurements of the meridians near the poles. 5 Of course, at the local level, the Earth cannot be neither spherical nor spheroidal; it has mountains, valleys, canyons, plateaux, etc. In principle, all this has to be taken into account in map drawing. But for the mathematics that is discussed here, it is sufficient to assume that the Earth is spherical. From a practical point of view, this would be perfectly correct if we assume that the map is intended for airplanes that travel a few thousand miles above the surface of the Earth in such a way that they do not face any obstacle.
there is an aesthetic side in the art of map drawing -this is especially visible in ancient maps -which certainly complements the beauty of pure geometrical thought. An old representation of coordinates on the Earth by meridians and parallels is reproduced in Figure 1. As a science, the drawing of geographical maps dates back to Greek antiquity. It was known to the mathematicians and geographers of that epoch that there is no representation of the sphere on the plane that is faithful in the sense that proportions of distances are respected. Despite the fact that no mathematical proof of this statement existed, the idea was widely accepted. The notion of conformal (angle-preserving) mapping existed. In particular, it was known that areas were always distorted by such a mapping. The relative magnitudes of the various lands needed to be represented in a faithful manner. This came naturally as a need for practical problems linked to harvesting, the distribution of water, calculating taxes, etc. The question of finding the "best" maps, that is, those with a minimal amount of "distortion," with this word referring to some specific property or a compromise between a few desirable properties, arose very naturally. Angle-preservation, or closeness to angle-preservation, was another a desirable property. Thus, the relation between geography, the theory of quasiconformal mappings, and what we call closeto-conformal mappings has many aspects.
Several prominent mathematicians were at the same time geographers and astronomers. They were naturally interested in the drawing of geographical maps and they were naturally led to problems related to conformality and close-to-conformality. Among these mathematicians, in Greek Antiquity, the name of Claudius Ptolemy (second century A.D.) comes to the forefront. His work on astronomy and geography is well known, especially through his Mathematical syntaxis and his Geography. 6 Besides designing his famous maps of the Celestial sphere, Ptolemy conceived maps of the Earth, using the mathematical tools he developed in his astronomical works. In his Geography (cf. [START_REF] Ptolémée | Traité de géographie, translated from the Greek into French by l'Abbé de Halma[END_REF] and [154]) he uses the comparison of arcs of circles on the Celestial sphere with corresponding arcs of circles on the Earth, in order to compute distances. One should also mention Ptolemy's Planisphaerium, a treatise which reached us only through Arabic translations.
Like several mathematicians who preceded him, Ptolemy knew that in drawing geographical maps, it is not possible to preserve at the same time angles and ratios of distances, and he searched for the best reasonable compromise between the corresponding distortions.
It may be useful to recall in this respect that spherical geometry was developed several centuries before Ptolemy by Greek geometers. One may mention the works of Hipparchus of Nicaea (2nd c. B.C.), Theodosius of Tripolis (2nd c. B.C.) and Menelaus of Alexandria (1st-2nd c. A.D.). The notion of spherical angle was present in these works. In particular, conformality in the sense of angle-preservation was a meaningful notion.
Ptolemy's Geography is divided into eight books. It starts with the distinction between what he calls "regional cartography" and "world cartography." Different techniques are used in each subfield, and the goals in each case is different. In regional cartography, there is an aesthetic dominating element, whereas world cartography involves precise measurements and is used for practical needs. Let us quote Ptolemy ([154], Book 1, p. 58): 7 The goal of regional cartography is an impression of a part, as when one makes an image of just an ear or an eye; but [the goal] of world cartography is a general view, analogous to making a portrait of the whole head. [...] Regional cartography deals above all with the qualities rather than the quantities of the things that it sets down; it attends everywhere to likeness, and not so much to proportional placements. World cartography, on the other hand, [deals] with the quantities more than the qualities, since it gives consideration to the proportionality of distances for all things, but to likeness only as far as the coarser outlines [of the features], and only with respect to mere shape. Consequently, regional cartography requires landscape drawing, and no one but a man skilled in drawing would do regional cartography. But world cartography does not [require this] at all, since it enables one to show the positions and general configuration [of features] purely by means of lines and labels. 6 The two majors works of Ptolemy are his Mathematical syntaxis, known as the Almagest, a comprehensive treatise in thirteen books on astronomy (which also contains chapters on Euclidean and spherical geometry), and his Geography, an exposition of all what was known in geography in the second-century Greco-Roman world. It includes a compilation of geographical maps. The name Almagest derives from the Arabic "al-Majist ¯ī," which is a deformation of the Greek superlative "Megistos," meaning "the Greatest" and referring to the expression "the Greatest Treatise." The Arabs had the greatest admiration for this work, and several translations, sponsored by the Arabic rulers, were performed starting from the ninth century. A Latin translation was done in the twelfth century, from the Arabic. The Geography was translated into Arabic in the ninth century and, later on, from this language into Latin. One of the oldest such translations was made by Herman of Carinthia in 1143. We refer the interested reader to the discussion and references in the recent paper [START_REF] Abgrall | Les débuts de la projection stéréographique: conceptions et principes[END_REF]. 7 The English translation is from the edition of Berggren and Jones [154].
For these reasons, [regional cartography] has no need of mathematical method, but here [in world cartography] this element takes absolute precedence. Thus the first thing that one has to investigate is the Earth's shape, size, and position with respect to its surroundings [i.e., the heavens], so that it will be possible to speak of its own part, how large it is and what it is like, and moreover [so that it will be possible to specify] under which parallels of the celestial sphere each of the localities of this [known part] lies. From this last, one can also determine the lengths of nights and days, which stars reach the zenith or are always borne above or below the horizon, and all the things that we associate with the subject of habitations.
These things belong to the loftiest and loveliest of intellectual pursuits, namely to exhibit to human understanding through mathematics [both] the heavens themselves in their physical nature (since they can be seen in their revolution about us), and [the nature of] the Earth through a portrait (since the real [Earth], being enormous and not surrounding us, cannot be inspected by any one person either as a whole or part by part). Thus, regional cartography has a taste of topology. Distortions, whether at the level of angles, distances or areas, play a negligible role in this art. For a modern example of a map falling into Ptolemy's category of regional cartography, one may think of a railroad or subway network map, where the information that is conveyed is only contained in the lines and their intersections. In fact, in such a map, it is the topology of the network that is the interesting information.
Ptolemy's writings contain valuable information on works done by his predecessors. It is important to remember that Ptolemy, like Anaximander and the other Greek mathematicians and geographers that he quotes, considered that the Earth has a spherical shape. The myth of a flat Earth had very few supporters among scientists in Greek antiquity. In his De Caelo [START_REF] Kühnau | The Complete Works of Aristotle: The Revised Oxford Translation[END_REF] (294 b14-15), Aristotle reports that the pre-Socratic philosophers Anaximenes (6th century B.C.), who highlighted the concept of the infinite, his student Anaxagoras (5th century B.C.), who was the first to establish a philosophical school in Athens, and Democritus (5th-4th century B.C.), who formulated an atomic theory of the universe, already considered that the Earth is spherical. Plato added philosophical reasons to the belief in a spherical Earth, namely, he considered that the Demiurge could only give to the material world the most symmetrical form, that is, the spherical one. In the Timaeus (33b) [151], we read:
He wrought it into a round, in the shape of a sphere, equidistant in all directions from the center to the extremities, which of all shapes is the most perfect and the most self-similar, since he deemed that the similar is infinitely fairer than the dissimilar. And on the outside round about, it was all made smooth with great exactness, and that for many reasons.
Let us note by the way that the eleventh-century Arabic polymath Ibn al-Haytham, at the beginning of his treatise on isoepiphany, declares that the reason for which the entire universe and the Earth have a spherical shape is that the sphere has the largest volume among the solid figures having the same area. (See the translation of Ibn al-Haytham's treatise in [START_REF] Rashed | Les mathématiques infinitésimales du IXème au XIème siècle[END_REF], English version, p. 305-342.)
The stereographic projection, that is, the radial projection of the sphere from a point on this surface (say the North pole) onto a plane passing through the center and perpendicular to the radius passing through this point (in the case considered, this would be the equator plane), which has probably always been the most popular projection of the sphere among mathematicians, was already used by Hipparchus back in the second century B.C.; cf. Delambre [START_REF] Delambre | Histoire de l'astronomie ancienne[END_REF], Vol. 1, p. 184ff., d'Avezac [START_REF]Coup d'oeil historique sur la projection des cartes de géographie (suite et fin), notice lue à la Société de géographie de Paris dans sa séance publique du 19 décembre 1862[END_REF] p. 465 and Neugebauer [START_REF] Neugebauer | The early history of the astrolabe[END_REF] p. 246. Let us add that to Hipparchus is also attributed the invention of the astrolabe, a multipurpose hand-held geographical instrument which is made out of several metallic plates that fit and roll into each other and where each of them is considered as a planar representation of the celestial sphere. The astrolabe allows its user to locate himself (latitude and longitude), know the hour of the day or of the night and find the location of the main stars and planets. The instrument was used by astronomers and navigators. It became very popular in the Islamic world, starting from the eighth century, because it allowed its user to find, at a given place, the exact lunar time and the direction of Mecca for everyday prayer. Obviously, nontrivial geometrical problems arise in the conception of the astrolabe, since this requires the transformation of spherical periodic movements of the celestial sphere (the global movement of the whole sphere from East to West around an axis which is perpendicular to the equator, the movement of the sun and of the planets on the ecliptic circle, etc.) into plane geometrical ones. Since Greek antiquity, the mathematical theory of the astrolabe involved several aspects of geometry: Euclid's plane and solid geometry, the theory of proportions, Apollonius' theory of conics, and descriptive geometry. The use of the astrolabe disappeared with the popularization of modern clocks. There are basically three kinds of writings on the astrolabe: its construction, its use, and the mathematical theory that is at its basis. Several Arabic scholars, including al-Ṣāghānī, al-Qūhī, Ibn Sahl and Ibn 'Irāq, described the astrolabe in several writings, and they transmitted it to the West. The interested reader may consult the book [START_REF] Rashed | Les mathématiques infinitésimales du IXème au XIème siècle[END_REF] by Rashed.
The stereographic projection was not the most useful one for the drawing of geographical maps, because the distortion of distances and areas becomes very large at points which are far from the equator. Better projections were known to geographers; we shall mention some of them in what follows. Neugebauer, in his survey [START_REF] Neugebauer | Mathematical methods in ancient astronomy[END_REF] (p. 1037-1039) mentions a cylindrical projection by Marinus of Tyre in which distances are preserved along all the meridians and along the parallel passing through the island of Rhodes (36°). 9 In this projection, distances along the parallels which are north of Rhodes are expanded, whereas those along the parallels south of this island are contracted. Marinus' maps are considered as models for the Mercator maps, which appeared fourteen centuries later. It is also known that Christopher Columbus, in the preparation of his journeys, relied in part on Marinus' maps and computations, in part contained in Ptolemy's Almagest. Ptolemy introduced conical projections which in some cases are improvements of the one of Marinus. In these projections, distances along all meridians are preserved, and they are also preserved along the parallel of Rhodes and along two other parallels: that of Thule (63°) and the equator (0°). We also mention that Marinus of Tyre, besides being a geographer, was, like Ptolemy, a mathematician. In Chapters XI to XX of his Geography (cf. [START_REF] Ptolémée | Traité de géographie, translated from the Greek into French by l'Abbé de Halma[END_REF] p. 40ff.), Ptolemy criticizes Marinus' cylindrical projections because of their significant amount of distortion of lengths.
During the middle ages, science was flourishing in the Arabic world, and geography was part of it. The Arabs studied the works of Marinus of Tyre and Ptolemy, and they built upon them a whole school of geographical map drawing, including valuable new developments. One should mention at least the works of the historian, traveler and geographer al-Mas'ūdī (c. 896-956), and those of the mathematician, astronomer, philosopher, geographer, physician and pharmacologist al-Bīrūnī (973 -1048) who wrote an important treatise in which he describes several new projections; cf. [START_REF] Richter-Bernburg | Al-Bīrūnī's Maqala fi tastih wa tabtikh al-kuwar: A translation of the preface with notes and commentary[END_REF].
During the Renaissance, the need for drawing geographical maps was strongly motivated by the discovery of new lands. Leonardo da Vinci and Albrecht Dürer, who, besides being artists, were remarkable mathematicians, were highly interested in geography. They worked on the technical problems raised by the drawing of geographical maps, cf. [START_REF] Oberhummer | Leonardo da Vinci and the art of Renaissance in its relation to geography[END_REF], [START_REF] Major | Memoir on a mappemonde by Leonardo da Vinci, being the earliest map hitherto known containing the name of America: now in the royal collection at Windsor London[END_REF] and [START_REF] Panofsky | The life and art of Albrecht Dürer[END_REF], and they also valued the beauty of these drawings. See Figure 2. In the modern period, the works of Euler and Gauss on geography are well known, but there are also many others. Mathematical questions related to conformality and close-to-conformality that arise from the drawing of geographical maps or that are directly related to that art were thoroughly studied by Lambert, Lagrange, Chebyshev and Beltrami. Furthermore, in the works of several French geometers including Liouville, Bonnet, Darboux and others, cartography became an application of infinitesimal calculus and an important chapter of the differential geometry of surfaces. We shall briefly review some of these important works. They constitute a significant part of the history of quasiconformal mappings.
We start with Euler, then talk about Lambert, Lagrange, Chebyshev, Gauss, Liouville, Bonnet, Darboux and Beltrami, before reviewing the pioneering work of Tissot.
Euler, at the Academy of Sciences of Saint Petersburg, besides being a mathematician, had the official position of cartographer. He participated in the huge project of drawing geographical maps of the new Russian Empire. Having reliable maps was important to the new Russian rulers, and they were used for several purposes: determining the precise boundary of the empire, determining areas of pieces of land for taxation, irrigation, and other purposes, giving precise distances between cities, with the lengths of the various roads joining them, etc. It is interesting to know that Euler was so much involved in this activity that he considered (or at least claimed) that it was responsible for the deterioration of his vision. In a letter to Christian Goldbach, dated August 21st (September 1st, New Style), 1740, he writes (cf. [START_REF] Euler | Series quarta A, commercium epistolicum, Volumen quartum, Pars prima[END_REF] p. 163, English translation p. 672):
Geography is fatal to me. As you know, Sir, I have lost an eye working on it; and just now I nearly risked the same thing again. This morning I was sent a lot of maps to examine, and at once I felt the repeated attacks. For as this work constrains one to survey a large area at the same time, it affects the eyesight much more violently than simple reading or writing. I therefore most humbly request you, Sir, to be so good as to persuade the President by a forceful intervention that I should be graciously exempted from this chore, which not only keeps me from my ordinary tasks, but also may easily disable me once and for all. I am with the utmost consideration and most respectfully, Sir, your most obedient servant. Among Euler's works on geography, we mention his Methodus viri celeberrimi Leonhardi Euleri determinandi gradus meridiani pariter ac paralleli telluris, secundum mensuram a celeb. de Maupertuis cum sociis institutam (Method of the celebrated Leonhard Euler for determining a degree of the meridian, as well as of a parallel of the Earth, based on the measurement undertaken by the celebrated de Maupertuis and his colleagues) [START_REF] Euler | Methodus viri celeberrimi Leonhardi Euleri determinandi gradus meridiani pariter ac paralleli telluris, secundum mensuram a celeb. de Maupertuis cum sociis institutam[END_REF]. This work was presented to the Academy of Sciences of Saint-Petersburg in 1741 and published in 1750. 10 From a more theoretical point of view, Euler published in 1777 three memoirs on mappings from the sphere onto the Euclidean plane: 10 There was sometimes a significant lapse of time between the time when Euler wrote a paper and the time when the paper appeared in print. One reason for this delay is that the journals of the Academies of Saint Petersburg and of Berlin, where most of Euler's papers were published, received a large number of such papers, and, since the number of pages of each volume was limited, the backlog became gradually substantial. In fact, several papers of Euler were presented to these Academies even after his death and published much later. Starting from the year 1729, and until 50 years after Euler's death, his works continued to fill about half of the scientific part of the Actes of the Saint-Petersburg Academy. Likewise, between the years 1746 and 1771, almost half of the scientific articles of the Mémoires of the Academy of Berlin were written by Euler. It is also known that in some cases, Euler purposely delayed the publication of his memoirs, in order to leave the primacy of the discoveries to a student or young collaborator working on the same subject, especially when he considered that their approach was at least as good as his. A famous instance of this generous attitude is when Euler waited until Lagrange, who was nineteen years younger than him, had finished and published his work on the calculus of variations before sending his own work to his publisher. This was pointed out by several authors. The reader can refer for instance to Euler's obituary by Condorcet [START_REF] De Caritat De Condorcet | Euler par le Marquis de Condorcet[END_REF] p. 307.
(1) De repraesentatione superficiei sphaericae super plano (On the representation of spherical surfaces on a plane) [START_REF] Euler | De repraesentatione superficiei sphaericae super plano[END_REF]; (2) De proiectione geographica superficiei sphaericae (On the geographical projections of spherical surfaces) [START_REF] Euler | De proiectione geographica superficiei sphaericae[END_REF]; (3) De proiectione geographica Deslisliana in mappa generali imperii russici usitata (On Delisle's geographic projection used in the general map of the Russian empire) [START_REF] Euler | De proiectione geographica Deslisliana in mappa generali imperii russici usitata[END_REF].
These three memoirs were motivated by the practical question of drawing geographical maps, and they led to mathematical investigations on conformal and close-to-conformal projections. The title of the third memoir refers to Joseph-Nicolas Delisle (1688-1768), a leading French geographer, astronomer, meteorologist, astrophysicist, geodesist, historian of science and orientalist who worked, together with Euler, at the Saint Petersburg Academy of Sciences. 11 We now review important ideas contained in these three memoirs. The first memoir [START_REF] Euler | De repraesentatione superficiei sphaericae super plano[END_REF] contains important results and techniques that place the questions on geographical maps in the setting of differential calculus and the calculus of variations. In the introduction, Euler declares that he does not consider only maps obtained by central projection of the sphere onto the plane which are subject to the rules of perspective (he calls the latter "optical projections"), but he considers the mappings "in the widest sense of the word." Let us quote him (§ 1 of [START_REF] Euler | De repraesentatione superficiei sphaericae super plano[END_REF]):
In the following I consider not only optical projections, in which the different points of a sphere are represented on a plane as they appear to an observer at a specified place; in other words, representations by which a single point visible to the observer is projected onto the plane by the rules of perspective; indeed, I take the word "mapping" in the widest possible sense; any point of the spherical surface is represented on the plane by any desired rule, so that every point of the sphere corresponds to a specified point in the plane, and inversely; if there is no such correspondence, then the image of a point from the sphere is imaginary.
In § 9, Euler proves that there is no "perfect" or "exact" mapping from the sphere onto a plane, that is, a mapping which is distance-preserving (up to some factor) on some set of curves. This result is obtained through a study of partial differential equations. 11 Delisle is one of the eminent scientists from Western Europe who were invited by Peter the Great at the foundation of the Russian Academy of Sciences. The monarch signed the foundational decree of his Academy on February 2, 1724. The group of scientists that were present at the opening ceremony included the mathematicians Nicolaus and Daniel Bernoulli and Christian Goldbach. Delisle joined the Academy in 1726, that is, two years after its foundation, and one year before Euler. Delisle was in charge of the observatory of Saint Petersburg, situated on the Vasilyevsky Island. This observatory was one of the finest in Europe. Thanks to his position, Delisle had access to the most modern astronomical instruments. He was in charge of drawing maps of the Russian empire. Euler assisted him at the observatory and in his map drawing. Delisle stayed in Saint Petersburg from 1726 to 1747. In the first years following his arrival in Russia, Euler helped Delisle in recording astronomical observations which were used in meridian tables. Delisle returned to Paris in 1747, where he founded the famous observatory at the Hôtel de Cluny, thanks to a large amount of money he had accumulated in Russia. He also published there a certain number of papers containing information he gathered during his various journeys inside Russia and in the neighboring territories (China and Japan). It seems that some of the information he published was considered sensitive, and because of that, Delisle acquired the reputation of having been a spy.
The precise statement of Euler's theorem, with a modern proof based on Euler's ideas, is given in the paper [START_REF] Charitos | On a theorem of Euler on mappings from the sphere to the plane[END_REF].
With this negative result established, Euler says that one has to look for best approximations. He writes: "We are led to consider representations which are not similar, so that the spherical figure differs in some manner from its image in the plane." He then examines several particular projections of the sphere, systematically searching for the partial differential equations that they satisfy. In doing so, he highlights the following three kinds of maps:
(1) Maps where the images of all the meridians are perpendicular to a given axis (the "horizontal" axis in the plane), while all parallels are sent parallel to it. (2) Maps which "preserve the properties of small figures," which, in his language, means conformal. (3) Maps where surface area is represented at its true size.
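In modern notation, and only as a schematic illustration of these three conditions (Euler's own formulas are not reproduced here), all three kinds of maps can be seen in the family of cylindrical projections of the unit sphere, which send a point of latitude $\varphi$ and longitude $\lambda$ to
\[
x=\lambda,\qquad y=h(\varphi).
\]
Meridians then go to vertical lines and parallels to horizontal lines, which is condition (1). Since the length element of the unit sphere is given by $ds^{2}=d\varphi^{2}+\cos^{2}\varphi\,d\lambda^{2}$, such a map is angle-preserving when $h'(\varphi)=1/\cos\varphi$, that is, $y=\log\tan\left(\frac{\pi}{4}+\frac{\varphi}{2}\right)$ (the Mercator projection, an instance of condition (2)), and it is area-preserving when $h'(\varphi)=\cos\varphi$, that is, $y=\sin\varphi$ (the Lambert cylindrical equal-area projection, an instance of condition (3)).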
Euler gives examples of maps satisfying each of the above three properties. These maps are obtained in various ways: projecting the sphere onto a tangent plane, onto a cylinder tangent to the equator, etc. Euler then studies distance and angle distortion under these various maps. At the end of his memoir (§ 60), he notes that his work has no immediate practical use: 12 In these three hypotheses is contained everything ordinarily desired from geographic as well as hydrographic maps. The second hypothesis treated above even covers all possible representations. But on account of the great generality of the resulting formulae, it is not easy to elicit from them any methods of practical use. Nor, indeed, was the intention of the present work to go into practical uses, especially since, with the usual projections, these matters have been explained in detail by others. In contrast, in the memoir [START_REF] Euler | De proiectione geographica superficiei sphaericae[END_REF], Euler studies projections that are useful for practical applications. He writes (§ 20):
Moreover, let it be remarked, that this method of projection is "extraordinarily appropriate" for the practical applications required by Geography, for it does not distort too much any region of the Earth. It is also important to note that with this projection, not only are all meridians and circles of parallels exhibited as circles or as straight lines, but all great circles on the sphere are expressed as circular arcs or straight lines. The memoir [START_REF] Euler | De proiectione geographica Deslisliana in mappa generali imperii russici usitata[END_REF] also contains important ideas. Euler starts by reviewing the main properties of the stereographic projection: the images of the parallels and meridians intersect at right angles, and it is conformal (he says, a similitude on the small scale). He then exhibits the inconveniences of this projection: length is highly distorted in the large, especially if one has to draw maps of large regions of the Earth. He gives the example of the map of Russia in which the province of Kamchatka is distorted by a factor of four, compared to a region at the center of the map. Another disadvantage he mentions is the distortion of the curvature of the meridians, even if they are sent to circles, and he again gives the example of the Kamchatka meridian. He then considers another possible projection which is commonly used, where the meridians are sent to straight lines which intersect at the image of the pole, and which presents similar disadvantages. Euler then explains the advantage of a new map elaborated by Delisle, and he develops a mathematical theory of this map. It is interesting to recall how Euler formulates the problem, as a problem of "minimizing the maximal error" over an entire region: Delisle, the most celebrated astronomer and geographer of the time, to whom the care of such a map was first entrusted, in trying to fulfill these conditions, made the relationship between latitude and longitude exact at two noteworthy parallels. He was of the opinion that if the named circles of parallel were at the same distance from the middle parallel of the map as from its outermost edges, the deviation could nowhere be significant. Now the question is asked, which two circles of parallel ought to be chosen, so that the maximum error over the entire map be minimized.
In developing the mathematical theory of Delisle's map, Euler finds that one of its advantages is that while meridians are represented by straight lines, the images of the other great circles "do not deviate considerably from straight lines" (§ 22 of Euler's paper [START_REF] Euler | De proiectione geographica Deslisliana in mappa generali imperii russici usitata[END_REF]). Such a condition will appear several decades later (almost a century), in a paper by Beltrami [START_REF] Beltrami | Risoluzione del problema: Riportare i punti di una superficie sopra un piano in modo che le linee geodetiche vengano rappresentate da linee rette[END_REF], which we review below. It is probably the first time that we encounter, formulated in a mathematical setting, an early notion of "quasigeodesic." Euler then investigates the way the image of a great circle on the map differs from a straight line (§ 23ff.). The conclusion of the memoir is:
In this projection is obtained the extraordinary advantage, that straight lines, which go from any point to any other point, correspond rather exactly to great circles and therefore the distances between any places on the map can be measured by using a compass without considerable error. Because of these important characteristics the projection discussed was preferred before all others for a general map of the Russian Empire, even though, under rigorous examination, it differs not a little from the truth.
Euler had several young collaborators and colleagues working on cartography. We mention in particular F. T. Schubert, 13 one of his direct followers who became a specialist of spherical geometry, a field closely related to cartography and on which Euler published several memoirs. We refer the reader to the article [START_REF] Papadopoulos | On the works of Euler and his followers on spherical geometry[END_REF] for more information about Schubert. Schubert's papers on cartography include [165], [START_REF] Schubert | De proiectione Sphaeroidis ellipticae geographica, Dissertatio Secunda. Proiectio stereographica horizontalis feu obliqua[END_REF] and [START_REF] Schubert | De proiectione Sphaeroidis ellipticae geographica, Dissertatio Tertia[END_REF]. The forthcoming book [START_REF]The works of Euler and his followers on spherical geometry[END_REF] contains an analysis of the works of Euler and his collaborators on spherical geometry. 13 Friedrich Theodor von Schubert (1758-1825), like Euler, was the son of a protestant pastor. His parents, like Euler's parents, wanted their son to study theology. Instead, Schubert decided to study mathematics and astronomy, and he did it without the help of teachers. He eventually became a specialist in these two fields, and he taught them as a private teacher during his stays outside his native country (Germany). After several stays in different countries, Schubert was appointed, in 1785, assistant at the Academy of Sciences of Saint Petersburg, at the class of geography. This was two years after Euler's death. In 1789 he became full member of that Academy, and in 1803 director of the astronomical observatory of the same Academy.
After Euler, we talk about his colleague and fellow countryman J. H. Lambert, 14 who is considered as the founder of modern cartography. His Anmerkungen und Zusätze zur Entwerfung der Land-und Himmelscharten (Remarks and complements for the design of terrestrial and celestial maps, 1772) [START_REF] Lambert | Anmerkungen und Zusätze zur Entwerfung der Land-und Himmelscharten[END_REF] contains seven new geographical maps, each one having important features. They include (among others) the so-called Lambert conformal conic projection, the transverse Mercator, the Lambert azimuthal equal-area projection, and the Lambert cylindrical equal-area projection. The Lambert projections are mentioned in the modern textbooks of cartography, and some of them were still in use, for military and other purposes, until recent times (that is, before the appearance of satellite maps). For instance, the Lambert conformal conic projection was adopted by the French artillery during the First World War. It is obtained by projecting conformally the surface of the Earth on a cone that touches the sphere along a parallel. The images of the meridians are concurrent lines and those of the parallels are circles centered at the intersection point of the meridians. When the cone is unrolled it has the form of a fan. The distortion is minimized between two parallels within which lies the region of interest. In the same memoir [START_REF] Lambert | Anmerkungen und Zusätze zur Entwerfung der Land-und Himmelscharten[END_REF], Lambert gave a mathematical characterization of an arbitrary angle-preserving map from the sphere onto the plane. Euler later gave another solution of the same question. Lambert, in his work, takes into account the fact that the Earth is spheroidal and not spherical. The memoir [START_REF] Lambert | Anmerkungen und Zusätze zur Entwerfung der Land-und Himmelscharten[END_REF] is part of the larger treatise Beiträge zum Gebrauche der Mathematik und deren Anwendung (Contributions to the use of mathematics and its applications) [START_REF] Lambert | Beiträge zum Gebrauche der Mathematik und deren Anwendung[END_REF].
Lagrange, whose name is associated with that of Euler for several mathematical results, published two memoirs on cartography, [START_REF] De Lagrange | Sur la construction des cartes géographiques[END_REF], two years after those of Euler 14 Depending on the biographers, the Alsatian self-taught mathematician Johann Heinrich Lambert (1728-1777) is considered as French, Swiss or German. He was born in Mulhouse, today the second largest French city in Alsace, but which at that time was tied to the Swiss Confederation by a treaty which guaranteed to it a certain independence from the Holy Roman Empire and a certain relative peace away from the numerous conflicts which were taking place in the neighboring French regions. Lambert belonged to a French protestant family which was exiled after the revocation by Louis XIV of the Edict of Nantes. We recall that this edict of Nantes was issued (on April 13, 1598) by the king Henri IV of France at the end of the so-called Wars of Religion which devastated France during the second half of the 16th century. Its goal was to grant the Protestants a certain number of rights, with the aim of promoting civil unity. Louis XIV suspended the Edict of Nantes on October 22, 1685, by another edict, called the Edict of Fontainebleau. At the same time, he ordered the closing of the Protestant churches and the destruction of their schools. This caused the exile of at least 200 000 protestants out of France.
Euler had a lot of consideration for his compatriot's work and he recommended him for a position at the Berlin Academy of Sciences, where Lambert was hired and where he spent the last ten years of his life. We mention by the way that Lambert was one of the most brilliant precursors of hyperbolic geometry, and in fact, he is considered as the mathematician who came closest to discovering it, before Gauss, Lobachevsky and Bolyai. Indeed, in his Theorie der Parallellinien (Theory of parallels), written in 1766, Lambert developed the bases of a geometry where all the Euclidean postulates hold except the parallel postulate, and where the latter is replaced by its negation. In developing this geometry, his aim was to find a contradiction, since he believed, like all the other geometers before him, that such a geometry could not exist. The memoir ends without conclusion, and Lambert did not publish it, the conjectural reason being that Lambert was not sure at the end whether such a geometry exists or not. The memoir was published, after Lambert's death, by Johann III Bernoulli. We refer the reader to [START_REF] Papadopoulos | La théorie des parallèles de Lambert, critical edition of the Theorie der Parallellinien with a French translation and mathematical and historical commentary[END_REF] for the first translation of Lambert's Theorie der Parallellinien together with a mathematical commentary, and to the paper [START_REF] Papadopoulos | Hyperbolic geometry in the work of J. H. Lambert[END_REF] for an exposition of the main results in his memoir. appeared in print. In these memoirs, Lagrange extends some works by Euler and Lambert. In the introduction of his memoir [START_REF] De Lagrange | Sur la construction des cartes géographiques[END_REF], p. 637, he writes: 15 A geographical map is nothing but a plane figure which represents the surface of the Earth, or some part of it. [...] But the Earth being spherical, or spheroidal, it is impossible to represent on a plane an arbitrary part of its surface without altering the respective positions and distances of the various places; and the greatest perfection of a geographical map will consist in the least alteration of these distances. Lagrange then surveys various projections of the Earth and of the Celestial sphere that were in use at his time. Among them is the stereographic projection, with its two main properties, which he recalls:
(1) The images of the circles of the sphere are circles of the plane. Thus, in particular, if we want to find the image of a meridian or a parallel, then the image of three points suffices. (2) The stereographic projection is angle-preserving, a property which is a consequence of the first one. Lagrange attributes the stereographic projection to Ptolemy ([97] p. 639) who used it in the construction of astrolabes and celestial planispheres. He says that Ptolemy was aware of the first property, and that he describes it in a treatise known as the Sphaerae a planetis projectio in planum. 16 The Latin version is translated from the Arabic; the Greek original is lost. Lagrange says that the angle-preserving property of the stereographic projection may not have been noticed by the Greek astronomer. He then introduces other conformal projections, and he notes that there are infinitely many such maps. In fact, after introducing the great variety of maps from the sphere onto a plane that are projections from a point (allowing also this point to be at infinity), Lagrange considers more general maps. He declares, like Euler before him, that one may consider geographical maps which are arbitrary representations of the surface of the sphere onto a plane. Thus, he is led to the question:
What is a general mapping between two surfaces? The question is a two-dimensional analogue of the question What is a function? which was an important issue in eighteenth-century mathematics. It is related to the question of whether a general function is necessarily expressed by an analytic formula or not, which gave rise to a fierce debate involving the most prominent mathematicians of that time, and which lasted several decades; see e.g. the exposition in [START_REF] Papadopoulos | Physics in Riemann's mathematical papers[END_REF].
Returning to the maps from the sphere onto the Euclidean plane, Lagrange writes (p. 640) that "the only thing we have to do is to draw the meridians and the parallels according to a certain rule, and to plot the various places relatively to these lines, as they are on the surface of the Earth with respect to the circles of longitude and latitude." Thus, the images of the meridians and the parallels are no longer restricted to be circles or lines. They can be, using Lagrange's terms, "mechanical lines," that is, lines drawn by any mechanical device. We note that this notion of curve as a mechanical line has existed since ancient Greece. (In fact, this is the way curves were defined.) It is known today that these are the most general curves that one can consider; cf. the result of Kempe that any bounded piece of an algebraic curve is drawable by some linkage [START_REF] Kempe | On a General Method of describing Plane Curves of the nth degree by linkwork[END_REF] and its wide generalization by W. P. Thurston, and by Nikolay Mnev who proved a conjecture of Thurston on this matter [START_REF] Mnev | The universality theorems on the classification problem of configuration varieties and convex polytope varieties[END_REF]. This work is reported on by Sossinsky in [START_REF] Sossinsky | Configuration spaces of planar linkages[END_REF]. Regarding the generality of the mappings from the sphere onto the plane, Lagrange refers to the work of Lambert, who was the first to study arbitrary angle-preserving maps from the sphere to the plane, and who, in his memoir [START_REF] Lambert | Beiträge zum Gebrauche der Mathematik und deren Anwendung[END_REF], solved the problem of characterizing a least-distortion map among those which are angle-preserving. Lagrange declares that Lambert was the first to consider arbitrary maps from the sphere to the plane. He recalls that Euler, after Lambert, gave a solution of the problem of minimizing distortion of an arbitrary angle-preserving map, and he then gives his own solution, by a method which is different from those of Lambert and Euler. He considers in detail the case where the images of the meridians and the parallels are circles. He solves the problem of finding all the orthonormal projections of a surface of revolution which send meridians and parallels to straight lines or circles.
We highlight a formula by Lagrange that gives the local distortion factor of a conformal map. This quantity, which Lagrange denotes by m, is a function on the spherical (or the spheroidal) surface which measures the local distortion of length at a point. It is defined as the ratio of the infinitesimal length element at the image to the infinitesimal length element at the source. The fact that the map is conformal makes this quantity well defined. It is given by the following formula (using Lagrange's notation, p. 646 of Vol. IV of his Collected works [START_REF] De Lagrange | Sur la construction des cartes géographiques[END_REF]):
\[
(1)\qquad m=\frac{\sqrt{f(u+t\sqrt{-1})\,F(u-t\sqrt{-1})}}{\sin s},
\]
where, in the case where the Earth is considered as spherical, (s, t) are the coordinates such that:
- s is the length of the arc of meridian counted from the pole;
- t is the angle which the meridian makes with a fixed meridian;
- $u = \log\big(k\tan\frac{s}{2}\big)$;
- f and F are arbitrary functions.
Lagrange then gives a formula for the distortion of maps which send meridians and parallels to circles, or to circles and straight lines. These formulae were used later on by Chebyshev in his work on geographical maps, which we review below. We shall return to Lagrange's work below, when we talk about Chebyshev. In the meanwhile, let us talk about Gauss.
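Before turning to Gauss, here is a small sanity check of formula (1), phrased in modern terms and not taken from Lagrange's memoir. With the normalization $k=1$ one has
\[
\sin s=\frac{2}{e^{u}+e^{-u}},
\]
and the squared length element of the sphere becomes $\sin^{2}s\,(du^{2}+dt^{2})$ in the coordinates $(t,u)$, so that $\sin s$ is exactly the factor by which spherical lengths differ from Euclidean lengths in this chart. For the Mercator projection, which uses $t$ and $u$ themselves (up to sign conventions) as plane coordinates, the magnification is therefore $1/\sin s=(e^{u}+e^{-u})/2$; this is, up to a constant factor, what formula (1) gives when the arbitrary functions $f$ and $F$ are taken to be constant, and it is the familiar blow-up of the Mercator scale near the poles.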
Gauss, in the preface of his paper [START_REF] Gauss | Allgemeine Auflösung der Aufgabe: die Theile einer gegebnen Fläche auf einer andern gegebnen Fläche so abzubilden, daß die Abbildung dem Abgebildeten in den kleinsten Theilen ähnlich wird[END_REF], declares that his aim is only to construct geographical maps and to study the general principles of geodesy for the task of land surveying. Indeed, it was this task that led him gradually to the investigation of surfaces and their triangulations, 17 to the method of least squares (1821), and then to his famous treatise Disquisitiones generales circa superficies curvas (General investigations on curved surfaces) (1827). In the latter, Gauss gives examples extracted from his own measurements. 17 Triangulations of surfaces originate in cartography. The idea is that to know the position of a point, it is often practical to measure the angles in a triangle whose vertices are this point and two other reference points, and use (spherical) trigonometric formulae. In such a triangle, two angles and the length of a side contained by them are known, therefore the three side lengths are completely determined. We quote Hewitt from [START_REF] Hewitt | Map of a Nation: A Biography Of The Ordnance Survey[END_REF] (Chapter 3): "Triangulation had first emerged as a map-making method in the mid sixteenth century when the Flemish mathematician Gemma Frisius set out the idea in his Libellus de locorum describendorum ratione (Booklet concerning a way of describing places), and by the turn of the eighteenth century it had become the most respected surveying technique in use. It was a similar method to plane-table surveying, but its instruments conducted measurements over much longer distances." He writes, in §27 (p. 43 of the English translation [START_REF] Gauss | Disquisitiones generales circa superficies curvas (General investigations on curved surfaces)[END_REF]): "Thus, e.g., in the greatest of the triangles which we have measured in recent years, namely that between the points Hohehagen, Brocken, Inselberg, where the excess of the sum of the angles was 14."85348, the calculation gave the following reductions to be applied to angles: Hohehagen: 4."95113; Brocken: 4."95104; Inselberg: 4."95131." In the same work, Gauss proves the following, which he calls the "remarkable theorem" (Theorema Egregium), and which explains in particular why curvature is the obstruction for a sphere to be faithfully represented on the plane (§ 12; p. 20 of the English translation):
If the curved surface is developed upon any other surface whatever, the measure of curvature in each point remains unchanged.
In 1825, Gauss published a paper in the Astronomische Abhandlungen 18 (Memoirs on astronomy) whose title is Allgemeine Auflösung der Aufgabe, die Teile einer gegebenen Fläche auf einer andern gegebenen Fläche so abzubilden dass die Abbildung dem Abgebildeten in den kleinsten Theilen ähnlich wird. (General solution of the problem: to represent the parts of a given surface on another so that the smallest parts of the representation shall be similar to the corresponding parts of the surface represented) [START_REF] Gauss | Allgemeine Auflösung der Aufgabe: die Theile einer gegebnen Fläche auf einer andern gegebnen Fläche so abzubilden, daß die Abbildung dem Abgebildeten in den kleinsten Theilen ähnlich wird[END_REF]. In this paper, Gauss shows that every sufficiently small neighborhood of a point in an arbitrary real-analytic surface can be mapped conformally onto a subset of the plane.
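In modern language, which is of course not Gauss's, this result is usually phrased in terms of isothermal coordinates; the following gloss is only meant to make the statement concrete. Near any point of a real-analytic surface one can choose coordinates $(x,y)$ in which the length element takes the form
\[
ds^{2}=\lambda(x,y)\,(dx^{2}+dy^{2}),\qquad \lambda>0;
\]
the coordinate map itself is then an angle-preserving map onto a plane domain, and every other conformal representation of the same neighborhood is obtained by composing it with an injective holomorphic (or anti-holomorphic) function of $x+y\sqrt{-1}$.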
We mention two other important papers by Gauss related to geodesy: Bestimmung des Breitenunterschiedes zwischen den Sternwarten von Göttingen und Altona durch Beobachtungen am Ramsdenschen Zenithsector (Determination of the latitudinal difference between the observatories in Göttingen and Altona by observations with a Ramsden zenith sector) [START_REF] Gauss | Bestimmung des Breitenunterschiedes zwischen den Sternwarten von Göttingen und Altona durch Beobachtungen am Ramsdenschen Zenithsector[END_REF] (1828) and Untersuchungen über Gegenstände der höhern Geodäsie (Research on objects of higher geodesy) [START_REF] Gauss | Untersuchungen über Gegenstände der höhern Geodäsie. Erste Abhandlung[END_REF] (1843 and 1847). It is in the latter that Gauss uses the method of least squares.
One may also talk about Riemann, who was a student of Gauss. His famous Riemann Mapping Theorem, and its later culmination, the Uniformization Theorem, concern the existence of angle-preserving maps between simply connected Riemann surfaces. Reviewing these developments would be another story, on which several good survey articles exist. We refer the reader to the book [START_REF] De Saint-Gervais | Uniformisation des surfaces de Riemann : Retour sur un théorème centenaire[END_REF]. 18 This paper won a prize for a competition proposed by the Copenhagen Royal Society of Sciences. The subject of the competition was: "To represent the parts of a given surface onto another surface in such a way that the representation is similar to the original in its infinitesimal parts." The prize was set by Heinrich Christian Schumacher, a famous German-Danish astronomer in Copenhagen, who was a friend of Gauss and who had been his student in Göttingen. A letter from Gauss to Schumacher dated July 5, 1816 shows that the solution to this question was already known to Gauss at that time; cf. Gauss's Werke Vol. 8, p. 371. In fact, it was Gauss who, in that letter, proposed to Schumacher to set this problem as the prize contest. Schumacher responded positively, and the question became the subject of the Academy contest. It is generally assumed that Schumacher had no doubt that the prize would go to Gauss. An English translation of Gauss's paper was published in the Philosophical magazine (1828), with the subtitle: Answer to the prize question proposed by the Royal Society of Sciences at Copenhagen. It is reproduced in [START_REF] Smith | A Source book in mathematics[END_REF], Volume 3. There is also a French translation [START_REF] Gauss | Solution générale de ce problème : Représenter les parties d'une surface donnée de telle sorte que la représentation soit semblable à l'original en les plus petites parties, Mémoire couronné en réponse à la ques-tion proposée par la Société Royale des Sciences de Copenhague en 1822[END_REF].
The works of Euler, Lagrange and Gauss are reviewed in the second doctoral thesis 19 of Ossian Bonnet [START_REF] Bonnet | Thèse d'astronomie: Sur la théorie mathématique des cartes géographiques[END_REF], defended in 1852. Bonnet declares in the introduction that the aim of his thesis is to simplify the works of the three authors on the problem, which he says was first addressed by Lambert, of characterizing all maps between surfaces which are "similarities at the infinitesimal level." Thus, we are led once again to angle-preserving maps. Euler and Lagrange, he says, gave a solution of the problem of characterizing these maps, under the condition that the surface is a sphere or, at most, a surface of revolution, and Gauss gave the solution, in his Copenhagen-prize memoir, for maps between arbitrary surfaces.
The introduction of the thesis contains a short historical survey of cartography. The techniques that Bonnet uses are those of the differential geometry of surfaces. The thesis was published in the Journal de mathématiques pures et appliquées. At the end of the paper, Liouville, who was the editor of the journal, declares in a note that the questions addressed in this paper were treated in the lectures he gave at the Collège de France during the academic year 1850-1851, and that he hopes to publish these lectures some day. He also recalls that he explained part of his ideas on the subject in the Notes of his edition of Monge's Application de l'Analyse à la Géométrie [START_REF] Monge | Applications de l'analyse à la géométrie, 5e édition[END_REF]. These notes, all written by Liouville, contain the solution of the question he calls (Note V) the three-dimensional geographical drawing problem. The title is Du tracé géographique des surfaces les unes sur les autres (On the geographical drawing of surfaces one onto the other). In the introduction of that note, Liouville formulates again the problem as the one of finding a mapping between two surfaces which is a similarity at the infinitesimal level. He says that this amounts to requiring that each infinitesimal triangle on the first surface is sent by the map to a similar infinitesimal triangle on the second one. He then formulates the same problem using the infinitesimal length elements: the ratio between each infinitesimal length element ds at an arbitrary point of the first surface and the corresponding element ds at the image point does not depend on the chosen direction at the first point. He recalls that this condition is the one that Lambert, Lagrange and Gauss adopted as a general principle in their theory of geographic maps. In Notes V and VI to Monge's treatise, Liouville gives his own solution to this problem.
Let us now say a few words on the work of Chebyshev on cartography. Chebyshev (1821-1894) was a devoted reader of Lagrange.20 Like Euler, he was interested in almost all branches of pure and applied mathematics, and he spent a substantial part of his time working on industrial machines of all kinds. He conceived a computing machine, a walking machine, and other kinds of machines, and he worked on linkages and hinge mechanisms from the practical and theoretical points of view; cf. [START_REF] Sossinsky | Configuration spaces of planar linkages[END_REF]. Approximation theory and optimization were among his favorite fields. Thus, it is not surprising that the questions raised by the drawing of geographic maps naturally attracted his attention.
Chebyshev's Collected papers [START_REF] Chebyshev | [END_REF] contain two papers on cartography, [START_REF] Chebyshev | Sur la construction des cartes géographiques[END_REF] and [START_REF] Chebyshev | Sur la construction des cartes géographiques. Discours prononcé le 8 février 1856 dans la séance solennelle de l'Université Impériale de Saint-Pétersbourg[END_REF], both called Sur la construction des cartes géographiques. The two papers contain several interesting ideas on this subject.
In the paper [START_REF] Chebyshev | Sur la construction des cartes géographiques[END_REF], Chebyshev starts by recalling that it is easy to reproduce an arbitrary part of the globe while preserving angles (he writes: "such that there is constantly a similarity between the infinitely small elements and their representation on the map.") The problem, he says, is that the magnification ratio (a kind of scaling factor) in such a map varies from point to point. This is one of the reasons for which the representation of the curved surface that we see on the map is not faithful: it is deformed. The scale of the map is not constant; it depends on the chosen point. In the general case, the magnification ratio depends on the choice of a point together with a direction at that point. For mappings that are angle-preserving (which Chebyshev calls "similarities at the infinitesimal level"), the magnification ratio depends only on the point on the sphere and not on the chosen direction. Thus, the magnification ratio is a function defined on the sphere.
Chebyshev then studies the problem of finding geographical maps that preserve angles and such that this magnification ratio is minimal. To solve this problem, he starts with a formula of Lagrange from his paper [START_REF] De Lagrange | Sur la construction des cartes géographiques[END_REF] for the magnification ratio of such maps (Formula (1) again), which he writes in the following form:
\[
m=\frac{\sqrt{f(u+t\sqrt{-1})\,F(u-t\sqrt{-1})}}{\dfrac{2}{e^{u}+e^{-u}}},
\]
using the notation of Lagrange which we already recalled, and where f and F are again arbitrary functions. The formula leads to
\[
\log m=\frac{1}{2}\log f(u+t\sqrt{-1})+\frac{1}{2}\log F(u-t\sqrt{-1})-\log\frac{2}{e^{u}+e^{-u}}.
\]
The sum of the first two terms on the right-hand side of the last equation, which contains the arbitrary functions, is a solution of the Laplace equation:
\[
\frac{\partial^{2}U}{\partial u^{2}}+\frac{\partial^{2}U}{\partial t^{2}}=0.
\]
From the theory of the Laplace equation, Chebyshev concludes that the minimum of the deviation of a solution of this equation from the function $\log\frac{2}{e^{u}+e^{-u}}$, on a region bounded by an arbitrary simple closed curve, is attained if the difference $U-\log\frac{2}{e^{u}+e^{-u}}$ is constant on the curve. Thus, one obtains the value of
\[
U=\frac{1}{2}\log f(u+t\sqrt{-1})+\frac{1}{2}\log F(u-t\sqrt{-1})
\]
up to a constant, and from there, the values of the functions $f(u+t\sqrt{-1})$ and $F(u-t\sqrt{-1})$. Up to a constant factor, the functions that give the best projection are found.
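To make this recipe concrete, here is a minimal numerical sketch; it is not taken from Chebyshev, and it assumes, only for simplicity, that the region is a square in the $(u,t)$-coordinates (the function names, grid size and iteration count are illustrative choices of ours). It computes the harmonic function $U$ with boundary values $\log\frac{2}{e^{u}+e^{-u}}$ by a basic Jacobi iteration and then reads off $\log m=U-\log\frac{2}{e^{u}+e^{-u}}$, which vanishes on the boundary, so that $m$ is constant there.

import numpy as np

# Chebyshev's recipe, as retold above: for a conformal map, log m = U(u, t) - g(u),
# where U is harmonic and g(u) = log(2 / (e^u + e^-u)) = -log(cosh(u)).
# The best map (m constant on the boundary, up to an overall scale factor)
# corresponds to the solution U of the Dirichlet problem with boundary values g.
# The square region, grid size and iteration count below are illustrative choices.

def optimal_log_m(half_width=0.6, n=81, iters=20000):
    u = np.linspace(-half_width, half_width, n)
    t = np.linspace(-half_width, half_width, n)   # same spacing in u and t
    uu, _ = np.meshgrid(u, t, indexing="ij")      # uu[i, j] = u[i]
    g = -np.log(np.cosh(uu))                      # g(u) = log(2 / (e^u + e^-u))

    U = g.copy()                                  # boundary entries carry the Dirichlet data
    for _ in range(iters):                        # Jacobi sweeps for the Laplace equation
        U[1:-1, 1:-1] = 0.25 * (U[2:, 1:-1] + U[:-2, 1:-1] +
                                U[1:-1, 2:] + U[1:-1, :-2])
    return U - g                                  # log m, vanishing on the boundary

log_m = optimal_log_m()
print("log m ranges over [%.5f, %.5f]" % (log_m.min(), log_m.max()))
print("distortion sup m / inf m = %.5f" % np.exp(log_m.max() - log_m.min()))

Any standard solver for the Laplace equation could replace the Jacobi loop; the only point being illustrated is the role of the boundary condition.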
Chebyshev then considers the important special case studied by Lagrange, where the images of the parallels and meridians by the projection map are circles in the plane. In this case, Lagrange's formula for the magnification ratio takes the form:
\[
m=\frac{1}{\dfrac{2}{e^{u}+e^{-u}}\,\sqrt{a^{2}e^{2cu}+2ab\cos 2c(t-g)+b^{2}e^{-2cu}}}.
\]
Chebyshev then works out the details of the result saying that the best geographical mapping is obtained when m is constant on the curve. He obtains precise estimates in the case where the curve bounding the region which is represented is close to an ellipse. He concludes the first paper by highlighting the following general result:
There exists an intimate connection between the form of a country and its best projection exponent.
More details on this question are given in his second paper [START_REF] Chebyshev | Sur la construction des cartes géographiques. Discours prononcé le 8 février 1856 dans la séance solennelle de l'Université Impériale de Saint-Pétersbourg[END_REF].
The second paper is a sequel to the first. In the introduction, Chebyshev, who was particularly interested in practical applications of mathematics, places this problem in a most general setting, namely, that of "finding the way to the most advantageous solution of a given problem." Going into more detail regarding this question would take us too far. Let us only recall that the theory of optimization was one of the favorite subjects of Chebyshev.
Returning to the problem of geographical maps, Chebyshev places it in the setting of the calculus of variations. He uses the computations made in his first paper [START_REF] Chebyshev | Sur la construction des cartes géographiques[END_REF], and in particular the relation with the Laplace equation. He returns to the question of the dependence of the choice of the most appropriate mapping on the shape of a country, that is, on the form of its boundary, and to his results, deduced from the work of Lagrange, saying that the best projection is the one for which the ratio of magnification is constant on the boundary of the country.
The details of Chebyshev's arguments are somewhat difficult to follow, even if the general ideas are clear. Milnor, in his paper [START_REF] Milnor | A problem in cartography[END_REF], gives an exposition of the same result with complete proofs following Chebyshev's ideas. We report briefly on Milnor's exposition.
Let Ω be a simply connected open subset of the sphere of radius r in Euclidean 3-space. We consider a conformal mapping f from Ω onto the Euclidean plane E. Such a map has a well-defined infinitesimal scale at each point x in Ω, defined as
\[
\sigma(x)=\lim_{y\to x}\frac{d_{E}(f(x),f(y))}{d_{S}(x,y)}.
\]
The limit exists because the mapping f is conformal (the deviation at a point does not depend on the direction).
For such a conformal map f , its distortion is defined as the ratio
\[
\frac{\sup_{x\in\Omega}\sigma(x)}{\inf_{x\in\Omega}\sigma(x)}.
\]
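As an illustration of these two definitions, and not taken from Milnor's paper, the following sketch estimates $\sigma$ numerically for the stereographic projection of the unit sphere from the north pole onto the plane $z=0$, and evaluates the distortion over the spherical cap $z\le 1/2$; the function names and numerical parameters are illustrative choices of ours. For this particular map one knows that $\sigma=1/(1-z)$, so the expected value of the ratio is 4.

import numpy as np

# Milnor's infinitesimal scale sigma(x) = lim_{y -> x} d_E(f(x), f(y)) / d_S(x, y),
# estimated numerically for the stereographic projection of the unit sphere from
# the north pole (0, 0, 1) onto the plane z = 0.  For this map sigma = 1 / (1 - z),
# so the distortion over the cap z <= 1/2 should come out as 4.

def stereographic(p):
    x, y, z = p
    return np.array([x / (1.0 - z), y / (1.0 - z)])

def sphere_dist(p, q):
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def sigma(p, h=1e-4):
    t = np.cross(p, [0.0, 0.0, 1.0])          # a tangent direction at p
    if np.linalg.norm(t) < 1e-12:             # p is a pole: pick any tangent vector
        t = np.array([1.0, 0.0, 0.0])
    t = t / np.linalg.norm(t)
    q = np.cos(h) * p + np.sin(h) * t         # point at spherical distance h from p
    return np.linalg.norm(stereographic(p) - stereographic(q)) / sphere_dist(p, q)

# By rotational symmetry sigma depends only on z, so sampling one meridian suffices.
zs = np.linspace(-1.0 + 1e-9, 0.5, 2000)
sigmas = np.array([sigma(np.array([np.sqrt(1.0 - z * z), 0.0, z])) for z in zs])
print("distortion over the cap z <= 1/2:", sigmas.max() / sigmas.min())
print("expected value (1/(1 - 1/2)) / (1/(1 - (-1))):", 2.0 / 0.5)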
The goal is to find a map with the smallest distortion. We have the following result (Chebyshev's theorem, in Milnor's formulation): Assume that the boundary of the open set Ω is a twice differentiable curve. Then there exists one and (up to similarity) only one conformal mapping f from Ω onto the Euclidean plane with minimal distortion. This map is characterized by the property that its infinitesimal scale function σ(x) is constant on the boundary.
Milnor adds:
This result has been available for more than a hundred years, but to my knowledge it has never been used by actual map makers.
Milnor notes after his proof of this result that the "best possible" conformal map f that is given by this theorem is locally injective, but in general it is not globally injective. He adds that f will be globally injective if Ω is geodesically convex, and that in that case f (Ω) is also geodesically convex.
In practice, very few countries are geodesically convex, but the result may be used to draw maps of geodesically convex regions of the Earth.
Let us note by the way that the definition of the infinitesimal scale function, and especially its global version which is also considered by Milnor, is very close to some basic definitions that appear in Thurston's theory of stretch maps between surfaces [START_REF] Thurston | Minimal Stretch maps between hyperbolic surfaces[END_REF]. The question of finding the best geographical map between surfaces bears a lot of analogy to that of finding the best Lipschitz map in the sense of Thurston.
In Milnor's exposition of Chebyshev's ideas, the search for the best geographical map amounts to the solution of the Laplace equation with given boundary values. Potential theory thus appears at the forefront.
In the same paper, Milnor continues his cartographical investigations by studying the magnification ratio for various kinds of geographical maps. He obtains a lower bound for this ratio in the case of the so-called azimuthal equidistant projection, and he proves the existence and uniqueness of a minimum distortion map in this case.
Darboux in [START_REF] Darboux | Sur la construction des cartes géographiques[END_REF] also gave a solution to the same problem, based on Chebyshev's ideas and using potential theory. It seems that Milnor was not aware of the work of Darboux on this question, since he does not mention it.
Summarizing the contribution of Chebyshev, we can highlight the following three reasons for which that work, and the works of the other cartographers that we mentioned, are interesting from the point of view of the subject of our present survey:
(1) The work involves definitions and notions related to mappings between surfaces which are close to those considered by quasiconformal theorists; (2) The techniques used in cartography are those of partial differential equations, as inaugurated by Euler and used later on by quasiconformal mappers; (3) Chebyshev proved existence and uniqueness results on extremal mappings under some conditions. There is more, namely, ideas related to moduli. Chebyshev, in his paper [START_REF] Chebyshev | Sur la construction des cartes géographiques. Discours prononcé le 8 février 1856 dans la séance solennelle de l'Université Impériale de Saint-Pétersbourg[END_REF] (p. 243), recalls that there are several kinds of angle-preserving maps from the sphere to the plane. On p. 244ff. of that paper, he addresses the question of how the map used changes, when the boundary of the region to be represented varies. He discusses the variation of a point called the center and the exponent of the projection, and he mentions in particular the cases of regions bounded by pieces of second-degree curves.
For a more detailed exposition of Chebyshev's work on geography and its relations with his other works, the reader is referred to the paper [START_REF] Papadopoulos | Euler and Chebyshev: From the sphere to the plane and backwards[END_REF].
One of the last prominent representatives of the differential-geometric work on geography is Beltrami. He worked on problems related to cartography in the tradition of Gauss. One of his important papers in this field is the Risoluzione del problema: Riportare i punti di una superficie sopra un piano in modo che le linee geodetiche vengano rappresentate da linee rette (Solution of the problem: to send the points of a surface onto a plane in such a way that the geodesic lines are represented by straight lines) 21 (1865) [START_REF] Beltrami | Risoluzione del problema: Riportare i punti di una superficie sopra un piano in modo che le linee geodetiche vengano rappresentate da linee rette[END_REF]. Beltrami declares in this paper that a large part of the research done before him on similar questions was concerned with the conservation either of angles or of area. He says that even though these two properties are considered as the simplest and most important ones for geographical maps, there are other properties that one might want to preserve. He declares that since the projection maps that are used in geography are mainly concerned with measurements of distances, one would like to exclude maps where the images of distance-minimizing curves are too remote from straight lines. Thus, we encounter again the notion of a map sending a geodesic to a quasigeodesic. (We already mentioned that this property was considered in a paper by Euler, [START_REF] Euler | De proiectione geographica Deslisliana in mappa generali imperii russici usitata[END_REF].) Beltrami calls the maps between surfaces that he considers "transfer maps." He mentions incidentally that the central projection of the sphere is the only map that transforms the geodesics of the sphere into Euclidean straight lines. He declares that beyond its applications to the drawing of geographical maps, the problem may lead to "a new method of geodesic calculus, in which the questions concerning geodesic triangles on surfaces can all be reduced to simple questions of plane trigonometry." This is a quite explicit statement in which one can see that questions on geography acted as a motivation for the development of the field of differential geometry of surfaces.

There are other such explicit statements. For instance, Darboux gave a talk at the 1908 ICM in Rome whose title is Les origines, les méthodes et les problèmes de la géométrie infinitésimale (The origins, methods and problems of infinitesimal geometry) [START_REF] Darboux | Les origines, les méthodes et les problèmes de la géométrie infinitésimale[END_REF]. He declares there that the origin of infinitesimal geometry lies in cartography. He starts with an exposition of the history of the subject. We quote an interesting passage: 22

Like many other branches of human knowledge, infinitesimal geometry was born in the study of practical problems. The Ancients were already busy in obtaining plane representations of the various parts of the Earth, and they had adopted the idea, which was so natural, of projecting onto a plane the surface of our globe. During a very long period of time, people were exclusively attached to these methods of projection, restricting simply to the study of the best ways to choose, in each case, the point of view and the plane of the projection.
It was one of the most penetrating geometers, Lambert, the very estimated colleague of Lagrange at the Berlin Academy, who, pointing out for the first time a property which is common to the Mercator maps and called reduced maps and to those which are provided by the stereographic projection, was the first to conceive the theory of geographical maps from a really general point of view. He proposed, with all its scope, the problem of representing the surface of the Earth on a plane with the similarity of the infinitely small elements. This beautiful question, which gave rise to the researches of Lambert himself, of Euler, and to two very important memoirs of Lagrange, was treated for the first time in all generality by Gauss. [...] Among the essential notions introduced by Gauss, one has to note the systematic use of the curvilinear coordinates on a surface, the idea of considering a surface like a flexible and inextensible fabric, which led the great geometer to his celebrated theorem on the invariance of total curvature, to the beautiful properties of geodesic lines and their orthogonal trajectories, to the generalization of the theorem of Albert Girard on the area of the spherical triangle, to all these concrete and final truths which, like many other results due to the genius of the great geometer, were meant to preserve, across the ages, the name and the memory of the one who was the first to discover them.
The work of Tissot
In this section, we review the work of Tissot. This mathematician-geographer discovered new projections of the sphere that are useful in cartography, some of which were still in use until recently. But most of all, Tissot brought substantial new theoretical ideas to this field. In particular, in his paper [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF], he developed the analytical tools needed to obtain formulae for the minimum of the distortion. His most important contribution, which is certainly the reason for which Teichmüller quotes him in his paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], is the discovery of a device that survived under the name Tissot indicatrix, or ellipse indicatrix. As we shall see, the underlying idea is at the basis of the notion of quasiconformal mapping which was later developed by Grötzsch, Lavrentieff, and Teichmüller. The Tissot indicatrix is also mentioned explicitly and several times in the paper [START_REF] Grötzsch | Über die Verzerrung bei nichtkonformen schlichten Abbildungen mehrfach zusammenhängender schlichter Bereiche[END_REF] by Grötzsch. We shall discuss this paper in the section on Grötzsch, in §4 of the present paper.
The Tissot indicatrix was used during more than a hundred years in the drawing of geographical maps. Let us summarize right away the idea.
We consider a mapping between two surfaces. For each point of the domain surface, we take an infinitesimal circle centered at that point. If one does not want to talk about infinitesimals, he may consider that such a circle is in the tangent space at the given point, centered at the origin. In this way, the domain surface is equipped with a field of infinitesimal circles. If the mapping is conformal, then it sends infinitesimal circles to infinitesimal circles. But the ratios of the radii (Tissot says: "to the extent that one can talk about radii of infinitesimal circles") of the image circles to those of the original ones vary from point to point. For what concerns us here, we may assume that the radii of the infinitesimal circles of the first surface are constant. The collection of ratios obtained in this manner constitutes a measure of the amount by which the mapping distorts distances. In the general case, the image of an infinitesimal circle is an infinitesimal ellipse which is not necessarily a circle. The collection of shapes of the image ellipses (that is, the ratio of their major to the minor axes) is another measure of the distortion of the mapping. An area-preserving mapping is one for which the image ellipses have the same area as the original circles. The inclination of the ellipse (which is defined, like the ratio of the great axis to the small axis, provided the ellipse is not a circle), measured by the angle its great axis makes with the horizontal axis of the plane, is another measure of the distortion of the mapping. Knowing the axes of the ellipse, the angle distortion of the mapping is known. Thus, the study of the field of ellipses gives information on:
(1) the non-conformality of the map, that is, its angle distortion;
(2) the area distortion;
(3) the distance distortion.
If the radii of the infinitesimal circles on the domain surface are set equal to one, then the shape, inclination and area of each image ellipse, which Tissot says is "a kind of indicatrix," are expressed by finite data which give precise information on the distortion of the map.
In his work, Tissot establishes analytical formulae for the lengths and the directions of the axes of the image ellipses. He also considers the maximum of the distortion over the surface. Similarly, he studies the ratios according to which the length element is changed in the various directions, the greatest and the smallest such ratios (which are precisely equal to the semi-axes of the ellipses), and finally, the change in area.
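In the notation that later became standard in cartography, and which we use here only for convenience (h and k denote the scale factors along the meridian and the parallel at a point, and θ' the angle between their images), the content of these formulae can be summarized as follows. The semi-axes $a \geq b$ of the indicatrix satisfy
\[
a^2 + b^2 = h^2 + k^2, \qquad ab = hk\sin\theta',
\]
so that
\[
a + b = \sqrt{h^2 + k^2 + 2hk\sin\theta'}, \qquad a - b = \sqrt{h^2 + k^2 - 2hk\sin\theta'}.
\]
Equivalently, $a$ and $b$ are the singular values of the differential of the map written in orthonormal frames. The area distortion is $ab$, and the maximal angular deviation $\omega$ produced by the map satisfies $\sin(\omega/2) = (a-b)/(a+b)$. For instance, for the standard (non-oblique) plate carrée, or equirectangular, projection of the kind discussed in the next paragraph, one has at latitude $\varphi$: $h = 1$, $k = 1/\cos\varphi$ and $\theta' = \pi/2$, so that $a = 1/\cos\varphi$ and $b = 1$; the ellipses are circles on the equator and become more and more elongated towards the poles.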
Cartographers used to draw the Tissot indicatrix, that is, the field of ellipses of a geographical map. In Figure 3 is represented the field of ellipses corresponding to the so-called oblique plate carrée (also called equirectangular) projection. In such a projection, the meridians and parallels form a pair of perpendicular foliations. Tissot considers this pair of foliations as a grid dividing the surface into equal infinitesimal rectangles whose side ratios are equal to some fixed ratio between meridian and parallel distances. Let us emphasize the fact that the Tissot indicatrix is not a measure of conformality in the strict sense, since it involves distances and areas and not only the conformal structures of the surfaces. It is however a measure of conformality when the conformal structure is the one underlying a given metric structure. We shall talk more about this below, when we consider quasiconformal mappings, in particular when we talk about the so-called length-area method.
Tissot first published his work in a series of Comptes Rendus notes (see [START_REF] Tissot | Sur les cartes géographiques[END_REF]- [START_REF] Tissot | Sur la construction des cartes géographiques[END_REF].) Several years later, he gave the details of his work in installments (Tissot [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF]- [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF]), and then in the long memoir [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF], which reproduces the articles [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF]- [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF] and additional material. The paper [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF] and the introduction of [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF] contain an outline of the whole theory.
Tissot worked in the setting of arbitrary differentiable maps between surfaces, although primarily in the case where the domain surface is a subset of a sphere or spheroid, and the range a subset of the Euclidean plane. At several places he considers "any map (which he calls a representation) from a surface onto another." On p. 52 of his paper [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF], he mentions applications of his theory to the case where the domain is an arbitrary surface of revolution. We recall that this special case was also thoroughly investigated by Euler. Tissot proves the following fundamental (although easy to prove) lemma ( [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF] p. 49 and [201] p. 13):
At every point of the surface which has to be represented, there are two mutually perpendicular tangent directions such that the corresponding tangent directions on the other surface also intersect at right angles; if the angles are not preserved at that point, then this pair is unique.

Let us insist on the fact that with the above condition of not preserving angles, the setting of Tissot's study becomes different from that of his predecessors (in particular Lagrange and Chebyshev), who studied mainly angle-preserving maps with some other distortion parameters minimized.
From this lemma, Tissot deduces the following:
On each of the two surfaces there exists a "system of orthogonal trajectories," and if the representation is nowhere angle-preserving, there exists only one such pair, which is preserved by the map. In modern language, each system of trajectories in the pair is a foliation of the surface. It is interesting to see how Tissot expresses his ideas without the specialized geometric terminology (which did not yet exist). For the network consisting of two perpendicular foliations, he uses the word "canevas" (with reference to a canvas fabric).
From the practical point of view, these orthogonal foliations provide a system of "Euclidean" coordinates on the surface. 23 Tissot says that the surface admits infinitely many such networks, but that those associated to a map (projection) between two surfaces are unique. He formulates the following rule satisfied by an arbitrary deformation, which is independent of the nature of the surface or of the given map:
Any map between two surfaces may be replaced, around each point, by an orthogonal projection performed at a convenient scale. The rule is proved in a geometric way using the lemma we mentioned above. Tissot deduces from this rule a large number of properties.
In the second chapter of the memoir [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF], Tissot reports on specific applications of his theory. One important idea in this chapter is formulated right at the beginning of the memoir:
To find the projection mode that is the most appropriate to the plane representation of a given country. In other words, the projection mode will depend on the shape of the country. Thus, the emphasis is, like in Chebyshev's work, on the boundary of the region of the sphere (or the spheroid) that has to be represented.
In the memoir [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF], Tissot studies thoroughly projections that produce very small angle distortion (of the order of a few seconds), and where the length distortions are minimized. He says that a similar method may be applied to projections which distort areas by negligible quantities, and where angle distortion is minimized.
Tissot's memoir [START_REF] Tissot | Mémoire sur la représentation des surfaces et les projections des cartes géographiques[END_REF] also contains a section on the history of cartography (p. 139ff.).
We refer the reader to the article [START_REF] Papadopoulos | A link between cartography and quasiconformal theory[END_REF] for a more extended survey of the work of Tissot.
Darboux, who, as we already mentioned, was also interested in geography, wrote a paper on Tissot's work [START_REF] Darboux | Sur une méthode de Tissot relative à la construction des cartes géographiques[END_REF]. In the introduction, he recalls the techniques introduced by Tissot for the representation of a given country, and in particular his use of power series expansions of functions, but he says that "[Tissot's] exposition appeared to me a little bit confused, and it seems to me that while we can stay in the same vein, we can follow the following method ..." He then explains in his own way Tissot's work, making the relation with the works of Gauss, Chebyshev and Beltrami on the drawing of geographical maps. In particular, he extends Tissot's theory to maps between surfaces considered in the differential-geometric setting of Gauss's theory, starting with arbitrary curvilinear coordinates with length element
\[
ds^2 = E\,du^2 + 2F\,du\,dv + G\,dv^2
\]
and using the theory of conformal representations.

23 The existence of two orthogonal foliations is also an important feature of the famous Mercator map, conceived by the mathematician and geographer Gerardus Mercator (1512-1594). Mercator was born under the name Geert (or Gerhard) De Kremer, meaning shopkeeper in Flemish. (Mercator is the Latinized version.) The Mercator map is conformal and it is based on a cylindrical projection of the sphere in which meridians and parallels become orthogonal straight lines. Mercator's maps were extensively used by navigators since the sixteenth century. Similar maps were known in antiquity. They were used since the work of Eratosthenes (third century B.C.), in particular to represent the new lands conquered by Alexander the Great.
We already mentioned another paper by Darboux on geographical maps, [START_REF] Darboux | Sur la construction des cartes géographiques[END_REF], in which he gives a detailed proof of the result of Chebyshev saying that the most advantageous representation of a region of the sphere onto the Euclidean plane is the one where the magnification ratio is constant on the boundary of the surface to be represented.
Let us also mention a third paper that Darboux wrote on cartography, [START_REF] Darboux | Sur un problème posé par Lagrange[END_REF], in which he solves a problem addressed by Lagrange in the paper [START_REF] De Lagrange | Sur la construction des cartes géographiques[END_REF] that we already mentioned. The question concerns a quantity which Lagrange calls the "exponent of the projection." This problem is reduced to the following question in elementary geometry:
Given three points on the sphere, can we draw a geographical map, with a given exponent, such that these three points are represented by three arbitrarily chosen points on the map? Lagrange declares in his paper that a geometric solution to this problem seems very difficult, and that he did not try to find a solution using algebra. Darboux solves the problem in a geometric manner. He declares that it is the recent progress in geometry that made this solution possible.
One can say much more about the contributions of geographers on close-to-conformal mappings, but we must now turn to the modern period.
In the next three sections, we briefly report on the lives and works of four pioneers of the modern theory of quasiconformal mappings: Grötzsch, Lavrentieff, Ahlfors and Teichmüller. In comparison with the questions that were asked by the geographers, the questions addressed become the following:
(1) The desired map is a close-to-conformal mapping between two Riemann surfaces (that is, surfaces equipped with a conformal structure). The metric structure is not given in the formulation of the problem, although it is used as a tool in the developments.

(2) The two surfaces are no longer restricted to be subsets of the sphere and of the Euclidean plane respectively. We already saw that several authors already worked in this generality (Euler, Lagrange, Chebyshev, etc.), although the surfaces they considered were simply connected, or at least planar (homeomorphic to subsets of the plane).

(3) The desired extremal map minimizes the supremum over the surface of the ratios of the big axes to the small axes of the infinitesimal Tissot ellipses.

(4) There are topological restrictions on the map. For instance, in the case of mappings between closed surfaces, one asks that they are in a given homotopy class. The theory also involves the case of mappings that preserve distinguished points. For surfaces with boundary, these distinguished points may be on the boundary. In the theory of quasiconformal mappings of the disc, the homotopies defining the equivalence relation are required to be the identity on the boundary of the disc. There are several other variants of the theory.

Let us make a comment on (1). In principle, the theory of quasiconformal mappings does not involve distances, since, being maps between Riemann surfaces, they are sensitive only to the conformal structure, that is, the angle structure on the surface, and how it is distorted. However, when the conformal structure is induced by a metric, relations between the angle structure and length and area arise, and we are led to questions similar to those in which cartographers were interested, concerning the length and area distortion of geographical maps. The "length-area method" was one of the essential tools in the works of Grötzsch, Ahlfors and Teichmüller on quasiconformal mappings. We shall return to this question in the second part of our paper. We also mention Grötzsch's paper [START_REF] Grötzsch | Über die Verzerrung bei nichtkonformen schlichten Abbildungen mehrfach zusammenhängender schlichter Bereiche[END_REF] which contains precise estimates on length distortion of quasiconformal mappings between multiply-connected domains of the plane.
The consideration of distinguished points mentioned in (4) is essential. Typically, if the two surfaces are discs, then the Riemann Mapping Theorem says that (in the case where both surfaces are not conformally equivalent to the plane) one can find a conformal mapping between them, and that furthermore there exists such a mapping with the property that any three distinct distinguished points chosen on the boundary of the first surface are sent to any three distinct points chosen on the boundary of the second surface. The next step starts with discs with four distinguished points on their boundary, and this is the case considered by Grötzsch.
Part 2. Quasiconformal mappings
We start the second part of our paper with the work of Grötzsch. Then we continue with the works of Lavrentieff, Ahlfors and Teichmüller.
Grötzsch
Camillo Herbert Grötzsch (1902-1993) studied mathematics at the University of Jena, between 1922 and 1926, where Paul Koebe was one of his teachers. Koebe moved to Leipzig in 1926, and Grötzsch followed him. He worked there on his doctoral dissertation, under the guidance of Koebe, and obtained his doctorate in 1929, cf. [START_REF] Grötzsch | Über konforme Abbildung unendlich vielfach zusammenhängender schlichter Bereiche mit endlich vielen Häufungsrandkomponenten[END_REF]. Reiner Kühnau, who was Grötzsch's student, wrote two long surveys on his work, [START_REF] Kühnau | Herbert Grötzsch zum Gedächtnis[END_REF] and [START_REF] Kühnau | Einige neuere Entwicklungen bei quasikonformen Abbildungen. Jahresber[END_REF]. Between the years 1928 and 1932, Grötzsch published 17 papers on conformal and quasiconformal mappings. He introduced the notion of extremal quasiconformal mapping, which he called "möglichst konform" (conformal as much as possible). Grötzsch's papers are generally short (most of them have less than 10 pages) and they almost all appeared in the same journal, the Leipziger Berichte. 24 In fact, Grötzsch published almost all his papers in rather unknown journals. Kühnau writes in [START_REF] Kühnau | Some historical commentaries on O. Teichmüller's paper "Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] that Grötzsch "did not like to be dependent on the grace of a referee. For him only Koebe was a real authority." His papers are quoted in the literature on quasiconformal mappings, but it seems that nobody read them. Volume VII of the Handbook of Teichmüller theory will contain translations of several of these papers (cf. [START_REF] Grötzsch | Über die Verzerrung bei schlichten nichtkonformen Abbildungen und über eine damit zusammenhängende Erweiterung des Picardschen Satzes[END_REF], [START_REF] Grötzsch | Über die Verzerrung bei nichtkonformen schlichten Abbildungen mehrfach zusammenhängender schlichter Bereiche[END_REF], and there are others). Kühnau adds in [START_REF] Kühnau | Some historical commentaries on O. Teichmüller's paper "Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] that "to his disadvantage was also the fact that Grötzsch almost never appeared at a conference. Ahlfors told me that he earlier thought that Grötzsch did not really exist."
Grötzsch is mostly known for his solution of a problem which is easy to state, namely, to find the best quasiconformal mapping between two quadrilaterals (discs with four distinguished points on their boundary). According to Ahlfors ([20] p. 153), this problem was first considered as a mere curiosity. Things changed, and this work is now considered as a key result, especially since the generalization of this problem and its solution by Teichmüller.
Comparison between length and area in conformal mapping, and the obvious connection derived from the Schwarz inequality, has been used before, notably by Hurwitz and Courant. The first to make systematic use of this relation was H. Grötzsch, a pupil of Koebe. The speaker hit upon the same method independently of Grötzsch and may, unwillingly, have detracted some of the credit that is his due. Actually, Grötzsch had a more sophisticated point of view, but one which did not immediately pay off in the form of simple results. In fact, Grötzsch studied, besides quasiconformal mappings between rectangles, maps between surfaces that are not necessarily simply-connected. For instance, his paper [START_REF] Grötzsch | Über die Verzerrung bei nichtkonformen schlichten Abbildungen mehrfach zusammenhängender schlichter Bereiche[END_REF] concerns quasiconformal mappings between multiply-connected subsets of the plane. The problem of finding the best quasiconformal mapping between two homeomorphic Riemann surfaces is not addressed (this was one of Teichmüller's achievements). Rather, in this paper, Grötzsch obtains estimates for the distortion of length and area under a quasiconformal mapping. It is interesting to see that Grötzsch, in the paper [START_REF] Grötzsch | Über die Verzerrung bei nichtkonformen schlichten Abbildungen mehrfach zusammenhängender schlichter Bereiche[END_REF], does not use any word equivalent to the word "quasiconformal" to denote these maps. In fact, he does not give them a name but only a notation, and he extensively uses the setting of Tissot's work. Let us be more precise.
Grötzsch considers in [START_REF] Grötzsch | Über die Verzerrung bei nichtkonformen schlichten Abbildungen mehrfach zusammenhängender schlichter Bereiche[END_REF] bijective and continuous mappings of a domain B that are uniform limits of affine mappings except for a finite number of points. This implies a differentiability property (which Grötzsch does not state explicitly). He then assumes that at each point where the map is differentiable, the ratio a/b of the great axis to the small axis of an infinitesimal ellipse which is the image of an infinitesimal circle satisfies the condition 1/Q ≤ a/b ≤ Q for some Q ≥ 1 that is independent of the choice of the point. Grötzsch formulates this condition as follows:
The distortion of the shape of Tissot's indicatrix (distortion ellipse) of the mapping is generally kept within fixed bounds. Grötzsch then says that a mapping of B satisfying these conditions is called a mapping A_Q of B. This suggests that the map is Q-quasiconformal (in the modern sense), but no special name is used for such a map.
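In the notation that became standard later (and which is ours here, not Grötzsch's), the condition reads as follows, and the length-area argument alluded to above yields the basic inequality for rectangles:
\[
K_f(z) = \frac{a}{b} = \frac{|f_z| + |f_{\bar z}|}{|f_z| - |f_{\bar z}|}, \qquad f \in A_Q \iff \sup_z K_f(z) \leq Q.
\]
If such a mapping takes a rectangle of sides $\alpha, \beta$ onto a rectangle of sides $\alpha', \beta'$, sides going to corresponding sides, then the image of each horizontal segment joins the two vertical sides of the image rectangle, so that $\alpha' \leq \int_0^{\alpha} |f_x|\,dx$; integrating in $y$, applying the Cauchy-Schwarz inequality and using $|f_x|^2 \leq K_f\,J_f \leq Q\,J_f$ (where $J_f = |f_z|^2 - |f_{\bar z}|^2$ is the Jacobian), one gets
\[
(\alpha'\beta)^2 \leq \alpha\beta \iint |f_x|^2 \leq \alpha\beta\, Q \iint J_f = \alpha\beta\, Q\, \alpha'\beta',
\]
hence $\alpha'/\beta' \leq Q\,\alpha/\beta$; the reverse inequality, with $Q$ replaced by $1/Q$, follows by applying the same argument to the inverse mapping. The affine map between the two rectangles is extremal.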
Teichmüller mentions extensively the papers of Grötzsch, especially in his later papers. In the introduction to his paper [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF], he writes "Much of what is discussed here is already contained in the works of Grötzsch, but mostly hidden or specialized in typical cases and in a different terminology."
Apart from his method for the modulus of a rectangle, which is repeated in several subsequent papers and books, Grötzsch's results on quasiconformal mappings are still very poorly known. Later on, Grötzsch became known for his work on graph theory. The reader is referred to the article [START_REF] Kühnau | Some historical commentaries on O. Teichmüller's paper "Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] by Kühnau for lively information on Grötzsch in relation with Teichmüller.
Lavrentieff
Unlike Grötzsch, Mikhaïl Alekseïevitch Lavrentieff 25 (1900-1980) had a brilliant career, as a mathematician and as a leader in science organisation.
Between 1922 and 1926, Lavrentieff worked on his doctoral thesis, on topology and set theory, 26 with Luzin as an advisor. He later remained faithful to the Luzin school despite the terrible campaign that was waged against Luzin. 27 In 1927, Mikhaïl Lavrentieff spent some time in Paris, during which he followed courses and seminars by Montel, Borel, Julia, Lebesgue and Hadamard. At this stage his interests shifted from topology to the theory of functions of a complex variable. He published two Comptes Rendus notes on the subject, [START_REF] Lavrentieff | Sur la représentation conforme[END_REF] and [START_REF] Lavrentieff | Sur un problème de M. P. Montel[END_REF]. The first one concerns boundary correspondence of conformal representations of simply connected regions. In this paper, Lavrentieff studies the ratios of the lengths of the boundaries of the two domains. We recall that the same kind of problem was addressed by Chebyshev in his study of geographical maps. Among the results that Lavrentieff obtained in the paper [START_REF] Lavrentieff | Sur la représentation conforme[END_REF] are the following two:
1. Let D and D′ be two domains in the plane, bounded by two simple closed curves Γ and Γ′ whose curvature is bounded. Consider a conformal mapping between D and D′. Then the ratio of lengths of corresponding arcs of Γ and Γ′ is bounded.
2. Let D and D′ be two domains in the plane, bounded by two simple closed curves Γ and Γ′ which have continuously varying tangents. Consider a conformal mapping between D and D′. If δ and δ′ denote the lengths of corresponding arcs on Γ and Γ′, then, for every ε > 0, we have
\[
K_1\,\delta^{1-\varepsilon} > \delta' > K_2\,\delta^{1+\varepsilon},
\]
where K_1 and K_2 are constants that depend only on ε.
The second paper, [START_REF] Lavrentieff | Sur un problème de M. P. Montel[END_REF], concerns function theory. Lavrentieff founded new areas of research in mechanics and he applied the theory of functions of a complex variable to the study of non-linear waves. His interest in quasiconformal mappings was not independent of his work on physics. Some of Lavrentieff's papers are written in French, and his works were known in the West. Courant, in his 1950 book [START_REF] Courant | Dirichlet's Principle, Conformal Mapping, and Minimal Surfaces[END_REF], has a section on a method developed by Lavrentieff to show that some extremal domains have analytic boundaries. Lavrentieff did important work in aerohydrodynamics which led him very far in the applications. In particular, he conceived a new kind of bullet, known under the name Katyusha, which played a very important role during World War II. It is possible that his work on quasiconformal mappings was of great help in that domain. Lavrentieff received the most prestigious prizes awarded in the Soviet Union, among them the Lenin Prize and the Lomonosov gold medal.

26 At the epoch of Luzin, the expressions "topology" and "set theory" often indicated the same subject. The reader may recall that one of the first books on modern topology is Hausdorff's Grundzüge der Mengenlehre (Foundations of set theory) (1914). The second edition was simply called Mengenlehre.

27 Nikolai Luzin (1883-1950) is the main founder of the Moscow school of function theory, and one of the architects of descriptive set theory. In 1936, after a very successful career in mathematics, Luzin fell into disfavor, and the Soviet authorities started a violent political campaign against him. Six years before, his mentor, Egorov, had experienced a similar disgrace, due to his religious sympathies. Egorov died miserably in detention, from a hunger strike, in 1931, after having been dismissed in 1929 from his position at the university, and jailed in 1930 with the accusation of being a "religious sectarian." Luzin lost his position at the university. His case is famous, and what became known as the "Luzin affair" shook the scientific world in the Soviet Union and beyond. The story is recounted in several books and articles, e.g. [START_REF] Demidov | The case of Academician Nikolai Nikolaevich Luzin[END_REF]. In 1974, Lavrentieff published a memorial article on Luzin [START_REF] Lavrentieff | Nikolai Nikolaevich Luzin (on the 90th Anniversary of His Birth)[END_REF]. The book [112], which appeared on the occasion of Lavrentieff's hundredth anniversary, contains a chapter on the Luzin affair. The relation between Luzin and the Lavrentieff family remained constant.
Lavrentieff's work on quasiconformal mappings dates back to his early career. In 1928, he gave a talk at the Bologna ICM [START_REF] Lavrentieff | Sur une méthode geométrique dans la représentation conforme[END_REF] in which he described an application of mappings which in some specific sense are quasiconformal mappings. The question was to obtain the Riemann Mapping Theorem by means of a sequence of explicit mappings obtained from the theory of partial differential equations, using a minimization principle. We note incidentally that a similar application of quasiconformal mappings is mentioned by Teichmüller in the last part of his paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF]. In his Comptes Rendus note [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] and his paper [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF], both published in 1935, Lavrentieff returns to this subject with more details. He introduces the notion of quasiconformal mappings with a very weak condition of differentiability. The two papers are titled "Sur une classe de représentations continues." Let us give a few more details.
The Note [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] starts with the definition of an almost analytic mapping ("fonction presque analytique") f . This definition involves two functions, p and θ, which he calls the characteristic functions of f . The function p is the ratio of the great axis to the small axis of the ellipse which is the image by f of an infinitesimal circle. The function θ is the angle between the great axis of that ellipse and the real axis. The name characteristic function reminds us of the Tissot characteristic which we mentioned in §3. The definitions introduced by Tissot and Lavrentieff are very close. Lavrentieff was probably not aware of the work of Tissot. The definition that Lavrentieff gives in [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] and ( [START_REF] Lavrentieff | Sur un problème de M. P. Montel[END_REF] p. 407) is the following:
A function w = f (z) of a complex variable z in a domain D is termed almost analytic if the following three properties hold:
(1) f is continuous on D.
(2) In the complement of a countable and closed subset of D, f is an orientation-preserving local homeomorphism.

(3) There exist two functions, p(z) ≥ 1 and θ(z), defined on D, such that:

• In the complement of a subset E of D consisting of a finite number of analytic arcs, p is continuous and θ is continuous at any point z satisfying p(z) = 1.

• On any domain ∆ which does not contain any point of E and whose frontier is a simple analytic curve, p is uniformly continuous, and if such a domain ∆ and its frontier do not contain any point such that p(z) = 1, then θ is also uniformly continuous on ∆.

• For any point $z_0$ in D which is not in E, let $E(z_0)$ be the ellipse centered at $z_0$ such that the angle between the great axis of $E(z_0)$ and the real axis of the complex plane is $\theta(z_0)$ and $p(z_0) \geq 1$ is the ratio of the great axis to the small axis of $E(z_0)$. Let $z_1$ and $z_2$ be two points on the ellipse $E(z_0)$ at which the expression $|f(z) - f(z_0)|$ attains its maximum and minimum respectively. Then,
\[
\lim_{a \to 0} \frac{|f(z_1) - f(z_0)|}{|f(z_2) - f(z_0)|} = 1,
\]
where a is the great axis of the ellipse $E(z_0)$.
Lavrentieff says that if we assume that the function p is bounded, then we recover a class of functions which is analogous to the one considered by Grötzsch in [START_REF] Grötzsch | Über die Verzerrung bei schlichten nichtkonformen Abbildungen und über eine damit zusammenhängende Erweiterung des Picardschen Satzes[END_REF]. After Lavrentieff wrote his Comptes Rendus note, another comptes Rendus note was published by Stoilov, [START_REF] Stoilov | Remarques sur la définition des fonctions presque analytiques de M. Lavrentieff[END_REF], showing that the first two conditions in Lavrentieff's definition of an almost analytic function can be expressed using a notion of interior transformation that he had introduced. Stoilov concludes that for any almost analytic transformation w = f (z) in the sense of Lavrentieff, there exists a topological transformation of the domain of the variable, z = t(z ) such that the composed map f (t(z )) is analytic. Thus, from the topological point of view, analytic and almost analytic mappings are the same.
Lavrentieff obtained the following result stating that from the two functions p and θ one can recover the almost analytic function f (Theorem 1 in [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] p. 1011, and Theorem 3, [START_REF] Lavrentieff | Sur un problème de M. P. Montel[END_REF] p. 414):
Given any two functions p(z) ≥ 1 and θ(z) defined on the closed unit disc such that p(z) is continuous and θ is continuous at any point satisfying p(z) = 1, there exists an almost analytic function w = f(z) satisfying f(0) = 0, f(1) = 1, sending the closed unit disc homeomorphically to itself and having the functions p and θ as characteristic functions. Lavrentieff obtained the following additional uniqueness theorem ([105] p. 1012), saying that the characteristic functions p and θ determine f as soon as its values are prescribed on a subset having a limit point:
Two almost analytic functions defined on the same domain D having the same characteristic functions p and θ and coinciding on a subset which has a limit point in D coincide on the whole set D. The two preceding results are versions of the integrability of almost-complex structures in dimension 2, and they also constitute geometric versions of later existence and uniqueness results for quasiconformal mappings with a given dilatation. A huge literature was dedicated later on to get more precise analytic forms of these results, with a minimum amount of continuity assumptions on the dilatation. These results culminated in the so-called Ahlfors-Bers measurable Riemann mapping theorem and its refinements.
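In the analytic language that became standard later (the formulation here is ours, not Lavrentieff's), the data (p, θ) correspond to a Beltrami coefficient, and the above statements are existence and uniqueness results for the Beltrami equation:
\[
\frac{\partial f}{\partial \bar z} = \mu(z)\,\frac{\partial f}{\partial z}, \qquad |\mu(z)| = \frac{p(z) - 1}{p(z) + 1} < 1,
\]
where the argument of $\mu(z)$ is determined by the direction $\theta(z)$ of the axes of the ellipse field (the precise formula depends on whether one records the ellipse in the source or in the target, and on the orientation convention). The measurable Riemann mapping theorem asserts the existence and uniqueness of a suitably normalized homeomorphic solution for every measurable $\mu$ with $\|\mu\|_\infty < 1$.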
Lavrentieff also obtained the following result, which has the same flavor as a later result of Teichmüller [START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF] in which a growth condition is assumed on the quasiconformal dilatation (Theorem 2 in [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] p. 1011 and Theorem 7 in [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] p. 418):
Let p(z) ≥ 1 and θ(z) be two functions defined on the closed unit disc such that p is continuous and θ is continuous at every point satisfying p(z) = 1. For every r < 1 let q(r) be the maximum of p(z) on the circle |z| = r. If the integral $\int_0^r \frac{dr}{r\,q(r)}$ is divergent, then one can construct an almost analytic function w = f(z) satisfying f(0) = 0, f(1) = 1, whose characteristic functions are p and θ and which sends the closed unit disc homeomorphically to itself.

There is a condition involving the divergence of the integral $\int_0^r \frac{dr}{r\,q(r)}$ in another result of Lavrentieff's [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] which concerns the so-called type problem. We shall quote this result below, but first, we briefly recall the type problem, to which we shall also refer later in the present paper, in our review of the works of Teichmüller and of Ahlfors.
To understand the type problem, one starts by recalling that by the uniformization theorem, every simply connected Riemann surface which is not the sphere is conformally equivalent to either the complex plane or the unit disc. The type problem asks for a way to decide, for a given simply-connected surface defined in some manner (a surface associated to a meromorphic function, defined as a ramified cover, etc.) whether it is conformally equivalent to the complex plane or to the unit disc. In the first case, the surface is said to be of parabolic type, and in the second case of hyperbolic type. In some sense, the type problem is a constructive complement to the uniformization question. There are precise relations between the type problem and Picard's theorem, and we shall mention one of them below.
Despite its apparent simplicity, the type problem turned out to be very difficult, and it gave rise to a profound theory. The importance of this problem is stressed by Ahlfors in his paper [START_REF] Ahlfors | Quelques propriétés des surfaces de Riemann correspondant aux fonctions méromorphes[END_REF], in which he formulates the type problem for Riemann surfaces, but also a related "type problem" for univalent functions: Such functions are of two types: either infinity is the only point omitted, or it is not. In the first case, he calls the function parabolic, and in the second case he calls it hyperbolic. He says that instead of talking about parabolic or hyperbolic functions, one could talk about their associated Riemann surfaces, as parabolic and hyperbolic surfaces. He adds that the notion of multiform function (an inverse function of a univalent function is generally multiform) is "infinitely simpler than that of a Riemann surface." He then writes that "This problem is, or ought to be, the central problem in the theory of functions. It is evident that its complete solution would give us, at the same time, all the theorems which have a purely qualitative character on meromorphic functions."
Needless to say, to a geometer, the complex plane is "naturally" Euclidean, and the unit disc is "naturally" hyperbolic. Making this precise and useful in function theory is another matter, and there are instances where this intuition is misleading, as we shall see e.g. in § 8, concerning a disproof by Teichmüller of a conjecture by Nevanlinna.
Lavrentieff studied applications of quasiconformal mappings to the type problem in [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] p. 1012 and [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] p. 421. It is possible that this was the first time quasiconformal mappings were used in the type problem. In the 3-dimensional space with coordinates (x, y, t) we consider a simply connected surface S defined by an equation t = t(x, y), where x and y vary in R and where t(x, y) is a function of class C^1. The question is to find the type of S. Lavrentieff proves the following:
Let q(r) be the maximum of 1 + |grad t(x, y)| on the circle x^2 + y^2 = r^2. If $\int_1^\infty \frac{dr}{r\,q(r)}$ diverges, then S is of parabolic type. The proof is based on the existence theorem for almost analytic functions, Theorem 7 in [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF], which we quoted above.
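As a simple illustration of how the criterion is used (the example is ours), suppose that $|\mathrm{grad}\ t(x,y)| \leq C\log r$ on the circle $x^2 + y^2 = r^2$ for all large $r$ and some constant $C$. Then
\[
\int^{\infty} \frac{dr}{r\,q(r)} \geq \int^{\infty} \frac{dr}{r\,(1 + C\log r)} = \infty,
\]
so the surface is of parabolic type. On the other hand, if $|\mathrm{grad}\ t|$ is allowed to grow like $r^{\varepsilon}$ for some $\varepsilon > 0$, the integral converges and the criterion gives no conclusion.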
After this result, Lavrentieff gives a construction of surfaces of hyperbolic type:
With the above notation, for the surface S to be of hyperbolic type, it suffices that the following condition be satisfied: for any domain D contained in S and containing the point (0, 0, t(0, 0)), the area of D is smaller than $l^{2-\varepsilon}$, where l is the length of the frontier of D and ε is an arbitrary fixed positive number.
The paper [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] contains further results. Lavrentieff addresses the question of finding a conformal representation of a two-dimensional Riemannian metric defined by a length element of the general form
\[
ds^2 = E\,dx^2 + 2F\,dx\,dy + G\,dy^2, \qquad EG - F^2 > 0, \tag{2}
\]
onto a domain in the Euclidean plane. This question is essentially the one that was raised by the nineteenth-century cartographers/differential geometers, which we reviewed in §2.
Lavrentieff assumes that the functions E, F, G are continuous. He mentions again the 1928 paper [START_REF] Grötzsch | Über die Verzerrung bei schlichten nichtkonformen Abbildungen und über eine damit zusammenhängende Erweiterung des Picardschen Satzes[END_REF] by Grötzsch, and he declares that they both discovered the same notion of almost analytic functions.
The papers [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] and [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] contain other analytic and geometric applications. In particular, Lavrentieff obtains an analogue of Picard's theorem generalized to almost analytic functions. We recall the statement of Picard's theorem, because quasiconformal mappings play an important role in the later developments of this theorem.
Picard discovered the famous theorem which carries his name in 1879 [START_REF] Picard | Sur une propriété des fonctions entières[END_REF]; cf. also the paper [START_REF] Picard | Mémoire sur les fonctions entières[END_REF] in which he gives more details. The result has two parts, the first concerns entire functions (that is, holomorphic functions defined on the whole complex plane), and the second concerns meromorphic functions. The two results are sometimes called the Picard "small" and "big" theorem respectively. The result concerning entire functions says that for any non-constant entire function f , the equation f (z) = a has a solution for any complex number a except possibly for one value. This generalizes in a significant way Liouville's theorem saying that the image of a non-constant entire function is unbounded. The result concerning meromorphic functions says that such a function, in the neighborhood of an essential singularity (that is, a point where the function has no finite or infinite limit) takes any complex value infinitely often, with at most one exception.
The proofs by Picard of his results use the theory of elliptic functions and results on the Laplace equation. A large number of papers were published, starting from 1896 (17 years after Picard obtained his result!), whose aim was to find simpler proofs and generalizations of this theorem. They include Borel [START_REF] Borel | Démonstration élémentaire d'un théorème de M. Picard sur les fonctions entières[END_REF], Schottky [START_REF] Schottky | Über den Picard'sehen Satz und die Boret'sehen Ungleichungen[END_REF], Montel [START_REF] Montel | Sur les familles de fonctions analytiques qui admettent des valeurs exceptionnelles dans un domaine[END_REF], [START_REF] Bloch | Démonstration directe de théorèmes de M. Picard[END_REF], Carathéodory [START_REF] Carathéodory | Sur quelques généralisations du théorème de M. Picard[END_REF], Landau [START_REF] Landau | Ueber eine Verallgemeinerung des Picard schen Satzes Berl[END_REF], Lindelöf [START_REF] Lindelöf | Sur le théorème de M. Picard dans la théorie des fonctions monogènes[END_REF], Milloux [START_REF] Milloux | Le théorème de M. Picard, suites de fonctions holomorphes, fonctions méromorphes et fonctions entières (Thesis)[END_REF], Valiron [203], and there are many others. These various geometric proofs and generalizations of Picard's theorem are the basis of the so-called value distribution theory, which was born officially in the work of Nevanlinna in the 1920s, with the publication of his papers [START_REF] Nevanlinna | Sur les relations qui existent entre l'ordre de croissance d'une fonction monogène et la desnité de ses zéros[END_REF], [START_REF] Nevanlinna | Untersuchungen über den Picard'schen Satz[END_REF], [START_REF] Nevanlinna | Zur Theorie der meromorphen Funktionen[END_REF] and his book Le théorème de Picard-Borel et la théorie des fonctions méromorphes, [START_REF] Nevanlinna | Le théorème de Picard-Borel et la théorie des fonctions méromorphes[END_REF]. This theory became one of the most active mathematical theories of the twentieth century. The introduction of quasiconformal mappings in that theory was one of the major steps in its development.
Lavrentieff obtained ( [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] p. 1012) the following generalization of Picard's theorem:
Let w = f (z) be an almost analytic function defined on the unit disc. Consider the characteristic functions p(z) ≥ 1 and θ(z) and assume that θ is continuous at every point z satisfying p(z) = 1. Assume also that 0 is an essential singularity. Then, in the neighborhood of z = 0 the equation f (z) = a has an infinite number of solutions, with the possible exception of one value of a.
Lavrentieff notes that Grötzsch in [START_REF] Grötzsch | Über die Verzerrung bei schlichten nichtkonformen Abbildungen und über eine damit zusammenhängende Erweiterung des Picardschen Satzes[END_REF] proved the same theorem under the condition that the function p is bounded (this is the strong form of quasiconformality).
There are geometric applications given by Lavrentieff in his paper [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF], concerning the Riemann mapping theorem applied to surfaces equipped with Riemannian metrics given in the form (2).
Lavrentieff is also one of the main founders of the theory of quasiconformal mappings in dimensions ≥ 3, but this is another story.
The book [112] contains very rich information about Lavrentieff, his life and his epoch. It is written by his former students and colleagues. One chapter consists of an edition of Lavrentieff's memoirs. Lavrentieff writes there that in 1929, as a senior engineer at the Theoretical Department of the Russian Institute of Aerodynamics, he was given the task of determining the velocity field of the fluids, in a problem related to thin wings, and that he wanted in some way to "justify the mathematics." Within six months, he managed, on the basis of variational principles using conformal mappings, to find a number of estimates for the desired solution. These estimates allowed him to identify a class of functions among which the solution had to be found. The problem was reduced to the question of solving a system of linear equations, and this led to its solution. It turned out that the theory of conformal mappings could not fully meet the needs of aerodynamics. This was related to the necessity of taking into account the compressibility of the air and the possibility of exceeding the speed of sound, and this led to the study of a non-linear system of partial differential equations. The theory of conformal mappings needed to be extended to a wider class of mappings, and this was the birth of a new theory of quasiconformal mappings.
In the same book [112], in the chapter by Ibragimova, it is recounted how Lavrentieff, while he was waiting for his future wife at a tram stop in Riga, solved a problem with which he had struggled unsuccessfully for more than two years; the main point of the solution consisted in introducing quasiconformal mappings into the problem at hand.
The reader can find more details on Lavrentieff's work in the papers [START_REF] Alexandrov | In memory of Mikhail Alekseevich Lavrent'ev (Russian)[END_REF] and [START_REF] Leray | [END_REF].
Ahlfors
In this section, after a few words on Ahlfors' life, we highlight a few aspects of his early work on quasiconformal mappings.
There are several good biographies of Ahlfors and we shall be brief. We refer the interested reader to his short autobiography in Volume I of his Collected works edition [START_REF] Ahlfors | Collected works[END_REF], to Gehring's biography in the first part of [76], to the notes on his work by Gehring, Osserman and Kra which appeared in the Notices of the AMS [START_REF] Gehring | The mathematics of Lars Valerian Ahlfors[END_REF] and which are reproduced in [76], to Lehto's article [START_REF] Lehto | On the life and works of Lars Ahlfors[END_REF] in the Mathematical Intelligencer, and to the lively biography by Lehto which appeared recently in English translation [START_REF] Lehto | At the summit of mathematics[END_REF].
Ahlfors studied at the University of Helsinki in the 1920s. This university was already an internationally known center for function theory, with Ernst Lindelöf and Rolf Nevanlinna as professors. Ahlfors obtained his Master's degree in the spring of 1928. In the autumn of the same year, he accompanied Nevanlinna, who was one of his teachers, for a stay in Zurich, where the latter was an invited professor at the Eidgenössische Technische Hochschule, replacing Hermann Weyl, who was on leave that year. Ahlfors later described his new environment as follows: "I found myself suddenly transported from the periphery to the center of Europe" ([START_REF] Lehto | On the life and works of Lars Ahlfors[END_REF] p. 4).
In 1929, Ahlfors surprised the mathematical world by proving a 21-year-old conjecture of Denjoy concerning the asymptotic values of an entire function. Ahlfors heard about this conjecture at Nevanlinna's lectures in Zurich. The conjecture says that the number of finite asymptotic values of an entire function of order k is at most 2k. Making sense of this statement requires the definition of an appropriate notion of "order" and "asymptotic value" of an entire function, and this is done in the setting of the so-called Nevanlinna theory, or value distribution theory. Ahlfors' approach to the Denjoy conjecture was completely new, based on quasiconformal mappings. His solution became part of his doctoral thesis, written under Nevanlinna and submitted in 1930. In his article "The joy of function theory" [START_REF] Ahlfors | The joy of function theory[END_REF], Ahlfors writes:
In retrospect, the problem by itself was hardly worthy of the hullabaloo it had caused, but it was not of the kind that attracts talents, especially young talents. It is not unusual that the same mathematical idea will surface, independently, in several places, when the time is ripe. My habits at the time did not include regular checking of the periodicals, and I was not aware that H. Grötzsch had published papers based on ideas similar to mine, which he too could have used to prove the Denjoy conjecture.
Ahlfors' work on quasiconformal mappings was first essentially directed towards applications to function theory, which remained his main field of research for more than ten years (roughly speaking, from 1929 to 1941). The type problem was one of his central objects of interest during this period. After that, Ahlfors became thoroughly involved in Teichmüller theory and Kleinian groups; this was also the subject of his remarkable collaboration with Bers. Regarding this collaboration, Abikoff writes in his paper [START_REF] Abikoff | Remembering Lipman Bers[END_REF]:

It was during this period of extensive expository writing that Bers's research blossomed as well. In his 1958 address to the International Congress of Mathematicians, Bers announced a new proof of the so-called Measurable Riemann Mapping Theorem. He then essentially listed the theorems that followed directly from this method, including the solution to Riemann's problem of moduli. He was outlining the work of several years, much of which was either joint with or paralleled by work of Lars Ahlfors. Their professional collaboration was spiritually very close; they were in constant contact although they wrote only one paper together, the paper on the Measurable Riemann Mapping Theorem. Nonetheless, their joint efforts, usually independent and often simultaneous, and their personal generosity inspired a cameraderie in the vaguely defined group which formed around them. The group has often been referred to as the "Ahlfors-Bers family" or "Bers Mafia"; cf. the recent article by Kra [START_REF] Kra | The Ahlfors-Bers family[END_REF]. It is gratifying to see that the spirit of cooperation they fostered lives on among many members of several generations of mathematicians.
Until the end of his life, quasiconformal mappings continued to play an essential role in Ahlfors' work. His 1954 paper [START_REF] Ahlfors | On quasiconformal mappings[END_REF], which concerns a new proof of Teichmüller's Theorem, contains several basic results on quasiconformal mappings, including a general definition of such mappings using the distortion of quadrilaterals, a definition in which the differentiability of the map becomes irrelevant. The paper also contains a reflection principle, a compactness result, and an analogue of the Hurwitz theorem for quasiconformal mappings.
Like Lavrentieff did before him, Ahlfors used quasiconformal mappings as an important tool to prove results in function theory. One of the important by-products of his 1935 paper Zur Theorie der Überlagerungsflächen [START_REF] Ahlfors | Zur Theorie der Überlagerungsflächen[END_REF] (On the theory of covering surfaces) was to show that several of Nevanlinna's results on value distribution theory had little to do with conformality; in fact they hold for quasiconformal mappings. This came as quite a surprise, since quasiconformal mappings are very different from the conformal ones. The latter satisfy very strong rigidity properties; for instance, the values of such a mapping in an arbitrarily small region determine its values everywhere. Quasiconformal mappings do not satisfy such properties. Ahlfors writes in the introduction to his paper [START_REF] Ahlfors | Zur Theorie der Überlagerungsflächen[END_REF] (translation in [START_REF] Lehto | At the summit of mathematics[END_REF] p. 28):
This work had its origin in my endeavor to get by geometric means the most significant results of meromorphic function theory. In these attempts it became evident that only an easily limited part of R. Nevanlinna's Main Theorems, and thereby nearly all classical results, were dependent on the analyticity of the mapping. In contrast, their entire structure is determined by the metric and topological properties of the Riemann surface, which is the image of the complex plane. The image surface is then thought of as being spread out above the Riemann sphere, i.e. to be the covering surface of a closed surface.
Ahlfors also introduced the techniques of quasiconformal geometry to prove results in conformal function theory. The need for a more flexible space of functions to work with was one reason why quasiconformal mappings naturally appeared in the theory of functions of a complex variable.
Another important point of view which Ahlfors brought in function theory is topology. The topological techniques of branched coverings, cutting and pasting pieces of the complex plane, the Euler characteristic, the Riemann-Hurwitz formula, and other topological tools, became with Ahlfors part of the theory of functions of a complex variable. In some sense, Ahlfors' new approach constituted a return to the sources, that is, to the geometrical and topological methods introduced by Riemann, for whom meromorphic functions were nothing else than branched coverings of the sphere characterized by some finite data describing their singularities. Carathéodory, in presenting Ahlfors' work at the Fields medal ceremony in 1936, declared that Ahlfors opened up a new chapter in analysis which could be called "metric topology" (quoted in [START_REF] Lehto | On the life and works of Lars Ahlfors[END_REF] p. 4). The theorems of Picard and Nevanlinna on value distribution became, under Ahlfors, theorems on "islands": instead of counting the number of times that certain values are omitted, in Ahlfors' theory, one counts how many "regions" are omitted. Ahlfors also brought the Gauss-Bonnet theorem and other results of differential geometry into the realm of Nevanlinna's theory.
Let us finally mention that Ahlfors' first works on the type problem (see [9] and [START_REF] Ahlfors | Sur le type d'une surface de Riemann[END_REF]), unlike the works of Lavrentieff on the same problem, involve the so-called length-area method, a common tool used by Grötzsch and by Ahlfors himself, and later on by Teichmüller, in relation with quasiconformal mappings. Regarding this method, Ahlfors writes, commenting on his first two published papers on asymptotic values of entire functions of finite order in his Collected Works [START_REF] Ahlfors | Collected works[END_REF] Vol. 1 (p. 1):
The salient feature of the proof is the use of what is now called the length-area method. The early history of this method is obscure, but I knew it from and was inspired by its application in the well-known textbook of Hurwitz-Courant to the boundary correspondence in conformal mapping. None of us [Ahlfors is talking about Nevanlinna and himself] was aware that only months earlier H. Grötzsch had published two important papers on extremal problems in conformal mapping in which the same method is used in a more sophisticated manner. My only priority, if I can claim one, is to have used the method on a problem that is not originally stated in terms of conformal mapping. The method that Grötzsch and I used is a precursor of the method of extremal length [...] In my thesis [START_REF] Ahlfors | Untersuchungen zur Theorie der konformen Abbildung und der ganzen Funktionen[END_REF] the lemma on conformal mapping has become the main theorem in the form of a strong and explicit inequality or distortion theorem for the conformal mapping from a general strip domain to a parallel strip, together with a weaker inequality in the opposite direction. [...] A more precise form of the first inequality was later given by O. Teichmüller. Ahlfors refers here to Teichmüller's paper [START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF], on which we comment in what follows.
On Teichmüller's writings
During his short lifetime, Oswald Teichmüller (1913-1943) published a series of fundamental papers on geometric function theory and on moduli of Riemann surfaces. At the same time, he developed to a high degree of sophistication the theory of quasiconformal mappings and their applications. He also wrote several papers on algebra and number theory. His writings were read by very few mathematicians. His article Extremale quasikonforme Abbildungen und quadratische Differentiale (Extremal quasiconformal mappings and quadratic differentials) [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], published in 1939, is known among Teichmüller theorists, but it is seldom quoted in the mathematical literature. In fact, to this day, MathSciNet mentions only 35 citations of that paper: 6 between 1949 and 1984, and the rest (29 citations) between 2000 and 2016. These figures are strikingly low, in view of the major impact that the paper had and the extensive literature to which it gave rise. Furthermore, when Teichmüller's papers were quoted, they were generally accompanied by a certain amount of suspicion, and the impression which resulted from the comments on them is that Teichmüller did not provide proofs for his results. Bers, in his 1968 ICM address [START_REF] Bers | Spaces of Riemann surfaces[END_REF], referring to the paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] we just mentioned, writes the following:
Much of [my] work consists in clarifying and verifying assertions of Teichmüller whose bold ideas, though sometimes stated awkwardly and without complete proofs, influenced all recent investigations, as well as the work of Kodaira and Spencer on the higher dimensional case. Ahlfors, in his 1954 paper [START_REF] Ahlfors | On quasiconformal mappings[END_REF], writes:
In a systematic way the problem of extremal quasiconformal mapping was taken up by Teichmüller in a brilliant and unconventional paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF]. He formulates the general problem and, although unable to give a binding proof, is led by heuristic arguments to a highly elegant conjectured solution. The paper contains numerous fundamental applications which clearly show the importance of the problem. It is a matter of fact that Teichmüller cared much more about transmitting ideas than about writing detailed proofs. In the paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], he introduced the space that later became known as Teichmüller space, equipped it with the metric that now bears the name Teichmüller metric, and thoroughly investigated its properties, including its infinitesimal (Finsler) structure; he proved the existence and uniqueness of geodesics between any two points. He developed the theory of quasiconformal mappings between arbitrary Riemann surfaces and the partial differential equations they satisfy, giving a characterization of the tangent and cotangent spaces at each point of Teichmüller space as spaces of equivalence classes of partial differential equations and of quadratic differentials respectively. He also studied Teichmüller discs (which he called complex geodesics) and investigated global convexity properties of the space. There are many other ideas and results in that paper, including the development of quasiconformal invariants of Riemann surfaces. We shall comment on these ideas in §8 below.
There are other remarkable papers of Teichmüller, and all of them are almost never quoted. Among the most important ones, we mention the paper Untersuchungen über konforme und quasikonforme Abbildungen (Investigations on conformal and quasiconformal mappings) [START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF] in which Teichmüller brings new techniques and results on quasiconformal geometry with applications to Nevanlinna's theory, the paper Veränderliche Riemannsche Flächen (Variable Riemann surfaces) [185], in which he lays down the bases of the complex structure of Teichmüller space, and the paper Über Extremalprobleme der konformen Geometrie (On extremal problems in conformal geometry) [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF] which contains a broad research program in which Teichmüller displays analogies between Riemann surface theory, algebra and Galois theory. The exact content of all these papers and the methods developed there are very poorly known, even among the specialists. Our aim in the next section is to give an overview of some of Teichmüller's important ideas on quasiconformal mappings and their applications in geometric function theory, Riemann surfaces and moduli. Many of these papers are still worth reading in detail today; they contain ideas that lead directly to interesting research projects. Let us now say a few words on Teichmüller's writings.
In the first 40 years after their publication, Teichmüller's works attracted almost exclusively the interest of analysts, represented in a masterly manner by Ahlfors and Bers. Geometers became interested in Teichmüller's work essentially when Thurston entered the scene, in the 1970s. Let us quote Thurston, from his foreword to Hubbard's book on Teichmüller spaces [START_REF] Hubbard | Teichmüller theory and applications to geometry, topology, and dynamics[END_REF]:
Teichmüller theory is an amazing subject, richly connected to geometry, topology, dynamics, analysis and algebra. I did not know this at the beginning of my career: as a topologist, I started out thinking of Teichmüller theory as an obscure branch of analysis irrelevant to my interests. My first encounter with Teichmüller theory was from the side. I was interested in some questions about isotopy classes of homeomorphisms of surfaces, and after struggling for quite a while, I finally proved a classification theorem for surface homeomorphisms. [...] Bers gave a new proof of my classification theorem by a method that was much simpler than my own, modulo principles of Teichmüller theory that had been developed decades earlier. From this encounter I came to appreciate the beauty of Teichmüller theory, and of the close connections between 1-dimensional complex analysis and two and three-dimensional geometry and topology. A great deal of mathematics has been developed since that time and there are many active connections between geometry, topology, dynamics and Teichmüller theory.
Teichmüller's papers have a conversational and informal style, but the details of the proofs are difficult to follow. At several places, difficult statements are given with only sketches of proofs. At some points, the proofs, when they exist, involve new ideas from various directions including geometry, topology, partial differential equations, etc. This makes them arduous for a reader whose knowledge is specialized. Teichmüller sometimes states a problem using several equivalent formulations, explains the difficulties and the methods of attack, returns to the same problem several pages later, sometimes several times, each time from a new point of view and with a new explanation, and eventually comes up with a new idea and declares that the problem is solved, after he had presented it as a conjecture. In the introduction of the paper Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen (Determination of extremal quasiconformal mappings of closed oriented Riemann surfaces) [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF] (1943), he writes, referring to his previous paper Extremale quasikonforme Abbildungen und quadratische Differentiale [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF]:
In 1939, it was a risk to publish a lengthy article entirely built on conjectures. I had studied the topic thoroughly, was convinced of the truth of my conjectures and I did not want to keep back from the public the beautiful connections and perspectives that I had arrived at. Moreover, I wanted to encourage attempts for proofs. I acknowledge the reproaches, that have been made to me from various sides, even today, as justifiable but only in the sense that an unscrupulous imitation of my procedure would certainly lead to a barbarization of our mathematical literature. But I never had any doubts about the correctness of my article, and I am glad now to be able to actually prove its main part.
At that time, I did not have an exact theory of modules, the conformal invariants of closed Riemann surfaces and similar "principal domains." In the meantime I have developed such a theory aiming at the intended application to quasiconformal mappings. I will have to briefly report on it elsewhere. The present proof does not depend on this new theory, and works with the notion of uniformization instead. However, I think one will have to combine both to bring the full content of my article [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] in mathematically exact form.
At the end of the same paper, he writes: [...] I cannot elaborate on these and similar questions and only express my opinion that all those aspects that have been treated separately in the discussion so far will appear as a great unified theory of variable Riemann surfaces in the near future.
Teichmüller died soon after, without being able to develop the unified theory he was aiming for.
Ahlfors and Gehring write in the preface to Teichmüller's Collected works:
Teichmüller's style was unorthodox, to say the least. He himself was well aware of the difference between a proof and an intuitive reasoning, but his manner of presentation makes it difficult to follow the frequent shifts from one mode to another.
Abikoff, in his expository article on Teichmüller [START_REF] Abikoff | Oswald Teichmüller[END_REF], recalls that the aim of Deutsche Mathematik, the journal in which most of Teichmüller's papers were published, was to concentrate on general ideas and not on technical details. This was supposed to be the tradition to which the name "German mathematics" refers, a tradition that was shared by Riemann and Klein. Abikoff adds that he learned from a conversation with Herbert Busemann that "Teichmüller manifested those traits early in his career but when pressed could offer a formal proof." In fact, in some places Teichmüller's style exceeded even the norms of Deutsche Mathematik. For instance, the editors of that journal included the following footnote in the introduction of the paper Über Extremalprobleme der konformen Geometrie [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF] (1941):
The following work is obviously unfinished, it has the character of a fragment. Unreasonably high demands are made on the reader's cooperation and imagination. For the assertions, which are not even stated precisely with rigor, neither proofs nor even any clues are given. One thing not of fundamental importance, the "residual elements" occupy a broad space for something almost unintelligible, while far too scarce hints are provided for fundamentally important individual examples. But the author explains that he cannot do better in the foreseeable future. -If we still publish the author's remarks, despite all lack that distinguishes the work against the other papers in this journal, it is to bring up for discussion the thoughts contained therein relating to the theme of estimates for schlicht functions.
Concerning the purely mathematical content and as a general rule, Teichmüller avoided the use of sophisticated tools, and his proofs are based on first principles. This is probably another reason for which his papers are difficult to read. Quoting theorems and building upon them often makes things easier than developing a theory from scratch. Teichmüller writes, in the paper [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF] (1943):
It is rather because of my unaptness and lack of knowledge on continuous topology, that I can only prove the purely topological lemma taking such long detours. But in my opinion it would be absurd to introduce the length of the required proof as a measure for the value of a statement. Of course none of the necessary details of a proof may be left out if a statement seems worth a proof at all. But as our example shows clearly, there is a natural division in main points and minor points that we may not arbitrarily change.
Let us now say a few words about Teichmüller's life. Teichmüller entered the University of Göttingen in 1931, at a time when this university was at the height of its glory. Hilbert had retired from his professorship the year before, but was still lecturing on the philosophy of science. Hermann Weyl, who had come back from Zurich, officially as Hilbert's successor, was among Teichmüller's teachers. He was lecturing on differential geometry, algebraic topology and the philosophy of mathematics. Teichmüller's other teachers included Richard Courant (who was the head of the institute), Otto Neugebauer, Gustav Herglotz, Emmy Noether and Edmund Landau. Besides the professors, there were brilliant assistants and graduate students at the mathematics department, including Saunders Mac Lane, Herbert Busemann, Ernst Witt, Hans Wittich, Werner Fenchel and Fritz John. Max Born and James Franck were among the professors at the physics department.
The Nazis came to power in Germany in February 1933 and the deluge arrived very rapidly. On April 7, 1933, a law was passed excluding the Jews and the "politically unreliable" from civil service. In particular, Jewish faculty members at all state universities had to be immediately dismissed, with a few exceptions (those who had served in the German army during World War I and those who had more than 25 years of academic service). At the same time, a state law restricted the number of Jewish students at German universities. The University of Göttingen immediately declined. It nevertheless continued to receive distinguished visitors. In particular, Rolf Nevanlinna lectured there in 1936 and 1937. Teichmüller was very much influenced by these lectures.
Teichmüller obtained his doctorate two years later, in 1935, under Helmut Hasse. The subject was functional analysis. Right after his thesis, he published several papers on algebra and geometric algebra. In October 1936, he started working on his Habilitation under Bieberbach, and he obtained it in 1938. The subject was function theory. The Habilitation thesis is published as [START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF]. Most of Teichmüller's papers on function theory and Riemann surfaces were published during the Second World War (between 1939 and 1943). In the summer of 1939, just after he was offered a position at the University of Berlin, he was drafted into the army. The war soon broke out and he was sent to the Norwegian front. He spent a few weeks there, after which he returned to Berlin, but he never officially took a job at the university. Until 1943, he held a position at the high command of the army.
It seems however that he was not satisfied with this work for the army. In 1941, he applied for a position at Munich's Ludwig-Maximilians University, to take over the position which would eventually become vacant after Carathéodory's retirement. The paper [START_REF] Litten | Die Carathéodory-Nachfolge in München 1938-1944[END_REF] by Freddy Litten describes the complex events related to Carathéodory's succession. The book [START_REF] Georgiadou | Mathematics and politics in turbulent times. With a[END_REF] by Maria Georgiadou also contains several pages devoted to this question. This author writes (p. 355) that "in the six years preceding the appointment of his successor, Carathéodory was active in proposing or blocking candidacies." At a certain stage, Helmuth Kneser and Teichmüller were at the top of the list. We read in [START_REF] Georgiadou | Mathematics and politics in turbulent times. With a[END_REF] p. 360 (information obtained from [START_REF] Litten | Die Carathéodory-Nachfolge in München 1938-1944[END_REF], note 166 p. 163) that Wilhelm Führer, who was the mathematics referee at the education ministry regarding that position, described Teichmüller as an "old Nazi, free from any kind of authoritarian behavior, and Kneser as a party comrade and an active SA member." We also read in [START_REF] Georgiadou | Mathematics and politics in turbulent times. With a[END_REF] p. 361 (information from [START_REF] Litten | Die Carathéodory-Nachfolge in München 1938-1944[END_REF], note 166 p. 163ff.) that "Carathéodory complained about the imperfect reasoning in a treatise of Teichmüller's." On November 13, 1941, Carathéodory wrote to Wilhelm Süss (a former Nazi and member of the SA, who was the president of the DMV, the German Mathematical Society, and who, incidentally, founded the Oberwolfach Mathematics Research Institute) that "Teichmüller was not sufficiently mature for one of the main posts at a great German university" ([START_REF] Georgiadou | Mathematics and politics in turbulent times. With a[END_REF] p. 361, from Nachlaß Wilhelm Süss C89/48, Freiburg Universitätsarchiv). Despite Carathéodory's reservations, at a certain stage Teichmüller was ranked first. But at a later session of the senate, "it was revealed that the head of the lecturers had altered documents in Teichmüller's favor, so Teichmüller was crossed off the list" ([START_REF] Georgiadou | Mathematics and politics in turbulent times. With a[END_REF] p. 361, information from [START_REF] Litten | Die Carathéodory-Nachfolge in München 1938-1944[END_REF], note 166 p. 165ff.). Thus, Teichmüller did not get the position at Munich. In the fall of 1943, he volunteered as a soldier on the Eastern Front, and he presumably died there the same year. In a set of notes written by Teichmüller's mother and translated by Abikoff in [START_REF] Abikoff | Oswald Teichmüller[END_REF], one can read: "Even as a soldier he found the time to write papers while his comrades rested." In the introduction of his paper [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF], Teichmüller writes: Because I only have a limited vacation time at my disposal, I cannot give reasons for many things, but only assert them. This is unfortunate because I do not know anyway the exact and generally valid proofs of the principles to be drawn up. After all, the expert familiar with Extremale quasikonforme Abbildungen und quadratische Differentiale will be able to add much of what is missing.
Also I have not yet been able to perform many individual studies.
In the introduction of [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], he writes:
As I said before, the main results will not be proven in the strictest sense. I only hope to establish them in a way that any serious doubts are practically excluded, and to encourage finding the proofs. Thus when possible, the ideas that led to discovering the solution are conveyed in direct chain. As far as I can see, attempts of exact proofs find their right place only where this train of thought ends, according to the paradox "proving is reversing the train of thought." So it is expected that a systematic theory progressing toward exact proofs would cause for the time being bigger difficulties in understanding than the present heuristic introduction.
Many people were reluctant to read Teichmüller's papers, because they had heard of his extreme political ideals. Concerning his active involvement in the Nazi movement during his years in Göttingen, there are several versions and explanations. Abikoff collected information from Busemann, Fenchel and others who were present at Göttingen at that period. In his paper [START_REF] Abikoff | Oswald Teichmüller[END_REF], he reports that Fenchel recalls: "That Teichmüller was a member of the Nazi party, we learned when he distributed Nazi propaganda in the Mathematics Institute. Otto Neugebauer who assisted Courant in the administration of the Institute threw him out." In a recent book [START_REF] Lehto | At the summit of mathematics[END_REF], Lehto writes (p. 74): "Hans Wittich had been in Göttingen at the same time as Teichmüller. Both had listened to Rolf Nevanlinna's lectures concerning Riemann surfaces and had had mathematical collaboration. Wittich who had many contacts in Zürich, never joined the National Socialist party, and soon after the Allied forces deemed him politically fit to serve as a professor. In his view, Teichmüller could not have had many chances to be politically active during his time at Göttingen since he spent all days sitting in the library with his mathematical papers." Lehto adds: "But many dissenting opinions were voiced, particularly among Jewish mathematicians. They acknowledged Teichmüller's mathematical merits but regarded him as a human being of the basest sort possible. In various ways they aimed to show that Teichmüller's mathematical achievements were not as groundbreaking as was given to believe since numerous mathematicians before him had worked on similar questions."
In conclusion, there are several reasons why results of Teichmüller remained unknown for a long time. Some of them still remain so. As a consequence, some of these results were rediscovered by others. They are contained in papers written by experts in the field, without any mention of Teichmüller's name and probably without any knowledge of the fact that they had already been discovered by him. It is important to set the history of science straight and to attribute the origin of an idea to the person who discovered it first.
Teichmüller's work on quasiconformal mappings
It is not our intention in the rest of this paper to write a detailed summary of Teichmüller's results nor their impact, and we shall leave aside many theorems he proved. Instead, we concentrate on some of the ideas he brought in the theory of quasiconformal mappings and their applications. The reader who wishes to learn more about Teichmüller's specific results is referred to his original papers. Some of them exist now in English translation and appeared in various volumes of the Handbook of Teichmüller theory [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF][START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF][START_REF] Teichmüller | M. Karbe, Complete solution of an extremal problem for the quasiconformal mapping[END_REF][START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF][START_REF] Teichmüller | Karbe, A displacement theorem for quasiconformal mapping[END_REF]185,[START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF]. Others will appear in the same series. We also refer to the commentaries [START_REF] Alberge | A commentary on Teichmüller's paper Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF][START_REF] 'campo-Neuen | A commentary on Teichmüller's paper Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF][START_REF] 'campo-Neuen | A commentary on Teichmüller's paper Veränderliche Riemannsche Flächen[END_REF][START_REF] Campo | A commentary on Teichmüller's paper Über Extremalprobleme der konformen Geometrie[END_REF][START_REF] Alberge | A commentary on Teichmüller's paper Ein Verschiebungssatz der quasikonformen Abbildung[END_REF][START_REF] Alberge | A commentary on Teichmüller's paper Vollständige Lösung einer Extremalaufgabe der quasikonformen Abbildung[END_REF][START_REF] Alberge | A commentary on Teichmüller's paper Untersuchungen über konforme und quasikonforme Abbildungen[END_REF] which appeared in the same volumes.
We have divided our survey of Teichmüller's contribution to quasiconformal mappings into the following subsections:
(1) The notion of quasiconformal invariant and the behavior of conformal invariants under quasiconformal mappings.
(2) The existence and uniqueness theorem for extremal quasiconformal mappings.
(3) The theory of infinitesimal quasiconformal mappings.
(4) Quasiconformal mappings in function theory.
(5) The non-reduced quasiconformal theory.
(6) A distance function on Riemann surfaces using quasiconformal mappings.
(7) Another problem on quasiconformal mappings.
(8) A general theory of extremal mappings motivated by the extremal problem for quasiconformal mappings.
We now elaborate on these topics.
8.1. The notion of quasiconformal invariants and the behavior of conformal invariants under quasiconformal mappings.
In [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], Teichmüller considers a general surface of finite type, oriented or not, with or without boundary, and with or without distinguished points in the interior or on the boundary. He denotes by R_σ the Riemann moduli space of that surface; σ is the dimension of the space. He was aware of the fact that R_σ is an orbifold (and not a manifold). He introduces a covering, Teichmüller space, which he shows is a manifold, and which he denotes by R_σ, and he gives the following formula for the dimension:
σ = -6 + 6g + 3γ + 3n + 2h + k + ρ
where g is the number of handles; γ is the number of crosscaps; n is the number of boundary curves; h is the number of distinguished points in the interior; k is the number of distinguished points on the boundary; ρ is the dimension of the group of conformal self-mappings of the Riemann surface.
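To make the count concrete, here is a minimal sanity check of this formula in a few classical cases (the code and the function name are ours; the values of ρ used below, namely ρ = 0 when the group of conformal self-mappings is finite, ρ = 2 for the torus and ρ = 6 for the sphere, are the standard ones):

def sigma(g=0, gamma=0, n=0, h=0, k=0, rho=0):
    """Teichmueller's dimension count: -6 + 6g + 3*gamma + 3n + 2h + k + rho."""
    return -6 + 6 * g + 3 * gamma + 3 * n + 2 * h + k + rho

# Closed orientable surface of genus g >= 2: finite automorphism group, rho = 0.
assert sigma(g=2) == 6                # equals 6g - 6 for g = 2
# Torus: the group of conformal self-mappings has real dimension 2.
assert sigma(g=1, rho=2) == 2         # one complex modulus
# Sphere: the Moebius group has real dimension 6, and there are no moduli.
assert sigma(g=0, rho=6) == 0
# Sphere with four distinguished interior points: rho = 0.
assert sigma(g=0, h=4) == 2           # the cross-ratio

In particular, for a closed orientable surface of genus g ≥ 2 the formula gives the familiar real dimension 6g - 6.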
There are several relations with quasiconformal geometry, which we recall now.
In the introduction of the paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], Teichmüller asks in all generality and in a precise manner the question of finding mappings between two arbitrary surfaces that are closest to conformal, in relation with the general question of the behavior of conformal invariants. He writes:
In the present study, the behavior of conformal invariants under quasiconformal mappings shall be examined. This will lead to the problem of finding the mappings that deviate as little as possible from conformality under certain additional conditions.
Teichmüller was able to develop these ideas after he introduced Teichmüller space, equipping it with a topology. We note in passing that Riemann's cryptic statement about the existence of 3g -3 moduli was a parameter count, and can hardly be considered as a dimension count, since there was no topology and no manifold structure, let alone complex structure within sight. We refer the interested reader to the surveys [START_REF] 'campo | On the early history of moduli and Teichmüller spaces[END_REF] and [START_REF] Ji | Historical development of Teichmüller theory[END_REF] on the birth of the topology and the complex structure on Teichmüller space.
We briefly recall some facts on the topology that Teichmüller introduced on Teichmüller space, and the resulting quotient topology on Riemann's moduli space. This is closely related to the subject of our survey, since these topologies are based on the notion of quasiconformal mapping.
The topology is induced by the so-called Teichmüller metric, where the distance between two Riemann surfaces is the logarithm of the least dilatation of all quasiconformal mappings between them. This is defined in §18 of the paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF]. (Note that the metric, at this point of the paper, is defined on moduli space, and not yet on Teichmüller space.) We shall recall the precise statement below. In §19, Teichmüller formulates the following questions in which P and Q denote points in moduli space:
What is the distance [P Q] between P and Q? In particular, can we determine all extremal quasiconformal mappings whose dilatation quotients are everywhere ≤ e^[P Q]?
It is expected that there always exists an extremal quasiconformal mapping and that it is unique up to conformal self-mappings of the given principal regions. This is the main "conjecture" which Teichmüller eventually proved. The result became known under the name Teichmüller existence and uniqueness theorem. He obtained the proof in the paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] and in the later paper [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF].
Teichmüller introduces a conformal invariant as a function on Riemann's moduli space R σ ( §13). He says that locally, there are precisely σ independent conformal invariants. In §16, he talks about the behavior of conformal invariants under quasiconformal mappings. From the definition, a conformal invariant undergoes little changes under quasiconformal mappings if the supremum of the dilatation quotient is sufficiently close to 1. Teichmüller asks the following:
Let J be a conformal invariant, considered as a function on R σ . Consider a given Riemann surface, and let C > 1 be a given real number. What are the values that J takes at those Riemann surfaces onto which the given Riemann surface can be quasiconformally mapped so that the dilatation quotient remains everywhere ≤ C?
This conceptually important question is a formulation of one of the main problems he was interested in. At the end of §17, he says that the notion of quasiconformal dilatation can be used to define the topology of Riemann's moduli space. This is explained more precisely in §18, where Teichmüller introduces his distance. He writes:
Let us consider two Riemann surfaces of the same type represented in R σ by the points P and Q. Given any quasiconformal mapping between these Riemann surfaces, let C denote the supremum of its dilatation quotient. We define the distance [P Q] between the two points, or between the two Riemann surfaces, as the logarithm of the infimum of all these C.
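In modern notation (our reformulation, not a quotation), the definition in this passage reads

[P Q] = log inf { K(f) : f a quasiconformal mapping between the two Riemann surfaces },

where K(f) denotes the supremum of the dilatation quotient of f. Many modern texts insert a factor 1/2 in front of the logarithm; up to this normalization, this is the metric known today as the Teichmüller metric.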
Teichmüller formulates a more general question (end of §16) where the quasiconformal bound C of a quasiconformal mapping is not a real number, but a function on the surface. He says that this general question is hard. Such a theory of quasiconformal mappings with dilatation not necessarily bounded uniformly was developed later on by various authors, e.g. Lehto in [START_REF] Lehto | Homeomorphisms with a given dilatation[END_REF] and [START_REF] Lehto | Remarks on generalized Beltrami equations and conformal mappings[END_REF], David in [START_REF] David | Solutions de l'équation de Beltrami avec µ = 1[END_REF] and Brakalova and Jenkins in [START_REF] Brakalova | On solutions of the Beltrami equation[END_REF]; see also the survey by Otal in Volume IV of the Handbook of Teichmüller theory [START_REF] Otal | Quasiconformal and BMO-quasiconformal homeomorphisms[END_REF]. A notion of quasiconformal mappings whose dilatation is not bounded by a constant but by a function with a specific behavior at infinity is also used in Teichmüller's paper [START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF]. Here, he proves that if a one-to-one map from the complex plane to itself has a dilatation quotient D(z) which satisfies D(z) ≤ C(|z|), where C(r) is a real function defined for r ≥ 0 satisfying
C(r) → 1 as r → ∞ such that ∫^∞ (C(r) - 1) dr/r < ∞,
then as z → ∞ we have |w| ∼ const • |z|. We noted that Lavrentieff, before Teichmüller, in his paper [START_REF] Lavrentieff | Sur une classe de représentations continues[END_REF] obtained a related result. Lavrentieff's definition of quasiconformal mappings did not involve a uniform bound of the dilatation.
In his study of the behavior of conformal invariants under quasiconformal mappings, Teichmüller obtained a result on hyperbolic lengths of closed curves. In §35 of [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF], he establishes an inequality which compares the length of a simple closed geodesic on a hyperbolic surface to the length of the corresponding geodesic when the hyperbolic surface is transformed into another one by a quasiconformal mapping. The inequality involves the dilatation of the quasiconformal mapping. This inequality was rediscovered by Sorvali in [START_REF] Sorvali | On the dilatation of isomorphisms between covering groups[END_REF] (1973) and then by Wolpert in [START_REF] Wolpert | The length spectra as moduli for compact Riemann surfaces[END_REF] (1979). The result is known today as Wolpert's inequality. Teichmüller obtains this inequality using the point of view of Fuchsian groups; he analyses the effect of a quasiconformal mapping conjugating two Fuchsian groups on the dilatations of the hyperbolic transformations. The same method was used by Sorvali and Wolpert.
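For the reader's convenience, here is the form in which this inequality is usually stated nowadays (our formulation, in the standard modern normalization, not Teichmüller's): if f : X → Y is a K-quasiconformal homeomorphism between hyperbolic surfaces and γ is an essential closed curve on X, then

(1/K) ℓ_X(γ) ≤ ℓ_Y(f(γ)) ≤ K ℓ_X(γ),

where ℓ_X(γ) and ℓ_Y(f(γ)) denote the hyperbolic lengths of the geodesic representatives of γ and of its image.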
The point of view on the behavior of conformal invariants under quasiconformal mappings allows a variety of new definitions of quasiconformal mappings. Indeed, one starts with a conformal invariant of the surface (a Green function, extremal length, hyperbolic length, etc.) One then declares that a homeomorphism of the surface is quasiconformal if it distorts the given conformal invariant by a bounded amount. In the 1967 paper [START_REF] Gehring | Definitions for a class of plane quasiconformal mappings[END_REF] by Gehring, based on this idea, 12 definitions of quasiconformal mappings are given.
Let us note, to end this section on conformal invariants, that in §17 of [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], Teichmüller considers a problem related to surfaces of infinite type. He says that it is possible to start with the conformal invariants of a family of Riemann surfaces of finite type and study the behavior of limits of these invariants on families of Riemann surfaces that converge to a Riemann surface of infinite type.
8.2. The existence and uniqueness theorem for extremal quasiconformal mappings.
In his paper Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen (Determination of extremal quasiconformal mappings of closed oriented Riemann surfaces) [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF] (1943), Teichmüller completed the proof of his major result concerning extremal quasiconformal mappings. The theorem says that in any homotopy class of homeomorphisms between two closed orientable Riemann surfaces of genus g ≥ 2 there exists a unique extremal quasiconformal mapping, and that this mapping, in appropriate local coordinates, is affine, that is, it has the form:
z → K • Re(z) + i • Im(z).
The local coordinates on the surface are given by the horizontal and the vertical measured foliations of a holomorphic quadratic differential associated to the homeomorphism. The resulting extremal mapping is usually called the Teichmüller mapping. As we already noted, the result was stated in the paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] as a "conjecture" and in the setting of an arbitrary surface of finite type (with or without boundary, and with or without distinguished points in the interior and on the boundary). The uniqueness part of the statement was proved in that paper. The existence was obtained for closed surfaces in [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF]. In the same paper, Teichmüller says that he will publish later on a proof of his theorem for an arbitrary surface of finite type.
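In later terminology (used here only for orientation; it is not Teichmüller's), this statement is often summarized as follows. If q is the holomorphic quadratic differential in question and K the extremal dilatation, then the extremal map f has Beltrami coefficient

μ_f = f_z̄ / f_z = k q̄/|q|, with k = (K - 1)/(K + 1),

and in the natural coordinate ζ = ξ + iη determined by q (so that q = dζ² away from its zeros) the map takes the affine form ξ + iη → Kξ + iη, in agreement with the formula displayed above.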
One of the main tools in Teichmüller's existence and uniqueness theorem is the notion of holomorphic quadratic differential. The complex dilatation of an extremal quasiconformal mapping is expressed in terms of such a differential. In §46 of his paper, Teichmüller begins a study of the connection between quasiconformal mappings and quadratic differentials. He reports the following:
During a night of 1938, I came up with the following conjecture:
Let dζ² be an everywhere finite quadratic differential on F, different from 0. Assign to every point of F the direction where dζ² is positive. Extremal quasiconformal mappings are described through the direction fields obtained this way and through arbitrary constant dilatation quotients K ≥ 1. This dream was the starting point of an extremely rich theory which transformed the field of quasiconformal mappings.
The appearance of holomorphic quadratic differentials to describe the dilatation of a quasiconformal mapping is in itself surprising because a quasiconformal mapping is a non-holomorphic object. Ahlfors writes in his paper [START_REF] Ahlfors | The joy of function theory[END_REF], p. 445: Over the years this important notion [quasiconformal mappings] has changed the nature of function theory quite radically, and so on many different levels. The ultimate miracle was performed by O. Teichmüller, a completely unbelievable phenomenon for better and for worse. He managed to show that a simple extremal problem, which deals with quasiconformal mappings of a Riemann surface, but does not involve any analyticity, has its solution in terms of a special class of analytic functions, the quadratic differentials. Over the years since Teichmüller's demise his legacy has mushroomed to a new branch of mathematics, known as Teichmüller theory, whose connection with function theory is now almost unrecognizable.
On the other hand, a quadratic differential on a surface is characterized by a pair of orthogonal measured foliations. It is good to recall here that a pair of orthogonal foliations on a Riemann surface, in relation with extremal problems for mappings between surfaces, already appears in the work of Tissot; see §3.
In the theory of quadratic differentials that Teichmüller developed, the consideration of distinguished points is a significant issue. Technically speaking, this involves the study of the behavior of quadratic differentials at these points. Such a theory was further developed by Jenkins in several papers. We shall mention some of these works below. Teichmüller's idea also led to the outline of a far-reaching theory of extremal maps, with relations to algebra and Galois theory, which he presented in his paper [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF] and on which we shall also report below.
Teichmüller's existence and uniqueness theorem generalizes Grötzsch's existence and uniqueness result for a quasiconformal mapping with least dilatation between quadrilaterals. Teichmüller, in [START_REF] Teichmüller | M. Karbe, Complete solution of an extremal problem for the quasiconformal mapping[END_REF], gave a proof of an analogous theorem for the case of a pentagon, using the so-called method of continuity. In the last section of the paper [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF], he writes: "During my research, I have always kept in mind the aim to give a continuity proof for the existence of quasiconformal mappings similar to the one in the case of the pentagon." In his 1964 survey paper on quasiconformal mappings and their applications [20], Ahlfors writes that the result on pentagons is "already a sophisticated result." Teichmüller writes (§1 of his paper [START_REF] Teichmüller | M. Karbe, Complete solution of an extremal problem for the quasiconformal mapping[END_REF]) that the case of the pentagon is "the simplest case of the higher cases," and that this simple case already "shows how far one has to go beyond and extend the methods of Ahlfors and Grötzsch."
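To recall what Grötzsch's quadrilateral result says in its simplest instance (a standard example, stated here in our own words): let R = [0, a] × [0, 1] and R' = [0, a'] × [0, 1] be two rectangles, regarded as quadrilaterals with their corners as distinguished boundary points. Among all quasiconformal mappings from R to R' sending corners to corresponding corners, the affine map

x + iy → (a'/a) x + iy

has the smallest maximal dilatation, namely K = max(a'/a, a/a'), and it is the unique extremal mapping. Teichmüller's theorem extends this picture to arbitrary surfaces of finite type, with the quadratic differential providing the analogue of the Euclidean coordinates of the rectangle.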
Teichmüller's existence and uniqueness theorem for arbitrary closed surfaces of genus g ≥ 2 has far-reaching applications. For instance, it shows that in the Teichmüller metric, any two points are connected by a unique geodesic, and it gives a proof that the Teichmüller space of a closed surface of genus g is homeomorphic to R^{6g-6}. Note however that Teichmüller, in his paper [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF], uses the dimension count, which he had already obtained in [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] using other methods, to prove his existence theorem.
Ahlfors and Bers tried, for many years, to give simpler and more accessible proofs of Teichmüller's theorem. In his paper [START_REF] Bers | Quasiconformal mappings and Teichmüller's theorem[END_REF], written 18 years after Teichmüller's paper, Bers writes:
Our arrangement of the arguments preserves the logical structures of Teichmüller's proof; the details are carried out differently. More precisely, we work with the most general definition of quasiconformality, we rely on the theory of partial differential equations in some crucial parts of the argument, and we make use of a simple set of moduli for marked Riemann surfaces.
In another paper, [START_REF] Bers | On Teichmüller's proof of Teichmüller's theorem[END_REF], written 16 years after the paper [START_REF] Bers | Quasiconformal mappings and Teichmüller's theorem[END_REF] which we just mentioned, and which is among the very last papers that Bers wrote, Bers declares again that all the arguments given by Teichmüller in his paper [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF] are correct. In the abstract of that paper, he says that he gives a "simplified version of Teichmüller's proof (independent of the theory of Beltrami equations with measurable coefficients) of a proposition underlying his continuity argument for the existence part of his theorem on extremal quasiconformal mappings." He then writes, in the introduction:
In proving a basic continuity assertion (Lemma 1 in § 14C of [START_REF] Bers | Quasiconformal mappings and Teichmüller's theorem[END_REF]) I made use of a property of quasiconformal mappings (stated for the first time in [START_REF] Bers | On a theorem of Mori and the definition of quasiconformality[END_REF] and also in § 4F of [START_REF] Bers | Quasiconformal mappings and Teichmüller's theorem[END_REF]) which belongs to the theory of quasiconformal mappings with bounded measurable Beltrami coefficients (and seems not to have been known to Teichmüller). Some readers concluded that the use of that theory was indispensable for the proof of Teichmüller's theorem. This is not so, and Teichmüller's own argument is correct.
In Volume 2, p. 1 of the edition of his Collected works [START_REF] Ahlfors | Collected works[END_REF], commenting on his paper [START_REF] Ahlfors | On quasiconformal mappings[END_REF], Ahlfors writes: More than a decade had passed since Teichmüller wrote his remarkable paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] on extremal quasiconformal mappings and quadratic differentials. It has become increasingly evident that Teichmüller's ideas would profoundly influence analysis and especially the theory of functions of a complex variable, although nobody at that time could foresee the extent to which this would be true. The foundations of the theory were not commensurate with the loftiness of Teichmüller's vision, and I thought it was time to reexamine the basic concepts. My paper has serious shortcomings, but it has nevertheless been very influential and has led to a resurgence of interest in quasiconformal mappings and Teichmüller theory. Based on this definition the first four chapters are a careful and rather detailed discussion of the basic properties of quasiconformal mappings to the extent that they were known at that time. In particular a complete proof of the uniqueness part of Teichmüller's theorem was included. Like all other known proofs of the uniqueness it was modeled on Teichmüller's own proof, which used uniformization and the length-area method. Where Teichmüller was sketchy I tried to be more precise. In the original paper Teichmüller did not prove the existence part of his theorem, but in a following paper [START_REF] Teichmüller | Bestimmung der extremalen quasikonformen Abbildungen bei geschlossenen orientierten Riemannschen Flächen[END_REF] he gave a proof based on a continuity method. I found his proof rather hard to read and although I did not doubt its validity I thought that a direct variational proof would be preferable. My attempted proof on these lines had a flaw, and even my subsequent correction does not convince me today. In any case my attempt was too complicated and did not deserve to succeed. All this gives an idea of the fate of one of the results of Teichmüller's paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], and of Ahlfors and Bers's involvement in the task of making Teichmüller's results clearer, several decades after they were written.
8.3. The theory of infinitesimal quasiconformal mappings.
In §54 of his paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], Teichmüller introduces the notion of infinitesimally quasiconformal mapping, and he develops this theory thoroughly in the later sections of the paper. Instead of being a mapping, an infinitesimally quasiconformal mapping is a vector field on the surface. Teichmüller develops the relation between these objects and the so-called Beltrami differentials. He works with conformal structures underlying Riemannian metrics, written locally as quadratic forms in the tradition of Gauss, and he studies the effect of quasiconformal mappings on the coefficients of these quadratic forms. In fact, he develops the partial differential equation approach to quasiconformal mappings. In particular, he determines the expression of the metric after an infinitesimal perturbation by an infinitesimally quasiconformal mapping.
The deformation is expressed in terms of a function B which depends on local coordinates, and Teichmüller notes that the expression B dz̄²/|dz|² does not depend on them. Later in the paper, B will be considered as a tangent vector to Teichmüller space, that is, an element of a vector space he denotes by L_σ and which will play an extremely important role. Teichmüller defines a norm ‖·‖ on the space L_σ. The extremal problem is stated in a new form, using the notion of extremal infinitesimal quasiconformal mapping. Teichmüller announces that the solution of this problem, in the case of closed surfaces, will be given in terms of a holomorphic quadratic differential. This is done in the subsequent sections of the paper.
In §68 of the same paper, Teichmüller introduces the notion of locally extremal differentials. He highlights the link between extremal quasiconformal mappings and the orthogonal pair of foliations on a surface that are given by the quadratic differential. He then defines an equivalence relation on the set of forms B dz̄²/|dz|², and a pairing between these forms and quadratic differentials. He associates to each class of forms B dz̄²/|dz|² a collection of 3g - 3 complex numbers (k_µ)_{1≤µ≤τ}. He shows that, conversely, for any complex numbers k_1, k_2, . . . , k_τ, there exists one and only one class of invariant forms B dz̄²/|dz|² having the desired properties. He deduces that the space L_σ is a complex vector space of dimension 3g - 3. He asks whether the norm ‖·‖ comes from a Hermitian structure. This question was answered by Weil, who introduced the so-called Weil-Petersson structure on Teichmüller space. We refer the reader to the paper [START_REF] Alberge | A commentary on Teichmüller's paper Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] which is a commentary on these questions.
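In modern terms (this is our gloss, not Teichmüller's notation), the pairing in question is usually written as

⟨μ, q⟩ = ∫∫_X B(z) φ(z) dx dy (up to a normalization constant),

where μ = B dz̄²/|dz|² is a Beltrami form and q = φ(z) dz² a holomorphic quadratic differential; the integrand does not depend on the choice of local coordinate. Under this pairing, holomorphic quadratic differentials appear as cotangent vectors to Teichmüller space, and Beltrami forms, modulo those that pair trivially with all quadratic differentials, as tangent vectors.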
Teichmüller's theory of infinitesimal quasiconformal mappings is at the basis of the theory of deformation of higher-dimensional complex structures that was developed later on by Kodaira and Spencer.
8.4. Quasiconformal mappings in function theory.
The applications of the theory of quasiconformal mappings to function theory are highlighted by Teichmüller from the beginning of his paper Extremale quasikonforme Abbildungen und quadratische Differentiale [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], which we have mentioned several times. In the introduction, he writes:
As I have done it before, I shall here also examine quasiconformal mappings not exclusively for their own sake, but chiefly because of their connections with notions and questions that interest function theorists. It is true that quasiconformal mappings have been for only very few years systematically applied to purely function theoretical questions, and so this method has managed to gain until now only a limited number of friends. It may nevertheless be mentioned that the contribution I have been able to make some time ago in this journal to the Bieberbach coefficients problem relies in fact on ideas that shall be developed here. In the last sections of the same paper (§167-170), Teichmüller reviews several applications of quasiconformal mappings to function theory. He also says, in relation with these applications, that one should not only consider quasiconformal mappings with bounded dilatation quotient, but also those satisfying some appropriate upper estimates of the dilatation quotient (that is, this quotient is bounded pointwise, by some function defined on the surface). We already commented on this point of view.
The paper Eine Anwendung quasikonformer Abbildungen auf das Typenproblem (1937) [START_REF] Teichmüller | Eine Anwendung quasikonformer Abbildungen auf das Typenproblem[END_REF] is one of the first papers that Teichmüller wrote on function theory, and it concerns the type problem. He proves there that the type of a simply connected Riemann surface is invariant under quasiconformal mappings. This is based on the fact that there is no quasiconformal mapping between the unit disc and the complex plane. It seems that this fact is proved for the first time in the paper [START_REF] Teichmüller | Eine Anwendung quasikonformer Abbildungen auf das Typenproblem[END_REF].
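Here is, in our own words, the standard way this key fact is derived from the quasi-invariance of the modulus of an annulus (a sketch, not a reproduction of Teichmüller's argument). A K-quasiconformal homeomorphism f distorts the modulus of any ring domain A by a bounded factor:

(1/K) mod(A) ≤ mod(f(A)) ≤ K mod(A).

If there were a K-quasiconformal map f from the complex plane onto the unit disc, the round annuli A_R = {1 < |z| < R} would satisfy mod(A_R) = (1/2π) log R → ∞ as R → ∞ (in one common normalization), while each image f(A_R) is a ring domain in the disc separating the fixed compact set f({|z| ≤ 1}) from the boundary circle, so that mod(f(A_R)) is bounded above independently of R; this contradicts the left-hand inequality.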
Teichmüller also introduces in the paper [START_REF] Teichmüller | Eine Anwendung quasikonformer Abbildungen auf das Typenproblem[END_REF] the theory of quasiconformal mappings in the study of line complexes. A line complex is a combinatorial graph on a Riemann surface which is given in the form of an infinite branched covering of the sphere, with a finite number of branching points. Such an object was introduced by Nevanlinna in [START_REF] Nevanlinna | Über die Riemannsche Fläche einer analytischen Funktion, Proceedings[END_REF] and Elfving in [START_REF] Elfving | Uber eine Klasse von Riemannschen Flächen und ihre Uniformisierung[END_REF], and it has the same flavor as an object introduced by Speiser in [START_REF] Speiser | Ueber Riemannsche Flächen[END_REF] and which became known as a Speiser tree. The line complex captures the combinatorics of the branched covering, but it does not determine uniquely the surface. The reader might remember that the description of a Riemann surface as a branched covering of the sphere, provided with the combinatorial information at the branch points, is in the pure tradition of Riemann, in particular, of the work done in his doctoral thesis [START_REF] Riemann | Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse[END_REF]. We briefly recall the definition of a line complex.
We start with a simply connected surface M given as an infinite branched covering of the Riemann sphere. We let a 1 , . . . , a q be the set of branching values (this is a subset of the Riemann sphere). We assume that this set of branching values is finite.
We let L be a simple closed curve on the sphere that passes once through each of the branching values a_1, . . . , a_q. The sphere is then decomposed into two subsets I and A which are homeomorphic to closed discs. The covering surface M is obtained by gluing copies of simply-connected pieces called "half-planes," each one covering either the disc I or the disc A. Thus, a half-plane is a copy of either I or A. The Riemann surface M is assumed to be an infinite branched covering of the sphere, but the number of half-planes that are glued among each other at a given branching point of M may be finite or infinite. The line complex is a graph embedded in M which encodes the overall gluing of half-planes that constitutes this surface M. To define this graph, we choose a point in the interior of each of the two polygons I and A; let these points be P_1 and P_2 respectively. Let s_1, . . . , s_q be the q arcs that are cut on the closed curve L by the points a_1, . . . , a_q. We join P_1 and P_2 by q simple arcs, each of them crossing the interior of one of the arcs s_i (i = 1, . . . , q) transversely and no other point of L. We obtain a graph on the Riemann sphere, which has two vertices and q edges connecting them. The inverse image of this graph by the covering map from M to the sphere is the line complex associated to the covering.
The line complex does not completely determine the surface M, and the problem which Teichmüller addresses in the paper [START_REF] Teichmüller | Eine Anwendung quasikonformer Abbildungen auf das Typenproblem[END_REF] is whether one can determine, from the line complex associated to the surface M, the type of M. He says that this problem is still far from being solved. His main contribution in this paper is the fact that any two such surfaces M and M′ that have the same line complex have the same type. The proof of this result is done in two steps, first showing that two surfaces M and M′ with the same line complex are mapped quasiconformally to each other and then that the complex plane and the unit disc are not quasiconformally equivalent.
The paper Untersuchungen über konforme und quasikonforme Abbildungen (Investigations of conformal and quasiconformal mappings) [START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF] (1938) is also on function theory. Teichmüller gives a criterion for a class of surfaces to be of hyperbolic type, and a negative response to a conjecture by Nevanlinna concerning line complexes. A review of this result is contained in Chapter II of Nevanlinna's book Analytic functions (p. 312), where the latter says that Teichmüller answered in the negative a question he raised, asking "whether a transcendental simply connected surface described as a certain infinite ramified covering is parabolic or hyperbolic according to whether its angle geometry is Euclidean or Lobachevskian." We give a rough idea of the ideas that are involved in this work because they are interesting for low-dimensional geometers and topologists.
Nevanlinna's conjecture is based on the fact that the type of the simply connected surface M (we use the above notation), given as a branched covering of the sphere, is related to the degree of ramification of its line complex. If the ramification is small, the surface will be of parabolic type, and if it is large, it will be of hyperbolic type. The motivation behind this conjecture is that a large degree of ramification will put a large angle structure at the vertices of the graph Γ, which corresponds to negative combinatorial curvature. Nevanlinna writes (p. 309 of [START_REF] Nevanlinna | Analytic functions[END_REF]) that "it is natural to imagine the existence of a critical degree of ramification that separates the more weakly ramified parabolic surfaces from the more strongly ramified hyperbolic surfaces." Indeed, Nevanlinna introduced a notion of combinatorial curvature of a line complex, which he called the excess. His conjecture ([136] p. 312) says that the surface M is parabolic or hyperbolic according as the mean excess of its line complex is zero or negative. Teichmüller, in his paper [START_REF] Teichmüller | Untersuchungen über konforme und quasikonforme Abbildungen[END_REF], disproved the conjecture by exhibiting a simply connected surface S ramified over the sphere which is of hyperbolic type and whose line complex has mean excess zero.
Teichmüller's paper Ungleichungen zwischen den Koeffizienten schlichter Funktionen [184] (1938) is related to the so-called coefficient problem, or Bieberbach conjecture. In this paper, Teichmüller initiated the use of quadratic differentials in the study of the coefficient problem. In the same paper, he obtained a sequence of inequalities which were supposed to prove the Bieberbach conjecture. Abikoff, in [START_REF] Abikoff | Oswald Teichmüller[END_REF] writes the following: "Teichmüller [START_REF] Teichmüller | Ungleichungen zwischen den Koeffizienten schlichter Funktionen[END_REF] gave a heuristic argument for the existence of the quadratic differential in the course of proving a somewhat bizarre collection of inequalities satisfied by the coefficients a_n. He claims that a proper use of these inequalities leads to the solution of Bieberbach's conjecture; however neither he nor any other user of this technique ever came close to succeeding." 8.5. The non-reduced quasiconformal theory. The non-reduced quasiconformal Teichmüller theory is the Teichmüller theory that includes the quasiconformal techniques that are involved in the setting of surfaces with boundary in which every point on the boundary is considered as a distinguished point. In particular, the homotopies which define the equivalence relation between Riemann surfaces (marked by quasiconformal mappings) fix pointwise the boundary. An example of a non-reduced Teichmüller space is the Teichmüller space of the disc, which is in essence the so-called universal Teichmüller space. Such a theory was already conceived by Teichmüller. In fact, in his paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], Teichmüller mentions an even more general theory, namely, he considers arbitrary bordered Riemann surfaces with distinguished arcs on the boundary where every point of such an arc is considered as a distinguished point. The later paper [START_REF] Teichmüller | Karbe, A displacement theorem for quasiconformal mapping[END_REF], which we discuss below, also deals with a problem that belongs to this setting.
In the non-reduced theory, there is a non-reduced Teichmüller space, and a non-reduced moduli space. There are nontrivial problems that arise in the non-reduced theory which have no analogue in the reduced theory. For instance, there was a long-standing conjecture about the general form of extremal mappings in the non-reduced theory. Teichmüller suggested in [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] (p. 185) that the extremal maps of the disk, with some given boundary condition, are Teichmüller mappings. V. Božin, N. Lakić, V. Marković and M. Mateljević showed in [START_REF] Božin | Unique extremality[END_REF] that there exist extremal mappings (and even, uniquely extremal ones) which are not Teichmüller mappings and which are associated to quadratic differentials which may be of finite or infinite norm. These authors obtained conditions under which the extremal mapping is unique [START_REF] Božin | Unique extremality[END_REF] [47] [START_REF] Marković | The unique extremal QC mapping and uniqueness of Hahn-Banach extensions[END_REF]. 8.6. A distance function on Riemann surfaces using quasiconformal mappings. In §27 of the paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF], Teichmüller defines a distance function on a sphere with three distinguished points which is based on the notion of quasiconformal mappings. Precisely, the distance between two points is the logarithm of the least quasiconformal constant of a mapping of the surface that carries one point to another. He shows that this distance coincides with the hyperbolic distance. In §160, he gives an analogous definition in the case of an arbitrary closed surface, and he states that up to a condition, the surface, equipped with this distance, is a Finsler manifold.
Kra rediscovered this distance, and studied it in his paper [START_REF] Kra | On the Nielsen-Thurston-Bers type of some self maps of Riemann surfaces[END_REF] in which he showed that the Bers fibres are not Teichmüller discs. Kra proved that an extremal mapping that realizes the distance between two points is unique if these points are close enough for the hyperbolic metric. He also showed that the distance is equivalent to the hyperbolic metric but not proportional to it unless the surface is the thrice-punctured sphere. The distance is called the Kra distance in [START_REF] Shen | Some notes on Teichmuüller shift mappings and the Teichmüller density[END_REF]. 8.7. Another problem on quasiconformal mappings. In the paper Ein Verschiebungssatz der quasikonformen Abbildung (A displacement theorem of quasiconformal mapping) [START_REF] Teichmüller | Karbe, A displacement theorem for quasiconformal mapping[END_REF], published in 1944, Teichmüller solves a problem on extremal quasiconformal mappings that has many ramifications and which is in the spirit of ideas contained in §27 of [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] which we just mentioned. The question is to find an extremal quasiconformal self-map of the disk which satisfies the following two properties:
• the image of 0 is -x, for some prescribed 0 < x < 1;
• the restriction of the map to the unit circle is the identity. Teichmüller describes an extremal mapping for this problem. He shows that this mapping is unique and that it is a Teichmüller map. It is associated to a quadratic differential which has finite norm and a singularity which is a simple pole.
We recall that a conformal mapping of the disc which fixes pointwise the boundary is necessarily the identity. In the present setting, one relaxes the conformality property, and asks for the best quasiconformal mapping which is the identity on the boundary and which satisfies a certain property for some points in the interior. This is an extremal problem in the setting of the non-reduced theory. The existence of the extremal quasiconformal mapping is reduced to the problem of obtaining an affine map between the surfaces bounded by ellipses. The solution of the problem is again described in terms of a quadratic differential. The dilatation of the extremal mapping is shown to be constant on the surface. In [START_REF] Reich | On the mapping with complex dilatation ke iθ[END_REF], Reich gave a generalized formulation of this problem, replacing, from the beginning, the unit disc by the interior of an ellipse. He solved a more general problem concerning mappings of conics. In the case where the conic is a parabola, this leads to a case where the extremal mapping is non-unique. Ahlfors, in his papers [START_REF] Ahlfors | Bounded analytic functions[END_REF] and [START_REF] Ahlfors | Open Riemann surfaces and extremal problems on compact subregions[END_REF], considered similar extremal problems.
Teichmüller's paper is commented on by Vincent Alberge in [START_REF] Alberge | A commentary on Teichmüller's paper Ein Verschiebungssatz der quasikonformen Abbildung[END_REF].
8.8. A general theory of extremal mappings motivated by the extremal problem for quasiconformal mappings. Teichmüller's paper Über Extremalprobleme der konformen Geometrie (On extremal problems in conformal geometry) [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF], published in 1941, contains ideas that generalize some of the ideas expressed in Teichmüller's paper [START_REF] Teichmüller | Extremale quasikonforme Abbildungen und quadratische Differentiale[END_REF] on the use of quadratic differentials to solve the extremal problem of quasiconformal mappings. His aim is to apply his techniques to general geometric extremal problems. Ahlfors made strong statements on the importance of general extremal problems. He writes in [START_REF] Ahlfors | Development of the theory of conformal mapping and Riemann surfaces through a century[END_REF] p. 500: I have frequently mentioned extremal problems in conformal mapping, and I believe their importance cannot be overestimated. It is evident that extremal mappings must be the cornerstone in any theory which tries to classify conformal mappings according to invariant properties. In [START_REF] Ahlfors | Classical and Contemporary Analysis[END_REF] p. 3, Ahlfors writes:
In complex function theory, as in many other branches of analysis, one of the most powerful classical methods has been to formulate, solve, and analyze extremal problems. This remains the most valuable tool even today, and constitutes a direct link with the classical tradition. The generality of the extremal mappings that Teichmüller alludes to in his paper [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF] is much beyond that of the extremal mappings to which Ahlfors refers.
Teichmüller claims in [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF] that some of his ideas expressed in his previous papers on moduli of Riemann surfaces and on algebra are related to each other, that they may lead to general concepts, and that they are applicable to the coefficient problem for univalent functions. These ideas involve the introduction of a new structure at the distinguished points of a surface, viz., an "order" for the series expansions of functions in the local coordinate charts at these distinguished points (the Riemann surface is characterized, as Riemann showed, by the field of meromorphic functions it carries). The general principle is that if at the distinguished point the extremal problem requires a function that has a fixed value for its first n derivatives, then the quadratic differential should have a pole of order n + 1 at that point. This kind of idea was used by Jenkins in his approach to the Bieberbach conjecture, cf. [START_REF] Jenkins | A general coefficient theorem[END_REF][START_REF] Jenkins | Univalent functions and conformal mapping[END_REF][START_REF] Jenkins | On normalization in the general coefficient theorem[END_REF][START_REF] Jenkins | The method of extremal length[END_REF]; we discuss this fact again below.
The paper [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF] is difficult, and it was probably never read thoroughly by any other mathematician than its author. Teichmüller starts by commenting on the fact that function theory is closely related to topology and algebra. For instance, one is led, in dealing with function-theoretic questions, to prove new generalizations of the Riemann-Roch theorem. The emphasis, he says, should not be on the Riemann surface but rather on the marked points. He makes an analogy with a situation in algebra, that he had considered in [START_REF] Teichmüller | Multiplikation zyklischer Normalringe[END_REF], namely, one is given three objects, A, A_1, A_2; in the geometric case, A is a Riemann surface with distinguished points, A_1 is the support of the Riemann surface, that is, the Riemann surface without distinguished points, and A_2 the set of distinguished points. In the algebraic context, A is a normal (Galois) extension of a field, A_1 is a cyclic field extension and A_2 is the set of generators of the Galois group. At a distinguished point on a Riemann surface, one chooses a local coordinate z and assumes that the other local coordinates z′ are of the form z′ = z + a_{m+1} z^{m+1} + a_{m+2} z^{m+2} + . . . where m is an integer. This puts a restriction on the first m derivatives of a function belonging to the field of meromorphic functions associated to the Riemann surface. The definition is done so that this restriction does not depend on the choice of the local coordinates. Such a distinguished point is said to be of order m.
Teichmüller notes that the results apply to abstract function fields instead of Riemann surfaces, and that estimates on the coefficients of a univalent function may be obtained through a method involving extremal mappings associated with quadratic differentials with some prescribed poles. He makes relations with several classical problems, including the question of finding the Koebe domain of a family of holomorphic functions defined on the disk, that is, the largest domain contained in the image of every function in the family.
It is possible that his main interest in developing this theory arose from the Bieberbach conjecture. 32 We recall that this conjecture concerns the coefficients of holomorphic injective functions defined on the unit disc D = {z ∈ C| |z| < 1} by a Taylor series expansion:
f(z) = \sum_{n=0}^{\infty} a_n z^n,
normalized by a_0 = 0 and a_1 = 1. The conjecture was formulated by Bieberbach in 1916 and proved fully by Louis de Branges in 1984. It says that the coefficients of such a series satisfy the inequalities |a_n| ≤ n for any n ≥ 2. Bieberbach proved in his paper [START_REF] Bieberbach | Über die Koeffizienten derjenigen Potenzreihen, welche eine schlichte Abbildung des Einheitskreises vermitteln[END_REF] the case n = 2. He also showed that equality is attained for the functions of the form K_θ(z) = z/(1 − e^{iθ} z)² with θ ∈ R. Such a function is the composition of the so-called "Koebe function" k(z) = z/(1 − z)² = z + 2z² + 3z³ + . . . with a rotation. In a footnote ([40] p. 946), Bieberbach suggested that the value n = |a_n(K_θ)| might be an upper bound for all the functions satisfying the given assumptions.
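As a quick illustration of the extremal role of the Koebe function, its Taylor coefficients can be checked symbolically; this is only a sketch of the computation and uses sympy purely for the series expansion (the library choice is an assumption, not part of the original text).

```python
import sympy as sp

z = sp.symbols('z')
koebe = z / (1 - z)**2
# Coefficients a_n = n, so |a_n| = n: the bound in Bieberbach's conjecture is attained.
print(sp.series(koebe, z, 0, 6))   # z + 2*z**2 + 3*z**3 + 4*z**4 + 5*z**5 + O(z**6)
```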
Bieberbach's result is related to the so-called "area theorem," which is referred to by Teichmüller in the paper [START_REF] Teichmüller | Über Extremalprobleme der konformen Geometrie[END_REF], a theorem which gives the so-called "Koebe quarter theorem" saying that the image of any univalent function f from the unit disc of C onto a subset of C contains the disc of center f(0) and radius |f'(0)|/4. The important fact for the subject of our present article is that for this geometric form of the Bieberbach conjecture, Teichmüller introduced the techniques of extremal quasiconformal mappings and quadratic differentials. This is the content of the last part of this paper.
Jenkins, in his series of papers [START_REF] Jenkins | A general coefficient theorem[END_REF][START_REF] Jenkins | Univalent functions and conformal mapping[END_REF][START_REF] Jenkins | On normalization in the general coefficient theorem[END_REF][START_REF] Jenkins | The method of extremal length[END_REF] and others, developed an approach to the coefficient theorem using quadratic differentials and based on the works of Teichmüller and Grötzsch. In a 1962 ICM talk [START_REF] Jenkins | On normalization in the general coefficient theorem[END_REF], Jenkins writes:
Teichmüller enunciated the intuitive principle that the solution of a certain type of extremal problem for univalent functions is determined by a quadratic differential for which the following prescriptions hold. If the competing mappings are to have a certain fixed point the quadratic differential will have a simple pole there. If in addition fixed values are required for the first n derivatives of competing functions the quadratic differential will have a pole of order n + 1 at the point. He proved a coefficient result which represents a quite special case of the principle but did not obtain any general result of this type. The General Coefficient Theorem was presented originally as an explicit embodiment of Teichmüller's principle, that is, the competing functions were subjected to the normalizations implied by the above statement.
In [START_REF] Jenkins | On normalization in the general coefficient theorem[END_REF], Jenkins states a theorem which gives a more precise form of a result stated by Teichmüller. This result concerns a Riemann surface of finite type equipped with a quadratic differential, with a decomposition of the surface into subdomains defined by the trajectory structure of this differential. There is a mapping from each of these subdomains onto non-overlapping subdomains of the surface, and these mappings are subject to conditions on preservation of certain coefficients at the poles. He obtains an inequality that involves the coefficients of the quadratic differential at poles of order greater than one and those of the mapping, with a condition for the inequality to be an equality. This condition states that the function must be an isometry for the metric induced on the surface by the quadratic differential. It is followed by a detailed analysis of the equality case. Jenkins also refers to numerous specific applications of such a result which Teichmüller has conjectured, see [START_REF] Jenkins | A general coefficient theorem[END_REF], p. 278-279.
After the work done by Teichmüller, the theory of quasiconformal mappings exploded and became one of the major tools in function theory, in terms of the volume of the work done, of the difficulty and depth of the results, and of its applications in other fields of mathematics. In the survey that we made of the work of Teichmüller on quasiconformal mappings, we tried to convey the fact that several of his ideas may still be further explored, and that reading his work will be beneficial.
Figure 1. Celestial globe with coordinates, Roman, c. 50-40 B.C., Metropolitan Museum of Art, New York (Photo A. Papadopoulos).
610 -546 B.C.), Eratosthenes (3d c. B.C.), 8 Hipparchus of Nicaea (c. 190-c. 120 B.C.), Marinus of Tyre (c. 70-130 A.D.
Figure 2. A map drawing by Leonardo da Vinci.
Figure 3. The Tissot indicatrix for the Oblique Plate Carrée projection; from the Album of map projections [169] p. 28.
Eratosthenes gave a measure of the circumference of the Earth which is correct up to an error of the order of 1%.
This is the parallel which was situated at the middle of the civilized world, from Spain to China.
For the three papers of Euler on geography, we are using the translation by George Heine.
This and the other translations from the French in this paper are ours.
This is the famous Planisphaerium which we mentioned in §2.
In France, a doctoral thesis had two parts, a first thesis and a second thesis. The subject of the second thesis was proposed by the jury, a few weeks before the thesis defense. It was not necessary that this work be original; generally it was an exposition of some topic (different from the first thesis's subject). The second thesis disappeared with the reform of the doctoral studies in France, in the 1990s.
Chebyshev's biographer in[START_REF] Possé | Excerpts of a biography of Chebyshev, contained in his Collected Works[END_REF] reports that Chebyshev thoroughly studied the works of Euler, Lagrange, Gauss, Abel, and other great mathematicians. He also writes that, in general, Chebyshev was not interested in reading the mathematical works of his contemporaries, explaining that spending time on that would prevent him of having original ideas.
This is the paper which contains Beltrami's famous result saying that a Riemannian metric on a surface which can be locally mapped onto the plane in such a way that the geodesics are sent to Euclidean lines has necessarily constant curvature.
In the present article, the translations from the French are mine.
This journal is sometimes referred to under other names, see Grötzsch's references in the bibliography of the present paper.
Like all the Russian similar names, Lavrentieff may also be written with a terminal v instead of ff. We are following Lavrentieff's own transcription of his name in his papers written in French, cf. [101] [102] [103] [105] [106] [107].
Such a definition is also attributed to Pfluger[START_REF] Pfluger | Quasiconforme Abbildungen und logarithmische Kapazität[END_REF], independently. It was noticed later that the class of functions obtained by this definition coincides with a class introduced by Morrey in 1938, consisting of weak homeomorphic solutions f(z) of the Beltrami equation f_z̄ = µ(z) f_z, where µ(z) is a measurable function satisfying sup |µ(z)| < 1.
It is a fact of experience that in the mathematical literature, references and original sources are often uncontrolled by the authors. As soon as a mathematician gives a reference, it is likely that several other authors will repeat the reference without checking the original paper. Thus, an error in an attribution usually propagates, especially if the real author of the result is no more alive, and even if his name is Euler or Poincaré. The author of the present article fell a few times into this trap. For instance, in[START_REF] Papadopoulos | Introduction to Teichmüller theory, old and new I[END_REF], he wrote, repeating what is usually claimed in the literature, that "the bases of the complex analytic theory of Teichmüller space were developed by Ahlfors and Bers" without any mention of Teichmüller, whose paper [185] is entirely dedicated to this subject and lays the bases of this theory, many years before Ahlfors and Bers worked on the subject.
[Teichmüller's footnote] O. Teichmüller, Ungleichungen zwischen den Koeffizienten schlichter Funktionen, these proceedings, 1938. (This is Item[START_REF] Teichmüller | Ungleichungen zwischen den Koeffizienten schlichter Funktionen[END_REF] in the bibliography of the present paper.)
Drasin notes in[START_REF] Drasin | On the Teichmüller-Wittich-Belinski theorem[END_REF] that Teichmüller was the first to realize the use of quasi-conformal mappings in the type problem. This remark is not correct, because as we noted, Lavrentieff did this before Teichmüller.
Acknowledgements. I would like to thank Vincent Alberge and Olivier Guichard who read a preliminary version of this paper, Mikhail M. Lavrentiev, Jr. for a correspondence about his grandfather, and Nikolai Abrosimov for his help in reading documents in Russian about Lavrentiev.
The work was completed during a stay of the author at the Max-Planck-Institut für Mathematik (Bonn). | 222,106 | [
"914069"
] | [
"57856",
"93707"
] |
01466210 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466210/file/978-3-319-24315-3_11_Chapter.pdf | Andreas Mladenow
email: [email protected]
Niina Maarit Novak
email: [email protected]
Christine Strauss
email: [email protected]
Online Ad-Fraud in Search Engine Advertising Campaigns: Prevention, Detection and Damage Limitation
Keywords: Online Advertising, Online Ad-Fraud, Search Engine Marketing, SEM, Search Engine Advertising, SEA, Paid-Search, Security, Availability, Reliability, Online Campaigns, Typology
Search Engine Advertising has grown strongly in recent years and amounted to about USD 60 billion in 2014. Based on real-world data from the online campaigns of 28 companies, we analyse the incident of a hacked campaign account. We describe the damage that occurred, i.e. (1) the follow-up consequences of unauthorized access to the advertiser's account, and (2) the limited availability of short-term online campaigns. This contribution aims at raising awareness of the threat of hacking incidents during online marketing campaigns, and provides suggestions and recommendations for damage prevention, damage detection and damage limitation.
Introduction
From the viewpoint of information economics, people's attention is seen as a scarce commodity [START_REF] Goldhaber | The attention economy and the net[END_REF][START_REF] Mladenow | Kooperative Forschung[END_REF][START_REF] Mladenow | Social Crowd Integration in New Product Development: Crowdsourcing Communities Nourish the Open Innovation Paradigm[END_REF][START_REF] Mladenow | Towards cloud-centric service environments[END_REF]. Through the use of search engine advertising (SEA), companies aim at improving their visibility among the search engine results in order to attract the attention of potential customers [START_REF] Ghose | An empirical analysis of search engine advertising: Sponsored search in electronic markets[END_REF]. It is a global phenomenon that companies tend to invest more and more in sponsored search in electronic markets. It is estimated that in 2014 businesses spent USD 60 billion on Search Engine Marketing (SEM) [6].
From an international perspective, Google was the undisputed global leader in 2014 with a market share of more than 90% in Germany and 65% in the US, which allowed the company to increase its revenues from advertising by 17% compared to the prior year [6].
Quality, costs and time are major drivers of key success factors. For many companies, this is one of the main motivations for running online campaigns [START_REF] Goldfarb | Search engine advertising: Channel substitution when pricing ads to context[END_REF][START_REF] Langville | Google's PageRank and beyond: the science of search engine rankings[END_REF][START_REF] Xiang | Travel queries on cities in the United States: Implications for search engine marketing for tourist destinations[END_REF]. Unlike traditional marketing campaigns, online campaigns provide the advertiser with fast feedback on the campaign's effectiveness. Moreover, online campaigns are highly flexible and allow a custom-tailored and targeted advertising approach. The flexibility and the time savings of online campaigns rely on fast transaction processes, based on tools such as Google AdWords [10] and Bing Ads [11], which make it possible to create online campaigns within a couple of minutes and to evaluate a campaign's success within a few hours by means of predefined and built-in analysis functions [10,11].
However, whereas these advantages seem quite obvious for the success of a marketing campaign, dealing with online campaigns often means facing different kinds of problems, such as trust and security issues. In this regard, advertisers and search engine representatives are confronted with the topic of ad-fraud. The literature has mainly focused on the so-called "click-fraud" problem [START_REF] Kitts | Click Fraud Detection: Adversarial Pattern Recognition over 5 Years at Microsoft[END_REF][START_REF] Immorlica | Click fraud resistant methods for learning click-through rates[END_REF][START_REF] Wilbur | Click fraud[END_REF][START_REF] Haddadi | Fighting online click-fraud using bluff ads[END_REF][START_REF] Liu | DECAF: detecting and characterizing ad fraud in mobile apps[END_REF], while other issues concerning ad-fraud have been neglected.
Against this background, this paper contributes to filling this research gap by providing a typology covering current types of ad-fraud as well as analysing the neglected topic of ad-fraud caused by unauthorized access to campaign accounts, so-called "hacks". What happens to hacked advertising accounts in the context of short-term online campaigns, using the example of Google AdWords? What options for action do campaign providers have in the event of a hacker attack? Are there any prophylactic security controls to be set? The paper is structured as follows: the next section provides theoretical insights into online ad-campaigns using Google AdWords as well as a typology of ad-fraud. Section 3 analyses scenarios based on real-world data. Section 4 provides a discussion, and the final section summarizes major findings and gives a brief outlook on future developments.
2 Online Campaigns and Ad-Fraud Using AdWords Accounts
Online Ad-Campaigns
When battling for customers' attention, businesses seek the best ranking position, and thus visibility, in the results list of search engine queries. In addition to the possibility of improving the organic search results through search engine optimization (SEO), which very often turns out to be highly time-consuming, search engine advertising (SEA) allows companies to address the customer in a rapid and targeted manner. The display of advertisements in search engines follows the keyword principle, which allows advertisers to buy an advertisement position on the first page of the search engine results based on specific keywords. In the case of the big players, including Google, Yahoo and Bing, paid advertisements are grouped together in a commercial advertisement block and are thus visually separated from, and highlighted against, the unpaid (organic) results [cf. 10,11].
Since the beginnings of text-based advertising in search engines, advertisers have aimed to create specific advertisements matching query results, hoping to create a highly effective advertising tool. In contrast to traditional advertising (where costs are incurred when placing an advertisement), in SEA one does not pay for impressions, but for clicks made. This is referred to as cost per click (CPC) or pay per click (PPC) [START_REF] Goldfarb | Search engine advertising: Channel substitution when pricing ads to context[END_REF], [START_REF] Xiang | Travel queries on cities in the United States: Implications for search engine marketing for tourist destinations[END_REF]. Paying for SEA is performance-based advertising [START_REF] Langville | Google's PageRank and beyond: the science of search engine rankings[END_REF]. Thus, the advertiser has to pay only for the click and, as a consequence, only if a user visits the advertised website. Impressions are free of charge. This concept seems to be highly effective, as auction-based text advertisements are by far the largest source of income for search engines, and there are still no signs indicating that this will change in the foreseeable future [START_REF] Langville | Google's PageRank and beyond: the science of search engine rankings[END_REF].
In the case of Google, the AdWords tool supports the development of effective SEA campaigns. When it comes to SEA, the positioning of an advertisement candidate for a specific search term is based on the willingness of the advertiser to pay a certain amount, previously specified by the advertiser. The entire set of advertisement candidates takes part in an auction for the search term entered by the Internet user. The order of display, or the positioning of advertisements, is (for example in the case of Google) determined through an "advertisement-ranking procedure", a weighted second-price auction. This ranking position is further determined by the maximum amount the advertiser is willing to pay for a visit to the homepage and by Google's advertisement quality score (QS), an indicator which is highly influenced by the performance of the advertiser.
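The mechanics of such a ranking rule can be illustrated with a small sketch. The ad rank is modeled here simply as bid × quality score, and the price per click as the minimum bid needed to keep the position (a simplified generalized second-price rule); the figures, field names and the exact pricing formula are illustrative assumptions and do not reproduce Google's actual, more elaborate calculation.

```python
def rank_ads(ads):
    """Toy keyword auction: rank by bid * quality score and charge each ad the
    smallest bid that still beats the next competitor (simplified GSP rule)."""
    ranked = sorted(ads, key=lambda a: a["max_cpc"] * a["qs"], reverse=True)
    results = []
    for pos, ad in enumerate(ranked):
        if pos + 1 < len(ranked):
            nxt = ranked[pos + 1]
            cpc = round(nxt["max_cpc"] * nxt["qs"] / ad["qs"] + 0.01, 2)
        else:
            cpc = 0.01  # assumed minimum price for the last position
        results.append((pos + 1, ad["name"], cpc))
    return results

ads = [{"name": "A", "max_cpc": 2.00, "qs": 5},
       {"name": "B", "max_cpc": 1.20, "qs": 9},
       {"name": "C", "max_cpc": 1.50, "qs": 6}]
print(rank_ads(ads))  # B wins the top slot despite the lowest bid, thanks to its quality score
```

The example also shows why the quality score matters for advertisers: in this toy auction, advertiser B outranks higher bids and pays less per click than its specified maximum.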
Typical advantages of online campaigns include factors such as the elimination of geographic barriers, cost-efficiency, target group precision, measurability of the response, and personalization [START_REF] Langville | Google's PageRank and beyond: the science of search engine rankings[END_REF]. But what are the challenges and limitations of online marketing campaigns for companies? Experts agree that the integration of search engine marketing with more traditional forms of marketing is one of the biggest challenges [17]. Multi-channel marketing often requires coordinated action in terms of content and time. In this regard, the design and creation of online marketing campaigns require specific competences and different strategic and operational approaches compared to conventional marketing. In their effort to reach potential consumers, advertisers face several risks. Besides the possibility of having advertisements blocked by the consumer, e.g. through software-based AdBlocker applications, online advertising accounts may be the target of unauthorized usage through hacking activities, leading to considerable loss of money and/or reputation. In the case of a hacker attack, advertisers are confronted with problems such as loss of integrity and reputation, lack of face-to-face communication, violation of privacy, and lack of trust and security. Moreover, advertiser and client depend on the availability and reliability of the provided online tools.
Types of ad-fraud
Two types of ad-fraud in the context of online campaigns are to be distinguished, i.e. hacking and click-fraud. A third type of ad-fraud is so-called customer-misleading ads, which is beyond the scope of this paper because it does not directly target a certain advertiser. Furthermore, we refer to the most widely used search engine, i.e. Google (cf. figure 1). In the case of click-fraud, the goal of the attacker is to increase the advertiser's costs by artificially increasing the number of clicks per ad. Click-fraud is a method frequently used by both publishers and competing businesses, with the intention of significantly damaging and downgrading a competitor's ranking position and/or improving one's own search engine ranking position.
Ad-fraud by Hacking Online-Campaign Accounts
Whereas click-fraud is difficult for the advertiser to detect and reveal, hacking very often leaves the advertiser struggling to regain access to his/her own account, facing third-party control, account blocking, and limited or no access to the online campaigns. This may disrupt short-term online campaigns, which might have been carefully orchestrated with other marketing activities, e.g. in multi-channel campaigns, or which were launched with the intention of promoting a specific time-dependent event.
Against this background, we analyse an ad-fraud scenario based on real-world data selected from a set of 28 online campaigns that were performed between 2013 and 2015 as part of the Google Online Marketing Challenge [START_REF] Gomc | [END_REF], each during a three-week period. In the following, selected companies and their usage of SEA will be described in detail, based on their need for marketing campaigns with short-term availability. During these campaigns, one account was hacked. Hence, we describe this campaign-account hacking incident and the perceived difficulty of regaining access [START_REF] Strauss | Informatik-Sicherheitsmanagement: eine Herausforderung für die Unternehmensführung[END_REF][START_REF] Strauss | Multiobjective decision support in IT-risk management[END_REF][START_REF] Kiesling | Multi-objective evolutionary optimization of computation-intensive simulations: The case of security control selection[END_REF] for the remaining campaign time in subsection 3.2. The selected cases had to be anonymised to protect the involved parties. The selected examples shall give an insight into the importance of time-critical availability of campaign tools during certain time windows, due to either event-driven necessities (Case3 and Case4) or multi-channel ad-strategies (Case1 and Case2).
Four Exemplary Cases for Short-term Online Campaign
Case1 is a family-owned brand of fur products based in Vienna, Austria, originally established in 1948 in Prague. The company's core business is the manufacturing of fur-based products, including coats, jackets, blankets, pillows, accessories and custom-tailored products. The company positions its brand as a high-quality brand and an expert for fur products. Although the company possesses a license for trading its products online, its online presence and penetration could be described as modest, relying mainly on the company's website created in 2008, a Facebook fan page, Pinterest and Instagram accounts, an email newsletter and a cooperation with the online shop platform case1partner.com. As case1.com relies heavily on the local customer base in its three main cities (Vienna, Budapest and Bratislava), the client database generated through case1partner's online presence offers the company the possibility to acquire global customers.
Case2 was originally founded in August 2010 in order to satisfy the local demand for limited, exclusive and edition-specific sneakers in Vienna. Since April 2012 the business has also run an online shop. Besides sneakers, the company sells a small selection of T-shirts, shoelaces and shoe-cleaning kits. In addition, customers have the possibility to sell and trade their private and limited editions of sneakers that are no longer commercially available via the online shop. This feature represents a marketing tool for case2.at and generates traffic for the online shop. The company's website, customer management and social media presence (including Facebook, Google+, Twitter, Tumblr, Instagram and Pinterest) are maintained by the company itself. Currently the larger share of sales is made over the counter, highlighting the company's need for an online marketing campaign with the goal of increasing online sales. Both case1.com and Case2 operate in a small but highly competitive niche segment of the fashion industry. In these niche segments companies rely strongly on local customer bases and word-of-mouth. A company such as Case1, having a long tradition of operating in the fur segment of the fashion industry, undergoes by definition some seasonality. AdWords campaigns are thus especially used to promote after-season products, as well as to announce sales and new season or product lines. Furthermore, they allow for targeting not only German-speaking customers but also customers e.g. from Czechia, Hungary, Slovakia and Russia.
Case3 is a website reporting on all games of the Austrian amateur soccer league and is available free of charge. More than 100 private editors ("fans") cover the various national leagues. The company has only four employees, who maintain the website and perform online marketing. This highlights that the website is fully dependent on the work of private individuals, or fans, to report the latest news. The website has its own self-administered content management system, which delivers the latest information to readers (e.g., game reports, photos, previews, headlines, etc.). Particular attention should be paid to the so-called "live ticker" covering even the smallest soccer-league games in real time. The live ticker is also available as a mobile app to keep readers up-to-date 24/7. Since 2009, when Case3 was founded, the number of users has continuously increased and large social media communities around the topic of the amateur soccer league with multiple thousands of participants have been formed. Besides the information portal, the company also operates a web shop selling soccer-related sport products. As Case3 is a website which relies heavily on advertising to finance itself, high website traffic is crucial for its success. Thus, targeted AdWords campaigns generating website traffic through the promotion of the website and its special features, such as the live ticker, are of paramount importance. Furthermore, the fact that AdWords campaigns can be used to promote time- and date-specific events during a short time window (e.g., the final game of a sports tournament) emphasizes the importance of perpetual availability of this marketing tool.
Case4 is a famous and exquisite party and bar location offering a relaxed and beach-like atmosphere. At Case4 one can enjoy delicious Israeli dishes together with various cocktails or drinks. The owners cooperate with an advertising agency to promote the location using various media channels. For restaurants it is no longer just about the quality of food, beverages and services, but also about location, image and reputation. Diversification is an important strategy in a saturated market. One way to diversify is to organize special events, as in the case of Case4, which organises special viewing parties, e.g. for the finals of the Eurovision Song Contest. These kinds of events, which are planned on a short-term basis, are best promoted through short-term advertisements. Given that individuals use online search engines in order to find out what is happening in town tonight and where to go, AdWords campaigns are the self-evident choice. Another argument strongly suggesting the use of AdWords campaigns is the fact that these events are weather-dependent open-air events. Thus, the chosen media channel needs to be very flexible in order to stop or change the advertisement in case the event has to be cancelled, substituted or postponed.
Hacking a Campaign-Account
In one of the described cases the campaign account had been subject to unauthorized access, i.e. hacking. Because the verification process for new advertisements and keywords normally takes some time, the account was not in use during a period of about 12 hours. During this time window the account was hacked, and a new advertisement in Russian language and Cyrillic script was implemented and approved by Google. The fast approval by Google was the result of setting the newly created advertisement budget as 'Budget delivery method set to accelerated: show ads as quickly as possible', which in turn may accelerate Google's approval procedure. The fraudulent campaign produced stunning numbers in less than 18 hours: 1,646,824 impressions and 6,190 clicks, generating costs of USD 207.66. Achieving such performance figures in such a short period of time reveals professional skills and malicious intentions as drivers of this activity.
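Simple arithmetic on these reported figures puts the attack in perspective; the short sketch below is only a back-of-the-envelope check using nothing beyond the numbers quoted above.

```python
impressions, clicks, cost = 1_646_824, 6_190, 207.66
ctr = clicks / impressions      # click-through rate of the injected ad
avg_cpc = cost / clicks         # average cost per click charged to the victim
print(f"CTR: {ctr:.2%}, average CPC: ${avg_cpc:.3f}")
# -> CTR: 0.38%, average CPC: $0.034
```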
After reporting the attack to Google, the AdWords account was deactivated and all AdWords campaigns were put on hold. Since the investigation process lasted longer than the remaining campaign-window (i.e., two weeks) the campaigns could not be restarted until Google finished the investigation process and had re-credited the amount lost due to the attack. Creating a new account to run another short-term campaign would have involved considerable amount of additional time, effort and expenses. To put it in a nutshell: the attack resulted in the loss of potential sales revenues as the AdWords campaigns had to be suspended.
In addition to the experience gained by the advertisers, desk research revealed that AdWords seems to be an attractive target for hacking activities, as many related entries on web pages, in blogs and in IT journals can be found on the internet. It is to be assumed that security mechanisms have not provided sufficient protection for the advertisers of the hacked campaign accounts.
Prevention, detection and damage limitation
Since its launch in the year 2000, Google AdWords [START_REF]GOOGLE Company -Our history in depth[END_REF] has developed into Google's main source of revenue but has at the same time become a target for online ad-fraud. From various incident reports and forum entries it can be concluded that in most cases the Gmail account address and password were used for unauthorized access to the campaign account. In the case of Google AdWords most hacks follow a simple but effectual pattern involving (i) brute-force login, (ii) phishing carried out via email spoofing and very similar-looking phishing websites asking the user to sign in, as well as (iii) spy- and malware tools used to acquire the user's account details [START_REF] Balogh | Digital Marketing Testing Ground[END_REF]. Once the fraudsters have gained access to a campaign account they typically duplicate campaigns, add a vast variety of keywords aimed at generating high amounts of clicks, and redirect the target URL to some African airfare company [START_REF]MOZ -Blogs: AdWords Hackers -What a Nightmare[END_REF], a Spanish online shop or some Russian website advertising bracelets [START_REF]GOOGLE -Official AdWords Community[END_REF]. These practices cause fraudulent charges on users' credit cards, quite often in the amount of several thousand US dollars caused by a single fraud campaign over a time frame of less than 24 hours. Furthermore, for people and businesses depending on Google AdWords to generate traffic, for example to their online shops for direct sales purposes, considerable amounts of money are also lost in terms of revenue, reputation and the benefits of historical account performance [START_REF] Web | [END_REF].
What can be done to prevent an attack? Choosing strong passwords is the most obvious and effective measure. Choosing a complex and long password involving not only letters but also numbers and special characters as well as changing the password on a regular basis increases security and protects the account owner from simple attacks. Checking for spyware and using browsers with phishing filters [START_REF]GOOGLE -Official Blog -Insights from Googlers into our products, and technology[END_REF], especially when signing into a Google-account from an unsecured Wireless Fidelity (WIFI) connection are highly recommended. Moreover, for businesses it is advised to have a contingency advertising-plan to be able to counter possible revenue drops caused by fraudulent campaigns [START_REF]GOOGLE Product Forums[END_REF].
How to detect an attack? Checking one's account several times per day is not only good practice to improve the performance and the cost-benefit ratio of the campaign itself; at the same time, frequent checks protect from possible money and reputation losses caused by an attack. Best practice involves closely monitoring and analysing the performance of each individual AdWords campaign several times per day.
How to behave during an attack? What happens after an attack? If the advertiser is able to access his/her own account, the immediate action is to change the password. This will lock out the hackers. Ongoing campaigns should be suspended or deleted. If access to the advertiser's account is denied, the hacker may have changed the password, preventing the account owner from regaining access to his/her own account [START_REF]PPCDISCUSSIONS[END_REF]. In such cases the account owner needs to contact Google AdWords support either per email or via the live help desk and report the incident. Google typically reimburses the fraud victim for the money lost during the attack, following a thorough investigation. Moreover, it should be mentioned that Google AdWords has an inherent fraud-detection mechanism that disables the account [START_REF]Google AdWords Account Hacked: False Ads & False Charges[END_REF] once possible fraud is detected based on unusual activity, preventing both the display of campaigns created by hackers and further money loss for the user [START_REF]Google AdWords Account Hacked: False Ads & False Charges[END_REF].
Based on our analysis of the online campaigns of 28 companies, in the following we suggest several methods, focusing on authentication, notification and budget limitation, aimed at increasing the security of online campaigns and protecting users from ad-fraud.
Authentication. Unauthorized access to online advertising accounts could be impeded by the use of encryption (adopting best practices for security from online banking), digital signatures, or a mandatory two-step verification process for each login (e.g., username, password and a code sent to the user by email or SMS).
Notification. Push messages, SMS or e-mail notifications for defined activities, sent automatically with the intention of informing advertisers about significant changes and performance details of their online campaigns, could help to detect fraudulent activities in a timely manner and could thus prevent losses.
Budget limitation. Pre-setting a fixed daily maximum budget per campaign or a total maximum budget for the whole account prevents considerable financial losses.
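The notification and budget-limitation measures can be combined into a simple automated check. The sketch below is illustrative only: the statistics dictionary, thresholds and alert texts are hypothetical and are not taken from any real advertising API; the idea is merely to flag a campaign whose spend or click pattern deviates from what the advertiser normally sees.

```python
def check_campaign(stats, daily_budget_cap, expected_cpc_range=(0.05, 2.00)):
    """Flag suspicious campaign behaviour based on spend and average cost per click.
    'stats' is assumed to hold the day's impressions, clicks and cost."""
    alerts = []
    if stats["cost"] > daily_budget_cap:
        alerts.append("daily budget cap exceeded")
    if stats["clicks"] > 0:
        avg_cpc = stats["cost"] / stats["clicks"]
        low, high = expected_cpc_range
        if not low <= avg_cpc <= high:
            alerts.append(f"average CPC {avg_cpc:.3f} outside the expected range")
    return alerts

# With an assumed 50-dollar daily cap, the hacked campaign from subsection 3.2 would trigger both alerts:
print(check_campaign({"impressions": 1_646_824, "clicks": 6_190, "cost": 207.66},
                     daily_budget_cap=50.0))
```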
Conclusion and Outlook
Paid search marketing through Google AdWords has now existed for more than 15 years and has become established as an essential and indispensable advertising channel. In every industry and every sector, local, regional and global advertisements are placed in search engines, with the majority of advertisements placed in Google. AdWords has become a widespread and widely accepted tool characterized by high reliability and flexibility.
The analysis provides a typology of occurring ad-fraud and points to potential flaws in account handling that might be exploited for hacker attacks. When it comes to the integration of online marketing activities, temporarily blocked accounts can cause financial losses. Ultimately, understanding search engine developments, which depend on the dominant search-engine market providers, remains a challenge for companies.
In the future, the risk of ad-fraud is likely to remain a major challenge for advertising companies, search engines and ad campaigners. Whereas the ad-centers of Google, Yahoo and Microsoft have developed large data mining systems to score traffic quality, some types of ad-fraud are still to be resolved. More research needs to address all types of occurring ad-fraud. In the context of online marketing campaigns this includes not only click-fraud, but also problems such as the hacking of ad-campaign accounts.
Fig. 1. Click-fraud and hacking | 26,943 | [
"1001260",
"1001261",
"1001262"
] | [
"263350",
"19098",
"300742"
] |
01466211 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466211/file/978-3-319-24315-3_12_Chapter.pdf | Guido David
email: [email protected]
Markov Chain Solution to the 3-Tower Problem
Keywords: Markov Chains, Graph Theory, Discrete Mathematics, 3-dimensional gambler's ruin, Applied Probability, Tower of Hanoi
The 3-tower problem is a 3-player gambler's ruin model where two players are involved in a zero information, even-money bet during each round. The probabilities that each player accumulates all the money have a trivial solution. However, the probability of each player getting ruined first is an open problem. In this paper, the 3-tower problem recursions are modeled as a directed multigraph with loops, which is used to construct a Markov chain. The solution leads to exact values, and results show that, unlike in other models where the first ruin probabilities depend only on the proportion of chips of each player, the probabilities obtained by this model depend on the number of chips each player holds.
Introduction
The gambler's ruin problem for two players is solved using recursion when the bets are even money. The solution gives the expected time until one player is ruined, and the probabilities that each player acquires all the money. The multiplayer problem presents more difficulties. Consider a three-player game and let the amounts of money (or chips) of the players be S_1, S_2, S_3. Let S = S_1 + S_2 + S_3. In each time step, a game is played, with winners and losers. Suppose that each game involves exactly two players, each one having a 50% chance of winning the bet. The model is as follows: a map f_{ij} is chosen randomly with probability 1/6, where i, j = 1, 2, 3, i ≠ j, and f_{ij} : (S_i, S_j) → (S_i + b, S_j − b).
If the bet is fixed at b = 1 and the stacks are positive integers, the resulting model is the 3-tower game. The 3-tower model is loosely based on the tower of Hanoi problem, with no constraints on the order by which the chips are stacked. One application of this model is tournaments that involve the accumulation of chips or wealth, for example, poker tournaments. In such cases, a partial information game may be modeled as a zero information game to determine players' equities independently of skill. Other forms of the gambler's ruin for three players are the symmetric problem and the C-centric game [START_REF] Finch | Gambler's Ruin[END_REF]. The time until a player is ruined in the 3-tower problem has been solved [START_REF] Bruss | On the N -Tower-Problem and Related Problems[END_REF][2][10] [START_REF] Swan | A Matrix-Analytic Approach to the n-Player Ruin Problem[END_REF] and is given by
T = \frac{3 S_1 S_2 S_3}{S}. (2)
The probability that each player is ruined first is an open problem [START_REF] Finch | Gambler's Ruin[END_REF]. Ferguson used Brownian motion to numerically approximate the probability of ruin [START_REF] Ferguson | Gambler's Ruin in Three Dimensions[END_REF], and Kim improved on this by using numerical solutions to Markov processes [START_REF] Kim | Gambler's Ruin in Many Dimensions and Optimal Strategy in Repeated Multi-Player Games with Application to Poker[END_REF]. An alternative method for calculating placing probabilities is the Independent Chip Model (ICM), credited to Malmuth-Harville [START_REF] Roberts | Ben Roberts ICM Model[END_REF], although no proofs of the method are found. Let n be the number of players, X i be the random variable denoting the placing of Player i, and S i be the current stack size of Player i. Then the probability of player i placing 1st is
P(X_i = 1) = S_i / S (3)
where S = S 1 +S 2 +. . .+S n . To obtain the probability of placing 2nd, conditional probabilities are used for each opponent finishing 1st, and from the remaining players the probability of placing 2nd (which is essentially 1st among the remaining n -1 players) is the proportion of player i's stack to the total stack not including the stack of the conditional 1st place finisher. Thus,
P(X_i = 2) = \sum_{j \neq i} P(X_j = 1) \, P(X_i = 2 \mid S_j = 0) = \sum_{j \neq i} \frac{S_j}{S} \cdot \frac{S_i}{S - S_j}. (4)
Continuing, the probability of Player i finishing 3rd is
P(X_i = 3) = \sum_{j \neq i} \sum_{k \neq i, j} \frac{S_j}{S} \cdot \frac{S_k}{S - S_j} \cdot \frac{S_i}{S - S_j - S_k}. (5)
It should be emphasized that ICM does not make any assumptions about the bet amount, hence is slightly different from the 3-tower problem.
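For concreteness, the Malmuth-Harville probabilities in (3)-(5) can be computed by summing over all finishing orders. The following Python sketch is our own illustration (function and variable names are not from the paper); for stacks 3 : 2 : 1 its third-place column reproduces the ICM row of Table 1.

from itertools import permutations

def icm_place_probs(stacks):
    # Malmuth-Harville ICM: probs[i][k] = P(player i finishes in place k + 1).
    n, total = len(stacks), sum(stacks)
    probs = [[0.0] * n for _ in range(n)]
    for order in permutations(range(n)):       # order[0] finishes 1st, order[1] 2nd, ...
        p, remaining = 1.0, total
        for player in order:                   # chain of conditional 1st-place probabilities
            p *= stacks[player] / remaining
            remaining -= stacks[player]
        for place, player in enumerate(order):
            probs[player][place] += p
    return probs

# Third-place probabilities for stacks (3, 2, 1): approximately 0.1500, 0.2667, 0.5833.
print([round(row[2], 4) for row in icm_place_probs([3, 2, 1])])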
In the 3-tower model, the probability of finishing 1st is easily solved by recursion. The result is exactly the same as [START_REF] Ferguson | Gambler's Ruin in Three Dimensions[END_REF]. A method of calculating 3rd place probabilities (and hence, 2nd place probabilities) for the 3-tower problem is presented here using Markov chains constructed using directed multigraphs with loops.
Methods
Consider a 3-player model where the players are involved in even money bets. We define a state as an ordered triple (x, y, z) with x ≥ y ≥ z ≥ 0. Because the games in each round are all fair and random, the probabilities of each player becoming ruined first depend only on the amount of money each player has in a given state. We also define chip position (or simply, position) to be the amount of money (or number of chips) a player has in a given state. Let us also define the function p(u|v, w) to be the probability that a player with a given chip position u in the state (u, v, w) will finish 3rd, or become ruined first. If the state is understood from context, we will simply write this as p(u). If v and w are positions in the same state such that v = w, then it is assumed that p(v) = p(w).
A terminal state is one wherein the probabilities of placing 3rd are known. There are two types of terminal states.
1. If one of the three positions is zero, i.e. z = 0.
2. If all three positions are equal, i.e. x = y = z.
Note that for the terminal state (x, y, 0), then p(x) = p(y) = 0 and p(0) = 1. For the terminal state (x, x, x), p(x) = 1/3 using the previous assumption.
A state (x, y, z) is adjacent to a state (u, v, w) if the former state can move to the latter state in one round. A state that is adjacent to a terminal state is called a near-terminal state.
Lemma 1. A state (x, y, z) with x ≥ y ≥ z that satisfies one of the following is a near-terminal state:
(i) z = 1, or
(ii) x = y + 1 and z = y - 1.
In constructing the multigraph, all possible states of S are represented by nodes. The transitions between adjacent states are given by directed edges. The states are arranged such that all states (x, y, z) with a fixed value of z are aligned vertically, with the highest value of x in the topmost position, in decreasing order going down (i.e. from North to South), while y is increasing at the same time. All states (x, y, z) with fixed x are aligned horizontally, with y in decreasing order from left to right (i.e. West to East) and z increasing at the same time. Consequently, all states with fixed y are aligned diagonally, with x decreasing and z increasing as the states move from Northwest to Southeast. An example of the resulting multigraph is given in Figures 1 and 2. From the construction, it is clear that for a given node, its adjacent nodes are the ones located to its immediate top, bottom, left, right, top left and bottom right positions (i.e. North, South, East, West, Northwest and Southeast). A state may be adjacent to itself if the following holds: Lemma 2. A state (x, y, z) is adjacent to itself if y = x -1 and/or z = y -1.
In the "and" case in Lemma 2, the state is doubly adjacent to itself. The state is also doubly adjacent to itself for states of the form (x, x, x-1) and (x, x-1, x-1). This and the following Lemma can be proved using the definition of adjacent nodes and (1) with b = 1. Lemma 3. A state of the form (x, y, y) or (x, x, z) is doubly adjacent to its adjacent nodes.
There is always at least one edge from a non-terminal state to its adjacent state. A state that is doubly adjacent to another state has two edges going to that other state. If a state A is doubly adjacent to a non-terminal state B, it does not follow that B is doubly adjacent to A.
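Since every round applies one of the six equally likely transfers f_ij with b = 1, the states adjacent to a given node can be enumerated directly. The short Python sketch below is our own illustration (not code from the paper); for (2, 1, 1) it returns (3, 1, 0), (2, 1, 1) and (2, 2, 0) twice each, in line with Lemmas 2 and 3.

def successors(state):
    # Outcomes of one round from (x, y, z): for each ordered pair (i, j) with i != j,
    # player i wins one chip from player j, each pair having probability 1/6.
    out = []
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            s = list(state)
            s[i] += 1
            s[j] -= 1
            out.append(tuple(sorted(s, reverse=True)))   # canonical form x >= y >= z
    return out

print(successors((2, 1, 1)))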
Results and Discussion
Based on the multigraph, we construct the Markov chain. Let τ be a relation that maps a position from one state to a position in another non-terminal state. We can think of τ as directed edges that connect specific positions within states to other positions in other states (or possibly within the same state). Let the function φ denote the ruin probability of the position in one move. Note that the unique non-terminal positions are exactly the transient states in the Markov chain, while φ gives the probability of absorption to the first-ruined state. The Theorem below follows from the previous Lemmas.
Theorem 1. Let the state corresponding to a non-terminal vertex be denoted by (x, y, z) where x ≥ y ≥ z such that z ≥ 1. Let S = x + y + z and let (u, v, w) be an adjacent non-terminal vertex.
(i) If z = y - 1, then τ(x) → u, τ(y) → w, τ(z) → v.
(ii) If y = x - 1, then τ(x) → v, τ(y) → u, τ(z) → w.
(iii) If z = y, then τ(x) → u twice, τ(y) → v and τ(y) → w.
(iv) If y = x, then τ(z) → w twice, τ(x) → u and τ(x) → v.
In all other cases, the transitions are τ(x) → u, τ(y) → v and τ(z) → w.
Remark 1. If the state is self-adjacent, then the corresponding transition of positions within the same state in Theorem 1(i),(ii) are given by
(i) τ(x) → x, τ(y) → z, τ(z) → y
(ii) τ(x) → y, τ(y) → x, τ(z) → z.
Each mapping of positions by τ gives a transition probability of 1/6, except when the mapping occurs twice, as in the 3rd and 4th cases in the Theorem, then the transition probability is 2/6. These values are then used to generate the transient matrix Q in the Markov chain. For the absorption probabilities, we use the following: Theorem 2. Given a position u, then φ(u) = 1/3 when u = 1. If u > 1, then φ(u) = 0, the only exception is for the state (u + 1, u, u -1), wherein φ(u + 1) = φ(u) = φ(u -1) = 1/18. If S = 6, then in the state (3, 2, 1), φ(1) = 1/3 + 1/18, by combining the two cases in Theorem 2. The values of φ on the various positions are then used to construct the vector r. Finally, we solve the system
(I -Q)p = r (6)
where the vector p gives the probabilities of first ruin for each position and I is the identity matrix with the same dimension as Q. It is easy to show the transient matrix I -Q is invertible [START_REF] Grinstead | Introduction to Probability[END_REF].
Example 1. For S = 4, the only non-terminal state is (2, 1, 1), with unique positions 2 and 1; the transient matrix and absorption vector are

Q = \frac{1}{6}\begin{pmatrix} 0 & 2 \\ 1 & 1 \end{pmatrix}, \qquad r = \begin{pmatrix} 0 \\ 1/3 \end{pmatrix}. (7)
Substituting (7) in (6), we obtain the solution p = (1/7, 3/7)^T, i.e. the probabilities of being ruined first are p(2) = 1/7, p(1) = 3/7. The 2nd place probabilities are thus 10/28 and 9/28 for positions 2 and 1, respectively. In comparison, [START_REF] Ferguson | Gambler's Ruin in Three Dimensions[END_REF] obtained (0.35790, 0.32105) using a Brownian motion model, which are good approximations of the true values obtained by our method. The computed ICM values are (1/3, 1/3), which are slightly different from the 3-tower values.

Example 2. Figure 2 illustrates the various states for S = 9. In this example, there are 6 non-terminal states and a total of 15 unique positions. For labeling purposes, letters are affixed to the position in cases when there are two or more unique states with the same position value. Starting from the top left non-terminal state moving downwards (or South), the non-terminal states are (7, 1a, 1a), (6, 2a, 1b), (5a, 3a, 1c), (4a, 4a, 1d), (5b, 2b, 2b), and (4b, 3b, 2c). The unique positions are then arranged starting from the x positions in each of the states above, then the y positions, and then the z positions, skipping the non-unique positions as needed. Thus our indices correspond to 7, 6, 5a, 4a, 5b, 4b, 1a, 2a, 3a, 1d, 2b, 3b, 1b, 1c and 2c, respectively. For example, index i = 1 of Q, r and p corresponds to values for position 7 from the state (7, 1, 1), while index i = 15 corresponds to the state-position 2c from the state (4, 3, 2). In constructing Q, note that by Theorem 1(iii), τ(7) → 6 twice, and using the 1 and 2-index for positions 7 and 6, respectively, we have Q_{12} = 2/6. Because all other transitions of 7 are towards terminal states, Q_{1j} = 0 for j ≠ 2. From Theorem 2, r_1 = φ(7) = 0 because 7 is not a near-terminal position. For position 6, we have τ(6) → {7, 6, 5a, 5b} using Theorem 1(i), hence
Q_{21} = Q_{22} = Q_{23} = Q_{25} = 1/6.
For position 5a in the state (5a, 3a, 1c), the 'regular' case of Theorem 1 applies, hence τ (5a) → {6, 5b, 4a, 4b}. The rest of the entries are computed similarly, using Theorems 1 and 2. The transient matrix Q and absorption vector r in ( 6) are obtained as follows:
Q = \frac{1}{6} M, where M is the 15 × 15 matrix whose entries, listed row by row, are
0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 1 0 1 0 0 0 0 0 2 2 0 0 2 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 1 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 1 0 1 0 0 0 1 0 1 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 1 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 2 0 2 0 0 0 0 0 0 0 0 0 1 1 0 1 1 1
(8)
and r = (0 0 0 0 0 1/18 1/3 0 0 1/18 0 1/3 1/3 1/3 1/18)^T. (9)
The solution of ( 6) using ( 8) and ( 9) then produces the 3rd place probabilities of the various positions. For example, given the state (5, 3, 1), the probabilities of placing 3rd are p(5) = 0.075227, p(3) = 0.196310, and p(1) = 0.728463. The results are similar to the values obtained using numerical solutions to Markov processes, which were given to 4 decimal places [START_REF] Kim | Gambler's Ruin in Many Dimensions and Optimal Strategy in Repeated Multi-Player Games with Application to Poker[END_REF]. The corresponding ICM values are (0.0972, 0.2083, 0.6944).
Unlike ICM or other methods, the actual number of chips of each player, rather than just the proportion of the chips to those of the other players, affects that player's probability of first ruin. The accuracy of the method allows very minute differences in probabilities to be observed. As an illustration, the 3rd place probabilities for states in multiples of (3, 2, 1) are shown in Table 1. As x, y and z increase, the ruin probabilities approach a limiting value. The ICM values are shown for comparison. The time until first ruin can also be calculated from the model. For this purpose, we adjust the transient matrix Q when S mod 3 = 0 by regarding the state (S/3, S/3, S/3) as non-terminal. The time until ruin is calculated from the row sum of N = (I -Q) -1 . In Example 1, for the state (2, 1, 1), Q given by ( 7) and I the 2 × 2 identity, we obtain N = 1 14 15 6 3 18 [START_REF] Stirzaker | Tower Problems and Martingales[END_REF] and the row sum for both positions is 3/2, exactly the same value using the time until ruin formula (2).
Conclusion
In this paper, a method of solving players' first-ruin probabilities in the 3-tower problem or gambler's ruin with three players was presented. The assumptions were that each player started with some nonzero number of chips and during each round, two players were randomly selected in an even-money bet with a randomly chosen winner winning one chip from the loser. One application of this is computing equities in a partial information game (e.g. poker tournaments) modeled as a random game. A multigraph of the various states given S total chips was constructed. The method specified how to obtain the state transitions and absorption probabilities, as given by Theorems 1 and 2. The resulting linear system of the Markov chain was then used to solve the 3rd place (and thus 2nd place) probabilities of any state in S. Although a closed-form formula was not derived for the probabilities, the method produces exact solutions instead of numerical approximations. This made it possible to show subtle differences in probabilities of first ruin as S was increased, while preserving the relative chip ratios. In contrast, other methods, such as ICM or Brownian models, only depend on the proportion of chips each player has, thus are independent of any scaling factor. The calculated results were similar to previous numerical approximations using Brownian motion, but differed from ICM by up to 15%, although as mentioned, ICM and the 3-tower problem do not use the same assumptions.
The 3-tower model may be extended to one wherein bet sizes are not fixed. The multigraph form of such a model would be much more complex because the number of edges and adjacent nodes is not limited to six. The model may also be applied to other forms of the three-player gambler's ruin such as player-centric and symmetric games. An extension to an N -tower problem may be done but the increase in complexity of the graph is expected to be significant.
Fig. 1. Multigraph with loops for S = 4 (Example 1).
Fig. 2. Directed multigraph (with loops) of state transitions for S = 9.
Table 1. Probabilities of placing 3rd for given states (x, y, z), where x : y : z are in the ratio 3 : 2 : 1. The ICM values are shown for comparison.
(x, y, z) p(x) p(y) p(z)
(3, 2, 1) 0.12690355 0.25888325 0.61421320
(6, 4, 2) 0.12672895 0.25857077 0.61470028
(12, 8, 4) 0.12671616 0.25854223 0.61474162
(24, 16, 8) 0.12671533 0.25854029 0.61474439
(48, 32, 16) 0.12671528 0.25854016 0.61474456
(96, 64, 32) 0.12671527 0.25854016 0.61474457
ICM 0.1500 0.2667 0.5833
Acknowledgments. This project was supported by the National Sciences Research Institute of the University of the Philippines, Project Reference No. 2008.149. A special thanks to Ramon Marfil of the Institute of Mathematics, University of the Philippines, for his assistance in the project. | 17,647 | [
"1001263"
] | [
"300802"
] |
01466212 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466212/file/978-3-319-24315-3_13_Chapter.pdf | Yusuke Watanabe
Mayumi Takaya
Akihiro Yamamura
Fitness Function in ABC Algorithm for Uncapacitated Facility Location Problem
Keywords: Swarm Intelligence, Artificial Bee Colony Algorithm, Uncapacitated Facility Location Problem, Fitness Function
We study the fitness function of the artificial bee colony algorithm applying to solve the uncapacitated facility location problem. Our hypothesis is that the fitness function in the artificial bee colony algorithm is not necessarily suitable for specific optimization problems. We carry out experiments to examine several fitness functions for the artificial bee colony algorithm to solve the uncapacitated facility location problem and show the conventional fitness function is not necessarily suitable.
Introduction
Efficient supply chain management has led to increased profit, increased market share, reduced operating cost, and improved customer satisfaction for many businesses [START_REF] Simchi-Levi | Designing and Managing the Supply Chain. Concepts, Strategies and Case Studies[END_REF]. For this purpose, it is becoming more and more important in information and communications technologies to solve optimization problems such as the uncapacitated facility location problem (UFLP), a combinatorial optimization problem. The objective of the UFLP is to minimize the cost of transport to each customer together with the cost associated with opening facilities, when the set of potential facility locations and the set of customers are given; the UFLP is known to be NP-hard (see [START_REF] Korte | Combinatorial Optimization: Theory and Algorithms[END_REF]). In the context of performing economic activities efficiently, various objects have been considered as facilities, such as manufacturing plants, storage facilities, warehouses, libraries, fire stations, hospitals or wireless service stations. Several techniques have been applied to the UFLP, such as swarm intelligence and other meta-heuristic algorithms: particle swarm optimization (PSO) [START_REF] Guner | A Discrete Particle Swarm Optimization Algorithm for Uncapacitated Facility Location Problem[END_REF], ant colony optimization (ACO) [START_REF] Kole | An Ant Colony Optimization Algorithm for Uncapacitated Facility Location Problem[END_REF], artificial bee colony algorithm (ABC) [START_REF] Tuncbilek | Artificial Bee Colony Optimization Algorithm for Uncapacitated Facility Location Problems[END_REF], and genetic algorithm [START_REF] Kratica | Solving The Simple Plant Location Problem By Genetic Algorithm[END_REF].
The method of simulating swarm intelligence is inspired by the movement and foraging behavior in herd animals [START_REF] Yang | Swarm Intelligence and Bio-Inspired Computation -Theory and Applications[END_REF]. As typical examples, particle swarm optimization was inspired by the behavior of groups of birds and fish, ant colony optimization was inspired by the foraging behavior of ants. These have been applied to a wide range of computational problems like data mining and image processing [START_REF] Abraham | Swarm Intelligence in Data Mining[END_REF] in addition to numerous optimization problems like the traveling salesman problem and the flow shop scheduling problem.
Inspired by the foraging behavior of honey bee swarms, D. Karaboga [START_REF] Karaboga | An Idea Based on Honey Bee Swarm for Numerical Optimization[END_REF] proposed the artificial bee colony (ABC) algorithm, an optimization algorithm developed for function optimization. In the ABC algorithm model, a honey bee swarm consists of three types of bees which carry out different tasks. The first group of bees are called employed bees. They have a potential solution, evaluate its fitness value and keep the better solutions in their memory. The second group of bees are called the onlooker bees. They choose potential solutions on the basis of information provided by the employed bees. If a potential solution has a high fitness value, many onlooker bees choose the solution. The third group of bees are called the scout bees. They explore new potential solutions randomly. An employed bee whose potential solution has been abandoned becomes a scout bee. The ABC algorithm is operated in an iterative manner by these three groups of artificial honey bees to search for a better solution. The ABC algorithm has been applied to many real-world problems; however, its mechanism has not been understood in detail. In this paper we examine the fitness function, which is an important piece of the ABC algorithm, by implementing several variants and comparing them with the original fitness function.
The paper is organized as follows. In Section 2 we review the uncapacitated facility location problem and application of the ABC algorithm to the UFLP. In Section 3 we show the results of our experiments and we examine the fitness function of the ABC algorithm applied to the UFLP. In the last section, we summarize our findings.
Artificial Bee Colony Algorithm for the Uncapacitated Facility Location Problem
Uncapacitated Facility Location Problem
In the UFLP, a finite set F of facilities can be opened and a finite set D of customers are present. Each facility i in F has a fixed opening cost f i ∈ R + , where R + stands for the set of positive real numbers. For each pair of i in F and j ∈ D, a transport cost c ij ∈ R + for serving the customer j from facility i is specified. It is possible to open any number of facilities and to assign each user to any of the facilities opened that will serve for the user. The task of the UFLP is to minimize the sum of the opening costs of the facilities and the transport costs for each customer, that is, to minimize
\sum_{i \in X} f_i + \sum_{j \in D} c_{\sigma(j) j}
where X ⊆ F is a subset of facilities to be opened and σ : D → X is an assignment of each customer to an appropriate facility.
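As an illustration of this objective (not part of the original paper), the following Python sketch evaluates the cost of a chosen set X of open facilities and, for very small instances only, finds the optimum by enumerating all subsets; the function names and data layout are our own assumptions.

from itertools import combinations

def uflp_cost(open_set, fixed, trans):
    # Opening costs of the facilities in open_set plus, for every customer j,
    # the cheapest transport cost trans[i][j] over the opened facilities i.
    customers = range(len(trans[0]))
    return (sum(fixed[i] for i in open_set)
            + sum(min(trans[i][j] for i in open_set) for j in customers))

def uflp_brute_force(fixed, trans):
    # Exact optimum by exhaustive search; feasible only for a handful of facilities.
    n = len(fixed)
    best = None
    for k in range(1, n + 1):
        for open_set in combinations(range(n), k):
            cost = uflp_cost(open_set, fixed, trans)
            if best is None or cost < best[0]:
                best = (cost, open_set)
    return best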
Artificial Bee Colony Algorithm
In the UFLP with n potential facilities, that is |F| = n, each employed bee k is given an open facility vector Y_k = [y_{k1}, y_{k2}, y_{k3}, ..., y_{kn}] representing a potential solution, where y_{ki} = 1 if the i-th facility is open and y_{ki} = 0 otherwise. For the open facility vector Y_k of an employed bee k, X is defined to be the set of facilities i in F such that y_{ki} = 1. Then the allocation σ : D → X for Y_k from each customer in D to an appropriate facility in X is uniquely determined by the transport costs c_{ij} as follows. For each j in D, σ(j) is defined to be the i ∈ X such that c_{ij} is the smallest among {c_{hj} | h ∈ X}. If there are several such i, we choose one of them randomly. The total cost x_k for each employed bee k is computed as the sum of the fixed costs of opening the facilities determined by the open facility vector Y_k and the transport cost of each customer to the opened facilities. Let us see a concrete example shown in Table 1. In this example, 5 facilities A, B, C, D, E and 4 customers a, b, c, d are given and the open facility vector Y_k for an employed bee k is given as [1, 0, 0, 1, 1]. The total cost x_k for the employed bee k is calculated as follows:
x_k = (fixed costs of the open facilities) + (for each customer, the minimum cost of supply from an open facility) = (10 + 7 + 3) + min(1, 10, 12) + min(9, 4, 3) + min(8, 5, 7) + min(15, 10, 13) = 20 + (1 + 3 + 5 + 10) = 20 + 19 = 39.
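The same computation in code, using the data of Table 1 (the dictionary layout is our own):

fixed = {"A": 10, "B": 8, "C": 4, "D": 7, "E": 3}
trans = {  # transport cost from facility to customer
    "A": {"a": 1,  "b": 9, "c": 8,  "d": 15},
    "B": {"a": 4,  "b": 8, "c": 12, "d": 10},
    "C": {"a": 3,  "b": 7, "c": 6,  "d": 6},
    "D": {"a": 10, "b": 4, "c": 5,  "d": 10},
    "E": {"a": 12, "b": 3, "c": 7,  "d": 13},
}
opened = [f for f, bit in zip("ABCDE", [1, 0, 0, 1, 1]) if bit]   # Y_k = [1, 0, 0, 1, 1]
x_k = sum(fixed[f] for f in opened) + sum(
    min(trans[f][c] for f in opened) for c in "abcd")
print(x_k)   # -> 39, as in the worked example above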
The fitness value f itness(x k ) of the potential solution of the employed bee k is calculated from its total cost x k by the formula below that is given in [START_REF] Karaboga | An Idea Based on Honey Bee Swarm for Numerical Optimization[END_REF].
fitness(x_k) = \begin{cases} \frac{1}{1 + x_k} & \text{if } x_k \geq 0 \\ 1 + \text{abs}(x_k) & \text{if } x_k < 0 \end{cases} (1)
where abs(x k ) stands for the absolute value of x k . We remark that a larger fitness value is better.
Step 1: Initialization. First, each entry in the open facility vector Y_k of each employed bee k is initialized to either 0 or 1 at random. We compute the fitness value fitness(x_k) by the formula (1) for each employed bee k in the population B_1 of employed bees. We remark that we do not use the latter part 1 + abs(x_k) of the formula (1) because x_k is always a positive number in the case of the UFLP.

Step 2: Employed Bee Phase. In each iteration, each employed bee k obtains a new potential solution candidate by manipulating the current potential solution. The open facility vector for k is updated by randomly selecting one facility i in F and switching the bit corresponding to the selected facility (see Fig. 1). If the fitness value of the new open facility vector for k is larger than the fitness value of the previous vector for k, we update the open facility vector for k.

Step 3: Onlooker Bee Phase. To update onlooker bees, we select stochastically one of the potential solutions, that is, one of the open facility vectors of the employed bees, according to the probability p_k determined by the following formula:
p_k = \frac{fitness(x_k)}{\sum_{l=1}^{B_1} fitness(x_l)} (2)
Note that the selection follows the probability above, so employed bees with larger fitness values have a bigger chance of being chosen. In a process similar to Step 2, if the open facility vector of the selected employed bee has a larger fitness value than the onlooker bee's current solution, then we replace the potential solution of the onlooker bee with the open facility vector of the selected employed bee. This procedure is repeated for all onlooker bees in the population B_2 of the onlooker bees.
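The selection according to p_k in (2) is ordinary fitness-proportional (roulette-wheel) selection; a minimal Python sketch of it (our own, with hypothetical names) is:

import random

def select_employed_bee(fitnesses):
    # Return index k with probability fitness_k / sum of all fitnesses, as in (2).
    total = sum(fitnesses)
    threshold = random.uniform(0, total)
    acc = 0.0
    for k, f in enumerate(fitnesses):
        acc += f
        if threshold <= acc:
            return k
    return len(fitnesses) - 1    # guard against floating-point round-off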
Step 4: Scout Bee Phase. For a cut limit c, which is determined in advance, if the fitness value of the open facility vector of an employed bee has not improved during c iterations, then the employed bee is discarded. We take the employed bee whose open facility vector was discarded and turn it into a scout bee by giving it a new random open facility vector. This operation prevents the search from falling into local optima.
Step 5: Termination Condition of Iteration. The termination condition is determined by the number N of iterations set in advance. When the program reaches N, it outputs the open facility vector whose fitness value is the largest and execution stops.
The average relative percent error (ARPE) is defined by

ARPE = \frac{\sum_{i=1}^{R} \left( \frac{H_i - U}{U} \right) \times 100}{R} (3)
where H i denotes the i-th replication solution value, U is the optimal value provided by [START_REF] Beasley | [END_REF], and R is the number of replications. In Table 2, we summarize the data sets we used for our experiments. It shows the size of data sets and the optimal solutions.
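A small helper (ours, not from the paper) computes ARPE and the hit-to-optimum rate from the solution values of R replications, given the known optimum U; the cap71 optimum of Table 2 is used in the example.

def arpe(values, optimum):
    # Average relative percent error, formula (3).
    return sum((h - optimum) / optimum * 100 for h in values) / len(values)

def hit_rate(values, optimum, tol=1e-6):
    # Fraction of replications whose value matches the known optimum (HR).
    hits = sum(1 for h in values if abs(h - optimum) <= tol)
    return hits / len(values)

runs = [932615.75, 940000.00]    # two hypothetical replications for cap71
print(arpe(runs, 932615.75), hit_rate(runs, 932615.75))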
Experiment 1
We conducted an experiment to investigate 5 different candidates for fitness value as follows:
fitness(x_k) = 1 (4)
fitness(x_k) = \frac{1}{1 + x_k} (5)
fitness(x_k) = \frac{1}{1 + (x_k - x^*)} (6)
fitness(x_k) = \frac{1}{1 + (x_k - x^*)^2} (7)
fitness(x_k) = \frac{1}{1 + \sqrt{x_k - x^*}} (8)
where x * represents the best solution obtained so far. Note that the formula ( 5) is the original fitness function given in [START_REF] Karaboga | An Idea Based on Honey Bee Swarm for Numerical Optimization[END_REF].
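For reference, the five candidates (4)-(8) can be written as one small function (our own sketch; it assumes x_k >= x*, which holds because x* is the best, i.e. smallest, total cost found so far):

import math

def fitness_candidates(x_k, x_star):
    # The five fitness candidates (4)-(8); x_k is the total cost of a solution
    # and x_star the best (smallest) total cost obtained so far.
    d = x_k - x_star          # non-negative by the definition of x_star
    return {
        "(4)": 1.0,
        "(5)": 1.0 / (1.0 + x_k),
        "(6)": 1.0 / (1.0 + d),
        "(7)": 1.0 / (1.0 + d ** 2),
        "(8)": 1.0 / (1.0 + math.sqrt(d)),
    }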
The results obtained in this experiment are shown in Fig. 2 and Fig. 3. For the benchmark problem cap131, we set a population of employed bees B 1 = 50, a population of onlooker bees B 2 = 200, and the number of repetitions N = 100
Conclusions
In this paper we discussed the fitness function of the ABC algorithm for the UFLP by carrying out experiments with several variants and comparing them with the original fitness function. Our experiment indicates that the conventional formula [START_REF] Kole | An Ant Colony Optimization Algorithm for Uncapacitated Facility Location Problem[END_REF] for the fitness value of the ABC algorithm is almost the same as the fitness of a constant function given by (4) for the benchmark problems and so it seems inadequate for the UFLP. Therefore, we proposed a new formula (9) which can appropriately weight the fitness value according to problem instances. Using the formula (9) with a smaller value of Q, we can find more open facility vectors whose total costs are low and it makes the search efficient. However, we consider that the algorithm falls into a local optimum when we use too small a value of Q, and then the algorithm does not perform adequately. In our experiment, the ABC algorithm with the fitness function [START_REF] Tuncbilek | Artificial Bee Colony Optimization Algorithm for Uncapacitated Facility Location Problems[END_REF] with Q = 10 4 performs best and exceeds the original formula [START_REF] Kole | An Ant Colony Optimization Algorithm for Uncapacitated Facility Location Problem[END_REF]. The new candidate for the fitness function is worth being applied to other problems, although we do not know how to adjust the value of Q for a specific optimization problem. It will be our future research to study how to adjust the value of Q for a specific optimization problem such as TSP.
Fig. 1. Update of open facility vector
Fig. 2. Experiment 1: HR
Fig. 3. Experiment 1: ARPE
Fig. 4. Experiment 2: HR
Fig. 5. Experiment 2: ARPE
Table 1. Open facility vector for an employed bee k
Facility A B C D E
Open facility vector Y k 1 0 0 1 1
Opening cost 10 8 4 7 3
a 1 4 3 10 12
Customer b 9 8 7 4 3
(Transportation costs) c 8 12 6 5 7
d 15 10 6 10 13
The termination condition is determined by the number N of iterations set in advance. When the program reaches N , it outputs the open facility vector whose fitness value is the largest and execution stops.
ARPE is defined by
Begin
  Initialize solutions randomly                      (Step 1)
  Do
    For Each employed bee                            (Step 2)
      Search for a new solution
      If the solution is improved
        Update the solution
    For Each onlooker bee                            (Step 3)
      Choose a solution by probability
      Search for a new solution
      If the solution is improved
        Update the solution
    For Each solution                                (Step 4)
      If it is not updated
        Search for a new solution randomly and update
  While (Maximum Iteration is not reached)           (Step 5)
End
Algorithm : Pseudocode of the ABC algorithm for UFLP.
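To make the pseudocode concrete, here is a compact, runnable Python sketch of a standard ABC loop for the UFLP. It is our own illustration: all names are ours, and the onlooker phase follows the usual ABC scheme of searching a neighbour of the selected employed bee's vector, which differs slightly from the copy-based onlooker update described in Step 3.

import random

def total_cost(vec, fixed, trans):
    opened = [i for i, bit in enumerate(vec) if bit]
    if not opened:
        return float("inf")              # no open facility: treat as infeasible
    return (sum(fixed[i] for i in opened)
            + sum(min(trans[i][j] for i in opened) for j in range(len(trans[0]))))

def fitness(cost):
    return 1.0 / (1.0 + cost)            # formula (1); costs are non-negative here

def flip_one_bit(vec):
    new = vec[:]
    new[random.randrange(len(new))] ^= 1
    return new

def abc_uflp(fixed, trans, employed=50, onlookers=200, limit=20, iterations=100):
    n = len(fixed)
    sols = [[random.randint(0, 1) for _ in range(n)] for _ in range(employed)]
    costs = [total_cost(s, fixed, trans) for s in sols]
    stale = [0] * employed
    best_cost, best_sol = min(zip(costs, sols))
    for _ in range(iterations):
        for k in range(employed):                     # employed bee phase
            cand = flip_one_bit(sols[k])
            c = total_cost(cand, fixed, trans)
            if c < costs[k]:
                sols[k], costs[k], stale[k] = cand, c, 0
            else:
                stale[k] += 1
        fits = [fitness(c) for c in costs]            # selection weights for this phase
        total = sum(fits)
        for _ in range(onlookers):                    # onlooker bee phase
            threshold, acc, k = random.uniform(0, total), 0.0, employed - 1
            for idx, f in enumerate(fits):
                acc += f
                if threshold <= acc:
                    k = idx
                    break
            cand = flip_one_bit(sols[k])
            c = total_cost(cand, fixed, trans)
            if c < costs[k]:
                sols[k], costs[k], stale[k] = cand, c, 0
        for k in range(employed):                     # scout bee phase
            if stale[k] >= limit:
                sols[k] = [random.randint(0, 1) for _ in range(n)]
                costs[k] = total_cost(sols[k], fixed, trans)
                stale[k] = 0
        it_best = min(zip(costs, sols))
        if it_best[0] < best_cost:
            best_cost, best_sol = it_best
    return best_cost, best_sol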
Table 2. Benchmark problems
Data set Size(m × n) Optimum
cap71 16×50 932615.75
cap72 16×50 977799.40
cap73 16×50 1010641.45
cap74 16×50 1034976.98
cap101 25×50 796648.44
cap102 25×50 854704.20
cap103 25×50 893782.11
cap104 25×50 928941.75
cap131 50×50 793439.56
cap132 50×50 851495.33
cap133 50×50 893076.71
cap134 50×50 928941.75
Experimental Results
We examine the fitness function given by (1) when we implement the ABC algorithm for the UFLP to investigate the adequateness of the fitness function.
The ABC Algorithm was coded in C# language using Visual Studio and run on an Intel Core i3 3.07GHz Desktop with 2.0GB memory. We used 12 data sets (cap 71, 72, 73, 74, 101, 102, 103, 104, 131, 132, 133, 134) of benchmark problems from OR-Library [2] compiled by J. E. Beasley.UFLP is called Uncapacitated Warehouse Location Problem in the OR-Library. There are three groups of benchmark problems of size m × n, where m is the number of customers and n is the number of facilities: 50 × 16 (cap 71, 72, 73, 74), 50 × 25 (cap 101, 102, 103, 104), 50 × 50 (cap 131, 132, 133, 134), and the optimal solutions for those instances are known. The performance of our program was evaluated by average relative percent error (ARPE), hit to optimum rate (HR) and average computational processing time (ACPU) that are introduced in [3].
ARPE is the average of the difference from the optimum expressed in percentages, and the lower ARPE shows it produces better solutions. HR represents the number that the algorithm finds the optimal solutions across all repetitions and the higher HR shows better performance. HR takes a value from 0.00 to 1.00 where 1.00 implies that the algorithm finds the optimal solution with probability 1. ACPU represents the time (in seconds) that the algorithm spends to output one solution and the lower ACPU shows better performance.
and the cut limit c ranges between 5 and 50. The x-axis of the graph represents the cut limit c and the y-axis of the graph represents HR in Fig. 2 and ARPE in Fig. 3, respectively. Comparing the formulas (4) and [START_REF] Kole | An Ant Colony Optimization Algorithm for Uncapacitated Facility Location Problem[END_REF], we found that the two formulas show almost the same results. This implies that the formula [START_REF] Kole | An Ant Colony Optimization Algorithm for Uncapacitated Facility Location Problem[END_REF] does not have a desired property as a fitness function. The formulas ( 6), ( 7) and ( 8) are designed so that the fitness value is affected largely by the value of x k . If the cut limit c is set small, we obtain better results in the formulas ( 6), [START_REF] Kratica | Solving The Simple Plant Location Problem By Genetic Algorithm[END_REF] and ( 8) than the formula [START_REF] Kole | An Ant Colony Optimization Algorithm for Uncapacitated Facility Location Problem[END_REF], on the other hand, the cut limit c gets larger, then the formulas (5) gives a better result. This indicates that we can obtain a better result in a shorter computation, although the search may fall in a local optimal.
Experiment 2
Considering the results of Experiment 1, we propose the following formula for the fitness value:
where Q is an arbitrary constant, given as a parameter to the algorithm. Fig. 4 and Fig. 5 show how the value of cut limit c between 5 and 50 affects HR and ARPE for the benchmark problem cap131 with bee populations B 1 = 50, B 1 = 20 and the number of iterations N = 100. If Q = 10 6 or 10 8 , the fitness value of ( 5) is almost the same as the one for [START_REF] Tuncbilek | Artificial Bee Colony Optimization Algorithm for Uncapacitated Facility Location Problems[END_REF]. If Q = 1, the fitness value of ( 9) is almost the same as the one for [START_REF] Korte | Combinatorial Optimization: Theory and Algorithms[END_REF]. If Q = 10 4 , the fitness value of ( 9) is better than [START_REF] Kole | An Ant Colony Optimization Algorithm for Uncapacitated Facility Location Problem[END_REF]. As a result of the experiment, we believe that the ABC algorithm operates more efficiently by the fitness formula [START_REF] Tuncbilek | Artificial Bee Colony Optimization Algorithm for Uncapacitated Facility Location Problems[END_REF] with Q = 10 4 . Finally, we show the result of performing the proposed fitness value and conventional fitness value for each set of benchmark problems in Table 3, where we set the population of employed bees B 1 = 50, the population of the onlooker bees B 1 = 200, the cut limit c = 20 and the number of repetitions N = 100.
Table 3. Experimental Results of the proposed fitness value | 18,249 | [
"1001264"
] | [
"472230",
"472230",
"472230"
] |
01466213 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466213/file/978-3-319-24315-3_14_Chapter.pdf | Szilárd Hikari Kato
Zsolt Fazekas
Mayumi Takaya
Akihiro Yamamura
email: [email protected]
Comparative Study of Monte-Carlo Tree Search and Alpha-Beta Pruning in Amazons
Keywords: Amazons, Two player games, Monte-Carlo tree search, UCT, Game programs, Playouts, Alpha-Beta pruning, Evaluation function
Introduction
Artificial intelligence is an important technology in our digitalized society. There are many applications of artificial intelligence to industry, e.g., data mining in big data processing, natural language processing, robotics, intelligent agents and machine learning, etc. One of the most important applications, as well as proving grounds for artificial intelligence methods is to create game playing programs for board games like chess or Go. Monte-Carlo Tree Search is one of the simple, yet often efficient approaches along this line. We shall study the performance of a simple Monte-Carlo Tree Search program playing Amazons compared with traditional artificial intelligence methods like Alpha-Beta pruning.
Alpha-Beta pruning is a search algorithm that applies an evaluation function to each leaf node in the game tree and selects the node with the highest evaluation based on the Mini-Max principle. It has been widely studied for a long time as a search technique for two-player games such as Shogi and Reversi. When applying this method, it is important to use strong evaluation functions [START_REF] Kaneko | Evaluation Functions of Computer Shogi Programs and Supervised Learning Game Records[END_REF] and enhanced pruning techniques of the game tree [START_REF] Hoki | Efficiency of three forward-pruning techniques in shogi: Futility pruning, null-move pruning, and Late Move Reduction (LMR)[END_REF]. Monte-Carlo Tree Search (MCTS for short) is a search algorithm based on probability and statistics, and it can create a strategy without using an evaluation function; it was first implemented by Coulom [6]. This property made it a prime candidate for games for which it is difficult to create an evaluation function, such as Go [START_REF] Gelly | Modification of UCT with patterns in Monte-Carlo Go[END_REF] and Arimaa [START_REF] Kozelek | Methods of MCTS and the game Arimaa[END_REF]. For instance, it was considered difficult to write a strong program for Go using conventional Mini-Max search techniques. However, programs such as CrazyStone [START_REF] Coulom | Computing Elo Ratings of Move Patterns in the Game of Go[END_REF], based on MCTS, were able to win computer Go tournaments, proving the validity of the approach.
The game of Amazons (Amazons for short) is a two player game [START_REF] Zamkauskas | Amazons[END_REF] sharing some attributes with both chess and Go, but also being different from them in crucial ways. There are more legal moves in each turn in the game of Amazons than in chess. A strong game playing program must explore the game tree to great depth. However, searching deeply in Amazons with a simple Alpha-Beta pruning is ineffective, because the state space is huge; the number of legal moves is so great that doing a full width search is impractical throughout at least the first two thirds of an Amazons game [START_REF] Avetisyan | Selective search in an Amazons program[END_REF]. Therefore, creating a strong player using only Alpha-Beta pruning is impossible.
Amazons has been extensively studied, see, for instance, the analysis of 2 × n Amazons [START_REF] Berlekamp | Sums of N × 2 Amazons[END_REF], the analysis of endgames [START_REF] Buno | Simple Amazons endgames and their connection to Hamilton circuits in cubic subgrid graphs[END_REF][START_REF] Kloetzer | A Comparative Study of Solvers in Amazons Endgames[END_REF][START_REF] Kloetzer | Playing Amazons Endgames[END_REF], the Amazons opening book [START_REF] Kloetzer | Monte-Carlo Opening Books for Amazons[END_REF], and a study of creating strong programs by combining an evaluation function and MCTS [START_REF] Kloetzer | The Monte-Carlo Approach in Amazons[END_REF]. Nobody succeeded in creating a strong game playing program of Amazons based on simple MCTS. J. Kloetzer et al. [START_REF] Kloetzer | Experiments in Monte-Carlo Amazons[END_REF][START_REF] Kloetzer | Monte-Carlo Techniques: Application to Monte Carlo tree search and Amazon[END_REF] gave much stronger approaches by combining MCTS and an evaluation function in the search process. They also showed that the strength of MCTS combined with an evaluation function for Amazons can be enhanced by increasing the number of simulations. However, we are not aware of any previous analyses of direct play between simple MCTS not using an evaluation function and Alpha-Beta pruning. As such a study would emphasize the gain brought by combining evaluation functions with MCTS, we conducted experiments in this direction.
We carried out experiments in which a simple MCTS program not using an evaluation function plays against an Alpha-Beta pruning program using a classical evaluation function. We recorded how many times the MCTS program won against the Alpha-Beta pruning program and the average time that each program took to output a move. Our experiments showed that Alpha-Beta pruning is stronger than the simple MCTS program and that increasing the number of simulations in a simple MCTS is an inefficient way of strengthening a strategy for Amazons.
The Game of Amazons
The game of Amazons is a combinatorial two-player game invented in 1988 by Walter Zamkauskas [START_REF] Zamkauskas | Amazons[END_REF] and first published (in Spanish) in issue 4 of the puzzle magazine El Acertijo in 1992. Amazons is played on a 10 × 10 chess-style board. The game starts by placing four black and white queens on the specified cells on the board. The first player (P W ) selects and moves one of the white queens according to the movement of a queen in chess (vertical, horizontal and diagonal straight lines on the board). Then, P W chooses any empty cell in the range of the queen moved and thwarts it. No piece can be placed on the thwarted cell nor pass through it thereafter. Similarly, the second player (P B ) selects and moves one of the black queens and chooses any empty cell in the range of the queen moved and thwarts it. The players P W and P B move alternately and the player who can no longer complete their moves (both moving a queen and thwarting a cell) loses the game. Amazons resembles Go in that it is considered good strategy for one to create their own territory, while the movement of the pieces on the board is borrowed from chess. It should be noted, that while Amazons originally uses a 10 × 10 board, the game can also be considered in a more general manner on an n × n board as a variant. Figure 1 shows the initial setting of an Amazons game and the board after the first player made their first move. Amazons is known to have a very large number of legal moves in a given turn compared with other board games such as chess, Shogi or Go. For example, the number of legal moves of a player in their turn in chess is about 35 on average (see [START_REF] Allis | Searching for Solutions in games and Artificial Intelligence[END_REF]), in Shogi it is 80 (see [START_REF] Matsubara | Go, natural developments in game research[END_REF]) and in Go it is 361 on a 19 × 19 board. In contrast, in the first turn of Amazons the starting player has 2176 legal moves, and each player has 400 legal moves per turn on average even during the game. Therefore, the evaluation of the game tree involves many more states than in the case of previously mentioned board games and creating a strong computer player for Amazons is considered difficult [START_REF] Avetisyan | Selective search in an Amazons program[END_REF].
Alpha-Beta Pruning
The game tree for a two-player game is a directed graph whose nodes are states in the game and whose edges are legitimate moves. Alpha-Beta pruning is an algorithm to find the best move from a state according to an evaluation function and the Mini-Max principle (see [START_REF] Knuth | An Analysis of alpha-beta pruning[END_REF]) by analyzing part of the game tree. The algorithm has been studied since J. McCarthy proposed the idea in 1956. It is a widely used algorithm in the field of two-player games; notable examples include chess and Reversi. First, Alpha-Beta pruning expands the game tree until a specified search depth. After that, it applies the evaluation function to the child nodes of the portion of the game tree expanded up to then. The evaluation of the nodes higher up in the tree is done by the Mini-Max principle. After obtaining the evaluation of all the nodes, the algorithm selects the move leading to the child node with the highest value.
Alpha-Beta pruning can lead to a stronger strategy if one increases the allowed search depth. However, since Amazons has a very large number of legal moves on average, it is difficult to explore the game tree to a large depth because of time and memory limitations. The Alpha-Beta pruning program in our experiment used depth-first search for evaluating the nodes to a depth of 2. Several different heuristics for Amazons have already been studied [START_REF] Hensgens | A Knowledge-based Approach of the Game of Amazons[END_REF]. For the Alpha-Beta pruning programs we used the three evaluation functions given in [START_REF] Lieberum | An evaluation function for the game of Amazons[END_REF] and described below. In what follows, by turn player in a state we mean the player whose turn it is to move in that state and by opponent we mean the other. When not specified explicitly, by evaluating a state we mean evaluation from the point of view of the turn player.
Mobility Evaluation
In Amazons, it is advantageous to have more legal moves available in one's turn because players who cannot move, lose the game. Consequently, if the number of legal moves is small in a state, it is considered to be an unfavorable game-state for the player. In other words, reducing the number of legal moves of the opponent is considered as an effective strategy. Let mobility evaluation (ME) of a state be the value obtained by subtracting the number of legal moves of the opponent from the number of legal moves of the turn player in a given game state. For example, if P W has 161 legal moves in a certain game state and P B has 166 moves, then ME of the state X from P W 's point of view is ME W (X) = 161 -166 = -5.
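A direct way to compute ME is to count all (queen move, arrow shot) pairs for each side. The sketch below is our own illustration of that counting; it assumes the board is a mutable list of lists of single characters with '.' marking an empty cell, and it makes no attempt at efficiency. ME_W is then legal_move_count for the white queens minus the same count for the black queens.

DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def ray_targets(board, r, c):
    # Empty cells reachable from (r, c) by one queen-like slide.
    n = len(board)
    for dr, dc in DIRS:
        nr, nc = r + dr, c + dc
        while 0 <= nr < n and 0 <= nc < n and board[nr][nc] == '.':
            yield nr, nc
            nr, nc = nr + dr, nc + dc

def legal_move_count(board, queens):
    # Number of legal (queen move, arrow) pairs for the player owning `queens`;
    # queens of both colours and thwarted cells block movement.
    total = 0
    for r, c in queens:
        piece, board[r][c] = board[r][c], '.'      # the queen leaves its square
        for qr, qc in list(ray_targets(board, r, c)):
            board[qr][qc] = piece                  # queen temporarily on its destination
            total += sum(1 for _ in ray_targets(board, qr, qc))
            board[qr][qc] = '.'
        board[r][c] = piece
    return total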
Territory Evaluation
The concept of territory is important in Amazons. A territory of a player is a cell which is reachable by the queens of only that player, so advantageous game states should have many of these. The player who has access to more empty cells is in advantage because the playing area is divided in several separated subareas in the end-game of Amazons.
Figure 2 shows the minimum number of moves required to reach each cell on the game board by any of the pieces of the players. The value in the upper left corner of each blank cell is the minimum number of moves needed for P W to reach it, whereas the value in the lower right corner represents the corresponding number required for P B . For example, to reach cell C6, player P W needs at least four moves, e.g., B8 → A7 → A6 → B5 → C6. In contrast, P B requires only two moves, B3 → A4 → C6. Therefore, P B has a faster access to C6 than P W so according to territory evaluation, cell C6 belongs to the territory of P B . Let D X (A) denote the minimum number of moves needed for player X to reach cell A. In the example above, D W (C6) = 4 and D B (C6) = 2.
Fig. 2. Territory evaluation

We compute the minimum number of moves needed for each player in this manner for each blank cell on the board and take the sum over all blank cells to obtain the evaluation of the game-state based on the territories. We define territory evaluation of a state X for P_W as follows.
T_W(X) = \sum_{\text{empty cells } A} \Delta_1(D_W(A), D_B(A)),
where
\Delta_1(n, m) = \begin{cases} 0 & \text{if } n = m = \infty \\ \frac{1}{5} & \text{if } n = m < \infty \\ 1 & \text{if } n < m \\ -1 & \text{if } n > m \end{cases}
The evaluation value of the cells that both players reach in the same number of moves may be set to an arbitrary value; in setting it to 1/5 we followed [START_REF] Lieberum | An evaluation function for the game of Amazons[END_REF].
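Territory evaluation can be computed with a breadth-first search over queen moves. The following Python sketch is our own (it reuses the board encoding assumed above: a list of lists with '.' for empty cells); it computes D_X(A) for one player and then aggregates Delta_1 over all empty cells.

from collections import deque

DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
INF = float("inf")

def queen_distances(board, queens):
    # Minimum number of queen moves the player needs to reach each empty cell.
    n = len(board)
    dist = [[INF] * n for _ in range(n)]
    todo = deque()
    for r, c in queens:
        dist[r][c] = 0
        todo.append((r, c))
    while todo:
        r, c = todo.popleft()
        for dr, dc in DIRS:
            nr, nc = r + dr, c + dc
            while 0 <= nr < n and 0 <= nc < n and board[nr][nc] == '.':
                if dist[nr][nc] == INF:            # first visit is the shortest (unit edges)
                    dist[nr][nc] = dist[r][c] + 1
                    todo.append((nr, nc))
                nr, nc = nr + dr, nc + dc
    return dist

def territory_eval(board, white_queens, black_queens):
    # T_W(X): sum of Delta_1(D_W(A), D_B(A)) over all empty cells A.
    dw = queen_distances(board, white_queens)
    db = queen_distances(board, black_queens)
    score = 0.0
    for r, row in enumerate(board):
        for c, cell in enumerate(row):
            if cell != '.':
                continue
            a, b = dw[r][c], db[r][c]
            if a == INF and b == INF:
                continue                           # Delta_1 = 0
            score += 1 / 5 if a == b else (1 if a < b else -1)
    return score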
Relative Territory Evaluation
The basic idea of relative territory evaluation is similar to that of territory evaluation. In essence, territory evaluation counts the number of cells that can be reached by a player in less moves than by the other player, disregarding the actual difference in the number of moves. In contrast, relative territory evaluation assesses the difference in the number of moves needed by the two players to reach each blank cell. Let us define relative territory evaluation of state X for P W as follows.
RT_W(X) = \sum_{\text{empty cells } A} \Delta_2(D_W(A), D_B(A))
where
\Delta_2(n, m) = \begin{cases} 5 & \text{if } m = \infty, n < \infty \\ -5 & \text{if } n = \infty, m < \infty \\ 0 & \text{if } n = \infty, m = \infty \\ m - n & \text{otherwise} \end{cases}
For a cell which can be reached by only one of the players, the difference in number of moves is by default infinite. However, for implementation purposes it is convenient to avoid treating infinity and so we set the difference to 5 in those cases.
Monte-Carlo Tree Search
A Monte-Carlo algorithm [START_REF] Motwani | Randomized Algorithms[END_REF] is a randomized algorithm whose output is allowed to be incorrect with a certain probability. Even though the answer may be incorrect, in some cases this approach can be much more efficient than using deterministic algorithms. A Monte-Carlo tree search (MCTS) [START_REF] Coulom | Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search[END_REF] is a Monte-Carlo algorithm suitable for certain decision processes, most notably employed in game playing. Random simulations in a game tree called playouts are employed to select the next move by game playing programs. MCTS has received considerable interest due to its great success in playing Go [START_REF] Coulom | Computing Elo Ratings of Move Patterns in the Game of Go[END_REF].
MCTS employs playouts, which are simulations to determine the outcome of a game played by two players who choose their moves randomly until the game ends. As a refinement of MCTS, the method of upper confidence bounds applied to trees (UCT) was introduced by Kocsis and Szepesvári [START_REF] Kocsis | Bandit based Monte Carlo planning[END_REF] based on the UCB1 algorithm proposed by Auer, Cesa-Bianchi, and Fischer [START_REF] Auer | Finite-time Analysis of the Multiarmed Bandit Problem[END_REF]. In a game state G with child states G 1 , . . . , G k , a UCT algorithm selects a child state for which the UCB1 value is maximal among the ones computed for each G i . The UCB1 value of each child state G i is defined by the following equation:
UCB1(G_i) = \bar{x}_i + \sqrt{\frac{\log n}{n_i} \min\left(\frac{1}{4}, \; \bar{x}_i - \bar{x}_i^2 + \sqrt{\frac{2 \log n}{n_i}}\right)}
where n is the number of playouts executed from game state G, n_i is the number of playouts executed from child state G_i, and \bar{x}_i = x_i / n_i is the win-loss ratio of G_i, where x_i is the number of wins among the playouts from G_i.
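In code, the child-selection rule reads as follows (a sketch with our own names; unvisited children, with n_i = 0, would need separate handling, for example by always selecting them first):

import math

def ucb_value(wins_i, n_i, n_total):
    # Value of child G_i in the selection step; assumes n_i > 0 and n_total > 1.
    x_bar = wins_i / n_i
    capped = min(0.25, x_bar - x_bar ** 2 + math.sqrt(2 * math.log(n_total) / n_i))
    return x_bar + math.sqrt(math.log(n_total) / n_i * capped)

def best_child(children):
    # children: list of (wins_i, n_i) pairs; returns the index with the largest value.
    n_total = sum(n for _, n in children)
    return max(range(len(children)),
               key=lambda i: ucb_value(children[i][0], children[i][1], n_total))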
A UCT program does not necessarily have an evaluation function and its move selection depends only on the result of playouts.
Through selecting a child node, executing the playout, and repeatedly updating winning percentages, it is possible to find the most selected child node and recommend it as the final move. UCT explores the game tree to a great depth by repeating playouts; therefore, it can be configured to be a strong player by increasing the number of searches (or the allowed search time).
Experiment
UCT vs. Alpha-Beta
We compared a UCT algorithm and Alpha-Beta pruning by letting these two programs play Amazons against each other. This experiment not only compared the relative strength of UCT and Alpha-Beta pruning in Amazons but also aimed to evaluate the improvement of the UCT program when increasing the number of allowed searches.
We employed a simple UCT program not using heuristic techniques such as pruning. The number of playouts performed was 10000, 30000, 50000, 100000, 200000. The Alpha-Beta pruning programs had maximum search depth 2 and used one of the three types of evaluation functions described in Section 4, respectively.
We executed the experiment on a 10 × 10 board. For each match-up, we performed 50-50 simulations with the UCT program being the first player and the second player, respectively, and we recorded the number of times the UCT program won. In addition, we recorded the average time taken by UCT and Alpha-Beta pruning to make one move and the average time it took to play one game. This experiment was performed by using an Amazons match simulator written in C#, developed by us. To be able to measure the times correctly, while one game playing program is searching for a move, the other does not compute anything. Our experiments were run on a computer with Windows 7 Professional(64bit), having an Intel(R) Xeon(R) CPU E31245(3.30GHz) and memory of 16GB.
Experimental Results
Table 1 shows the number of times that the UCT programs (with different number of playouts) won against the three Alpha-Beta pruning players out of 100 games (50-50 as first and second player, respectively). Figure 3 shows the average time taken for a move by the programs. The vertical axis of the graph is average search time for one move; the horizontal axis represents the number of playouts performed by the UCT. The time taken by the Alpha-Beta pruning algorithms to compute a move only depends on the depth (2) and on the individual states being evaluated. As the depth was fixed, the slight variations in average computing time for the Alpha-Beta pruning programs is probably due to having to evaluate different game states reached by the changing strategy of the UCT programs and to the relatively low number of matches between the programs. We expect that increasing the number of head-to-head matches would drive the averages closer to each other approximating a horizontal line.
With respect to the number of wins of UCT against Alpha-Beta pruning, the winning percentages of UCT were very different depending on the evaluation function of the opponent. Territory evaluation (TE) was the strongest of the three evaluation functions. Against TE, even the UCT program with 200000 playouts won only 34 out of 100. In contrast, when playing against mobility evaluation, the UCT gained significant strength by increasing the number of playouts, the strongest one (200000 playouts) winning all 100 matches. There are clear differences in the number of wins against the three evaluation functions, but it can be clearly seen that even when the UCT performed poorly (vs. TE) its number of wins was much higher with 200000 playouts than with 10000. Now let us look at the computation time required to make one move by the programs. With 10000 playouts, the UCT needed on average 4 seconds to decide on a move, which is almost the same as for Alpha-Beta pruning. However, the required time increased in accordance with the increase in the number of playouts. The UCT program with 100000 playouts needed on average 30 seconds, while the UCT with 200000 playouts took on average 55 seconds. The UCT program with 200000 playouts had fewer wins than losses against Alpha-Beta pruning using territory evaluation, even though it required more than 30 times the computation time of the Alpha-Beta pruning program.
Meanwhile, among the three versions of Alpha-Beta pruning there was only a small difference in computing time due to the difference in the evaluation function. Alpha-Beta pruning using mobility evaluation did not record a single win against the UCT with 200000 playouts. However, against territory evaluation under the same conditions the UCT won only 34 times. This means that while increasing the number of playouts improved the UCT program, the gain depended heavily on the evaluation function of the opponent, while Alpha-Beta pruning was greatly enhanced by changing the heuristic. Moreover, the increase in playouts caused a significant increase in computing time for UCT, whereas the computing time for Alpha-Beta pruning was not greatly influenced by the change in the evaluation function.
Conclusions
We conducted an experiment in which we set UCT-based strategies against Alpha-Beta pruning ones in Amazons matches. We showed that Alpha-Beta pruning with territory evaluation is both stronger and much faster than the simple UCT. Increasing the number of playouts (and thus computing time, too) led to improvements in the UCT. However, with 200000 playouts allowed, the UCT program consumed 30 times more computation time and still only won 34 out of 100 games against Alpha-Beta pruning using territory evaluation. In conclusion, it looks like there is more to gain in playing strength for Amazons programs by improving the evaluation function and using classical methods like Alpha-Beta pruning than by increasing the number of playouts using the MCTS strategy.
Fig. 1. From initial placement to the first movement of P_W
Fig. 3. UCT vs Alpha-Beta: the average time taken by the programs to make a move (sec).
Table 1. UCT vs Alpha-Beta: number of wins for the UCT.
UCT playouts  Mobility  Territory  Relative Territory
10000         71        1          9
30000         89        8          40
50000         90        11         43
100000        97        24         57
200000        100       34         66
"1001265",
"1001266",
"1001267",
"1001268"
] | [
"472230",
"472230",
"472230",
"472230"
] |
01466219 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466219/file/978-3-319-24315-3_19_Chapter.pdf | Sergey Krendelev
email: [email protected]
Mikhail Yakovlev
email: [email protected]
Maria Usoltseva
email: [email protected]
Secure database using order-preserving encryption scheme based on arithmetic coding and noise function
Keywords: Cloud computing security, order-preserving encryption, symmetric-key cryptosystems, order-preserving hash functions
Order-preserving symmetric encryption (OPE) is a deterministic encryption scheme whose encryption function preserves the numerical order of the plaintexts. This allows comparison operations to be applied directly to encrypted data when, for example, decryption takes too much time or the cryptographic key is unknown. That is why it is successfully used in cloud databases, where efficient range queries can be performed on top of it. This paper presents an order-preserving encryption scheme based on arithmetic coding. In the first part we review the principles of arithmetic coding, which form the basis of the algorithm, as well as the changes that were made. Then we describe the noise function approach, which makes the algorithm cryptographically stronger, and show modifications that can be made to obtain an order-preserving hash function.
Finally, we analyze the resulting vulnerability to chosen-plaintext attacks.
Introduction
Nowadays, the amount of information stored in various databases steadily increases.
In order to store and effectively manage large amounts of data, it is necessary to increase data storage capacity and to allocate funds for its administration. Another way, chosen by many companies, is to hand database management over to a third party. Such a service is managed by a cloud operator and is called Database as a Service (DBaaS). Obviously, this approach has its own flaws, and the most important of them is security: data can be stolen from the provider's storage by the service provider itself or by someone else. Fortunately, this problem can be solved by encryption. Of course, if we just encrypt the whole database with a conventional encryption algorithm, we will have to encrypt and decrypt it each time we need something, so all the advantages will be lost. That is why special encryption schemes, such as homomorphic encryption and order-preserving encryption, are developed: the first one allows us to process encrypted data, and the second one to sort it and select the desired records.
All known order-preserving schemes have significant problems, such as a low level of security (polynomial monotonic functions [START_REF] Ozsoyoglu | Anti-Tamper Databases: Querying Encrypted Databases[END_REF], spline approximation [2], linear functions with random noise [START_REF]Commutative order-preserving encryption[END_REF]), low performance (summation of random numbers [START_REF] Bebek | Anti-tamper database research: Inference control techniques[END_REF], B-trees [START_REF] Raluca | An Ideal-Security Protocol for Order-Preserving Encoding[END_REF]) or the need to process very large numbers (the scheme by Boldyreva [START_REF] Boldyreva | Order preserving symmetric encryption[END_REF]). The proposed scheme does not have these disadvantages and, furthermore, unlike all the others, can be used to encrypt real numbers. It can also be used to obtain an order-preserving hash function.
This algorithm combines two main ideas that the majority of OPE schemes operate with: the design of monotonic functions and elements of coding theory (implicit design of monotonic functions). The scheme is said to be based on arithmetic coding and a noise function but, in fact, this article considers only the case of a binary alphabet; in theory, nothing prevents the use of an arbitrary one.
First, let us give a definition of order-preserving encryption. Assume there are two sets A and B with an order relation <. A function f: A → B is strictly increasing if ∀x, y ∈ A: x < y ⇔ f(x) < f(y). Order-preserving encryption is deterministic symmetric encryption based on a strictly increasing function.
The described order-preserving encryption scheme was developed in the Laboratory of Modern Computer Technologies of the Novosibirsk State University Research Department as a part of the "Protected Database" project 1 and is based on arithmetic coding and a noise function. Let us consider these two components in detail.
Splitting procedure of arithmetic coding
Suppose c is a non-negative integer number requiring n bits for its representation, i.e.

c = Σ_{i=1..n} α_i · 2^(n-i),

where α_1, α_2, …, α_n is a bit string and α_1 is the MSB. Let us define the bijection f. Assume that the string α_1, α_2, …, α_n defines a certain real number s ∈ [0, 1] as follows:

s = c / 2^n.
Let us find another representation for the number s. In order to do this, we use the idea of arithmetic coding. Notice that the number s satisfies the equation 2^n · s = c. The equation

G(x) = 2^n · x - c = 0
has only one solution on the interval [0, 1]. If we solve this equation using a standard binary search, we get the initial number s after n steps. The main idea of arithmetic coding is that the intervals can be split into parts in an arbitrary ratio; in this case an approximate solution of the equation can be found after a smaller number of steps, which is what allows arithmetic coding to compress data. Let us consider the splitting procedure. The interval [0, 1] is split into two parts in the ratio γ : μ (with γ + μ = 1), and according to the sign of the function G(x) at the splitting point, one of the two segments, denoted [a_1, b_1], is selected and the bit β_1 is set accordingly. The selected interval is again split in the ratio γ : μ, and again one of the segments is chosen by the sign of G(x) at the splitting point. Proceeding by induction, the interval [a_k, b_k] can be calculated for every k; its length is γ^r · μ^(k-r), where r is the number of zeros in the string β. If γ^r · μ^(k-r) < 1/2^n for every possible r, then s ∈ [a_k, b_k], and c = 2^n · s is uniquely defined by β = (β_1, …, β_k). It is also obvious that this mapping preserves order.
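As an illustration, the splitting procedure can be sketched in a few lines of Python. The function below and its parameter values are ours and only serve to make the description concrete; with the 1:1 ratio at every step it degenerates to plain binary search.

    # Minimal sketch of the splitting procedure (illustrative names and values).
    from fractions import Fraction

    def split_bits(c, n_bits, ratios):
        """Encode the n-bit integer c as a bit string beta by repeatedly splitting
        [0, 1] in the given ratios and keeping the part containing the root of
        G(x) = 2**n_bits * x - c."""
        a, b = Fraction(0), Fraction(1)
        beta = []
        for p, q in ratios:
            x = a + (b - a) * Fraction(p, p + q)
            if 2 ** n_bits * x - c > 0:   # sign of G at the splitting point
                beta.append(0)            # the root lies in the left part
                b = x
            else:
                beta.append(1)            # the root lies in the right part
                a = x
        return beta, (a, b)

    bits, (a_k, b_k) = split_bits(c=180, n_bits=8, ratios=[(2, 3)] * 20)
    print(bits, float(a_k), float(b_k))   # the final interval contains 180/256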
The generalization used in adaptive arithmetic coding, as well as in the proposed algorithm, is that a different ratio can be used at each step. This allows us to achieve stronger security of the encryption.
Noise function
It is known that the composition of two strictly increasing functions is strictly increasing. Therefore, to provide stronger security of the cryptographic algorithm, a special random strictly increasing function is used in addition to the splitting procedure; in fact, we use the inverse of the function that was generated. It was proved [START_REF] Boldyreva | Order preserving symmetric encryption[END_REF] that OPE schemes cannot satisfy the standard notions of security, such as indistinguishability against chosen-plaintext attack (IND-CPA) [START_REF] Bruce | [END_REF], since they leak the ordering information of the plaintexts. If an adversary knows plaintexts p_1, p_2 and the corresponding ciphertexts c_1, c_2, as well as a ciphertext c such that c_1 < c < c_2, it is obvious that the plaintext for c lies in the interval (p_1, p_2). In addition, the adversary can always approximate the decryption function, for instance, using linear interpolation.
Moreover, in the case of, for example, the encryption method developed by David A. Singer and Sun S. Chung [START_REF] Ozsoyoglu | Anti-Tamper Databases: Querying Encrypted Databases[END_REF], where strictly increasing polynomial functions f(x) = a_0 + a_1 x + ⋯ + a_n x^n are used for encryption, the adversary can calculate the exact encryption function if he has (n + 1) arbitrary (plaintext, ciphertext) pairs. It is enough to solve the system of equations:

a_0 + a_1 x_0 + ⋯ + a_n x_0^n = y_0
a_0 + a_1 x_1 + ⋯ + a_n x_1^n = y_1
⋮
a_0 + a_1 x_n + ⋯ + a_n x_n^n = y_n

Thus, the adversary obtains a_0, …, a_n and, correspondingly, the encryption function f(x).
In order to complicate this task, it is necessary to maximize the number of pairs required for the attack and the complexity of the system of equations f(x_i) = y_i. Therefore, it was decided to generate the noise function from the class

f(x) = ∫_c^x [a_0 + a_1 t + a_2 t^2 (a_3 + a_4 sin(a_5 + a_6 t) + a_7 cos(a_8 + a_9 t))] dt,

where c is an arbitrary constant and the coefficients a_i are selected so that

a_0 + a_1 t + a_2 t^2 (a_3 + a_4 sin(a_5 + a_6 t) + a_7 cos(a_8 + a_9 t)) > 0

for ∀t ∈ (c; x_max). In this case f(x) is a strictly increasing function (see Fig. 1). The integral can be calculated explicitly, which speeds up the computation of function values.
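For concreteness, a possible member of this class can be sketched as follows. The coefficient values are invented for the illustration, and the explicit antiderivative used in a real implementation is replaced here by a simple numerical quadrature.

    # Illustrative sketch of a noise function from the class (made-up coefficients;
    # midpoint quadrature stands in for the explicit integral).
    import math

    A = [1.0, 0.3, 0.05, 2.0, 1.5, 0.7, 3.0, 1.2, 0.2, 5.0]   # a0 .. a9
    C, X_MAX = 0.0, 1.0

    def integrand(t, a=A):
        return (a[0] + a[1] * t
                + a[2] * t ** 2 * (a[3] + a[4] * math.sin(a[5] + a[6] * t)
                                   + a[7] * math.cos(a[8] + a[9] * t)))

    # The coefficients are acceptable only if the integrand is positive on (C, X_MAX).
    assert all(integrand(C + k * (X_MAX - C) / 10000) > 0 for k in range(1, 10000))

    def noise_f(x, steps=10000):
        """f(x): integral of the integrand from C to x (numerical stand-in)."""
        h = (x - C) / steps
        return h * sum(integrand(C + (k + 0.5) * h) for k in range(steps))

    print(noise_f(0.5), noise_f(1.0))   # strictly increasing in x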
Fig. 1. Example of a correct noise function from the class. Due to the combination of sine and cosine terms, its behavior is hard to predict without knowledge of the coefficients a_0, …, a_9.
Nevertheless, the system of equations

∫_c^{x_0} [a_0 + a_1 t + a_2 t^2 (a_3 + a_4 sin(a_5 + a_6 t) + a_7 cos(a_8 + a_9 t))] dt = y_0
∫_c^{x_1} [a_0 + a_1 t + a_2 t^2 (a_3 + a_4 sin(a_5 + a_6 t) + a_7 cos(a_8 + a_9 t))] dt = y_1
⋮
∫_c^{x_k} [a_0 + a_1 t + a_2 t^2 (a_3 + a_4 sin(a_5 + a_6 t) + a_7 cos(a_8 + a_9 t))] dt = y_k

is difficult to solve, which indicates that the proposed algorithm is cryptographically strong against this type of attack.
4 Cryptographic scheme
Key generation
As the private key of the encryption algorithm we consider the noise function

f(x) = ∫_c^x [a_0 + a_1 t + a_2 t^2 (a_3 + a_4 sin(a_5 + a_6 t) + a_7 cos(a_8 + a_9 t))] dt

and a set of ratios (p_i, q_i).
In order for an encrypted n-bit number to be uniquely decrypted, the length of the intervals computed during decryption has to be less than 1/2^n. The largest interval length that can be obtained during decryption is ∏_i [max(p_i, q_i)/(p_i + q_i)] · max f′(x). So the algorithm for calculating the set of ratios is:
1. Generate random ratios (p_i, q_i).
2. Check the condition

∏_i [max(p_i, q_i)/(p_i + q_i)] · max f′(x) < 1/2^n.

If this condition is satisfied, go to step 3; otherwise go back to step 1.
3. Output the set of ratios (p_1, q_1), (p_2, q_2), …, (p_k, q_k).
The key is the set K = [a_0, …, a_9, (p_1, q_1), (p_2, q_2), …, (p_k, q_k)].
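A simplified sketch of the ratio-generation step is shown below. Since f is an integral, its derivative is the integrand itself, so max f′(x) can be estimated by sampling it; the sketch also deviates slightly from the algorithm above in that it keeps appending ratios until the bound holds instead of re-drawing a whole set, and it assumes a noise function normalized so that its derivative never exceeds 2.

    # Simplified ratio generation (illustrative; fprime_max comes from the chosen
    # noise function, here assumed to be bounded by 2).
    import random

    def generate_ratios(n_bits, fprime_max, max_part=9, rng=random.Random(1)):
        ratios, prod = [], 1.0
        while prod * fprime_max >= 1.0 / 2 ** n_bits:   # uniqueness bound not yet met
            p, q = rng.randint(1, max_part), rng.randint(1, max_part)
            ratios.append((p, q))
            prod *= max(p, q) / (p + q)                 # worst-case interval shrink
        return ratios

    ratios = generate_ratios(n_bits=16, fprime_max=2.0)
    key = {"coefficients": "a0..a9 of the noise function", "ratios": ratios}
    print(len(ratios), "ratios generated")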
Encryption
Assume we need to encrypt an n-bit integer s with the key K = [f(x), (p_1, q_1), (p_2, q_2), …, (p_k, q_k)], where f(x) is a noise function with f(a_0) = 0 and f(b_0) = 2^n, and (p_i, q_i) is the set of ratios. Consider the i-th iteration of the algorithm.
The current interval [a_{i-1}, b_{i-1}] is split in the ratio p_i : q_i. Let it be split at the point x ∈ [a_{i-1}, b_{i-1}], i.e.

x = a_{i-1} + (b_{i-1} - a_{i-1}) · p_i / (p_i + q_i).

If f(x) > s, then β_i = 0, a_i = a_{i-1}, b_i = x. Otherwise, β_i = 1, a_i = x, b_i = b_{i-1}.
Notice that for every i, f^{-1}(s) ∈ [a_i, b_i] according to the selection of a_i and b_i. After performing k iterations (where k is the size of the key, i.e. the number of ratios), we obtain the bit sequence β = (β_1, …, β_k), β_i ∈ {0, 1}, which is the ciphertext for s.
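A minimal sketch of the encryption loop is given below. For readability it works with the plaintext scaled to s/2^n and a noise function normalized onto [0, 1] (equivalent to the f(b_0) = 2^n convention above up to scaling); the identity map stands in for the secret noise function and the toy ratio list is not a properly generated key.

    # Sketch of the encryption loop (normalized convention; identity stand-in for f).
    from fractions import Fraction

    def encrypt(s, n_bits, f, ratios):
        a, b = Fraction(0), Fraction(1)
        target = Fraction(s, 2 ** n_bits)
        beta = []
        for p, q in ratios:
            x = a + (b - a) * Fraction(p, p + q)
            if f(x) > target:
                beta.append(0)   # keep the left part
                b = x
            else:
                beta.append(1)   # keep the right part
                a = x
        return beta

    identity = lambda x: x                 # stand-in for the secret noise function
    toy_ratios = [(1, 2), (2, 1)] * 15     # toy key of 30 ratios
    print(encrypt(12345, 16, identity, toy_ratios))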
Decryption
Suppose there is a bit sequence β = (β_1, …, β_k), β_i ∈ {0, 1}, which is the ciphertext for s, encrypted with some key K. Let us consider the i-th iteration of the algorithm.
As in the encryption algorithm, the current interval [a_{i-1}, b_{i-1}] is split in the ratio p_i : q_i. Let it be split at the point x ∈ [a_{i-1}, b_{i-1}], i.e.

x = a_{i-1} + (b_{i-1} - a_{i-1}) · p_i / (p_i + q_i).

If β_i = 0, then a_i = a_{i-1}, b_i = x. Otherwise, a_i = x, b_i = b_{i-1}.
After performing k iterations, we obtain the interval [a_k, b_k], and the condition f(b_k) - f(a_k) < 1/2^n is satisfied according to the key selection. As s ∈ [f(a_k), f(b_k)], the value s is uniquely decoded as follows:

s = ⌊2^n · f(a_k)⌋ + 1,

where ⌊x⌋ is the largest integer that comes before x.
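The corresponding decryption can be sketched as follows; the encryption helper is repeated so that the round-trip check is self-contained, and math.ceil implements the floor-plus-one rule with the "largest integer before x" convention exactly, thanks to rational arithmetic (again with the identity map standing in for the noise function).

    # Sketch of decryption with a round-trip check (same normalization as above).
    import math, random
    from fractions import Fraction

    def encrypt(s, n_bits, f, ratios):
        a, b, target, beta = Fraction(0), Fraction(1), Fraction(s, 2 ** n_bits), []
        for p, q in ratios:
            x = a + (b - a) * Fraction(p, p + q)
            if f(x) > target:
                beta.append(0); b = x
            else:
                beta.append(1); a = x
        return beta

    def decrypt(beta, n_bits, f, ratios):
        a, b = Fraction(0), Fraction(1)
        for (p, q), bit in zip(ratios, beta):
            x = a + (b - a) * Fraction(p, p + q)
            if bit == 0:
                b = x
            else:
                a = x
        return math.ceil(2 ** n_bits * f(a))   # the "largest integer before x" + 1 rule

    identity = lambda x: x
    toy_ratios = [(1, 2), (2, 1)] * 15         # 30 ratios keep f(b_k) - f(a_k) < 2**-16
    for s in random.sample(range(2 ** 16), 50):
        assert decrypt(encrypt(s, 16, identity, toy_ratios), 16, identity, toy_ratios) == s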
5 Scheme modifications
Application of the scheme for fixed-point arithmetic
It is easy to see that this scheme can be generalized to the set of rational numbers. The encryption and decryption algorithms are the same except for the final operation: the length of the segment [a_k, b_k] that determines the encrypted number is reduced by a factor of 2^l, where l is the number of binary decimal places. This number must be known at the stage of key generation, and the condition from step 2 takes the following form:

∏_i [max(p_i, q_i)/(p_i + q_i)] · max f′(x) < 1/2^(n+l).

After key generation the number l cannot be modified and is a part of the key. So the secret key K is now the set [l, a_0, …, a_9, (p_1, q_1), (p_2, q_2), …, (p_k, q_k)].
Strictly increasing hash function
This algorithm can also be modified to produce a strictly increasing hash function. It can be used, for example, in an encrypted database that stores two items for each datum: a ciphertext obtained with a cryptographically strong algorithm and a hash value returned by the hash function. This makes it possible both to be sure that the data will not be decrypted by an adversary (the first item is secure and the second cannot be decrypted at all) and, to some extent, to apply comparison operations to the encrypted data.
To begin, note that the output has the same bit length as the number of ratios (p_i, q_i) in the secret key. So, in order to obtain a hash function, it is enough to change the key generation procedure and, more precisely, its ratio-generation part.
Instead of checking the condition from step 2, whose satisfaction guaranteed that the data can be decrypted, we now only need to repeat the first step (the generation of a pair (p_i, q_i)) a fixed number of times. This number is, evidently, equal to the number of bits that the hash function returns.
Thus, the key generation algorithm for an order-preserving m-bit hash function is:
1. Select a strictly increasing noise function f(x). To do this, generate a_0, …, a_9 so that

a_0 + a_1 t + a_2 t^2 (a_3 + a_4 sin(a_5 + a_6 t) + a_7 cos(a_8 + a_9 t)) > 0

for ∀t ∈ (c; x_max), where c is a fixed constant.
2. Generate a random set of ratios (p_1, q_1), (p_2, q_2), …, (p_m, q_m).
3. The key is the set K = [a_0, …, a_9, (p_1, q_1), (p_2, q_2), …, (p_m, q_m)].
To avoid processing big numbers, for instance when we need to hash a large file, it is possible to split the input data into parts of acceptable size and calculate a hash for each of them. The resulting hash value of the whole file is their concatenation. This approach allows us to hash data of any predetermined size. There are thus three parameters that we can select arbitrarily depending on our purpose: s_1, the size of the processed parts; s_2, the hash size for each of them (s_2 < s_1); and s_3, the maximum file size. Obviously, the final hash is (s_2 · s_3 / s_1) bits long. Since the encryption algorithm remains the same, the running time of the hash function depends linearly on its output size (it is equal to the number of algorithm iterations). Therefore, it is not recommended to choose too large a value of s_2.
In order to process files smaller than the maximum size, they can be padded with zeros on the left; in this case, order is still preserved. Since this is a hash function, decryption no longer exists.
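A sketch of the hash construction for a single s_1-bit chunk is given below: the loop is the same as in encryption, but the key consists of exactly m random ratios and nothing guarantees invertibility. The identity map again stands in for the noise function, and the demo only checks that the order of the hash values follows the order of the inputs.

    # Sketch of the order-preserving hash for one chunk (illustrative stand-ins).
    import random
    from fractions import Fraction

    def op_hash(value, s1_bits, f, ratios):
        a, b = Fraction(0), Fraction(1)
        target, bits = Fraction(value, 2 ** s1_bits), 0
        for p, q in ratios:
            x = a + (b - a) * Fraction(p, p + q)
            if f(x) > target:
                b = x; bits = bits << 1          # append bit 0
            else:
                a = x; bits = (bits << 1) | 1    # append bit 1
        return bits                              # an s2 = len(ratios)-bit hash

    rng = random.Random(7)
    m_ratios = [(rng.randint(1, 9), rng.randint(1, 9)) for _ in range(16)]   # s2 = 16
    identity = lambda x: x
    samples = sorted(rng.sample(range(2 ** 32), 200))                        # s1 = 32
    hashes = [op_hash(v, 32, identity, m_ratios) for v in samples]
    assert all(h1 <= h2 for h1, h2 in zip(hashes, hashes[1:]))   # order is preserved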
Encryption security
As we have seen (Section 3), OPE schemes cannot satisfy the standard notions of security against chosen-plaintext attack. Different methods of cryptanalysis are considered to define the notion of order-preserving encryption security [2], [START_REF] Boldyreva | Order-preserving encryption revisited: Improved security analysis and alternative solutions[END_REF], [START_REF] Martinez | Security Analysis of Order Preserving Symmetric Cryptography[END_REF], [START_REF] Xiao | Security Analysis for Order Preserving Encryption Schemes[END_REF]. Generally, the security of such schemes rests on the requirement that the monotonic function the scheme is based on must be indistinguishable from a truly random monotonic function; this means that only access to the private key allows accurate decryption of the data. Let us check this property of our algorithm in practice. To do that, we encrypted all 16-bit numbers (from 0 to 65535) with the same random key and analyzed the results.
As the subject of analysis we chose the difference between the ciphertexts of neighboring integers. For example, if f(x) = 2186003864819 and f(x + 1) = 2186004033407, where f(x) is the encryption function, then the difference f(x + 1) - f(x) = 168588 is considered. One of the reasons for this choice is that the success of a chosen-plaintext attack by interpolation depends on these differences (see Fig. 2). As a result, we obtained the following data (see Fig. 3). In this chart the Y-axis displays the difference between two ciphertexts (higher values were rounded), and the X-axis shows how many such differences were found. As we can see, this chart resembles the right branch of the hyperbola y = 1/x. This is typical for randomly generated monotonic functions and indicates that the maximum available security of the algorithm was achieved.
The distribution of the differences over the plaintext range is also important (see Fig. 4). The Y-axis displays f(x + 1) - f(x), while the X-axis shows x (from 0 to 65535). We can see that the differences are distributed very irregularly. As this is a feature of secure encryption, we can claim that the proposed algorithm is cryptographically strong.
1 Here p and q are random natural numbers; obviously, γ + μ = 1. If the right part of the split interval is selected, β_1 = 1, and the selected interval is denoted [a_1, b_1].
Fig. 2. Chosen-plaintext attack using interpolation of values. The ciphertext for some b_1-bit plaintext x is approximated by the value x · 2^{b_2} / 2^{b_1}, where b_2 is the size of the ciphertext. The approximation in the other direction is computed similarly.
Fig. 3. Frequency distribution of the differences between ciphertexts.
Fig. 4. Distribution of the differences over the plaintext interval.
This research is performed in Novosibirsk State University under support of Ministry of Education and Science of Russia (contract no. 02.G25.31.0054). | 17,404 | [
"1001280",
"1001281",
"1001282"
] | [
"4744",
"4744",
"4744"
] |
01466220 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466220/file/978-3-319-24315-3_1_Chapter.pdf | Kamil Burda
email: [email protected]
Martin Nagy
email: [email protected]
Ivan Kotuliak
email: [email protected]
Reducing Keepalive Traffic in Software-Defined Mobile Networks with Port Control Protocol
Keywords: middleboxes, keepalives, Port Control Protocol, mobile networks, software-defined networking
User applications, such as VoIP, have problems traversing NAT gateways or firewalls. To mitigate these problems, applications send keepalive messages through the gateways. The interval of sending keepalives is often unnecessarily short, which increases the network load, especially in mobile networks. Port Control Protocol (PCP) allows the applications to traverse the gateways and to optimize the interval. This paper describes the deployment of PCP in software-defined networks (SDN) and proposes a method to measure keepalive traffic reduction in mobile networks using PCP. The proposed solution extends the battery life of mobile devices and reduces the traffic overhead in WCDMA networks.
Introduction
User applications that require long-term connections, such as Voice over IP (VoIP), Instant Messaging or online gaming, may have problems establishing connections if hosts running the applications are located behind network address translation (NAT) gateways or firewalls, hereinafter referred to as middleboxes.
For each connection, a middlebox contains a mapping entry that is manually configured or dynamically created when the connection is being established. In case of NAT gateways, the mapping entry usually consists of the following fields: internal IP address, external IP address, internal port, external port and mapping lifetime.
If a connection is idle for longer than the corresponding mapping lifetime, the middlebox blocks the connection without notifying the communicating hosts. To keep the connection alive, the application sends keepalive messages (such as empty TCP or UDP datagrams) toward the destination host. Because the application does not know the exact connection timeout, keepalives are sent in very short intervals, which increases the network load. The unnecessarily high amount of the keepalive traffic reduces battery lifetime on mobile devices, especially those connected to mobile networks, where each message sent imposes additional overhead in the form of signaling traffic.
This paper proposes a network architecture to deploy the Port Control Protocol (PCP) in the core of software-defined mobile networks and a method to measure the keepalive traffic reduction with PCP in WCDMA networks.
The rest of this paper is structured as follows. Section 2 briefly reviews existing NAT traversal and keepalive reduction methods. Section 3 describes the basics of the PCP protocol and the advantages of the deployment of PCP in SDN networks. Section 4 describes the architecture of the core network and its components. Section 5 describes the method to measure the battery life extension of mobile devices and signaling traffic reduction in WCDMA networks [START_REF] Haverinen | Energy Consumption of Always-On Applications in WCDMA Networks[END_REF][START_REF]Smartphones and a 3G Network, reducing the impact of smartphone-generated signaling traffic while[END_REF]. The final section provides concluding remarks and challenges for future work.
Related Work
Protocols such as Session Traversal Utilities for NAT (STUN) [START_REF] Wing | Session Traversal Utilities for NAT (STUN)[END_REF], Traversal Using Relays around NAT (TURN) [START_REF] Matthews | Traversal Using Relays around NAT (TURN): Relay Extensions to Session Traversal Utilities for NAT (STUN)[END_REF] or Interactive Connectivity Establishment (ICE) [START_REF] Rosenberg | Interactive Connectivity Establishment (ICE): A Methodology for Network Address Translator (NAT) Traversal for Offer/Answer Protocols[END_REF] can resolve NAT traversal issues for user applications. Additional methods for proper NAT traversal are defined for IPSec ESP [START_REF] Huttunen | UDP Encapsulation of IPsec ESP Packets[END_REF] and mobile IP [START_REF] Levkowetz | Mobile IP Traversal of Network Address Translation (NAT) Devices[END_REF].
A method proposed in [START_REF] Eronen | TCP Wake-Up: Reducing Keep-Alive Traffic in Mobile IPv4 and IPsec NAT Traversal[END_REF] aims to reduce the keepalive traffic in mobile IPv4 networks and in IPSec communication by replacing UDP keepalives with the socalled TCP wake-up messages, given the considerably greater mapping lifetime for TCP connections on NAT and firewall devices from popular vendors [START_REF] Haverinen | Energy Consumption of Always-On Applications in WCDMA Networks[END_REF]. The results of the experiments conducted suggest that the keepalive traffic reduction is significant in 2G (GSM) and 3G (WCDMA, HSDPA) networks, but not in IEEE 802.11 Wireless LAN [START_REF] Eronen | TCP Wake-Up: Reducing Keep-Alive Traffic in Mobile IPv4 and IPsec NAT Traversal[END_REF].
Port Control Protocol
PCP [START_REF] Boucadair | Port Control Protocol (PCP)[END_REF] allows IPv4 and IPv6 hosts to determine or explicitly request network address mapping, port mapping and mapping timeout (also called mapping lifetime) directly from middleboxes. From this information, a host behind a middlebox can establish communication with a host in an external network or in another internal network behind another middlebox and can optimize the interval of sending keepalives. PCP does not replace the function of proxy or rendezvous servers to establish connections between hosts in different internal networks. PCP requires that hosts run a PCP client and middleboxes run a PCP server [START_REF] Boucadair | Port Control Protocol (PCP)[END_REF].
Based on the existing research [START_REF] Eronen | TCP Wake-Up: Reducing Keep-Alive Traffic in Mobile IPv4 and IPsec NAT Traversal[END_REF], the reduction of the keepalive traffic in mobile networks can be considerable. PCP introduces a more universal approach that allows to optimize keepalive traffic for multiple transport protocols (any protocol with 16-bit port numbers) and other upper-layer protocols, such as ICMP or IPSec ESP [START_REF] Boucadair | Port Control Protocol (PCP)[END_REF].
PCP may be vulnerable to security attacks such as denial of service or mapping theft [START_REF] Boucadair | Port Control Protocol (PCP)[END_REF]. The security of PCP is currently under discussion [START_REF] Wing | Port Control Protocol[END_REF]. An RFC draft specifies an authentication mechanism to control access to middleboxes [START_REF] Reddy | Port Control Protocol (PCP) Authentication Mechanism[END_REF].
Port Control Protocol in Software-Defined Networks
Software-defined networking (SDN) [START_REF]Open Networking Foundation: Software-Defined Networking: The New Norm for Networks[END_REF][START_REF] Nadeau | SDN: Software Defined Networks[END_REF][START_REF] Nagy | Utilizing OpenFlow, SDN and NFV in GPRS Core Network[END_REF][START_REF] Skalný | Application of Software Defined Networking (SDN) in GPRS Network[END_REF]] is a novel approach to managing computer networks which separates the control and data planes of network devices to controllers and forwarders, respectively, and achieves greater network flexibility by allowing to program the network behavior. Existing networks are expected to migrate to SDN given the aforementioned advantages.
With SDN, a PCP server can run on a controller, thereby reducing the processing overhead on middleboxes, increasing vendor compatibility and avoiding the need to upgrade the middleboxes to support PCP server functionality. If multiple middleboxes are placed in an SDN network, mapping lifetime can be determined from the controller instead of every middlebox separately. There is an ongoing effort to support advanced firewall functionality in SDN networks by introducing new PCP message types [START_REF] Reddy | PCP Firewall Control in Managed Networks[END_REF].
Network Architecture
This section describes the architecture of the SDN-based mobile core network, which incorporates PCP to reduce the signaling traffic. The essential components of the architecture are shown in Fig. 1. For the implementation, OpenFlow [START_REF] Nagy | Utilizing OpenFlow, SDN and NFV in GPRS Core Network[END_REF][START_REF]Open Networking Foundation: OpenFlow Switch Specification[END_REF] is used as the communication protocol between the controller and the forwarders. An end host running a user application with a PCP client is located behind an existing access network (such as UTRAN in case of 3G networks). The access network connects to the core network via the edge forwarder. The edge forwarder forwards PCP requests to the controller, PCP responses from the controller back to the PCP client and other traffic further through the core network.
In the proposed architecture, the control and the data plane of a middlebox are decoupled. The middlebox data plane resides on another forwarder, placed between the core and the external network (the Internet). The middlebox data plane executes the rules installed by the control plane, such as overwriting IP addresses and transport protocol ports in packets in case of NAT.
The controller runs the middlebox control plane, which is responsible for maintaining mappings stored in a table. The handler accepts requests from the PCP server to create or remove a mapping and instructs the controller to add or remove the corresponding rules on the forwarder.
The PCP server running on the controller receives PCP requests, instructs the middlebox control plane to create a mapping for the client and sends PCP responses back to the client once the middlebox control plane successfully creates a mapping. The PCP server address is assumed to be the address of the default gateway, so PCP clients must use this address to communicate with the PCP server. Dynamic PCP server discovery options [START_REF] Boucadair | Port Control Protocol (PCP)[END_REF][START_REF] Boucadair | DHCP Options for the Port Control Protocol (PCP)[END_REF][START_REF] Penno | PCP Anycast Address[END_REF] are currently not considered.
In order to verify the proper traversal of packets behind middleboxes and the keepalive traffic reduction, a custom, simple NAT gateway is implemented in the network that supports only IPv4 addresses and TCP and UDP as the upper-layer protocols. A custom firewall, IPv6 or other upper-layer protocols are not implemented in the network, as the verification and evaluation method of the proposed solution is identical and would not affect the results.
The control plane of the NAT is responsible for creating NAT table entries from the configured pool of external IP addresses and ports. Each NAT table entry contains the following items: internal IP address, internal port, external address, external port, upper-layer protocol and mapping lifetime. The NAT data plane is represented as a set of flow tables and entries shown in Fig. 2. The design does not address the security of PCP. The authentication mechanism specified in [START_REF] Reddy | Port Control Protocol (PCP) Authentication Mechanism[END_REF] could be used to control the access to the controller running the PCP server.
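To make the controller-side behaviour concrete, the sketch below keeps a NAT mapping table and handles a PCP MAP request in plain Python. All names are ours, the external address is taken from a documentation prefix, and the step that would push the corresponding OpenFlow rewrite entries to the NAT forwarder is deliberately omitted.

    # Minimal controller-side sketch (illustrative names; OpenFlow rule installation
    # is left out).
    import time
    from dataclasses import dataclass

    MAP_LIFETIME = 3600   # seconds; cf. the mapping lifetime discussion in the evaluation

    @dataclass
    class Mapping:
        internal_ip: str
        internal_port: int
        external_ip: str
        external_port: int
        protocol: str
        expires_at: float

    class NatControlPlane:
        def __init__(self, external_ip, port_pool):
            self.external_ip = external_ip
            self.free_ports = list(port_pool)
            self.table = {}   # (internal_ip, internal_port, protocol) -> Mapping

        def handle_pcp_map(self, internal_ip, internal_port, protocol):
            """Called by the PCP server for a MAP request; returns the mapping
            echoed back in the PCP response."""
            key = (internal_ip, internal_port, protocol)
            mapping = self.table.get(key)
            if mapping is None:
                mapping = Mapping(internal_ip, internal_port, self.external_ip,
                                  self.free_ports.pop(), protocol,
                                  time.time() + MAP_LIFETIME)
                self.table[key] = mapping
                # here the controller would install the two rewrite flow entries
            else:
                mapping.expires_at = time.time() + MAP_LIFETIME   # renewal
            return mapping

    nat = NatControlPlane("198.51.100.1", range(20000, 20100))
    print(nat.handle_pcp_map("10.0.0.7", 5060, "UDP"))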
Evaluation
This section describes a method to measure the battery life extension and signaling traffic reduction based on the values of keepalive intervals to prove the feasibility of the deployment of PCP in WCDMA networks. The method is based on the measurements performed by Haverinen et al. [START_REF] Haverinen | Energy Consumption of Always-On Applications in WCDMA Networks[END_REF] and Signals Research Group, LLC [START_REF]Smartphones and a 3G Network, reducing the impact of smartphone-generated signaling traffic while[END_REF], both performed in WCDMA networks.
Requirements
No other network data, except keepalives, are sent over the network. This is done to isolate the useful network traffic that is usually unpredictable in practice, which would distort the results.
CELL PCH and CELL FACH Radio Resource Control (RRC) states are assumed to be enabled in the WCDMA network. When sending a keepalive, the mobile device uses the following RRC state transition: CELL FACH → CELL PCH → CELL FACH → CELL PCH → . . .
The WCDMA inactivity timers are assigned the values used in the first measurement in [START_REF] Haverinen | Energy Consumption of Always-On Applications in WCDMA Networks[END_REF]. In particular, the T2 inactivity timer (causing transition from CELL FACH to CELL PCH) is set to 2 seconds. Keepalives must be sent one at a time, until the mobile device re-enters the lower RRC state (with lower power consumption).
To quantify the battery life saving and signaling traffic reduction, reference values must be defined. For example, suppose that an application currently uses a keepalive interval of 20 seconds (such as IPSec ESP [START_REF] Huttunen | UDP Encapsulation of IPsec ESP Packets[END_REF]). If the keepalive interval is increased, the mobile device consumes that much less battery charge compared to the original (reference) keepalive interval. Likewise, the network and the device generate fewer signaling messages.
Battery power saving
Battery consumption figures. The first experiment in [START_REF] Haverinen | Energy Consumption of Always-On Applications in WCDMA Networks[END_REF] consisted of sending one keepalive at a time. In the experiment, the inactivity timers and the RRC state transitions were identical to those specified in the section 5.1. The results showed that the average current in the CELL FACH state is 120 mA (disregarding the negligible variance of the current due to the actual data transmission), and the cost of a single keepalive in the 3G WCDMA network ranged from 0.15 to 0.6 mAh.
Method. Let T be the time period over which measurement is performed. Over time period T , n ref keepalives are sent given the original (reference) keepalive interval t ref (i.e. the user application originally used the interval t ref ). Likewise, n keepalives are sent given the new keepalive interval t new . The number of keepalives sent can be computed as n = T /t, where 1/t is the number of keepalives sent per second.
The amount of battery consumption saved (in mAh) can be determined as follows:
reduction(t_new) = k_ref - k = (n_ref - n) · cost = cost · T · (1/t_ref - 1/t_new)    (1)
where k = n • cost is the total cost of keepalives over time T . In order to determine the battery power saving given the desired and reference keepalive intervals (t new and t ref , respectively), the battery capacity C of the mobile device (in mAh) must be known. The relative amount of battery life consumed by sending keepalives can be determined as k/C.
By increasing the keepalive interval to t new , the amount of the battery life saved, given the battery capacity C, can be determined as follows:
battery power saved = k_ref / C - k_new / C = reduction(t_new) / C    (2)
From equation (2), one can conclude that, by using a higher keepalive interval t_new, such a percentage of battery consumption is saved over time T. From the end-user perspective, an alternative measure may better indicate the power consumption reduction: how much longer the battery will last before recharging it. Suppose that the cost of a single keepalive (cost) and the average current while sending a single keepalive (Ī_keepalive) are known. The total time of battery life saved can then be computed as follows:
battery life saved = (n_ref - n) · cost / Ī_keepalive = reduction(t_new) / Ī_keepalive    (3)
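Equations (1)-(3) translate directly into a few lines of code. The sketch below plugs in the figures quoted in this section (keepalive cost of 0.15-0.6 mAh, 120 mA in the CELL FACH state, a 2550 mAh battery, t_ref = 20 s and t_new = 400 s) and reproduces the Galaxy S6 row of Table 1 as well as, up to rounding, the 13-52 minute battery-life figure.

    # Equations (1)-(3) as helper functions, evaluated with the paper's figures.
    def keepalive_cost_reduction(t_ref, t_new, T, cost):        # eq. (1), in mAh
        return cost * T * (1.0 / t_ref - 1.0 / t_new)

    def battery_power_saved(t_ref, t_new, T, cost, capacity):   # eq. (2), fraction
        return keepalive_cost_reduction(t_ref, t_new, T, cost) / capacity

    def battery_life_saved(t_ref, t_new, T, cost, i_keepalive): # eq. (3), hours
        return keepalive_cost_reduction(t_ref, t_new, T, cost) / i_keepalive

    for cost in (0.15, 0.6):                                    # mAh per keepalive
        saved = battery_power_saved(20, 400, 3600, cost, 2550)  # Galaxy S6, 2550 mAh
        extra = battery_life_saved(20, 400, 3600, cost, 120)    # 120 mA in CELL FACH
        print(f"cost={cost} mAh: {saved:.1%} of the battery, {extra * 60:.0f} min extra")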
Results. Fig. 3 shows the battery power saving with increasing keepalive interval, given the time period, cost of a single keepalive, battery capacity and the reference keepalive interval of 20, 40, 80 and 120 seconds, respectively. The percentage of the battery power saving increases significantly when the keepalive interval is increased by the first few tens of seconds from the reference interval. Above 400-600 seconds, the difference in the increase starts to be negligible.
Table 1 shows the percentage of the battery power saving for the chosen representative values of battery capacity, reference values and the keepalive interval of 400 seconds. If the application running on a smartphone with battery capacity of 2550 mAh (Samsung Galaxy S6) originally used the keepalive interval of 20 seconds, 1-4% of the battery life can be saved over 3600 seconds for the cost ranging from 0.15 to 0.6 mAh. For the reference interval of 40 seconds, the battery power saving is halved. The battery power saving proves to be significant for devices with relatively low battery capacity, such as smart watches (provided that they support WCDMA), and less significant for devices with higher battery capacity, such as tablets.
If the battery lifetime saving is considered, approx. 13-52 minutes of battery life can be saved for the keepalive cost ranging from 0.15 to 0.6 mAh, the average current of 120 mA (CELL FACH state) [START_REF] Haverinen | Energy Consumption of Always-On Applications in WCDMA Networks[END_REF], the reference keepalive interval of 20 seconds and the new keepalive interval of 400 seconds.
Signaling traffic reduction
Signaling traffic figures. In [START_REF]Smartphones and a 3G Network, reducing the impact of smartphone-generated signaling traffic while[END_REF], several measurements were performed in two 3G WCDMA networks, observing the number of signaling messages generated in the networks and the battery power consumption in mobile devices. In one of the measurements, the mobile devices sent keepalive messages to the network. In the observed networks, the mobile devices entered the CELL DCH state when sending a keepalive. According to the results, sending one keepalive causes 40-50 signaling messages to be exchanged between a mobile device and the network (referred to as "observed" messages), and estimated 20 signaling messages generated in the network not captured on the mobile device (referred to as "unobserved" messages).
Method. Let s be the number of signaling messages sent per a single keepalive. The total number of signaling messages sent over time T given keepalive interval t new is S = n • s. The reduction of signaling messages in the network with increased keepalive interval can then be computed as:
reduction(t_new) = S_ref - S = (n_ref - n) · s = (1/t_ref - 1/t_new) · s · T    (4)
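Equation (4) can likewise be written as a one-line helper; with s = 40 observed messages per keepalive, t_ref = 20 s and t_new = 1800 s it reproduces the 7120 figure of Table 2.

    # Equation (4) as a helper function.
    def signaling_reduction(t_ref, t_new, s, T):
        return (1.0 / t_ref - 1.0 / t_new) * s * T

    print(round(signaling_reduction(20, 1800, 40, 3600)))   # -> 7120, as in Table 2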
Results. As seen in Fig. 4, the reduction of the number of signaling messages grows rapidly up to the keepalive interval of approx. 400 seconds. The growth of the reduction starts to be negligible from approx. 1800 seconds, which can be considered an acceptable keepalive interval for WCDMA networks. Table 2 quantifies the results for reference keepalive intervals of 20 and 120 seconds. It should be noted that the reduction of the number of signaling messages was computed for one mobile device running a single application. Considering that hundreds of thousands of mobile devices are connected to a network, each running one or more always-on applications, the decrease in the network load on elements in the network core may prove to be significant.
Determining PCP mapping lifetime
From the perspective of a mobile device and its battery life, the keepalive interval of 400-600 seconds is suitable for most applications. When considering the amount of signaling traffic generated in a mobile network, the keepalive interval of approx. 1800 seconds is sufficient to greatly reduce the signaling traffic.
For mappings created by PCP requests with the MAP opcode (i.e. user applications function as servers), user applications must send PCP MAP requests at the interval of at least 1/2 of the mapping lifetime [START_REF] Boucadair | Port Control Protocol (PCP)[END_REF]. In order to sustain the interval of 1800 seconds, the mapping lifetime for PCP MAP mappings should be doubled, i.e. set to 3600 seconds. Beside PCP MAP requests, applications may still have to send keepalives to the destination host to maintain the endto-end connectivity. In order to keep the number of RRC state transitions to a minimum, applications should send PCP MAP requests to the PCP server and keepalives to the destination host at the same time.
For mappings created by PCP PEER requests, and given the relatively high keepalive interval of 1800 seconds suitable for WCDMA networks, it may be sufficient for the application to send the keepalives after 7/8 of the mapping lifetime. Therefore, the mapping lifetime for PCP PEER mappings could be approx. 2100 seconds.
Conclusions
This paper described the architecture for the deployment of the Port Control Protocol in software-defined networks. Using the SDN approach, this architecture separates the control and the data plane of middleboxes and allows to run the PCP server outside the middleboxes according to SDN principles. This improves vendor device compatibility and avoids the processing overhead imposed by running the PCP server.
With PCP deployed in the network, mobile devices connected to mobile networks can reduce the amount of keepalive traffic sent, which results in extended battery life of mobile devices. Additionally, the network throughput is increased due to signaling traffic reduction in access networks. The keepalive interval of approx. 1800 seconds proves to be suitable for most applications in WCDMA networks. The battery power saving by using higher keepalive intervals is more significant in devices with relatively small battery capacity, such as smartphones and smart watches. Given the recommended keepalive interval, PCP server should assign mapping lifetime of at least 3600 seconds for PCP MAP mappings and 2100 seconds for PCP PEER mappings.
Fig. 1. Architecture of the proposed network.
Fig. 2. Flow entries in the NAT forwarder.
Fig. 3. Battery power saving based on keepalive intervals relative to reference values.
Fig. 4. Number of signaling messages reduced based on keepalive intervals and reference values.
Table 1. Battery power saved for a mobile device connected to a WCDMA network given battery capacity and the following reference values: t_ref = 20 s, t_new = 400 s, T = 3600 s, cost: 0.15-0.6 mAh.

Battery capacity [mAh]              Battery power saved [%]
300 (Samsung Gear S smart watch)    8.5-34.2
2550 (Samsung Galaxy S6 phone)      1-4
7340 (iPad Air 2 tablet)            0.35-1.4
Table 2. Number of signaling messages reduced given the following reference values: t_new = 1800 s, T = 3600 s.

Signaling messages   Reference keepalive   Signaling messages   Reference keepalive   Signaling messages
per keepalive        interval [s]          reduced              interval [s]          reduced
40 (observed)        20                    7120                 120                   1120
50 (observed)        20                    8900                 120                   1400
20 (unobserved)      20                    3560                 120                   560
Acknowledgments. This work is a result of the Research and Development Operational Program for the projects Support of Center of Excellence for Smart Technologies, Systems and Services, ITMS 26240120005 and for the projects Support of Center of Excellence for Smart Technologies, Systems and Services II, ITMS 26240120029, co-funded by ERDF. This project was also partially supported by the Tatra banka Foundation under the contract No. 2012et011. | 23,309 | [
"1001283",
"1001284",
"1001285"
] | [
"259428",
"259428",
"259428"
] |
01466221 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466221/file/978-3-319-24315-3_20_Chapter.pdf | Shuichiro Yamamoto
email: [email protected]
An approach for evaluating softgoals using weight
Keywords: Non-Functional Requirements, NFR framework, Softgoal Interdependency Graphs, softgoal weight
The resolution of conflicts among non-functional requirements is a difficult problem during the analysis of non-functional requirements. To mitigate this problem, the weighted softgoal is proposed, based on the Softgoal Interdependency Graphs (SIG) that help engineers resolve conflicts among non-functional requirements. Evaluation results of applying the weighted SIG to develop non-functional requirements and to choose alternative design decisions are also shown.
Introduction
Non-functional requirements are used to define qualities and to validate that system architectures achieve the quality requirements. The NFR framework traditionally focuses on qualitative evaluation of softgoals, and different kinds of softgoals are evaluated separately. For example, it is difficult to evaluate security and safety softgoals in an integrated way, so conflicts between safety and security softgoals have to be resolved implicitly. This paper discusses the effectiveness of a softgoal weight extension based on the SIG diagrams of the NFR framework. Weight values are assigned to softgoal decomposition and contribution links. A new main top goal is introduced to resolve conflicts between security and safety softgoals that are decomposed from the main softgoal. The weights assigned to each link clearly define the priority between the decomposed softgoals.
Section 2 describes related work on non-functional requirements approaches based on softgoals. Section 3 proposes an approach to introduce quantitative weights into SIG diagrams. Section 4 describes examples of applying the proposed approach to evaluate operationalization softgoals for simple cases. In Section 5, we discuss the effectiveness of the proposed approach. Section 6 concludes the paper and shows future work.
Related work
The NFR framework [START_REF] Chung | Non-Functional Requirements in Software Engineering[END_REF] is a Goal Oriented Requirements Engineering method that can be used to evaluate architectures by defining levels of safety and security requirements. The SIG (Softgoal Interdependency Graph) is used to represent security and safety goals. The NFR softgoals are non-functional requirements softgoals, operationalization softgoals, and claim softgoals. First, constraints of the target system are clarified by non-functional requirements softgoals. Softgoals are then decomposed into sub softgoals to develop the SIG. NFR softgoals are allocated to operationalization softgoals that describe target system functions.
In SIG the design decisions for the target system are represented by operationalization softgoals. The operationalization softgoals are validated for satisfying parent soft goals.
For analysing functional requirements, alternative requirements are selected to satisfy the non-functional softgoals. If a conflict between non-functional softgoals occurs, the conflict should be resolved by using the criterion of whether the non-functional requirements are satisfied.
To evaluate the quality of an architecture, the following methods have been proposed.
1) Check list based method
The operationally critical threat, asset, and vulnerability evaluation (OCTAVE) for security provides the check list to evaluate vulnerability [START_REF]OCTAVE: Operationally Critical Threat, Asset, and Vulnerability Evaluation[END_REF].
However, the standard check list does not cover every safety and security requirement. The check list also has the problem that it cannot be applied to resolve conflicts and interactions between safety and security requirements.
2) Scenario based method. Scenarios can be developed to describe critical factors that significantly impact architectures. The scenarios are used to identify important factors that affect high-priority requirements. Utility trees are used to define scenarios in ATAM (Architecture Tradeoff Analysis Method) [START_REF] Bass | Software Architecture in Practice[END_REF]. ATAM provides a quality trade-off analysis method to analyze safety and security requirements.
3) Subramanian method. Subramanian proposed a method in which the NFR framework is applied to analyse safety and security [START_REF] Subramanian | Quantitative Assessment of Safety and Security of System Architectures for Cyberphysical Systems Using the NFR Approach[END_REF] as follows. Safety and security are decomposed by using non-functional softgoals. The target architecture is decomposed by operationalization softgoals. The contribution relationships from operationalization softgoals to non-functional softgoals are defined. By using propagation rules, the labels of softgoals are investigated to evaluate the safety and security of the target architecture. However, their approach does not consider quantitative relationships between the values of child softgoals, and it evaluates safety and security softgoals independently.
To analyse system faults, FTA (Fault Tree Analysis) has been used [START_REF] Leveson | Safeware -System Safety and Computers[END_REF]. Although the fault tree of FTA can be considered an AND/OR goal graph, the nodes of the fault tree represent fault events and logic gates. An upper event results from a combination of lower events through a logic gate. The lowest events are primary events that require no further logical decomposition.
Softgoal weight
When introducing weights into the SIG, both nodes and relationships are candidates for carrying the weights. Names are assigned to SIG nodes, but SIG relationships do not have names. Therefore, we add weights as attributes of node names. This approach is convenient because it does not affect the SIG grammar syntax. Attributes are not assigned to SIG relationships, because attributes of the relationship between softgoals can be represented by node attributes.
In the NFR framework, the achievement of softgoals is shown by a check symbol for each softgoal. Fig. 1 shows an example portion of the NFR framework used to assure the safety of elevator control. The buffer device prevents the car from colliding with the ground when the car is going down. This supports the claim that the downward movement of the elevator is safe.
AND Decomposition weight
If a softgoal is decomposed by the AND relationship, then the AND decomposition weight label <W_1, …, W_k> is appended to the parent softgoal name, where k is the number of sub softgoals and the W_i are defined to satisfy Σ_{i=1..k} W_i = 1.
OR Decomposition weight
If a softgoal is decomposed by the OR relationship, then the OR decomposition weight label <max(W_1, …, W_k)> is appended to the parent softgoal name, where W_i is the weight of the i-th sub softgoal.
Operationalization weight
If a softgoal is related to operationalization softgoals, then the weight ratio label <W_1, …, W_k> is appended to the parent softgoal name, where k is the number of operationalization softgoals related to the softgoal and the W_i are defined to satisfy Σ_{i=1..k} W_i = 1.
Contribution weight
There are positive and negative contributions in SIGs. The weights of positive and negative contributions are +N and -N, respectively, where N is either 1 or 2; 1 and 2 mean weak and strong contribution, respectively. Positive and negative contributions are represented by the style of the relation lines: solid and dotted lines show positive and negative contributions, respectively.
Weight Propagation Rules
There are two rules for operationalization and decomposition. The operationalization propagation rule is defined as follows.
Let a parent softgoal be contributed to by k operationalization softgoals, and let <P> be the weight of the softgoal. Let <Q_1, …, Q_k> be the operationalization weight ratio of the softgoal, let <R_i> be the weight of the i-th operationalization softgoal, and let C_i be the contribution weight of that operationalization softgoal to the softgoal. Then the weight value P of the softgoal is calculated by the following equation:

P = Σ_{i=1..k} Q_i · R_i · C_i, where Σ_{i=1..k} Q_i = 1.

Fig. 2 shows an example of the operationalization propagation. Suppose the weights of the operationalization softgoals are <Q>, <R>, <S> and the weight ratio is <1/3, 1/3, 1/3>; then the weight P of the softgoal is Q/3 + R/3 - S/3, where Q and R are positive contributions and S is a negative contribution.
Fig. 2. Operationalization propagation
In the same way, the decomposition propagation rule can be defined as follows. Let a parent softgoal be decomposed into k sub softgoals by AND decomposition, let <P> be the weight of the softgoal, let <Q_1, …, Q_k> be the AND decomposition weight ratio of the softgoal, and let <R_i> be the weight of the i-th sub softgoal. Then the weight value P of the softgoal is calculated by the following equation:
P = Σ_{i=1..k} Q_i · R_i, where Σ_{i=1..k} Q_i = 1.

Fig. 3 shows an example of the AND decomposition propagation. Suppose the weights of the sub softgoals are <Q> and <R> and the decomposition weight ratio is <1/2, 1/2>; then the weight P of the softgoal is Q/2 + R/2.
Fig. 3. AND decomposition propagation example
In the case of OR decomposition, P is calculated by the following equation: P = max(R_1, …, R_k). Fig. 4 shows an example of the OR decomposition propagation. Suppose the weights of the sub softgoals are <Q> and <R>; then the weight P of the softgoal is max(Q, R).
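The three propagation rules can be captured by the small helper functions below. The function names and the sample weight values are ours; the example reproduces the Q/3 + R/3 - S/3 result of the Fig. 2 scenario with contribution weights of +1, +1 and -1.

    # The three propagation rules as helper functions (illustrative names/values).
    def propagate_operationalization(ratios, weights, contributions):
        # P = sum of Q_i * R_i * C_i, with the Q_i summing to 1
        return sum(q * r * c for q, r, c in zip(ratios, weights, contributions))

    def propagate_and(ratios, weights):
        # P = sum of Q_i * R_i, with the Q_i summing to 1
        return sum(q * r for q, r in zip(ratios, weights))

    def propagate_or(weights):
        # P = max(R_1, ..., R_k)
        return max(weights)

    Q, R, S = 0.9, 0.6, 0.3                       # sample operationalization weights
    p = propagate_operationalization([1/3, 1/3, 1/3], [Q, R, S], [+1, +1, -1])
    assert abs(p - (Q/3 + R/3 - S/3)) < 1e-12     # matches the Fig. 2 example
    print(p)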
Examples of weighted SIG

Credit card account
The credit card system shall manage customer accounts. The NFRs of the customer accounts consist of good performance, security, and accessibility. The weighted SIG for the customer accounts of a credit card system is shown in Fig. 5. The main NFR is decomposed into the above three softgoals. The top softgoal of the figure integrates these softgoals and also defines that they have the same priority by using the weight attribute clause <1/3, 1/3, 1/3>. The impact of the operationalization softgoals on achieving the above NFRs can be evaluated by adding operationalization softgoals to the weighted SIG. An example of the interrelationship between NFR and operationalization softgoals is shown in Fig. 6.
Fig. 6. Evaluation of the decision impact
There are 8 operationalization softgoals. The Authorize access to account information softgoal is decomposed into the Validate access against eligibility rules, Identify users, and Authenticate user access softgoals by AND decomposition. The Authenticate user access softgoal is further decomposed into the Compare signatures and Additional ID softgoals by OR decomposition. The Uncompressed format operationalization has a positive and a negative contribution to the Space and Response time softgoals, respectively. The Indexing operationalization has a positive contribution to the Response time softgoal. The Validate access against eligibility rules operationalization has a negative and a positive contribution to the Response time and Accurate softgoals, respectively. The Additional ID operationalization has a negative contribution to the Accessibility softgoal.
The impact of the operationalization can be evaluated as follows.
(-1/2 + (1/3 + 1/3 - 1/3)/2)/3 + ((0 + 1/2)/3 + 0 + 0)/3 + (-1)/3
= (-1/2 + 1/6)/3 + 1/18 - 1/3
= -1/9 - 5/18
= -7/18
Therefore the operationalization is not a good decision in total. It would be difficult to evaluate the total impact of the operationalization without the top main softgoal.
4.2 Alternative design decision

Fig. 7 shows the comparison of alternative operationalizations for managing the system data. The top NFR softgoal is decomposed into comprehensibility, modifiability, performance, and reusability softgoals. Shared data and abstract data type are the two operationalization alternatives. The total impact value of the quality requirements for Shared data is calculated as follows.
(1/2 - 1/2)/6 + (-1/3 - 1/3 + (1/2 + 0)/3)/3 + (1/2 + 0)/3 - 1/6
= 0 + (-2/3 + 1/6)/3 + 1/6 - 1/6
= -1/6
The total impact value of the main non-functional requirements for Abstract Data Type is calculated as follows:
1/6 + (-1/3 + 1/3 + 0)/3 - (1/2)/3 + 1/6
= 1/6 + 0 - 1/6 + 1/6
= 1/6
The result shows that Abstract data type is better than shared data.
It is worth remarking that contribution weights are not assigned to the bottom-level non-functional requirement softgoals in Fig. 7. Because the Shared data and Abstract data type softgoals are alternative operationalization softgoals, the bottom-level non-functional requirement softgoals have no weight value list. The impact evaluation can also be represented in tabular form. Table 1 shows the tabular evaluation of the Abstract Data Type solution for the SIG diagram. The column values of the Abstract Data Type show the contribution values for the NFR softgoals. The top-left column corresponds to the main softgoal, and the value in the next row of the same column is the total evaluation value for the selected alternative solution. The second column shows the decomposition coefficient values.
The table shows the coefficient vector <1/12, 1/12, 1/6, 1/6, 1/6, 1/6, 1/4, 1/4, 1/6> for the SIG decomposition. The sum of the element-wise products of the contribution vector and the coefficient vector gives the evaluation value for the selected alternative solution.
Discussion
Effectiveness of the proposed approach
As the examples showed, the weighted SIG approach is useful for analyzing the satisficing relationship between non-functional softgoals and operationalization softgoals, which shows the effectiveness of the weight propagation method. Although the evaluation was only executed for small examples, it is clear that the same results can be derived for other applications. Conflicts among different quality characteristics can be resolved by using the decomposition weight lists defined on the decomposition links of SIGs. The mechanism is generic and widely applicable to the quantitative evaluation of the validity of various architectures. The proposed approach is applicable to evaluating not only software architectures, but also business and technology architectures [START_REF] Josely | TOGAF ® Version 9.1 A Pocket Guide[END_REF][START_REF] Josely | ArchiMate ® 2.0, A Pocket Guide, The Open Group[END_REF].
Limitation
This paper only examines the effectiveness of the proposed method on simple example SIG diagrams. It is necessary to show the effectiveness of the method by evaluating a larger number of applications. Moreover, this paper examines the weighted SIG approach qualitatively; quantitative evaluations of the proposed method are also necessary.
Conclusion
This paper introduced the softgoal weight for evaluating NFRs. Evaluation examples of the approach were also shown for quantitatively validating the quality satisficing levels of solutions represented by operationalizing softgoals. The example evaluations showed the effectiveness of the approach. Future work includes further experimental evaluation of the proposed approach and comparative analysis of different quantitative extensions to the NFR framework. Claim softgoals are not discussed in this paper; it is also necessary to consider the effect of introducing weights for claim softgoals.
Fig. 1. A portion of the NFR framework to assure safety of elevator control
Fig. 4. OR decomposition propagation example
Fig. 5. Decomposition of a main NFR with weights
Fig. 7. Evaluating the impact of solution alternatives on the integrated NFR
Table 1. Tabular evaluation of the Abstract Data Type solution. The table lists the softgoal hierarchy with its decomposition weight lists (the main NFR with <1/6, 1/3, 1/3, 1/6>, Comprehensibility with <1/2, 1/2>, Modifiability with <1/3, 1/3, 1/3>, Performance with <1/2, 1/2>, Extensibility with <1/2, 1/2>), together with one column of contribution values for each of the Shared Data and Abstract Data Type alternatives.
Acknowledgment
This work was supported by KAKENHI (24220001). | 16,231 | [
"993472"
] | [
"472208"
] |
01466222 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466222/file/978-3-319-24315-3_21_Chapter.pdf | Chunqing Wu
email: [email protected]
Peige Ren
email: [email protected]
Xiaofeng Wang
email: [email protected]
Hao Sun
Fen Xu
Baokang Zhao
email: [email protected]
An Efficient Unsavory Data Detection Method for Internet Big Data
Keywords: high-dimensional feature space, principal component analysis, multi-dimensional index, semantics-based similarity search
Introduction
In recent years, with the coming of the era of Internet big data, the volume and diversity of Internet data objects in cyberspace are growing rapidly. Meanwhile, more and more unsavory data objects are emerging, such as malware, violent videos, subversive remarks, pornographic pictures and so on [START_REF] Fedorchenko | Integrated Repository of Security Information for Network Security Evaluation[END_REF][START_REF] Shahzad | Comparative Analysis of Voting Schemes for Ensemble-based Malware Detection[END_REF][START_REF] Skovoroda | Securing mobile devices: malware mitigation methods[END_REF]. Unsavory data harm our society and network security, so efficiently detecting unsavory data objects in Internet big data is of growing importance. However, traditional exact-matching-based data detection methods cannot identify the inner semantic information of Internet data objects and cannot realize intelligent data detection.
To realize the intelligent semantics-based data detection, we need to extract the features of internet data collection to construct a high-dimensional feature space [START_REF] Zhan | A Convergent Solution to Matrix Bidirectional Projection Based Feature Extraction with Application to Face Recognition[END_REF],
and the data objects are expressed as high-dimensional points in the feature space, so we can discover the data objects that are semantically similar to a given query object (unsavory data) based on the distances between the high-dimensional points. However, the efficiency of semantics-based similarity search in the feature space is sensitive to the dimensionality of the space; when the dimensionality is too high, similarity search can become too slow to meet practical needs.
On the other hand, when searching for the data objects that are semantically similar to a given query point in the feature space, multi-dimensional indexes [START_REF] Bohm | Searching in High-Dimensional Spaces: Index Structures for Improving the Performance of Multimedia Databases[END_REF] can prune away the data objects that are semantically irrelevant to the query point (i.e., far away from it), reducing the search region and the number of search paths and thus increasing the efficiency of similarity search.
However, existing multi-dimensional indexes have several shortcomings when processing Internet big data. Firstly, they are affected by the dimensionality of the feature space; when the dimensionality is very high, their efficiency can become worse than a sequential scan. Secondly, most existing multi-dimensional indexes were proposed for particular situations. For instance, the Pyramid-Technique [START_REF] Zhang | Making the Pyramid Technique Robust To Query Types and Workloads[END_REF] is efficient for uniformly distributed data sets but inefficient when the data set is irregularly distributed, while the iDistance method [START_REF] Jagadish | iDistance Techniques[END_REF] is efficient for kNN queries but cannot carry out range queries. Thirdly, for semantics-based similarity search in a feature space, the semantic information of the data set is usually embedded in a lower-dimensional subspace, so the original high-dimensional feature space can be compressed; besides, there are many correlated features, noise, and redundant information in the original feature space, which impair the efficiency of semantics-based similarity search.
To realize intelligent and efficient unsavory data detection for Internet big data, we propose the i-Tree method, a semantics-based data detection method. The method first utilizes PCA [START_REF] Zhan | Robust local tangent space alignment via iterative weighted PCA[END_REF] to reduce the dimensionality of the original high-dimensional feature space, eliminating the ill effects of the "curse of dimensionality" while diminishing redundancy and noise interference. Secondly, we adopt a multi-dimensional index that is robust for arbitrarily distributed data sets in the feature space; the index effectively divides, organizes, and maps multi-dimensional data objects into one-dimensional values. Finally, to validate our method, we implement a similarity search algorithm on top of it and compare it with other classic methods. Our method avoids the "curse of dimensionality" and adapts to variously distributed data sets, which can provide inspiration for efficient unsavory data detection for Internet big data.
The rest of the paper is organized as follows. Section 2 introduces the related technologies and methods. Section 3 presents the proposed semantics-based unsavory data detection method for Internet big data based on PCA and multi-dimensional indexes. Section 4 reports the experimental results of our method. Finally, Section 5 concludes the paper.
Related Work
Principal Component Analysis
Principal component analysis (PCA) is widely used in analyzing multi-dimensional data set, which can reduce the high dimensionality of original feature space to a lower intrinsic dimensionality, and can realize redundancy removal, noise elimination, data compression, feature extraction, etc. The PCA is widely employed in many actual applications with linear models, such as face recognition, image processing, sex determination, time series prediction, pattern recognition, communications, etc.
The basic idea of PCA is representing the distribution of original date set as precisely as possible using a set of features that containing more amount of information, in other words, it computes an orthogonal subspace with lower dimensionality to represent the original high-dimensional data set.
For an N-dimensional data set X containing M data objects expressed as N-dimensional vectors x_i ∈ R^N (i = 1, 2, …, M), let m denote the mean vector m = (1/M) Σ_{i=1}^{M} x_i; the covariance matrix can then be represented as S = (1/M) Σ_{i=1}^{M} (x_i - m)(x_i - m)^T ∈ R^{N×N}. Let Z = [x_1 - m, x_2 - m, …, x_M - m] ∈ R^{N×M}; then S = (1/M) Z Z^T ∈ R^{N×N}.
The optimal projection vectors of PCA are a set of orthonormal vectors (u_1, u_2, …, u_d) for which the evaluation function J(u) = u^T S u attains its maximal value, so that the variance retained in the projection is maximal. In fact, u_1, u_2, …, u_d are the orthonormal eigenvectors corresponding to the d largest eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_d of S, and the vector u_i is also called the i-th principal component. The evaluation function of PCA can also be written as J(W) = tr(W^T S W), where the optimal projection matrix is W_opt = arg max_W J(W) = (u_1, u_2, …, u_d).
The contribution rate (energy) of the k-th principal component u_k can be defined as η_k = λ_k / (λ_1 + λ_2 + … + λ_N), that is, the proportion of the k-th principal-component variance in the sum of all principal-component variances. Owing to λ_1 ≥ λ_2 ≥ … ≥ λ_N, the contribution rate of an earlier principal component is greater than that of a later one. When the cumulative contribution rate of the d leading principal components, Σ_{k=1}^{d} η_k, is large enough (e.g., ≥ 90%), we can consider that these d principal components contain almost all the useful information of the original features.
When d ≪ N, we achieve the goal of dimensionality reduction of the original feature space, and we can construct a lower-dimensional feature space using the eigenvectors u_1, u_2, …, u_d.
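As a concrete illustration of this step, the following minimal sketch (Python with NumPy; the function name, the default 90% energy threshold, and the synthetic data in the usage example are our own assumptions rather than details from the paper) keeps the d leading eigenvectors whose cumulative contribution rate reaches the chosen energy level and projects the data onto them.

import numpy as np

def pca_reduce(X, energy=0.90):
    # Project rows of X (M samples x N features) onto the d leading
    # principal components whose cumulative contribution rate >= energy.
    mean = X.mean(axis=0)
    Z = X - mean                           # centered data, M x N
    S = (Z.T @ Z) / X.shape[0]             # N x N covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]      # re-sort in descending order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    rates = eigvals / eigvals.sum()        # contribution rate of each component
    d = int(np.searchsorted(np.cumsum(rates), energy)) + 1
    W = eigvecs[:, :d]                     # N x d projection matrix
    return Z @ W, W, mean                  # reduced data, basis, mean vector

# usage: reduce a synthetic 128-dimensional data set
X = np.random.rand(1000, 128)
Y, W, mean = pca_reduce(X, energy=0.90)
print(Y.shape)                             # (1000, d) with d <= 128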
Multi-dimensional Indexes
The multi-dimensional indexes can efficiently divide and organize the data objects in feature space, making sure data objects that close to each other are likely to be stored in the same page, so the useless data zones can be pruned away in advance for processing similarity queries. So far, a series of multi-dimensional indexes have been proposed, the typical ones include Pyramid-Technique, iDistance, and so on.
The partitioning strategies of multi-dimensional indexes can be divided into space-based partitioning and data-based partitioning. The Pyramid-Technique is based on the space-based partitioning strategy. It first divides the d-dimensional feature space into 2d subspaces shaped like pyramids that share the center point of the feature space as their common top, and then every subspace is cut into slices parallel to the pyramid's basis to form data pages. The technique defines a pyramid number for each subspace. For a d-dimensional data object, the technique determines the pyramid number i in which the data object is located and computes the height h of the data object with respect to the pyramid top, so the one-dimensional mapping value of the data object is obtained by adding the pyramid number i and the height h. The partitioning strategy of the iDistance method can be either space-based or data-based. iDistance first divides the feature space into subspaces, equally or according to the distribution of the data objects, and determines a subspace number i for each subspace; secondly, it selects a reference point for each subspace and computes the distance d of a given data object p to its nearest reference point; finally, the data object is mapped into a one-dimensional value y based on the formula y = i × c + d, where c is a constant that ensures data objects in different subspaces are mapped into different one-dimensional intervals. Finally, the iDistance method uses a B+-tree to index the resulting one-dimensional space.
The Overview of Our Method
To realize intelligent semantics-based unsavory data detection, we proposed the i-Tree method, and our method consists of the following three phases:
Dimensionality reduction of feature space based on PCA;
Adaptive multi-dimensional index for data distribution;
Semantics-based similarity search.
Dimensionality Reduction of Feature Space based on PCA
The dimensionality of the feature space of Internet big data is usually so high that it easily causes the "curse of dimensionality". In this section we reduce the dimensionality of the original feature space using the PCA method, eliminating the ill effects of the "curse of dimensionality" as well as redundant and noisy information.
Firstly we use the PCA to compute the features of a new feature space, the features (vectors) of the new space are in the direction of the largest variance of the original internet data. Then we use the new features to construct a lower-dimensional feature space, and the internet data set are expressed in this new feature space by their feature coefficients (weights). A user query is also projected into the feature space generated by PCA, and we can find semantics-similar internet data by searching the internet data near it; if the query is not projected into the feature space, we can conclude that there is no semantics-similar internet data to the query.
By means of the PCA method, we can realize the dimensionality reduction of the original feature space, which can eliminate the impact of "curse of dimensionality", remove the noisy and redundancy information during the similarity search and reduce the complexity of computation and storage for further data processing.
Adaptive Multi-dimensional Index for Data Distribution
In this section we divide and manage the data objects in feature space based on the idea of multi-dimensional indexes. For the internet data objects irregularly distributed in feature space, we proposed an adaptive multi-dimensional index. Our index can be realized by the following three steps: 1, partitioning the data set according to the data distribution to form a series of data clusters; 2, transforming the data clusters into regular-shaped data subspaces; 3, mapping the high-dimensional data objects into one-dimensional values and index them using a B+-tree.
Firstly, we partition the data set according to their distribution to form data clusters. The data objects distribute irregularly in feature space, usually semantics-similar data objects gather together. So we here employ the data-based partitioning strategy to partition the feature space. Specifically, we utilize the K-means clustering algorithm to cluster the data objects into a series of data clusters, the data objects in a same cluster are near to each other, having similar semantic information.
Secondly, we transform the data clusters into regular-shaped subspaces. To be able to apply the Pyramid-Technique to each data cluster, we transform the clusters into unit hyper-cube-shaped subspaces and move the cluster centers to the centers of the unit subspaces. For the data objects in each cluster, this transformation is a one-to-one mapping [START_REF] Zhang | Making the Pyramid Technique Robust To Query Types and Workloads[END_REF], so we can perform similarity search based on the Pyramid-Technique. Figure 1 shows the process of data clustering and transforming. Thirdly, we map the high-dimensional data objects in each subspace into one-dimensional values and index them. For each hyper-cube-shaped subspace, we utilize the Pyramid-Technique to map the data objects. We first number each subspace with a subspace number i, and then map the data objects in each subspace respectively. For an m-dimensional subspace C_i, we partition the subspace into 2m hyper-pyramids with the subspace center as their common top and number the pyramids counterclockwise with pyramid number j. For a high-dimensional data object v in pyramid j of subspace i, we compute its height h_v (to the pyramid top) and map v into a one-dimensional value p_v = i + j + (0.5 - h_v). Using this method, we can map all data objects in the feature space into a one-dimensional space.
Finally, we index the one-dimensional space using a B+-tree; the high-dimensional data objects and the corresponding one-dimensional keys are stored in the data pages of the B+-tree. The mapping of the data objects is shown in Figure 2.
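A minimal sketch of this mapping step is given below (Python/NumPy; the rule for choosing the pyramid, the interpretation of the height h_v as the deviation along the dominant dimension, and the spacing factor that keeps different clusters in disjoint key ranges are our own assumptions, since the paper does not spell them out).

import numpy as np

def pyramid_key(v, center, cluster_id, dims):
    # v and center lie in the unit hypercube of cluster `cluster_id`
    dev = np.asarray(v) - np.asarray(center)
    jmax = int(np.argmax(np.abs(dev)))            # dominant dimension
    j = jmax if dev[jmax] >= 0 else jmax + dims   # one of the 2*dims pyramids
    h = abs(dev[jmax])                            # distance to the pyramid top along that dimension (height h_v)
    # keep clusters in disjoint key ranges (assumed spacing of 2*dims + 1 per cluster)
    return cluster_id * (2 * dims + 1) + j + (0.5 - h)

# usage: a point in cluster 3 of a 4-dimensional subspace
print(pyramid_key([0.9, 0.4, 0.5, 0.6], [0.5, 0.5, 0.5, 0.5], cluster_id=3, dims=4))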
Semantics-based Similarity Search
In the feature space, the semantics-similar data objects are near to each other, so we can utilize the distance information to discover data objects that are semantics-similar to a given query q. In this section, we mainly study the range query.
The range query is a very popular similarity search operation. In this paper, the range query is realized as follows: first, extract the features of the query and express it as a multi-dimensional point in the feature space; second, determine the search spaces and abandon the spaces that do not intersect the query range; finally, scan the data objects in the search spaces to find the correct answers. The range query can be realized by the following algorithm:
Algorithm 1 Semantics-based range search
1. Read the range query RangeQuery (D, q, r, M).
2. Map the query range to one-dimensional space.
3. Determine which subspaces are affected with the range query according to the intersected zone in one-dimensional space.
4. Determine the searching spaces by reversely mapping the intersected zone to multi-dimensional feature space.
5. Scan the data objects in the searching spaces, find the final answers to the query q.
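The following simplified sketch mirrors the pruning idea of Algorithm 1 in Python (class and variable names are ours; for brevity it keeps each cluster's points in a sorted Python list instead of a real B+-tree, and it prunes whole clusters by a bounding-radius test rather than by reverse-mapping one-dimensional key intervals).

import numpy as np
from bisect import insort

class ITreeIndex:
    # points are grouped by cluster and kept sorted by their 1-D pyramid key
    # (a stand-in for the B+-tree used in the paper)
    def __init__(self, centers):
        self.centers = np.asarray(centers, dtype=float)
        self.buckets = [[] for _ in centers]          # sorted (key, point) lists
        self.radius = [0.0 for _ in centers]          # bounding radius per cluster

    def insert(self, point, key):
        p = np.asarray(point, dtype=float)
        c = int(np.argmin(np.linalg.norm(self.centers - p, axis=1)))
        insort(self.buckets[c], (key, tuple(p)))
        self.radius[c] = max(self.radius[c], float(np.linalg.norm(p - self.centers[c])))

    def range_query(self, q, r):
        q = np.asarray(q, dtype=float)
        answers = []
        for c, center in enumerate(self.centers):
            if np.linalg.norm(q - center) > r + self.radius[c]:
                continue                              # prune this cluster entirely
            answers += [p for _, p in self.buckets[c]
                        if np.linalg.norm(np.asarray(p) - q) <= r]
        return answers

A full implementation would additionally restrict the scan inside each surviving cluster to the key interval obtained by mapping the query range, as steps 2-4 of Algorithm 1 describe.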
Performance Evaluation
In this section, we evaluate the effectiveness and efficiency of our method by analyzing the experimental results. We implement the range query based on our method in C, and chose the sequential scan and Pyramid-Technique as the reference algorithms. We choose a computer with Intel(R) Core (TM) 2 Quad CPU Q8300 2.5GHz and 4GB RAM, and the operating system is CentOS 5. For each experiment, we run 20 times and computed the average results as the final experimental results. For the input data set, we generate synthetically a series of clustered data sets of different data sizes and different dimensionality. The dimensionalities of the data sets are respectively 16, 32, 64 and 128, and the data size of them varies from 100000 to 2000000.
We compare the response time of our method with the two reference algorithms in the same conditions. The experimental results are shown in the Figure 3 and Figure 4.
As shown in Figure 3, the input data set is a clustered 32-dimensional data set with the data size varying from 100000 to 2000000, and the data set has 8 natural clusters. We can observe from Figure 3 that the response time of all three methods increases with the data size, while the response time of our method is less than that of the sequential scan and the Pyramid-Technique.
As shown in Figure 4, we observe the response time with the dimensionality of the feature space varying from 16 to 128. Here we choose an input data set with 1000000 data objects and 8 natural clusters. We can observe that the response time of the three methods increases with the dimensionality of the feature space, and that the sequential scan may become faster than the Pyramid-Technique when the dimensionality is high enough, but our method is more efficient than both. The experimental results in Figures 3 and 4 show that our method can effectively find the data objects that are semantically similar to a given query.
Conclusion
In this paper, we proposed an efficient unsavory data detection method for Internet big data. To realize semantics-based similarity search over various unsavory data, we express the data objects as high-dimensional points in a feature space. To solve the "curse of dimensionality" caused by the high dimensionality of the feature space, we use PCA to reduce its dimensionality. By partitioning the feature space into subspaces and transforming them into unit hyper-cubes, we can utilize the Pyramid-Technique to index the data objects and realize efficient semantics-based similarity search. Finally, the performance evaluation results show that our method can efficiently discover the data objects that are semantically similar to a given query.
Fig. 1. The process of data clustering and transforming
Fig. 2. Mapping of the data objects
Fig. 3. Effects of data size
Fig. 4. Effects of dimensionality
Acknowledgment
The work described in this paper is partially supported by the grants of the National Basic Research Program of China (973 project) under Grant No.2009CB320503, 2012CB315906; the project of National Science Foundation of China under grant No. 61070199, 61103189, 61103194, 61103182, 61202488, 61272482;the National High Technology Research and Development Program of China(863 Program) No. 2011AA01A103, 2012AA01A506, 2013AA013505, the Re-search Fund for the Doctoral Program of Higher Education of China under Grant No. 20114307110006, 20124307120032, the program for Changjiang Scholars and Innovative Research Team in University (No.IRT1012), Science and Technology Innovative Research Team in Higher Educational Institutions of Hunan Province("network technology"). | 19,344 | [
"993525",
"993526",
"993528",
"1001286",
"993483",
"993527"
] | [
"302677",
"302677",
"302677",
"484811",
"302677",
"302677"
] |
01466224 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466224/file/978-3-319-24315-3_23_Chapter.pdf | Mi-Young Cho
email: [email protected]
Young-Sook Jeong
email: [email protected]
Face Recognition Performance Comparison between Real Faces and Pose Variant Face Images from Image Display Device
Keywords: Face Recognition, Image Display Device, Performance Evaluation
Face recognition technology, unlike other biometric methods, is conveniently accessible with the use of only a camera. Consequently, it has created an enormous interest in a variety of applications, including face identification, access control, security, surveillance, smart cards, law enforcement, human computer interaction. However, face recognition system is still not robust enough, especially in unconstrained environments, and recognition accuracy is still not acceptable. In this paper, to measure performance reliability of face recognition systems, we expand performance comparison test between real faces and face images from the recognition perspective and verify the adequacy of performance test methods using an image display device.
Introduction
Face recognition is a widely used biometric technology because it is more direct, user-friendly, and convenient than other biometric approaches. Face recognition technology is now significantly advanced and has great potential in application systems. However, it is difficult to guarantee its performance because of insufficient test methods for real environments. The best method would be direct evaluation with human subjects in a real environment; unfortunately, it is practically impossible to repeatedly gather a sufficient number of people under the same conditions over a lengthy period of time, so objectivity and reproducibility are difficult to guarantee.
There are many approaches for performance evaluation of the face recognition in the system level including methods using an algorithm [START_REF] Ttak | Performance Evaluation Method of Face Extraction and Identification Algorithm for Intelligent Robots: Part 1 Performance Evaluation of Recognition Algorithm[END_REF], a mannequin [START_REF]Performance Evaluation Method of Face Extraction and Identification Algorithm for[END_REF], and a highdefinition photograph [START_REF] Ttak | Performance Evaluation Method of Face Extraction and Identification Algorithm for Intelligent Robots: Part 3[END_REF]. The first method simply evaluates the performance of an algorithm installed in a face recognition system. However, the performance of an algorithm cannot guarantee the performance of a face recognition system. The second method uses mannequin instead of real human face. This method has a number of problems because the material coating the mannequin is not the same as human skin. Last, the method using a high-definition photograph has overcome some of the existing problems. However, it still experiences minor difficulties with automatic control interoperation with a computer, and a lack of reproducibility in real situations.
In this paper, we expand performance comparison test between real faces and face images from the recognition perspective and verify the adequacy of performance test methods using an image display device. The paper is organized as follows: in Section 2, we explain limitation of precious works. Section 3 describes how to construct the facial DB. In section 4, we show and analyze the experimental results. Section 5 concludes this paper.
Previous works
In previous works, we introduced a performance evaluation method for face recognition that uses face images shown on a high-definition monitor and proved the similarity between real faces and such face images [5][11]. However, the previous work is limited in how well it reflects performance in real environments, as it tests only frontal pose images. Recognizing faces reliably across changes in pose and illumination has proved to be a much more difficult problem [START_REF] Phillips | Face recognition vendor test 2002[END_REF], so the proposed test method needs to be verified not only for illumination but also for pose. In this paper, we therefore expand the previous works and compare face recognition performance across various poses.
Facial DB
The majority of facial images used to evaluate face recognition algorithms such as Feret [START_REF] Phillips | The FERET evaluation methodology for face recognition algorithms[END_REF], PF07 [START_REF] Lee | The POSTECH Face Database (PF07) and Performance Evaluation[END_REF], and CMU PIE [START_REF] Sim | The CMU pose, illumination, and expression database[END_REF] could be used for the proposed test method. However, most images are not adequate because of the low-resolution output of the image display device. To overcome this challenge, high-resolution facial DB was required.
To obtain subject images under various pose conditions, seven cameras were used. The locations of the cameras are shown in Figure 2. We took ultra-high-definition images using a Sony Nex 7 so that the face area took up at least two thirds of the whole image. The height of the camera was fixed, and we controlled the height of the chair depending on the subject's height. We captured 4200 real face images from 60 subjects, taken under ten different lighting directions and seven poses for each subject. Figure 3 shows sample images for one subject. For the re-capture, we displayed the high-definition images captured with the camera on a 27-inch image display device to provide an output similar to a real face. The image display device was calibrated and characterized according to the ISO 15076-1:2010 standard [START_REF]Image technology colour management --Architecture, profile format and[END_REF], which contains the criteria for color management and standard image reproduction. To ensure proper display output, we used a 2.2 gamma tone reproduction curve and a D65 white-point color temperature as stated in IEC 61966-2-1:1999 [START_REF]Multimedia systems and equipment -Colour measurement and management -Part 2-1[END_REF], which contains the sRGB and HDTV color space standards. The procedure for the face image DB construction is presented in Fig. 4.
Experiment
This experiment verifies the similarity of real faces and face images from an image display device from the perspective of face recognition performance. In particular, we focus on changes in face recognition performance according to pose. The test engines registered ten frontal pose images under ten lighting conditions and obtained recognition results from test images consisting of six groups according to pose. Figure 5 illustrates sample face images for registration and testing. The performance comparison results for four commercially used face recognition engines are shown in Table 1. To analyze the similarity of real faces and the face images captured from the image display device, the recognition rate deviations were analyzed. As a result, the maximum deviation between the real faces and the face images is 1.56. Figure 6 shows the performance changes according to pose for each engine. Engines A and B produce results for all test images, while the other engines produce results only for face images of 4 poses (top/bottom/left/right 15º) because of their coverage. The x-axis represents the recognition rate and the y-axis represents the pose. The numbers indicate the recognition rate deviations between real faces and face images. Although each engine exhibited different recognition performance according to pose, the deviations between the real faces and face images were all less than 3%. In other words, there is no significant difference in face recognition performance when using face images instead of real faces.
Conclusion
In this paper, we expanded our previous works and verified the similarity of real faces and face images from an image display device by comparing the changes in face recognition performance according to pose. Based on the comparison results, the proposed method using an image display device can be applied to system-level face recognition performance evaluation.
Fig. 1. Previous works.
Fig. 2. Environment for capturing real face images.
Fig. 3. Sample images for one subject.
Fig. 4. Procedure for building the face image DB.
Fig. 5. Registration and test purpose sample images.
Fig. 6. Performance changes according to the pose for (a) Engine A, (b) Engine B, (c) Engine C, (d) Engine D.
Table 1. Overall results.
Engine | Face recognition rate (%), real faces | Face recognition rate (%), face images | Deviation
A | 97.09 | 95.90 | 1.19
B | 98.96 | 99.01 | 0.05
C | 97.78 | 98.23 | 0.45
D | 87.62 | 86.06 | 1.56
Acknowledgments. This work is partly supported by the R&D program of the Korea Ministry of Trade, Industry and Energy (MOTIE) and the Korea Evaluation Institute of Industrial Technology (KEIT). (Project: Technology Development of service robot's performance and standardization for movement/manipulation/HRI/Networking, 10041834). | 8,937 | [
"1001290",
"1001291"
] | [
"171895",
"171895"
] |
01466226 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466226/file/978-3-319-24315-3_25_Chapter.pdf | Han Gang
Hongyang Yan
email: [email protected]
Lingling Xu
Secure Image Deduplication in Cloud Storage
Keywords: Cloud Computing, Image Deduplication, Cloud Storage, Security
With the great development of cloud computing in recent years, the explosive increase of image data, the massive amount of information to store, and the application demands for high availability of data, network backup is facing an unprecedented challenge. Image deduplication technology is proposed to reduce storage space and costs. To protect the confidentiality of the images, the notion of convergent encryption has been adopted. In the deduplication system, an image is encrypted/decrypted with a convergent encryption key that is derived by computing the hash value of the image content. This means that identical image copies generate the same ciphertext, which is used to check for duplicate image copies. The security analysis shows that this system is secure.
Introduction
With the great development of cloud computing in recent years, applications of information and communication technology on the Internet have drawn more and more attention. Cloud computing [1] is a new computing paradigm with dynamic extension ability, which obtains computing resources and services through the Internet in an on-demand and scalable way. It attracts much attention from academia and industry because of its unique techniques and emerging business computing model.
However, with the explosive increase of data, the massive amount of information to store, and the application demands for high availability of data, network backup is facing an unprecedented challenge. On the one hand, human society produces data on the Internet; on the other hand, data come from daily production and various scientific experiments (e.g., scientific computing and simulation, flight dynamics, nuclear blast simulation, space exploration, and medical image data). The amount of data produced each day is growing to an impressive degree. According to a recent analysis report by IDC (International Data Corporation), the whole world produced 281 EB of data in 2007, which corresponds to 45 GB of data for every person in the world. The amount of data produced worldwide was expected to approach 1800 EB, ten times the amount of data in 2006 [START_REF] Gantz | The Diverse and Exploding Digital Universe: An updated forecast of world wide information growth through 2010[END_REF]. The volume of data in the world is expected to reach 40 trillion gigabytes in 2020 [START_REF] Gantz | The Digital Universe in 2020: Big data, bigger digital shadows, and biggest growth in the far east[END_REF].
To address this situation, data deduplication technology has been proposed. Data deduplication [START_REF] Asaro | Data De-duplication and Disk-to-Disk Backup Systems: Technical and Business Considerations[END_REF] is a lossless data compression technology based on the principle that duplicate data are deleted. This technology can reduce the cost of data transmission and storage [START_REF] Tolia | Using Content Addressable Techniques to Optimize Client-Server System[END_REF]. Image files in social networks are a typical case: when a celebrity publishes a message, it is often forwarded more than a thousand times within a short period, and popular images are likewise reposted many times. If a separate copy were stored every time, it would certainly waste storage space, and simply increasing storage capacity does not solve the problem. Image deduplication therefore has to be applied to social networks.
To protect the confidentiality of the image, the notion of convergent encryption [START_REF] Douceur | Reclaiming Space from Duplicate Files in A Serverless Distributed File System[END_REF] has been proposed. In the deduplication system, the image is encrypted/decrypted with a convergent encryption key which is derived by computing the hash value of the image content [START_REF] Douceur | Reclaiming Space from Duplicate Files in A Serverless Distributed File System[END_REF][7] [START_REF] Yuriyama | Integrated Cloud Computing Environment with It Resources and Sensor Devices[END_REF]. This means that identical image copies generate the same ciphertext, which allows the cloud storage server to perform deduplication on the ciphertexts. Furthermore, the image user makes use of an attribute-based encryption scheme to share images with friends by setting access privileges.
The rest of the paper is organized as follows. We introduce related work about deduplication in Section II. Some preliminaries are introduced in Section III. The architecture of the image deduplication cloud storage system, including its security analysis, is described in Section IV. Finally, we conclude the paper in Section V.
Related Work
A number of deduplication technologies have been proposed recently. Most researchers focus on text deduplication, as in [START_REF] Li | Secure Deduplication with Efficient and Reliable Convergent Key Management[END_REF], which proposed a scheme to address key management in deduplication systems. The existing techniques can be classified in several ways.
Techniques based on file-level deduplication delete identical files to reduce the amount of stored data and save storage space. A hash function is applied to each file to compute a hash value, and any two files with the same hash value are considered to be the same file. For example, the SIS [START_REF] Bolosky | Single Instance Storage in Windows 2000[END_REF], FarSite [START_REF] Adya | Federated, Available, and Reliable Storage for an Incompletely Trusted Environment[END_REF], and EMC Centera [START_REF]EMC Centera: Content Addressed Storage System[END_REF] systems use this method.
Techniques based on block-level deduplication delete identical data blocks to reduce storage space [START_REF] Policroniades | Alternative for Detecting Redundancy in Storage Systems Data[END_REF]. A file is divided into data blocks [START_REF] Rabin | Fingerprinting by Random Polynomials[END_REF], and a hash function computes a hash value for each block, called the block fingerprint. Any two data blocks with the same block fingerprint are regarded as duplicates [START_REF] Henson | An Analysis of Compare-by-hash[END_REF].
Based on when duplicates are removed, deduplication can be divided into on-line deduplication [START_REF] Ungureanu | A High-Throughput File System for the HYDRAstor Content-Addressable Storage System[END_REF] and post-processing deduplication [START_REF] Clements | Decentralized Deduplication in SAN Cluster File Systems[END_REF]. On-line deduplication deletes duplicate data before storing them, so the storage service always keeps a unique copy of each data item. Post-processing deduplication needs an additional storage buffer in order to delete duplicate data afterwards.
Based on where duplicates are removed, deduplication can be divided into client-side deduplication [START_REF] Fu | AA-Dedupe: An Application-Aware Source Deduplication Approach for Cloud Backup Services in the Personal Computing Environment[END_REF] and server-side deduplication [START_REF] Tan | DAM: A Data Ownership-Aware Multi-Layered De-duplication Scheme[END_REF]. In client-side deduplication, the user checks for and deletes duplicate data before transferring the data copy to the cloud server. In server-side deduplication, the duplicate check and deletion are performed with the server's resources in the cloud.
However, multimedia data such as images and videos are larger than text, so image deduplication is becoming more important, and researchers have started to pay attention to this field [START_REF] Li | A secure cloud storage system supporting privacy-preserving fuzzy deduplication[END_REF]. Images often have to be processed before being uploaded to the server; a common approach is watermarking [START_REF] Pizzolante | A Secure Low Complexity Approach for Compression and Transmission of 3-D Medical Images[END_REF] [START_REF] Pizzolante | Protection of Microscopy Images through Digital Watermarking Techniques[END_REF]. Compression techniques save cloud storage space to some extent, but deduplication addresses the problem at its root.
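To make the block-level idea above concrete, here is a minimal sketch (Python; the fixed-size blocks and the 4 KB block size are our own simplifying assumptions — systems based on Rabin fingerprinting use variable-size, content-defined blocks instead).

import hashlib

def block_fingerprints(data: bytes, block_size: int = 4096):
    store = {}    # fingerprint -> unique block content
    recipe = []   # ordered list of fingerprints to rebuild the file
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        fp = hashlib.sha256(block).hexdigest()   # block fingerprint
        recipe.append(fp)
        store.setdefault(fp, block)              # duplicate blocks are not stored twice
    return recipe, store

# usage: a file made of four identical blocks stores only one unique block
recipe, store = block_fingerprints(b"A" * 16384)
print(len(recipe), len(store))                   # 4 blocks referenced, 1 unique block stored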
Preliminaries
Bilinear mapping
Definition 1: Let G_1 and G_2 be two cyclic groups of prime order q, let g be a generator of G_1, and let e : G_1 × G_1 → G_2 be a bilinear map. The bilinear map satisfies the following three properties:
- Bilinearity: e(g_1^a, g_2^b) = e(g_1, g_2)^{ab} for all a, b ∈ Z_q and g_1, g_2 ∈ G_1.
- Non-degeneracy: e(g, g) ≠ 1.
- Computability: e(g_1, g_2) can be computed efficiently by an algorithm for all g_1, g_2 ∈ G_1.
Hence e is an efficient bilinear map from G_1 × G_1 to G_2.
Access Structure
Let P = {P_1, P_2, …, P_n} be a set of parties. A collection A ⊆ 2^P is monotone if, for all B and C, B ∈ A and B ⊆ C imply C ∈ A.
An access structure (respectively, monotone access structure) is a collection (respectively, monotone collection) A of non-empty subsets of P, i.e., A ⊆ 2^P \ {∅}. The sets in A are called the authorized sets, and the sets not in A are called the unauthorized sets. In this context, attributes play the role of the parties, so the authorized sets of attributes are the sets included in A.
Convergent Encryption
Convergent encryption [START_REF] Douceur | Reclaiming Space from Duplicate Files in A Serverless Distributed File System[END_REF][23] provides image confidentiality in deduplication. Because it uses the image content to compute encryption Hash value as the image encryption key. It makes sure that the key is directly related to the image content. The encryption key will not be leak under the condition of no leaking of the content of the image. And at the same time, because of the one-way operation of hash function, the image content will not be leaked when the key is leaked. Above all, it also can ensure the ciphertext is only related to the image content, but has nothing to do with the user.
In addition, we have to compute a tag to support deduplication for the image and use it to detect duplicate copy in the cloud storage server. If two image copies are the same, then their tags are the same. The user first sends the tag to the cloud storage server to check if the image copy has been already stored. We can not guess the convergent key in terms of the tag because they are derived independently. In a general way, the convergent encryption scheme has four primitive functions:
- KeyGen_CE(M) → K_M: the key generation algorithm, which maps an image copy M to a convergent key K_M.
- Encrypt_CE(K_M, M) → C: the symmetric encryption algorithm, which takes both the convergent key K_M and the image copy M as inputs and outputs a ciphertext C.
- TagGen_CE(M) → T(M): the tag generation algorithm, which maps the image copy M to a tag T(M). We let TagGen_CE generate the tag from the corresponding ciphertext used as the index, i.e., T(M) = TagGen_CE(C), where C = Encrypt_CE(K_M, M).
- Decrypt_CE(K_M, C) → M: the decryption algorithm, which takes both the convergent key K_M and the ciphertext C as inputs and outputs the original image M.
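The following toy sketch illustrates these four primitives in Python (our own illustrative construction: SHA-256 as the hash and a simple hash-based XOR keystream standing in for a real deterministic symmetric cipher; it is not production-grade cryptography). Because both the key and the keystream depend only on the image content, identical images always yield the same ciphertext and tag, which is exactly what enables deduplication over ciphertexts.

import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def keygen_ce(image: bytes) -> bytes:
    return hashlib.sha256(image).digest()              # K_M = H0(M)

def encrypt_ce(key: bytes, image: bytes) -> bytes:     # deterministic toy cipher
    return bytes(a ^ b for a, b in zip(image, _keystream(key, len(image))))

def taggen_ce(ciphertext: bytes) -> str:
    return hashlib.sha256(ciphertext).hexdigest()      # T(M) = H(C)

def decrypt_ce(key: bytes, ciphertext: bytes) -> bytes:
    return encrypt_ce(key, ciphertext)                 # XOR keystream is its own inverse

# identical images produce identical tags
img = b"...image bytes..."
assert taggen_ce(encrypt_ce(keygen_ce(img), img)) == taggen_ce(encrypt_ce(keygen_ce(img), img))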
KP-ABE Scheme
This scheme is used to encrypt the convergent key K_M, which is computed from the image content. While duplicate copies are deleted, the image owner may want some friends to be able to access the image file. In a key-policy ABE (KP-ABE) scheme [START_REF] Goyal | Attribute Based Encryption for Fine-Grained Access Conrol of Encrypted Data[END_REF][25], the access policy is embedded into the decryption key: the image owner encrypts K_M under a set of attributes, and when a user wants to access the image, the cloud storage server checks the user's attributes and decides which ciphertexts the user's key can decrypt. We describe the KP-ABE scheme by the following four polynomial-time algorithms; note that the convergent key K_M plays the role of the message m in this paper.
- Setup(1^n) → (parameters, msk): the probabilistic polynomial-time (PPT) setup algorithm takes a security parameter n as input and outputs the public parameters and the master secret key msk, which is known only to the trusted cloud storage server.
- Encrypt(m, parameters, µ) → c: the PPT encryption algorithm takes as input a message m, the public parameters, and a set of attributes µ, and outputs the ciphertext c.
- KeyGen(parameters, msk, A) → D_A: the PPT key generation algorithm takes as input the public parameters, the master secret key, and an access structure A, and outputs the decryption key D_A.
- Decrypt(parameters, c, D_A) → m or ⊥: the decryption algorithm takes as input the ciphertext c, the public parameters, and the decryption key D_A; it outputs the message m if µ ∈ A, and an error symbol otherwise.
Proof of Ownership
Proof of ownership [START_REF] Halevi | Proofs of Ownership in Remote Storage Systems[END_REF] is a protocol used to prove to the cloud storage server that a user indeed possesses the image. It solves the problem of using a small hash value as a proxy for the whole image in client-side deduplication. To describe proof of ownership in more detail, we assume a prover (i.e., a user) and a verifier (i.e., the cloud storage server). The verifier derives a short value φ(M) from an image copy M. The prover sends a value φ and runs a proof algorithm to prove ownership of the image copy M; the proof is accepted if and only if φ = φ(M). In this paper, we consider a deduplication cloud system consisting of image owners, image users, and a cloud service provider. An image is assumed to be encrypted by its owner before being uploaded to the cloud storage server, and we assume that authorization between the image owner and users is appropriately handled with suitable authentication and key-issuing protocols. After the encrypted image has been uploaded to the cloud server, authorized image users can access it: an authorized image user sends a request to the cloud storage server, which verifies the proof of ownership as described above.
- Image Owner. The image owner is the entity that sends the image to the cloud service in order to store, share, and later access it. To protect the image content, the owner has to encrypt the image before uploading it to the cloud. In a client-side image deduplication system, only the first owner of an image actually stores it in the cloud; if an owner is not the first, the storage server informs them that the image is a duplicate, so only one copy of each image is kept in cloud storage.
- Image User. The image user is an entity that obtains the privilege to access the same image by passing the proof of ownership in the deduplication cloud system. Image users also include friends of the image owner with whom the image is shared in cloud storage.
- Deduplication Cloud Service Provider. The deduplication cloud storage server provides the image storage service for image owners and users. Moreover, the cloud storage server performs the duplicate check before users upload their images: users cannot upload an image again if an image with identical content is already stored, and instead they obtain the privilege to access the stored image by running the proof of ownership.
Deduplication Cloud System
Figure 1 shows the participants of deduplication cloud system and the specific work process. It goes as follows:
- System Setup: Define the security parameter 1^λ and initialize the convergent encryption scheme. We assume that there are N encrypted images C = (C_{M_1}, C_{M_2}, …, C_{M_N}) stored in the cloud server by a user, where K_M = H_0(M) and C_M = Enc_CE(K_M, M). The user can also compute a tag T_M = H(C_M) for the duplicate check.
- Image Upload: Before uploading an image M, the user interacts with the cloud server and uses the tag to check whether a duplicate copy is already stored in the cloud storage server. The image tag T_M = H(C_M) is computed for the duplicate check. If the image is uploaded for the first time, the cloud storage server receives the image ciphertext; at the same time, the image owner can set the attributes that control the access privileges.
• If a duplicate copy is found in the storage server, the user is asked to run the proof of ownership; if the user passes, they are assigned a pointer that allows them to access the image. In detail, the image user sends a value φ and runs a proof algorithm to prove ownership of the image copy M, and the proof is accepted if and only if φ = φ(M). By passing the proof of ownership, such users obtain the privilege to access the same image. • Otherwise, if there is no duplicate image in the storage server, the user computes the encrypted image C_M = Enc_CE(K_M, M) with the convergent key K_M = H_0(M) and uploads C_M to the cloud server. The user also encrypts the convergent key K_M under a set of attributes to define the access privileges, obtaining C_{K_M} = Enc(sk, K_M), which is uploaded to the cloud server as well.
- Image Retrieve: Suppose that a user wants to download an image M. The user first sends a request and the image name to the cloud storage server. When the cloud storage server receives the request and the image name, it checks whether the user is eligible to download the file. If so, the cloud server returns the ciphertexts C_M and C_{K_M} to the user, who decrypts them and recovers the key K_M using the locally stored sk; if the user's attributes match the owner's setting, the cloud storage server sends the corresponding sk. With the convergent encryption key, the user can recover the original image. If the check fails, the cloud storage server sends an abort signal to the user to indicate the download failure.
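A toy end-to-end sketch of this upload flow is shown below (Python; it reuses the convergent-encryption helpers sketched earlier, and the trivial "proof" that simply re-sends the tag is only a placeholder for a real proof-of-ownership protocol).

class DedupServer:
    def __init__(self):
        self.store = {}    # tag -> ciphertext (one copy per distinct image)
        self.owners = {}   # tag -> set of user ids allowed to access it

    def has_duplicate(self, tag):
        return tag in self.store

    def upload(self, user, tag, ciphertext):
        self.store[tag] = ciphertext
        self.owners[tag] = {user}

    def grant_after_pow(self, user, tag, proof, verify):
        if verify(tag, proof):            # proof-of-ownership check (placeholder)
            self.owners[tag].add(user)
            return True
        return False

def client_upload(server, user, image):
    # keygen_ce / encrypt_ce / taggen_ce are the helpers from the convergent-encryption sketch above
    k = keygen_ce(image)                  # K_M = H0(M)
    c = encrypt_ce(k, image)              # C_M
    tag = taggen_ce(c)                    # T_M
    if server.has_duplicate(tag):
        ok = server.grant_after_pow(user, tag, proof=tag,
                                    verify=lambda t, p: p == t)
        return "pointer" if ok else "rejected"
    server.upload(user, tag, c)
    return "stored"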
Security Analysis
In this section, we present the security analysis for the deduplication cloud system.
- Confidentiality: The images a user stores in the cloud cannot be read, because each image has to be encrypted as C_M = Enc_CE(K_M, M) with the convergent key K_M = H_0(M). Therefore, the content of an image stored in the cloud cannot be obtained from its ciphertext.
- Privacy protection: The encryption key is the hash value of the image content, so the key is directly related to the image content and is not leaked as long as the content of the image is not leaked. At the same time, because of the one-way property of the hash function, the image content is not leaked even if the key is leaked. Moreover, the ciphertext is related only to the image content and not to the user, which protects users' privacy as much as possible.
- Completeness: If the images have been successfully uploaded to the cloud server, the image owner can retrieve them from the cloud storage server and decrypt the ciphertexts with the correct convergent encryption keys. Furthermore, a user who wants to upload an image that is already stored performs the proof of ownership and obtains the privilege to access the stored image.
Conclusion
In this paper, we propose an image deduplication cloud storage system. To protect the confidentiality of sensitive image content, convergent encryption is used while still supporting image deduplication. The owner can later download the ciphertext again and recover the image with the secret key; at the same time, the image owner makes use of an attribute-based encryption scheme to share images with friends by setting access privileges. A user who holds the same image copy can obtain the privilege to access the ciphertext by passing the proof of ownership and can delete the duplicate local copy. If a user's attributes match the owner's access control setting, the user can also download the images. The security analysis shows that this system is secure in terms of confidentiality, privacy protection, and completeness.
Acknowledgement
This paper is supported by Fundamental Research Funds for the Central Universities(South China University of Technology)(No. 2014ZM0032), the Guangzhou Zhujiang Science and Technology Future Fellow Fund (Grant No. 2012J2200094), and Distinguished Young Scholars Fund of Department of Education(No. Yq2013126), Guangdong Province.
Fig. 1. Deduplication Cloud Storage System | 20,583 | [
"1001292",
"1001293"
] | [
"409787",
"440588",
"538359"
] |
01466227 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466227/file/978-3-319-24315-3_26_Chapter.pdf | Chunlu Chen
email: [email protected]
Hiroaki Anada
email: [email protected]
Junpei Kawamoto
email: [email protected]
Kouichi Sakurai
email: [email protected]
Hybrid Encryption Scheme Using Terminal Fingerprint and Its Application to Attribute-Based Encryption Without Key Misuse
Keywords: Key misuse, Terminal fingerprint, Re-encryption
Introduction
In the last few years, the amount of data stored in the cloud server has been increasing day by day with the rapid development of the Internet in order to reduce the cost of using local storage and data sharing. However, information disclosure and trust issues arise in third party management cloud servers. Therefore, improving the security of the data stored in the cloud became a critical task. Typically, data stored in the cloud must be encrypted in order to achieve this goal of ensuring data security. Public-key cryptography is one of the methods to encrypt data, which uses a pair of keys, a secret key (SK) and a public key (PK). Although, public key encryption can enhance security, the complexity of key management is a big issue in this kind of cryptography.
Although public-key cryptography helps us protect messages, it can also be abused for illegal acts such as transferring or copying a secret key without authorization; furthermore, it is possible to copy a secret key from another user illegally. If a secret key leaks, it is difficult to identify the source of the leak or the responsible entity.
Various methods have been proposed that use unique information for secret key generation to prevent this behavior, but the leakage of secret keys remains a weak point of encryption waiting to be solved. Three related technologies are introduced as follows.
Hardware Certification
A Physical Unclonable Function (PUF) is realized by a physical device: the unavoidable variations of the chip manufacturing process are extracted to generate unique and unpredictable "secret keys" [START_REF] Suh | Physical unclonable functions for device authentication and secret key generation[END_REF]. A PUF receives a random code as a challenge and generates a unique random code as a response. Because of differences in the manufacturing process, a produced chip cannot be imitated or copied.
Kumar et al. [START_REF] Kumar | The butterfly PUF protecting IP on every FPGA[END_REF] designed a system in which a PUF produces a well-defined output for a given input, while other PUFs produce different outputs for the same input. Because of the uniqueness of this chip output, PUFs can be widely utilized in smart cards, bank cards, and so on. In this way, the uniqueness of the secret key protects the message from copying and other illegal activities.
Biometric authentication
Biometric technology uses computers together with optical, acoustic, and biological sensors and other high-tech tools to capture the body's natural physiological characteristics (such as fingerprint, finger vein, face, and iris) and behavioral characteristics (e.g., handwriting, voice, and gait) to identify a person. Biometric traits cannot be forgotten, offer good security performance, are hard to copy or steal, and are "portable" in the sense that they can be used anywhere [START_REF] Jain | Biometric identification[END_REF]. Furthermore, a biometric can serve as a unique, unalterable secret key, but its safety still has to be taken seriously.
Jain et al. [START_REF] Jain | Biometric template security[END_REF] analyzed and evaluated such biometric authentication systems. Biometric authentication is also used in various fields; for example, Uludag et al. [START_REF] Uludag | Biometric cryptosystems: issues and challenges[END_REF] proposed biometric authentication that can be used to construct a digital rights management system.
In present-day life, biometric authentication is already very widely used, for example fingerprint authentication for bank cards and face authentication at customs. Although biometrics bring us convenience, protecting the privacy of biometric data has become an important research challenge.
Terminal Fingerprint
In general, the installed fonts, the screen resolution, and the network environment differ for each browser terminal, and this information can be used as feature points to identify the terminal. The set of such features possessed by a browser is referred to as a browser fingerprint [START_REF] Doty | Fingerprinting Guidance for Web Specification Authors (Unofficial Draft)[END_REF][START_REF] Aggarwal | An Analysis of Private Browsing Modes in Modern Browsers[END_REF][START_REF] Eckersley | How unique is your web browser?[END_REF]. In this paper, the terminal fingerprint is assumed to be unchangeable and unextractable.
Terminal fingerprints have been applied in a variety of settings. For example, they are used to track the behavior of users on the web by collecting the web sites a user has accessed, which makes it possible to provide advertisements tailored to the user's interests. They have also been applied to risk-based authentication: the terminal fingerprint is taken and saved when the user logs in and is compared with the fingerprints from previous logins. If there is a significant difference, it is judged that access from another terminal is likely, which triggers authentication of higher strength.
The hardware-based and biometric-based authentication methods mentioned above ensure the uniqueness of the key, but they still cannot guarantee the security of keys. Updating hardware-based authentication requires replacing the hardware itself, which increases the system cost; biometric-based authentication is impossible to alter, but it can be copied.
To address this point, this paper utilizes terminal fingerprint information, because the terminal fingerprint is different for every terminal and thus unknown to an attacker; even if an attacker launches a collusion attack, the ciphertext still cannot be decrypted. In the proposed scheme, the terminal fingerprint information of the user is utilized as a secret key and is never revealed outside, not even once. Unless the owner leaks the information, the security of the key is guaranteed, so the safety of the secret key is increased.
For this purpose, we propose a hybrid encryption scheme that consists of a common-key encryption scheme and two public key encryption schemes. The hash value of a terminal fingerprint will be used as a secret key in the second public key scheme. In this paper, we employ Waters' CP-ABE [START_REF] Bethencourt | Ciphertext-policy attribute-based encryption[END_REF] as the first encryption scheme, but any public key encryption scheme could be used as the first. Our scheme does not only utilize terminal fingerprint for generating unique secret key, but also updates itself according to user settings with relatively low cost to keep the freshness of the terminal fingerprint.
The rest of this paper is structured as follows. Section 2 introduces background information, formal definitions and CP-ABE scheme. Section 3 describes our encryption scheme. Section 4 discusses the security and advantage of the proposed scheme. Finally, conclusion and future work in section 5.
Preliminaries
In this section, we give background information on bilinear maps and our cryptographic assumption.
Bilinear Maps
We present a few facts related to groups with efficiently computable bilinear maps. Let G_1 and G_2 be two multiplicative cyclic groups of prime order p. Let g be a generator of G_1 and e : G_1 × G_1 → G_2 be a bilinear map. The bilinear map has the following properties:
1. Bilinearity: for all u, v ∈ G_1 and a, b ∈ Z_p, we have e(u^a, v^b) = e(u, v)^{ab}.
2. Non-degeneracy: e(g, g) ≠ 1.
Access Structure and Linear Secret Sharing Scheme
We review here the definitions of access structures and Linear Secret Sharing Schemes (LSSS) [START_REF] Waters | Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization[END_REF]. Let M be an ℓ × n share-generating matrix; for all i = 1, ..., ℓ, the function ρ labels the i-th row of M with the party ρ(i). Consider the column vector v = (s, r_2, ..., r_n), where s ∈ Z_p is the secret to be shared and r_2, ..., r_n ∈ Z_p are chosen at random; then Mv is the vector of the ℓ shares of the secret s according to ∏, and the share (Mv)_i belongs to party ρ(i).
Here ∏ is the Linear Secret Sharing Scheme determined by (M, ρ). Let S be an attribute set of an authenticated user, and define I ⊆ {1, ..., ℓ} as I = {i : ρ(i) ∈ S}.
For ∏, there exist constants {w_i ∈ Z_p}_{i∈I} such that, if {λ_i} are valid shares of any secret s, then Σ_{i∈I} w_i λ_i = s.
CP-ABE
There are a lot of studies on enhance the security of system. Cheung and Newport [START_REF] Cheung | Provably secure ciphertext policy ABE[END_REF] proposed CP-ABE scheme based on DBDH problem using the CHK techniques [START_REF] Canetti | Chosen-ciphertext security from identity-based encryption[END_REF], which satisfies IND-CPA secure and pioneers the achievement of IND-CCA secure. In this method, a user's secret key is generated by calculating user attributes and system attributes. Naruse et al. [START_REF] Naruse | Attribute Revocable Attribute-Based Encryption with Forward Secrecy[END_REF] proposed a new CP-ABE mechanism with re-encryption. Their method is based on the CP-ABE scheme to make the cipher text and has re-encryption phase to protect the message. Li et al. [START_REF] Li | A2BE: Accountable Attribute-Based Encryption for Abuse Free Access Control[END_REF] proposed an encryption system using trusted third party, who issues authentication information embed user key to achieve better safety in decryption phase than CP-ABE. However, it is difficult to implement due to the complexity of the computational process required from the third party. Finally, Li et al. [START_REF] Li | Privacy-aware attribute-based encryption with user accountability[END_REF] proposed encryption scheme crowded included in the ID of the user attribute, decrypts it when ID authentication is also carried out at the same time, although this scheme can improve the safety, but the public key distribution center will increase the workload. Hinek et al. [START_REF] Hinek | Attribute-Based Encryption with Key Cloning Protection[END_REF] proposed a tk-ABE(token-based attribute-based encryption) scheme that includes a token server to issue a token for a user to decrypt the cipher text, thus making the key cloning meaningless.
Our proposed scheme aims to increase the safety of the secret key without a third party. The ciphertext corresponds to an access structure and the secret key corresponds to a set of attributes; decryption succeeds only if the set of attributes satisfies the access structure.
A (ciphertext-policy) attribute-based encryption scheme consists of four fundamental algorithms: Setup, Encrypt, KeyGen, and Decrypt.
Setup(λ, U) → (PK, MK): The Setup algorithm takes a security parameter λ and an attribute universe U as input. It outputs the public parameter PK and the system master secret key MK.
Encrypt(PK, M, W) → CT: The Encrypt algorithm takes the public parameter PK, a message M, and an access structure W as input. It outputs a ciphertext CT.
KeyGen(MK, S) → SK: The KeyGen algorithm takes the master secret key MK and a set S of attributes as input. It outputs a secret key SK.
Decrypt(CT, SK) → M: The Decrypt algorithm takes the ciphertext CT and the secret key SK as input. If the set S of attributes satisfies the access structure W, it outputs the message M.
Our System Model
In this section, we propose a hybrid encryption scheme. Then, we propose an attribute-based encryption scheme without key misuse. Finally, we provide a concrete realization of our attribute-based encryption scheme without key misuse.
Our system consists of three parts:
Users need to provide their attribute information and use the content in a legitimate manner. They also need to manage the terminal fingerprint information of their own terminals;
Data server needs to manage the attribute information, a common key and public parameter PK and issue the secret key that contains the attribute information of the user;
Document sender needs to issue the common key and encrypt the contents.
Our Hybrid Encryption Scheme
We propose a hybrid encryption scheme HybENC that uses the terminal fingerprint. HybENC consists of a common-key encryption scheme CKE, two public key encryption schemes PKE1 and PKE2, and a hash function H: HybENC = (CKE, PKE1, PKE2, H). Informally, CKE is used for fast encryption and decryption of large data such as pictures and movies. PKE1 is used to encrypt the common key of CKE; later, PKE1 will be instantiated with an attribute-based encryption scheme. Finally, PKE2 is used to re-encrypt the common key of CKE; the fingerprint, passed through a hash function, is used here as the secret key of PKE2.
Formally, our HybENC is described as follows.
HybENC.Key(λ) → FK, (PK1, SK1), (PK2, SK2): The HybENC.Key algorithm takes a security parameter λ as input and computes the keys as follows: CKE.Key(λ) → FK; PKE1.Key(λ) → (PK1, SK1); H_λ(fingerprint) → SK2; PKE2.Key(SK2) → PK2. It then outputs the keys FK, (PK1, SK1), (PK2, SK2).
HybENC.Enc(FK, PK1, PK2, m) → CT, CT2: The HybENC.Enc algorithm takes the keys FK, PK1, PK2 and a plaintext m as input and computes the ciphertexts as follows: CKE.Enc(FK, m) → CT; PKE1.Enc(PK1, m1 := FK) → CT1; PKE2.Enc(PK2, m2 := CT1) → CT2. It then outputs the ciphertexts CT, CT2.
HybENC.Dec(FK, SK1, SK2, CT, CT2) → m: The HybENC.Dec algorithm takes the keys SK1, SK2 and the ciphertexts CT, CT2 as input and executes the decryption as follows: PKE2.Dec(SK2, CT2) → CT1; PKE1.Dec(SK1, CT1) → FK; CKE.Dec(FK, CT) → m. It then outputs the decryption result m.
Our Concrete Construction of ABE without Key Misuse
We apply the above template of our hybrid encryption scheme to a scheme in the attribute-based setting. The plaintext is encrypted using the attribute information and the terminal fingerprint information. The advantage of this scheme is that, since the terminal fingerprint information is checked, the content is difficult to use for anyone except authorized users.
We now give our construction by employing Waters' CP-ABE as PKE1 in the hybrid encryption scheme of Section 3.1.
In our construction the set of users is U = {u_1, ..., u_m} and the attribute universe is {X_1, ..., X_k}. The random exponent used for encryption is denoted s ∈ Z_p. Note that the secret keys below are randomized to avoid collusion attacks.
Auth.Setup(λ) → PK, MK: The Auth.Setup algorithm chooses a bilinear group G_1 of prime order p with generator g and a bilinear map e: G_1 × G_1 → G_2. It then chooses two random exponents a, b ∈ Z_p and a hash function H: {0,1}* → G_1. The public key is published as PK = (g, g^b, e(g, g)^a) and the system master secret key is MK = g^a.
DO.Setup(v, w) → FK: The DO.Setup algorithm chooses a prime-order group with generator q and two random exponents v, w ∈ Z_p. The common key is established by a Diffie-Hellman key exchange: FK = (q^v)^w = (q^w)^v, computed in the chosen group.
C.Enc(FK, m) → CT: The common-key encryption algorithm C.Enc takes FK and a plaintext m as input and outputs a ciphertext CT.
Auth.Ext(MK, S) → SK: The Auth.Ext algorithm takes the master secret key MK and a set of attributes S as input and chooses a random t ∈ Z_p for each user.
It creates the secret key as SK = (g^{a+bt}, g^t, (K_X)_{X∈S}) with K_X = H(X)^t for every X ∈ S.
U.Setup(SK, f) → F, D: The U.Setup algorithm takes the user's fingerprint information f and computes the hash value H(f) = D (in this paper we use RSA for the re-encryption). It chooses two primes p', q' and sets N = p'q'. Next it computes E such that DE ≡ 1 mod (p'-1)(q'-1). The user's terminal-fingerprint public key is F = (N, E); the user keeps D as the terminal-fingerprint secret key.
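The derivation of the terminal-fingerprint key pair can be pictured with the small Python sketch below. It is only an illustration of the U.Setup / Auth.ReEnc / U.Dec flow on plain integers, with SHA-256 standing in for H and small toy primes; the group-element encoding, padding and parameter sizes of a real deployment are omitted, and the fix-up loop for non-invertible hashes is our own addition.

```python
# Illustrative sketch (not the paper's code): D = H(fingerprint) is used as the
# RSA secret exponent, E as its inverse mod phi(N).
import hashlib
from math import gcd

def u_setup(fingerprint: bytes, p: int, q: int):
    """Derive F = (N, E) and D from the terminal fingerprint; p, q are toy primes."""
    n, phi = p * q, (p - 1) * (q - 1)
    d = int.from_bytes(hashlib.sha256(fingerprint).digest(), "big") % phi
    while gcd(d, phi) != 1:       # toy fix-up: nudge to an exponent invertible mod phi
        d += 1
    e = pow(d, -1, phi)           # E such that D*E = 1 mod phi(N)
    return (n, e), d              # F = (N, E) is public, D stays on the terminal

def auth_reenc(ft: int, f):       # data server side: FT' = FT^E mod N
    n, e = f
    return pow(ft, e, n)

def u_dec(ft2: int, d: int, n: int):  # terminal side: (FT')^D mod N = FT
    return pow(ft2, d, n)

# Toy run with small primes (insecure, for illustration only).
F, D = u_setup(b"example-terminal-fingerprint", 10007, 10009)
ft = 424242
assert u_dec(auth_reenc(ft, F), D, F[0]) == ft
```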
Auth.Enc(PK, FK, W) → FT: The Auth.Enc algorithm takes the public parameter PK, the common key FK, and an access structure (W, ρ) over the universe of attributes as input, and encrypts FK under it. Here W is an ℓ × n matrix and the function ρ associates each row of W with an attribute. The algorithm first chooses a random vector v = (s, y_2, ..., y_n) ∈ Z_p^n, used to share the encryption exponent s, and random exponents r_1, ..., r_ℓ ∈ Z_p. For i = 1 to ℓ it computes λ_i = v · W_i, where W_i is the i-th row of W. It outputs the ciphertext FT = (FK · e(g, g)^{as}, g^s, Ĉ), where Ĉ = ((g^{bλ_1} H(X_1)^{-r_1}, g^{r_1}), (g^{bλ_2} H(X_2)^{-r_2}, g^{r_2}), ..., (g^{bλ_ℓ} H(X_ℓ)^{-r_ℓ}, g^{r_ℓ})).
Auth.ReEnc(FT, F) → FT': The Auth.ReEnc algorithm takes the ciphertext FT and the user's terminal-fingerprint public key F = (N, E) as input. The re-encrypted ciphertext is published as FT' = (FT)^E mod N, where (FT)^E = (FK · e(g, g)^{asE}, g^{sE}, (Ĉ)^E), i.e., every component of FT is raised to the power E.
U.Dec(FT', D) → FT: The U.Dec algorithm takes the re-encrypted ciphertext FT' and the terminal-fingerprint secret key D as input and computes (FT')^D = (FT^E)^D = FT mod N.
U.ReDec(FT, SK) → FK: The U.ReDec algorithm takes the ciphertext FT and the secret key SK as input, where SK was issued for an attribute set S and FT is encrypted under the access structure (W, ρ). Suppose that S satisfies the access structure and define I = { i : ρ(i) ∈ S } ⊆ {1, 2, ..., ℓ}; then there exist constants { ω_i ∈ Z_p }_{i ∈ I} such that Σ_{i ∈ I} ω_i λ_i = s whenever { λ_i } are valid shares of a secret s. The re-decryption algorithm computes e(g^s, g^{a+bt}) / ∏_{i ∈ I} ( e(g^{bλ_i} H(X_i)^{-r_i}, g^t) · e(g^{r_i}, H(X_i)^t) )^{ω_i} = e(g, g)^{as} and recovers the common key as FK = (FK · e(g, g)^{as}) / e(g, g)^{as}.
C.Dec(FK, CT) → m: The C.Dec algorithm takes the common key FK and the ciphertext CT as input and outputs the message m.
Discussion
This paper shows that confidentiality of the shared data that has been encrypted can be protected and it is difficult to reveal the secret keys in the proposed scheme. The proposed scheme is secure against chosen-plaintext attacks because the underlying ABE scheme is secure against chosen-plaintext attacks. If the encrypted data is published, our scheme also resists attacks from colluding users. If the attacker did not know the terminal fingerprint information of the legitimate user, they wouldn't be able to get the secret key.
In this study, we proposed a cryptosystem to improve security. In the proposed method, the data server only sends the re-encrypted ciphertext and private information to the user, whereas in a conventional scheme the data server has to send both the ciphertext and the secret key. In addition, the user creates the secret key and the re-encryption key from the private information. The user then keeps the secret key and sends the re-encryption key to the data server; the data server uses the re-encryption key to re-encrypt the ciphertext and finally sends the re-encrypted ciphertext back to the user.
The proposed cryptosystem utilizes the terminal fingerprint information of the user. The terminal fingerprint is assumed to be unchangeable and unknowable, and only the key generation, encryption and decryption programs running on the trusted terminal can obtain the value of the fingerprint. The proposed scheme is built on these conditions. The terminal fingerprint information is different for each user, so it can be used as a user ID while the anonymity of the user's own information is preserved. Misuse of the terminal fingerprint, such as transferring the secret key, is meaningless: since the secret key of a legitimate user includes that user's terminal fingerprint information, the fingerprint differs on any other terminal and the secret key is effectively revoked there. The safety of the secret key is increased in this way.
We proposed a hybrid encryption scheme in which a public key encryption scheme can be utilized. It is also easy to add, update and delete user's information. Then, we do not need a credible third party to guarantee the security of encryption and authenticate a user. In this scheme, the secret key is generated and stored by the user, protecting the secret key against communication channel attack.
Our scheme requires each user to provide their own terminal information to the key management center. If there is a large number of simultaneous user applications, the workload of the management center can be quite heavy, so in future work we should consider decreasing the computational complexity of the re-encryption.
Finally, the system ensures that the key cannot be copied, forwarded, and so on, and thereby the safety of the secret key is provided.
Conclusion and Future Work
In this study, we combined the user's terminal fingerprint data with a public and secret key pair. Furthermore, we proposed a cryptographic scheme that updates the secret key during the decryption phase using the terminal fingerprint information. As a result, the secret key is protected by ensuring that it does not operate outside the terminal on which the key pair was generated, even if an attacker eavesdrops on the user's secret key.
As future work, the encryption and decryption times can be optimized by proposing suitable algorithms. Furthermore, a remaining security issue of our proposed method is that, when the user connects to the Internet, the terminal fingerprint could be eavesdropped on by an attacker; a proper solution should be proposed to mitigate this issue.
Acknowledgements
The second author is partially supported by Grants-in-Aid for Scientific Research of Japan Society for the Promotion of Science; Research Project Number:15K00029. The fourth author is partially supported by Grants-in-Aid for Scientific Research of Japan Society for the Promotion of Science; Research Project Number: 15H02711. (Authors: Chunlu Chen, Hiroaki Anada, Junpei Kawamoto, Kouichi Sakurai) | 21,842 | [
"1001294"
] | [
"21443",
"484818",
"484818",
"21443",
"484818",
"21443",
"484818"
] |
01466229 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466229/file/978-3-319-24315-3_29_Chapter.pdf | Bo Liu
Baokang Zhao
email: [email protected]
Chunqing Wu
email: [email protected]
Wanrong Yu
email: [email protected]
Ilsun You
email: [email protected]
Efficient Almost Strongly Universal Hash Function for Quantum Key Distribution
Keywords:
Quantum Key Distribution (QKD) technology, based on principles of quantum mechanics, can generate unconditionally secure keys for communicating parties. Information-theoretically secure (ITS) authentication, a compulsory procedure of QKD systems, prevents man-in-the-middle attacks during security key generation. The construction of hash functions is the paramount concern within ITS authentication. In this extended abstract, we propose a novel efficient NTT-based ε-Almost Strongly Universal (ε-ASU) hash function. The security of our NTT-based ε-ASU hash function meets the requirements of ITS authentication.
With the ultra-low computational cost of its construction and hashing procedures, our proposed NTT-based ε-ASU hash function is suitable for QKD systems.
Introduction
With the rapid development of computing technologies, the importance of secure communication is growing daily [START_REF] Liu | A study of IP prefix hijacking in cloud computing networks[END_REF][START_REF] Rieke | Security Compliance Tracking of Processes in Networked Cooperating Systems[END_REF][START_REF] Kotenko | Guest Editorial: Security in Distributed and Network-Based Computing[END_REF][START_REF] Skovoroda | Securing Mobile Devices: Malware Mitigation Methods[END_REF]. Unlike conventional cryptography, which is based on computational complexity, Quantum Key Distribution (QKD) can achieve unconditionally secure communication [1, 2] [18, 19, 20]. By transmitting security key information with quantum states, the final key generated by a QKD system is information-theoretically secure (ITS), which is guaranteed by the no-cloning theorem and the measurement collapse theorem of quantum physics [START_REF] Bennett | Quantum cryptography: Public key distribution and coin tossing[END_REF][START_REF] Ma | Universally composable and customizable post-processing for practical quantum key distribution[END_REF]. Nowadays, QKD is one of the research focuses around the world. In recent years, famous QKD network projects include SECOQC in Europe [START_REF] Leverrier | Unconditional security of continuousvariable quantum key distribution[END_REF], UQCC in Tokyo [START_REF] Sasaki | Field test of quantum key distribution in the Tokyo QKD Network[END_REF] and NQCB in China [7], among others. ITS authentication is a compulsory procedure of a QKD system and also the key procedure that ensures the security of the keys generated between the communicating parties [START_REF] Ma | Universally composable and customizable post-processing for practical quantum key distribution[END_REF][START_REF] Ma | Practical Quantum key Distribution post-processing[END_REF]; otherwise, the system is vulnerable to man-in-the-middle attacks [START_REF] Abidin | Authentication in Quantum Key Distribution: Security Proof and Universal Hash Functions[END_REF][START_REF] Pacher | Attacks on quantum key distribution protocols that employ non-ITS authentication[END_REF][START_REF] Ioannou | Unconditionally-secure and reusable public-key authentication[END_REF]. The main challenge in the research on ITS authentication is the construction of hash functions that are suitable for ITS authentication while consuming less security key [START_REF] Abidin | Authentication in Quantum Key Distribution: Security Proof and Universal Hash Functions[END_REF][START_REF] Portmann | Key Recycling in Authentication[END_REF][START_REF] Krawczyk | LFSR-based Hashing and Authentication[END_REF][START_REF] Wegman | New hash functions and their use in authentication and set equality[END_REF].
Usually, ε-Almost Strongly Universal (ε-ASU) hash functions can be used to construct ITS authentication schemes in a natural way. Most construction schemes focus on ε-ASU2 hash function families, such as Wegman-Carter's and Krawczyk's constructions [START_REF] Krawczyk | LFSR-based Hashing and Authentication[END_REF][START_REF] Wegman | New hash functions and their use in authentication and set equality[END_REF]. Nowadays, the photon transmission frequency has reached about ten GHz [START_REF] Wang | 2 GHz clock quantum key distribution over 260 km of standard telecom fiber[END_REF][START_REF] Tanaka | High-Speed Quantum Key Distribution System for 1-Mbps Real-Time Key Generation[END_REF]. With their heavy computational load, ITS authentication schemes based on ε-ASU2 hash functions cannot meet the high performance requirements of QKD systems [START_REF] Abidin | Authentication in Quantum Key Distribution: Security Proof and Universal Hash Functions[END_REF][START_REF] Krawczyk | LFSR-based Hashing and Authentication[END_REF][START_REF] Carter | Universal classes of hash functions[END_REF].
In this extended abstract, we propose a novel efficient ε-Almost Strongly Universal hash function based on the number-theoretic transform (NTT). Exploiting the special features of NTT, our ε-ASU hash function family is constructed over the prime ring Z_p^L.
NTT-based Almost Strongly Universal Hash Function
Since its construction consumes a very long key, Gilles's NTT-based almost universal hash function is not suitable for ITS authentication [START_REF] Liu | Qphone: A quantum security VoIP phone[END_REF]. With a partially known security key and an LFSR structure [START_REF] Krawczyk | LFSR-based Hashing and Authentication[END_REF], a random bit stream can be generated to construct the NTT-based almost strongly universal (NASU) hash functions.
Let the set of messages be R = Z_p^L, and let C ∈ Z_p^L be a key stream generated from the partially known security key and the LFSR. The hash value h_C(R) is obtained by taking the inverse NTT of the component-wise product of C and the message R, keeping only the first u elements of the result and reducing them modulo 2^⌈log₂ p⌉. The resulting set H = { h_C } is an ε-almost strongly universal family of hash functions; with an authentication tag of n bits, the collision parameter satisfies ε ≤ 2nL / 2^n.
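To make the construction concrete, the following Python sketch (ours, not from the original abstract) hashes a message block in Z_p^L by taking the inverse NTT of its component-wise product with a key stream and truncating the result; the naive O(L^2) transform and the toy parameters p = 17, L = 4 are chosen purely for readability.

```python
# Illustrative NASU-style hashing over Z_p^L (toy parameters, not a secure setup).

def naive_ntt(a, p, w):
    """Naive O(L^2) number-theoretic transform of vector a over Z_p,
    using w as a primitive L-th root of unity mod p."""
    L = len(a)
    return [sum(a[j] * pow(w, i * j, p) for j in range(L)) % p for i in range(L)]

def naive_intt(A, p, w):
    """Inverse NTT: the same transform with w^{-1}, scaled by L^{-1} mod p."""
    L = len(A)
    a = naive_ntt(A, p, pow(w, -1, p))
    L_inv = pow(L, -1, p)
    return [(x * L_inv) % p for x in a]

def nasu_hash(message, key_stream, p, w, u):
    """Inverse NTT of the component-wise product, truncated to u coordinates."""
    assert len(message) == len(key_stream)
    prod = [(m * c) % p for m, c in zip(message, key_stream)]
    return naive_intt(prod, p, w)[:u]

# Toy example: p = 17, L = 4, and 4 is a primitive 4th root of unity mod 17.
p, w = 17, 4
msg = [3, 14, 1, 5]
key = [7, 2, 11, 9]
print(nasu_hash(msg, key, p, w, u=2))
```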
Potential Advantages
Compared with ASU2 hash functions, our proposed NASU hash functions have the following potential advantages:
(a) NASU hash functions can easily be constructed with a partially known security key and an LFSR structure.
(b) Thanks to the special features of the number-theoretic transform (NTT), the computational cost of our NASU hashing procedure is much lower than that of Krawczyk's scheme and other ASU2 hash functions.
(c) Treating the elements of the input messages as non-binary integers of the ring Z_p^L, our proposed NTT-based ε-ASU hash function is very suitable for ITS authentication in QKD systems.
In the future, we will explore the detailed security proof of NASU hash functions and its deployment within the QKD system.
In order to construct the NTT-based ε-ASU hash function efficiently, we assume that the length L is a power of two and that the prime number p is chosen so that an NTT of length L exists over Z_p (i.e., L divides p-1). The set of all messages is R = Z_p^L, of length L, and the length of the authentication tag is n, where n ≤ ⌈log₂ p⌉. The security key length consumed by the resulting ITS authentication scheme is less than 31n.
The corresponding author: Dr. Baokang Zhao, email address: [email protected]. | 7,612 | [
"1001297",
"993483",
"994601",
"993532",
"993476"
] | [
"302677",
"302677",
"302677",
"302677",
"472285"
] |
01466230 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466230/file/978-3-319-24315-3_2_Chapter.pdf | Andrej Binder
email: [email protected]
Tomas Boros
email: [email protected]
Ivan Kotuliak
email: [email protected]
A SDN Based Method of TCP Connection Handover
Keywords: Software Defined Networks, Network Protocols, Transmission Control Protocol, Telecommunications
Introduction
TCP handover is the act of handing over the role of one of the two communicating endpoints to a third endpoint that was initially not a part of the communication. The reasons for this can be for example:
• Load-balancing
• Traffic path optimization
• A transparent redirection mechanism
• Switchover of network interfaces
The common solution for this problem has been to terminate the running connection and re-initiate the connection with a new host. This is a common practice on the internet today. [START_REF]RFC: 2616 -Hypertext Transfer Protocol[END_REF] [START_REF]Transmission control protocol[END_REF] The problems with this approach are:
• Latency caused by an additional TCP handshake
• It needs to be implemented on the application layer
• It is non-transparent
• TCP windows are reset, resulting in sub-optimal performance
One area where this problem is especially apparent is the area of Content Delivery Networks. Most CDN architectures leverage a redirect mechanism to initiate the connection between a client and the most appropriate server to serve specific content. Introducing delays in this step results in noticeably slower content playback startups, which are even more apparent in the case of CDN Federations, where multiple redirects often take place before the client can connect to the server. [START_REF] Faucheur | Content Distribution Network Interconnection (CDNI) Problem Statement[END_REF] [7] [START_REF]CDN Interconnection Architecture[END_REF] Our method to address this issue is to make use of Software Defined Network technology. This technology makes it possible to enhance the network with functionality that not only allows TCP handovers but also makes them controllable by the SDN controller itself. [START_REF] Kim | CDNI Request Routing with SDN[END_REF] [START_REF] Halagan | Modification of TCP SYN flood (DoS) attack detection algorithm[END_REF] 2
Software defined networks
The main disadvantage of traditional network technologies is lack of flexibility in implementing new features. Because of requirements related to standardization, testing and the drawbacks of deploying new code in a fully proprietary environment, new features usually take years to be agreed upon. Even then they often face limited success because of the difficulties and risks related to changing something in an environment that was essentially designed to serve a very specific purpose. One example of such technology is multicast that has existed for decades but did not succeed in being globally distributed because of the reasons listed above. [START_REF] Feamster | The Road to SDN: an intellectual history of programmable networks[END_REF] [11] [START_REF] Bonaventure | An Overview of Multipath TCP[END_REF] [13] [START_REF] Kozemčák | Different Network Traffic Measurement Techniques -Posssibilities and Results[END_REF] Software Defined Networks (SDN) present a radically different approach to designing networks that is built from ground up to make implementation of new features and services as easy as possible. It achieves this by splitting the data plane (responsible for forwarding traffic) and the control plane (responsible for higher level decisionmaking and configuration of the data plane) into two separate entities. Furthermore it also changes the logical placement of these entities. In traditional networks both the control plane and the data plane was confined within a single networking device, making development of complex control plane to control plane communication protocols necessary. In SDNs the data plane stays distributed but the control plane is removed from the physical device and placed into a centralized node responsible for managing all the data planes in the network. This centralized control plane is called a Controller in SDN terminology. [START_REF]OpenFlow Switch Specification: Version 1.3.0 Implemented[END_REF] [START_REF] Kim | CDNI Request Routing with SDN[END_REF] The SDN Controller is a fully software-based element that does not have the burden of having to communicate every single decision to any of its peers. This means that new features can be quickly added to the controller and they will be instantly available throughout the whole network that is under its control. The resources available in the data plane under its controls are the Controller`s only limiting factor. It does not have to follow a specific protocol that dictates exactly how these resources should be used. [START_REF] Team | RYU: SDN Framework (Online)[END_REF]
TCP handover method in SDN networks
Our approach to addressing TCP handover relies on the following features of Software Defined Networks:
• The ability to intercept specific packets and redirect them for processing in the control plane
• The ability to modify the data plane in such a way that it rewrites the destination IP address of a packet according to a rule defined by the control plane
• The fact that a SDN network is limited to a single autonomous system (administered by a single organization), in which the occurrence of triangular routing is not considered a problem as long as it is fully controlled
In addition to these requirements at least one of the following features must also be available:
• The ability to synchronize (increment or decrement according to a rule) TCP SEQ and ACK numbers in the data plane
• The ability to synchronize (increment or decrement according to a rule) TCP SEQ and ACK numbers in the host device
• The ability to predict the SEQ number that would be chosen by a host for a new incoming connection (described later)
The initial use case that this method was designed for was the implementation of a transparent redirect mechanism for use in Content Delivery Networks so we will use this environment to describe the method`s operating principle. We will later describe how to use the method in any other scenario.
When a client initializes a new TCP connection to a server, it sends a TCP segment encapsulated in an IPv4 or IPv6 packet to a destination address that identifies the service to be accessed. In the first TCP segment the client sets the SYN flag, chooses an initial sequence number (SEQ), sets the ACK number to 0, sets the window size and optionally sends some OPTION parameters. The server on the receiving side goes to the SYN_RCVD state and sends back its own TCP segment and parameters to the client: it sets the SYN and ACK flags, chooses its own sequence number and sets the ACK number to the client's sequence number + 1, thereby telling the client to send the next TCP window. The server chooses a window size too and sets some optional parameters in the OPTION fields. The client then sends back an ACK message to acknowledge the parameters of the server. At this moment the session goes to the ESTABLISHED state, and the client may request the data (for example in an HTTP GET message).
This is what normally happens in networks. Let us say, however, that the IP address the client was communicating with is not an IP address directly attached to a specific server but an IP address defined in the network to identify a specific service. Any TCP packets sent to this IP that are meant to initiate a TCP connection with a server will not be delivered directly to a server but will instead be redirected to a SDN Controller for further processing. The controller then keeps acting on behalf of the server up to the point when it can decide which actual server would be best to deliver the service. In the context of CDN networks this means up until the point when the Controller is aware of the HTTP URI that the client intends to access. It then modifies the data plane to rewrite the destination IP address of all future packets from the client to the IP address of the chosen server.
This would work perfectly in a UDP-based scenario where packets are treated as separate atomic elements. In a TCP environment all communication is handled in the context of sessions that are kept consistent by communicating the sequence numbers of packets in each transmission and acknowledging them on the other side. The problem with this approach in the context of our method is that we cannot control the initial SEQ number that the client chooses or the SEQ number that the final server would choose. This means that, without addressing this issue, the communication would not work: even if the source and destination addresses of the packets were correct, the TCP session would fail because both sides would not be able to agree on which sequence number should follow.
The full method is depicted in the following sequence diagram:
There are two basic ways to address this:
• Be able to synchronize the SEQ and ACK numbers by incrementing or decrementing them in the data plane
• Be able to predict the SEQ number that the server will choose, so that the connection from the controller can be started with a SEQ number that is in sync with the SEQ number the server would choose right from the start
The only disadvantage of the first approach is that the SDN data plane element (also called a SDN Forwarder) closest to the server needs to have the capability to increment and decrement the SEQ and ACK numbers according to a chosen rule. The fact is that while rewriting of the destination IP address is a standard SDN data plane function that is available in basically all SDN Forwarders, the functions of incrementing or decrementing SEQ and ACK numbers are not standard functions. This means that most hardware data plane elements are not able to perform the operation.
This can be easily addressed in environments where we have the server under control. We simply place a small SDN Forwarder in the operating system of the server and link it to the controller. This small forwarder is a data plane element only capable of performing the operation of synchronizing the SEQ/ACK numbers, an operation that is very easy to implement in the all-software environment of a server.
The following figure depicts this scenario:
The second approach requires the modification of the TCP stack on the server. The SEQ number the TCP stack chooses would not be chosen randomly as is usually done, but would be chosen according to a hash of the incoming SYN packet. This means that when a SYN packet is sent to such a server, the sender has the ability to calculate and predict the initial SEQ number that the server will choose. In order to maintain security, a shared secret (shared between the controller and the server) can also be added to the SYN packet in order to make it harder for a third party to step into the communication.
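A minimal sketch of such a predictable-but-keyed ISN derivation is shown below. This is our illustration, not the authors' modified TCP stack; the choice of fields and the use of HMAC-SHA-256 truncated to 32 bits are assumptions made for the example.

```python
# Hypothetical sketch: deriving the server's initial sequence number from the
# incoming SYN so that a party knowing the shared secret can predict it.
import hmac
import hashlib

def predictable_isn(secret: bytes, src_ip: str, dst_ip: str,
                    src_port: int, dst_port: int, client_isn: int) -> int:
    """Return a 32-bit ISN derived from the SYN's addressing fields."""
    msg = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{client_isn}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")  # truncate to 32 bits

# Both the modified server stack and the SDN controller can evaluate this:
secret = b"controller-and-server-shared-secret"
isn = predictable_isn(secret, "10.0.0.5", "10.0.0.80", 51514, 80, 123456789)
print(isn)
```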
Implementation and testing
To prove that the approach is fully functional, we have implemented a prototype and tested it with real clients and servers in the environment of CDN networks.
We have created a new version of the Ofsoftswitch13 SDN Forwarder with the additional TCP SEQ and ACK synchronization functions. We did this by adding new actions based on the SET_FIELD action defined by the OpenFlow 1.3 standard. We called these actions SET_TCP_SEQ and SET_TCP_ACK, used to modify the SEQ and ACK numbers respectively.
For example, if we install an action SET_TCP_SEQ with argument 1000, every incoming TCP segment that matches the corresponding matching rule will have its sequence number incremented by 1000 on the outgoing interface. The same applies to the ACK number with SET_TCP_ACK. Using these actions correctly, we are able to synchronize the TCP sequence and acknowledgement numbers of the two separate TCP connections.
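The arithmetic behind those two actions can be illustrated with the short Python sketch below; it is only a model of the offset computation (the function names and the split into two helpers are ours), not code from the modified Ofsoftswitch13.

```python
# Illustrative model of SEQ/ACK synchronization during a TCP handover.
# controller_isn: ISN the controller used towards the client,
# server_isn:     ISN the real server chose for the spliced connection.
MOD = 2**32  # TCP sequence numbers are 32-bit modular counters

def handover_deltas(controller_isn: int, server_isn: int):
    """Deltas to install: (SEQ delta for server->client, ACK delta for client->server)."""
    delta = (server_isn - controller_isn) % MOD
    return (-delta) % MOD, delta

def apply_actions(seq: int, ack: int, seq_delta: int, ack_delta: int):
    """What SET_TCP_SEQ / SET_TCP_ACK conceptually do to one header."""
    return (seq + seq_delta) % MOD, (ack + ack_delta) % MOD

seq_d, ack_d = handover_deltas(controller_isn=1000, server_isn=2000)
# Server->client segment: shift SEQ back into the sequence space the client expects.
print(apply_actions(seq=2001, ack=5001, seq_delta=seq_d, ack_delta=0))
# Client->server segment: shift ACK forward into the server's sequence space.
print(apply_actions(seq=5001, ack=1001, seq_delta=0, ack_delta=ack_d))
```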
In addition to the modified SDN Forwarder we also needed our own SDN Controller that we could easily modify. In the end we chose the Ryu SDN controller. It is an open-source SDN controller that is freely available, well documented and easy to modify. This controller also fully supports the OpenFlow 1.3 protocol which allowed us to re-use most of the needed functionality. The controller was also modified to be able to track the state of the session in order to get more visibility into what is happening in the network.
These two components allowed us to fully test our method. The testing also showed that in addition to proving that the method actually works, it has the following benefits:
• Faster session establishment and shorter interruption in comparison with application-level redirect methods
• No need for extra DNS queries
• No need to implement application-level redirect mechanisms
• Fully transparent to the client
Conclusion
We designed, implemented and thoroughly tested a new method of TCP connection handover in the environment of SDN networks.
We have created a prototype SDN Forwarder and a prototype SDN Controller that we used to prove the functionality of the method.
Our tests using these prototypes proved that we can achieve faster handover times as compared to traditional application-level redirect methods that require a complete reestablishment of TCP sessions. Furthermore this was all done in a manner that is fully transparent to the client and requires no modification of the server application.
In the end the method proves that SDN technology is a great platform for implementing interesting new functions into the network environment.
Acknowledgements
This work is a result of the Research and Development Operational Program for the projects Support of Center of Excellence for Smart Technologies, Systems and Services, ITMS 26240120005 and for the projects Support of Center of Excellence for Smart Technologies, Systems and Services II, ITMS 26240120029, co-funded by ERDF.
The authors would like to thank Oskar van Deventer and his team at the Dutch Organization for Applied Scientific Research (TNO) for their invaluable help. | 14,176 | [
"1001298",
"1001299",
"1001285"
] | [
"259428",
"259428",
"259428"
] |
01466232 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466232/file/978-3-319-24315-3_32_Chapter.pdf | Martin Sarnovský
email: [email protected]
Peter Butka
email: [email protected]
Peter Bednár
email: [email protected]
František Babič
email: [email protected]
Ján Paralič
email: [email protected]
Analytical Platform Based on Jbowl Library Providing Text-Mining Services in Distributed Environment
Keywords: text and data mining, software library in Java, data preprocessing, web portal
Introduction
The question of integrated analytical solutions has become interesting in recent years as a way to improve end-users' orientation in the wide range of available services, methods, algorithms and tools. The aim is to bring these services closer to non-expert users and to provide the possibility to use them without deep knowledge of their implementation details or internal modes of operation. The work presented in this paper represents our activities in building a coherent and complex system for text mining experiments built upon a distributed computing infrastructure. Such an infrastructure can offer computational effectiveness and data storage facilities for the proposed on-line analytical tool, which comprises various services for knowledge discovery in texts and provides specific data and computing capacity. Our main motivation is to provide a coherent system leveraging distributed computing concepts and providing a simple user interface for users as well as an administration and monitoring interface.
Text mining [START_REF] Feldman | The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data[END_REF] aims at the discovery of hidden patterns in textual data. For this topic, a textbook [START_REF] Paralič | Text Mining (in Slovak: Dolovanie znalostí z textov)[END_REF] is available, which we wrote in Slovak for our students. It describes the whole process of knowledge discovery from textual collections. We describe in detail all preprocessing steps (such as tokenization, segmentation, lemmatization, morphologic analysis, stop-words elimination), we discuss various models for representation of text documents and focus on three main text mining tasks [START_REF] Feldman | The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data[END_REF]: (1) text categorization [START_REF] Sebastiani | Machine learning in automated text categorization[END_REF], [START_REF] Machová | Various approaches to web information processing[END_REF]; (2) clustering of textual documents [START_REF] Sarnovský | Grid-based support for different text mining tasks[END_REF], [START_REF] Rauber | Empirical Evaluation of Clustering Algorithms[END_REF]; (3) information extraction from texts [START_REF] Sarawagi | Information extraction[END_REF], [START_REF] Machová | Information extraction from the web pages using machine learning methods[END_REF].
Finally, we describe service-oriented view on text mining and present also selected distributed algorithms for text mining. Second part of the textbook [START_REF] Paralič | Text Mining (in Slovak: Dolovanie znalostí z textov)[END_REF] is devoted to description of our Jbowl (Java bag of words library) presenting its architecture, selected applications and a couple of practical examples, which help our students easier start for practical work with Jbowl on their own text mining problems. In this paper we want to present the latest advancements in Jbowl library, which makes it usable also for big data text mining applications and invite broader audience of the World Computer Congress to use this library in various text mining applications.
2
Concept of analytical library
Jbowl
Jbowl1 is a Java library that was designed to support different phases of the whole text mining process and offers a wide range of relevant classification and clustering algorithms. Its architecture integrates several external components, such as JSR 173 -API for XML parsing, or Apache Lucene2 for indexing and searching. The library was proposed as an outcome of a detailed analysis of existing free software tools in the relevant domain [START_REF] Bednár | Java Library for Support of Text Mining and Retrieval[END_REF]. The motivation behind the design of this library was, on the one hand, the existence of many fragmented implementations of different algorithms for processing, analysis and mining of text documents within our research team and, on the other hand, the lack of an equivalent integrated open source tool. The main aim at that time was not to provide a simple graphical user interface with the possibility to launch selected procedures, but to offer a set of services necessary to create one's own text mining stream customized to concrete conditions and specified objectives. The initial Jbowl version included:
 Services for management and manipulation of large sets of text documents.
 Services for indexing, complex statistical text analyses and preprocessing tasks.
 An interface for knowledge structures such as ontologies, controlled vocabularies or the lexical WordNet database.
 Support for different formats such as plain text, HTML or XML, and for various languages.
These core functionalities have been continuously extended and improved based on new requirements and expectations expressed by researchers and students of our department. Detailed information can be found in [START_REF] Butka | Distributed task-based execution engine for support of text-mining processes[END_REF] or [11].
The second main update of the library added the possibility to run text mining tasks in a distributed environment within a task-based execution engine. This engine provides a middleware-like transparent layer (mostly for programmers wishing to re-use functionality of the Jbowl package) for running different tasks in a distributed environment [START_REF] Butka | One Approach to Combination of FCA-based Local Conceptual Models for Text Analysis -Grid-based Approach[END_REF]. In the next step, new services for aspect-based sentiment analysis and Formal Concept Analysis -FCA (cf. [START_REF] Ganter | Formal Concept Analysis: Mathematical Foundations[END_REF]) were added [START_REF] Butka | Design and implementation of incremental algorithm for creation of generalized one-sided concept lattices[END_REF] to extend the application potential of the library in line with current trends. For the FCA subpart, the Jbowl BLAS (Basic Linear Algebra Subprograms) implementation for processing of matrices was used and extended in order to work with FCA models known as generalized one-sided concept lattices [15] and to use them for other purposes, such as the design and implementation of an FCA-based conceptual information retrieval system [16]. There is also an extension of services related to processing of sequences within data sets and processing of graph-based data, which is partially based on the Jbowl API and its models.
Services for distributed data analysis
Our main motivation was to use the problem decomposition method and apply it to data-intensive analytical tasks. A distributed computing infrastructure such as a grid or cloud makes it possible to utilize computational resources for such tasks by leveraging parallel and distributed computing concepts. There are also several existing frameworks offering different methods of parallel/distributed processing based on principles such as MapReduce, in-memory computing, etc. In order to support computation-intensive tasks and improve the scalability of the Jbowl library, we decided to use the GridGain3 platform for distributed computing. The Jbowl API was used as a basis for the particular data processing and analytical tasks. We decided to design and implement distributed versions of the classification and clustering algorithms implemented in Jbowl. The currently implemented algorithms are summarized in Table 1.
Table 1. Overview of currently implemented supervised and unsupervised models in Jbowl (sequential and distributed implementations): decision tree classifier, k-nearest neighbor classifier, rule-based classifier, Support Vector Machine classifier, boosting compound classifier, k-means clustering, GHSOM clustering.
In general, the process of text mining model (classification or clustering) creation is split into sub-processes. As depicted in Fig. 1, one of the nodes in the distributed computing infrastructure (the master node) performs the decomposition of the particular task
(data or model-driven) and then assigns particular sub-tasks to available resources in the infrastructure (worker nodes). Worker nodes produce partial outputs, which correspond to partial models (on partial datasets). Those partial models are collected and merged into the final model on the master node at the end. The concrete implementation of sub-task distribution differs between particular model types; we further introduce the most important ones. For induction of decision trees, the Jbowl library implements a generic algorithm where it is possible to configure various criteria for splitting data on decision nodes and various pruning methods for post-processing of the induced tree. The distributed version of the classifier [17] considers the multi-label classification problem, where a document may be classified into one or more predefined categories. In that case, each class is treated as a separate binary classification problem and the resulting model consists of a set of binary classifiers. The particular binary classifiers then represent the sub-tasks computed in a distributed fashion.
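The decomposition of a multi-label problem into independent per-label sub-tasks can be sketched roughly as follows. This is our own simplified Python illustration, not the Jbowl/GridGain code: a thread pool stands in for the worker nodes and a trivial vocabulary-based "classifier" stands in for the real learners.

```python
# Rough sketch: train one binary model per label in parallel, merge into a
# multi-label model on the "master" side.
from concurrent.futures import ThreadPoolExecutor

def train_binary(label, documents, labels_per_doc):
    """Toy per-label 'classifier': the set of words seen in positive documents."""
    positive = [doc for doc, labs in zip(documents, labels_per_doc) if label in labs]
    vocabulary = set(word for doc in positive for word in doc.split())
    return label, vocabulary

def train_multilabel(documents, labels_per_doc, all_labels, workers=4):
    """Master side: submit one sub-task per label, merge the partial models."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(train_binary, lab, documents, labels_per_doc)
                   for lab in all_labels]
        return dict(f.result() for f in futures)

def predict(model, document):
    """A document receives every label whose vocabulary it overlaps."""
    words = set(document.split())
    return [label for label, vocab in model.items() if words & vocab]

docs = ["football match report", "stock market analysis", "match moves the market"]
labs = [{"sport"}, {"finance"}, {"sport", "finance"}]
model = train_multilabel(docs, labs, all_labels=["sport", "finance"])
print(predict(model, "market report"))
```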
Our distributed k-nearest neighbor (k-NN) classification algorithm was inspired by [START_REF] Zhang | Efficient parallel kNN joins for large data in MapReduce[END_REF]. In this solution we used the Jbowl k-NN implementation as a basis and modified it into the distributed version that split the input data into the chunks and calculates the local k-NN models on the partitions.
Another set of Jbowl algorithms modified into distributed versions were the clustering ones. The distributed implementation of GHSOM (Growing Hierarchical Self-Organizing Maps) [19] uses the MapReduce paradigm (its GridGain implementation) and is based on parallel calculation of sub-tasks, which in this case represent the creation of hierarchically ordered maps of Growing SOM models [START_REF] Sarnovsky | Cloud-based clustering of text documents using the GHSOM algorithm on the GridGain platform[END_REF]. The main idea is the parallel execution of these clustering processes on worker nodes. The distributed version of the k-means clustering algorithm is based on methods presented in [START_REF] Joshi | Parallel K-means Algorithm on Distributed Memory Multiprocessors[END_REF] and [22]. Our approach separates the process of creation of the k clusters among the available computing resources, so the particular clusters are built locally on the assigned data.
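One iteration of such a data-partitioned k-means step can be pictured with the following sketch (again our own simplified illustration, not the Jbowl implementation): worker nodes accumulate partial sums for their data chunk and the master merges them into new centroids.

```python
# Simplified data-parallel k-means iteration: map = per-chunk partial sums,
# reduce = merge partial sums into updated centroids.
def map_chunk(chunk, centroids):
    """Worker: assign each point of the chunk to its nearest centroid and
    return per-centroid (sum_vector, count) accumulators."""
    k, dim = len(centroids), len(centroids[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for point in chunk:
        j = min(range(k), key=lambda c: sum((p - q) ** 2
                                            for p, q in zip(point, centroids[c])))
        counts[j] += 1
        sums[j] = [s + p for s, p in zip(sums[j], point)]
    return sums, counts

def reduce_partials(partials, old_centroids):
    """Master: merge the workers' accumulators into new centroids."""
    k, dim = len(old_centroids), len(old_centroids[0])
    total_sums = [[0.0] * dim for _ in range(k)]
    total_counts = [0] * k
    for sums, counts in partials:
        for j in range(k):
            total_counts[j] += counts[j]
            total_sums[j] = [a + b for a, b in zip(total_sums[j], sums[j])]
    return [[s / total_counts[j] for s in total_sums[j]] if total_counts[j]
            else old_centroids[j] for j in range(k)]

chunks = [[(0.0, 0.0), (0.2, 0.1)], [(5.0, 5.0), (5.1, 4.9)]]  # two "worker" chunks
centroids = [(0.0, 0.0), (5.0, 5.0)]
partials = [map_chunk(c, centroids) for c in chunks]           # done in parallel
print(reduce_partials(partials, centroids))
```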
The FCA algorithms are in general computationally very expensive when used on large datasets. This issue was solved by decomposition of the problem. Starting set of documents were decomposed to smaller sets of similar documents with the use of clustering algorithm. Then particular concept lattices were built upon every cluster using FCA method and these FCA-based models were combined to simple hierarchy of concept lattices using agglomerative clustering algorithm. This approach was implemented in distributed manner using the GridGain, where computing of local models was distributed between worker nodes and then combined together on master node.
Further, we have implemented specialized FCA-based algorithms for generalized one-sided concept lattices using the Jbowl API for sparse matrices and operations on them, which are able to work more efficiently with the sparse input data usually available in text-mining and information retrieval tasks. We carried out experiments to measure the reduction of computation time of the sparse-based implementations in comparison to the standard algorithms [23]. Then, a distributed version of the algorithm for creation of generalized one-sided concept lattices was designed, implemented and tested in order to show an additional reduction of computation time for FCA-based models [24]. An extended version of the experiments was carried out with real textual datasets [25], which confirmed the previous experimental results on the reduction of computation time achieved by the mentioned distributed algorithm for generalized one-sided concept lattices.
We have also recently finished the implementation of selected methods (classification and clustering) provided in a portal-based way, which are able to run user-defined experimental tasks on a BOINC-based infrastructure. BOINC4 is a well-known open-source platform for volunteer distributed scientific computing. In this case the Jbowl package is at the core of the system and is used for running text mining experiments defined by the user's setup. These experiments are decomposed into BOINC jobs, pushed to BOINC clients, and the results of their distributed computations are returned back to the server and provided to the user [26].
The vision of the whole system is to re-use computational capacities of computers within university laboratories for volunteer-based computation. Our system has potential to support researchers to start their experiments and use additional cloud-like features of distributed computing using BOINC-based infrastructure. Currently, we have implemented also a graphical user interface which hides complexity behind creation of BOINC jobs for clients using dynamic forms and automation scripts for creation of jobs and analysis and presentation of the results provided to the user.
Services for optimization
In some cases the analytical processes can be complex, so our plan for future development is to extend the system with recommendations for less experienced users to improve their orientation in the computing environment. These recommendations will be generated based on observed patterns of how other users use the system and generate results. For this purpose we will use our analytical framework designed and developed within the KP-Lab project 5. The core of this framework includes services for event logging, log storage, manipulation with logs, extraction of common patterns and visualization of event/pattern sequences [27].
Patterns can be understood as a collection (usually a sequence) of fragments, each describing a generalization of some activity performed by users within virtual environment, e.g. sequence of concrete operations leading to the successful realization of clustering analysis. The success of this method for generating recommendations based on actual user behavior in virtual computing environment strongly depends on the quality of collected logs. Extracted and visualized information and patterns can be used not only for recommendations generation, but also for evaluation of user behavior during the solving of the data analytical tasks.
Another kind of optimization methods were implemented on the resource usage level. As mentioned in previous sections, several models are deployed on and use the distributed computing infrastructure. Main objective of these optimization methods is to improve the resource utilization within the platform based on type of performed analytical task as well as on the dataset processed. Several methods were designed for that purpose.
In general, the system collects the dataset characteristics, including information about its size and structure [28]. Another kind of data is collected from the infrastructure itself. This information describes the actual state of the distributed environment, the actual state of the particular nodes as well as their performance and capacity. Depending on the type of analytical task, such information can be used to guide sub-task creation and distribution. Sub-tasks are then created so as to keep the complexity of the particular sub-tasks on the same level, and they are distributed across the platform according to the actual node usage and the available performance and capacity.
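A very small illustration of this kind of capacity-aware sub-task sizing is given below (our own sketch; weighting nodes by a single "capacity" number is a simplification of the information the platform actually collects).

```python
# Toy capacity-aware partitioning: split a list of documents into chunks whose
# sizes are proportional to the reported capacity of each worker node.
def partition_by_capacity(documents, node_capacities):
    total = sum(node_capacities.values())
    chunks, start = {}, 0
    items = list(node_capacities.items())
    for i, (node, cap) in enumerate(items):
        # the last node takes the remainder to avoid rounding gaps
        size = len(documents) - start if i == len(items) - 1 \
               else round(len(documents) * cap / total)
        chunks[node] = documents[start:start + size]
        start += size
    return chunks

docs = [f"doc-{i}" for i in range(10)]
print(partition_by_capacity(docs, {"node-a": 1.0, "node-b": 3.0, "node-c": 1.0}))
```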
2.4
Infrastructure services
An important condition for the proper functioning and efficient of the presented services is a technical infrastructure providing necessary computing power and data capacity. We continuously build our own computing environment in which we can not only deploy and test our services, but we're able to offer them as a SaaS (Software as a Service). Simplified illustration of the used infrastructure is shown in Fig. 2. Basic level contains several Synology network attached storages (NAS) with WD hard drives providing customized data storage capacity for various purposes; i.e. it is possible to use SQL or NoSQL databases or some types of semantic repositories. The second level is represented by high-performance IBM application servers that are used for execution and data manipulation. This part of the infrastructure is separated and inaccessible to external users.
A graphical user interface with the offered services and user functionalities is provided by a web server and is available for different end-user devices such as traditional PCs, laptops or tablets. A specific part of the deployed analytical system is the administration and monitoring module. Several modules are deployed on the distributed computing infrastructure, so an interface to manage the platform itself is necessary. This administration interface is implemented as a web application and enables monitoring of the current state of the environment, including the operational nodes, their actual state, load and capacity as well as the running tasks. If necessary, it is possible to disconnect a node from the platform, or add a new node, check their state and perform several actions to interact with them (stop halted tasks, free memory/space, check the network latency and restore the default configuration). Data collected using this module is also utilized in task distribution, as briefly described in section 2.3. On the other hand, people interested in our analytical and data processing services can download them, customize the services based on their own preferences or requirements and finally deploy the customized platform on their own infrastructure. Also, as was written in section 2.2, we have created a BOINC infrastructure from computers in our laboratories for students, which is able to provide additional computing capacity for BOINC-based applications. This paradigm is known as a virtual campus supercomputing center, and BOINC is widely used by several universities in the world in order to get additional computational capacity from the computers within their campuses. After completion of the testing phase we would like to provide a graphical user interface for more researchers to run Jbowl experimental tasks, for which particular models are computed on BOINC clients and returned to the user. In the future it may be interesting to find an interoperable connection between the cloud-based infrastructure described above and the volunteer-based BOINC infrastructure under one common platform, e.g., where the capacities of both parts are managed together in order to achieve more efficient analytical services.
Conclusion
The need for software libraries for support of text mining purposes is not new, but their importance is increasing because of new requirements arising in the era of big data. An important factor is the ability of the existing products to respond to the changes in the areas as IT infrastructure, new capacity opportunities for parallel computing, running the traditional text and data mining algorithms on the new infrastructures, development of the new algorithms for data processing and analysis using new computational possibilities and finally the design and implementation of simple understandable and easy to use user environment.
Basically, the presented work seeks to respond to all of these challenges and address them in the practical output of the relevant research and implementation activities. The presented library does not try to compete with other available text mining platforms, but rather represents the output of our continuous work. The developed system presents an ideal platform for our ongoing research as well as for education. The presented tools are used by students and teachers in teaching tasks, serve as the platform for numerous master theses and are regularly used in data analytical and research activities.
Fig. 1. General schema of the sub-task distribution across the platform
Fig. 2. Architecture of the text mining analytical system
Basic Jbowl package -http://sourceforge.net/projects/jbowl/
https://lucene.apache.org/
http://www.gridgain.com
BOINC -https://boinc.berkeley.edu/
http://web.tuke.sk/fei-cit/kplab.html
Acknowledgment
The work presented in this paper was partially supported by the Slovak Grant Agency of Ministry of Education and Academy of Science of the Slovak Republic under grant No. 1/1147/12 (40%); partially by the Slovak Cultural and Educational Grant Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic under grant No. 025TUKE-4/2015 (20%) and it is also the result of the Project implementation: University Science Park TECHNICOM for Innovation Applications Supported by Knowledge Technology, ITMS: 26220220182, supported by the Research & Development Operational Programme funded by the ERDF (40%). | 21,523 | [
"1001311",
"1001312",
"1001313",
"1001314",
"1001315"
] | [
"155410",
"155410",
"155410",
"155410",
"155410"
] |
01466234 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466234/file/978-3-319-24315-3_34_Chapter.pdf | David Kiwana
Björn Johansson
email: [email protected]
Sven Carlsson
email: [email protected]
Usage of Finance Information Systems in Developing Countries: Identifying Factors During Implementation that Impact Use
Keywords: ERP, Financial Information Systems, Implementation, Use, Success and Failure Factors, Developing Countries
Introduction
Finance information systems (FISs) take financial data and process it into specialized financial reports, saving time and effort in dealing with business accounting [START_REF] Morgan | What are the benefits of financial information systems[END_REF]; they also provide decision-makers with the information needed to perform managerial functions [START_REF] Hendriks | Integrated Financial Management Information Systems: guidelines for effective implementation by the public sector of South Africa: original research[END_REF].
While FISs have many benefits, it should be noted that putting them in place can be costly and in most cases requires a lot of training and commitment by people involved [START_REF] Morgan | What are the benefits of financial information systems[END_REF]. As a result many organizations find difficulties to attain the desired success during their implementations, and many critical success factors for IS implementation have been suggested, however actual evidence to devise solutions for failed projects has not been clearly established [START_REF] Mulira | Implementing inter-organisational service systems: An approach for emerging networks in volatile contexts[END_REF].
In this paper we present research that was conducted to explore factors that shape implementation and later on use of FISs in the context of developing countries. According to Mulira [START_REF] Mulira | Implementing inter-organisational service systems: An approach for emerging networks in volatile contexts[END_REF] emerging public organizational networks in developing countries work with unpredictable environments and resource scarcity that have led to higher failure rates of Information Systems (IS) implementation projects. This research builds on a retrospective field study describing implementation of a FIS at Makerere University (Mak) in Uganda. The FIS whose implementation was studied is a component of an integrated enterprise system called Integrated Tertiary Software (ITS), a South African software product that was installed at the University to manage finances/accounting, students' records and human resource functions.
Before proceeding to the sections that follow, it is important to clarify that finance information systems (FISs) are often implemented as part of ERPs ([START_REF] Bancroft | Implementing SAP R/3: How to introduce a large system into a large organization[END_REF]; [START_REF] Davenport | Putting the enterprise into the enterprise system[END_REF]). This means that implementation issues that are pertinent to ERPs are largely pertinent to the implementation of FISs as well. This research is therefore premised on the assumption that what is known about ERP implementation is largely applicable to FIS implementations as well.
The next two sections present problematic issues in IS implementation and what is known about ERP/FIS implementation. Section 4 presents the research method. This is followed by a presentation of research findings. Section 6 presents and discusses the nine factors that emerged during the analysis.
Problematic Issues in IS implementation
Research findings have reported that failure of large information systems implementations like ERPs are not caused by the software itself, but rather by a high degree of complexity from the massive changes that the systems cause in the organisations ( [START_REF] Scott | Implementing Enterprise Resource Planning Systems: The Role of Learning from Failure[END_REF]; [START_REF] Helo | Expectation and reality in ERP implementation: consultant and solution provider perspective[END_REF]; [START_REF] Maditinos | Factors affecting ERP system implementation effectiveness[END_REF]). According to Helo, et al. [START_REF] Helo | Expectation and reality in ERP implementation: consultant and solution provider perspective[END_REF], the major problems of ERP implementations are not technologically related issues such as technological complexity, compatibility, standardisation etc., but mostly about organisational and human related issues like resistance to change, organisational culture, incompatible business processes, project mismanagement and lack of top management commitment. Furthermore, Huang and Palvia [START_REF] Huang | ERP implementation issues in advanced and developing countries[END_REF] has identified other issues like inadequate IT infrastructure, government policies, lack of IT/ERP experience and low IT maturity to seriously affect the adoption decision of ERPs in developing countries. What is not clear therefore is whether all such factors are exhaustively known and if so, how they (the factors) impact on eventual use of the systems considering the fact that the failure rate is still high. The failure rate of major information systems appears to be around 70% [START_REF] Davenport | Putting the enterprise into the enterprise system[END_REF][START_REF] Drummond | What we never have, we never miss? Decision error and the risks of premature termination[END_REF]. Chakraborty and Sharma [START_REF] Chakraborty | Enterprise resource planning: an integrated strategic framework[END_REF] state that 90% of all initiated ERP projects can be considered failures in terms of project management. Ptak and Schragenheim [START_REF] Ptak | ERP: tools, techniques, and applications for integrating the supply chain[END_REF] claim that the failure rates of ERP implementations are in the range of 60-90%. Helo, et al. [START_REF] Helo | Expectation and reality in ERP implementation: consultant and solution provider perspective[END_REF] make the statement that in the worst scenarios, many companies have been reported to have abandoned ERP implementations. From this discussion it can be said that in FIS implementation, as a case of ERP implementation, the issues of concern are either technologically related or contextually related. Technologically related issues are not reported as problematic since they are probably more or less the same in different contexts. This means that the contextually related issues may be more problematic and interesting to address. Contextual factors have mainly been researched in developed country contexts, the challenge is researching these issues in a developing country context. This supports the need for studying: What factors during implementation impact use of FISs in developing countries?
3 What is known about ERP/FIS Implementation?
FIS implementation is emblematic of a complex project that constantly evolves, and, as is the case with the design and implementation of any complex system, the aspects of leadership, collaboration and innovation are of importance in the implementation process [START_REF] Dener | Financial Management Information Systems: 25 Years of World Bank Experience on What Works and What Doesn[END_REF]. A successful completion of a FIS implementation also depends on external factors and on the adverse effects of country-specific political economy issues and the political environment [START_REF] Dener | Financial Management Information Systems: 25 Years of World Bank Experience on What Works and What Doesn[END_REF].
Pollock and Cornford [START_REF] Pollock | ERP systems and the university as a "unique" organisation[END_REF] argue that the need for implementation of FISs in high education sectors is a response to both internal and external factors requiring more efficient management processes due to increasing growth of the numbers of students, changes in the nature of academic work, increasing competition between institutions, increasing government pressure to improve operational efficiency, and growing diversity of expectations amongst all stakeholders [START_REF] Allen | Enterprise resource planning implementation: Stories of power, politics, and resistance[END_REF].
Causes of failure of ERP/FISs Implementation
Senn and Gibson [START_REF] Senn | Risks of investment in microcomputers for small business management[END_REF] point to user resistance as symptomatic of system failure as users may aggressively attack the system, rendering it unusable or ineffective, or simply avoid using it. Ginzberg [START_REF] Ginzberg | Early diagnosis of MIS implementation failure: promising results and unanswered questions[END_REF] found that possible causes of implementation failure being user dissatisfaction with scope, user dissatisfaction with system goals, and user dissatisfaction with the general approach to the problem that the system is meant to address. In other words, system implementations are more likely to fail when they are introduced with unrealistic expectations.
As presented by Calogero [START_REF] Calogero | Who is to blame for ERP failure?[END_REF], excessive focus on technologies rather than business user needs is one of the determinants of ERP implementation failures. Projects initiated because of technology are more likely to be unsuccessful than business-initiated projects, due to the fact that technology-initiated projects are most frequently driven by goals such as the replacement of an old system with a new one, which is a complicated task [START_REF] Nicolaou | ERP systems implementation: drivers of post-implementation success[END_REF].
Lack of proper user education and practical training is another cause of failure of IS implementation projects. According to Nicolaou [START_REF] Nicolaou | ERP systems implementation: drivers of post-implementation success[END_REF], conducting user training upfront could cause an unsuccessful ERP implementation due to the limited scope of training possibilities before implementation. Kronbichler, et al. [START_REF] Kronbichler | A comparison of erp-success measurement approaches[END_REF] say that an unclear conception of the nature and use of an ERP system from the users' perspective, due to poor quality of training and insufficient education delivered by top management and the project team, also leads to failure. In developing countries, where there are more challenges due to unstable infrastructure, funding and an unstable social/economic organizational environment, the quality of training becomes even poorer, which leads to more failures of ERP implementations compared to developed countries [START_REF] Mulira | Implementing inter-organisational service systems: An approach for emerging networks in volatile contexts[END_REF].
Specific issues for ERP/FIS Implementation in developing countries
Heidenhof and Kianpour [START_REF] Heidenhof | Design and Implementation of Finaricial Management Systers: An African FPerspective[END_REF] claim that many African countries struggle with public financial management reforms whereby institutions, systems, and processes that deal with various aspects of public finance are weak, non-transparent, and often incapable of developing adequate budgets and providing reliable data for economic modeling.
IS implementation failures keep developing countries on the wrong side of the digital divide, turning ICTs into a technology of global inequality. IS implementation failures are therefore practical problems for developing countries that need to be addressed [START_REF] Malling | Information systems and human activity in Nepal[END_REF]. The information, technology, processes, objectives and values, staffing and skills, management systems (ITPOSMO) checklist adapted from Malling [START_REF] Malling | Information systems and human activity in Nepal[END_REF] shows that the technological infrastructure is more limited in developing countries; the work processes are more contingent in developing countries because of the more politicized and inconstant environment; and developing countries have a more limited local base in the range of skills like systems analysis and design, implementation of IS initiatives, planning, and operation-related skills including computer literacy and familiarity. When it comes to management and structures, organizations in developing countries are more hierarchical and more centralized, and in addition the cost of ICTs is higher than in developed countries whereas the cost of labor is lower [START_REF] Heeks | Information systems and developing countries: Failure, success, and local improvisations[END_REF]. This supports the interest of an explorative study of the question: What factors during implementation impact use of FISs in developing countries?
Research Method
This research was carried out at Makerere University (Mak) through a retrospective field study, investigating aspects of implementation of the ITS (Integrated Tertiary Software) finance subsystem. Empirical data was collected by face-to-face interviews guided by semi-structured questions. Mak was selected because it has an enrolment of about 40,000 students and therefore has a potential to provide a good ground for a wide range of issues pertinent to the study.
A total of ten people were interviewed and these included the head of the finance department, the head of the IT unit, the person who was responsible for the user team, the coordinator of NORAD (Norwegian Funding Agency) in the University's Planning Unit who funded the implementation costs and six accountants from the Finance Department. The respondents were chosen based on their relevance to the research question and closeness to the subject matter rather than their representativeness. The interviewer (one of the researchers) has a position at Mak and was to some extent involved in the implementation process of the system. The interviewer's position at Mak at that time was in the IT unit of Mak as Systems Manager with the role of assisting various units in the university in acquisition and implementation of central software resources.
Questions asked during interviews were mainly in four areas: general information about the organisation and the system, information on how the implementation was done, and information on how the system was being run and used. Analysis of the data was done using within-case analysis whereby the general patterns and themes were identified. The analysis aimed at identification of factors that were presented as influential in the implementation process by the respondents. The next section presents briefly the case and then the identified factors are presented and discussed.
Presentation of Research Findings.
Makerere University is a public university in Uganda with an enrolment of approximately 40,000 students and 5,000 staff members. The university procured an integrated enterprise system called Integrated Tertiary Software (ITS) to be used in finance management, students' administration and human resource management. In this study we focussed on the finance subsystem. Next we present why the FIS was bought, how and when the time for the implementation was decided, and how the actual implementation was done.
What was the origin of the idea to buy the FIS and why?
In regard to issues for why the system was implemented, one thing that was mentioned by almost all interviewees was a problem of lack of efficiency in managing fees payments of students due to very large numbers of students. The Head of the Finance Department said: "the problem was the number of students and the most risky area was revenue. As a finance manager that was my main focus. The rest we could afford to handle manually. For example, with the expenditure the vouchers are with you, but with revenue you would not know who has paid and from what faculty". The Senior Assistant Bursar and the person who headed the implementation team said: "The privatisation scheme that was introduced in the nineties brought an increase in student population. Mak could no longer accurately tell how much money was being received and reports could no longer be given in a timely manner". The Head of the IT unit said: "The main motivating factor for the implementation was the big number of students and lack of efficiency that subsequently followed".
In addition donor influence and best practice also played big roles in influencing the decision to procure the system. The Head of the IT unit said: "Donors were looking at institutions within the country to create efficiencies, and automation was being seen as the best practice that was being proposed elsewhere. Mak had started looking ahead towards automation but already there was a move by development partners requiring public institutions to improve performance. So Mak's big numbers coincided with the push by the development partners to automate systems and being the highest institution of learning in the country, Mak was a prime choice for donors to fund". The Head of the IT unit continued to say that automation was not decided by the players like the head of the finance and head of academic records. "What they presented was just increasing challenges to support top management in their bid to solicit funding from the donors for the automation." In other words, according to the Head of the IT unit, the push for implementation was a top-down approach motivated by a position that institutions in developing countries needed to comply with donor requirements. The Head of IT summarised by saying that "things actually happened in parallel. Donors came in to look for efficiency and they found Mak already grappling around to see how to solve the problems of inefficiency".
Another influencing factor had to do with best practice. The Head of Finance said: "When I joined the university everything was manual and the thinking at the time was how to make Mak ICT enabled. That urged us to look into that area and we wanted to catch up with other universities so we said that we would look for funders because government wouldn't". The Head of the IT unit head also said "the adoption of systems in many institutions of higher learning, and automation of functions whether administrative or academic is not a reinventing the wheel, most institutions follow best practice. What is important is that you have a champion to introduce the automation; you need to have the funding and the team players. Then at the end you need to have a change management team that can influence and affect the changes. So it is essentially adopting best practice and that is what Mak did."
When and how was the time to start the implementation decided?
According to Tusubira [START_REF] Tusubira | Supporting University ICT (Information and Communication Technology) Developments: The Makerere University Experience[END_REF] it was during a conference for all heads of departments that was organised by the Vice Chancellor in 2000 to discuss a question of ICT development in the university. A resolution was made to develop an ICT policy and master plan that was aimed at defining a strategy that the university would take in its bid to develop the use of ICT in its management systems. The master plan comprised of all the planned ICT activities for the university for a period of five years (2001 to 2004) and the implementation mandate was given to DICTS. Among the activities was the implementation of the university information systems that included the finance system.
In summary, the factors found to motivate the implementation were:
- The need by the university top management to give development partners satisfaction that Mak had the necessary capacity to manage finance information efficiently.
- A need from the finance department to find a way of managing increasing student fees records in time, as a result of increasing student numbers following issuance of a policy by Mak to start admitting privately sponsored students in the 1990's.
- Influence from best practice that pointed to automation of systems as a necessary way to go during that time, as seen by top management and DICTS.
- The need by Mak, under the stewardship of the Directorate for ICT Support, to execute the activity of implementing information systems, including the FIS, as had been prescribed in the University ICT master plan for 2001-2004.
- The availability of funds provided by a development partner, NORAD, under stewardship of the Mak Planning Unit, which had to be utilised within a specific period, 2001-2004.
How the actual implementation was done
After the system was procured, several activities related to the actual implementation took place. These are briefly described below in chronological order: 1) installation and customisation of the system, 2) formation of the implementation teams, 3) training (about 30 people were trained over a period of about two months), and 4) user acceptance and commissioning: by the end of 2006 all the modules were found to be functional, although only three were being used at that time (i.e., student debtors, cash book and electronic banking), and the system was commissioned in February 2007.
Identified implementation factors impacting FIS use
A large part of the interviewees (more than 60%) said that the system and especially the interface for capturing data was not easy to use and this seemed to have discouraged many people from using the system. One accountant specifically said: "the system is not user friendly, for example, for a transaction to be completed you have to go through several steps, and in case you forgot a step you have to repeat". We label this as a: Factor of System Usability. There were too many bank accounts as each unit in the university had its own bank accounts and supervision of staff was not adequate. It was therefore very hard to have all the cashbooks across the university up-to-date to enable a complete set of reports to be generated in a timely manner. The head of implementation said "the cash books were too many as a result of the big number of bank accounts which were almost over 200. The people working on them and who were scattered in many different units could not all update them in a timely manner to have any meaningful reports generated in a timely manner". We suggest categorising this as a Factor Evaluation of Staff Performance.
The study showed that there was a lack of a clear plan for how persons should stop using the older systems. When one accountant was asked why the modules to do with the expenditure failed to be operationalized whereas the revenue module for student debtors was a great success he said "the form of record keeping at that time was in decentralized manner, so supervising people was not easy and secondly the people were allowed to continue with the older systems. Student debtors succeeded only because there was no alternative". Talking about the same, the Head of the Finance department said: "In the beginning the problem was the number of students, and the most risky area was revenue. So there was much focus on revenue. The rest you could afford to handle manually. For example, with the expenditure the vouchers are with you, but with revenue you do not know who has paid and from what faculty" We suggest categorising this as a Factor of Change Management Program.
It was found that a lot more was acquired in terms of modules than required to solve the actual problem that was prevailing. This was found to be due to the fact that the push to implement was from the top to the bottom because the funds (which were being provided by development partners) were readily available.
When the Head of the IT unit was asked whether the story would have been different if Mak was to finance the project from its own internal budget instead of donor funds she said: "If there were budget constraints whereby Mak would have to look for donors then Mak would think a lot more about how that money would be spent, and if Mak was using their own money they would have asked the finance department from inception more, because they would have said that we do not have money tell us only those critical modules that have to be funded within a constrained budget". The Head of the IT unit added: "but we have a top down approach supported by challenges from below that already has funding coming from some source aside so we do not have to involve them too much because they have already given us their challenges to support our case and we got the money. And once we put up a bid and the best system came up it was adopted in its entirety". In conclusion the Head of the IT unit said: "budgeting constraints would have forced a more concise scheme and more involvement of the user department. But this was not the case. They were there to support the cause by only challenges as the money had been got from somewhere else". We suggest categorising this as a Factor of Project Management.
According to the Head of the IT unit, the human resource structure had not been fully designed to be compliant with the new automation aspect. She said "The human resource had been used to using a manual system and now they had to take on a new system, and with too many modules, and the structural adjustments started being done after the system was installed". She added: "It was much later after evaluating the system when a decision was made to strike off some particular modules. If this had been done at the beginning, the people would have easily mastered the system and the university would have saved money." We suggest categorising this as a Factor of Change Management Program.
It was found that support was never timely, causing frustration to many people. One accountant commented: "Support was always not timely and this rendered people to fall back to their original work practices in order to meet targets." We suggest categorising this as a Factor of Technical Support and Effective IT Unit. Another accountant said: "Nobody took initiative to operationalise the entire system". We suggest categorising this as a Factor of Top Management Support.
It was observed that some people did not know and did not believe that adequate searching for a suitable system was done before the system was procured. One accountant commented that "the university should have taken time to do more research and come up with a system that would perform better. ITS was only at Mak with no any comparisons within Uganda". It was discovered the belief of the accountant was not correct because it was established from other sources that before the decision to procure was made Mak sent a team of people in a foreign university where a similar system was being used to find more about it. This means that there was lack of information with some people and we therefore suggest categorising this as a factor of Effective Communication.
It was found that all the trainees (about 30) were pulled together in one big group and it turned out to be very difficult for each individual to get direct contact with the trainers. Secondly after training the trainers immediately went back to South Africa (where they had come from) keeping very far away from users who were just maturing. The head of the user team said: "the whole department was trained together as one group for two months, but in addition the trainers should have also done individualised training, and they should have remained in close proximity".
When asked to comment on the fact that during training people were taken through the entire system but that the situation on the ground did not reflect that, the Head of the Finance department said that that was the case because they were doing an implementation of this kind for the first time. He added that "People went for training only once, so after time they forgot and the problem was that there was a lack of people to guide Mak. The consulting firm reached a point when they would want to charge whenever they would be called and so financial implications came in. They could help on the system but they could not help on the functionalities." We suggest categorising this as a factor of Education and Training.
It was found that due to some omissions or/and deficiencies that existed in the Requirements Specifications Document, some functionalities could not adequately run. For example, when an accountant was asked whether the organisation took time to review all the relevant organisation policies to ensure that they were all adequately accommodated in the automated environment, he said: "Some were done like the registration of students but at a later time. Some were not done, for instance, the system could not handle multicurrency features for fees". In some instances the consultants would accept to quickly do the necessary rectifications and in some instances they would not, which would cause problems. We suggest categorising this as a factor of Flexible Consultants.
Conclusions and future research
The aim of this study was to answer the research question: What factors during implementation impact use of FISs in developing countries? Previous studies on FIS implementation show that the design and implementation of FIS solutions is challenging and requires development of country-specific solutions to meet the associated functional and technical requirements. Previous studies also show that, as a result of increased challenges in developing countries due to unstable infrastructure and an unstable social-economic organisational environment, the quality of training gets poorer, which leads to increased implementation failures compared to the situation in developed countries. The starting point for identification of factors was system usability. From that we identified nine factors that shaped the implementation and use of the FIS. These are: project management, evaluation of staff performance, effective communication, instituting of change management programs, provision of technical support by consultants, an effective IT unit, providing education and training, top management support, and flexible consultants. These factors are related to different activities in the implementation and they all influence the results of the implementation, expressed as system usability, in positive or negative directions. Future research will focus on to what extent the different factors influence the use of implemented systems in developing countries.
| 30,444 | ["1001318", "1001319", "1001320"] | ["344927", "344927", "344927"] |
01466235 | en | ["shs", "info"] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466235/file/978-3-319-24315-3_35_Chapter.pdf | Lukáš Gregorovič
email: [email protected]
Ivan Polasek
email: [email protected]
Branislav Sobota
email: [email protected]
Software Model Creation with Multidimensional UML
Keywords: 3D UML, Analysis and Design, Sequence Diagram, Class Diagram, Fruchterman-Reingold
The aim of the paper is to present the advantages of the Use Cases transformation to the object layers and their visualization in 3D space to reduce complexity. Our work moves selected UML diagram from two-dimensional to multidimensional space for better visualization and readability of the structure or behaviour.
Our general scope is to exploit layers for particular components or modules, time and author versions, particular object types (GUI, Business services, DB services, abstract domain classes, role and scenario classes), patterns and anti-patterns in the structure, aspects in the particular layers for solving crosscutting concerns and anti-patterns, alternative and parallel scenarios, pessimistic, optimistic and daily use scenarios.
We successfully apply force directed algorithm to create more convenient automated class diagrams layout. In addition to this algorithm, we introduced semantics by adding weight factor in force calculation process.
Introduction
Increasing requirements and the complexity of designed systems need improvements in visualization for better understanding of created models, for better collaboration of designers and their teams in various departments and divisions, countries and time zones in their cooperation creating models and whole applications together.
In software development, Unified Modeling Language (UML) is standardized and widely used for creation of software models describing architecture and functionality of created system.
There are many tools that allow creation of UML diagrams in 2D space. Moving UML diagrams from two-dimensional to three-dimensional space reduces complexity and allows visualization of the large diagrams in modern three-dimensional graphics to utilize benefits of the third dimension and achieves more readable schemas of complex models to decompose structure to particular components, type layers, time and author versions.
We need to decompose behaviour and functionality to particular scenarios of the system, alternative and parallel flows, pessimistic, optimistic and daily use scenarios.
Related works for 3D UML
There are some existing alternatives how to visualize UML diagrams in 3D space. Paul McIntosh studied benefits of the 3D solution compared to traditional approaches in UML diagrams visualization. Because of using combination of X3D (eXtensible 3D) standard and UML diagrams, he named his solution X3D-UML [START_REF] Mcintosh | X3D-UML: User-Centred Design[END_REF]. X3D-UML displays state diagrams in movable hierarchical layers. GEF3D [START_REF] Pilgrim | Gef3d: A framework for two-, two-and-a-half-, and three-dimensional graphical editors[END_REF] is a 3D framework based on Eclipse GEF (Graphical editing framework) developed as Eclipse plugin. Using this framework, existing GEF-based 2D editors can be easily embedded into 3D editors. GEF3D applications are often called multi-editor. Main approach of this framework is to use third dimension for visualization connections between two-dimensional diagrams.
Another concept in the field of 3D UML visualization is based on virtual boxes: the authors placed diagrams onto the sides of a box, allowing them to arrange inter-model connections in a way that is easily understandable by other people. GEF3D does not allow users to make modifications in the displayed models. Because UML diagrams can be complex and difficult to understand, geon diagrams [START_REF] Casey | A Java 3D implementation of a geon based visualisation tool for UML[END_REF] use different geometric primitives (geons) for elements and relationships for better understanding.
Our approach
Our method visualizes use case scenarios using UML sequence diagrams in separate layers, all at once, in 3D space, transforms them to object diagrams (again in separate layers) and automatically creates a class diagram from these multiple object structures, with real associations between classes, to complete the structure of the designed software application.
Sequence diagrams in 3D space of our prototype allow to analyse and study process and complexity of the behaviour simultaneously and compare alternative or parallel Use Case flows.
Identical elements in object diagrams have fixed positions for easy visual projection to the automatically created class diagrams with classes derived from these objects. Their relationships (associations) are inferred from the interactions in the sequence diagrams and class methods are extracted from the required operation in the interactions of these sequence diagrams.
Our Prototype
We have created our prototype as a standalone system in C++ language with Open Source 3D Graphics Engine (OGRE) or OpenSceneGraph as an open source 3D graphics application programming interface and high performance 3D graphics toolkit for visual simulation, virtual reality, scientific visualization, and modeling.
For integrated development environment (IDE) we can use Eclipse or Microsoft Visual Studio and build standalone system with import/export possibilities using XMI format (XML Metadata Interchange) or plugin module to IBM Rational Software Architect or Enterprise Architect.
Our prototype allows to distribute diagrams in separate layers arranged in 3D space. In this tool is possible to create UML class diagram, sequence diagram and activity diagram in multidimensional space with 3D fragments.
Layers can be interconnected and diagrams can be distributed to the parts in these separate layers to study interconnections and for better readability.
Diagram transformation
In software analysis and development is good practise to start with describing and capturing system behaviour. For this purpose of behavioural modeling we can use sequence diagrams.
Algorithm 1. Class diagram creation algorithm
While creating sequence diagrams we automatically identify the essential objects and their methods that are necessary for the functionality of the system. Thanks to the element similarities between the sequence diagram and the object diagram in the UML metamodel definition, it is possible to use the same shared data representation, so an object diagram can be rendered from a sequence diagram. The modifications are made only in the drawing algorithms: instead of drawing the full timeline graphic, lifelines are ignored and only the upper part with the object names is drawn, and messages between lifelines are moved from their original position to directly connect the appropriate objects. The transformation is visible in Fig. 1 and Fig. 2.
For development in the early phases of testing the concept we used the layout algorithm shown in Algorithm 1: each unique element was placed in the next available cell of an imaginary grid. Advanced layout creation with force-directed algorithms is described in the next section of this paper.
The class diagram is created gradually. Instead of multiple passes through the sequence diagrams to create the classes and then append methods to these classes in a subsequent iteration, the algorithm for class diagram creation was optimised with buffering and memoisation. Each time an unknown class type is found for a lifeline, a new class instance is created in the class diagram and a reference to it is stored under a unique identifier matching the class name. The class types are then complemented with method names.
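The memoisation described above can be illustrated with a short, self-contained C++ sketch. The ClassNode and Message types and all identifiers below are hypothetical simplifications introduced for illustration; they are not taken from the prototype's actual code base.

```cpp
#include <set>
#include <string>
#include <unordered_map>

// Hypothetical, simplified model types; the real prototype works on the
// shared UML metamodel representation described above.
struct ClassNode {
    std::string name;
    std::set<std::string> methods;   // method names collected from messages
};

struct Message {
    std::string receiverType;        // class of the receiving lifeline
    std::string operation;           // operation name carried by the message
};

class ClassDiagramBuilder {
public:
    // Called once per message while walking a sequence diagram.
    void onMessage(const Message& m) {
        ClassNode& node = lookup(m.receiverType);
        node.methods.insert(m.operation);   // complement the class with the method name
    }

private:
    // Memoisation: each unique lifeline type creates exactly one class;
    // later messages only reuse the cached reference.
    ClassNode& lookup(const std::string& typeName) {
        auto it = cache_.find(typeName);
        if (it == cache_.end())
            it = cache_.emplace(typeName, ClassNode{typeName, {}}).first;
        return it->second;
    }

    std::unordered_map<std::string, ClassNode> cache_;
};
```

A single hash-map lookup per message keeps class creation and method collection in one pass over all sequence diagrams, which is the point of the buffering optimisation.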
Class diagram layout
In transformation process we use some basic grid algorithms to arrange the objects in the matrix. With the growing number of the objects also grows the complexity of the diagram and relations between vertices, so it is crucial to create layout that is clear and readable.
Force-directed algorithms
One way to accomplish a better layout is to use force-directed algorithms, so that the diagram is evenly spread on the layer and elements with real relations lie closer to each other than to the other elements. We have tested the Fruchterman-Reingold and FM3 algorithms.
Fruchterman-Reingold.
Fruchterman-Reingold (FR) is a simple force-directed algorithm. Each vertex is repelled by the other vertices, while edges between vertices act as springs that pull the connected vertices towards each other, counteracting the repulsive forces. The algorithm iterates through the graph many times and each time decreases the magnitude of the changes in positions; this effect is called cooling down and lets the layout settle in some configuration instead of oscillating. The running time of layout generation is O(n^3), where n is the number of vertices [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF].
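For reference, a compact, illustrative C++ sketch of one Fruchterman-Reingold run is given below. It uses the commonly published force terms f_a(d) = d^2/k and f_r(d) = k^2/d with a simple cooling schedule; the data types and the concrete cooling factor are assumptions made for this sketch, not details taken from the cited paper or from our prototype.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2   { double x = 0, y = 0; };
struct Vertex { Vec2 pos, disp; };
struct Edge   { int a, b; };

// Illustrative single run of the Fruchterman-Reingold layout.
void fruchtermanReingold(std::vector<Vertex>& v, const std::vector<Edge>& e,
                         double width, double height, int iterations = 100) {
    if (v.empty()) return;
    const double k = std::sqrt(width * height / v.size());  // ideal edge length
    double temperature = width / 10.0;                        // max displacement per step

    for (int it = 0; it < iterations; ++it) {
        // Repulsive forces: every vertex pushes every other vertex away, f_r(d) = k^2 / d.
        for (std::size_t i = 0; i < v.size(); ++i) {
            v[i].disp = {0, 0};
            for (std::size_t j = 0; j < v.size(); ++j) {
                if (i == j) continue;
                double dx = v[i].pos.x - v[j].pos.x;
                double dy = v[i].pos.y - v[j].pos.y;
                double d  = std::max(std::hypot(dx, dy), 1e-6);
                double f  = k * k / d;
                v[i].disp.x += dx / d * f;
                v[i].disp.y += dy / d * f;
            }
        }
        // Attractive forces: edges act as springs, f_a(d) = d^2 / k.
        for (const Edge& ed : e) {
            double dx = v[ed.a].pos.x - v[ed.b].pos.x;
            double dy = v[ed.a].pos.y - v[ed.b].pos.y;
            double d  = std::max(std::hypot(dx, dy), 1e-6);
            double f  = d * d / k;
            v[ed.a].disp.x -= dx / d * f;  v[ed.a].disp.y -= dy / d * f;
            v[ed.b].disp.x += dx / d * f;  v[ed.b].disp.y += dy / d * f;
        }
        // Move each vertex, limited by the current temperature, then cool down.
        for (Vertex& vx : v) {
            double d = std::max(std::hypot(vx.disp.x, vx.disp.y), 1e-6);
            vx.pos.x += vx.disp.x / d * std::min(d, temperature);
            vx.pos.y += vx.disp.y / d * std::min(d, temperature);
        }
        temperature *= 0.95;   // assumed cooling schedule
    }
}
```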
FM3.
The FM3 algorithm is a more complex approach. The basic idea of the forces is the same, but FM3 uses the principle of multiple levels of layout. The main difference is the step in which the provided graph is reduced into smaller subgraphs by packing multiple vertices into one.
The analogy behind this principle is finding so-called solar systems, where one vertex is identified as the sun and the other vertices related to the sun are marked as the planets and the moons.
Fig. 3. FM3 -solar systems collapsed into subgraph [2]
The reduction of the graph is applied recursively on the subgraphs until a simple graph is reached; then the subgraphs are arranged and unfolded back into their higher-level graphs. These steps are repeated until the original graph is reached, and a last step arranges the final graph.
Fig. 4. FM3 -unfolding sub-graphs and layout creation [2]
This solution is significantly quicker than the Fruchterman-Reingold algorithm. It is possible to reach a speed of O(|V|log|V|+|E|) [START_REF] Hachul | Large-graph layout with the fast multipole multilevel method[END_REF].
Problems of force-directed algorithms
Unfortunately, the outputs of these algorithms were not good enough for the proposed use; their more appropriate application is the visualisation of large graphs with tree structures. Both algorithms have a tendency to create a uniform distribution of elements in the diagram. Users, in contrast, have a tendency to arrange elements into groups and to order elements by priority, hierarchy and so on. They are looking for patterns, relations, semantics and other hidden aspects of the model. This is an important factor for preserving the readability and understandability of the modelled diagrams. The lack of these features makes force-directed algorithms not ideal for creating class diagram layouts.
Our focus in this phase was on the creation of an algorithm that is more appropriate. The starting point and proof of concept was considering simple semantics in diagram layout creation. Assuming that in a class diagram the most important relation between two elements from the semantic point of view is generalisation, then aggregation and finally association, it is possible to modify the output of the layout algorithm by adding a weight factor to the attractive force calculation process.
By analysing mainly the two mentioned force-directed algorithms (but also others), several methods were devised for incorporating the semantics into the selected algorithms.
In the case of Fruchterman-Reingold it is possible to introduce weights on vertices or edges. By adding weights, it is possible to modify the original behaviour of the algorithm.
Modifying the process of solar system selection in FM3 could allow creating subgraphs in which semantically relevant objects are merged into one vertex. This ensures the separation of less relevant parts of the diagram in space; the layout is then enriched by adding more elements in the relevant places as the graph is unfolded back into its higher-level sub-graphs.
Our decision was to utilise the Fruchterman-Reingold algorithm. The time complexity of the algorithm, in comparison with FM3, does not become evident at the scale at which we use these algorithms: for a class diagram with 10-100 classes the layout calculation is fast. The implementation of the algorithm is simple and it is possible to make modifications more easily than in FM3. An implementation based on FM3 can be realized in the future if it becomes necessary.
Weighted Fruchterman-Reingold
A simple modification of the FR algorithm, in the form of adding weights to edges in the calculation of attractive forces, yields the desired layout improvement. The weight of an edge is taken into account when calculating the attractive force of the edge connecting two vertices: the calculated force is multiplied by the weight of the corresponding edge type. It is therefore necessary to identify the current type of the edge while calculating the attractive forces. The implementation of the system distinguishes different relations as instances of different classes, and therefore it is easy to use the appropriate weight.
During the prototyping phase, the weights of the relation edges were experimentally set as follows:
- generalisation → 200
- aggregation → 100
- association → 10
Application of the selected weights affected the outputs of the algorithm in the desired manner; a small sketch of the resulting force terms is given below. To escalate the effect of attraction while reflecting the semantics of the diagram, vertices repel each other with an equivalent force, but magnified by a factor of 10 with respect to the original force calculated by the algorithm. This tends to push vertices further apart, so the difference in distances between related and unrelated vertices is greater. This makes the semantic patterns of the class diagram more visible.
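The following sketch shows, under the same assumptions as the previous one, how the weight factor can enter the force calculation: the attractive force of an edge is multiplied by the weight of its relation type (200, 100 or 10 as listed above), while the repulsive force is magnified by the constant factor of 10. Function and type names are illustrative only.

```cpp
#include <algorithm>

enum class RelationType { Generalisation, Aggregation, Association };

// Experimentally chosen weights from the prototyping phase.
double relationWeight(RelationType t) {
    switch (t) {
        case RelationType::Generalisation: return 200.0;
        case RelationType::Aggregation:    return 100.0;
        case RelationType::Association:    return  10.0;
    }
    return 1.0;   // unreachable; silences compiler warnings
}

// Weighted variants of the two Fruchterman-Reingold force terms:
// the attractive force of an edge is scaled by the weight of its relation type,
// the repulsive force between any two vertices is magnified by a factor of 10.
double attractiveForce(double distance, double k, RelationType t) {
    return (distance * distance / k) * relationWeight(t);
}

double repulsiveForce(double distance, double k) {
    return 10.0 * (k * k / std::max(distance, 1e-6));
}
```

These two functions simply replace the unweighted force terms in the iteration loop sketched earlier; the rest of the algorithm remains unchanged.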
Results and Evaluation
The new weighted Fruchterman-Reingold algorithm was tested against the original Fruchterman-Reingold algorithm. Empirical comparison of the generated layouts on multiple class diagram examples indicates that the new algorithm provides a more appropriate layout.
The first example, in Fig. 5 and Fig. 6, shows one of the tested class diagrams: the sequence diagram metamodel. The differences between the two layouts are clearly visible. Fig. 6 shows the layout generated by the Fruchterman-Reingold algorithm; this layout is evenly distributed across the available space and has a symmetrical character.
The distribution of the classes is not optimal, and orientation in such diagrams is still not easy or natural. Random scattering of connected classes is not very useful in terms of readability and understanding of the created class diagram. Using the weighted Fruchterman-Reingold algorithm on the same class diagram example achieves a significantly better layout. The main difference is that the layout puts related classes closer together and creates smaller chunks of classes instead of one big mass, as in the previous case. This means better readability, understanding and modification of the designed diagrams.
The algorithm still creates some unwanted artefacts; for example, pushing some classes apart creates unnecessary edge crossings. These problems may be addressed in the future.
Nevertheless algorithm is able to create decent layout for the class diagram. Output is not perfect, but it is an initial layout, which could be corrected by the user: weighted Fruchterman-Reingold algorithm is suitable for this purpose.
Conclusion
We successfully applied a force-directed algorithm in the second phase of the transformation from object diagrams (derived from use case scenarios in sequence diagrams) to the class diagram representing the static structure of the modelled software system. In addition, we introduced semantics by adding a weight factor to the force calculation process of the layout algorithm. The type of relation between vertices influences the weight applied to the attractive forces. This creates a more useful layout organisation, as elements are grouped by semantics, which is more readable.
Research started with software development monitoring [START_REF] Bieliková | Platform Independent Software Development Monitoring: Design of an Architecture[END_REF] and software visualization [START_REF] Polášek | Extracting, Identifying and Visualisation of the Content, Users and Authors in Software Projects[END_REF] and now, we are preparing interfaces and libraries for leap motion, 3D Mouse, and Kinect to allow gestures and finger language for alternative way of creating and management of the particular models.
Fig. 1. Example of Sequence diagrams in 3D UML.
Fig. 2. Object diagrams rendered from sequence diagram.
Fig. 5, Fig. 6. Sample (Sequence diagram metamodel): layout generated with the Fruchterman-Reingold algorithm.
Acknowledgement
This work was supported by the KEGA grant no. 083TUKE-4/2015 "Virtual-reality technologies in the process of handicapped persons education" and by the Scientific Grant Agency of Slovak Republic (VEGA) under the grant No. VG 1/1221/12. This contribution is also a partial result of the Research & Development Operational Programme for the project Research of Methods for Acquisition, Analysis and Personalized Conveying of Information and Knowledge, ITMS 26240220039, co-funded by the ERDF.
| 16,783 | ["1001321", "1001322", "1001323"] | ["259428", "259428", "155410"] |
01466236 | en | ["shs", "info"] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466236/file/978-3-319-24315-3_3_Chapter.pdf | Roman Bronis
email: [email protected]
Ivan Kotuliak
email: [email protected]
Tomas Kovacik
email: [email protected]
Peter Truchly
email: [email protected]
Andrej Binder
email: [email protected]
IP data delivery in HBB-Next Network Architecture
Keywords: IP data encapsulation, HBB-Next, DVB, SPN, ns2, Application Data Handler (ADH), Hybrid Encapsulation Protocol (HEP)
Digital television enables IP data delivery using various protocols. The hybrid television standard HbbTV enhances digital television with application delivery. HBB-Next is an architecture which enhances HbbTV with additional features; however, it does not specify IP data delivery despite having access to both the broadcast and the broadband channel. This paper proposes an architecture and protocols for IP data delivery in HBB-Next.
To achieve this goal we designed new node (Application Data Handler -ADH) in HBB-Next architecture and new communication protocols (Application Data Handler Control Protocol -ADHCP, and Hybrid Encapsulation Protocol -HEP) for data transmission. We created Stochastic Petri Net (SPN) model of designed protocols and implemented them in ns2 network simulator to verify our solution. Results of SPN model simulation and ns2 network simulation are discussed and HEP protocol is compared to existing encapsulation protocols used in DVB systems.
Introduction
Evolution from digital television to hybrid television started with Multimedia Home Platform (MHP) [START_REF]Digital Video Broadcasting (DVB); Multimedia Home Platform (MHP) Specification 1[END_REF]. It was later surpassed by hybrid television standard -Hybrid Broadcast Broadband Television (HbbTV) [START_REF]Hybrid Broadcast Broadband TV, ETSI[END_REF]. HbbTV applications are CE-HTML based and can take advantage of broadband return channel. HbbTV applications can be interactive and mostly serve for TV providers as enhanced EPG (Electronic Program Guide) applications (archive, informations about movies and shows, trailers etc.). HbbTV applications can not only serve as TV information portals, but can be also used in other areas such as in e-learning [START_REF] Kovacik | HBB Platform for E-learning Improvement[END_REF].
To enhance HbbTV application capabilities, HBB-Next platform was designed [START_REF] Podhradsky | Evolution trends in hybrid broadcast broadband TV[END_REF]. It provides additional features as user recognition (by face or voice etc.), content recommendation and user management. As in HbbTV, HBB-Next terminals are connected to broadband Internet which is used for application data or media streams delivery. In HBB-Next, service provider has access to broadcast channel, but it is used only for media streaming.
HBB-Next platform does not specify IP data delivery. Protocols for IP data delivery in digital television could be used, but they were not designed to utilize both broadcast and broadband channel. In this paper we propose enhancement to HBB-Next architecture and we design protocols to deliver IP data from applications to terminals using both broadcast and broadband channel. We compare it to similar solutions and describe its advantages.
This paper is organized as follows: the second section describes protocols used for IP data delivery in DVB systems and current state of next generation of hybrid television. The third section proposes Application Data Handler (ADH) node and ADH-Control Protocol (ADHCP). In fourth section Hybrid Encapsulation Protocol (HEP) and HEP Hash Table (HHT) protocols are described. The fifth section describes Stochastic Petri Net (SPN) model of designed communication, its properties and results of simulations. The sixth section describes implementation of designed protocols in ns2 network simulator and results of simulations. The seventh section concludes this paper.
IP data delivery in DVB and in HBB-Next
In this section we describe current state of IP data delivery in DVB systems and current state of HBB-Next architecture.
IP data delivery in DVB
The Multi-Protocol Encapsulation (MPE) protocol was designed for IP data delivery in first-generation DVB systems (DVB-S/C/T) [START_REF]Digital Video Broadcasting (DVB); DVB specification for data broadcasting[END_REF]. It is the most used protocol for receiving IP data over the broadcast channel in areas without a broadband connection or with only a limited one. MPE can work in two modes. In padding mode, the unused data of an MPE frame are filled with invalid data. In packing mode, the unused data of an MPE frame are filled with the next packet. Padding mode is available on all devices with MPE support, whereas MPE packing mode is optional. Packing mode is more efficient but is not supported by all end devices.
The Unidirectional Lightweight Encapsulation (ULE) protocol was designed as a lightweight alternative to the MPE protocol [START_REF]Unidirectional Lightweight Encapsulation (ULE) for Transmission of IP Datagrams over an MPEG-2 Transport Stream (TS), IETF[END_REF]. It has a reduced header (header size: MPE 16 B, ULE 4 B) and uses packing mode by default.
For second-generation DVB systems (DVB-S2 etc.), a new protocol for IP data delivery can be used [START_REF]Digital Video Broadcasting (DVB) -Generic Stream Encapsulation (GSE) Protocol[END_REF]. The Generic Stream Encapsulation (GSE) protocol is the most efficient in DVB-S2 systems, but it is not compatible with the first generation of DVB systems. Despite GSE's higher efficiency, MPE is still used in many cases: during the transition to DVB-S2, providers stayed with the MPE protocol because it had wider support in consumers' end devices and was still used by DVB-S systems.
HBB-Next
HBB-Next is a platform for next-generation hybrid television. It was designed to provide additional features to hybrid television. The HBB-Next architecture consists of three main layers: application provider, service provider and terminal (Fig. 1). The application provider layer represents the application, its data and its frontend. Applications can be HbbTV compatible and can take advantage of the features of the service provider layer.
The service provider is the provider of the HBB-Next core architecture. The service provider layer consists of multiple nodes. It is designed to provide advanced features such as user recognition (Multi-modal Interface), content recommendation, enhanced identity management (IdM) and security management (SecM), and audio and video synchronisation and delivery (CloudOffloading and AV Sync nodes).
The terminal is an end-point device which is able to receive transmissions on the broadcast channel and is also connected to the broadband channel. The broadband channel's bandwidth is not specified, but an HBB-Next terminal is considered to have enough bandwidth for face recognition data and multimedia streaming reception.
Applications in HBB-Next can send data to the terminal over the broadband channel. The broadcast channel is solely used for audio and video data delivery; however, this channel could be used for delivery of other application data as well. To provide this feature, the HBB-Next architecture needs to be enhanced. HBB-Next does not specify IP data delivery in its core. Standard DVB encapsulation protocols (MPE, ULE, GSE) could be used for IP data delivery in HBB-Next only by bypassing its core. The HBB-Next platform could instead take advantage of its core and of its connection to both the broadcast and the broadband channel to transfer IP data from applications to terminals.
Application Data Handler (ADH) and ADH-Control Protocol (ADHCP)
The HBB-Next architecture is not directly suitable for delivery of application data through the broadcast channel; the only node which is connected to both the broadband and the broadcast channel is the AV Sync node. To enable applications to deliver IP data, we created a new node, the Application Data Handler (ADH), in the service layer (Fig. 2). The ADH node receives application data and sends them through the appropriate broadcast or broadband channel according to the applications' needs.
Data are sent from the application to the ADH node using the ADHCP protocol (Fig. 3). This protocol was designed to allow application data encapsulation (data field, < 1435 B), link type selection (link type field) and addressing (address type and address fields). The hash field in the ADHCP header is used as a checksum for the encapsulated data. ADHCP communication is based on request and reply messages, which use two different header formats. The application requests a data transfer from the ADH ((1) in Fig. 3). The ADHCP request header consists of link type, address type, address (optional), data (the encapsulated data) and a hash value (of the encapsulated data). In case of failure, the ADH node responds with an ADHCP response message ((2) in Fig. 3). The ADHCP response header consists of the hash value of the message which was not delivered correctly and a response code. The response code is:
- 00 (refused): the ADH refuses to transmit data over the selected link,
- 11 (retransmission request): one or more terminals failed to receive data that are no longer in the ADH cache,
- 01 and 10: reserved.
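A possible in-memory representation of these two ADHCP message types is sketched below in C++. The concrete field widths, container types and the wire encoding are assumptions for illustration; only the field names, the 1435 B payload limit and the response codes come from the description above.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Illustrative in-memory representation of the two ADHCP message types.
using Sha256 = std::array<std::uint8_t, 32>;   // hash of the encapsulated data

struct AdhcpRequest {                          // application -> ADH
    std::uint8_t  linkType;                    // requested channel (broadcast/broadband)
    std::uint8_t  addressType;                 // how the address field is interpreted
    std::vector<std::uint8_t> address;         // optional terminal/group address
    std::vector<std::uint8_t> data;            // encapsulated payload, < 1435 B
    Sha256        hash;                        // checksum of 'data'
};

enum class AdhcpResponseCode : std::uint8_t {
    Refused    = 0b00,   // ADH refuses to transmit over the selected link
    Reserved01 = 0b01,
    Reserved10 = 0b10,
    Retransmit = 0b11    // terminal(s) missed data no longer cached by the ADH
};

struct AdhcpResponse {                         // ADH -> application
    Sha256            hash;                    // identifies the affected message
    AdhcpResponseCode code;
};
```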
Fig. 3, part A, shows the ADHCP message exchange when there is insufficient bandwidth on the broadcast channel (the ADH refuses to send the data). Part B shows a correct data transmission to the ADH with ADHCP. Part C shows the ADHCP message exchange in case of a transmission failure on the broadcast channel.
HBB-Next's Application layer is considered to be connected to the Service provider layer with sufficient bandwidth. ADHCP is encapsulated in TCP/IP to achieve reliable transmission.
Hybrid Encapsulation Protocol (HEP) and HEP Hash Table (HHT) protocol
To transmit data from the ADH to the terminal, a new lightweight protocol was designed. The Hybrid Encapsulation Protocol (HEP) can be used either in DVB broadcast channels, encapsulated in MPEG-TS, or in the broadband channel over TCP/IP. Broadcast channels are not a reliable medium; to check correct reception of the data, hash values are sent from the ADH to the terminal using the HEP Hash-Table, HHT ((1) in Fig. 4). An HHT consists of a list of items. One item consists of the following fields (field size in brackets):
- hash value (256 b): hash of the data received from the application,
- link type (4 b): expected link on which the HEP frame will be received,
- reception time (32 b): expected time of HEP frame reception, in Unix time format,
- sequence number (20 b): position in the terminal's reception stack.
One or multiple items may be sent in one HHT message. The HHT is sent to the selected terminal over the broadband channel using TCP/IP and it contains only the items addressed to that terminal.
After the HHT communication, a HEP message is sent to the terminal ((2) in Fig. 4). The HEP message consists of the encapsulated data sent from the ADH (originally from the application). For HEP messages received from the broadcast channel, the terminal computes the hash value and checks whether the message was received correctly: if so, the computed hash value matches one of those received in the HHT; in case of a transmission error, the computed hash value does not match any of the items in the HHT list. The terminal periodically checks the reception times and requests the frames ((3) in Fig. 4) which were not delivered within the reception time given in the terminal's HHT. The requesting HEP message consists of the number of requested frames followed by their hash values. After the ADH receives a HEP request, it can either resend the data (if still cached) or request a retransmission from the application with an ADHCP response ((4) in Fig. 4).
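The terminal-side bookkeeping implied by this exchange can be sketched as follows. The hashing routine, the container types and the method names are assumptions made for illustration; only the HHT item fields and their bit widths come from the protocol description above.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

using Sha256 = std::array<std::uint8_t, 32>;

struct HhtItem {
    Sha256        hash;            // 256 b hash of the expected payload
    std::uint8_t  linkType;        // 4 b, expected reception link
    std::uint32_t receptionTime;   // 32 b, expected reception time (Unix time)
    std::uint32_t sequenceNumber;  // 20 b, position in the reception stack
};

struct Sha256Hash {                // map key hashing for the 32-byte digest
    std::size_t operator()(const Sha256& h) const {
        std::size_t v = 0;
        for (std::uint8_t b : h) v = v * 131 + b;
        return v;
    }
};

class HepReceiver {
public:
    void onHht(const HhtItem& item) { pending_[item.hash] = item; }

    // Returns true when a broadcast HEP frame matches an announced hash.
    bool onHepFrame(const std::vector<std::uint8_t>& payload) {
        Sha256 h = computeHash(payload);          // assumed hash routine
        auto it = pending_.find(h);
        if (it == pending_.end()) return false;   // corrupted or unannounced frame
        pending_.erase(it);                       // correctly received
        return true;
    }

    // Hashes whose reception deadline has passed: content of a HEP request ((3) in Fig. 4).
    std::vector<Sha256> overdue(std::uint32_t now) const {
        std::vector<Sha256> missing;
        for (const auto& [hash, item] : pending_)
            if (item.receptionTime < now) missing.push_back(hash);
        return missing;
    }

private:
    static Sha256 computeHash(const std::vector<std::uint8_t>&) { return {}; } // placeholder
    std::unordered_map<Sha256, HhtItem, Sha256Hash> pending_;
};
```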
HEP encapsulation was designed to be more efficient than the previously used protocols. The ULE protocol was designed to reduce MPE's header to achieve higher efficiency [9][10]. The GSE protocol is even more efficient, but it is used only in second-generation DVB channels [START_REF] Mayer | Analytical and Experimental IP Encapsulation Efficiency Comparison of GSE, MPE, and ULE over DVB-S2[END_REF]. HEP encapsulation over the broadcast channel is more efficient than MPE, ULE or GSE because it carries no header there; it uses the HHT protocol instead, which is transmitted solely over the broadband channel and therefore saves broadcast channel bandwidth.
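As a rough, illustrative comparison of broadcast-channel overhead (ignoring the MPEG-TS packetisation overhead that applies to all of the compared protocols), the following sketch computes the payload efficiency for an example 1400 B datagram using the header sizes quoted earlier (MPE 16 B, ULE 4 B, HEP 0 B on the broadcast channel; the corresponding 312-bit HHT item travels over the broadband channel instead):

```cpp
#include <cstdio>

// Back-of-the-envelope broadcast-channel efficiency per encapsulated datagram.
int main() {
    const double payload   = 1400.0;  // example IP datagram size in bytes
    const double mpeHeader = 16.0;    // MPE header on the broadcast channel
    const double uleHeader = 4.0;     // ULE header on the broadcast channel
    const double hepHeader = 0.0;     // HEP carries no broadcast header; its HHT
                                      // item (312 b = 39 B) uses the broadband channel
    auto efficiency = [&](double hdr) { return payload / (payload + hdr); };
    std::printf("MPE: %.2f%%  ULE: %.2f%%  HEP: %.2f%% (broadcast channel)\n",
                100 * efficiency(mpeHeader),
                100 * efficiency(uleHeader),
                100 * efficiency(hepHeader));
    return 0;
}
```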
To verify the properties of the designed protocols, we created a model of their communication using Stochastic Petri Nets (Fig. 5). We verified selected properties using the PIPE tool [START_REF]PIPE: tool fol Petri Nets[END_REF]. The place p K was used to simulate the broadcast link capacity. The transitions t Y and t N were stochastic transitions set to simulate the broadcast channel error rate -t Y was executed every 2 times and t N every 8 times, which represents a 20% error rate on the broadcast channel. The results showed that the PN is not safe and can be deadlocked -which means that sent messages can be delivered and halt in the terminal's last state p 4 (correct data reception). This is considered correct protocol behavior, because it shows that data can be delivered to the final destination (p 4 -correct data reception). No other deadlock was identified. We tested boundedness for the place representing the link capacity (p K) and the results showed that this place is bounded; therefore the communication cannot overload the link capacity. Boundedness of the whole model was also tested and the results showed that the whole model is bounded. Using Snoopy [13] we simulated the SPN model sending 10 and 1000 messages (Fig. 6). The simulation results show correct reception of all messages -the purple line of place p 4 reaches the number of sent messages -and correct retransmission requests in case of an error on the broadcast channel -the highlighted red line of place p R.
Simulation in ns2
In order to simulate our protocols, we implemented them in the ns2 network simulator. The application (APP) sent its messages through the ADH node, broadcasting to 100 terminals through a DVB gateway (DVB GW). We set the terminals' reception error rate on the broadcast channel to various percentages (0-99%). Terminal Term(0) had a 0% reception error rate, terminal Term(1) had a 1% reception error rate, and so forth. The simulation scenario was set to request every erroneous message again over the broadband channel through an IP gateway -IP GW (Fig. 7).
The results (Fig. 8) show the dependence of received frames on the broadcast channel error rate. The red line represents the percentage of messages received through the broadcast channel and the green line represents the percentage of messages received through the broadband channel. Independently of the broadcast channel error rate, the sum of both lines (channels) gives 100% reception of messages at the terminal. The results verified the correct transmission and retransmission behaviour of the designed protocols in the network simulator.
Conclusion
In this paper we proposed an architecture and protocols for IP data delivery in DVB broadcast channels in next generation hybrid television -HBB-Next. We designed a new node in the HBB-Next architecture -the Application Data Handler (ADH). The ADH receives all application data and transmits them to terminals using either broadcast or broadband channels. In order to communicate with the ADH we designed the ADH Control Protocol (ADHCP). For data delivery to terminals through different channels we designed the Hybrid Encapsulation Protocol (HEP) and the HEP Hash Table (HHT) protocols.
We created a Stochastic Petri Net (SPN) model of the communication with the designed protocols. We analysed the properties of the SPN model and simulated its behaviour. The results showed the desired properties and the simulations verified correct transmission and retransmission of messages. Later we implemented our protocols in a network simulator and simulated communication with multiple terminals with different error rates on the broadcast channel. The results showed correct data transmission over the broadcast channel and retransmission over the broadband channel.
Our work enables IP data delivery in HBB-Next from applications to terminals over the HBB-Next service provider layer. Applications can not only serve as multimedia and HbbTV content providers, but with our changes they can also act as IP data providers. Our HEP encapsulation has also reduced the frames' header overhead on DVB broadcast channels; it uses the broadband channel instead to deliver the HHT (header-like data).
Future work includes further comparison of SPN simulations with network simulations, testing of parallel transmission in various complex scenarios, and implementation and testing on real hardware.
Fig. 1. HBB-Next architecture (high-level view)
Fig. 2. HBB-Next architecture with ADH node
Fig. 3. ADHCP messages flow
Fig. 4. HHT and HEP communication
Fig. 5. Communication model using Petri Nets
Fig. 6. SPN simulation -10 messages, 1000 simulations
Fig. 7. Simulated topology
Fig. 8.
Table 1. Places and transitions in SPN
Places: p0 - ADHCP request sent, p1 - ADHCP request received, p2 - HHT received, p3 - HEP received, p4 - correct data, pF - full channel, pK - link capacity, pR - incorrect data, retransmission.
Transitions: t1 - send ADHCP request, t2 - send HHT, t3 - send HEP frame, tY - data check (correct), tN - data check (incorrect), tR - request for retransmission, tF - denial of transmission.
Acknowledgments. This work is a result of the Research and Development Operational Program for the projects Support of Center of Excellence for Smart Technologies, Systems and Services, ITMS 26240120005 and for the projects Support of Center of Excellence for Smart Technologies, Systems and Services II, ITMS 26240120029, co-funded by ERDF and was supported by the Slovak national research project VEGA 1/0708/13, KEGA 047STU-4/2013 and Slovak Research and Development Agency project APVV-0258-12. | 17,596 | [
"1001324",
"1001285",
"1001325",
"1001326",
"1001327"
] | [
"259428",
"259428",
"259428",
"259428",
"259428"
] |
01466237 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466237/file/978-3-319-24315-3_4_Chapter.pdf | Tomáš Halagan
email: [email protected]
Tomáš Kováčik
email: [email protected]
Peter Trúchly
email: [email protected]
Andrej Binder
email: [email protected]
Syn Flood Attack Detection and Type Distinguishing Mechanism Based on Counting Bloom Filter
Keywords: DoS detection, DoS identification, Counting Bloom Filter, TCP, SYN, flood attack, network security
Introduction
Internet allows people to connect with each other in different ways. However, every new functionality, service, new way of communication, new invention designed for the benefit of humanity may pose a potentially exploitable threat which network and systems' administrators need to be aware of.
Computer network security and privacy receive a lot of attention and are currently of very high importance -various detection algorithms and protection mechanisms are implemented on different network layers, in network devices and in operating systems (good examples are the widespread use of VPN networks [START_REF] Kotuliak | Performance Comparison of IPsec and TLS Based VPN Technologies[END_REF] and efforts to enhance security in mobile networks [START_REF] Nagy | Enhancing security in mobile data networks through end user and core network cooperation[END_REF]). Despite all of this, one important issue remains: administrators need information about a currently ongoing attack as soon as possible in order to take action. The development of new, effective solutions to detect attacks and provide such information is thus an open case [START_REF] Kambhampati | A taxonomy of capabilities based DDoS defense architectures[END_REF], [START_REF] Rejimol Robinson | Evaluation of mitigation methods for distributed denial of service attacks[END_REF].
Among the most common DoS attacks are flooding attacks, which exploit holes in the network protocols used [START_REF] Habib | Steps to defend against DoS attacks[END_REF]. In our work we focus on a proposal of a modified SYN flood attack detection mechanism and its implementation in KaTaLyzer. KaTaLyzer is a network traffic monitoring tool developed at STUBA [START_REF]Network monitoring tool Katalyzer[END_REF]. Our main contribution described in this paper is a modification of the existing Counting Bloom Filter (CBF) based method for attack detection. With the aim of lowering memory requirements, we use a modified CBF structure (one vector) for storing counters of half-open TCP connections. Besides detecting a SYN flood attack, our method also allows the type of the attack to be distinguished. More information about the DoS SYN flood attack can be found e.g. in [START_REF]1996-21 TCP SYN Flooding and IP Spoofing Attacks[END_REF].
After detection of an ongoing SYN flood attack, the network administrator is notified. TCP SYN flood attacks can be classified as:
• Random -the spoofed source IP address is generated randomly for each packet,
• Subnet -the spoofed source IP address for each packet is generated from a specific subnet range,
• Fixed -several chosen IP addresses are used.
Section 2 of this paper describes the Bloom filter data structure for storing data and its modifications. Section 3 presents our modification of the CBF, while Section 4 introduces the new S-Orthros algorithm. Section 5 with the evaluation of the proposed method is followed by a discussion and the conclusion of the paper.
Bloom filter and its modification
In the early 1970s, Burton H. Bloom [START_REF] Bloom | Space/Time Trade-offs in Hash Coding with Allowable Errors[END_REF] introduced new hash-coding methods, which became the cradle of a new approach to storing data in a data structure, later called the Bloom filter. His efficient structure provides a way to reduce the space required for storing data, at the cost of false-positive members. As described in [START_REF] Tabataba | Improving false positive in Bloom filter[END_REF], the Bloom filter data structure is widely used in today's Internet, where viruses, worms and network intruders cause service damage with enormous economic impact. In our approach it is modified and used for storing data about attacking IP addresses.
Bloom Filter algorithm.
The mathematics behind the Bloom filter data structure is the following: consider a set of m elements, in our case a set of m IP addresses IP = {ip1, ip2, ip3, ..., ipm}. After application of the Bloom filter, the set is described by a vector V which is n bits long and is initially set to n zeros: V = (v1, v2, ..., vn) = (0, 0, ..., 0). Consider k independent hash functions which the Bloom filter uses to generate k hash values from each element of the IP set:

Ki = hi(ipj), 1≤i≤k, 1≤j≤m

The output values of the hash functions are integers Ki ∈ {1, ..., n} and represent indices into the vector V. If the x-th hash function hx, 1≤x≤k, applied to a member ipj ∈ IP, 1≤j≤m, results in the value Kx, i.e. Kx = hx(ipj), the Kx-th bit of the vector V is set to 1:

V = (v1, v2, ..., vKx, ..., vn) = (v1, v2, ..., 1, ..., vn)

For each element ipj ∈ IP, 1≤j≤m, the Ki-th bit of the vector V is set to 1, where Ki = hi(ipj), for each 1≤i≤k. This way the k hash functions applied to the m members of the IP set change k bits in the vector V per element (provided the indices K are all different). If two or more hash functions result in the same index K, the corresponding bit in the vector V is not changed more than once; it is simply set to 1.
To find out whether an IP address H was not a member of the set IP, the hash functions are applied to it and the corresponding bits in the vector V are checked. If even one of these bits is set to 0, H was not a member of the IP set. Due to the possibility of overlapping bits set to 1 by the hash functions, the Bloom filter does not provide the reverse information with certainty, i.e. whether H was a member of the IP set.
Depending on the relations among the hash functions and the overlapping of their results, bits in the vector V can be set to 1 multiple times. However, only the first setting of a bit to 1 changes its value.
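The following minimal Python sketch mirrors the description above: an n-bit vector and k hash functions (indices in the code run from 0 to n-1).

```python
class BloomFilter:
    def __init__(self, n, hash_functions):
        self.bits = [0] * n               # the vector V, initially all zeros
        self.hashes = hash_functions      # k callables mapping an element to 0..n-1

    def add(self, element):
        for h in self.hashes:
            self.bits[h(element)] = 1

    def might_contain(self, element):
        # False: the element was definitely never added.
        # True: possibly added (may be a false positive due to overlapping bits).
        return all(self.bits[h(element)] == 1 for h in self.hashes)
```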
Counting Bloom Filter
Assume a situation in which the elements of the IP set change periodically and are thus being inserted into and deleted from the data structure. Inserting elements is a simple process which has been described above. However, when deleting an element from the Bloom filter data structure, we need to set the corresponding bits to zero. It is possible that this operation will affect bits which were also set to 1 by a hash function for a different element of the IP set. In this situation the Bloom filter no longer provides a correct representation of the elements of the IP set. This problem has been solved in [START_REF] Fan | Summary cache: A scalable wide-area web cache sharing protocol[END_REF], which outlined a new data structure called the Counting Bloom Filter (CBF). In this structure, the bits in the vector V are replaced by long integers which are used as counters, and each hash function has its own vector of these counters. If we want to keep track of an element ipx in the data structure, the counter corresponding to the value of each independent hash function is incremented. During deletion of an element, the appropriate counters are decremented.
Independent Hash Functions
Finding a well designed set of hash functions is important for the correct storage and distribution of elements in the CBF data structure. The performance of the hash functions is also important. In a comparative study performed by Chen and Yeung in [START_REF] Yeung | Throttling spoofed syn flooding traffic at the source[END_REF], independent hash functions with a low probability of collisions were designed. The 32-bit IP address is used as the key for the hash functions.
The hash functions are defined as follows:
hi(IP) = (IP + IP mod pi) mod n, 1≤i≤k
where mod denotes the modulus operation, n is the row length of the hash table, and pi is a prime number less than n.
The following table from the work of Chen and Yeung shows a comparison of the proposed hash function with other known hash functions [START_REF] Yeung | Throttling spoofed syn flooding traffic at the source[END_REF]. In our work, our examination resulted in setting the variables of the above mentioned function as follows: n=1024, k=4.
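In Python, the hash family and the chosen parameters look as follows. The paper fixes n=1024 and k=4 and uses the prime 307 in its Excel simulation; the other three primes below are illustrative choices, not values taken from the paper.

```python
N = 1024                       # row (vector) length
PRIMES = [307, 311, 313, 317]  # k = 4 primes < n; only 307 appears in the paper

def ip_to_int(ip):
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def make_hashes(n=N, primes=PRIMES):
    # h_i(IP) = (IP + IP mod p_i) mod n
    return [lambda ip, p=p: (ip + ip % p) % n for p in primes]

hashes = make_hashes()
key = ip_to_int("192.168.0.1")          # 3232235521, as computed in the evaluation
positions = [h(key) for h in hashes]    # the 4 counter positions for this address
```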
New proposal of Modified CBF (MCBF)
The biggest disadvantages of CBF data structure compared to Bloom Filter are:
• more memory space is needed to store data: consider k independent hash functions, each of which requires one row (vector) of counters, so k = r, where r is the number of counter rows needed. With n counters per row, the space needed for storing elements in the CBF data structure can be expressed as n*r,
• possible overflow of the counters may pose a risk, especially as more and more elements are stored.
Simplification of the CBF data structure is one of our contributions in this paper.
Compared to the CBF, we propose to use only one vector of counters (as in the BF), in which the independent hash functions increment values when storing an IP address from which a half-open TCP connection is initiated (a SYN packet is received). The counters are decremented when a connection is fully opened from that IP address (the ACK is received).
As in our attack detection approach the data structure is cleared periodically after a defined time interval (see the following section), one vector of long integer counters is sufficient for storing the IP addresses of half-open connections by incrementing and decrementing the counters. It is designed to fit the proposed SYN flood attack detection solution, which is described in the later section on the S-Orthros detection algorithm.
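A minimal sketch of the proposed MCBF: one shared vector of counters instead of one counter row per hash function. Class and method names are ours.

```python
class MCBF:
    def __init__(self, n, hash_functions):
        self.counters = [0] * n           # single vector of long integer counters
        self.hashes = hash_functions

    def insert(self, ip_as_int):          # SYN received: half-open connection starts
        for h in self.hashes:
            self.counters[h(ip_as_int)] += 1

    def remove(self, ip_as_int):          # ACK received: handshake completed
        for h in self.hashes:
            idx = h(ip_as_int)
            if self.counters[idx] > 0:
                self.counters[idx] -= 1

    def clear(self):                      # called after each analysis interval
        self.counters = [0] * len(self.counters)
```

Without an attack the SYN/ACK pairs cancel each other out and the vector stays at zero, which is exactly the property the detection step relies on.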
Detection module S-Orthros
Our method for detecting a SYN flood attack uses the MCBF data structure. The modification of the CBF data structure resulted in a simplification and clarification of the solution and of the method itself. The intention is therefore to evaluate the conditions -and thus detect and assess an attack -within a given constant time interval.
S-Orthros in a nutshell
Consider the case where the detection algorithm cooperates with a measuring tool which is used to capture and analyse network traffic in real time. At the beginning, a process capturing network traffic statistical data is started and runs continuously.
After a defined time interval (set by the administrator, usually 1 minute), a process analysing the captured network traffic data is started. In this process, the data important for the S-Orthros detection algorithm are also saved -the source and destination IP addresses of SYN and ACK packets. These IP addresses are saved into the MCBF structure using the chosen hash functions. The structure consists of two tables, each containing n long integer counters (2*1024*4 B).
The first table is used to store source IP addresses, the second table stores destination IP addresses. During the analysis, the S-Orthros detection algorithm collects information about initiated connections (i.e. SYN packets) and confirmations of the connections (i.e. ACK packets). If the analysis detects a SYN packet, the counters in both tables of the MCBF data structure are incremented according to the results of the hash functions applied to the IP addresses. On receipt of an ACK confirming a previous SYN packet, the counters in both tables are decremented. If there is no flood attack, the TCP handshakes are correct (i.e. the numbers of SYN and ACK packets are the same) and the data structures remain empty.
In case of a flood attack, the values in the MCBF structure rise fast. Based on experience (e.g. the settings of Cisco routers [START_REF] Davis | Securing and controling CISCO routers[END_REF] or of different operating systems), the threshold of acceptable half-open connections has been set to 50. The attack detection process checks the number of half-open connections against the threshold and alerts the administrator.
The data stored in the MCBF can be analysed, and the distribution of the values can reveal the type of SYN flood attack -fixed, random or subnet (see the following section).
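The paper gives no pseudocode for the analysis pass, so the sketch below is only one possible reading of it, reusing the MCBF sketch above. In particular, taking the largest counter of the destination table as the number of half-open connections is our assumption, not a rule stated in the paper.

```python
THRESHOLD = 50   # acceptable number of half-open connections

def analyse_interval(packets, src_table, dst_table, notify):
    """One pass over the packets captured during the analysis interval.
    `packets` yields (tcp_flags, src_ip, dst_ip) with IPs already as integers."""
    for flags, src, dst in packets:
        if flags == "SYN":                 # connection initiated
            src_table.insert(src)
            dst_table.insert(dst)
        elif flags == "ACK":               # connection confirmed
            src_table.remove(src)
            dst_table.remove(dst)

    half_open = max(dst_table.counters)    # residue left by unanswered SYNs
    if half_open > THRESHOLD:
        notify("possible SYN flood: %d half-open connections" % half_open)

    src_table.clear()
    dst_table.clear()
```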
Evaluation
For the theoretical evaluation of our attack-type-distinguishing approach, we implemented the algorithm in the Microsoft Excel spreadsheet processor. For the practical method, we generated SYN flood DoS attacks which were detected by our new SYN flood attack detection module implemented in KaTaLyzer. The detected attacks have proven the correctness and functionality of the implemented detection algorithm.
Theoretical evaluation
Using the MCBF we are able to simulate different variants of SYN flood attacks. We have simulated three types: Random SYN flood, Subnet SYN flood and Fixed SYN flood.
To obtain input data for simulations in Excel, it is necessary to define (in case of Fixed attack) and generate (in case of Random and Subnet attacks) IP addresses from which the hash functions calculate their values. These values are stored in the MCBF data structure.
For our simulations, an IP address is represented numerically as a 32-bit number. For instance, the well-known address 192.168.0.1 is calculated as follows:
1*256^0 + 0*256^1 + 168*256^2 + 192*256^3 = 3232235521
For the Random and Subnet SYN flood attacks we generated 250,000 IP addresses in Excel (they represent the number of half-open connections). We chose the following ranges for the particular attacks:
• Random -RANDBETWEEN(1; 4294967295) -covers the whole range of IP addresses,
• Subnet -RANDBETWEEN(3232235776; 3232236031) -covers the IP addresses from 192.168.1.0 to 192.168.1.255.
Both of these attacks can be distinguished thanks to the typical arrangement of the IP addresses stored in the data structure. This arrangement of IP addresses has been verified on a sample of 250,000 IP addresses with the proposed hash functions, which are defined for Excel as follows:
MOD((D1+MOD(D1;307));1024), where D1 represents the cell containing the IP address, 307 is a prime number and 1024 is the length of the vector.
The results of the simulations are shown below in Figures 2 and 3. As we can observe, the numbers of half-open connections in the MCBF during the simulated Random attack have an equal distribution -small numbers of half-open connections are measured for each IP address. On the other hand, the Fixed attack is characterized by high numbers of half-open connections for a few specific IP addresses. For the Subnet attack it is typical that the chosen range of IP addresses has a relatively high number of half-open connections.
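The paper distinguishes the attack types by the arrangement of the counters rather than by an explicit rule, so the classifier below is only a rough illustrative heuristic over the counter vector; the coverage thresholds are our assumptions, not values from the paper.

```python
def classify_attack(counters):
    """Guess the attack type from how the MCBF counters are distributed:
    a Fixed attack raises only a handful of counters to very high values,
    a Subnet attack raises a limited group of counters, and a Random attack
    spreads small increments over almost the whole vector."""
    raised = sum(1 for c in counters if c > 0)
    coverage = raised / len(counters)
    if coverage < 0.05:
        return "fixed"      # e.g. 4 source IPs x 4 hash functions
    if coverage < 0.75:
        return "subnet"     # a /24 range touches only part of the 1024 counters
    return "random"         # spoofed addresses cover practically the whole vector
```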
Practical evaluation
There are a number of tools for generating DoS attacks. Project Neptune can be used to generate a SYN flood attack; it can continuously send TCP SYN packets at a rate of 248 SYN packets per second [START_REF] Cardinal | Use offense to inform defense. Find flaws before the bad guys do[END_REF]. For our purposes we tested the hping3 tool [START_REF]Hping -Active Network Security Tool[END_REF], sending TCP SYN packets on port 443 to the target host and sending TCP SYN packets with the ACK flag set to the target host. We also used the Letdown tool [START_REF] Acri | Complemento Howto[END_REF] and Ev1syn [START_REF]Ev1Syn -A SYN Flood with Random Spoofed Source Address[END_REF].
To verify the basic functionality of our proposed detection method implemented in KaTaLyzer, a TCP SYN flood attack was executed using Ev1syn. As expected, the S-Orthros module was able to correctly detect even a weak SYN flood attack (121 half-open connections).
Fixed attack identification.
The Fixed TCP SYN flood attack was generated from 4 IP addresses. They can be identified in the graph shown in Figure 6 as 4 tipping points. The S-Orthros module stored information about each IP address in the MCBF data structure 4 times, using the 4 independent hash functions.
As expected, the attack generated on a real network had similar characteristics to the theoretical run of the attack -compare Fig. 2 and 4. Minor measured variations are caused by a variety of regular communications which were captured together with the attack on the server -the monitored network is connected to a regular network.
Random attack identification.
In the next step we evaluated the detection of a Random SYN flood attack. The test also evaluated the stability of the implemented module, as well as the stability of the measurement tool KaTaLyzer under high load. All tools used for attack generation generated the source IP address for each packet randomly; the destination IP address was the address of the measuring server.
For more detailed comparison of theoretical and practical approaches we closely investigated theoretical and measured attacks (see Fig. 5). It is necessary to take into account that the algorithm used to generate random IP addresses in Excel and the algorithm used to generate random spoofed IP addresses implemented in the attack generator give us different but similar input data. Nonetheless, a way of storing data in the MCBF data structure should be retained and therefore the results from both graphs in Figure 5 show uniformly stored data.
Discussion
Security and protection against DoS attacks can be addressed at different levels. It is even possible to avoid such threats in operating system by simple firewall settings [START_REF] Brouer | Mitigate TCP SYN Flood Attacks with Red Hat Enterprise Linux 7 Beta[END_REF].
Nevertheless, we can still find unsecured systems and security holes through which an attack can be successfully performed. The network traffic measuring tool KaTaLyzer, which runs 24/7, not only analyzes and saves network traffic statistical data; thanks to the newly implemented attack detection module it also provides an additional level of network protection. The obtained results show that the network administrator receives an almost immediate notification about an ongoing SYN flood attack, thanks to which the necessary steps can be taken to mitigate or eliminate the attack. The type of attack can also be distinguished.
Correctness, completeness and functionality of the proposed detection algorithms are confirmed by the results obtained by practical methods and theoretical methods described in this paper. Comparison of theoretical and practical results show similar statistical characteristics of generated attacks. We generated TCP SYN flood attack for several hours so that we not only verify the stability of the implemented module, but also the entire measuring tool KaTaLyzer in which the module was added.
Conclusion
We proposed a fast and memory-efficient method for SYN flood DoS attack detection and type identification. It is based on a modification of the Counting Bloom Filter in which the multiple vectors of counters are replaced by one vector. The appropriate counters in the vector are incremented by a new half-open TCP connection and decremented by a successful TCP connection establishment. A large number of half-open TCP connections indicates a SYN flood attack in progress. Without an attack, the counters remain empty. Through the modified Counting Bloom Filter, we are able to distinguish the three main TCP SYN flood attacks (random, fixed, subnet), which may significantly help the network administrator to mitigate or avert the ongoing attack.
The new method has been implemented in the new S-Orthros module for the network monitoring tool KaTaLyzer. After detection and identification of a SYN flood DoS attack, the module informs the network administrator.
The detection method has been verified by theoretical and practical methods for random, subnet and fixed SYN flood DoS attacks.
Fig. 1. Storing data in the MCBF data structure using 4 hash functions and counters
Fig. 2. Theoretical results -Random and Fixed TCP SYN flood attacks
Fig. 3.
Fig. 4. Practical results -Fixed TCP SYN flood attack
Fig. 5. Detailed Random TCP SYN flood attack -theoretical and practical results
Table 1. Tested hash functions [START_REF] Yeung | Throttling spoofed syn flooding traffic at the source[END_REF]
Hash function Consumed time (s) Number of collisions
Our function 0.187 0
Robert's 32-bit function 0.188 3838
Robert's 96-bit function 0.250 0
Cruth's function 0.031 977
Hybrid function 0.328 0
Acknowledgement
This work is a result of the Research and Development Operational Program for the projects Support of Center of Excellence for Smart Technologies, Systems and Services, ITMS 26240120005 and for the projects Support of Center of Excellence for Smart Technologies, Systems and Services II, ITMS 26240120029, co-funded by ERDF. It is also a part of APVV-0258-12, VEGA 1/0708/13 and KEGA 047STU-4/2013. It is also part of Katalyzer project katalyzer.sk and initiative ngnlab.eu. | 20,572 | [
"1001328",
"1001325",
"1001326",
"1001327"
] | [
"259428",
"259428",
"259428",
"259428"
] |
01466238 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466238/file/978-3-319-24315-3_5_Chapter.pdf | Martin Nagy
email: [email protected]
Ivan Kotuliak
email: [email protected]
Jan Skalny
Martin Kalcok
email: [email protected]
Tibor Hirjak
email: [email protected]
Integrating Mobile OpenFlow Based Network Architecture with Legacy Infrastructure
Keywords: 3GPP networks, GPRS, SDN, Software Defined Networking, NFV, Network Functions Virtualization, OpenFlow, signaling and user data separation, wireless networks, cellular networks, PCU-ng, PCUng, ePCU, vGSN, ReST, MAC tunneling, Ethernet tunneling, ICMP topology discovery, ARP APN search
UnifyCore is a concept of an SDN-centric, OpenFlow-based and access-agnostic network architecture, which changes the way networks are built today. It is designed so that present access technologies can be easily integrated into it. It provides a set of architectural components and rules, which help to easily decouple components of the access technology and put their functionalities into UnifyCore building blocks. This simplifies the overall network architecture and allows the use of a common transport core for all access technologies. The first proof of concept built on UnifyCore is the GPRS network, which is a challenge for SDN, since it does not have split user and control plane transport. In this paper we introduce and explain features that allow the fully SDN UnifyCore to be integrated with existing legacy network infrastructure (switches/routers).
Introduction
One of the drivers behind the software defined networking (SDN) trend was the inflexibility of existing networking approaches and an industry that limited the space for innovation. Researchers also struggled with black-box networking approaches and architectures, which limited the experimental capabilities of existing network equipment. Since then, SDN has spread through wired networks and it is making its way, together with network functions virtualization (NFV), into the network operator world, where most of the industry struggles with network equipment that is often hard to integrate with existing infrastructure, does not provide open interfaces and thus requires complicated workarounds.
In the UnifyCore architecture we are trying to address the heterogeneity and inflexibility of the network infrastructure, which complicates network management, control, new service deployment and orchestration. This is the case mainly with large network operators, who provide services over multiple technologies such as several wireless technologies (GPRS/UMTS/LTE), xDSL and optical networks at the same time. Customers naturally expect the same look and feel of the service regardless of the technology being used. With standard networking approaches, this is hard and often expensive to achieve.
Our UnifyCore approach offers joint control by using open APIs on the central SDN controller and access network control elements (access managers). By using this approach, network operators can easily orchestrate and have better control of the network.
The paper is structured as follows. First two sections give an overview of the foundation of mobile networks and SDN. Next, state of the art in the area of mobile software defined networks is briefly introduced. Rest of the paper focuses on the UnifyCore architecture and its features. Last section concludes the paper.
Mobile Networks Basics
As general packet radio service (GPRS) was the first network technology we integrated into UnifyCore, we will first introduce some essential concepts of this network. In this paper we focus only on the packet switched part of the network, therefore we won't explain procedures and nodes of the circuit switched part of the network.
The GPRS network consists of the radio access network (RAN) and the core network (CN). In the RAN, the base transceiver station (BTS) and the base station controller (BSC) are located. The BTS is a device which handles the radio interface. It is responsible for modulation/demodulation as well as error checking and correction, and communicates with the BSC on one side and the mobile station (MS) on the other side. All logic of the radio access network is located in the BSC. Multiple BTSs are controlled by a single BSC. The BSC connects the RAN to the core network, more precisely to the serving GPRS support node (SGSN). This node is responsible for mobility management, session management, authentication and ciphering in the GPRS network. Further into the core network, the SGSN connects to the gateway GPRS support node (GGSN). As the name implies, this node is a gateway from the mobile network to external networks such as the Internet or a corporate intranet/VPN.
A basic call flow in a mobile network includes two main procedures. First, the attach procedure is executed. During this procedure the mobile station is authenticated and gets connected to the network. At this point, the mobile station does not have any IP connectivity. Circuit switched calls and SMSs are available (attach to both the circuit switched and the packet switched part of the network is assumed). In order to communicate, for example with the Internet, a second procedure called PDP context activation has to be executed. In this procedure, the mobile station specifies the requested service by filling in the access point name (APN) information element. If the procedure succeeds, the network assigns an IP address to the mobile station and transfer of data across the network is possible.
Further details about GPRS and other mobile networks such as universal mobile telecommunications system (UMTS) and long term evolution (LTE) technologies can be found in respective 3GPP standards or books [1,2,3].
As mentioned before, the key driver behind SDN was the situation in the network industry, which mainly used black boxes from different vendors that provided only CLI or SNMP for management and integration, with no standard APIs providing full control over the network appliance. This situation made the integration of network infrastructure from different vendors very difficult and expensive. Such integration complicated network automation and integration processes. It also led to vendor lock-in in some cases. From the research point of view, black boxes provide little to no space for experiments, so SDN was introduced to challenge these limitations.
SDN brings separation of the user and control planes of the network appliance. By doing this, each plane can evolve separately and can be optimized for its needs. Moreover, as these two functions formerly residing in the same box are split by SDN, the need for a communication protocol or API between the two planes became evident. The most successful SDN approach is probably the OpenFlow protocol.
OpenFlow
OpenFlow, as the name suggests, builds on the idea of network flows. A flow in the network is specified by an n-tuple of protocol header fields. Different sets of protocol headers and header fields are supported in each version of the protocol. An OpenFlow network is composed of an OpenFlow controller which communicates with OpenFlow switches, also called forwarders.
A forwarder is composed of a set of flow tables, into which flow entries can be written and by which packets are processed. In each flow entry, selected protocol header fields -match fields -are specified, and a set of actions to be performed after a match is associated with it. Flow entries can be installed by the SDN controller at any time. When a packet is received by the OpenFlow forwarder, its header fields are compared against the flow entries and in case of a match the associated actions and instructions are executed. This way, any new networking approach depends only on the logic in the controller, since the OpenFlow protocol and OpenFlow switch capabilities are standardized and operate at an atomic level (the network flow) [START_REF]Networking Foundation: OpenFlow Switch Specification 1.4.0[END_REF].
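As a concrete illustration of a flow entry (match fields, actions and instructions), the following sketch installs a rule on a newly connected forwarder using the Ryu framework that is also used later in the evaluation. OpenFlow 1.3 is assumed; the table number and priority are arbitrary illustrative values.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls

class FlowEntryExample(app_manager.RyuApp):
    """When a forwarder connects, install one entry into table 0 that matches
    ICMP packets and forwards them to the controller."""

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=1)          # IPv4 + ICMP
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=0, priority=100,
                                      match=match, instructions=inst))
```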
There are many more SDN related approaches, both academic -I2RS [START_REF]IETF: Interface to the Routing System[END_REF], ForCES [START_REF]IETF: Forwarding and Control Element Separation workgroup[END_REF], PCEP [START_REF] Le Roux | RFC 5440 -Path Computation Element Communication Protocol[END_REF] and vendor specific -OnePK [START_REF]Cisco: Cisco's One Platform Kit (onePK)[END_REF], but these have little relevance to our work, moreover OpenFlow is the leader on the market and the academia.
Related Work
Most of the present work focusing on mobile SDN is addressing different kinds of mobile gateway nodes decomposition or network functions placement [START_REF] Basta | Applying NFV and SDN to LTE mobile core gateways, the functions placement problem[END_REF] [START_REF] Hampel | Applying Software-Defined Networking to the telecom domain[END_REF]. Then there are approaches, which address network architectures in general and bring new use cases and functionalities, which are enabled by OpenFlow [START_REF] Jin | SoftCell: scalable and flexible cellular core network architecture[END_REF]. Some of telco vendors address mobile SDN with their specific approaches such OpenFlow's mobile counterpart MobileFlow [START_REF] Pentikousis | Mobileflow: Toward software defined mobile networks[END_REF] or extend standard OpenFlow protocol with mobile specific features [START_REF] Kempf | Moving the mobile evolved packet core to the cloud[END_REF]. Third part of the SDN mobile related research is the SDN based/controlled RAN [START_REF] Yang | OpenRAN: A software-defined RAN architecture via virtualization[END_REF] [START_REF] Gudipati | SoftRAN: software defined radio access network[END_REF].
The vast majority of the papers focus on the same technology -LTE. However GPRS, on which the UnifyCore demo is based, is the dominant technology for the M2M services, thanks to its maturity and simple radio interface that enables low terminal price that is crucial for massive M2M deployment. Finally, GPRS is expected to continue to provide such services and an umbrella fallback network for next one or two decades.
UnifyCore -Novel Core Network Architecture
The UnifyCore architecture was developed with backwards compatibility and an SDN focus in mind. It aims to provide mobile packet core services and features (GPRS/UMTS/LTE), but can also be used as a transport core platform for aggregating traffic from different access technologies and as an umbrella control and automation platform.
In the UnifyCore architecture, the access technology specific protocols are terminated as close to the border between the access network and the core network as possible. The idea behind this is to use a common transport core which is not complicated by the various access technologies. Different access technologies such as GPRS, UMTS, LTE or WiFi are controlled by dedicated control elements called access managers. These nodes understand the signaling protocols used by the access network and terminals and provide the necessary operations such as mobility/session management and signaling. As mentioned before, the common transport core is independent of the access technologies connected to it and is thus controlled by a logically separate element -the SDN controller. The core control SDN controller and the access managers communicate via a ReSTful API.
Traffic in the different access networks is usually encapsulated to various access specific protocols, moreover some technologies combine control and user data in a single stream of messages (for example GPRS, as shown later in the evaluation part). For separation of user and control plane data, UnifyCore uses OpenFlow enabled border forwarders called adaptors. Some of the access network protocols are not compatible with present OpenFlow match rules, so we use OpenFlow extensions to support such protocols. These extensions have to be supported on both access managers and border forwarders (adaptors), however they do not have to be supported in core, as it is access agnostic and based just on Ethernet tunneling [START_REF] Nagy | Utilizing OpenFlow, SDN and NFV in GPRS Core Network[END_REF]. This further emphasized the aim for simple common core.
From the mobile networks architecture, UnifyCore borrows the APN concept. As mentioned in the second section, in 3GPP mobile network, the APN is associated with GGSN (P-GW in LTE) interface and signifies a service offered at that point. We use the concept of APN, but as we do not have a GGSN or P-GW in our architecture, our APNs may be located at any border forwarder. Further details on the philosophy and architecture can be found in previous paper on the topic [START_REF] Nagy | Utilizing OpenFlow, SDN and NFV in GPRS Core Network[END_REF]. In this paper, we focus on the backwards compatibility enablers of the UnifyCore -mainly ICMP topology discovery and ARP APN search.
ICMP Topology Discovery
In order to set up a MAC (Ethernet) tunnel, a few procedures have to be executed. These procedures include ICMP topology discovery (executed when a new OpenFlow forwarder joins the UnifyCore topology) and ARP discovery for localization of traffic egress and ingress points (APNs). For topology discovery UnifyCore uses its own method based on the ICMP protocol.
The process works in two phases. The first phase includes the bootstrap of OpenFlow enabled forwarders. When a forwarder joins the UnifyCore controller, it is asked to clear its whole configuration. Next, a new OpenFlow rule is installed into the first flow table (table 0 in our case). This rule forwards all ICMP echo requests with a given destination IP to the controller. Together with this topology discovery flow rule, a rule for ARP discovery is installed as well (explained in a separate section of the paper). If the new node is of the adaptor type, extra rules are installed. These rules ensure adaptation of access network user traffic for the core network and routing of control plane messages to the access network manager.
The second phase is the topology discovery itself. It starts when the controller constructs ICMP echo requests with the source forwarder ID (datapath ID) and source port ID encoded in the payload of the ICMP message and injects them into all ports of the newly joined forwarder. As these packets reach the adjacent forwarders (which joined the network earlier), they are matched by the ICMP discovery rule and are forwarded back to the controller. The controller examines the message that has just been forwarded to it and extracts the ID and ingress port of the reporting forwarder from the OpenFlow header and the originator forwarder ID and port from the ICMP payload. From this information, the controller is able to construct a view of the topology.
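A possible controller-side sketch of the probe injection, again assuming Ryu and OpenFlow 1.3. The destination IP matched by the discovery rule and the MAC/IP addresses used for the probe are illustrative placeholders; only the idea of encoding the (datapath ID, port) pair in the ICMP payload comes from the text above.

```python
import struct
from ryu.lib.packet import packet, ethernet, ipv4, icmp

DISCOVERY_IP = "10.255.255.1"   # assumed IP matched by the ICMP discovery rule

def send_discovery_probes(dp, ports):
    """Inject one ICMP echo request per port of a newly joined forwarder,
    tagging the payload with (datapath ID, egress port)."""
    ofp, parser = dp.ofproto, dp.ofproto_parser
    for port_no in ports:
        payload = struct.pack("!QI", dp.id, port_no)
        pkt = packet.Packet()
        pkt.add_protocol(ethernet.ethernet(dst="ff:ff:ff:ff:ff:ff",
                                           src="02:00:00:00:00:01",
                                           ethertype=0x0800))
        pkt.add_protocol(ipv4.ipv4(src="10.255.255.2", dst=DISCOVERY_IP, proto=1))
        pkt.add_protocol(icmp.icmp(type_=8, code=0,
                                   data=icmp.echo(id_=1, seq=1, data=payload)))
        pkt.serialize()
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=ofp.OFP_NO_BUFFER,
                                        in_port=ofp.OFPP_CONTROLLER,
                                        actions=[parser.OFPActionOutput(port_no)],
                                        data=pkt.data))
```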
The ICMP topology discovery method, like MAC tunneling, is compatible with standard featureless L2 switches. If an L2 switch (or a group of switches) connects two forwarders, the incoming ICMP echo will be flooded to all ports of the switch and will finally reach some adjacent OpenFlow forwarder, which will send the ICMP echo to the controller. The controller will examine the content of the OpenFlow header and the payload and update the topology accordingly. In this case, the L2 switch connecting the two OpenFlow forwarders is considered to be a direct link between the sending and receiving forwarders. However, such an L2 switch does not break the UnifyCore concept and capabilities in any way. If there are several interconnected switches between two forwarders, the ICMP echo may be received by the controller multiple times and multiple connections may be discovered (Fig. 6).
ARP Search -Ingress and Egress Point (APN) Discovery
As mentioned before, the APN represents the ingress and egress point of the UnifyCore domain.
At the very start, the UnifyCore controller looks at its configuration file and finds all the APNs (ingress and egress points) it is serving. Each APN name from the configuration file is resolved by DNS lookup to an IP address. Next, when a forwarder joins the network, the ARP search process is executed at this forwarder together with the ICMP topology discovery.
The process has several phases. It starts with the deletion of the whole flow table configuration (as mentioned earlier). This first step is common to the ARP search and the ICMP topology discovery. Next, rules for the ARP search are installed on the new forwarder (together with the ICMP discovery rules, as mentioned before). The first flow rule installed redirects all ARP replies to the controller. In this rule we match Ethernet type 0x806 (ARP) and ARP operation 2 (reply). The action for this flow rule is to forward ARP replies to the controller, where they can be further processed. It has to be noted that, by definition, OpenFlow forwarders do not feature ARP logic; therefore ARP message processing has to be done in the controller (or non-standard OpenFlow extensions have to be used).
Next, the controller sends an ARP request out of each port of the forwarder. For each APN in the database (from the configuration file) the controller sends one ARP request per forwarder port (with the target IP address of the APN). Following this approach, we expect that at the APN location (the adjacent network domain) there is a non-SDN capable edge router, thus we use the standard "legacy" ARP procedure. If an SDN capable domain were behind the UnifyCore domain, we could use some SDN inter-domain signaling instead. This approach further improves UnifyCore compatibility with existing legacy networks.
It has to be noted that, since ARP is a LAN protocol, the source IP address has to be from the same subnet as the APN. Moreover, different APNs can have -and normally do have -different IP addresses from different subnets; therefore the controller chooses source addresses from the respective subnets.
When these ARP request are sent out through all ports of newly added forwarder, they are captured and processed by the adjacent forwarders or an edge router serving the APN. In case of adjacent OpenFlow forwarder nothing happens and no ARP reply is generated (because OpenFlow forwarders do not process ARP the way standard routers do). In case of edge router with given IP address (serving the APN controller was looking for), the router generates ARP response, which will be received by the given forwarder and sent to the controller (ARP reply rule matched). Controller processes the message and extracts the forwarder ID and port ID from OpenFlow header. This way, the controller discovers APNs, their location in the network topology and can construct tunnels for user traffic transport (Fig. 6). When a new ingress or egress point (APN) location is found, controller starts tunnel setup between all already discovered APNs and this newly discovered one. First a shortest path algorithm is executed, which returns a set of forwarders and ports which should be used along the way from one APN to another. If an ingress point is an adaptor, first flow table is left for the traffic adaptation rules, which are set on a user basis. Rules in this table will strip off the access specific headers and forward packets to second table, which is the MAC tunnel table. Here the destination MAC address of the Ethernet frame is set and the frame itself is forwarded to tunnel by assigned interface. Next forwarders along the way perform very similar task. They match the destination MAC address and forward the frame to respective port (given by the OpenFlow rule). The very last forwarder in the way may change the destination MAC address to match the MAC address of the egress point (APN). This is the case only when more MAC tunnels are established to the same egress point (APN), for example for different QoS classes or tunnels from different source. In case of single tunnel towards this given APN, destination MAC address corresponds to the MAC address of the router in the adjacent domain
In the opposite direction (downlink), the border router serving a given APN in the adjacent domain may search for the MAC address of an IP address present in the access network. As mentioned before, forwarders do not have the capability to respond to ARP requests, so the request is forwarded to the controller by an OpenFlow rule. The controller responds with the MAC address of the tunnel belonging to the given end device in the access network. After receiving the requested MAC address, the edge router sends the packet to the edge forwarder. This forwarder examines the destination MAC address and forwards the frame to the port specified by the OpenFlow rule. The next forwarders on the way to the access network forward the Ethernet frame in a similar manner. The access edge forwarder (adaptor) finally appends the access specific protocol headers and, in case there are several tunnels towards this endpoint, sets the destination MAC address to the MAC address of the first network node in the access network.
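The downlink ARP handling can be sketched as a proxy-ARP responder in the controller (Ryu assumed). The lookup `tunnel_mac_for_ip` stands for the controller's tunnel database and is a placeholder of ours, not an API from the paper.

```python
from ryu.lib.packet import packet, ethernet, arp

def handle_arp_request(dp, in_port, req_pkt, tunnel_mac_for_ip):
    """Answer the edge router's ARP request with the MAC address (tunnel ID)
    assigned to the requested end device in the access network."""
    req_eth = req_pkt.get_protocol(ethernet.ethernet)
    req_arp = req_pkt.get_protocol(arp.arp)
    if req_arp is None or req_arp.opcode != arp.ARP_REQUEST:
        return
    tunnel_mac = tunnel_mac_for_ip(req_arp.dst_ip)
    if tunnel_mac is None:
        return                                    # unknown terminal, stay silent

    reply = packet.Packet()
    reply.add_protocol(ethernet.ethernet(dst=req_eth.src, src=tunnel_mac,
                                         ethertype=0x0806))
    reply.add_protocol(arp.arp(opcode=arp.ARP_REPLY,
                               src_mac=tunnel_mac, src_ip=req_arp.dst_ip,
                               dst_mac=req_arp.src_mac, dst_ip=req_arp.src_ip))
    reply.serialize()
    ofp, parser = dp.ofproto, dp.ofproto_parser
    dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=ofp.OFP_NO_BUFFER,
                                    in_port=ofp.OFPP_CONTROLLER,
                                    actions=[parser.OFPActionOutput(in_port)],
                                    data=reply.data))
```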
It has to be noted, that uplink and downlink tunnels have different tunnel IDs (MAC addresses), and so traffic can be routed in an asymmetrical manner.
The presence of tunnels even before any need for data transfer further improves the session setup time, in comparison with, for example, GPRS or UMTS, where the tunnels are created dynamically based on mobile station requests. Tunnel setup through an exchange of signaling messages between the SGSN and the GGSN naturally takes more time than proactive tunnel setup at UnifyCore start.
Evaluation
For our initial UnifyCore proof of concept, we chose a rather specific use case -GPRS over UnifyCore. As mentioned before, most of the mobile oriented SDN research papers deal with LTE or UMTS. However, both technologies share the split user plane and control plane approach, thus introduction of SDN to such system is rather trivial.
Our work focuses on GPRS, which is basically the oldest packet based 3GPP network. Despite its age, it is still being heavily used around the globe. Moreover development on this technology continues and for example release 13 GPRS/EDGE terminals and networks bring further enhancements for the M2M use cases [START_REF]Nokia: Nokia LTE M2M , Optimizing LTE for the Internet of Things[END_REF]. This indicates that even now, GPRS is highly relevant network technology and integration of GPRS and SDN is an interesting topic.
We implemented the UnifyCore GPRS architecture in the following way. We removed the SGSN and GGSN from the architecture and split their logic between the SDN controller and a GPRS access manager called vGSN (virtual GPRS Support Node). Session management (tunnel management) functions are centralized in the SDN controller and GPRS signaling (mobility management, signaling and authentication) is performed in the vGSN. This function split follows the UnifyCore concept introduced in a previous paper [START_REF] Nagy | Utilizing OpenFlow, SDN and NFV in GPRS Core Network[END_REF]. In the user plane we use a GPRS adaptor (a GPRS enabled OpenFlow forwarder), which first splits the GPRS message stream into user plane data and signaling messages. Next, the signaling is sent to the vGSN and the user plane data is adapted to pure Ethernet (MAC tunneling). In the downlink direction, the GPRS adaptor encapsulates the pure Ethernet data into the GPRS protocols and sends it to the GPRS radio access network (Fig. 7). We named the GPRS adaptor ePCU or PCU-ng, which stands for enhanced PCU or PCU for next generation networks.
For the evaluation, we implemented the whole solution over the open-source software. As a controller base, Ryu controller framework was used. In the controller, OpenFlow extensions were added, in order to enable controller to command the ePCU (GPRS protocol stack extensions). As a forwarder, ofsoftswitch13 was chosen. GPRS protocol stack extensions were implemented here as well. The GPRS access manager module is based on open-source code from a hacker community, which is focusing on security holes in mobile networks -osmocom. Snippets of source code of two projects -osmo-sgsn and openGGSN were combined in order to build our vGSN. In the GPRS access network sysmoBTS hardware was used. This base station is compatible with osmo-sgsn and compliant to standard 3GPP Gb interface signaling. The setup was verified using off-the-shelf mobile phones of different types -from smart phones to feature phones. During tests, terminals were not aware of any changes in the core network, which was basically one of our most important goals and GPRS data transfer was functional in both directions.
Conclusion
The transformation from classical network architectures to SDN based ones is inevitable. However, very similarly to the IPv4 to IPv6 transition, classical networks and SDN networks will coexist for a certain time -first in the form of SDN islands inside a classical network sea, later just the opposite. Finally, SDN will become the dominant networking technology.
For this transition period, UnifyCore features a set of approaches such as ICMP discovery and ARP search, which enable it to integrate with standard router/switch based transport architectures. These methods not only allow UnifyCore to communicate with existing adjacent infrastructure, but also allow operators to protect past investments in existing hardware, with which UnifyCore is fully compatible.
Both ARP APN discovery and ICMP topology discovery mechanisms might seem redundant, but it has to be noted, that pure OpenFlow forwarders do not support standard features of switches or routers such as ARP message processing or Ethernet broadcast forwarding/flooding. Processing of such messages has to be set by the controller by OpenFlow match rules, actions and instructions.
From the 3GPP mobile network point of view, the UnifyCore easily integrates with standard 3GPP networks -end to end by Gb and Gi interfaces. Our prototype proves that even complicated Gb interface (without user data and signaling separation) is easy to integrate into UnifyCore with a flexible SDN approach.
At the time being we are starting with performance evaluation of the key features of the GPRS prototype. As mentioned before, functional validation was already done with real mobile phones, however such setup was unable to generate traffic load.
Performance of MAC tunneling implemented over user space forwarder application (ofsoftswitch13) is being evaluated using common iPerf2 and iPerf3 tools. For the GPRS related parts (signaling and user data separation, GPRS encapsulation/decapsulation) we are not aware of any free open-source performance measurement tools. Therefore, commercial tools such as Spirent LandSlide [START_REF]Spirent: Landslide[END_REF] or Ixia EPC test [START_REF]Ixia: EPC test[END_REF] need to be used, or new tool for such evaluation has to be implemented from scratch.
Fig. 4. High level UnifyCore architecture
Fig. 6. ICMP topology discovery (a) and ARP search (b).
Fig. 7. GPRS protocol stacks (user plane) in standard 3GPP architecture [1] (a) and in UnifyCore based architecture (b).
Acknowledgments
. This work is a result of the Research and Development Operational Program for the projects Support of Center of Excellence for Smart Technologies, Systems and Services, ITMS 26240120005 and for the projects Support of Center of Excellence for Smart Technologies, Systems and Services II, ITMS 26240120029, co-funded by ERDF. | 27,099 | [
"1001284",
"1001285",
"1001329",
"1001330"
] | [
"259428",
"259428",
"259428",
"259428",
"259428"
] |
01466239 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466239/file/978-3-319-24315-3_6_Chapter.pdf | Michael Weigend
email: [email protected]
Making Computer Science Education Relevant
Keywords: Computer science education, programming, metaphor, text mining, image processing, internet computing, Python
In addition to algorithm-or concept-oriented training of problem solving by computer programming, introductory computer science classes may contain programming projects on themes that are relevant for young people. The motivation for theme-driven programmers is not to practice coding but to create a digital artefact related to a domain they are interested in and they want to learn about. Necessary programming concepts are learned on the way ("diving into programming"). This contribution presents examples of themedriven projects, which are related to text mining and web cam image processing.
The development and learning process is supported by metaphorical explanations of programming concepts and algorithmic ideas, experiments with simple programming statements, stories and code fragments.
Diving into Programming
Computer science (CS) education at schools is supposed to "introduce the fundamental concepts of computer science" (CSTA, [START_REF] Seehorn | CSTA K-12 Computer Science Standards[END_REF]) and foster computational thinking [START_REF] Wing | Computational thinking[END_REF], which includes abstraction, modeling, problem solving and creating algorithms using formal language. In contrast to information technology (IT) education, computer science education is not just about using digital tools but about designing software [START_REF] Seehorn | CSTA K-12 Computer Science Standards[END_REF]. Programming (the skill of writing a program to a given task) is considered as a new literacy [START_REF] Prensky | Programming is the new literacy[END_REF] and an important part of general education, since it is creative, constructive and precise [START_REF] Gander | Informatics and General Education[END_REF].
Programming is a problem solving activity and implies a transfer of knowledge to new scenarios. Among other cognitive operations [START_REF] Mayer | Problem solving[END_REF], transfer in problem solving requires recognition (of an analogous problem or a well known general pattern), abstraction (finding general structures by focusing on the important aspects), mapping (relating familiar concepts to a new scenario), flexibility (in applying a general pattern to a specific scenario) and embedment (combining elements into a whole program). Developing programming skills means practicing knowledge transfer by writing programs and solving similar tasks again and again. Consider this programming task: "Last rainy day. Develop a program for which the input is 365 integers indicating the amount of rain in each day of the year; and the output is the (index of the) last rainy day." [START_REF] Ginat | Transfer, cognitive load, and program design difficulties[END_REF].
The solution is a special variant of a general pattern, a "max computation", in which all elements of a sequence have to be compared to a given value and a variable eventually must be updated depending on the result of this comparison. Out of 95 Israeli 11-graders, 69 (73%) were able to solve this task without any help after one year of Java programming. [START_REF] Ginat | Transfer, cognitive load, and program design difficulties[END_REF] assume that the others (23 %) had difficulties with flexibility and failed to customize a general pattern to the specifics of a new situation. I mention this example just to illustrate that writing a program from scratch without external help is not easy and requires a lot of training and experience. It is a competence that is developed gradually in many exercises. Typical tasks for practicing contain short and precise descriptions of pre- and post-conditions, which make it possible to check the correctness of the solution. For each concept (algorithmic patterns, language constructs) there are many variants of tasks embedded in scenarios from different domains. Diversity is important (to practise transfer-related operations) but the domains can be chosen rather arbitrarily. For practising search algorithms it does not matter what to search for -the last rainy day in a sequence of weather documents or the last phone call from Anna in a collection of telephone call metadata.
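For illustration, one possible solution of the "last rainy day" task in Python (the language named in this paper's keywords); it is an instance of the general pattern described above -scan the sequence and conditionally update a variable.

```python
def last_rainy_day(rain):
    """rain: 365 integers, the amount of rain on each day of the year.
    Returns the index of the last rainy day, or None if it never rained."""
    last = None
    for day, amount in enumerate(rain):
        if amount > 0:
            last = day
    return last
```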
Computer science topics listed in curricula represent the teachers' perspective: "To be well-educated citizens in a computing-intensive world and to be prepared for careers in the 21st century, our students must have a clear understanding of the principles and practices of computer science." (CSTA) But these "principles and practices" -as such -are not necessarily interesting for high school students. For example in Germany the requirements for final high school exams include topics like object oriented programming (classes, inheritance, polymorphism, UML) and finite state automata. Probably most 15 or 16 years old students, who have to decide whether or not they take CS classes, do not even understand what these terms mean.
According to the international ROSE study, most young people in Europe and other well developed countries have a positive attitude towards science and technology but they have a problem with school science. "Topics that are close to what is often found in science curricula and textbooks have low scores on the rating of interest" [START_REF] Sjøberg | The ROSE project: An overview and key findings[END_REF]. Science and technology topics are not interesting as such, but they can get fascinating for young people, when they are embedded in a real life context. There are massive differences between genders: Girls like to learn about body and health, boys are interested in violent and spectacular contexts (e.g. chemical explosives). Both genders are especially interested in unusual and mysterious things (most popular topic: The possibility of life outside earth).
The motive for learning programming is not necessarily intrinsic. Someone might not be interested in "the principles and practices of computer science" at all, but gets involved because she or he wants to create something exciting. Protagonists of constructionism [START_REF] Papert | Mindstorms: Children, computers, and powerful ideas[END_REF][START_REF] Resnick | Scratch: programming for all[END_REF] claim that developing digital artefacts is a very intense experience leading to deeper knowledge than just reading text books. Programming is a way to elaborate knowledge. Construct something interesting and learn on the way. This constructionist approach of "theme-driven programming" has some major implications:
Diving into programming. When the learner starts a project she or he possibly has only little knowledge about programming and must learn a lot in a short time.
Once-in-your-life experience instead of repetition. Creating a digital artefact is a rich experience leading to a unique product. Richness implies that many things happen and many circumstances came together to make the project possible: motivations that were satisfied through the project, an assignment, collaboration with other persons. In contrast to this unique experience, practicing programming implies repetition of similar activities.
Priority of the artefact. The primary (subjective) goal of the learner is not to practise programming but to create an artefact. Anna has seen something cool and wants to make a similar thing. Opposed to the practicing approach, the product has a higher value than the process of implementation.
Limitation to basic designs. Programs developed in the classroom differ from professional programs. They are implemented as simply as possible.
Using scaffolds. In contrast to the practicing approach, the primary goal of a project is not to gain fluency (it will happen anyway). The project is the reason for going deeper into programming. Exploring new programming techniques requires "just in time" explanations that open the mind.
Tinkering. Learning by doing requires the possibility to experiment. The learner modifies the code, runs the program and sees the effect.
Some programming languages/environments support "diving into programming". Python has a very "low threshold" which is easy to overcome by beginners. The line print("Hello!") is a valid Python program. In the interactive mode (Python shell) the user can experiment by writing individual statements which are interpreted and executed after having hit the ENTER-key. The result is displayed in the next line:
>>> len("Hello!")
6
Scratch is a visual programming environment which allows users to "build" scripts by moving blocks with the mouse on screen (https://scratch.mit.edu/). In this way syntax errors never happen. Children can rather easily create videos, games or animations.
How to Support Diving into Programming
A challenge for teachers and text book authors is to create program examples that are relevant (attractive) and easy to implement. A "dive-into" structure for text book units and classroom activities is this:
1. Present a relevant context. The context is an informatics-related theme or field that students consider to be interesting and important. The social aspects of technology are pointed out; awareness of its impact on everyday life and the environment is raised. Since the interests of young people are diverse, the context should inspire a variety of concrete projects.
2. Explain relevant programming concepts and visualize them using metaphors. According to Lakoff [START_REF] Lakoff | The Metaphorical Structure of Mathematics: Sketching Out Cognitive Foundations for a Mind-Based Mathematics[END_REF] metaphors can serve as vehicles for comprehending new concepts. A structural metaphor is a mapping from one domain of knowledge (source) to another domain (target). A variable (target) is a container for data (source). Calling a function (target) is to delegate a job to a specialist (source). Metaphors help understanding new concepts from a target domain, if the source domain is familiar. Another facet of intuitive models is simplicity. Good metaphors represent intuitive models, Gestalt-like mental concepts, which people are very confident about. People use them when they try to understand, develop or explain programs. Programmers may use different metaphors for the same programming concept. For example, a function can be visualized by the metaphor of a factory, which takes data as input, processes them and outputs new data. A different metaphor is a tool changing the properties of an object, which keeps its identity during the process. (A small code sketch contrasting these two metaphors is given after this list.)
3. Give examples for individual statements (not context-related) for hands-on experimenting and elaborating. Novices need to experiment with new commands or functions. Often, just reading the language reference is not enough if you want to be really confident about the meaning.
4. Give a very simple prototype project ("starter project") that can be copied and tested. This can be the starting point for the development of an extended, more sophisticated program. A starter project is supposed to inspire students to do their own project in this field. Scratch users find starter projects for several topics on the Scratch website, a platform where Scratch users can publish their projects. Scratch cultivates "remixing", that is copying, changing and extending projects. For each project the remixes (successors) and preceding projects are documented. In this way ideas are reused but not stolen, since each contributor is mentioned in the history of a piece of software. A problem of remixing is that "blind copying" does not help understanding. Someone might take a program, change a small part and make it look different still not understanding the other parts.
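The two metaphors for a function mentioned in step 2 can be contrasted in a small illustrative sketch (the examples are mine, not taken from a textbook): the "factory" returns new data, while the "tool" changes the state of an object that keeps its identity.

def double_text(text):        # "factory": takes input and returns new output data
    return text + text

words = ["keep", "deep"]
words.append("sheep")         # "tool": the list keeps its identity but changes its state
print(double_text("ha"))      # haha
print(words)                  # ['keep', 'deep', 'sheep']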
A starter project can initiate a development process in the style of agile programming (Extreme Programming [START_REF] Beck | Extreme programming explained: embrace change[END_REF]). Students start with a very short program that implements a basic story. They test it and debug it until it works and until it is fully understood. This is the first iteration. Then they add a few lines of code to implement the next story. They develop the project in a couple of very quick iterations and learn on the way step by step. In that way -ideally -both programming competence and the program (the digital artefact) grow in parallel.
It is essential not to move on before the present iteration works fine and is fully understood. Debugging and testing are an essential part of the process. Beginners will fail to find errors if the program is too complex and contains concepts they do not understand. So the starter project must be really simple and is probably not attractive in itself. Its beauty lies in the fact that it is the first step on the way to something interesting.
Text mining
Generally speaking, text mining means making profit out of text documents that are publicly available. The text is considered as a resource that can be exploited in order to produce additional value. Text mining can be considered as a threat, when someone searches for telephone numbers, names or e-mail addresses and misuses this information. But there are many useful applications like searching for rhymes or traffic information. An important concept in text mining is regular expressions. Programming novices have to learn two things: a) the general idea of pattern matching and b) specific formal details of regular expressions (placeholders like the dot . or operators like + and *). The general idea of pattern matching is used (in a naive way) in everyday life, when we identify things or find things and separate them from others. Metaphors for regular expressions are a sieve that separates certain objects from other objects, and a "grabbing device" that can only interact with objects that have certain surface properties (lock-key concept).
Mining Mark Twain -Using Literature for Finding Rhymes
In the Project Gutenberg you can find 50 000 free e-books, including the entire works of Mark Twain (http://www.gutenberg.org/ebooks/3200). Download the utf-8 text file (15.3 MB) and store it in your project folder. This book (with 5598 pages) can be used for finding rhymes. The following listing shows a starter program (Python). The call of findall() in line #1 returns a sequence of words that end with the given ending (plus a space). Statement #2 transforms the list to a set (without duplicates) and prints it on screen.

from re import *
f = open("marktwain.txt", mode="r", encoding="utf-8")
book = f.read()
f.close()
ending = input('Ending: ')
while ending:
    wordlist = findall("\w*" + ending + " ", book)   #1
    print(set(wordlist))                             #2
    ending = input('Ending: ')

This is the output from an example run:
Ending: eep {'sheep ', 'Weep ', 'Sheep ', 'deep ', 'asleep ', 'keep ', 'steep ', 'creep '. ...}
This program works nicely, but it has many obvious weaknesses. For example, the output could be prettier (no curly brackets, commas etc), capitalized duplicates should be eliminated (just weep instead of Weep and weep), and the space-symbols at the end of each word could be cut off. Learners can extend the starter project and implement more stories in further iterations.
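One possible next iteration (just a sketch of one way to do it, assuming the variables book and ending from the starter listing above) removes capitalized duplicates, cuts off the trailing spaces and prints one word per line:

wordlist = findall("\w*" + ending + " ", book)
words = sorted({w.strip().lower() for w in wordlist})   # unique, lower case, no trailing space
for w in words:
    print(w)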
Mining social media
Small programs are not per se easy to understand just because they are small. Some statements may adopt advanced programming concepts that are difficult to understand. Let me discuss an example.
The Python module tweepy supports accessing twitter tweets. When someone submits a tweet, this event is documented in a json-string that is publicly available. This record contains the text of the tweet as well as information about the tweeter. If you want to create an application for processing tweets, you need to register your application on the twitter website. You get some keywords, which your program needs for authentication (consumer key, consumer secret, access token, access secret). The following program implements this story (1): Select all tweets about "gaming" and "smart city" from a live stream and store them in a text file.
It runs until it is stopped by a keyboard interrupt. Within a few hours one can collect thousands of tweets, which can be analyzed later. This is an object oriented program. It contains several object instantiations and the definition of a derived class (#1). The programmer must override the method on_data(), which processes each tweet that is taken from the Firehose. The parameter track defines a selection pattern. Twitter allows at most 1% of all tweets in the Firehose to be selected. Obviously, rather advanced programming concepts are involved. How to explain this to a beginner, who is diving into this technology? Figure 2 gives an intuitive model of the whole project. In Extreme Programming this is called a "project metaphor". It is one holistic idea of how to mine a Twitter live stream. In addition one can map elements of the image to formal constructs in the program text: the Stream-object is represented by a big pipe, the AuthHandler-object (responsible for access to the Firehose) is visualized by a red pipe, the file storing tweets is a container (bottle, bucket or can) and so on.
The text file containing collected tweets can easily be analysed using the standard methods of string objects. Example (Python):
>>> text = "This is a tweet." >>> text.count("is") 2
Further stories could include these: (2) Check the frequency of tweets about topics like "gaming" or "smart city". (3) Estimate the average age of persons tweeting about certain topics by analysing the language they use.
An approach to implement story 3 is searching for certain stylistic elements that are age-dependent. For example, young tweeters use more often the words "I", "me", "you", capitalized words like "LOL" or "HAHA" and they use more often alphabetic lengthening like "niiiice" instead of "nice". Tweets from older users on the other hand contain more hyperlinks and references to the family ("family", "son", "daughter") [START_REF] Nguyen | How Old Do You Think I Am?"; A Study of Language and Age in Twitter[END_REF]. Table 1 shows the results of a "toy analysis" of tweets, which were collected during the same time slot (14 hours on May 17 th 2015).
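A rough sketch of such a toy analysis could count a few of these stylistic elements in the collected file my_tweets.txt; the two word lists below are small examples of mine, not the complete criteria behind Table 1.

young = ["I", "me", "you", "LOL", "HAHA"]          # "young" stylistic elements (sample)
old = ["family", "son", "daughter"]                # "old" stylistic elements (sample)

f = open('my_tweets.txt', 'r', encoding='utf-8')
words = f.read().split()
f.close()

total = max(len(words), 1)
young_count = sum(words.count(w) for w in young)
old_count = sum(words.count(w) for w in old) + sum(1 for w in words if w.startswith("http"))
print(round(100 * young_count / total, 2), round(100 * old_count / total, 2))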
Web cam analysis
Webcams make life at certain places really public. In contrast to surveillance cams which are accessible only by authorized persons, the images of public webcams can be observed and analysed by everyone. Running webcam-related programs implies interacting with the social environment. The input device is a public spot. This might provoke thinking about legal, political and ethical aspects of public webcams and digital technology (personal rights, security, and privacy).
Figure 3 shows screenshots from two different Python programs displaying and evaluating images from public webcams. The first application observes two areas (marked by white rectangles in the lower right quadrant of the image) and detects any motion at these spots by comparing the present picture with a photo taken a few seconds earlier. The application uses the marked areas for picking the answers of two teams in a quiz. Imagine questions with two response options (yes and no). Each team answers "yes" by moving on its spot. It selects the answer "no" by keeping the area free from any activity. The second application observes a small rectangular area on a freeway, counts the number of motions in ten minutes, and estimates the density of the traffic [START_REF] Weigend | Raspberry Pi programmieren mit Python[END_REF].
Both programs consist of approx. 60 lines of code. A student, say Anna, could just copy such a program from a text book. But if Anna is not familiar with the concepts included, this would not necessarily lead to comprehension. An alternative to copying letter by letter is reconstruction. Anna starts with a very simple nucleus, tests it until it is fully understood and then extends and changes it in iterations (similar to Extreme Programming). This can be supported by the text book. Here is an example of a program which could be the first reconstruction step in both projects. Story 1: Get an image from a webcam and show it on screen. This linear program just demonstrates how to get an image from the internet onto the display of the computer at home. The image data must be transformed in several steps. Finally (in line #1) a PIL.Image object has been created. Story 2: Draw something on the image, say a rectangle. This story is implemented by adding a few lines of code:

from PIL import ImageDraw
...
A = (305, 375, 325, 395)
...
draw = ImageDraw.Draw(img)            #2  create a drawing context connected to img
draw.rectangle(A, outline="white")    #3  draw a white rectangle on the image
These few lines of code demonstrate the idea of a PIL.ImageDraw object. In line #2 a new ImageDraw object (named draw) is created and connected to a PIL.Image object named img. When draw receives a message (like in #3) it changes the state of the connected image. Further stories (which can be used in both projects) are: 3) Show the image in an application window and update it every x seconds. 4) Detect motion in two rectangular areas (a possible sketch is given below). 5) Show the results of the motion detection on a label below the photo. At some point refactoring is necessary. This means improving the technical quality of the program (without changing its functionality). The target program is a well readable, well structured, object oriented program. Students will take the given program as an inspiration and add their own ideas.
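As an illustration of how stories 1 and 4 might be approached, here is a short sketch (not the original 60-line programs): it fetches two snapshots from a webcam URL (the address is a placeholder) and reports motion in a rectangular area when the pixels differ noticeably. The function names and the threshold are arbitrary choices.

from urllib.request import urlopen
from io import BytesIO
from time import sleep
from PIL import Image

URL = "http://example.com/webcam.jpg"        # placeholder address of a public webcam
A = (305, 375, 325, 395)                     # observed rectangle (left, upper, right, lower)

def get_image():                             # story 1: get an image from the internet
    data = urlopen(URL).read()
    return Image.open(BytesIO(data))

def motion_detected(img1, img2, box, threshold=20):
    # story 4: compare the observed area of two snapshots pixel by pixel (greyscale)
    p1 = list(img1.crop(box).convert("L").getdata())
    p2 = list(img2.crop(box).convert("L").getdata())
    changed = sum(1 for a, b in zip(p1, p2) if abs(a - b) > threshold)
    return changed > len(p1) // 10           # "motion" if more than 10% of the pixels changed

img1 = get_image()
sleep(5)
img2 = get_image()
print(motion_detected(img1, img2, A))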
Conclusion
Making CS relevant for young people is a major challenge for teachers and text book authors. Digital technology is omnipresent in our lives. But this does not guarantee that young people are interested in taking CS classes at schools. Fundamental principles and practices of CS must be embedded in contexts that inspire young people. We need interesting project ideas (stories) that can be implemented quickly in small programs and media (images) that explain the idea of program code very quickly.
Fig. 1. Two different metaphors illustrating the concept of finding strings with regular expressions.
The formal details of regular expressions are best understood by reading the language reference and experimenting with individual statements. The function findall() from the Python module re takes a regular expression and a string as arguments and returns a list of all matching substrings. Here is a mini series of experiments illustrating how to find words that end with "eep":
>>> text = "Keep it. Reeperbahn is a street in Hamburg."
>>> findall("\w*eep", text)
['Keep', 'Reep']
>>> findall("\w*eep ", text)
['Keep ']
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
from tweepy import Stream

f = open('my_tweets.txt', 'w')

class MyListener(StreamListener):                        #1
    def on_data(self, data):
        f.write(data)
        f.flush()
        return True

auth = OAuthHandler('consumer key', 'consumer secret')
auth.set_access_token('access token', 'access secret')
listener = MyListener()
stream = Stream(auth, listener)
stream.filter(track=['smart city', 'gaming'])            #2
Fig. 2. Mining the Twitter Firehose.
Fig. 3. Screenshots from Python programs, showing and processing the live image of public webcams at the Friedensplatz in Dortmund, Germany (left hand side) and at a freeway junction at Frankfurt, Germany (right hand side).
Table 1. Results from a toy analysis of tweets containing CS-related phrases.
Phrase | Number of tweets | Average length (words) | "Old" stylistic elements | "Young" stylistic elements
Smart city | 226 | 24.6 | 2.8% | 7.4%
Internet of things | 1718 | 27.9 | 4.4% | 11.2%
Gaming | 19928 | 26.4 | 3.6% | 11.6%
"1001331"
] | [
"325710"
] |
01466244 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466244/file/978-3-319-24315-3_28_Chapter.pdf | Hsing-Chung Chen
Chung-Wei Chen
A Secure Multicast Key Agreement Scheme *
Keywords: Cryptography, security, secure multicast, conference key, key distribution
Wu et al. proposed a key agreement to securely deliver a group key to group members. Their scheme utilized a polynomial to deliver the group key. When the membership is dynamically changed, the system refreshes the group key by sending a new polynomial. We comment that, in this situation, Wu et al.'s scheme is vulnerable to the differential attack. This is because these polynomials have a linear relationship. We exploit a hash function and a random number to solve this problem. The secure multicast key agreement (SMKA) scheme is proposed and shown in this paper, which prevents not only the differential attack but also the subgroup key attack. The modified scheme reinforces the robustness of the original scheme.
Introduction
Many security protection schemes [START_REF] Chen | Packet Construction for Secure Conference Call Request in Ad Hoc Network Systems[END_REF][START_REF] Chen | Secure multicast key protocol for electronic mail systems with providing perfect forward secrecy[END_REF][START_REF] Chen | A Secure E-Mail Protocol Using ID-based FNS Multicast Mechanism[END_REF][START_REF] Wu | On Key Distribution in Secure Multicasting[END_REF][START_REF] Kim | Communication-Efficient Group Key Agreement[END_REF][START_REF] Kim | Tree-Based Group Key Agreement[END_REF][START_REF] Fekete | Specifying and Using a Partionable Group Communication Service[END_REF][START_REF] Chen | Design and Formal Analysis of A Group Signature Based Electronic Toll Pricing System[END_REF][START_REF] Craß | Securing a Space-Based Service Architecture with Coordination-Driven Access Control[END_REF][START_REF] Malik | Privacy Enhancing Factors in People-Nearby Applications[END_REF][START_REF] Kent | Differentiating User Authentication Graphs[END_REF][START_REF] Wu | Hierarchical Access Control Using the Secure Filter[END_REF] have been developed for an individual multicast group. Some schemes address secure group communications by using secure filter [START_REF] Chen | Packet Construction for Secure Conference Call Request in Ad Hoc Network Systems[END_REF][START_REF] Chen | Secure multicast key protocol for electronic mail systems with providing perfect forward secrecy[END_REF][START_REF] Chen | A Secure E-Mail Protocol Using ID-based FNS Multicast Mechanism[END_REF][START_REF] Wu | On Key Distribution in Secure Multicasting[END_REF] to enhance performance of the key management. Wu et al. [START_REF] Wu | On Key Distribution in Secure Multicasting[END_REF] proposed a key agreement to securely deliver a group key to specific members efficiently. The system conceals the group key within a polynomial consisting of the common keys shared with the members. In the Wu et al.'s scheme, the polynomial is called as a secure filter. Through their scheme, only the legitimate group members can derive a group key generated by a central authority on a public channel. Nevertheless, for the dynamic membership, the scheme is suffered from the differential attack which we describe later. The dynamic membership means the addition and subtraction of the group members. Naturally, the membership changes by the reason caused by network failure or explicit membership change (application driven) [START_REF] Kim | Communication-Efficient Group Key Agreement[END_REF][START_REF] Kim | Tree-Based Group Key Agreement[END_REF]. If an adversary collects the secure filters broadcasted among the group members, as the membership changes, the group keys sent to the group members with the secure filter will be discovered through the differential attack [START_REF] Kent | Differentiating User Authentication Graphs[END_REF].
The secure multicast key agreement (SMKA) scheme proposed in this paper uses a modified secure filter to resist the differential attack. The proposed secure filter is based on the properties of a cryptographically secure one-way hash function. Moreover, the complexity of the modified secure filter is almost the same as the complexity of the original one.
The rest of this paper consists of the following parts. The section 2 gives an overview of the secure filter and the differential attack against the secure filter for the dynamic membership. The section 3 introduces our scheme. The section 4 gives the security proof of our scheme. Then we conclude our scheme in the section 5.
2 The Secure Filter and the Differential Attack
A Differential Attack on Wu et al.'s Scheme
The differential attack utilizes the linear relationship of the coefficients in the secure filter to compromise the group keys. The differential attack is described as follows. Assume an adversary $Ad$, where $Ad \notin G$. $Ad$ collects the secure filter used to send a group key at each session, where a session means a period of time during which the membership is unchanged. Observing the coefficients of the secure filter, we learn the relationship as follows.
$a_n \equiv 1 \pmod{p}$,
$a_{n-1} \equiv -\sum_{i=1}^{n} h(k_i) \pmod{p}$,
$a_{n-2} \equiv \sum_{1 \le i < j \le n} h(k_i)\, h(k_j) \pmod{p}$,
$\ldots$
The coefficients of the secure filter are thus linear (elementary symmetric) functions of the secure factors. As the membership changes, the differences of the coefficients will disclose the secure factors in the secure filter. For example, suppose $M_3$ is excluded from the group, which may be caused by a network failure. The central authority then re-computes the following secure filter to refresh the group key, where $n'$ denotes the membership after $M_3$ is excluded:
$f'(x) = \prod_{k_i \in K,\, i \neq 3} (x - h(k_i)) + s' \bmod p = \sum_{i=0}^{n'} a'_i x^i \bmod p$.
For the coefficient $a'_{n'-1}$, the adversary can compute the difference $a_{n-1} - a'_{n'-1} \equiv -h(k_3) \pmod{p}$ and thus obtain the secure factor $h(k_3)$. When $M_3$ returns into the group, the central authority will refresh the group key through another secure filter composed of the secure factor $h(k_3)$. Then the adversary, who has already derived $h(k_3)$ through the differential attack, can derive any group key as long as $M_3$ is in the group.
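For illustration, the following toy Python sketch (not part of the original scheme; the prime and the hash values are artificial) builds Wu et al.'s secure filter for two sessions, once with and once without the third member, and recovers the secure factor h(k_3) from the difference of the two second-highest coefficients.

p = 2**31 - 1                                 # toy prime, far too small for real use
h = {1: 11, 2: 23, 3: 37, 4: 41}              # h[i] stands for the hash value h(k_i)

def times_x_minus_root(coeffs, root):
    # multiply a polynomial (coefficients, lowest degree first) by (x - root) mod p
    new = [0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        new[i] = (new[i] - c * root) % p
        new[i + 1] = (new[i + 1] + c) % p
    return new

def secure_filter(factors, s):
    coeffs = [1]
    for f in factors:
        coeffs = times_x_minus_root(coeffs, f)
    coeffs[0] = (coeffs[0] + s) % p           # the group key hides in the constant term
    return coeffs

f_all = secure_filter([h[1], h[2], h[3], h[4]], 1234)   # session with all four members
f_no3 = secure_filter([h[1], h[2], h[4]], 5678)         # session without member 3

diff = (f_all[-2] - f_no3[-2]) % p            # a_{n-1} - a'_{n'-1} = -h(k_3) mod p
print((-diff) % p)                            # prints 37, i.e. the secure factor h(k_3)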
Our Scheme
In this section, we introduce our scheme. First, we define the environment and notation; the notations used in the rest of this paper are shown in Table 1. Then we present the scheme itself.
$f_t(x) = \prod_{i=1}^{n} (x - x_i) + s_t \bmod p$. (2)
Then the CA can derive the expansion of $f_t(x)$ as follows.
$f_t(x) \equiv a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 \pmod{p}$. (3)
The CA broadcasts the set of the coefficients, denoted as $\{a_0, a_1, \ldots, a_n\}$, to the group members.
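A small Python sketch of the modified construction (for illustration only; the hash, the prime and the toy keys are my choices) shows how the session randomness c_t enters the secure factors and how a member recovers the group key by evaluating the broadcast polynomial at its own secure factor.

from hashlib import sha256
from random import randrange

p = 2**127 - 1                                    # toy Mersenne prime
def H(data):                                      # h(.) mapped into Z_p
    return int.from_bytes(sha256(data).digest(), 'big') % p

def smka_filter(keys, c_t, s_t):
    # secure factors x_i = h(k_i || c_t); filter f_t(x) = prod(x - x_i) + s_t mod p
    coeffs = [1]
    for k in keys:
        x = H(k + c_t)
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] = (new[i] - c * x) % p
            new[i + 1] = (new[i + 1] + c) % p
        coeffs = new
    coeffs[0] = (coeffs[0] + s_t) % p
    return coeffs                                  # these coefficients are broadcast

def recover(coeffs, k_i, c_t):
    # member i evaluates f_t at its own secure factor and obtains s_t
    x = H(k_i + c_t)
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

keys = [b'key-1', b'key-2', b'key-3']
c_t = b'nonce-of-session-t'
s_t = randrange(p)
coeffs = smka_filter(keys, c_t, s_t)
print(recover(coeffs, b'key-2', c_t) == s_t)       # True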
Security and Complexity Analyses
In this section, we show that the modified secure filter can resist the differential attack. Moreover, we prove that the modified secure filter can also prevent the subgroup key attack [START_REF] Wen | A Novel Elliptic Curve Method for Secure Multicast System[END_REF][START_REF] Wu | Hierarchical Access Control Using the Secure Filter[END_REF], which could compromise other common keys through a factorization algorithm [START_REF] Menezes | Handbook of Applied Cryptography[END_REF].
Proposition 1. A cryptographically secure hash function $h(\cdot)$ has the following properties: intractability, randomness, collision resistance, and unpredictability.
Proposition 1 is commonly assumed in cryptography [START_REF] Menezes | Handbook of Applied Cryptography[END_REF]. Intractability means that, given only a hash value $y$, where $y = h(x)$, recovering the value of $x$ is intractable. Randomness means that, for a variable $x$, the elements in the set of results $y = h(x)$, denoted as $Y$, are uniformly distributed. Collision resistance means that, given $y$, where $y = h(x)$, the probability of discovering $x'$, where $x' \neq x$, such that $h(x')$ equals $h(x)$ is negligible. Unpredictability means that hash functions exhibit no predictable relationship or correlation between inputs and outputs.
Theorem 1. An adversary cannot discover the group keys through the differential attack.
Proof: Assume that an adversary can know the membership of the group exactly. He records the distinct memberships at different sessions. For session $t$, the adversary can collect the modified secure filter below.
$f_t(x) \equiv a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 \pmod{p}$. (4)
The coefficients of $f_t(x)$ can be derived as below:
$a_{n-1} \equiv -\sum_{i=1}^{n} h(k_i \| c_t) \pmod{p}$, (5)
$a_{n-2} \equiv \sum_{1 \le i < j \le n} h(k_i \| c_t)\, h(k_j \| c_t) \pmod{p}$, (6)
$a_0 \equiv (-1)^{n} \prod_{i=1}^{n} h(k_i \| c_t) + s_t \pmod{p}$. (7)
According to Proposition 1, the coefficients in (5) to (7) are unpredictable for an adversary, since they depend on the fresh random number $c_t$ of the session. Therefore, the adversary cannot predict the linear relationship between the coefficients of different sessions. Hence, the adversary cannot successfully mount the differential attack to compromise the group key distributed within a secure filter. □
Theorem 2. A legitimate group member cannot discover other common keys shared between the CA and other group members.
Proof: According to Proposition 1, even if a legitimate group member has enough ability to factorize the value of $f_t(0)$ and discover the other secure factors of $f_t(x)$, he can only discover the hash values, which are intractable to invert back to the common keys. Therefore, the common keys cannot be discovered. This shows that the modified secure filter can resist the subgroup key attack.
According to Theorem 1 and Theorem 2, we have proved that the modified secure filter can resist the differential attack as well as the subgroup key attack [START_REF] Wen | A Novel Elliptic Curve Method for Secure Multicast System[END_REF][START_REF] Wu | Hierarchical Access Control Using the Secure Filter[END_REF]. □
Conclusions
In this paper, we proposed a novel key agreement scheme that uses a new secure filter to improve robustness and to support dynamically changing memberships, which are not handled securely by Wu et al.'s secure filter [START_REF] Wu | On Key Distribution in Secure Multicasting[END_REF]. The proposed secure filter is based on the properties of a cryptographically secure hash function. Via the security analysis, we proved that the modified secure filter can resist the differential attack. Moreover, the modified secure filter can prevent the subgroup key attack. The modified secure filter has almost the same complexity as the original secure filter. For group communication, dynamic membership is an unavoidable issue. Though the secure filter proposed in [START_REF] Wu | On Key Distribution in Secure Multicasting[END_REF] gave a simple and robust distribution scheme for the group secret, it suffers from the problems of dynamic membership. The modified secure filter can enhance the secure filter for dynamic membership and keep the efficiency.
With $h(k_3)$, the adversary can also derive the previous group keys from the preceding secure filters.
2.1 Wu et al.'s Scheme
In Wu et al.'s scheme [4], assume that there is a central authority which is in charge of distributing a group key to the group members, denoted as $G$, where $G = [M_1, M_2, \ldots, M_n]$, in which $M_i$ indicates the $i$-th group member. Each $M_i$ shares a common key $k_i$ with the central authority. As the central authority starts to send a group key $s$ to the members in $G$, the central authority computes the secure filter as follows:
$f(x) = \prod_{k_i \in K} (x - h(k_i)) + s \bmod p = \sum_{i=0}^{n} a_i x^i \bmod p$.
Then the central authority broadcasts the coefficient of each item. For the member $M_i$, upon receiving the coefficients, he can derive $s$ by computing $f(h(k_i))$. Any adversary cannot derive $s$ because he does not know any $k_i$, where $i \in [1, 2, \ldots, n]$.
Table 1. Notations
CA: central authority
n: number of the group members at the session t
h(·): cryptographically secure one-way hash function
c_t: random number used at the session t
s_t: group key for the session t
M_i: the i-th group member
k_i: common key only shared between the CA and the i-th user
x_i: secure factor of the modified secure filter
f_t(x): modified secure filter for the session t
3.1 SMKA Scheme
The secure multicast key agreement (SMKA) scheme is proposed in this section. Assume that there are $n$ group members at the session $t$. The set of these group members at the session $t$ is denoted as $G_t$, where $G_t = [M_1, M_2, \ldots, M_n]$. $M_i$ denotes the $i$-th group member, where $i \in [1, 2, \ldots, n]$. The set of the common keys is denoted as $K_t$, where $K_t = [k_1, k_2, \ldots, k_n]$. Before the CA starts to send the group key $s_t$ for the session $t$ to the members in $G_t$, the CA generates a random number $c_t$. Then the CA computes the secure factors below:
$x_i = h(k_i \| c_t)$, (1)
where $k_i \in K_t$ and $i \in \{1, 2, \ldots, n\}$. Next, the CA generates a group key $s_t$ and calculates the modified secure filter given in equation (2).
* This work was supported in part by the Ministry of Science and Technology, Taiwan, Republic of China, under Grant MOST 104-2221-E-468-002. Hsing-Chung Chen (Jack Chen) is the corresponding author. | 12,929 | [
"1001335",
"1001336"
] | [
"484833",
"484834",
"484835",
"473487"
] |
01466413 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2015 | https://inria.hal.science/hal-01466413/file/978-3-319-24315-3_8_Chapter.pdf | Dinh Thi Thao Nguyen
Thanh Nguyen
Tran Khanh Dang
Thi Ai
Dinh Thao Nguyen
email: [email protected]
Thanh Nguyen
Tran Khanh
email: [email protected]
Dang Ho
Chi Minh
A Multi-factor Biometric Based Remote Authentication Using Fuzzy Commitment and Non-invertible Transformation
Keywords: Remote authentication, biometric template protection, biometric authentication, fuzzy commitment, orthonormal matrix
Introduction
In the modern world, services for people's daily needs are being digitalized. E-commerce happens everywhere, in every aspect of life. As e-commerce is being used as widely as it is today, an essential need for its long survival, besides quality, is security. The first security method to be mentioned is authentication. The traditional authentication method that most e-commerce providers use is username/password. However, this method is revealing its natural setbacks. A password cannot distinguish a legal user from an imposter who is able to access the user's password. Besides, the more complicated, and thus more secure, a password is, the harder it is for users to remember. That is to say, a "true" password is difficult for people to remember but easy for a computer to figure out. Especially with recent technological development, computing power is being enhanced, meaning the chance of password cracking is rising too. For that reason, biometric based authentication was born; with its advantages, this method is gradually replacing its predecessor. The first advantage to be mentioned is that a biometric (such as face, voice, iris, fingerprint, palm-print, gait, or signature) reflects a specific individual, which helps prevent multi-user usage of one account [START_REF] Jain | Multibiometric systems[END_REF]. Moreover, using a biometric is more convenient for users since they do not have to remember or carry it with them. However, these advantages are accompanied by challenges. Biometric methods require technology to eliminate the interference that happens when the sensor processes biometric features. Besides, concerns about security and privacy, especially in a remote architecture, are also put on the table. The fact that humans have a limited number of biometric traits means users cannot change their biometrics over and over like a password once they are compromised [START_REF] Rathgeb | A survey on biometric cryptosystems and cancelable biometrics[END_REF]. Moreover, some sensitive information could be revealed if biometric templates are stored in a database server without strong security techniques. In this case, the user's privacy could be violated when attackers can track their activities by means of cross-matching when a user employs the same biometrics across all applications. Therefore, the authenticating servers should not be trusted to process a user's plain biometric, and the level of trust of these servers should be discussed more. Last but not least, network security is also an important component of a biometric based remote authentication scheme. When the authentication process is carried out over an insecure network, any curious party can access the transmitted biometric information [START_REF] Upmanyu | Blind Authentication: A Secure Crypto-Biometric Verification Protocol[END_REF].
The goal of this study is to present an effective approach for preserving privacy in a biometric based remote authentication system. Concretely, the biometric template stored in the database is protected against the leakage of private information while preserving the revocability property. Besides preventing outside attacks, the proposed protocol is also resistant to attacks from the inside.
The remaining parts of this paper are organized as follows. In Section 2, related work is briefly reviewed. We show what previous works have done and their limitations. From that point, we present our motivation to fill the gap. In Section 3, we introduce the preliminaries and notations used in the proposal. In the next section, our proposed protocol is described in detail. In Section 5, the security analysis is presented to support our proposal. Finally, the conclusion and future work are included in Section 6.
Related Works
Over the years, there has been plenty of work researching the preservation of privacy in biometric based authentication systems. Biometric template protection is one indispensable part of this research field. In [START_REF] Jain | Biometric template security[END_REF], Jain et al presented a detailed survey of various biometric template protection schemes and discussed their strengths and weaknesses in light of the security and accuracy dilemma. There are two approaches to deal with this issue, including feature transformation and biometric cryptosystems. The first approach, identified as feature transform, allows users to replace a compromised biometric template while reducing the amount of information revealed. However, some methods of this approach cannot achieve an acceptable performance; others are unrealistic under assumptions from a practical view point [START_REF] Upmanyu | Blind Authentication: A Secure Crypto-Biometric Verification Protocol[END_REF]. The other approach tries to combine biometrics and cryptographic techniques in order to take advantage of both. The schemes employing these methods aim at generating a key, which is derived from the biometric template or bound with the biometric template, together with some helper data. Both the biometric template and the key are then discarded; only the helper data is stored in the database for reproducing the biometric or the secret key later. Nevertheless, the biometric cryptosystem seems to lose the revocability property that requires the ability to revoke a compromised template and reissue a new one based on the same biometric data. On this account, some recent studies tend to integrate the advantages of both approaches to enhance not only the security but also the performance of the system. The combination of secure sketch and ANN (Artificial Neural Network) was proposed in [START_REF] Huynh | A Combination of ANN and Secure Sketch for Generating Strong Biometric Key[END_REF]. The fuzzy vault was combined with Periodic Function-Based Transformation in [START_REF] Le | Protecting Biometric Features by Periodic Function-Based Transformation and Fuzzy Vault[END_REF], or with the non-invertible transformation to conduct a secure online authentication in [START_REF] Lifang | A Face Based Fuzzy Vault Scheme for Secure Online Authentication[END_REF]. The homomorphic cryptosystem was employed in a fuzzy commitment scheme to achieve blind authentication in [START_REF] Failla | eSketch: a privacy-preserving fuzzy commitment scheme for authentication using encrypted biometrics[END_REF]. In this paper, we try to integrate the idea of fuzzy commitment and the non-invertible transformation to guarantee the security of the user's biometric template.
In recent years, many biometric based remote authentication protocols have been proposed. However, most previous protocols only protect the client side and the transmission channel, neglecting the server side. In [START_REF] Nguyen | An approach to protect Private Key using fingerprint Biometric Encryption Key in BioPKI based security system[END_REF], the authors utilize a Biometric Encryption Key (BEK) to encrypt and safeguard the Private Key. The BioPKI system proposed in the paper revolved around the security of the private key and left the biometric feature out of the security considerations.
In 2010, Kai Xi et al proposed a bio-cryptographic security protocol for remote authentication in a mobile computing environment. In this protocol, a fingerprint was used for verification, and the genuine points were protected by the fuzzy vault technique which randomly inserts a great number of chaff points into the set of genuine points. All elements in the newly created set were given index numbers. The server only stored the index numbers of all genuine points. The communication between client and server was protected by a Public Key Infrastructure (PKI) scheme, Elliptic Curve Cryptography (ECC), which offered low computational requirements with the same security strength as RSA. However, the authors focused only on the security of the client side (mobile devices) and the transmission channel. The server was supposed to have higher security strength, so the authors did not consider attacks on the server or even attacks from the server. In addition, the authors argued that to prevent replay attacks and brute force attacks, a biometric-based session key was generated separately from the set of genuine points; nonetheless, the server only had the list of index numbers of these points, so it was unable to generate the key independently as described in [START_REF] Xi | A fingerprint based bio-cryptographic security protocol designed for client/server authentication in mobile computing environment[END_REF]. In 2013, Hisham et al presented another approach that combined steganography and biometric cryptosystems in order to obtain secure mutual authentication and key exchange between client and server in a remote architecture [START_REF] Al-Assam | Combining steganography and biometric cryptosystems for secure mutual authentication and key exchange[END_REF]. In this paper, the authors provided some references for proving that hiding biometric data in a cover image based on the steganography technique can increase the security of transferring biometric data over insecure networks [START_REF] Jain | Hiding biometric data[END_REF]. Moreover, in order to protect the biometric template stored in the authentication server while preserving the revocability property, the protocol employed the non-invertible transformation technique using random orthonormal matrices to project biometric feature vectors into other spaces while preserving the original distances. The new approach obtained not only secure mutual authentication but also immunity from replay and other remote attacks. However, the authors have not considered the possibility that the authentication server itself steals the data in its own database to impersonate its users in order to conduct illegal transactions. This attack will be particularly dangerous in case that server is a bank. A bank with dark intentions would be totally free to impersonate its customers; it could abuse its privileges to log into customers' accounts, draw all the money and leave no guilty evidence. Nonetheless, almost all current research only focuses on biometric template protection or how to defend against attacks from outside; it has not spent enough concern on the attacks from inside yet. More concretely speaking, the ability of the server to access the system on behalf of a user and carry out some criminal actions should be taken into account.
In addition, the scalability property needs to be discussed more in the remote authentication architecture. When the number of users and the number of servers grows, the number of templates which belong to a user becomes large, and each server has to remember every user's template. That design makes the system vulnerable and wastes resources. To guarantee the scalability property, Fengling et al presented a biometric based remote authentication scheme which employed the Kerberos protocol [START_REF] Fengling | Biometric-Kerberos authentication scheme for secure mobile computing services[END_REF]. A biometric-Kerberos authentication protocol is suitable for e-commerce applications. The benefit of Kerberos is that expensive session-based user authentication can be separated from cheaper ticket-based resource access. However, the Achilles' heel of the proposed scheme is the Key Distribution Center (KDC), i.e. the authentication server, which is supposed to be trusted. Therefore, there were no techniques protecting the private information of the client against insider attacks.
The contribution of this work is that we propose a biometric-based remote authentication protocol which has the ability to prevent an authentication server from impersonating its clients. In addition, the proposal is resistant to outside attacks from an insecure network by combining the orthonormal random projection with the fuzzy commitment scheme. Mutual authentication and key agreement are also guaranteed in this protocol.
3 Preliminaries and Notations
Fuzzy Commitment Scheme
The fuzzy commitment scheme, as proposed in [START_REF] Juels | A fuzzy commitment scheme[END_REF], belongs to the biometric cryptosystem approach; it combines error correcting codes and cryptography, and is recalled in detail below (see Fig. 1).
Orthonormal Random Projection
Random Orthonormal Projection (ROP) is a technique that utilizes an orthonormal matrix to project a set of points into another space while preserving the distances between points. In the categorization of template protection schemes proposed by Jain [START_REF] Jain | Biometric template security[END_REF], ROP belongs to the non-invertible transformation approach. It meets the revocability requirement by mapping a biometric feature into a secure domain through an orthonormal matrix. The method to effectively generate an orthonormal matrix was introduced in [START_REF] Al-Assam | A lightweight approach for biometric template protection[END_REF]. It can be used to replace the traditional Gram-Schmidt method. Given the biometric feature vector x of size 2n, an orthonormal random matrix A of size 2n × 2n, and a random vector b of size 2n, we have the transformation y = Ax + b.
The orthonormal matrix $A$ of size $2n \times 2n$ has a block diagonal which is a set of $n$ orthonormal (rotation) matrices of size $2 \times 2$. The other entries of $A$ are zeros. We present an example of the matrix $A$ in (1), where the values $\theta_1, \theta_2, \ldots, \theta_n$ are random numbers in the range $[0, 2\pi]$:
$$A = \begin{pmatrix}
\cos\theta_1 & \sin\theta_1 & 0 & 0 & \cdots & 0 & 0 \\
-\sin\theta_1 & \cos\theta_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cos\theta_2 & \sin\theta_2 & \cdots & 0 & 0 \\
0 & 0 & -\sin\theta_2 & \cos\theta_2 & \cdots & 0 & 0 \\
\vdots & & & & \ddots & & \vdots \\
0 & 0 & 0 & 0 & \cdots & \cos\theta_n & \sin\theta_n \\
0 & 0 & 0 & 0 & \cdots & -\sin\theta_n & \cos\theta_n
\end{pmatrix} \quad (1)$$
By using this technique to produce the orthonormal matrix, there is no need for a complex process such as Gram-Schmidt. Besides its effectiveness in computational complexity, it can also improve the security while guaranteeing intra-class variation. When the client is in doubt of his template getting exposed, he only needs to create another orthonormal matrix A to gain a new transformed template.
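A brief NumPy sketch (an illustration, not the authors' implementation; the seed plays the role of K_M and the vector sizes are arbitrary) of building such a block-diagonal orthonormal matrix from random angles and applying the transformation y = Ax + b:

import numpy as np

def random_orthonormal(n, rng):
    # block-diagonal 2n x 2n matrix made of n random 2x2 rotation blocks
    A = np.zeros((2 * n, 2 * n))
    for i in range(n):
        t = rng.uniform(0, 2 * np.pi)
        A[2*i:2*i+2, 2*i:2*i+2] = [[np.cos(t), np.sin(t)],
                                   [-np.sin(t), np.cos(t)]]
    return A

rng = np.random.default_rng(seed=42)          # the seed plays the role of K_M
n = 4
x = rng.normal(size=2 * n)                    # stand-in for a biometric feature vector
A = random_orthonormal(n, rng)
b = rng.normal(size=2 * n)
y = A @ x + b                                 # transformed (cancellable) template
print(np.allclose(A @ A.T, np.eye(2 * n)))    # orthonormality check: True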
Notations
In the rest of the paper, we will use the following notations:
B is a biometric feature vector of a client.
M is an orthonormal matrix that a client creates.
B_TC is a transformed biometric stored in the database as a template.
H(m) is the hash version of the message m.
BL is a biometric lock of a client.
P is a permutation.
Pu and Pr are respectively the public key and the private key of a cryptosystem.
E_PuX(m) is the encryption of the message m using the public key of X.
K_A is the authentication key generated randomly by the client.
E_KA(m) is the symmetric encryption of the message m using the secret key K_A.
S is the mobile serial number provided by the client.
S_T is the mobile serial number of the client which is stored in the database.
C is a client.
S_1, S_2 are respectively the first server and the second server.
Proposed Protocol
Enrollment Phase
In the enrollment phase, the client employs a random number K_M stored on his/her device to generate the random orthonormal matrix M (based on the technique described in Section 3.2). After being extracted, the feature vector B is combined with the matrix M to produce the cancellable version B_TC of B, which is then sent to server S_1. In addition, the client also needs to register a secret number PIN with server S_1. The hash version of the PIN is then stored in the database by server S_1. Parallel to that process, the client has to send the serial number of his/her mobile device to another server (server S_2). The act of dividing the client's information into two different databases is meant to reduce the workload of the server; more importantly, it serves to limit the control of the server over the client's private information. The process is illustrated in Fig. 2.
Authentication Phase
In this phase, we apply the idea of the fuzzy commitment scheme to obtain secure biometric based remote authentication. Instead of transmitting the plain biometric data over the insecure network as in the original scheme, the client sends a biometric lock (BL), or helper data, to the server. At the server side, the biometric lock is combined with the component Y related to the client's biometric which was stored in the database at the enrollment phase. The result of this combination is the encrypted authentication key. The process is presented in Fig. 3. More details of the authentication phase are described in Fig. 4. The authentication function is undertaken by the second server S_2. Meanwhile, the first server S_1 takes the responsibility for computing the encryption of the authentication key and then sends the result to S_2 for the next steps.
In the authentication phase, the client sends a request to server S_1. This server creates a random number (nonce, Number used ONCE) N_a, then sends it to the client. Note that all messages between the client and the server over the transmission network are protected by an asymmetric cryptosystem (PKI, Public Key Infrastructure). In the meantime, the client generates the transformed biometric feature B_C from the biometric feature B' which is extracted in this phase, and the orthonormal matrix M from K_M. For the same client, the biometric feature B in the registration phase and B' in the authentication phase cannot be identical due to noise. The calculated B_C is combined with N_a to produce another version of the transformed biometric, B_O. This step is done to ensure that every time the client sends a request, a different version of B_O is created to avoid a replay attack. The items of B_O are permuted through the permutation P which is generated from the hash version of the PIN. This operation results in Y. It is meant to improve security by eliminating the characteristics of each biometric feature, enabling a random distribution of the biometric feature's values. Following that, Y and the encryption of the authentication key K_A become inputs of the fuzzy commitment process to generate the biometric lock BL (described in Fig. 3). The client then sends BL to server S_1 for authentication purposes. At the server side, after generating the nonce N_A, S_1 retrieves B_TC and h(PIN) from the database. The one-time version of the biometric template, B_TO, is created from the combination of B_TC and N_A. Some parts of this process are similar to the process at the client side. After B_TO, the server generates Y_T by shuffling B_TO using P, which is computed from h(PIN). Then, Y_T is used to unlock the BL to reproduce the encryption of the authentication key K_A.
At the fourth step, the client retrieves the mobile serial number S, hashes it twice to obtain h(h(S)), encrypts the result with the authentication key, and encrypts it once more with the public key of the server S_2 before sending it to S_2. Together with this step, at step 4', server S_1 sends the encryption of the authentication key under the public key of server S_2 to S_2. S_2 uses its private key to obtain the authentication key K_A. Server S_2 uses its private key as well as the newly obtained K_A to decrypt the message from the client. If the decryption process is successful, it also means that the biometric the client provided matches the transformed biometric template stored in the database of server S_1. The result of this decryption is compared with the double hashed version of S_T stored in the database of S_2. If they match, the client is authenticated; otherwise, the authentication fails.
For mutual authentication, the protocol does not stop here. After successful authentication, server S_2 computes h(S_T), then encrypts it with the authentication key K_A before sending the result back to the client. The client decrypts the message. If the decryption is successful, he/she can be sure that the authentication server also possesses the same key. The client then compares his/her own h(S) with the h(S_T) of the server. If they match, the server is authenticated, and the client can feel secure about the authentication server he/she communicates with. Once the mutual authentication is successfully accomplished, K_A is used to protect the communication between the client and the server.
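The client-side computation of the biometric lock can be summarized in a simplified Python sketch (for illustration only: binary vectors, XOR in place of the addition with N_a and of the error-correcting code, and illustrative names), together with the server-side unlocking on the stored template.

from hashlib import sha256
import random

def permutation_from_pin(pin, length):
    # derive a repeatable permutation P from h(PIN)
    seed = int.from_bytes(sha256(pin.encode()).digest(), 'big')
    idx = list(range(length))
    random.Random(seed).shuffle(idx)
    return idx

def biometric_lock(bits, nonce_bits, pin, key_bits):
    b_o = [b ^ n for b, n in zip(bits, nonce_bits)]        # one-time version of B_C
    P = permutation_from_pin(pin, len(b_o))
    y = [b_o[i] for i in P]                                # Y = shuffled B_O
    return [a ^ k for a, k in zip(y, key_bits)]            # BL = Y xor key material

def unlock(BL, template_bits, nonce_bits, pin):
    # server side: rebuild Y_T from the stored template and recover the key material
    b_o = [b ^ n for b, n in zip(template_bits, nonce_bits)]
    P = permutation_from_pin(pin, len(b_o))
    y_t = [b_o[i] for i in P]
    return [a ^ b for a, b in zip(BL, y_t)]

bits     = [1, 0, 1, 1, 0, 1, 0, 0]        # transformed biometric B_C (toy)
nonce    = [0, 1, 1, 0, 0, 1, 1, 0]        # N_a expanded to bits (toy)
key_bits = [1, 1, 0, 0, 1, 0, 1, 0]        # encrypted authentication key as bits (toy)
BL = biometric_lock(bits, nonce, "1234", key_bits)
print(unlock(BL, bits, nonce, "1234") == key_bits)   # True when the biometrics match exactly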
Security Analysis
The protocol indicates that the authenticity of the client requires the following factors:
The client's biometric data.
The number used once N_A.
The token that holds the key K_M to generate the random orthonormal matrix.
The PIN.
The mobile serial number S.
Multi-factor authentication enhances security since the chance that an attacker obtains all of a client's authentication factors and enters the system is reduced. In this section, we analyze in detail how the proposed protocol is robust against some main attacks.
Biometric template attack
The original biometric is protected by the non-invertible transformation function. Server keeps the transformed version, but it is impossible for server to infer the client's original biometric from this template. Using orthonormal matrix as a noninvertible function ensures the revocability of biometric template. In case the client is in doubt that his/her biometric template is compromised, he/she only needs to alter parameter K M to produce new orthonormal matrix, then registers the new transformed biometric template to the server. This process is similar to that of changing password in traditional authentication system. Another useful factor helps against biometric template attack is the permutation. A one-time version of the transformed biometric feature B O is re-ordered by a permutation P. This operation is meant to improve security by eliminating the characters of each biometric feature, enabling random distribution of biometric feature's value. This, eventually, weakens the ability of attacker to infer the value of biometric feature.
Replay attack
A replay attack happens when attackers reuse old information to impersonate either the client or the server with the aim of deceiving the other side. This attack is prevented by using N_A and the session key K_A, which are used only once. The system only collapses once the attackers steal the private key. In that case the attacker is able to obtain the 3rd message in the authentication phase (see Fig 6) to calculate BL. After that, the attacker reuses the BL to deceive the server in a new session. The proposed protocol is immune from this type of attack as the BL generated for every client request contains a new N_A produced by the server. In the event of an attacker using an old BL, the authentication process cannot calculate the exact authentication key K_A. More concretely speaking, in the authentication phase, the transformed biometric features, B_C at the client side and B_TC at the server side, are combined with the same number N_A by a simple addition operation. This action creates a one-time version of the transformed biometric feature; therefore, an attacker cannot reuse an old transformed biometric feature to delude the server. Thanks to that, the security of the entire protocol is strengthened without sacrificing the accuracy. The accuracy is maintained because the addition operation does not modify the intra-class variation of the biometric features, which results in an unchanged distance between a transformed biometric feature and its original. In other words, the error rate stabilizes while security is strengthened.
5.3 Man-in-the-middle attack
A MITM (man-in-the-middle) attack is a form of active eavesdropping: attackers make an independent connection and relay messages between client and server in order to impersonate one side and delude the other. Concretely speaking, the communication in this case is controlled by the attacker while the client or server still believes that they are talking to each other over a private connection. A MITM attack happens when the attacker catches the messages between client and server and then impersonates one side to communicate with the other side. In our proposed protocol, this type of attack cannot occur since the protocol provides mutual authentication: not only does it require the server to authenticate its right client, but it also enables the client to perform its own process to confirm the requested server.
Insider attack
This type of attack happens when the administrator of authentication server exploits client's data stored in the database to legalize his authentication process on behalf of the client. The proposed protocol is capable of reducing the risk from insider attack by splitting authentication server into two different servers. Each server has its own function and data. Server S 1 stores transformed biometric template and some supporting information to generate authentication key. Authentication function is carried out by server S 2 . To perform this function, server S 2 has to receive authentication key calculated by server S 1 and authentication information provided by the client. Consequently, server S 2 can only store authentication information (client's mobile serial number) to proceed authenticating client (described in section 4.2). At the same time, such information is used by client to reversibly authenticate server.
Conclusion
In this paper, we have presented a biometric-based remote authentication protocol that is resistant to most sophisticated attacks over an open network. The proposed protocol combines the client's biometric with other authentication factors to achieve a high level of security. Thanks to the combination of fuzzy commitment and non-invertible transformation techniques, together with a mutual challenge/response, the protocol withstands the main attacks against biometric-based authentication systems, such as biometric template attacks, replay attacks and man-in-the-middle attacks. A remarkable point of this work is that we also address the problem of an untrusted server: we reduce the ability of the administrator to use the client's authentication information saved in the database to impersonate him/her and cheat the system. By using a random orthonormal projection instead of the traditional orthonormal projection, the computational complexity is reduced while the accuracy is maintained.
The fuzzy commitment scheme belongs to the first class of biometric cryptosystem approaches. It is the combination of two popular techniques from the areas of error correcting codes (ECC) and cryptography. To understand how the fuzzy commitment scheme works, we first have to recall ECC, which plays a central role in the scheme. An ECC consists of a set of code-words C ⊆ {0,1}^n and a function that maps a message to a code-word before it is transmitted along a noisy channel. Given the message space M = {0,1}^n, we define the translation (or encoding) function g : M → C and the decoding function f : {0,1}^n → M. Hence g is a map from M to C, whereas f is not the inverse map from C to M but a map from arbitrary n-bit strings to the nearest code-word in C. In the fuzzy commitment scheme, a biometric reading is treated as a corrupted code-word. During the registration stage, the client provides a biometric template B to the server. The server randomly picks a code-word c, computes δ = B ⊕ c and the hash value Hash(c), and stores the pair (δ, Hash(c)) in the database. During the authentication stage, a new, noisy biometric B' is sent to the server by the client. The server computes c' = B' ⊕ δ, decodes c', and compares the hash of the result with the Hash(c) previously stored in the database. If the two match, the client is authenticated. This process is illustrated in Fig. 1.
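A minimal sketch of the enrolment and verification steps just described, using a simple repetition code as a stand-in ECC (a real system would use a stronger code such as BCH or Reed-Solomon; this choice, like the parameter values, is only for illustration):

```python
import hashlib
import numpy as np

REP = 5  # every message bit is repeated REP times; corrects up to 2 flips per block

def encode(msg_bits: np.ndarray) -> np.ndarray:            # g : M -> C
    return np.repeat(msg_bits, REP)

def decode(word: np.ndarray) -> np.ndarray:                # f : {0,1}^n -> M (majority vote)
    return (word.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)

def enroll(template: np.ndarray, rng) -> tuple:
    c = encode(rng.integers(0, 2, template.size // REP, dtype=np.uint8))
    delta = template ^ c                                    # delta = B xor c
    return delta, hashlib.sha256(c.tobytes()).hexdigest()   # store (delta, Hash(c))

def verify(query: np.ndarray, delta: np.ndarray, h: str) -> bool:
    c_prime = query ^ delta                                 # c' = B' xor delta
    c_hat = encode(decode(c_prime))                         # snap c' to the nearest code-word
    return hashlib.sha256(c_hat.tobytes()).hexdigest() == h

rng = np.random.default_rng(1)
b = rng.integers(0, 2, 100, dtype=np.uint8)                 # enrolled biometric template B
delta, h = enroll(b, rng)
b_noisy = b.copy()
b_noisy[::25] ^= 1                                          # a few bit flips (noisy reading B')
print(verify(b_noisy, delta, h))                            # True for small enough noise
```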
Fig. 1. Fuzzy Commitment scheme.
Fig. 2. Enrollment phase.
Fig. 3. The fuzzy commitment in the proposal authentication phase.
Fig. 4. Authentication phase.
Acknowledgements. This research is funded by Vietnam National University - Ho Chi Minh City (VNU-HCM) under grant number TNCS-2014-KHMT-06. We also want to express our great appreciation to each member of D-STAR Lab (www.dstar.edu.vn) for their enthusiastic support and helpful advice during the time we carried out this research. | 28,522 | [ 
"1001346",
"1001347",
"993459"
] | [
"265217",
"265217",
"265217"
] |
01466448 | en | [
"math"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01466448/file/Statistical-detection-topology_6-02_2017.pdf | Catherine Aaron
Alejandro Cholaquidis
Antonio Cuevas
Stochastic detection of some topological and geometric features
This work is closely related to the theories of set estimation and manifold estimation. Our object of interest is a, possibly lower-dimensional, compact set S ⊂ R d . The general aim is to identify (via stochastic procedures) some qualitative or quantitative features of S, of geometric or topological character. The available information is just a random sample of points drawn on S. The term "to identify" means here to achieve a correct answer almost surely (a.s.) when the sample size tends to infinity. More specifically the paper aims at giving some partial answers to the following questions: 1. Is S full dimensional? 2. If S is full dimensional, is it "close to a lower dimensional set" M? 3. If S is "close to a lower dimensional M", can we a) estimate M? b) estimate some functionals defined on M (in particular, the Minkowski content of M)?
The theoretical results are complemented with some simulations and graphical illustrations.
Introduction
The general setup. Some related literature. The emerging statistical field currently known as manifold estimation (or, sometimes, statistics on manifolds, or manifold learning) is the result of the confluence of, at least, three classical theories: (a) the analysis of directional (or circular) data [START_REF] Mardia | Directional Statistics[END_REF], Bhattacharya and Patrangenaru (2008) where the aims are similar to those of the classical statistics but the data are supposed to be drawn on the sphere or, more generally, on a lower-dimensional manifold; (b) the study of non-linear methods of dimension reduction, [START_REF] Delicado | Another look at principal curves and surfaces[END_REF], [START_REF] Hastie | Principal curves[END_REF], aiming at recovering a lower-dimensional structure from random points taken around it, and (c) some techniques of stochastic geometry [START_REF] Chazal | The "λ-medial Axis[END_REF] and set estimation [START_REF] Cuevas | Set Estimation[END_REF], [START_REF] Cholaquidis | On Poincaré cone property[END_REF], Cuevas et al. (2007) whose purpose is to estimate some relevant quantities of a set (or the set itself) from the information provided by a random sample whose distribution is closely related to the set.
There are also strong connections with the theories of persistent homology and computational topology, [START_REF] Carlsson | Topology and data[END_REF], [START_REF] Niyogi | A topological view of unsupervised learning from noisy data[END_REF], [START_REF] Fasy | Confidence sets for persistence diagrams[END_REF].
In all these studies, from different points of view, the general aim is similar: one wants to get information (very often of geometric or topological type) on a set from a sample of points. To be more specific, let us mention some recent references on these topics, roughly grouped according the subject (the list is largely non-exhaustive):
Manifold recovery from a sample of points, Genovese et al. (2012b); Genovese et al (2012c).
Inference on dimension, Fefferman et al. (2016), [START_REF] Brito | Intrinsic dimension identification via graph-theoretic methods[END_REF].
Estimation of measures (perimeter, surface area, curvatures), Cuevas et al. (2007), [START_REF] Jiménez | Nonparametric estimation of surface integrals[END_REF], [START_REF] Berrendero | A geometrically motivated parametric model in manifold estimation[END_REF].
Estimation of some other relevant quantities in a manifold, [START_REF] Niyogi | Finding the Homology of Submanifolds with High Confidence from Random Samples[END_REF], [START_REF] Chen | Nonlinear manifold representations for functional data[END_REF].
Dimensionality reduction, Genovese et al. (2012a), Tenebaum et al. (2000).
The problems under study. The contents of the paper. Let X 1 , . . . , X n be random sample points drawn on an unknown compact set S ⊂ R d . We consider two different models:
The noiseless model : the data X n = {X 1 , . . . , X n } are taken from a distribution whose support is S itself; [START_REF] Aamari | Stability and minimax optimality of tangential Delaunay complexes for manifold reconstruction[END_REF], [START_REF] Amenta | A simple algorithm for homeomorphic surface reconstruction[END_REF], [START_REF] Cholaquidis | On Poincaré cone property[END_REF], [START_REF] Cuevas | A plug-in approach to support estimation[END_REF].
The parallel (noisy) model : The data X n = {X 1 , . . . , X n } have a distribution whose support is the parallel set S of points within a distance to M smaller than R 1 , for some R 1 > 0, where M is a d -dimensional set with d ≤ d; [START_REF] Berrendero | A geometrically motivated parametric model in manifold estimation[END_REF]. Note that other different models "with noise" are considered in Genovese et al. (2012a), Genovese et al. (2012b) and Genovese et al (2012c).
Our general aim is to identify, eventually almost surely (a.s.), some geometric or topological properties of M or S. Note that with an eventual a.s. identification procedure, no statistical test is needed (asymptotically) since eventually the property (or the lack of it) is identified with no error. Moreover, the identification methods are "algorithmic" in the sense that they are based on automatic procedures to perform them with arbitrary precision. This will require to impose some restrictions on M or S. Section 2 includes all the relevant definitions, notations and basic geometric concepts we will need.
In Section 3 we first develop, under the noiseless model, an algorithmic procedure to identify, eventually, a.s., whether or not S has an empty interior; this is achieved in Theorems 1 and 2 below. A positive answer would essentially entail (under some conditions, see the beginning of Section 3) that we are in fact in the noiseless model and M has a dimension smaller than that of the ambient space.
Then, assuming the noisy model and that M has an empty interior, Theorems 3 (i) and 4 (i) provide two methods for the estimation of the maximum level of noise R_1, giving also the corresponding convergence rates. If R_1 is known in advance, the results in Theorems 3 and 4 also allow us to decide whether or not the "inside set" M has an empty interior.
In Section 4 we consider again the noisy model, where the data are drawn on the R_1-parallel set around a lower-dimensional set M. We propose a method to "de-noise" the sample, which essentially amounts to estimating M from sample data drawn on the parallel set S around M.
In Section 5 we consider the problem of estimating the d -dimensional Minkowski measure of M under both the noiseless and the noisy model. We assume throughout the section that the dimension d (in Hausdorff sense, see below) of the set M is known.
Finally, in Section 6 we present some simulations and numerical illustrations.
Some geometric background
This section is devoted to make explicit the notations, and basic concepts and definitions (mostly of geometric character) we will need in the rest of the paper.
Some notation. Given a set S ⊂ R^d, we will denote by S̊, S̄ and ∂S the interior, closure and boundary of S, respectively, with respect to the usual topology of R^d. We will also denote ρ(S) = sup_{x∈S} d(x, ∂S). Notice that ρ(S) > 0 is equivalent to S̊ ≠ ∅. The parallel set of S of radius ε will be denoted by B(S, ε), that is,

B(S, ε) = {y ∈ R^d : inf_{x∈S} ‖y - x‖ ≤ ε}.    (1)

If A ⊂ R^d is a Borel set, then µ_d(A) (sometimes just µ(A)) will denote its Lebesgue measure. We will denote by B(x, ε) (or B_d(x, ε), when necessary) the closed ball in R^d of radius ε centred at x, and ω_d = µ_d(B_d(x, 1)). Given two compact non-empty sets A, C ⊂ R^d, the Hausdorff distance (or Hausdorff-Pompei distance) between A and C is defined by d_H(A, C) = inf{ε > 0 : A ⊂ B(C, ε) and C ⊂ B(A, ε)}.

Some geometric regularity conditions for sets. The following conditions have been used many times in set estimation topics; see, e.g., [START_REF] Niyogi | Finding the Homology of Submanifolds with High Confidence from Random Samples[END_REF], Genovese et al. (2012b), [START_REF] Cuevas | Set Estimation[END_REF] and references therein.
Definition 1. Let S ⊂ R d be a closed set. The set S is said to satisfy the outside r-rolling condition if for each boundary point s ∈ ∂S there exists some x ∈ S c such that B(x, r) ∩ ∂S = {s}. A compact set S is said to satisfy the inside r-rolling condition if S c satisfies the outside r-rolling condition at all boundary points.
Definition 2. A set S ⊂ R^d is said to be r-convex, for r > 0, if S = C_r(S), where

C_r(S) = ⋂_{B(x,r) : B(x,r) ∩ S = ∅} (B(x, r))^c    (2)
is the r-convex hull of S. When S is r-convex, a natural estimator of S from a random sample X n of points (drawn on a distribution with support S), is C r (X n ).
Following the notation in [START_REF] Federer | Curvature measures[END_REF], let Unp(S) be the set of points x ∈ R^d with a unique projection on S. Definition 3. For x ∈ S, let reach(S, x) = sup{r > 0 : B(x, r) ⊂ Unp(S)}. The reach of S is defined by reach(S) = inf{reach(S, x) : x ∈ S}, and S is said to be of positive reach if reach(S) > 0.
The study of sets with positive reach was started by [START_REF] Federer | Curvature measures[END_REF]; see Thäle (2008) for a survey. This is now a major topic in different problems of manifold learning or topological data analysis. See, e.g., Adler et al. (2016) for a recent reference.
The conditions established in Definitions 1, 2 and 3 have an obvious mutual affinity. In fact, they are collectively referred to as "rolling properties" in [START_REF] Cuevas | On statistical properties of sets fulfilling rolling-type conditions[END_REF]. However, they are not equivalent: if the reach of S is r then S is rconvex, which in turn implies the (outer) r-rolling condition. The converse implications are not true in general; see [START_REF] Cuevas | On statistical properties of sets fulfilling rolling-type conditions[END_REF] for details. Definition 4. A set S ⊂ R d is said to be standard with respect to a Borel measure ν in a point x if there exists λ > 0 and δ > 0 such that
ν(B(x, ε) ∩ S) ≥ δ µ_d(B(x, ε)),   0 < ε ≤ λ.    (3)

A set S ⊂ R^d is said to be standard if (3) holds for all x ∈ S.
The following result will be useful below. It establishes a simple connection between standardness and the inside r-rolling condition.
Proposition 1. Let S ⊂ R^d be the support of a Borel measure ν whose density f with respect to the Lebesgue measure is bounded from below by f_0. If S satisfies the inside r-rolling condition at all x ∈ ∂S, then it is standard with respect to ν for any δ ≤ f_0/3 and λ = r.
Proof. Let 0 < ε ≤ r and x ∈ S. If d(x, ∂S) ≥ r the result is obvious, so let x ∈ S be such that d(x, ∂S) < r. Since reach(S^c) ≥ r, there exists z ∈ R^d such that x ∈ B(z, r) ⊂ S. Then, for all ε ≤ r,

ν(B(x, ε) ∩ S) ≥ ν(B(x, ε) ∩ B(z, r)) ≥ f_0 µ_d(B(x, ε) ∩ B(z, r)) ≥ (f_0/3) µ_d(B(x, ε)).
Some basic definitions on manifolds.
The following basic concepts are stated here for the sake of completeness and notational clarity. More complete information on these topics can be found, for example, in the classical textbooks [START_REF] Boothby | An Introduction to Differentiable Manifolds and Riemannian Geometry[END_REF][START_REF] Do Carmo | Riemannian Geometry[END_REF]. See also the nice book [START_REF] Galbis | Vector Analysis Versus vector Calculus[END_REF] and the summary (Zhang, 2011, chapter 3). Definition 5. A topological manifold M of dimension k in R d is a subset of R d with k ≤ d such that every point in M has a neighbourhood homeomorphic to an open set in R k . We will say that M is a regular k-surface, or a differentiable k-manifold of class p ≥ 1, is there is a family (often called atlas) V = {(V α , x α )} of pairs (V α , x α ) (often called parametrizations, coordinate systems or charts) such that the V α are open sets in R k and the x α : V α → M are functions of class p satisfying: (i) M = ∪ α x α (V α ), (ii) every x α is a homeomorphism between V α and x α (V α ) and (iii) for every v ∈ V α the differential dx α (u) : R k → R d is injective.
The notion of manifold with boundary is defined in a similar way by replacing R^k with R^k_+ = {x ∈ R^k : x_k ≥ 0}.
A manifold M is said to be compact when it is compact as a topological space. As a direct consequence of the definition of compactness, any compact manifold has a finite atlas. Typically, in most relevant cases the required atlas for a manifold has, at most, a denumerable set of charts.
An equivalent definition of the notion of manifold (see Do Carmo (1992, Def 2.1, p. 2)) can be stated in terms of parametrizations or coordinate systems of type (U α , ϕ α ) with ϕ α : V α ⊂ M → R k . The conditions would be completely similar to the previous ones, except that the ϕ α are defined in a reverse way to that of Definition 5.
In the simplest case, just one chart x : V → M is needed. The structures defined in this way are sometimes called planar manifolds. Some background on geometric measure theory. The important problem of defining lower-dimensional measures (surface measure, perimeter, etc.) has been tackled in different ways. The book by [START_REF] Mattila | Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability[END_REF] is a classical reference. We first recall the so-called Hausdorff measure. It is defined for any separable metric space (M, ρ). Given δ, r > 0 and E ⊂ M, let
H^r_δ(E) = inf { Σ_{j=1}^∞ (diam(B_j))^r : E ⊂ ∪_{j=1}^∞ B_j, diam(B_j) ≤ δ },
where diam(B) = sup{ρ(x, y) : x, y ∈ B}, inf ∅ = ∞. Now, define H r (E) = lim δ→0 H r δ (E). The set function H r is an outer measure. If we restrict H r to the measurable sets (according to standard Caratheodory's definition) we get the r-dimensional Hausdorff measure on M.
The Hausdorff dimension of a set E is defined by
dim_H(E) = inf{r : H^r(E) = 0} = sup{r : H^r(E) = ∞}.    (4)
It can be proved that, when M is a k-dimensional smooth manifold, dim_H(M) = k.
Another popular notion to define lower-dimensional measures for the case M ⊂ R^d is the Minkowski content. For an integer d' < d, denote by ω_{d-d'} the volume of the unit ball in R^{d-d'}, and define the d'-dimensional Minkowski content of a set M by

L_0^{d'}(M) = lim_{ε→0} µ_d(B(M, ε)) / (ω_{d-d'} ε^{d-d'}),    (5)

provided that this limit exists. In what follows we will often denote L_0^{d'}(M) = L_0(M) when the value of d' is understood. The term "content" is used here as a surrogate for "measure", as the expression (5) does not generally lead to a true (sigma-additive) measure.
A compact set M ⊂ R^d is said to be d'-rectifiable if there exist a compact set K ⊂ R^{d'} and a Lipschitz function f : R^{d'} → R^d such that M = f(K). In (Federer, 1969, Th. 3.2.39) it is proved that, for a compact d'-rectifiable set M, L_0^{d'}(M) = H^{d'}(M). More details on the relations between the rectifiability property and the structure of manifold can be found in (Federer, 1969, Th. 3.2.29).
Checking closeness to lower dimensionality
We consider here the problem of identifying whether or not the set M ⊂ R d (not necessarily a manifold) has an empty interior.
Note that, if M ⊂ R^d is "regular enough", dim_H(M) < d is in fact equivalent to M̊ = ∅. Indeed, in general dim_H(M) < d implies M̊ = ∅. The converse implication is not always true, even for sets fulfilling the property H^d(∂M) = 0 (see (2007)). However, it holds if M has positive reach, since in this case H^{d-1}(∂M) < ∞ (see the comments after Th. 7 and inequality (27) in [START_REF] Ambrosio | Outer Minkowski content for some classes of closed sets[END_REF]).
Also, clearly, in the case where M is a manifold, the fact that M has an empty interior amounts to say that its dimension is smaller than that of the ambient space.
The noiseless model
We first consider the case where the sample information follows the noiseless model explained in the Introduction, that is, the data X 1 , . . . , X n are assumed to be an iid sample of points drawn from an unknown distribution P X with support M ⊂ R d . When M is a lower-dimensional set, this model can be considered as an extension of the classical theory of directional (or spherical) data, in which the sample data are assumed to follow a distribution whose support is the unit sphere in R d . See, e.g., [START_REF] Mardia | Directional Statistics[END_REF].
Our main tool here will be the simple offset or Devroye-Wise estimator (see [START_REF] Devroye | Detection of abnormal behaviour via nonparametric estimation of the support[END_REF]) given by
Ŝ_n(r) = ⋃_{i=1}^n B(X_i, r).    (6)
More specifically, we are especially interested in the "boundary balls" of Ŝn (r).
Definition 6. Given r > 0 let Ŝn (r) be a set estimator of type (6) based on the data x 1 , . . . , x n . We will say that B(x i , r) is a boundary ball of Ŝn (r) if there exists a point y ∈ ∂B(x i , r) such that y ∈ ∂ Ŝn (r). The "peeling" of Ŝn (r), denoted by peel( Ŝn (r)), is the union of all non-boundary balls of Ŝn (r). In other words, peel( Ŝn (r)) is the result of removing from Ŝn (r) all the boundary balls.
The following theorem is the main result of this section. It relates, in statistical terms, the emptiness of M with peel( Ŝn ).
Theorem 1. Let M ⊂ R^d be a compact non-empty set. Then, under the model and notation stated in the two previous paragraphs, we have:
(a) if M has an empty interior (M̊ = ∅) and M fulfils the outside rolling condition for some r > 0, then peel(Ŝ_n(r')) = ∅ for any set Ŝ_n(r') of type (6) with r' < r;
(b) in the case M̊ ≠ ∅, assume that there exists a ball B(x_0, ρ_0) ⊂ M such that B(x_0, ρ_0) is standard w.r.t. P_X, with constants δ and λ (see Definition 4). Then peel(Ŝ_n(r_n)) ≠ ∅ eventually, a.s., where r_n is a radius sequence such that (κ log(n)/n)^{1/d} ≤ r_n ≤ min{ρ_0/2, λ} for a given κ > (δ ω_d)^{-1}.
Proof. (a) Let X_n = {X_1, ..., X_n} be an iid sample of X with distribution P_X. To prove that peel(Ŝ_n(r')) = ∅ for all r' < r it is enough to prove that, for all r' < r and all i = 1, ..., n, there exists a point y_i ∈ ∂B(X_i, r') such that y_i ∉ B(X_j, r') for all X_j ≠ X_i. Since M is closed and M̊ = ∅, ∂M = M. The outside rolling ball property implies that for all X_i ∈ M there exists z_i ∈ M^c such that B(z_i, r) ∩ M = {X_i}. Let us denote u_i = (z_i - X_i)/r and consider y_i = X_i + r' u_i. Clearly y_i ∈ ∂B(X_i, r'). From B(y_i, r') ⊂ B(z_i, r) and the outside rolling ball property we get that {X_i} ⊂ B(y_i, r') ∩ X_n ⊂ B(z_i, r) ∩ M ⊂ {X_i}, so that, for all X_j ≠ X_i, X_j ∉ B(y_i, r') and thus y_i ∉ B(X_j, r').
(b) First we are going to prove that if (C log(n)/(δ ω_d n))^{1/d} ≤ r_n ≤ min{ρ_0/2, λ} for a given C > 1, then, eventually a.s., for all y ∈ B(x_0, 2r_n) we have B(y, r_n) ∩ X_n ≠ ∅.    (7)
Consider only n ≥ 3 and let ε_n = (log(n))^{-1}. There is a positive constant τ_d such that we can cover B(x_0, 2r_n) with ν_n = τ_d ε_n^{-d} balls of radius r_n ε_n centred at {t_1, ..., t_{ν_n}}. Let us define p_n = P_X(∃ y ∈ B(x_0, 2r_n) : B(y, r_n) ∩ X_n = ∅); then

p_n ≤ Σ_{i=1}^{ν_n} P_X( B(t_i, r_n(1 - ε_n)) ∩ X_n = ∅ ).    (8)

Notice that for any given i,

P_X( B(t_i, r_n(1 - ε_n)) ∩ X_n = ∅ ) = ( 1 - P_X(B(t_i, r_n(1 - ε_n))) )^n.

Since r_n ≤ ρ_0/2, we have t_i ∈ B(x_0, ρ_0); then, using that B(x_0, ρ_0) is standard with the same δ and λ,

P_X( B(t_i, r_n(1 - ε_n)) ∩ X_n = ∅ ) ≤ ( 1 - ω_d δ r_n^d (1 - ε_n)^d )^n ≤ ( 1 - C (log(n)/n) (1 - ε_n)^d )^n,

which, according to (8), provides

p_n ≤ τ_d ε_n^{-d} ( 1 - C (log(n)/n) (1 - ε_n)^d )^n ≤ τ_d ε_n^{-d} n^{-C(1-ε_n)^d},

where we have used that (1 - x)^n ≤ exp(-nx). Since C > 1, we can choose β > 1 such that p_n/n^{-β} → 0, and then Σ_n p_n < ∞. Finally, (7) follows as a direct application of the Borel-Cantelli Lemma. Observe that (7) implies that x_0 ∈ Ŝ_n(r_n) eventually a.s., so there exists X_i such that x_0 ∈ B(X_i, r_n) eventually a.s. Again by (7) we get that, eventually a.s., for all z ∈ ∂B(X_i, r_n) there exists X_j such that z ∈ B(X_j, r_n), and so z ∉ ∂Ŝ_n(r_n). This implies that, eventually a.s., B(X_i, r_n) is not removed by the peeling process and so peel(Ŝ_n(r_n)) ≠ ∅ eventually, a.s.
Remark 1. Observe that the standardness condition required in Theorem 1 (b) is fulfilled if (3) holds for ν = µ_d and if P_X has a density f bounded from below by a positive constant.
The following result can be seen as an application of Theorem 1 for differentiable manifolds, with a specific, data driven, choice of r n .
Theorem 2. Let M be a d'-dimensional compact manifold in R^d. Suppose that the sample points X_1, ..., X_n are drawn from a probability measure P_X with support M which has a continuous density f with respect to the d'-dimensional Hausdorff measure on M, and f(x) > f_0 for all x ∈ M. Let us define, for any β > 6^{1/d}, r_n = β max_i min_{j≠i} ‖X_j - X_i‖. Then,
i) if d' = d and ∂M is a C^2 manifold, then peel(Ŝ_n(r_n)) ≠ ∅ eventually, a.s.;
ii) if d' < d and M is a C^2 manifold without boundary, then peel(Ŝ_n(r_n)) = ∅ eventually, a.s.
Proof. i) As d' = d, ∂M is a compact C^2 (d-1)-manifold; thus, by Theorem 1 in [START_REF] Walther | On a generalization of Blaschke's rolling theorem and the smoothing of surfaces[END_REF], M fulfils both the inside and outside rolling ball property for a small enough radius r > 0. Note that this result can be applied since the C^2 assumption on the compact hypersurface ∂M implies the Lipschitz condition for the outward normal vector, and the non-emptiness of the interior of every path-connected component of M is guaranteed by the fact that M is the support of an absolutely continuous distribution. By Lemma 2.3 in Pateiro-López and Rodríguez-Casal (2012) and Proposition 1, M satisfies the standardness condition established in Definition 4 with ν = P_X, δ = f_0/3 and λ < r.
In order to prove that r n fulfils the conditions in 1 b) we will use Theorem 1.1 in [START_REF] Penrose | A strong law for the largest nearest-neighbour link between random points[END_REF]. First observe that in the full-dimensional case d = d the intrinsic volume in M coincides with the restricted Lebesgue measure; see (Taylor, 2006, Prop. 12.6). As a consequence, f is equal to the density of P X w.r.t. the Lebesgue measure restricted to M. Let us denote f 1 = min x∈∂M f (x), then with probability one,
n r_n^d ω_d / (log(n) β^d) → max{1/f_0, 2(d-1)/(d f_1)} ≥ 1/f_0.
Then, for n large enough,
r_n ≥ ((log(n)/n) · β^d/(2 f_0 ω_d))^{1/d}.
Now, if we denote κ = β^d/(2 f_0 ω_d), it fulfils κ > (δ ω_d)^{-1} (recall that β^d > 6 and δ = f_0/3), so we are in the hypotheses of Theorem 1 (b) and we can conclude that peel(Ŝ_n(r_n)) ≠ ∅ eventually, with probability 1.
ii) Notice that we can use Theorem 1 (a): indeed, as M is a C^2 compact manifold of R^d, by (Thäle, 2008, Prop. 14) it has positive reach and thus it satisfies the outside rolling ball condition (for some radius r > 0). It then remains to be proved that r_n ≤ r for n large enough. Let us endow M with the standard Riemannian structure, where a local metric is defined on every tangent space by restricting to it the standard inner product of R^d. Under our smoothness assumptions, the Riemannian measure induced by such a metric on the manifold M agrees with the d'-dimensional Hausdorff measure on M (this is just a particular case of the Area Formula; see (Federer, 1969, 3.2.46)). So we may use Theorem 5.1 in [START_REF] Penrose | A strong law for the largest nearest-neighbour link between random points[END_REF]. As a consequence of that result,

max_i min_{j≠i} γ(X_i, X_j) = O((log n / n)^{1/d'}),  a.s.,    (9)

where γ denotes the geodesic distance on M associated with the Riemannian structure. Now, since the Euclidean distance is smaller than the geodesic distance, we have ‖X_j - X_i‖ ≤ γ(X_i, X_j) for all i, j. Moreover, if X_{i'} denotes the geodesic nearest neighbour of X_i, then min_{j≠i} γ(X_i, X_j) = γ(X_i, X_{i'}) ≥ ‖X_i - X_{i'}‖ ≥ min_{j≠i} ‖X_i - X_j‖, and finally max_i min_{j≠i} γ(X_i, X_j) ≥ max_i min_{j≠i} ‖X_i - X_j‖. From (9) we conclude that max_i min_{j≠i} ‖X_j - X_i‖ → 0 a.s., which concludes the proof.
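The data-driven radius r_n = β max_i min_{j≠i} ‖X_j - X_i‖ used in Theorem 2 is straightforward to compute; a minimal sketch (β > 6^{1/d} is chosen by the user, as in the statement):

```python
import numpy as np
from scipy.spatial import cKDTree

def data_driven_radius(sample: np.ndarray, beta: float) -> float:
    # k=2: the first neighbour of each point is the point itself (distance 0)
    dist, _ = cKDTree(sample).query(sample, k=2)
    return beta * float(dist[:, 1].max())   # beta * max_i min_{j != i} ||X_j - X_i||
```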
The case of noisy data: the "parallel" model
The following two theorems are meaningful in at least two ways. On the one hand, if we know the amount of noise (R 1 in the notation introduced before), these results can be used to detect whether or not the support M of the original sample is full dimensional (see ( 11) and ( 15)).
On the other hand, in the lower dimensional setting, they give an easy-to-implement way to estimate R 1 (see ( 10) and ( 14)).
Observe that when M̊ = ∅, then R_1 = max_{x∈S} d(x, ∂S). If ∂S_n denotes a consistent estimator of ∂B(M, R_1), a natural plug-in estimator for R_1 is max_{Y_i ∈ Y_n} d(Y_i, ∂S_n).
In Theorem 3 this estimator is constructed taking as ∂S_n the set of centres of the boundary balls, while in Theorem 4 we use the boundary of the r-convex hull. The second theorem is stronger than the first one in several aspects: the parameter choice is easier and the convergence rate is better (and does not depend on the parameter). The price to pay is computational, since the corresponding statistic is much more difficult to implement; see Section 6.
Theorem 3. Let M ⊂ R^d be a compact set such that reach(M) = R_0 > 0. Let Y_n = {Y_1, ..., Y_n} be an iid sample of a distribution P_Y with support S = B(M, R_1), with 0 < R_1 < R_0, absolutely continuous with respect to the Lebesgue measure and whose density f is bounded from below by f_0 > 0. Let ε_n = c(log(n)/n)^{1/d}, with c > (4/(f_0 ω_d))^{1/d}, and let us denote R̂_n = max_{Y_i ∈ Y_n} min_{j ∈ I_bb} ‖Y_i - Y_j‖, where I_bb = {j : B(Y_j, ε_n) is a boundary ball}. Then:
i) if M̊ = ∅, then, with probability one, |R̂_n - R_1| ≤ 2ε_n for n large enough;    (10)
ii) if M̊ ≠ ∅, then there exists C > 0 such that, with probability one, |R̂_n - R_1| > C for n large enough.    (11)
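Before turning to the proof, note that once the boundary-ball indices I_bb are available (for instance via the Voronoi-based criterion discussed in the computational section), R̂_n reduces to a max-min of pairwise distances; a minimal sketch:

```python
import numpy as np
from scipy.spatial import distance_matrix

def r_hat(sample: np.ndarray, boundary_idx) -> float:
    # sample: (n, d) array; boundary_idx: indices j in I_bb (boundary-ball centres)
    d = distance_matrix(sample, sample[boundary_idx])
    return float(d.min(axis=1).max())   # max_i min_{j in I_bb} ||Y_i - Y_j||
```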
Proof. i) Observe, that, since M = ∅, R 1 = max x∈S d(x, ∂S).
From Corollary 4.9 in [START_REF] Federer | Curvature measures[END_REF], reach(S) ≥ R 0 -R 1 > 0 and reach(S c ) ≥ R 1 .
A first consequence of the positive reach of S is that it has a Lebesgue null boundary and thus, with probability one for all i, Y i ∈ S and then, with probability one
Ŝn (ε n ) ⊂ B( S, ε n ). ( 12
)
Since reach(S c ) ≥ R 1 , by Proposition 1 S is standard with respect to P X for any constant δ < f 0 /3 (see Definition 4).
Then according to Proposition 1 and Theorem 4 in [START_REF] Cuevas | On boundary estimation[END_REF] to conclude that for large enough n, with probability one,
S ⊂ Ŝn (ε n ) (13)
Now, for all x ∈ S let us consider z ∈ ∂S a point such that ||x -z|| = d(x, ∂S) and t = z + ε n η where η = η(z) is a normal vector to ∂S at z that points outside S (η can be defined according to Definition 4.4 and Theorem 4.8 (12) in [START_REF] Federer | Curvature measures[END_REF]). Notice that the metric projection of t on S is y thus d(t, S) = ε n so, according to (12), with probability one t / ∈ Ŝn (ε n ). The point z belongs to S so, by ( 13), with probability one for n large enough z ∈ Ŝn (ε n ). We thus conclude [t, z] ∩ ∂ Ŝn (ε n ) = ∅, with probability one, for n large enough. Let then consider y ∈
[t, z]∩∂ Ŝn (ε n ), as y ∈ ∂ Ŝn (ε n ) there exists i ∈ I bb such that y ∈ ∂B(Y i , ε n ) and, as y ∈ [t, z], ||y-z|| ≤ ε n thus x-Y i ≤ x-z + z-y + y-Y i ≤ d(x, ∂S)+2ε n .
To sumurize we just have proved that: for all x ∈ S there exits i ∈ I bb such that x -Y i ≤ d(x, ∂S) + 2ε n thus for all x ∈ S : min i∈I bb x -Y i ≤ d(x, ∂S) + 2ε n .
To conclude max j min i∈I bb Y j -Y i ≤ max j d(Y j , ∂S) + 2ε n ≤ max x∈S (d(x, ∂S) + 2ε n = R 1 + 2ε n (with probability one for n large enough).
The reverse inequality is easier to prove, let us consider x 0 ∈ S such that d(x 0 , ∂S) = R 1 , notice that, by ( 13) (with probability one for n large enough) there exists i 0 such that x 0 -Y i 0 ≤ ε n . By triangular inequality B(Y i 0 , R 1 -ε n ) ⊂ S and by ( 13) we also have
B(Y i 0 , R 1 -ε n ) ⊂ Ŝn (ε n ) thus min i∈I bb { Y i 0 -Y i } ≥ R 1 -2ε n .
Then we have proved max
j min i∈I bb { Y i -Y j } ≥ R 1 -2ε n .
This concludes the proof of (10).
ii) Observe that to prove i) we proved that | Rn -max x∈S d(x, ∂S)| < 2ε n . Then, with probability one, for n large enough,
|R̂_n - R_1| > |c_1 - R_1|/2 = C > 0, where c_1 = max_{x∈S} d(x, ∂S).
Theorem 4. Let M ⊂ R^d be a compact set such that reach(M) = R_0 > 0. Suppose that the sample Y_n = {Y_1, ..., Y_n} has a distribution with support S = B(M, R_1), for some R_1 < R_0, with a density bounded from below by a constant f_0 > 0. Let us denote R̂_n = max_i d(Y_i, ∂C_r(Y_n)), where C_r(Y_n) denotes the r-convex hull of the sample, as defined in (2), for r ≤ min(R_1, R_0 - R_1). Then:
i) if M̊ = ∅ and, for some d' < d, M has a finite, strictly positive d'-dimensional Minkowski content, then, with probability one,
|R̂_n - R_1| = O((log(n)/n)^{min(1/(d-d'), 2/(d+1))});    (14)
ii) if M̊ ≠ ∅, then there exists C > 0 such that, with probability one, |R̂_n - R_1| > C for n large enough.    (15)
Proof. Again, as shown in the proof of Theorem 3, reach
(B(M, R 1 )) ≥ reach(M) - R 1 = R 0 -R 1 ; also reach(B(M, R 1 ) c ) ≥ R 1 .
Hence, according to Proposition 1 in [START_REF] Cuevas | On boundary estimation[END_REF], B(M, R 1 ) and B(M, R 1 ) c are both r-convex for r = min(R 1 , R 0 -R 1 ) > 0. Note, in addition, that by construction of S = B(M, R 1 ) we have that Si = ∅ for every path-connected component S i ⊂ S. So, we can use Theorem 3 in Rodríguez-Casal (2007) to conclude
d H ∂C r (Y n ), ∂S = O (log(n)/n) 2/(d+1) , a.s. ( 16
)
Let us prove that, with probability one, for n large enough,
B M, R 1 -d H (∂C r (Y n ), ∂S) ⊂ C r (Y n ). ( 17
)
Proceeding by contradiction, let
x ∈ B M, R 1 -d H (∂C r (Y n ), ∂S) such that x / ∈ C r (Y n
), let y be the projection of x onto M. It is easy to see that, for n large enough, with probability one, M ⊂ C r (Y n ) then y ∈ C r (Y n ). Observe that, by Corollary 4.9 in Federer (1959)
B ∂S, d H (∂C r (Y n ), ∂S) = B M, R 1 +d H (∂C r (Y n ), ∂S) \B M, R 1 -d H (∂C r (Y n ), ∂S) ,
then, the segment there exists z ∈ ∂C r (Y n )∩(x, y), (x, y) being the open segment joining x and y, but then by ( 23), d(z, ∂S) > d H (∂C r (Y n ), ∂S) which is a contradiction, that concludes the proof of (17).
First we prove i). Suppose now that M = ∅.
Then R 1 = max x∈S d(x, ∂S) = max x∈M d(x, ∂S) = d H (M, ∂S). Also, as C r (Y n ) ⊂ S thus Rn ≤ R 1 . ( 18
)
Now for all observation Y i let m i denotes its projection on M by ( 17) we have
d(m i , ∂C r (Y n )) ≥ R 1 -d H (∂C r (Y n ), ∂S) so that, with triangular inequality d(m i , Y i ) + d(Y i , ∂C r (Y n )) ≥ R 1 -d H (∂C r (Y n ), ∂S). Thus Rn ≥ R 1 -d H (∂C r (Y n ), ∂S) -min i d(Y i , M) (19)
From the assumption of finiteness of the Minkowski content of M, given a constant A > 0 there exists a constant c M such that for n large enough,
µ d B(M, (A log(n)/n) 1/(d-d ) ) ≥ c M A(log(n)/n).
Thus,
P X ∀i, d(Y i , M) ≥ (A log(n)/n) 1/(d-d ) ≤ 1 -f 0 c M A(log(n)/n) n ≤ n -f 0 c M A If we take A > 1/(f 0 c M ) we obtain, from Borel-Cantelli lemma, min i d(Y i , M) = O (log(n)/n) 1/(d-d ) , a.s. (20)
Finally, ( 14) is a direct consequence of ( 16), ( 18), ( 19) and ( 20). The proof of ii) is obtained as in Theorem 3 part ii)
Remark 2. The assumption imposed on M in part (i) can be seen as an statement of d -dimensionality. For example if we assume that M is rectifiable then, from Theorem 3.2.39 in [START_REF] Federer | Geometric measure theory Springer[END_REF], the d -dimensional Hausdorff measure of M, H d (M) coincides with the corresponding Minkowski content. Hence 0 < H d (M) < ∞ and, according to expression (4), this entails dim H (M) = d .
An index of closeness to lower dimensionality
According to Theorem 3, in the case R_1 = 0 the value 2R̂_n/diam(M) (where diam(M) = max_{i≠j} ‖X_i - X_j‖) can be seen as an index of departure from low-dimensionality. Observe that if M is a ball we get 2R̂_n/diam(M) → 1 a.s., while if M has an empty interior, 2R̂_n/diam(M) → 0 a.s.
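A small helper computing this empirical index (diam is taken over the sample, as in the text; r_hat_value is assumed to come from one of the estimators of Theorems 3 or 4):

```python
import numpy as np
from scipy.spatial.distance import pdist

def lower_dim_index(sample: np.ndarray, r_hat_value: float) -> float:
    diam = pdist(sample).max()          # max_{i != j} ||X_i - X_j||
    return 2.0 * r_hat_value / float(diam)
```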
4 An algorithm to partially de-noise the sample
Let M ⊂ R^d be a compact set with reach(M) = R_0 > 0. Let Y_n = {Y_1, ..., Y_n} be an iid sample of Y, with support S = B(M, R_1) for some 0 < R_1 < R_0, and distribution P_Y, absolutely continuous with respect to the Lebesgue measure, whose density f is bounded from below. We now propose an algorithm to obtain from Y_n a "partially de-noised" sample of points that allows us to estimate the target set M.
The procedure works as follows:
1. Take suitable auxiliary estimators for S and R 1 . Let Ŝn be an estimator of S (based on Y n ) such that d H (∂ Ŝn , ∂S) < a n eventually a.s., for some a n → 0. Let Rn be an estimator of R 1 such that | Rn -R 1 | ≤ e n eventually a.s. for some e n → 0.
2. Select a λ-subsample far from the estimated boundary of S. Take λ ∈ (0, 1) and define Y_m^λ = {Y_1^λ, ..., Y_m^λ} ⊂ Y_n, where Y_i^λ ∈ Y_m^λ if and only if d(Y_i^λ, ∂Ŝ_n) > λ R̂_n.
3. The projection + translation stage. For every Y_i^λ ∈ Y_m^λ we define Z_m = {Z_1, ..., Z_m} as follows:

Z_i = π_{∂Ŝ_n}(Y_i^λ) + R̂_n (Y_i^λ - π_{∂Ŝ_n}(Y_i^λ)) / ‖Y_i^λ - π_{∂Ŝ_n}(Y_i^λ)‖,    (21)

where π_{∂Ŝ_n}(Y_i^λ) denotes the metric projection of Y_i^λ on ∂Ŝ_n. (A code sketch of this projection step is given right after the list.)
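The sketch below implements steps 2 and 3 using a finite set of points as a stand-in for the estimated boundary ∂Ŝ_n (e.g. the boundary-ball centres, as in Corollary 1 below), so that the metric projection becomes a nearest-neighbour query. The estimate r_hat (of R_1) and the choice of lam are inputs.

```python
import numpy as np
from scipy.spatial import cKDTree

def denoise(sample: np.ndarray, boundary_pts: np.ndarray,
            r_hat: float, lam: float = 0.5) -> np.ndarray:
    tree = cKDTree(boundary_pts)
    dist, idx = tree.query(sample)      # distance to / index of the nearest boundary point
    keep = dist > lam * r_hat           # step 2: keep the lambda-subsample
    y, proj = sample[keep], boundary_pts[idx[keep]]
    u = (y - proj) / np.linalg.norm(y - proj, axis=1, keepdims=True)
    return proj + r_hat * u             # step 3: the Z_i of equation (21)
```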
The following result shows that the above de-noising procedure allows us to recover the "inner set" M.
Theorem 5. With the notation introduced before, there exists b_n = O(max(a_n^{1/3}, e_n, ε_n)) such that, with probability one, for n large enough, d_H(Z_m, M) ≤ b_n, where ε_n = c(log(n)/n)^{1/d} with c > (4/ω_d)^{1/d}.
Proof. First let us observe that d_H(Y_n, S) ≤ ε_n eventually a.s. Let us fix Y_i^λ ∈ Y_m^λ. Let us denote l = ‖Y_i^λ - π_{∂S}(Y_i^λ)‖ and η_i = (Y_i^λ - π_{∂S}(Y_i^λ))/l, and let us introduce the two estimators l̂ = ‖Y_i^λ - π_{∂Ŝ_n}(Y_i^λ)‖ and η̂_i = (Y_i^λ - π_{∂Ŝ_n}(Y_i^λ))/l̂. With this notation, Z_i = π_{∂Ŝ_n}(Y_i^λ) + R̂_n η̂_i.
Recall that since reach(M) > R 1 we have (by Corollary 4.9 in [START_REF] Federer | Curvature measures[END_REF]
) that π M (Y λ i ) = π ∂S (Y λ i ) + R 1 η i , For all Y λ i there exists a point x ∈ ∂ Ŝn with ||x -π ∂S (Y λ i )|| ≤ a n so that, by triangular inequality: d(Y λ i , ∂ Ŝn ) ≤ l + a n that is, π ∂ Ŝn (Y λ i ) ∈ B(Y λ i , l + a n ) (22) Now let us prove that π ∂ Ŝn (Y λ i ) ∈ B(Y λ i , l -a n ) c (23) Suppose by contradiction that π ∂ Ŝn (Y λ i ) ∈ B(Y λ i , l -a n ), since d H (∂S n , ∂S) < a n there exists t ∈ ∂S such that t-π ∂ Ŝn (Y λ i ) < a n , but then l = d(Y λ i , ∂S) ≤ Y λ i -π ∂ Ŝn (Y λ i ) + π ∂ Ŝn (Y λ i ) -t < l.
That concludes the proof of (23). By ( 22) and ( 23) we have:
l -a n ≤ l ≤ l + a n ( 24
)
In the same way it can be proved that
π ∂ Ŝn (Y λ i ) ∈ B(π M (Y λ i ), R 1 -a n ) c . ( 25
)
Let us prove that there exists
C 0 > 0 such that for all Y λ i ∈ Y λ m , Z i -π M (Y λ i ) ≤ C 0 a 2/3 n + a 1/3 n e n + e 2 n ( 26
) First consider the case 0 ≤ R 1 -l ≤ a 1/3 n , which implies that Y λ i -π M (Y λ i ) ≤ a 1/3 n . Notice that, by (24), Y λ i -Z i = | Rn -l| ≤ a 1/3
n + e n + a n , finally we get
Z i -π M (Y λ i ) ≤ 2a 1/3 n + a n + e n . ( 27
)
Now we consider the case R 1 -l ≥ a 1/3 n , recall that by ( 22) and ( 25) we have.
π ∂ Ŝn (Y λ i ) ∈ B(Y λ i , l + a n ) \ B(π M (Y λ 1 ), R 1 -a n ). ( 28
)
In Figure 1 it is represented the case for which π ∂ Ŝn (Y λ i )-π ∂S (Y λ i ) takes its largest possible value.
To find an upper bound for such value, let us first note that
π ∂ Ŝn (Y λ i )+R 1 ηi , π ∂S (Y λ i ), π M (Y λ i ), Y λ i and π ∂ Ŝn (Y λ i
) are in the same plane Π. Let us now apply a translation T in order to get, T (π ∂S (Y λ 1 )) = 0. Let us consider in Π a coordinate system (x, y) such that π M (Y λ i ) = (0, -R 1 ). Let (x 1 , y 1 ) be the coordinates of the point π ∂ Ŝn (Y λ i ). From (28) we get
x 2 1 + (y 1 + l) 2 ≤ (l + a n ) 2 (29)
x
2 1 + (y 1 + R 1 ) 2 ≥ (R 1 -a n ) 2 (30)
If we multiply (30) by -l, we get -l(x
2 1 + y 2 1 ) -2y 1 lR 1 ≤ -la 2 n + 2a n lR 1 and if we multiply (29) by R 1 we get R 1 (x 2 1 + y 2 1 ) + 2y 1 lR 1 ≤ 2R 1 a n l + a 2 n R 1 .
Then, if sum this two equations we get,
x 2 1 + y 2 1 ≤ 4lR 1 R 1 -l a n + a 2 n ≤ 4R 2 1 a 2/3 n + a 2 n (31) Figure 1: In solid black line B(Y λ i , l) and B(π M (Y λ 1 ), R 1 -a n ), in dashed line B(Y λ i , l) Notice that Z i ∈ Π, let us denote (x, y) the coordinates of Z i in Π, then x = x 1 -Rn x 1 Y λ i -π ∂ Ŝn (Y λ i ) = x 1 -Rn x 1 l and y = y 1 -Rn l + y 1 Y λ i -π ∂ Ŝn (Y λ i ) = y 1 -Rn l + y 1 l .
Since the coordinates of π M (Y λ i ) are (0, -R 1 ) we get that
Z i -π M (Y λ i ) 2 = (x, y + R 1 ) 2 = (x 2 1 + y 2 1 ) l -Rn l 2 + R 1 -Rn l l 2 + 2y 1 l -Rn l R 1 -Rn l l . (32)
Observe that |l-l| ≤ a n and |R 1 -Rn | < e n . For n large enough we can bound
| l-R1 l | ≤ 2 and l ≥ λR 1 /2, then R 1 -Rn l l = R 1 ( l -l) -l( Rn -R 1 ) l ≤ 2 a n + e n λ . (33)
Finally by equations ( 31),( 32) and (33
), if R 1 -l ≥ a 1/3 n , there exists C 0 such that Z i -π M (Y λ i ) ≤ C 0 a 2/3 n + a 1/3 n e n + e 2 n ( 34
)
That concludes the proof of (26).
To conclude the proof of the theorem let us prove that M ⊂ B(Z m , a n + e n + 2ε n ) eventually, a.s. Recall that d H (Y n , S) ≤ ε n eventually, a.s. thus for all x ∈ M, there exists
Y i ∈ Y n such that x -Y i ≤ ε n . For n large enough we have Y i ∈ Y λ m .
Following the same ideas used to prove ( 27) we obtain Z i -Y i ≤ ε n + a n + e n . By triangular inequality we get M ⊂ B(Z m , a n + e n + 2ε n ) eventually, a.s.
Combining ( 27), ( 34) and ( 35) we obtain:
d H (Z m , M) = O max(a 1/3 n , e n , a 2/3 n + a 1/3 n e n + e 2 n , ε n ) = O max(a 1/3 n , e n , ε n ) .
The two following corollaries give the exact convergence rate for the denoising process introduced before, using the centres of the boundary balls (Corollary 1), and the boundary of the r-convex hull (Corollary 2), as estimators of the boundary of the support.
Corollary 1. Let M ⊂ R d be a compact set such that reach(M) = R 0 > 0. Let Y n = {Y 1 , . . . , Y n } be an iid sample of a distribution P Y with support B(M, R 1 ) for some 0 < R 1 < R 0 .
Assume that P Y is absolutely continuous with respect to the Lebesgue measure and the density f , is bounded from below by a constant
f 0 > 0. Let ε n = c(log(n)/n) 1/d and c > (4/(f 0 ω d )) 1/d .
Given λ ∈ (0, 1), let Z n be the points obtained after the denoising process using Rn to estimate R 1 and {Y i , i ∈ I bb } as an estimator of ∂S where
I bb = {j : B(Y j , ε n ) is a boundary ball}. Then, d H (Z m , M) = O (log(n)/n) 1/(3d) , a.s.
Using the assumption of r-convexity for M (see Definitions 2 and 3 and the subsequent comments) in the construction of the set estimator, we can replace Rn with Rn (see Theorem 4). Then, at the cost of some additional complexity in the numerical implementation, a faster convergence rate can be obtained. This is made explicit in the following result.
Corollary 2. Let M ⊂ R d be a compact d -dimensional set (in the sense of Theorem 4, i)) such that reach(M) = R 0 > 0. Let Y n = {Y 1 , . . . , Y n } be an iid sample of a distribution P Y with support B(M, R 1 ) for some 0 < R 1 < R 0 .
Assume that P Y is absolutely continuous with respect to the Lebesgue measure and the density f , is bounded from below by a constant f 0 > 0.
For a given λ ∈ (0, 1), let Z n be the set of the points obtained after the denoising process, based on the estimator ∂C r (Y n ) of ∂S (for some r with 0 < r < min(R 0 -R 1 , R 1 )) and the estimator Rn of R 1 .
Then,
d H (Z m , M) = O (log(n)/n) 2/(3(d+1)) .
5 Estimation of lower-dimensional measures
Noiseless model
In this section, we go back to the noiseless model, that is, we assume that the sample points X_1, ..., X_n are drawn according to a distribution whose support is M. The target is to estimate the d'-dimensional Minkowski content of M, as given by

lim_{ε→0} µ_d(B(M, ε)) / (ω_{d-d'} ε^{d-d'}) = L_0(M) < ∞.    (36)
This is just (alongside with Hausdorff measure, among others) one of the possible ways to measure lower-dimensional sets; see [START_REF] Mattila | Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability[END_REF] for background.
In recent years, the problem of estimating the d -dimensional measures of a compact set from a random sample has received some attention in the literature. The simplest situation corresponds to the full-dimensional case d = d. Any estimator M n of M consistent with respect to the distance in measure, that is µ d (M n ∆M) → 0 (in prob. or a.s., where ∆ stands for the symmetric difference), will provide a consistent estimator for µ d (M). In fact, as a consequence of Th. 1 in [START_REF] Devroye | Detection of abnormal behaviour via nonparametric estimation of the support[END_REF] (recall that S is compact here) this will the always the case (in probability) when M n is the offset estimator (6), provided that µ d is absolutely continuous (on M) with respect to P X together with r n → 0 and nr d n → ∞. Other more specific estimators of µ d (M) can be obtained by imposing some shape assumptions on M, such as convexity or r-convexity, which are incorporated to the estimator M n ; see [START_REF] Arias-Castro | Minimax estimation of the volume of a set with smooth boundary[END_REF], [START_REF] Baldin | Unbiased estimation of the volume of a convex body[END_REF], [START_REF] Pardon | Central limit theorems for random polygons in an arbitrary convex set[END_REF].
Regarding the estimation of lower-dimensional measures, with d < d, the available literature mostly concerns the problem of estimating L 0 (M), M being the boundary of some compact support S. The sample model is also a bit different, as it is assumed that we have sample points inside and outside S. Here, typically, d = d -1; see, [START_REF] Armendáriz | Nonparametric estimation of boundary measures and related functionals: asymptotic results[END_REF], Cuevas et al. (2007), [START_REF] Cuevas | Towards a universally consistent estimator of the Minkowski content[END_REF], [START_REF] Jiménez | Nonparametric estimation of surface integrals[END_REF].
Again, in the case M = ∂S with d = 2, under the extra assumption of r-convexity for S, the consistency of the plug-in estimator L 0 (∂C r (X n )) of L 0 (∂S) is proved in [START_REF] Cuevas | On statistical properties of sets fulfilling rolling-type conditions[END_REF] under the usual inside model (points taken on S). Finally, in [START_REF] Berrendero | A geometrically motivated parametric model in manifold estimation[END_REF], assuming an outside model (points drawn in B(S, R) \ S), estimators of µ d (S) and L 0 (∂S) are proposed, under the condition of polynomial volume for S From the perspective of the above references, our contribution here (Th. 6 below) could be seen as a sort of lower-dimensional extension of the mentioned results of type µ d (M n ) → µ d (M) regarding volume estimation. But, obviously, in this case the Lebesgue measure µ d must be replaced with a lower-dimensional counterpart, such as the Minkowski content (36). We will also need the following lower-dimensional version of the standardness property given in Definition 3. Definition 7. A Borel probability measure P X defined on a d -dimensional set M ⊂ R d (considered with the topology induced by R d ) is said to be standard with respect to the d -dimensional Lebesgue measure µ d if there exist λ and δ such that, for all x ∈ M, P X (B(x, r)) = P(X ∈ B(x, r) ∩ M) ≥ δµ d (B(x, r)), for 0 ≤ r ≤ λ.
Remark 3. Observe that, by Lemma 5.3 in [START_REF] Niyogi | Finding the Homology of Submanifolds with High Confidence from Random Samples[END_REF], this condition is fulfilled if P_X has a density f bounded from below and M is a manifold with positive condition number (also known as positive reach). Standardness of the distribution has also been used in [START_REF] Chazal | Optimal rates of convergence for persistence diagrams in Topological Data Analysis[END_REF], [START_REF] Aamari | Stability and minimax optimality of tangential Delaunay complexes for manifold reconstruction[END_REF].
Theorem 6. Let X_n = {X_1, ..., X_n} be an iid sample drawn according to a distribution P_X on a set M ⊂ R^d. Let us assume that the distribution P_X is standard with respect to the d'-dimensional Lebesgue measure (see Definition 7) and that the d'-dimensional Minkowski content L_0(M) of M, given by (36), exists. Let us take r_n such that r_n → 0 and (log(n)/n)^{1/d'} = o(r_n). Then:
(a) lim_{n→∞} µ_d(B(X_n, r_n)) / (ω_{d-d'} r_n^{d-d'}) = L_0(M) a.s.;    (37)
(b) if reach(M) = R_0 > 0, then
| µ_d(B(X_n, r_n)) / (ω_{d-d'} r_n^{d-d'}) - L_0(M) | = O( β_n/r_n + r_n ),
where β_n := d_H(X_n, M) = O( (log(n)/n)^{1/d'} ).
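In practice the estimator in (37) can be approximated by Monte Carlo integration of the volume of the union of balls; the following sketch does exactly that (the number of Monte Carlo points and the bounding box are implementation choices, not part of the paper):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def unit_ball_volume(k: int) -> float:
    return np.pi ** (k / 2) / gamma(k / 2 + 1)    # omega_k

def minkowski_content_estimate(sample: np.ndarray, r_n: float, d_prime: int,
                               n_mc: int = 200_000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    d = sample.shape[1]
    lo, hi = sample.min(0) - r_n, sample.max(0) + r_n
    box_vol = np.prod(hi - lo)
    u = lo + (hi - lo) * rng.random((n_mc, d))
    dist, _ = cKDTree(sample).query(u)
    vol_union = box_vol * np.mean(dist <= r_n)    # Monte Carlo estimate of mu_d(B(X_n, r_n))
    return vol_union / (unit_ball_volume(d - d_prime) * r_n ** (d - d_prime))
```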
Proof. (a) First we will see that, following the same ideas as in Theorem 3 in [START_REF] Cuevas | On boundary estimation[END_REF] it can be readily proved that
d H (X n , M) = O log(n)/n 1/d . (38)
In order to see (38), let us consider M ∆ a minimal covering of M, with balls of radius ∆ centred in N ∆ points belonging to M. Let us prove that
N ∆ = O(∆ -d ). Indeed, since M ∆ is a minimal covering it is clear that µ d (B(M, ∆)) ≥ N ∆ ω d (∆/2) d , and then µ d (B(M, ∆)) ω d-d ∆ d-d ≥ N ∆ ω d (∆/2) d ω d-d ∆ d-d = c 1 N ∆ ∆ d , being c 1 a positive constant. Since there exists L 0 (M) it follows that N ∆ = O(∆ -d ).
Then the proof of (38) follows easily from the standardness of P X and N ∆ = O(∆ -d ), so we will omit it. Now, in order to prove (37), let us first prove that, if we take
α n = 1 -β n /r n , B(M, α n r n ) ⊂ B(X n , r n ) ⊂ B(M, r n ) a.s.. (39) d(z, X n ) = r.
Reasoning by contradiction suppose that z ∈ Sn then, with probability one, there exists j 0 such that z ∈ B(X j 0 , r) and so z -X j 0 < r that is a contradiction. Now to prove the converse implication let us assume that B(X i , r) is a boundary ball, then there exists z ∈ ∂B(X i , r) such that z ∈ ∂ Ŝn (r). Let us prove that d(z, X n \ X i ) ≥ r (from where it follows that z ∈ Vor(X i ) ). Suppose that d(z, X n \ X i ) < r, then there exists X j = X i such that d(z, X j ) < r and then B z, r -d(z, X j ) ⊂ Sn (r).
6.2 An algorithm to detect empty interior in the noiseless case using Theorem 1
In order to use Theorem 1 in practice to detect lower-dimensionality in the noiseless case, we need to fix a sequence r_n ↓ 0 under the conditions indicated in Theorem 1 (a) and (b). Note that this requires assuming lower bounds for the "thickness" constant ρ(M) = sup_{x∈M} d(x, ∂M) and the standardness constant δ (which quantifies the sharpness order of M), as well as an upper bound for the radius of the outer rolling ball. Now, according to Theorem 1 and Proposition 2, we will use the following algorithm (a code sketch is given right after the steps).
1) For i = 1, ..., n, let V_i = {V_1^i, ..., V_{k_i}^i} be the vertices of Vor(X_i).
2) Let r_i = sup{‖z - X_i‖ : z ∈ Vor(X_i)} = max{‖X_i - V_k^i‖ : 1 ≤ k ≤ k_i}, since Vor(X_i) is a convex polyhedron. Define r_0 = min_i r_i.
3) Decide M̊ = ∅ if and only if r_0 ≥ r_n.
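A sketch of this algorithm using scipy's Voronoi diagram (an unbounded Voronoi cell is treated as having r_i = ∞, so it always counts as a boundary ball, and the threshold r_n is supplied by the user as discussed above):

```python
import numpy as np
from scipy.spatial import Voronoi

def has_empty_interior(sample: np.ndarray, r_n: float) -> bool:
    vor = Voronoi(sample)
    r = np.full(len(sample), np.inf)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue                                # unbounded cell: r_i stays infinite
        verts = vor.vertices[region]
        r[i] = np.linalg.norm(verts - sample[i], axis=1).max()
    return bool(r.min() >= r_n)                     # decide empty interior iff r_0 >= r_n
```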
On the estimation of the maximum distance to the boundary
Theorems 3 and 4 involve the calculation of quantities such as d(x, ∂Ŝ_n(ε_n)) and d(x, ∂C_r(Y_n)), where Ŝ_n(ε_n) is a Devroye-Wise estimator of type (6) and C_r(Y_n) is the r-convex hull (2) of Y_n. It is somewhat surprising to note that, in spite of the much simpler structure of Ŝ_n(ε_n) when compared to C_r(Y_n), the distance to the boundary d(x, ∂C_r(Y_n)) can be calculated in a simpler, more accurate way than the analogous quantity d(x, ∂Ŝ_n(ε_n)) for the Devroye-Wise estimator.
Indeed, note that d(x, ∂C_r(Y_n)) is relatively simple to calculate; this is done in [START_REF] Berrendero | A multivariate uniformity test for the case of unknown support[END_REF] in the two-dimensional case, although it can in fact be used in any dimension. Observe first that ∂C_r(Y_n) is included in a finite union of spheres of radius r, with centres in Z = {z_1, ..., z_m}. Then d(x, ∂C_r(Y_n)) = min_{z_i ∈ Z} ‖x - z_i‖ - r. In order to find Z we need to compute the Delaunay triangulation. Recall that the Delaunay triangulation Del(Y_n) is defined as follows: for τ ⊂ Y_n, τ ∈ Del(Y_n) if and only if ⋂_{Y_i ∈ τ} Vor(Y_i) ≠ ∅. Observe finally that, for any dimension, ⋂_{Y_i ∈ τ} Vor(Y_i) is a segment or a half line. If τ_i is the d-dimensional simplex with vertices {Y_{i_1}, ..., Y_{i_d}} ⊂ ∂B(z_i, r), the point z_i can be obtained as ⋂_{Y_{i_j} ∈ τ_i} Vor(Y_{i_j}) ∩ ∂B(Y_{i_1}, r).
Experiments
The general aim of these experiments is not to make an extensive, systematic empirical study. We are just trying to show that the methods and algorithm proposed here can be implemented in practice.
Detection of full dimensionality
As a simple numerical illustration we consider here the noisy model of Subsection 3.2. In each case, we draw 200 samples of sizes n = 50, 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000 on the A-parallel set around the unit sphere; that is, the sample data are selected on B(0, 1 + A) \ B(0, 1 - A). The width parameter A takes the values A = 0, 0.01, 0.05, 0.1, ..., 0.5. Table 1 provides the minimum sample sizes to "safely decide" the correct answer. This means to correctly decide, on at least 190 out of the 200 considered samples, that the support is lower dimensional (in the case A = 0) or that it is full dimensional (cases with A > 0).
We have used the boundary balls procedure (here and in the denoising experiment below for A = 0) with r = 2 max_i (min_{j≠i} ‖X_j - X_i‖).
The results look quite reasonable: the larger the dimension d and the smaller the width parameter A, the harder the detection problem.
Denoising
We draw points on B(0, 1.3) \ B(0, 0.7) in R^2 and R^3. In order to evaluate the effectiveness of the denoising procedure we define the random variable e = ‖Y‖ - 1, computed both from the denoised data Y and from the original data. Note that "perfect" denoising would correspond to e = 0. Figure 2 shows the kernel estimators of both densities of e for the case d = 2 (left panel) and for d = 3 (right panel). The estimators for the denoised case are based on m = 100 values of e extracted from samples of sizes n = 100, 1000, 10000. The density estimators for the initial distribution are based on samples of size 100. Clearly, when the denoised sample of size m = 100 is based on a very large sample, with n = 10000, the denoising process performs better, as suggested by the fact that the corresponding density estimators are strongly concentrated around 0. The slight asymmetry in the three-dimensional case accounts for the fact that the "external" volume B(0, 1.3) \ B(0, 1) is larger than the "internal" one B(0, 1) \ B(0, 0.7).
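A small helper reproducing this setup (uniform points on the annulus via rejection sampling, and the error variable e); radii, sizes and dimensions are those quoted in the text:

```python
import numpy as np

def sample_annulus(n: int, d: int, r_in: float = 0.7, r_out: float = 1.3,
                   seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    pts = np.empty((0, d))
    while len(pts) < n:
        x = rng.uniform(-r_out, r_out, size=(2 * n, d))
        nrm = np.linalg.norm(x, axis=1)
        pts = np.vstack([pts, x[(nrm >= r_in) & (nrm <= r_out)]])
    return pts[:n]

y = sample_annulus(1000, 2)
e = np.linalg.norm(y, axis=1) - 1.0   # should concentrate near 0 after denoising
```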
Figure 2: Estimated density functions of the random variable e = ‖Y‖ - 1 for d = 2 (left) and d = 3 (right), with and without denoising.
Figures 3 and 4 provide a more visual idea of the result of the denoising algorithm. They correspond, respectively, to the set B(S_{L_3}, 0.3) (where S_{L_3} = {(x, y) : |x|^3 + |y|^3 = 1}) and to B(T, 0.3), where T is the so-called Trefoil Knot, a well-known curve with interesting topological and geometric properties.
Figure 3: The yellow background is made of 5000 points (left) and 50000 points (right) drawn on B(S_{L_3}, 0.3), with S_{L_3} = {(x, y) : |x|^3 + |y|^3 = 1}. The blue points are the result of the denoising process. The black line corresponds to the original set S_{L_3}.
Table 1: Minimum sample sizes required to detect lower dimensionality for different values of the dimension d and the width parameter A.

A    | d = 2     | d = 3        | d = 4
0    | ≤ 50      | ≤ 50         | ≤ 50
0.01 | [51, 100] | [1001, 2000] | > 10000
0.05 | ≤ 50      | [201, 300]   | [1001, 2000]
0.1  | ≤ 50      | [51, 100]    | [101, 200]
0.2  | ≤ 50      | ≤ 50         | [51, 100]
0.3  | ≤ 50      | ≤ 50         | [51, 100]
0.4  | ≤ 50      | ≤ 50         | ≤ 50
0.5  | ≤ 50      | ≤ 50         | ≤ 50
Table 2: Relative error (in %) for the Minkowski content estimation, for each dimension d, noise level R_1 and the four sample sizes n considered.

d | R_1 | relative errors (%)
2 | 0   | 0.50  0.50  0.76  0.87
2 | 0.1 | 8.51  8.63  7.61  7.47
2 | 0.3 | 11.97 12.12 12.24 12.84
3 | 0   | 0.57  0.44  0.57  0.52
3 | 0.1 | 4.75  6.44  12.76 13.34
3 | 0.3 | 7.29  13.11 16.24 16.11
4 | 0   | 1.92  2.85  3.47  3.46
4 | 0.1 | 52.70 27.22 3.34  11.65
4 | 0.3 | 34.29 18.89 15.92 22.27
Acknowledgements
This research has been partially supported by MATH-AmSud grant 16-MATH-05 SM-HCD-HDD (C. Aaron and A. Cholaquidis) and Spanish grant MTM2013-44045-P (A. Cuevas). We are grateful to Luis Guijarro and Jesús Gonzalo (Dept. Mathematics, UAM, Madrid) for useful conversations and advice.
To prove this, consider x n ∈ B(M, α n r n ), then there exists t n ∈ M such that x n ∈ B(t n , α n r n ). Since β n = d H (X n , M) there exists y n ∈ B(t n , β n ), y n ∈ X n . It is enough to prove that x n ∈ B(y n , r n ). But this follows from the fact that, eventually a.s.,
Then, from (39)
Since there exists L 0 (M), the right hand side of (40) goes to zero. To prove that the left hand side of (40) goes to zero, let us observe that, as α n = 1 -β n /r n , and
for some constant A(M). Now the proof follows from (40) and (41).
Remark 4. In the case of sets with positive reach, part (b) suggests to take r n = max i min j =i X i -X j since we know by Theorem 1 in [START_REF] Penrose | A strong law for the largest nearest-neighbour link between random points[END_REF] that r 2 n = O (log(n)/n) 1/d ) that gives the optimal convergence rate.
Noisy Model
The estimation of the Minkowski content in the noisy model has been tackled in [START_REF] Berrendero | A geometrically motivated parametric model in manifold estimation[END_REF], where the random sample is assumed to have a uniform distribution on the parallel set U. In this section we will see that, even if the sample is not uniformly distributed on B(M, R_1) for some 0 < R_1 < R_0 = reach(M), it is still possible, by first applying the de-noising algorithm introduced in Section 4, to estimate L_0(M). Following the notation of Section 4, let Y_n be an iid sample of a random variable Y with support B(M, R_1), and let Z_m denote the de-noised sample defined by (21). The estimator is defined as in (37) but replacing X_n with Z_m. Although the subset Z_m is not an iid sample (since the random variables Z_i are not independent), the consistency is based on the fact that Z_m converges in Hausdorff distance to M, as we prove in the following theorem.
Theorem 7. With the hypothesis and notation of Theorem 5, if max(a
Proof. The proof is analogous to the one in Theorem 6. Observe that in Theorem 5 we proved that d_H(Z_m, M) ≤ b_n for some b_n = O(max(a_n, e_n, ε_n)); then b_n/r_n → 0. As in Theorem 6, if we take α_n = 1 − b_n/r_n, then, with probability one,
then we get
from which it follows
Since α_n → 1 and b_n/r_n → 0, we get (42).
Computational aspects and simulations
We discuss here some theoretical and practical aspects regarding the implementation of the algorithms. We also present some simulations and numerical examples.
Identifying the boundary balls
The cornerstone of the practical use of Theorem 1 is the effective identification of the boundary balls. The following proposition provides the basis for such identification, in terms of the Voronoi cells of the sample points. Recall that, given a finite set {x_1, . . . , x_n}, the Voronoi cell associated with the point x_i is defined by Vor(x_i) = {x : d(x, x_i) ≤ d(x, x_j) for all j ≠ i}.
Proposition 2. Let X_n = {X_1, . . . , X_n} be an iid sample of points in R^d, drawn according to a distribution P_X absolutely continuous with respect to the Lebesgue measure. Then, with probability one, for all i = 1, . . . , n and all r > 0, sup{‖z − X_i‖ : z ∈ Vor(X_i)} ≥ r if and only if B(X_i, r) is a boundary ball for the Devroye-Wise estimator (6).
Proof. Let us take r > 0 and X_i such that ∂B(X_i, r) ∩ Vor(X_i) ≠ ∅, take z in this intersection, and let us prove that z ∈ ∂Ŝ_n(r). Observe that, since z ∈ Vor(X_i), d(z, X_n \ X_i) ≥ r; thus
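In practice, the criterion of Proposition 2 maps directly onto standard computational-geometry routines. The sketch below is our own illustration (not code from the paper) and uses SciPy's Voronoi routine: it flags B(X_i, r) as a boundary ball when the Voronoi cell of X_i is unbounded or contains a vertex at distance at least r from X_i, which matches the condition sup{‖z − X_i‖ : z ∈ Vor(X_i)} ≥ r because the distance to X_i attains its supremum over a bounded convex cell at one of the cell's vertices.

```python
import numpy as np
from scipy.spatial import Voronoi

def boundary_ball_indices(X, r):
    """Indices i such that B(X[i], r) is a boundary ball of the Devroye-Wise estimator."""
    vor = Voronoi(X)
    out = []
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region:                       # unbounded Voronoi cell: the supremum is infinite
            out.append(i)
            continue
        verts = vor.vertices[region]           # vertices of the bounded, convex cell
        if np.linalg.norm(verts - X[i], axis=1).max() >= r:
            out.append(i)
        # otherwise every point of the cell is closer than r, so B(X[i], r) is not a boundary ball
    return out

# Illustrative usage: noisy points around the unit circle in the plane.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((500, 2))
print(len(boundary_ball_indices(X, r=0.2)))
```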
Minkowski contents estimation
Finally, in Table 2 we show some results on the Minkowski contents estimation, again in the case of noisy points drawn around a sphere (with R_1 = 0, 0.1, 0.3), for different values of n and different dimensions. For every R_1, n, d we estimate the Minkowski contents (via Monte Carlo with 10^5 points) and simulate 100 random samples. The entries of Table 2 provide the average relative error (in percentage) in the estimation of the boundary Minkowski contents L. That is, the entries are 100 · err(R_1, d), where
For the Minkowski contents estimation we used a radius ρ = √r/2, with r = 2 max_i (min_{j≠i} ‖X_j − X_i‖), when R_n(X_n) = 0, where R_n(X_n) = max_i d(X_i, ∂C_r(X_n)). In the case R_n(X_n) > 0 we used ρ = r + R_n(Y_n)/2, where Y_n stands for the denoised sample and R_n(Y_n) = max_j d(Y_j, ∂C_r(Y_n)). | 56,458 | [
"1001349"
] | [
"15290",
"422708",
"86933",
"110760"
] |
01466666 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466666/file/978-3-642-38853-8_10_Chapter.pdf | Markus Oertel
email: [email protected]
Achim Rettberg
Reducing Re-verification Effort by Requirement-Based Change Management
Keywords: change management, system consistency, verification and validation, formal methods, safety critical embedded systems, modelbased design
Introduction
Today's safety critical embedded systems are subject to a strict quality assurance process. Domain specific standards like the ISO 26262 [START_REF]Road vehicles -functional safety[END_REF] (automotive) or ARP 4761 [START_REF]aerospace recommended practice: Guidelines and methods for conducting the safety assessment process on civil airborne systems and equipment[END_REF] (aerospace) aim to reduce the risk of harming people by a systematic hazard analysis and structured breakdown of safety requirements and system design. Once a consistent system state has been reached in which all verification and validation (V&V) activities have been performed, changes can become extremely expensive, since typically the whole system design needs to be verified again.
Current change management tools used in a systems engineering context like Reqtify [START_REF]Reqtify[END_REF], IBM Change [START_REF]IBM: Rational change[END_REF] or Atego Workbench [START_REF]atego: Atego workbench[END_REF] focus on the traceability [18][13] between requirements and/or system artifacts. If changes occur, these tools highlight the system artifacts directly connected by trace-links to a changed element. This approach has a couple of limitations: Changing a single component often requires changes in other components as well [START_REF] Eckert | Change and customisation in complex engineering domains[END_REF] to re-establish the consistency of the system. Based on the information available by the traceability links and the interconnections of components, it is not possible to contain the propagation of a change. This results again in very broad re-validation and recertification activities. Furthermore, it is impossible to narrow down the set of system artifacts that are directly affected by the change. All linked elements are potential candidates and need to be checked for unwanted side-effects manually, although only a part of them are relevant to the change. Also the integration of configuration management features into the change management process needs to be improved: In contrast to current development guidelines like CMMI-DEV [START_REF]CMMI Product Team: Cmmi for development, version 1.3: Improving processes for developing better products and services[END_REF] baselines are set manually, not bound to consistency criteria, and the dependencies between updated system artifacts and verification results are not considered.
In the literature, change management and change impact analysis are approached from two different directions: estimating the costs of a change before implementing it, and identifying affected elements during the change process.
Most of the techniques to quantify the costs of a change do not consider a detailed analysis of the particular change but use knowledge about similar systems and engineering experience. Clarkson et al. [START_REF] Clarkson | Predicting change propagation in complex design[END_REF] use a Design Structure Matrix in combination with a probabilistic approach to determine the likelihood and impact of changes on the different system components. Furthermore, scenario-based approaches [START_REF] Bengtsson | Architecture-level modifiability analysis[END_REF] exist to determine quality metrics concerning modifiability for a given architecture. Change scenarios are defined and for each of them the possible impact is estimated. The average effort is then calculated based on all considered change scenarios. This approach is suited to compare two architectures or determine the risk of expensive changes. It is not used in the development process itself to handle introduced changes. Verification and validation activities are not considered and are difficult to integrate into the approach, since only one unit of measure can be used for the calculations (lines of code are used in the example).
Approaches for change management during the actual change process are typically based on graph structures [START_REF] Bohner | Extending software change impact analysis into cots components[END_REF] and are mostly related to software change management. Extensions exist for object-oriented software [START_REF] Ryder | Change impact analysis for object-oriented programs[END_REF], but these techniques are not applicable to systems engineering, since the implementation becomes available too late in the development process (and might be physical rather than code); therefore, the requirements of the components are better suited to propagate changes.
In this paper we provide an approach for containing change effects and maintaining consistency during the change process. Our technique ensures that all verification and validation activities affected by the change are identified. The approach is based on the system requirements rather than the interconnections of components. Therefore we can limit the change propagation semantically without explicitly modeling it. Also, the set of elements that are directly affected by a change is much smaller compared to other change management systems, i.e., from all connected elements only a subset needs to be checked for malfunctions introduced by the change.
We currently cover the functional and timing aspects of the system within one design-perspective [START_REF] Damm | SPES2020 Architecture Modeling[END_REF]. Perspectives represent the system at different structural stages, like a logical, technical or geometrical perspective. Aspects cover the properties within one perspective that are related to a particular topic like safety, timing or weight. Our approach allows the containment of changes within the mentioned aspects only. Nevertheless adding more aspects and perspectives is systematically possible. It is expected that for a given use case only a small subset of them needs to be added like weight, electromagnetic compatibility (EMC) or heat.
The base process is described in Section 3. In Section 4 the principles behind the containment of the change effects are explained. Section 5 provides an overview of the necessary formalization techniques and activities that can be checked automatically. Based on this formalization, Section 6 introduces a concept for identifying system elements that can be modified to quickly reach a consistent system state again. The approach has been implemented in an OSLC-based demonstrator. Section 7 introduces this prototype and first evaluation results. A conclusion and future development activities are outlined in the final Section 8.
System Abstraction and Prerequisites
We analyzed many different meta-models and standards across various domains to identify a basic set of system artifacts and trace-links [START_REF]CESAR -Cost-efficient Methods and Processes for Safety-relevant Embedded Systems[END_REF] that covers most of the currently used development artifacts. The change management process is described on these elements so that it is easily applicable to existing model-based development workflows.
The elements are:
-Components: Model-based representation of the interface of a system element. Depending upon the used perspective this can be logical components or concrete hardware or software parts [START_REF] Damm | SPES2020 Architecture Modeling[END_REF]. -Requirements: Requirements represent functionality or properties that the component it is attached to shall fulfill. -Implementation: Implementations represent the behavior of a component.
This might be software (code or functional model) or a hardware implementation. -V&V cases: Representation of the activity and their results to prove a property of the system.
The trace-links are:
-Satisfy: Connects a requirement with a model component. It symbolizes a "shall satisfy" relation.
-Derive: A requirement is decomposed into multiple derived requirements.
-Refine: A requirement is formalized into another language or representation.
-Implementation: Connects an implementation to a component.
These links are verification targets, i.e., it needs to be proved that the claimed property holds. Therefore V&V cases can be connected to each link, detailing the activity that needs to be performed to get the necessary evidence.
To establish a change management process that allows encapsulating the change in a determined area of the development item, it is necessary that a few prerequisites of the development process itself are met. Still, the process shall be as little invasive to common-practice processes as possible.
The most important prerequisite is that the requirements are always linked to a component and formulated in a black box manner, meaning that the requirement specifies the intended behavior on the interface of this component. This is in line with the contract based design paradigms to ensure the composability of the specified elements [START_REF] Damm | Using contractbased component specifications for virtual integration testing and architecture design[END_REF].
There are also some assumptions on the technical setup for the later realisation: As stated in typical recommendations for configuration management [START_REF]Quality management systems guidelines for configuration management[END_REF], we require all system artifacts to be under version control. In addition, we also require trace-links to be versioned and to point to versioned elements. This is necessary since the semantics of a link may not apply to a changed element, at least not without an analysis. E.g.: a component covers a certain requirement, but after changing the requirement it has to be considered again whether the attached component is still the best to implement this requirement. The link is only valid for a specific version of a requirement.
Change Management Process
During the processing of a change request, verification results that are connected to modified elements need to be invalidated. Furthermore, all modified elements, as well as the trace-links and verification results, need to be versioned to point to the new targets. The consistency of these elements is necessary to allow rejecting the change request at any point in time. This might be necessary if it is foreseeable that the change is becoming too expensive. Furthermore, the correct version of the verification targets is required to fulfill the traceability and documentation requirements of various safety standards.
The process to achieve these goals is depicted in figure 1 and consists of four basic steps that are executed in an iterative manner:
The system (A, L)_v consists of the set of system artifacts A = R ∪ C ∪ V ∪ I (Requirements, Components, V&V cases and Implementations) and the set of links L. Each state of the system is identified by a version v ∈ N+. In this description of the process a new version of the whole system is created in each step; this is a simplification in the notation, since in the implementation each element is, of course, versioned on its own. A baseline v ∈ B is a version in which all verification activities of the system have been executed successfully. In the i-th execution of the change phase a set of elements of the system (A, L)_{i-1} is modified, denoted as
E^mod_i ⊆ (A, L)_i .
This modification alone results in an inconsistent system state, since the V&V cases connected to the modified system elements e (which have not been changed themselves) are not valid anymore and need to be re-validated. In the local impact phase the connected V&V cases are reset:
∀v ∈ (A, L)_i | e ∈ target(v) ∧ e ∈ E^mod_i : status(v) → suspect
The set of elements evaluated by a V&V case v is expressed by target(v). The suspect V&V cases V_suspect need to be re-validated in the local verification phase:
v ∈ V_suspect → {failed, success}
For the failed V&V cases it is necessary to adapt the system. The possible compensation candidates E^comp are the elements the V&V case is targeting.
E^comp_i = ⋃_{v ∈ V_suspect} target(v)
The engineer needs to select one or more elements of this set and modify them so that the failed V&V activities are successful again. These modifications start the cycle again:
E^mod_{i+1} ⊆ E^comp_{i+1}
A new baseline can be created if all V&V activities are again successfully executed.
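As an illustration only, the following sketch models one iteration of this cycle in Python; the class and function names are ours and do not correspond to an actual implementation of the approach.

```python
from dataclasses import dataclass

@dataclass
class VVCase:
    name: str
    targets: set                 # system artifacts evaluated by this V&V case
    status: str = "success"      # success | suspect | failed

def change_iteration(vv_cases, modified, reverify):
    """One pass of the cycle: local impact, local verification, compensation candidates."""
    # Local impact: every V&V case targeting a modified element becomes suspect.
    suspects = [v for v in vv_cases if v.targets & modified]
    for v in suspects:
        v.status = "suspect"
    # Local verification: re-run each suspect case (reverify returns "success" or "failed").
    failed = []
    for v in suspects:
        v.status = reverify(v)
        if v.status == "failed":
            failed.append(v)
    # Compensation candidates: elements targeted by the failed cases; the engineer
    # selects and modifies some of them, which starts the next iteration.
    candidates = set()
    for v in failed:
        candidates |= v.targets
    return candidates

# A new baseline may be created once no V&V case is left in a suspect or failed state.
```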
Consistency of the System
The described process uses the V&V activities to propagate changes across the system design. We claim to identify the parts of the system that are not affected by a change. In this paper we consider the functional aspect of the system within one perspective only. This means in particular that "side-effects" over other aspects or perspectives are not considered. A typical example of these side-effects is change propagation through heat distribution: due to a change in one part of the system, other parts may get so warm that they need additional cooling. The approach is designed in a way that these additional considerations can be added as needed by the system under development. The set of necessary perspectives can typically be limited in advance, since the type of system often determines whether e.g. EMC, weight or heat aspects have to be considered.
To be able to contain the change effects, a defined set of V&V activities needs to be carried out. We use four different types of V&V cases that are necessary to comply with modern safety standards anyway (e.g. ISO 26262 [START_REF]Road vehicles -functional safety[END_REF]); they basically evaluate each trace-link:
-The most important V&V activity is the verification of the correct derivation of requirements, i.e. that a requirement is correctly split up into sub-requirements. If changes in requirements cause this derive-relation between requirements to fail, other requirements need to be adapted as well. Using requirements for change propagation instead of connectors between components has the benefit that a criterion for stopping the change propagation is given. If the split-up is still (or again, after further modifications) correct, there is no further propagation in this part of the system. Therefore requirements build the backbone for change propagation. To prove the correct derivation of requirements, a consistency check of these requirements is technically mandatory, since the split-up of inconsistent requirements is formally correct even if the top-level and the sub-requirements result in a system that cannot be built. -Also, modifications in implementations or their components can result in change effect propagation. Therefore the implementation link between a component and an implementation needs to be verified. This check is limited to the interface, since the component itself does not contain any more information. This check is easily automatable.
-Similar to the implementation link, the satisfy link (between requirement and component) also needs to be checked. This analysis is likewise based on the interface. -Because the structural system description in the form of components is separated from the behavioral description in the form of implementations and requirements, an additional V&V activity is needed that checks the relation of the requirement to the implementation. This is typically performed by testing.
The functional relations between the different components are extracted from the requirements. Whether a requirement change propagates towards other components is determined by the result of the test checking the correct derivation of the connected requirements. This step can be automated, see Section 5. Therefore no explicit modeling of change propagation is necessary for the functional aspect. To be able to use the requirements for containing change effects, they need to be stated in a black-box manner, i.e. they are formulated only on the interface of the attached component. This kind of requirements allows virtual integration [START_REF] Damm | Using contractbased component specifications for virtual integration testing and architecture design[END_REF] as needed by our approach. A formal proof that these prerequisites will contain any functional change effects will be published soon.
To guarantee that there is no influence by the change outside of the identified boundary it is assumed that all verification activities are accurate. Many of the mentioned verification activities can be automated using formal methods (see section 5) and therefore reach the desired accuracy. This especially applies for the analysis verifying the correct derivation of requirements which is responsible for the propagation of the changes using the requirements. To be able to get qualified or certified even the manually performed verification activities or testcases need to have a reasonable level of confidence. The same level of confidence also applies for the propagation of the change.
Automating Change Impact Analysis
In the previous section we discussed how changes can propagate along the requirements through the system. Therefore, a reliable method of proving the requirements derivation is desired. This method can be realized using the formal requirements specification language (RSL) [START_REF] Damm | SPES2020 Architecture Modeling[END_REF] and the entailment analysis [START_REF] Damm | Using contractbased component specifications for virtual integration testing and architecture design[END_REF].
The RSL is a formal but still human readable language to express requirements. It is based upon predefined patterns with attributes that can be filled by the requirements engineer and was designed to cover a whole range of different types of requirements.
A typical example of a pattern:
Whenever <EVENT> occurs <CONDITION> holds during [INTERVAL]
The formalization of the requirement "The emergency light shall provide illumination for at least 10 minutes after the emergency landing has been initiated" looks like:
Whenever EmergencyLandingInitiated occurs emergencyLight==ON holds during [0s,600s]

The semantics of these patterns are described using timed automata. The entailment analysis is a contract-based virtual integration technique. Its inputs are the specification of the top-level component and a set of specifications of the direct sub-level components.
ent : r_top × R_sub → {0, 1} | R_sub ⊆ succ(r_top)
The entailment relation is defined on the accepted traces T_a of the specifications [START_REF] Hungar | Compositionality with strong assumptions[END_REF]. Entailment is given if the set of traces which are accepted by all sub-requirements is a subset of the set of traces accepted by the top-level requirement. In contrast, if there exists a trace that is accepted by all the sub-requirements but not by the top-level requirement, the sub-requirements are not strict enough. These additional traces describe behavior which is forbidden by the specification of the top-level requirement, and therefore the requirements breakdown is not correct.
ent(r, R) = 1 iff ⋂_{r_s∈R} T_a(r_s) ⊆ T_a(r), and ent(r, R) = 0 iff ⋂_{r_s∈R} T_a(r_s) ⊈ T_a(r)
The entailment analysis fits the needs of the verification of derive links. It can be automatically proved that the behavior described by the derived requirements is in line with the top-level requirement. Consequently, if requirements are changed but the derive-relations towards the top-level and towards the more refined requirements are still correct, the set of accepted traces is unchanged and a propagation of changes outside this boundary can be ruled out.
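To make the definition concrete, here is a small sketch that checks entailment on finite, explicitly enumerated trace sets; representing traces as plain Python sets is our simplification and stands in for the timed-automata semantics of the RSL patterns.

```python
def accepted_intersection(trace_sets):
    """Traces accepted by all sub-requirements (T_a of their parallel composition)."""
    result = set(trace_sets[0])
    for t in trace_sets[1:]:
        result &= t
    return result

def entails(top_traces, sub_trace_sets):
    """ent(r_top, R) = 1 iff the traces accepted by all r_s in R form a subset of T_a(r_top)."""
    return accepted_intersection(sub_trace_sets) <= set(top_traces)

# Toy usage with symbolic trace names (purely illustrative):
top  = {"t1", "t2", "t3"}
subs = [{"t1", "t2", "t4"}, {"t1", "t2", "t3"}]
print(entails(top, subs))   # True: the common traces {"t1", "t2"} are contained in top
```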
Guided Compensation candidate selection by shifting
Using the formalization and entailment analysis the change management process can be further automated. In some cases it is possible to identify a concrete element out of the set of compensation candidates which is a good choice to adapt to a change, because no implementation needs to be adapted but only requirements reformulated.
This guidance mechanism uses the fact that requirements are typically stated with some margins, since the actual implementation might not be known in early phases of the development process. This typically includes timing or memory constraints as well as behavioral requirements. Furthermore, the usage of COTS (commercial off-the-shelf) components reinforces this effect, since externally purchased components commonly do not match the requirements exactly but are e.g. a little faster or provide additional features not needed at the time of component selection.
In terms of the entailment relation this means that the set of traces accepted by the top-level requirement but not by the sub-level requirements might be of significant size.
∆T_a(r_top, R) = T_a(r_top) \ ⋂_{r_s∈R} T_a(r_s) ≠ ∅
If a requirement gets changed in the system, this buffer can be used to compensate the change not only locally but also at a different location in the system. This can be achieved since ∆T_a can be shifted towards the upper-level requirements and "collect" more traces that can then be used as a whole. This is realized by replacing the top-level requirement with the parallel composition of the sub-level requirements.
The parallel composition [START_REF]SPEEDS: SPEEDS core meta-model syntax and draft semantics[END_REF] of a set of requirements is also defined by the accepted traces. The composition includes only the traces that are compatible with all requirements.
T_a(r_1 || r_2) = T_a(r_1) ∩ T_a(r_2)
If entailment is given (ent(r_top, R) = 1), it is obvious that the top-level requirement r_top can be replaced by the parallel composition ||_{r_s∈R} r_s without altering the result, since entailment is always given this way:
ent(||_{r_s∈R} r_s, R) = 1 ⇒ ⋂_{r_s∈R} T_a(r_s) ⊆ T_a(||_{r_s∈R} r_s) ⇒ ⋂_{r_s∈R} T_a(r_s) ⊆ ⋂_{r_s∈R} T_a(r_s)
By reducing the set of accepted traces of the top-level requirement by ∆T_a(r_top, R), a failed entailment relation that uses r_top as a sub-requirement might become successful again.
An example with timing requirements is depicted in Figure 2. The requirements R1 to R6 represent time budgets, and the arrows represent derive links. If requirement R2 is changed to 50ms, because it was not possible to find an implementation that could fulfill the original requirement, the entailment relation ent(R1, {R2, R3, R4}) would fail (see Figure 2(a)), causing additional changes in the system. In this case requirement R4 can be replaced by the parallel composition of R5 and R6, namely 30ms, and the entailment relation ent(R1, {R2, R3, R5||R6}) is still valid (Figure 2(b)).
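A numeric sketch of this shifting step is given below. It assumes that every requirement is a simple end-to-end latency budget over consecutive segments, so that entailment of a decomposition reduces to comparing the sum of the sub-budgets against the top-level budget; all millisecond values except the changed R2 = 50 ms and the composition R5||R6 = 30 ms mentioned above are invented for illustration.

```python
def entails_budget(top_ms, sub_budgets_ms):
    """For pure latency budgets over consecutive segments, the derived budgets
    entail the top-level budget iff their sum does not exceed it."""
    return sum(sub_budgets_ms) <= top_ms

# Hypothetical budgets (only R2 = 50 ms and R5||R6 = 30 ms come from the example above).
R1, R2, R3, R4 = 100, 50, 20, 40
R5_R6 = 30                      # budget of the parallel composition of R5 and R6

print(entails_budget(R1, [R2, R3, R4]))      # False: the changed R2 breaks ent(R1, {R2, R3, R4})
print(entails_budget(R1, [R2, R3, R5_R6]))   # True: shifting R4 to R5||R6 restores entailment
```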
By applying this technique over multiple levels and starting from more than one sub-requirement, also non-trivial cases of shifting can be identified, resulting only in requirement changes and no implementation changes. Furthermore, it is not necessary to re-validate the changed requirements.
Prototype
The tool landscape used for model-based development of safety-critical embedded systems is characterized by a high degree of distribution and heterogeneity [START_REF]CESAR -Cost-efficient Methods and Processes for Safety-relevant Embedded Systems[END_REF]. Different development teams work with different tools (for requirements management, verification, test, modeling, simulation, etc.) and different repositories on the final product. Therefore an integrated change management solution needs to overcome tool, supply-chain and data-format boundaries. Hence, our prototype implementation "ReMain" uses OSLC [START_REF]Oslc: Open services for lifecycle collaboration[END_REF] to access the system artifacts at their original locations. The tool consists of a server component handling the change requests and initiating the versioning of artifacts, and a client that displays the affected system parts to the user and handles his input.
In the current demonstrator setup requirements can be stored in IBM DOORS or MS Excel, components are represented as EAST-ADL or AUTOSAR models and Implementations are read from Simulink models. The V&V cases are managed by the ReMain tool itself. Missing V&V cases which were necessary (see section 4) for our approach can be identified and generated. A dedicated V&V repository is planned for future versions.
The typical workflow should not be touched, so developers can automatically trigger the change management process by changing elements with their known modeling tool (e.g. Simulink or DOORS). The OSLC adapters of these tools will detect the change and send a modification event to the server part of the ReMain tool.
After receiving the modification event the ReMain server will perform the "local impact" activity. In particular, this includes the versioning of the connected links and V&V cases and setting their status to "suspect". The client will display only the part of the system that is currently affected by the change.
The status of the different V&V cases is represented by colored bubbles after the name of the element. The engineer gets a direct overview of which activities still have to be performed and how far the change has already propagated through the system. If formalized requirements are used, the entailment analyses can be executed automatically, directly from the user interface of the ReMain client. If a verification activity fails, compensation candidates are highlighted.
If all verification activities have been executed successfully a new baseline can be created containing the consistent system elements. The baseline is currently stored externally but shall be directly integrated in existing configuration management tools in the future.
First evaluation results have shown that it is much easier to track changes in distributed development environments. Compared to existing approaches, the decision on how to react to changes can be made much faster. We use an ABS braking system as a test platform, in which a saving in verification activities of more than 80% could be reached, depending upon the artifacts that have been changed. For conclusive numbers, more systems need to be analyzed in a systematic way.
Conclusion and Outlook
We presented an approach to identify the affected system parts during changes of system components for the functional aspect of the design. Using proper requirement formulation, traceability between system artifacts and a defined set of verification activities it is possible to reduce the re-certification effort of products since changes outside of the identified boundary have no effect on the system behavior.
In contrast to existing change management approaches the propagation of changes is not based on connected interfaces, but on the stated requirements of the system. This ensures semantically correct change effect propagation without the need of explicitly modeling the propagation. Furthermore, the set of system artifacts that need to be investigated during the process is reduced compared to approaches using the interconnections of components. Using formalized requirements the propagation process can be automated and even suggestions to possible good compensation candidates for failed V&V activities can be given. In the prototype implementation "ReMain" the process has been integrated in a highly distributed development environment consisting of commonly used development tools and formats like DOORS, Excel, Simulink, AUTOSAR and EAST-ADL.
Still, a couple of features are not yet discussed or implemented: by separating a requirement into an assumption and a promise (contract), the propagation of changes can be reduced even further, and the compensation candidate selection is also likely to benefit from this approach. Furthermore, the process needs to be extended to cover more than one perspective and more than the functional aspect. This includes the integration of allocation decisions. The aim is to provide a change management solution where additional aspects (like EMC or heat distribution) can be added to cover the "side-effects" that are relevant for the system under development. Also, the accuracy of the approach needs to be investigated if the confidence in the executed V&V activities is low.
Fig. 1. Basic process for handling Change Requests
Fig. 2. Simplified example using timing requirements
The research leading to these results has received funding from the ARTEMIS Joint Undertaking under grant agreement no. 269335 and the German Federal Ministry of Education and Research (BMBF). | 28,529 | [
"1001359",
"1001360"
] | [
"303555",
"146984"
] |
01466668 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466668/file/978-3-642-38853-8_12_Chapter.pdf | Marcela Šimková
email: [email protected]
Zdeněk Přikryl
email: [email protected]
Zdeněk Kotásek
email: [email protected]
Tomáš Hruška
email: [email protected]
Automated Functional Verification of Application Specific Instruction-set Processors
Introduction
The core of current complex embedded systems is usually formed by one or more processors. The use of processors brings the advantages of a programmable solution, mainly the possibility of changing the software after the product has been shipped to the market.
The types of processors that are used within embedded systems are typically determined by an application itself. One can use general purpose processors (GPPs) or application specific instruction-set processors (ASIPs) or their combination. The advantages of GPPs are their availability on the market and an acceptable price because they are manufactured in millions or more. On the other hand, their performance, power consumption and area are worse in comparison to ASIPs that are highly optimised for a given task and therefore, have much better parameters.
However, one needs powerful and easy-to-use tools for ASIP design, testing and verification, as well as tools for their programming and simulation. These tools often use architecture description languages (ADLs) [START_REF] Mishra | Processor Description Languages[END_REF] for the description of a processor. An ADL allows automated generation of programming tools such as a C/C++ compiler or assembler, and of simulation tools such as an instruction-set simulator (ISS) or profiler. Moreover, the representation of the processor in a hardware description language (HDL) such as VHDL or Verilog is generated from this description. If the designer wants to change the design somehow (add a new feature, fix a bug), he or she just changes the processor description and all tools as well as the hardware description are re-generated. This allows really fast design space exploration [START_REF] Martin | ESL Design and Verification: A Prescription for Electronic System Level Methodology (Systems on Silicon)[END_REF] within the processor design phases. Examples of ADLs are LISA [START_REF] Hoffmann | Leupers Architecture Exploration for Embedded Processors with LISA[END_REF], ArchC [START_REF] Azevedo | The ArchC architecture description language and tools[END_REF], nML [START_REF] Fauth | Describing instruction set processors using nML[END_REF] or CodAL [START_REF]Codasip[END_REF].
In some cases, when a special functional unit is added, for instance a modular arithmetic unit (in cryptography processors), additional verification steps should be performed. The reason is that the description of instructions can be different for simulation/hardware and for the C compiler (e.g. the LISA language has separate sections describing the same feature for the simulator and for the C compiler). In this case, one should verify that the generated hardware can be programmed by the generated C compiler. In other words, the C compiler should be verified with respect to the features of the processor. Therefore, it is highly desirable to have a tool that automatically generates verification environments and allows checking all above mentioned properties.
In this paper we propose an innovative approach for the automated generation of verification environments which is easily applicable in the development cycle of processors. As a development environment we utilise the Codasip framework [START_REF]Codasip[END_REF], but the main principles are applicable in other environments as well. In order to verify a hardware representation of processors with respect to the generated C/C++ compiler, we decided to apply a functional verification approach, as it offers excellent scalability and speed.
The following section presents the state of the art in the development of processors and describes the verification techniques which are typically used in this field. Afterwards, the Codasip framework is described in Section 3. Our approach is introduced in Section 4, and Section 5 presents experimental results. At the end of the paper, the conclusions and our plans for future research are given.
Verification in the Development Cycle of Processors
The following subsections introduce verification techniques used within processor design phases as well as research projects and companies dealing with processor design.
Verification Approaches
For verification of processors a variety of options exists: (i) formal verification, (ii) simulation and testing, and (iii) functional verification. However, their nature and preconditions for the speed, user expertise and complexity of the verification process are often a limiting factor.
Formal verification is an approach based on an exhaustive exploration of the state space of a system, hence it is potentially able to formally prove correctness of a system. The main disadvantages of this method are state space explosion for real-world systems and the need to provide formal specifications of behaviour of the system which makes this method often hard to use.
Simulation and testing, on the other hand, are based on observing the behaviour of the verified system in a limited number of situations, thus it provides only a partial guarantee of correctness of the system. However, because tests focus mainly on the typical use of the system and on corner cases, this is often sufficient. Moreover, writing tests is usually faster and easier than writing formal specifications.
Functional verification is a simulation-based method that does not prove correctness of a system but uses advanced verification techniques for reaching verification closure. Verification environments (or testbenches) are typically implemented in some hardware verification language, e.g. in SystemVerilog, OpenVera or e. During verification, a set of constrained-random test vectors is typically generated and the behaviour of the system for these vectors is compared with the behaviour specified by a provided reference model (which is called scoreboarding). In order to measure progress in functional verification (using coverage metrics), it is necessary to (i) find a way how to generate test vectors that cover critical parts of the state space, and (ii) maximise the number of vectors tested. To facilitate the process of verification and to formally express the intended behaviour, internal synchronisation, and expected operations of the system, assertions may be used.
All above mentioned features are effective in checking the correctness of the system and maximising the efficiency of the overall verification process. The popularity of functional verification is confirmed by the existence of various verification methodologies, with OVM (Open Verification Methodology) [5] and UVM (Universal Verification Methodology) [5] being the most widely used. They offer a free library of basic classes and examples implemented in SystemVerilog and define how to build easily reusable and scalable verification environments.
Design and Verification of Processors
There exist several research projects and companies dealing with processor design using ADLs. One of them is the open-source ArchC project [START_REF] Azevedo | The ArchC architecture description language and tools[END_REF]. A processor is described using the ArchC language and the semantics of instructions is described using SystemC constructions [START_REF]SystemC Project[END_REF]. All programming tools as well as the ISS can be generated from the description in ArchC. The generation of a hardware representation is currently under development and there is no mention of its verification so far.
The Synopsys company offers Processor Designer [8] for designing processors. It uses the LISA language. The programming tools and ISS, as well as the HDL representation, can be generated from the processor description. However, they do not provide any automatically generated verification environments. As mentioned in Section 1, instructions are described in two ways; the first one is used by the generator of the C compiler and the second one is used by the ISS. Therefore, if the descriptions of instructions for the ISS and the C compiler are not equivalent, there is no automatic way to detect this. In that case, the C compiler cannot program the target processor properly.
Target company uses an enhanced version of nML language for the description of a processor microarchitecture [12]. The C compiler and ISS can be generated from the description of a processor as well as the HDL representation. According to the web presentation, the verification environment consists of a test program generator. It emits programs in assembly language (note that the assembly language is processor specific). The generated test program is loaded by the ISS and a third party RTL simulator which evaluates the generated HDL representation. The program is executed and after the testing phase is completed, results are compared. If they are equal, then the test passed, otherwise an inconsistency is found and it needs to be investigated further. Nevertheless, a generator of test programs in some high-level language like C or C++ is missing. Therefore, the C compiler itself is not verified with respect to the processor.
When focusing on automated generation of verification environments in general (not necessarily for processors), there already exist some commercial solutions which are quite close to our work. One example is Pioneer NTB from Synopsys [10], which enables generating SystemVerilog and OpenVera testbenches for different hardware designs written in VHDL or Verilog, with built-in support for third-party simulators, constrained-random stimulus generation, assertions and coverage. Another example is the SystemVerilog Frameworks Template Generator (SVF-TG) [START_REF]Paradigm Works SystemVerilog Frameworks Template Generator[END_REF], which assists in creating and maintaining verification environments in SystemVerilog according to the Verification Methodology Manual (VMM).
Codasip Framework
The Codasip Framework is a product of the Codasip company and represents a development environment for ASIPs. As the main description language it utilises an ADL called CodAL [START_REF]Codasip[END_REF]. It is based on the C language and has been developed by the Codasip company in cooperation with Brno University of Technology, Faculty of Information Technology. All mainstream types of processor architectures such as RISC, CISC or VLIW can be described.
The CodAL language allows two kinds of descriptions. In the early stage of the design space exploration a designer creates only the instruction-set (the instruction-accurate description). It contains information about instruction decoders, the semantics of instructions and the resources of the processor. Using this description, programming tools such as a C/C++ compiler and simulation tools can be properly generated. The C/C++ compiler is based on the LLVM platform [START_REF]The LLVM Compiler Infrastructure Project[END_REF].
As soon as the instruction-set is stabilised, a designer can add information about the processor microarchitecture (the cycle-accurate description), which allows generating programming tools (without the C/C++ compiler), RTL simulators and the HDL representation of the processor (in VHDL or Verilog). As a result, two models of the same processor on different levels of abstraction exist.
It is important to point out that in our generated verification environments the instruction-accurate description is taken as a golden (reference) model and the cycle-accurate description is verified against it.
The instruction-accurate description can be transformed into several formal models which are used for capturing particular features of a processor. Formal models which are used in our solution are decoding trees in case of instruction decoders and abstract syntax trees in case of semantics of instructions [START_REF] Přikryl | Advanced Methods of Microprocessor Simulation[END_REF]. All formal models are optimised and then normalised into the abstract syntax tree representation that can be transformed automatically into different implementations (mainly in C/C++ languages). The generated code together with the additional code (it represents resources of processor such as registers or memories) forms ISS.
It should be noted that some parts of the generated code can be reused further in the golden model for verification purposes (more information in Section 4). At the same time as the golden model is generated, connections to the verification environment are established via the direct programming interface (DPI) in SystemVerilog. Automated generation of golden models reduces the time needed for implementation of verification environments significantly. Of course, a designer can always rewrite or complement the golden model manually.
The cycle-accurate description of a processor can be transformed into the same formal models as in case of the instruction-accurate description. Besides them, the processor microarchitecture is captured using activation graphs. In case of the cycle-accurate description, the formal models are normalised into the component representation. Each component represents either a construction in the CodAL language such as arithmetic operators or processor resources or it represents an entity at the higher level of abstraction such as the instruction decoder or a functional unit. Two fundamental ideas are present in this model, (i) components can communicate with each other using ports and (ii) components can exist within each other. In this way, component representation is closely related to HDLs and serves as an input for the HDL generator as well as for the generator of verification environments. For better comprehension of the previous text, the idea is summarised once again in Figure 1. Codasip works with the instruction and the cycle-accurate description of a processor and specific tools are generated from these descriptions. The highlighted parts are used during the verification process. It should be noted that the presented idea is generally applicable and is not restricted only to the Codasip Framework.
Verification environments generated from formal models are thoroughly described in the following section.
Functional Verification Environments for Processors
The goal of functional verification is to establish the conformance of a design of processor to its specification. However, considerable time is consumed by designing and implementation of verification environments.
In order to comfortably debug and verify ASIPs designed in the Codasip framework as fast as possible and not waste time with implementation tasks we designed a special feature allowing automated pre-generation of OVM verification environment for every processor. In this way we can highly reuse the specification model provided in the Co-dAL language and all intermediate representations of the processor for comprehensive generation of all units.
Our main strategy for building robust verification environments is to comply with principles of OVM (they are depicted in Figure 2). We have fulfilled this task in the following way: 1. OVM Testbench. Codasip supports automated generation of object-oriented testbench environments created with compliance to open, standard and widely used OVM methodology. 2. Program Generator. During verification we need to trigger architectural and microarchitectural events defined in the verification plan and ensure that all corner cases and interesting scenarios are exercised and bugs hidden in them are exposed.
For achieving the high level of coverage closure of every design of processor it is possible to utilise either a generator of simple C/C++ programs in some third-party tool or already prepared set of benchmark programs. 3. Reference Methodology. A significant benefit of our approach is gained by automated creation of golden models for functional verification purposes. We realised that it is possible to reuse formal models of the instruction-accurate description of the processor at the higher level of abstraction and generate C/C++ representation of these models in the form of reference functions which are prepared for every instruction of the processor. Moreover, we are able to generate SystemVerilog encapsulations, so the designer can write his/her own golden model with advantage of the pre-generated connection to other parts of the verification environment. 4. Functional Coverage. According to the high-level description of the processor and the low-level representation of the same processor in VHDL, we are able to automatically extract interesting coverage scenarios and pre-generate coverage points for comprehensive checking of functionality and complex behaviour of the processor. Of course, it is highly recommended to users to add some specific coverage points manually. Nevertheless, the built-in coverage methodology allows measuring the progress towards the verification goals much faster.
In addition, interconnection with a third-party simulator e.g. ModelSim from Mentor Graphics allows us to implicitly support assertion analysis, code coverage analysis and signals visibility during all verification runs. A general architecture of generated OVM testbenches is depicted in Figure 3 and main components are described further. -DUT (Device Under Test). The verified hardware representation of the processor written in VHDL/Verilog. According to the type of the processor, different number of internal memories and register arrays is used. The processor typically contains a basic control interface with clock and reset signal, an input interface and an output interface. -OVM Verification Environment. Basic classes/components of generated verification testbenches with compliance to OVM methodology:
• Input Ports Sequencer and Driver. Generation of input sequences and supplying them to input ports of the DUT. • Instruction Sequencer and Driver. Generation of input test programs or reading already prepared benchmark programs from external resources. Afterwards, programs are loaded to the program memory of the processor as well as to the instruction decoder in Scoreboard. • Scoreboard. This unit represents a self-checking mechanism for functional verification. In order to prepare expected responses of the verified processor to a particular program, Scoreboard uses a set of pre-generated reference functions created with respect to the specification described in the high-level CodAL language. For computational purposes a reference memory and a reference register array are used. As a result, predicted memory result, predicted register array result and predicted output ports result are prepared. • Halt Detection Unit. It is necessary to define a specific time in simulation when images of memories and register arrays of processor are checked. Evidently, it should be done when a program is completely evaluated by the processor. This situation can be distinguished according to the detection of HALT instruction activity in the processor. • Data Monitor. In case of the detection of HALT instruction activity, Data
Monitor reads the images of the memories and register arrays of the processor and sends them to Scoreboard, where they are compared with the expected responses (a simplified sketch of this check is given after this list). If a discrepancy occurs, verification is stopped and a detailed report with a description of the error is provided. • Output Ports Monitor. Output ports of the DUT are driven and their values are stored for later processing. In contrast with Data Monitor, this monitor works continuously, not only in case of HALT instruction activity. Values generated by the reference model in Scoreboard are stored as well. Equivalence between the stored values is checked by a user-provided function or by a default one. • Subscribers. The aim of these units is to define functional coverage points, in other words interesting scenarios according to the verification plan which should be properly checked. These units are not present in Figure 3, although they are generated in every verification environment.
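The comparison step itself is conceptually simple. The following Python-style sketch (not the actual SystemVerilog/OVM code, and with invented names) only illustrates what Scoreboard does once HALT activity has been detected:

```python
def check_at_halt(dut_memory, dut_registers, predicted_memory, predicted_registers):
    """Compare the DUT images captured at HALT with the golden-model predictions."""
    mismatches = []
    for addr, expected in predicted_memory.items():
        actual = dut_memory.get(addr)
        if actual != expected:
            mismatches.append(("memory", addr, expected, actual))
    for reg, expected in predicted_registers.items():
        actual = dut_registers.get(reg)
        if actual != expected:
            mismatches.append(("register", reg, expected, actual))
    if mismatches:
        # In the real environment this stops verification and emits a detailed report.
        raise AssertionError(f"{len(mismatches)} discrepancies found: {mismatches[:5]} ...")
    return True
```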
Experimental Results
In this section, the results of our solution are provided. We generated verification environments for two processors. The first one is a 16-bit low-power DSP (Harvard architecture) called Codea2. The second one is a 32-bit high-performance processor (Von Neumann architecture) called Codix. Detailed information about them can be found in [START_REF]Codasip[END_REF]. We used Mentor Graphics' ModelSim SE-64 10.0b as the SystemVerilog interpreter and the DUT simulator. Testing programs from benchmarks such as EEMBC and MiBench, and test-suites such as the full-retval-gcc-testsuite and the Perennial testsuite, were utilised during verification. The Xilinx WebPack ISE was used for synthesis. All experiments were performed on an Intel Core 2 Quad processor with 2.8 GHz, 1333 MHz FSB and 8 GB of RAM, running a 64-bit Linux based operating system. Table 1 expresses the size of the processors in terms of required Look-Up Tables (LUTs) and Flip-Flops (FFs) on the Xilinx Virtex5 FPGA board. Other columns contain information about the number of tracked instructions and the time in seconds needed for generation of the SystemVerilog verification environment and all reference functions inside the golden models (Generation Time). In addition, the number of lines of programming code for every verification environment is provided (Code Lines). A designer typically needs around fourteen days to create the basics of a verification environment (without generation of proper stimuli, checking coverage results, etc.), so the automated generation saves time significantly. Table 2 provides information about the verification runtime and results. As Codea2 is a low-power DSP processor, some programs had to be omitted during experiments because of their size (e.g. programs using the standard C library). Therefore, the number of programs is not the same as in the case of Codix. Of course, the verification runtime depends on the number of tested programs, and if a program is compiled with no optimisation the runtime is significantly longer. The coverage statistics in Table 3 show which units of the processor have been appropriately checked. As one can see, the instruction-set functional coverage reaches only around fifty percent for both processors (i.e. half of the instructions were executed). The low percentage is caused by the fact that the selected benchmark programs did not use specific C constructions which would invoke specific instructions. On the other hand, all processor register files were fully tested (100% Register File coverage). This means that read and write instructions were performed from/to every single address in the register files. The functional coverage of memories represents coverage of control signals in the memory controllers. Besides functional coverage, the ModelSim simulator also provides code coverage statistics like branch, statement, conditions and expression coverage. According to the code coverage analysis we were able to identify several parts of the source code which were not executed by our testing programs; therefore we must improve our testing set and explore all coverage holes carefully. Figure 4 demonstrates the status of instruction-set functional coverage for the Codix processor after execution of 500 programs in ModelSim. Of course, the main purpose of verification is to find bugs, and thanks to our pre-generated verification environment we were able to target this issue successfully. We discovered several well-hidden bugs located mainly in the C/C++ compiler or in the description of a processor.
One of them was present in the data hazard handling, when the compiler did not respect a data hazard between a read and a write operation to the register file. Other bugs caused jumping to incorrectly stored addresses, and one bug was introduced by adding a new instruction into the Codix processor description: the designer accidentally added a structural hazard into the execute stage of the pipeline.
Conclusion and Future Work
To summarise, the implementation of a functional verification environment is a manual and highly error-prone process. As we wanted to accelerate the creation and maintenance of advanced OVM verification environments for ASIPs, we implemented a special feature which allows their automated generation. The experimental results show that the automatic generation is fast and robust, and we were able to find several crucial bugs during the processors' design.
In the future we plan to utilise a sophisticated generator of programs in order to achieve higher level of coverage of verified processors because during experiments we identified several holes in functional coverage and code coverage. Moreover, we want to discover the relation between test-templates and coverage points.
Fig. 1. Verification flow in the Codasip Framework.
Fig. 2. Verification methodology.
Fig. 3. OVM verification environments for user-defined processors in Codasip framework.
Fig. 4. Coverage Screenshot.
Table 1. Measured Results.

Processor   LUTs/FF (Virtex5)   Tracked Instructions   Generation Time [s]   Code Lines
Codea2      1411/436            60                     12                    2871
Codix       1860/560            123                    26                    3586
Table 2. Runtime statistics.

Processor   Programs   Runtime [min]
Codea2      636        28
Codix       1634       96
Table 3. Coverage statistics.

            Code Coverage [%]                              Functional Coverage [%]
Processor   Branch   Statement   Conditions   Expression   Instruction-Set   Register File   Memories
Codea2      87.0     99.1        62.3         58.1         51.2              100             87.5
Codix       92.1     99.2        70.4         79.4         44.7              100             71.5
This work was supported by the European Social Fund (ESF) in the project Excellent Young Researchers at BUT (CZ.1.07/2.3.00/30.0039), the IT4Innovations Centre of Excellence (CZ.1.05/1.1.00/02.0070), Brno Ph.D. Talent Scholarship Programme, the BUT FIT project FIT-S-11-1, research plan no. MSM0021630528, and the research funding MPO ČR no. FR-TI1/038. | 26,128 | [
"1001361",
"1001362",
"1001363",
"1001364"
] | [
"160209",
"160209",
"160209",
"160209"
] |
01466670 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466670/file/978-3-642-38853-8_15_Chapter.pdf | Gustavo Kunzel
Jean M Winter
Ivan Muller
email: [email protected]
Carlos E Pereira
email: [email protected]
João C Netto
email: [email protected]
A Passive Monitoring Tool for Evaluation of Routing in WirelessHART Networks
Keywords: WirelessHART, Wireless industrial networks, Routing
Wireless communication networks have received strong interest for applications in industrial environments. The use of wireless networks in automation systems introduces stringent requirements regarding real-time communication, reliability and security. The WirelessHART protocol aims to meet these requirements. In this protocol, a device known as Network Manager is responsible for the entire network configuration, including route definition and resource allocation for the communications. The route definition is a complex process, due to wireless networks characteristics, limited resources of devices and stringent application requirements. This work presents a tool that enables the evaluation of the topology and routes used in operational WirelessHART networks. By capturing packets at the physical layer, information of operating conditions is obtained, where anomalies in network topology and routes can be identified. In the case study, a WirelessHART network was deployed in a laboratory, and by the developed tool, important information about the network conditions was obtained, such as topology, routes, neighbors, superframes and links configured among devices.
Introduction
The deployment of wireless networks in real-world control and monitoring applications can be a labor-intensive task [START_REF] Tateson | Real World Issues in Deploying a Wireless Sensor Network for Oceanography[END_REF]. Environmental effects often trigger bugs or degrade performance in ways that cannot be observed [START_REF] Ringwald | Deployment of Sensor Networks: Problems and Passive Inspection[END_REF]. To track down such problems, it is necessary to inspect the conditions of the network after the devices have been deployed. This inspection can be complex when commercial equipment is used, since it can be difficult to gather specific information about the performance of the network due to the limited visibility provided by the equipment.
Wireless networks have stringent requirements on reliable and real-time communication [START_REF] Han | Reliable and Real-Time Communication in Industrial Wireless Mesh Networks[END_REF][START_REF] Zeng | Delay monitoring for wireless sensor networks: An architecture using air sniffers[END_REF] when used in industrial control applications. Missing or delaying the process data may severely degrade the control quality. Factors such as signal strength variations, node mobility and power limitations may interfere with the overall performance.
Recently, the International Electrotechnical Commission certified the WirelessHART (WH) protocol as the first wireless communication standard for process control [4]. The good acceptance of the protocol by industry has led to the development of standard-compliant devices from several manufacturers. However, there is still a great lack of computational tools that allow a clearer examination of the behavior and characteristics of these networks and devices [START_REF] Winter | WirelessHART Routing Analysis Software[END_REF]. Such tools become essential, since the operation of the network depends on and varies with the characteristics of the environment as well as the distribution of the devices.
A WH network enables mesh topologies, where all devices have the task of forwarding packets to and from other devices. The Network Manager (NM) has the task of gathering information about device neighbors, network conditions and communication statistics. Based on this information, the NM defines the routes used for communication. The evaluation of the routes in use may help the user to improve network performance, identify problems and understand device characteristics.
Several works address the collection of diagnosis information for wireless networks, utilizing active and passive mechanisms [START_REF] Ringwald | Deployment of Sensor Networks: Problems and Passive Inspection[END_REF][START_REF] Yu | DiF: A Diagnosis Framework for Wireless Sensor Networks[END_REF][START_REF] Han | Wi-HTest: Compliance Test Suite for Diagnosing Devices in Real-Time WirelessHART Network[END_REF][START_REF] Srinvasan | SWAT: Enabling Wireless Network Measurements[END_REF][START_REF] Maerien | FAMoS: A Flexible Active Monitoring Service for Wireless Sensor Networks[END_REF][START_REF] Rost | Memento: A Health Monitoring System for Wireless Sensor Networks[END_REF][START_REF] Ramanathan | Sympathy: a debugging system for sensor networks[END_REF][START_REF] Chen | Using Passive Monitoring to Reconstruct Sensor Network Dynamics[END_REF][START_REF] Zeng | Delay monitoring for wireless sensor networks: An architecture using air sniffers[END_REF][START_REF] Depari | Design and performance evaluation of a distributed WirelessHART sniffer based on IEEE1588[END_REF][START_REF] Ban | Implementation of IEEE 802.15.4 Packet Analyzer[END_REF]. Active mechanisms involve instrumentation of the network devices with monitoring software. Passive mechanisms utilize sniffers that overhear the packets exchanged on the physical layer [START_REF] Yu | DiF: A Diagnosis Framework for Wireless Sensor Networks[END_REF]. The passive method has advantages, as no interference is added to the network. However, related works do not address specific issues about the passive monitoring of WH packets. WH utilizes an authentication/encryption mechanism to provide secure communication, so the tool must keep track of information to correctly decode the packets and obtain decrypted data. Commercial tools provide means for collecting and decoding WH packets, but the results are shown in a spreadsheet format, making the analysis of data a labor-intensive task.
This work discusses the development of a passive monitoring software tool for the evaluation of the topology and routes used in WH networks, with a specific architecture to deal with the security information of the protocol. The user can load previously collected log files or connect the tool directly to sniffers, allowing both online and offline analysis. Once received, the packets are decoded, an overview of the network is built, and statistics, charts, lists, graphs and other information about the network are shown, helping the user to evaluate the network from different perspectives.
The paper is structured as follows. Diagnosis approaches for wireless sensor networks are presented in Section 2. Section 3 gives a short overview of WH, its packet structure and its routing mechanisms. Section 4 presents the tool structure. Section 5 presents a case study using the tool in a WH network. The conclusion and future work are presented in Section 6.
Related Work
The diagnosis of wireless networks can be achieved in an active or a passive fashion. The active mechanism involves the instrumentation of the nodes with monitoring software for capturing diagnostic information. Active approaches require nodes to transmit specific messages to the diagnosis tools using the communication channel or an alternative back channel [START_REF] Srinvasan | SWAT: Enabling Wireless Network Measurements[END_REF][START_REF] Maerien | FAMoS: A Flexible Active Monitoring Service for Wireless Sensor Networks[END_REF][START_REF] Rost | Memento: A Health Monitoring System for Wireless Sensor Networks[END_REF][START_REF] Ramanathan | Sympathy: a debugging system for sensor networks[END_REF]. This method may overload the normal network communication. A back channel is also not usually available on the devices or in the field. Scarce sensor resources (bandwidth, energy, constrained CPU and memory) may also affect the performance of this kind of diagnosis and change the behavior of the network [START_REF] Ringwald | Deployment of Sensor Networks: Problems and Passive Inspection[END_REF]. The passive approaches in [START_REF] Ringwald | Deployment of Sensor Networks: Problems and Passive Inspection[END_REF], [START_REF] Yu | DiF: A Diagnosis Framework for Wireless Sensor Networks[END_REF], [START_REF] Chen | Using Passive Monitoring to Reconstruct Sensor Network Dynamics[END_REF][START_REF] Zeng | Delay monitoring for wireless sensor networks: An architecture using air sniffers[END_REF][START_REF] Depari | Design and performance evaluation of a distributed WirelessHART sniffer based on IEEE1588[END_REF][START_REF] Ban | Implementation of IEEE 802.15.4 Packet Analyzer[END_REF] utilize sniffers that overhear the packets exchanged by the nodes in order to form an overview of the network. This approach does not interfere with the network, as no additional bandwidth is required to transfer diagnostic information and no processing power or energy is used in the devices for diagnosis purposes [START_REF] Ringwald | Deployment of Sensor Networks: Problems and Passive Inspection[END_REF]. On the other hand, the passive method is subject to packet loss caused by interference, collisions and the coverage of the sniffers. Solutions for the sniffer deployment problem are proposed in [START_REF] Zeng | Delay monitoring for wireless sensor networks: An architecture using air sniffers[END_REF]. The hardware of the sniffers is not addressed in this work.
Software architectures for the evaluation of captured IEEE 802.15.4 packets are proposed in [START_REF] Ringwald | Deployment of Sensor Networks: Problems and Passive Inspection[END_REF], [START_REF] Yu | DiF: A Diagnosis Framework for Wireless Sensor Networks[END_REF] and [START_REF] Ban | Implementation of IEEE 802.15.4 Packet Analyzer[END_REF]. These works propose a generic architecture for collecting, merging, decoding, filtering and visualizing data. However, these approaches do not have mechanisms to deal with protocols that employ security and encryption, like WH. Wi-Analys [START_REF] Han | Wi-HTest: Compliance Test Suite for Diagnosing Devices in Real-Time WirelessHART Network[END_REF] is a commercial tool that provides means for collecting and decoding packets captured from WH networks, but the visualization of the results is done in a spreadsheet format, which makes the analysis of the information difficult.
The WirelessHART Protocol
The WH standard is part of version 7 of the HART specification [START_REF] Kim | When HART Goes Wireless: Understanding and Implementing the WirelessHART Standard[END_REF][START_REF] Song | Improving pid control with unreliable communications[END_REF]. It features a secure network and operates on the 2.4 GHz ISM (Industrial, Scientific and Medical) radio band. The physical layer is based on the IEEE 802.15.4 standard in which direct sequence spread spectrum is employed [10]. A WH network supports a variety of devices, including field devices, adapters, portable devices, access points, network manager and a gateway to connect to a host application. The protocol allows multiple access and media arbitration by means of Time Division Multiple Access (TDMA) [START_REF] Rappaport | Wireless Communications -Principles & Practice[END_REF]. The links among devices are programmed and allocated in different time slots by the NM. The NM continuously adapts the routing and schedule due to changes in network topology and demand for communication [START_REF] Chen | WirelessHART: real-time mesh network for industrial automation[END_REF]. The following subsections present the ISO/OSI layers of the protocol.
Data-link Layer
The Data-Link Layer is responsible for the secure, reliable and error-free communication of data between WH devices [START_REF][END_REF]. Communications are performed in 10 ms timeslots, in which two devices are assigned to communicate. A communication transaction within a slot supports the transmission of a Data-Link Protocol Data Unit (DLPDU) from a source, followed by an acknowledgment DLPDU from the addressed device. To enhance reliability, a channel-hopping mechanism is combined with TDMA. The DLPDU structure is presented in Fig. 1.
The CRC-16 ITU-T [14] is used for bit error detection and AES-CCM* [START_REF] Dworkin | Recommendation for Block Cipher Modes of Operation: The CCM Mode for Authentication and Confidentiality[END_REF] is used for message authentication. Authentication uses the WH Well-Known Key for advertisement DLPDUs and for messages of joining devices. All other communications use the Network Key (provided by the NM when a device joins the network). The nonce used is a combination of the Absolute Slot Number (ASN) and the source address of the packet. The ASN counts the total number of slots elapsed since the network's birth and is known by the devices through the advertisement packets. Five types of DLPDU packets are defined: Advertisement, Acknowledge, Data, Keep-Alive and Disconnect.
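To illustrate how a monitoring tool can recompute the DLPDU authentication tag, the sketch below builds a CCM* nonce from the ASN and the source address and checks a 4-byte MIC. It is only a minimal sketch: the exact byte layout of the nonce, the use of the DLPDU bytes as associated data, and the use of the standard AES-CCM primitive as a stand-in for CCM* are assumptions that would have to be validated against the WirelessHART specification.

```python
# Minimal sketch of DLPDU MIC verification (assumed layout: nonce = 5-byte ASN,
# big-endian, followed by the 8-byte source address; 4-byte MIC).
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

def build_nonce(asn: int, src_address: bytes) -> bytes:
    # 13-byte CCM nonce built from the ASN and the packet's source address
    return asn.to_bytes(5, "big") + src_address.rjust(8, b"\x00")

def verify_mic(key: bytes, asn: int, src_address: bytes,
               dlpdu_without_mic: bytes, received_mic: bytes) -> bool:
    ccm = AESCCM(key, tag_length=4)            # 4-byte tag, as used for the WH MIC
    nonce = build_nonce(asn, src_address)
    # Authentication-only mode: empty plaintext, DLPDU bytes as associated data,
    # so the ciphertext returned by encrypt() is exactly the tag.
    expected_mic = ccm.encrypt(nonce, b"", dlpdu_without_mic)
    return expected_mic == received_mic
```

The same routine can be reused with the Well-Known Key, the Network Key or a session key, depending on the packet type being processed.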
Network Layer
The Network Layer provides routing, end-to-end security and transport services. Data DLPDU packets contain in their payload a Network Layer Protocol Data Unit (NPDU), shown in Fig. 2. The NPDU comprises three parts: the Network Layer header, with routing and packet time information; the Security Layer, which ensures private communication; and the enciphered payload, containing the information being exchanged over the network [16].

Graph routing. A graph contains paths that connect different devices of the network. The NM is responsible for creating the graphs and configuring them on each device through transport layer commands [START_REF] Han | Reliable and Real-Time Communication in Industrial Wireless Mesh Networks[END_REF]. A graph defines a set of direct links between a source and a final destination and can also provide redundant paths. To send a packet using this method, the source device writes the specific Graph ID number in the NPDU header. All devices on the path must be preconfigured with graph information that specifies the neighbors to which packets may be forwarded.
Source routing. Source routing provides a single directed path between the source and the destination device. The list of devices that the packet must traverse is statically specified in the NPDU header of the packet [START_REF] Chen | WirelessHART: real-time mesh network for industrial automation[END_REF]. This method does not require the configuration of graphs and routes in the devices.
Superframe routing. In this method, packets are assigned to a specific superframe and the device sends the message according to the identification of that superframe. The forwarding device selects the first available slot in the superframe and sends the message, so the superframe must contain links that lead the packet to its destination. Superframe routing is identified in the NPDU header using the Graph ID field: if the field value is less than 255, routing is done via the superframe; if the value is 256 or more, routing is done via graphs. A combination of superframe routing and source routing is also allowed; in this case, the packet is forwarded through the source list using slots configured inside the specified superframe.
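To make the interplay of the three methods concrete, the following sketch derives a forwarding decision from the NPDU routing fields described above. It is a simplified, hypothetical decision function: the attribute names and the handling of the threshold follow our reading of the text, not the specification itself.

```python
# Simplified sketch of a WH forwarding decision based on the NPDU routing fields.
def select_routing(npdu):
    """npdu is assumed to expose graph_id and an optional source_route list."""
    if npdu.source_route:                       # explicit list of devices to traverse
        if npdu.graph_id is not None and npdu.graph_id < 255:
            return ("superframe+source", npdu.graph_id, npdu.source_route)
        return ("source", None, npdu.source_route)
    if npdu.graph_id is not None and npdu.graph_id >= 256:
        return ("graph", npdu.graph_id, None)   # forward along the preconfigured graph
    return ("superframe", npdu.graph_id, None)  # use the first available slot in it
```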
Transport Layer
The Transport Layer provides means to ensure end-to-end packet delivery and carries the device status and one or more commands. The enciphered payload of the Security Layer contains a Transport Layer Protocol Data Unit (TPDU). Fig. 3 shows the structure of the TPDU packet.

The structure of the proposed tool is presented in Fig. 4. The tool provides means for capturing and decoding data, obtaining network information and visualizing the routes configured in the devices.
Capture
The capture of the packets exchanged by the nodes is carried out in a passive way by installing one or more sniffers within the area of the network. The sniffers also add a timestamp to the captured packets. The deployed sniffers may not be able to hear all packets that occur in the network, for reasons involving radio sensitivity, positioning and noise. A partial coverage of the network can still meet the requirements of some types of analysis for the WH protocol, and has the advantage of limiting the amount of data processed in later steps. Further information about sniffer deployment can be found in [START_REF] Zeng | Delay monitoring for wireless sensor networks: An architecture using air sniffers[END_REF]. For routing evaluation, sniffers may be deployed close to the Access Points, where all the management data to and from the NM passes. An important factor to be observed in the diagnosis of the WH protocol is the communication on multiple channels [START_REF][END_REF], which requires sniffers able to monitor the 16 channels simultaneously. Another issue is that the use of multiple sniffers introduces the need for a synchronization mechanism, since packets may be overheard by different sniffers whose clocks differ slightly [START_REF] Ringwald | Deployment of Sensor Networks: Problems and Passive Inspection[END_REF]. A merging process is necessary to combine the captures of several sniffers into a single trace, ordered according to the timestamps of the packets. Merging methods can be found in [START_REF] Ringwald | Deployment of Sensor Networks: Problems and Passive Inspection[END_REF], [START_REF] Yu | DiF: A Diagnosis Framework for Wireless Sensor Networks[END_REF], and [START_REF] Chen | Using Passive Monitoring to Reconstruct Sensor Network Dynamics[END_REF].
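If every sniffer already delivers its packets ordered by its local timestamp, the merge step itself can be kept very simple. The sketch below merges several per-sniffer traces into one time-ordered trace and drops duplicates of the same frame heard by more than one sniffer; the (timestamp, raw bytes) trace format and the use of the raw bytes as a duplicate key are assumptions made for illustration.

```python
import heapq

def merge_traces(traces):
    """traces: list of iterables of (timestamp, raw_frame) pairs, each already
    sorted by timestamp. Yields a single time-ordered trace without duplicates."""
    seen = set()
    for timestamp, raw_frame in heapq.merge(*traces, key=lambda p: p[0]):
        if raw_frame in seen:           # same frame overheard by another sniffer
            continue
        seen.add(raw_frame)             # a real tool would bound this window
        yield timestamp, raw_frame
```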
In order to keep the tool flexible, the Capture Block has an interface for the input of data from different sources, such as simulators, capture log files, or a direct connection with sniffers. The received data is added to a queue to be processed.
Decoder
The Decoder Block converts a packet from raw bytes into a structured message description, according to the ISO/OSI model of WH. At the end of this process, the contents of the packets are interpreted to obtain information about the network conditions. The decoding process is complex due to AES-CCM*, which requires that information about the keys and counters is obtained and stored. The main blocks of the decoder are shown in Fig. 5 and described below. Before execution, the user must provide the system with the Network ID and the Join Key to enable the decoder to obtain the information needed for authentication and decryption.
Initially, the raw bytes of each packet are converted into its specific type of DLPDU. Packets with a wrong CRC-16 or a wrong header are identified. Once structured, the DLPDU packets that do not belong to the provided Network ID are also identified.
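This first stage can be pictured as a small filter over the raw trace. The sketch below shows one possible CRC check and Network ID filter; the CRC parameters (ITU-T polynomial 0x1021, zero initial value, bit-reflected, little-endian FCS, as commonly used for the IEEE 802.15.4 FCS) are assumptions that would need to be validated against the standard, and the header parser is left abstract.

```python
def crc16_itut(data: bytes) -> int:
    """Bit-reflected CRC-16 with polynomial 0x1021 (0x8408 reversed), init 0x0000."""
    crc = 0x0000
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc

def filter_dlpdus(trace, network_id, parse_dlpdu):
    """parse_dlpdu: caller-supplied parser returning an object with a network_id
    attribute, or None for a malformed header (kept abstract here on purpose)."""
    for timestamp, frame in trace:
        payload, fcs = frame[:-2], int.from_bytes(frame[-2:], "little")
        if crc16_itut(payload) != fcs:
            continue                               # drop frames with a wrong CRC-16
        dlpdu = parse_dlpdu(payload)
        if dlpdu is None or dlpdu.network_id != network_id:
            continue                               # wrong header or foreign network
        yield timestamp, dlpdu
```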
Network Trace. The Network Trace Block provides the decoder with the information necessary for the authentication and decryption of packets. It also holds the information discovered about the network (e.g. devices, superframes and links). Depending on the coverage of the sniffers, the data stored in the Network Trace may be similar to the data stored in the NM, which has full information about the network operation. For each new message authenticated or decrypted, the Network Trace must be updated in order to maintain up-to-date information about the keys, the counters and the network. Authentication and decryption of some packets may be compromised as a result of keys missing due to packet loss; the user should be aware of this issue when evaluating the network. For authenticating the DLPDUs, the Network Trace must keep track of the current ASN of the network, obtained from a captured advertisement packet. While the ASN of the network is not available, authentication and the subsequent processes are compromised. Each DLPDU must be authenticated in order to verify its integrity: the Message Integrity Code (MIC) field of the DLPDU is compared with the MIC obtained by applying the AES-CCM* algorithm to the raw bytes of the DLPDU. The Network Trace Block keeps track of the Well-Known Key and the Network Key; the Network Key is obtained during the join process of a device.
Once a packet is authenticated, the Network Trace is updated with the last ASN used and with the packet timestamp. Data DLPDUs are decoded at the NPDU layer, while the other types of DLPDUs are sent to the Fill Message block. Another issue involves the decryption of the NPDUs: to perform the decryption, the sniffers must hear the join process of the device, during which the Session Keys provided by the NM are obtained. Without these keys the system is not able to decrypt the contents of the Security Layer messages. For Data DLPDUs, the payload contained in the Security Layer of the NPDU is decrypted using the Join Key or the specific device's Session Keys and Session Counters.
Once decrypted, the packet is decoded at the transport layer, where a TPDU is generated. The Network Trace interprets the commands contained in the TPDU in order to maintain an updated view of the network, including Network Keys, Sessions, Superframes, Links, device timers, Services and further information. A list of all decoded packets is generated in order to allow the filtering of messages in future applications of the tool.
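Conceptually, this step is a dispatch of command types to handlers that mutate the Network Trace state. The sketch below illustrates the structure with a small handler table; the symbolic command names and the command fields are assumptions used only to illustrate the idea, not values taken from the specification.

```python
# Sketch of TPDU command dispatch updating the Network Trace state.
class NetworkTrace:
    def __init__(self):
        self.devices, self.sessions, self.superframes, self.links = {}, {}, {}, {}

    def apply(self, src, command):
        handler = self._handlers.get(command.name)
        if handler:
            handler(self, src, command)

    def _write_session(self, src, cmd):
        self.sessions[(src, cmd.peer)] = cmd.session_key   # key material for decryption

    def _write_superframe(self, src, cmd):
        self.superframes.setdefault(cmd.superframe_id, []).append(src)

    def _write_link(self, src, cmd):
        self.links.setdefault(src, []).append((cmd.neighbor, cmd.slot, cmd.channel_offset))

    _handlers = {"write_session": _write_session,
                 "write_superframe": _write_superframe,
                 "write_link": _write_link}
```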
Topology and routes
The information discovered and stored in the Network Trace is used to build an updated view of the network topology and of the routes in use. Neighbor information is used to build the topology of the network, while the routes used for packet propagation are obtained from the graphs, superframes and routes configured on each device. A graph representing each route is built for further analysis.
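As a sketch of this step, the topology and the route graphs can be represented with an ordinary directed graph library. The example below uses networkx and assumes the Network Trace exposes neighbor reports as (device, neighbor, RSL) tuples and graph routes as (graph_id, device, neighbor) entries; both shapes are illustrative assumptions.

```python
import networkx as nx

def build_topology(neighbor_reports):
    """neighbor_reports: iterable of (device, neighbor, rsl_dbm) tuples."""
    topology = nx.DiGraph()
    for device, neighbor, rsl in neighbor_reports:
        topology.add_edge(device, neighbor, rsl=rsl)   # edge annotated with signal level
    return topology

def build_route_graphs(route_entries):
    """route_entries: iterable of (graph_id, device, neighbor) tuples."""
    routes = {}
    for graph_id, device, neighbor in route_entries:
        routes.setdefault(graph_id, nx.DiGraph()).add_edge(device, neighbor)
    return routes
```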
Visualizer
The topology and the discovered routes are summarized by the Visualizer Block so that they can be easily interpreted by the user. Representations such as statistics, charts and graphs can be used for the analysis. Information about the devices and the network contained in the Network Trace may also be displayed.
Case Study
In order to evaluate the tool, we deployed a WH network in a laboratory environment. The network consisted of the following devices: a Network Manager, an Access Point and a Gateway (Emerson model 1420A), nine WH-compatible field devices developed in previous work [START_REF] Muller | Development of Wire-lessHART Compatible Field Devices[END_REF] and a Wi-Analys Network Analyzer sniffer from the HART Communication Foundation. Fig. 6a shows the WH-compatible devices and Fig. 6b the sniffer. The data collected by the sniffer was stored in a log file and later loaded into the tool. Packets were captured during a period of 120 minutes from the network's birth. The sniffer was deployed close to the access point to obtain overall information about the network. Fig. 7 shows a representation of the network. The following subsections present the analysis of the network behavior obtained from the captured packets. Before loading the file into the developed tool, we provided the Join Key (0x12345678000000000000000000000000) and the Network ID (0001) of the devices. The sensor devices publish their process variable every minute.
Network Topology evaluation
The current topology of the network is evaluated to find devices that may be bottlenecks for transferring data and devices with weak connections to their neighbors. A graph is built showing the discovered neighbors and the Received Signal Level (RSL) of the packets overheard from the neighbors. Fig. 8 shows the current graph when the analysis reaches the end of the log file. As observed, the connectivity of the network is high, as the devices can hear almost all other neighbors. The blue circle represents the Access Point of the network.
Routes used for devices to propagate data to access point
The routes configured in the devices were used as a basis to reconstruct the graphs and paths of the network. Based on the information stored in the Network Trace, our tool identified that the NM uses superframe routing. Superframe 0 contains the uplink graph [START_REF] Han | Reliable and Real-Time Communication in Industrial Wireless Mesh Networks[END_REF], used to forward data towards the access point. Superframe 1 contains the broadcast graph, which is used by the NM to send packets to all devices through a combination of source routing and superframe routing.
Fig. 1. DLPDU structure
Fig. 2. NPDU structure
Fig. 3. TPDU structure
Fig. 4. Monitoring tool structure
Fig. 5. Packet decoding sequence
Fig. 6. WH compatible devices (a) and sniffer (b)
Fig. 7. Deployed network representation
Fig. 8. Network topology graph
Fig. 9. Uplink graph contained in Superframe 0
Conclusion and Future Work
The use of wireless networks in industrial control and monitoring applications can present performance problems due to several factors. To track down such problems, it is necessary to inspect the conditions of the network and of the nodes after deployment.
In this paper we present a software tool for the inspection of routing in WH networks. The capture of information is done in a passive way by sniffers. The captured packets are used to build an overview of the network topology and of the routes used in the communications. Visualization of the obtained information is done via graphs, charts and lists.
The case study has shown that the tool can provide important information about the network conditions and can help the user to identify problems and understand the characteristics of the protocol and of the devices. The user must be aware that packet loss at the sniffers may affect the analysis.
In ongoing work, we are using this tool to analyze a WH deployment in an industrial application, in order to verify different aspects of the network topology and of the routing strategies used in WH equipment. The analyzed information shall be used for improvements in the devices and in the Network Manager routing and scheduling algorithms, to better adjust the network performance to the desired applications. The development of enhanced algorithms for routing and scheduling in WirelessHART networks is still a necessity.
"1001369",
"1001370",
"1001371",
"1001372",
"1001373"
] | [
"302610",
"302610",
"302610",
"302610",
"302610"
] |
01466672 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466672/file/978-3-642-38853-8_17_Chapter.pdf | Philipp Reinkemeier
email: [email protected]
Ingo Stierand
Compositional Timing Analysis of Real-Time Systems based on Resource Segregation Abstraction
For most embedded safety-critical systems not only the functional correctness is of importance, but they must provide their services also in a timely manner. Therefore, it is important to have rigorous analysis techniques for determining timing properties of such systems. The ever increasing complexity of such real-time systems calls for compositional analysis techniques, where timing properties of local systems are composed to infer timing properties of the overall system. In analytical timing analysis approaches the dynamic timing behavior of a system is characterized by mathematical formulas abstracting from the statedependent behavior of the system. While these approaches scale well and also support compositional reasoning, the results often exhibit large over-approximations. Our approach for compositional timing analysis is based on ω-regular languages, which can be employed in automata-based model-checking frameworks. To tackle the scalability problem due to state-space explosion, we present a technique to abstract an application by means of its resource demands. The technique allows to carry out an analysis independently for each application that shall be deployed on the same platform using its granted resource supply. Integration of the applications on the platform can then be analyzed based on the different resource supplies without considering details of the applications.
Introduction and Related Work
Developing safety-critical real-time systems is becoming increasingly complex as the number of functions realized by these systems grows. Additionally an increasing number of functions is realized in software, which is then integrated on a common target platform in order to save costs. The integration on a common platform causes interferences between the different software-functions due to their shared resource usage. It is desirable to bound these interferences in a way to make guarantees about the timing behavior of the individual softwarefunctions. A schedulability analysis delivers such bounds for interferences between software-tasks sharing a CPU by means of a scheduling strategy.
We present a compositional analysis framework using real-time interfaces based on ω-regular languages. Following the idea of interface-based design, components are described by interfaces and can be composed if their corresponding interfaces are compatible. The contribution of this work is a framework that allows formally capturing the resource demand of an interface in what we call a segregation property. Compatibility of interfaces can then be reduced to compatibility of their segregation properties. Further, a refinement relation is defined, which leads to a sufficient condition for the compatibility of segregation properties. The framework can be used in (but is not restricted to) scenarios like the following: the bottom part of Fig. 1 shows a target platform envisioned by, say, an Original Equipment Manufacturer (OEM). It consists of a processing node (P). Suppose the OEM wants to implement two applications, components C_1 and C_2, on this architecture and delegates their actual implementation to two different suppliers. Both applications share the same resource of the target platform, i.e. tasks t_1, t_2 and t_3 are all executed on P after integration. Therefore, a resource reservation is assigned to each component, guaranteeing a certain amount of resource supply. The timing behavior of both components can then be analyzed independently of each other based on their resource demands and the guaranteed resource supply. Verification of the successful integration of C_1 and C_2 on the platform then amounts to checking whether the reserved resource supplies can be composed.
There have been considerable studies on compositional real-time scheduling frameworks [START_REF] Thiele | Real-time interfaces for composing realtime systems[END_REF][START_REF] Wandeler | Interface-based design of real-time systems with hierarchical scheduling[END_REF][START_REF] Shin | Periodic resource model for compositional real-time guarantees[END_REF][START_REF] Henzinger | An interface algebra for real-time components[END_REF][START_REF] Easwaran | Compositional analysis framework using edp resource models[END_REF]. These studies define interface theories for components abstracting the resource requirement of a component by means of demand functions [START_REF] Thiele | Real-time interfaces for composing realtime systems[END_REF][START_REF] Wandeler | Interface-based design of real-time systems with hierarchical scheduling[END_REF], bounded-delay resource models [START_REF] Henzinger | An interface algebra for real-time components[END_REF], or periodic resource models [START_REF] Shin | Periodic resource model for compositional real-time guarantees[END_REF][START_REF] Easwaran | Compositional analysis framework using edp resource models[END_REF]. Based on these theories the required resources of a component, captured by its interface, can for example be abstracted into a single task. This approach gives rise to hierarchical scheduling frameworks where interfaces propagate resource demands between different layers of the hierarchy. Our proposed resource segregation abstraction of a component is an extension of the real-time interfaces presented in [START_REF] Bhaduri | A proposal for real-time interfaces in speeds[END_REF]. Contrary to the aforementioned approaches, our real-time interfaces and resource segregation are based on ω-regular languages. That means, the approach can for example be employed in automata-based model-checking frameworks. In addition the results we present are not bound to specific task and resource models, like periodic or bounded delay.
Analytic methods provide efficient analysis by abstracting from concrete behavior. The drawback is that this typically leads to over-approximations of the analysis results. Computational methods on the other hand, such as modelchecking for automata ([2, 9, 6]), typically provide the expressive power to model and analyze real-time systems without the need for approximate analysis methods. This flexibility comes with costs. Model-checking is computationally expensive, which often prevents analysis of larger systems. The contribution of this paper will help to reduce verification complexity for the application of computational methods.
The paper is structured as follows: Section 2 briefly introduces real-time interfaces presented in [START_REF] Bhaduri | A proposal for real-time interfaces in speeds[END_REF], where task executions are characterized by ω-regular languages over time slices occupied by the respective tasks. Section 3 provides the formalization of segregation properties for interfaces, which can be used to abstract from concrete behavior of an interface. We define refinement and composition operations on segregation properties that preserve schedulability of the composition of the associated interfaces. Section 4 shows that our approach is consistent with the (analytical) resource models of [START_REF] Shin | Periodic resource model for compositional real-time guarantees[END_REF][START_REF] Easwaran | Compositional analysis framework using edp resource models[END_REF] by the definition of a translation. Section 5 discusses further work and concludes the paper.
Real-Time Interfaces
The resource segregation abstraction presented in this work is based on the real-time interfaces presented in [START_REF] Bhaduri | A proposal for real-time interfaces in speeds[END_REF]. Therefore, we briefly summarize the basic definitions. We assume a set of real-time components is to be executed on a set of resources such as processing nodes and communication channels. Each component consists of a set of tasks. A real-time interface of a component specifies the set of all its legal schedules when it is executed on the resources. For example, consider a component with two tasks 1 and 2, which are scheduled on a single resource in discrete slots of some fixed duration, as shown in Fig. 1 for component P_1. A schedule for this component can be described by an infinite word over the alphabet {0, 1, 2}: 0 means the resource is idle during the slot, and 1 and 2 mean that the corresponding task is running. The real-time interface of a component is an ω-language containing all legal schedules of the component. Therefore, an interface with a non-empty language contains at least one schedule and is said to be schedulable. Interfaces can be composed (by intersection) to check whether two components together are schedulable.

Definition 1. A real-time interface I is a tuple (L, T), where T ⊆ 𝒯 is the set of tasks of the interface (𝒯 denoting the set of all tasks) and L ⊆ T^ω is an ω-regular language denoting the set of legal schedules of I. The empty task 0, denoting an idle slot, is part of every interface, i.e. 0 ∈ T.
The intuition of an interface is that it describes the set of schedules that satisfy the requirements of its component. An interface with an empty language is said to be not schedulable. Conversely, an interface with a non-empty language is said to be schedulable, as at least one legal schedule exists for the interface.
Example: Suppose that task t_1 in Fig. 1 is a periodic task t with period p = 5 and an execution time e = 3. The language of its interface I_1 can be described by the following regular expression: L_I1 = 0^{<5} [t^3 ||| 0^2]^ω, where u ||| v denotes all possible interleavings of the finite words u and v. That means a schedule is legal for interface I_1 as long as it provides 3 slots to t during every time interval of 5.

Observe that interface I_1 captures an assumption about the activation pattern of task t_1. The part 0^{<5} of the regular expression represents all possible phasings of the initial task activations. This correlates to the formalism of event streams, a well-known representation of task activation patterns in real-time systems (cf. [START_REF] Richter | Compositional Scheduling Analysis Using Standard Event Models[END_REF]) by lower and upper arrival curves η^-(Δt) and η^+(Δt).
Fig. 2. Arrival curves of periodic events
Key to dealing with interfaces having different alphabets is the following projection operation: for the alphabet T′ and language L′ of an interface I′ and T ⊆ T′, we consider the projection pr_T(L′) to T, which is the unique extension of the function T′ → T that is the identity on the elements of T and maps every element of T′ \ T to 0. We will also need the inverse projection pr^{-1}_{T′}(L), for T′ ⊇ T, which is the language over T′ whose words projected to T belong to L.
Definition 2. Given two interfaces I_1 = (L_1, T_1) and I_2 = (L_2, T_2), the parallel composition I_1 ∥ I_2 is the interface (L, T), where
- T = T_1 ∪ T_2 and
- L = pr^{-1}_T(L_1) ∩ pr^{-1}_T(L_2).

The intuition of this definition is that a schedule is legal for I_1 ∥ I_2 if its restriction to T_2 is legal for I_2 and its restriction to T_1 is legal for I_1. That means tasks of an interface are allowed to run when the resource is idle in the other interface.

Definition 3. Given two interfaces I = (L, T) and I′ = (L′, T′), then I′ refines I, written I′ ⊑ I, if and only if:
- T′ ⊇ T and
- pr_T(L′) ⊆ L.

The intuition of this definition is that all schedules legal in I′ are (modulo projection) also legal schedules in I, and I′ is able to schedule more tasks from the set T′ \ T in the gaps left by the schedules of I.
The following lemmas provide useful properties of the real-time interface framework.
Lemma 1. Parallel composition of interfaces is associative and commutative.
An associative and commutative composition operation guarantees that composable interfaces may be assembled together in any order. Therefore, real-time interfaces support incremental design.

Lemma 2. Refinement of interfaces is a partial order.

As refinement is a partial order, it is ensured that if I′ ⊑ I, then for any interface I″ ⊑ I′ it also holds that I″ ⊑ I. That means interfaces can be refined iteratively.

Lemma 3. Refinement is compositional. That means I′ ⊑ I implies I′ ∥ J ⊑ I ∥ J.

A compositional refinement allows refining composable interfaces separately while maintaining composability. Together with the commutativity and associativity of the composition operator, we have that real-time interfaces support independent implementability. Proofs for Lemmas 1-3 are presented in [START_REF] Bhaduri | A proposal for real-time interfaces in speeds[END_REF].
Resource Segregation
While real-time interfaces are powerful enough to cope with complex designs and scenarios like the one depicted in Fig. 1, the refinement relation involves complex language inclusion checks. Moreover, the details of all components and their tasks must be known in order to compose them. Therefore, we introduce an abstraction for a real-time interface consisting of multiple tasks, which we call a segregation property. These segregation properties are defined such that compositionality of segregation properties ensures compositionality of their respective interfaces. That means, given two interfaces I_1 and I_2 and segregation properties B_I1 and B_I2, we look for a composition operator ∥ and a simple property ϕ such that

B_I1 ∥ B_I2 ⊨ ϕ  ⟹  I_1 ∥ I_2 is schedulable
Interface Composability
Recall that an interface I describes a set of legal schedules. For the activation patterns of its tasks, it represents a set of possible discrete slot allocations under which the tasks can be executed successfully. A segregation property B_I for an interface abstracts from the tasks of the interface and only exposes a set of possible slot reservations for which the interface is schedulable. Note that a segregation property for an interface may indeed contain more available slots than are used by the respective interface.

Composition of the segregation properties B_I1 and B_I2 of interfaces I_1 and I_2 then combines non-conflicting slot reservations of B_I1 and B_I2. The property ϕ states that at least one such non-conflicting slot reservation exists, i.e. the set of slot reservations defined by B_I1 ∥ B_I2 is not empty.

We now define the slot reservation and the segregation property of an interface, and define a composition operation. We use the composition operation on slot reservations to derive a composability condition for interfaces based on their segregation properties.

Definition 4. A slot reservation B is an ω-regular language over {0, 1}, B ⊆ {0, 1}^ω. Each ω-word b ∈ B defines an infinite sequence of slots that are either available (0) or unavailable (1).

We call B_I a segregation property for interface I if and only if for all b ∈ B_I it holds that I is schedulable, for all its activation patterns, using only the available slots defined in b.

Example: For task t_1 in the example above, B_I1 is a segregation property of I_1 if every word in it contains at least 3 available slots in each subsequence of length 5. A valid segregation property for I_1 is B_I1 = ⋃_{σ ∈ C^5_3} σ^ω, where C^5_3 denotes the set of finite words σ = σ_1 ... σ_5 over {0, 1} of length 5 in which 3 of the 5 symbols are σ_i = 0 and the remaining symbols are 1.
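As a small sanity check of this condition, the sketch below tests a finite prefix of a slot word against the requirement of at least 3 available slots (0) in every window of 5; working on finite prefixes and sliding windows is our reading of the example, chosen only for illustration.

```python
def satisfies_demand(prefix: str, window: int = 5, needed: int = 3) -> bool:
    """prefix: string over '0'/'1', where '0' marks a slot available to the interface.
    Checks that every window of the given length offers at least `needed` available slots."""
    return all(prefix[i:i + window].count("0") >= needed
               for i in range(len(prefix) - window + 1))

# e.g. the periodic word (00011)^ω satisfies the demand of I_1, while 0001110001... does not
assert satisfies_demand("0001100011")
assert not satisfies_demand("0001110001")
```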
We define the parallel composition B_1 ∥ B_2 of slot reservations such that we select only those pairs b_1 ∈ B_1, b_2 ∈ B_2 where no slot is available (0) in both words, and combine them into a single word b ∈ B_1 ∥ B_2 in which the slots that are available in either b_1 or b_2 are available and all other slots remain unavailable (1).

For convenience, we make use of the binary operators ∧ and ∨ defined on the elements of {0, 1} with their usual Boolean interpretation. We extend both operators to ω-regular words b_i = b_i1 b_i2 ... ∈ {0, 1}^ω by their component-wise application: b_1 ∧ b_2 = (b_11 ∧ b_21)(b_12 ∧ b_22)..., and ∨ respectively.

Definition 5. Given two slot reservations B_1 and B_2, the parallel composition B_1 ∥ B_2 is defined as:
B_1 ∥ B_2 = { b_1 ∧ b_2 | b_1 ∈ B_1, b_2 ∈ B_2 and b_1 ∨ b_2 = 1^ω }

Example: Fig. 3 illustrates the composition of two slot reservations.

The following lemma states the desired condition for the composability of interfaces depending on their segregation properties:

Lemma 4. Two interfaces I_1 and I_2 are composable and can be scheduled together if the parallel composition of their segregation properties is not empty, i.e. B_I1 ∥ B_I2 ≠ ∅.
Proof: As B_I1 is a segregation property for I_1 and B_I2 is a segregation property for I_2, I_1 is schedulable for all its activation patterns for every b ∈ B_I1, and I_2 for every b ∈ B_I2, respectively. According to Definition 5, every word b ∈ B_I1 ∥ B_I2 is a sequence of slots for which there exists a pair b_1 ∈ B_I1, b_2 ∈ B_I2 such that no slot is available in both words. Thus, interface I_1 can be scheduled using only those available slots of b that are also available in b_1, and interface I_2 using those available in b_2, respectively. Consequently, each slot that is unavailable in b_1 ∈ B_I1 is not used by I_1, and I_2 may schedule one of its tasks in such a slot if it is available in b_2 ∈ B_I2; the same argument applies for interface I_2. Thus, it follows that the language of I_1 ∥ I_2 is not empty, which according to Definition 1 means that a legal schedule for I_1 ∥ I_2 exists.
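The composition check of Lemma 4 is easy to prototype over finite prefixes of slot words. The following sketch composes two finite reservations of equal length; operating on finite prefixes instead of ω-words is of course only an approximation for experimentation.

```python
def compose_slots(b1: str, b2: str):
    """b1, b2: equal-length strings over '0'/'1' ('0' = available).
    Returns the composed word, or None if some slot is available in both."""
    assert len(b1) == len(b2)
    if any(x == "0" and y == "0" for x, y in zip(b1, b2)):
        return None                                    # conflicting slot reservation
    return "".join("0" if "0" in (x, y) else "1" for x, y in zip(b1, b2))

def compose_reservations(B1, B2):
    """Finite-prefix analogue of Definition 5: keep all non-conflicting pairs."""
    return {w for w in (compose_slots(b1, b2) for b1 in B1 for b2 in B2) if w}
```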
Refinement of Slot Reservations
Recall that Definition 4 defines B_I to be a segregation property for interface I if and only if I is schedulable for all its activation patterns for every b ∈ B_I. From this definition we conclude that, given a segregation property B_I, any subset B′_I ⊆ B_I is also a segregation property for interface I. Further, each b = σ_1 σ_2 ... ∈ B_I defines a sequence of slots, where all slots with σ_i = 0 are available to the interface. Obviously, if the interface is schedulable for b ∈ B_I, then it is also schedulable for b′, where b′ = σ′_1 σ′_2 ... with σ′_i = 0 and σ_i = 1 for some i, and σ′_j = σ_j for all other slots j ≠ i. In other words, we can always make more slots available to an interface without impact on its schedulability.

These observations give rise to a refinement relation on slot reservations. First, we define a partial order on ω-regular words over {0, 1} as follows: let b′, b ∈ {0, 1}^ω. We say b′ ≤ b if and only if ∀i ∈ ℕ: σ_{b′i} = 1 ⟹ σ_{bi} = 1. That is, b′ precedes b if all slots that are unavailable (1) in b′ are also unavailable in b. Indeed, b might contain additional unavailable slots that are available (0) in b′. In other words: slots that are available in b are also available in b′. Obviously, a bottom element 0^ω and a top element 1^ω exist with regard to the partial order ≤. For any b ∈ {0, 1}^ω we have that 0^ω ≤ b ≤ 1^ω. We extend the relation on ω-regular words over {0, 1} to slot reservations (ω-regular languages over {0, 1}) as follows:

Definition 6. Given two slot reservations B′ and B, then B′ refines B, B′ ⊑ B, if and only if: ∀b′ ∈ B′ : ∃b ∈ B : b′ ≤ b.

The refinement relation on slot reservations is a pre-order, as mutual refinement does not necessarily imply equivalence: B′ ⊑ B and B ⊑ B′ does not imply B = B′. Note that this definition of refinement captures both observations: given a segregation property B_I, any subset B′_I ⊆ B_I is also a segregation property for I, and it holds that B′_I ⊑ B_I. Further, for a segregation property B_I, we can construct B′_I ⊑ B_I from B_I, where for some b′ ∈ B′_I we make more slots available, i.e. ∃b ∈ B_I : b′ ≤ b. Still, B′_I is a segregation property for I. Thus, given a segregation property B_I for interface I, any B′_I ⊑ B_I is also a segregation property for I. However, B′_I may be an 'over-approximation' of B_I. Consider the segregation property B_I and a subset B′_I ⊂ B_I. As the interface I is schedulable for all words b ∈ B_I, we can understand B_I as a set of alternative slot reservations 'supported' by the interface I. This alternative is lost when eliminating a word from B_I in a subset B′_I ⊂ B_I. Now consider B′_I ⊑ B_I obtained by replacing some word b ∈ B_I with a word b′ ≤ b. The interface I is schedulable using only the available slots in b; b′ may be an over-approximation, as more slots can be available in b′ than are available in b. Both over-approximations of B_I lead to an increased probability of slot conflicts when composing them with another segregation property B_Ĩ. But if that composition is still not empty, I and Ĩ are composable and can be scheduled together.

Example: Fig. 4 depicts an illustration of the preceding discussion on refinement applied to the segregation property B_I1 of I_1 from the example above.

The following lemma formalizes these observations and provides a sufficient condition for composability of interfaces (see Lemma 4):

Lemma 5. Given two interfaces I_1 and I_2 and segregation properties B_I1 and B_I2, respectively. Then for any two slot reservations B′_I1 ⊑ B_I1 and B′_I2 ⊑ B_I2 it holds that B′_I1 ∥ B′_I2 ≠ ∅ ⟹ B_I1 ∥ B_I2 ≠ ∅.

Proof: According to Definition 5, every word b ∈ B′_I1 ∥ B′_I2 is a sequence of slots for which there exists a pair b′_1 ∈ B′_I1, b′_2 ∈ B′_I2 such that no slot is available in both words. According to Definition 6, b_1 ∈ B_I1 and b_2 ∈ B_I2 exist with b′_1 ≤ b_1 and b′_2 ≤ b_2. Slots that are unavailable (1) in b′_1 are also unavailable in b_1, and the same holds for b′_2 and b_2. It follows that b_1 and b_2 can be composed, and B_I1 ∥ B_I2 contains at least one element.
Periodic Resource Models and Resource Segregation
As discussed in Section 1, the idea of resource segregation and its exploitation in compositional analysis frameworks is not new. However, to the best of our knowledge the principle has only been applied in frameworks that are based on analytical methods. For example, the frameworks proposed by I. Lee et al. [START_REF] Shin | Periodic resource model for compositional real-time guarantees[END_REF][START_REF] Easwaran | Compositional analysis framework using edp resource models[END_REF] are based on the concepts of demand bound functions dbf(Δ) and supply bound functions sbf(Δ). The function dbf(Δ) characterizes the maximal processing demand of a real-time component within any interval of length Δ. The function sbf(Δ) characterizes the minimal processing power provided by the resource in any time interval of length Δ. The real-time component is considered to be schedulable if ∀Δ : dbf(Δ) ≤ sbf(Δ). Note that the concept of service curves known from real-time calculus [START_REF] Chakraborty | A general framework for analysing system properties in platform-based embedded system designs[END_REF] is comparable with these frameworks, as described in [START_REF] Wandeler | Interface-based design of real-time systems with hierarchical scheduling[END_REF].
In this section we discuss in more detail the relation of our approach with the frameworks presented in [START_REF] Shin | Periodic resource model for compositional real-time guarantees[END_REF][START_REF] Easwaran | Compositional analysis framework using edp resource models[END_REF]. We will see that our approach is able to capture the models considered in these frameworks, and thus results established in these frameworks also apply in our setting.
Both frameworks are based on the concepts of demand bound functions and supply bound functions, where in [START_REF] Shin | Periodic resource model for compositional real-time guarantees[END_REF] a Periodic Resource Model is presented and in [START_REF] Easwaran | Compositional analysis framework using edp resource models[END_REF] an Explicit Deadline Periodic Resource Model (EDP) is presented. Both models are used to create compositional hierarchical scheduling frameworks. In both frameworks a component is a set of tasks scheduled under a specific strategy. The total resource demand of a component to schedule all its tasks is expressed as a demand bound function dbf (∆). The resource models are used to capture the amount of resource allocations of a partitioned resource, which is formally expressed as a supply bound function sbf (∆). If a component is schedulable under the considered partitioned resource (defined by the resource model), i.e. dbf (∆) ≤ sbf (∆), then the resource model can be transformed into a task and components can be composed hierarchically. Thus, the composition problem is reduced to the abstraction problem.
The periodic resource model Γ = (Π, Θ) characterizes a partitioned resource that repetitively provides Θ units of resource with a repetition period Π. The EDP resource model Ω = (Π, Θ, ∆) is an extension of the periodic resource model. It characterizes a partitioned resource that repetitively supplies Θ units of resource within ∆ time units, with Π the period of repetition. Keeping in mind the idea of transforming a resource model into a task at the next level of the hierarchy, the relation between both models becomes clear: A periodic resource model Γ = (Π, Θ) is the EDP model Ω = (Π, Θ, Π) (cf. [START_REF] Easwaran | Compositional analysis framework using edp resource models[END_REF]). Therefore, in the following we focus on EDP resource models.
Real-time Component Model
A real-time component is defined as C = ({C_1, ..., C_n}, S), where C_i is either another real-time component or a sporadic task. A sporadic task is defined by a tuple τ = (p, e, d), where p is a minimum separation time, e the execution time of the task and d a deadline relative to the release of τ. It holds that e ≤ d ≤ p. The workload C_1, ..., C_n is scheduled under a strategy S that is either RM (rate monotonic), DM (deadline monotonic) or EDF (earliest deadline first). The resource demand of a component is then the collective resource demand of its tasks under its scheduler S. The demand bound function [START_REF] Lehoczky | The rate monotonic scheduling algorithm: Exact characterization and average case behavior[END_REF][START_REF] Baruah | Algorithms and complexity concerning the preemptive scheduling of periodic, real-time tasks on one processor[END_REF] dbf_C(Δ) characterizes the maximum resource demand of a task set in any given time interval of length Δ.
In our framework real-time components translate into interfaces, where each interface I is either a composition of interfaces I = I_1 ∥ ... ∥ I_n or an 'atomic interface' in the case of a single sporadic task. Given a task t = (p, e, d), the language of the corresponding interface is L_It = 0^{≤p-1} [(t^e ||| 0^{d-e}) 0^{p-d}]^ω, where u ||| v denotes all possible interleavings of the finite words u and v. Given a component C = ({C_1, ..., C_n}, S), the condition I_C1 ∥ ... ∥ I_Cn ≠ ∅ determines whether the component is schedulable at all under some scheduling strategy. Now consider fixed priority scheduling (FPS), say rate monotonic scheduling. The component is schedulable under FPS if and only if I_FPS ⊑ I_C1 ∥ ... ∥ I_Cn. How to capture the scheduling of a task set under FPS in terms of an interface I_FPS is described in [START_REF] Bhaduri | A proposal for real-time interfaces in speeds[END_REF]. A segregation property B_IFPS of the interface I_FPS characterizes the resource demands of C = ({C_1, ..., C_n}, FPS).
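To make the per-task interface language tangible, the sketch below enumerates the slot words of one period described by the interleaving operator ||| in L_It, i.e. the e execution slots placed anywhere within the first d slots of the period. The finite, per-period enumeration is our illustration, not part of the framework itself.

```python
from itertools import combinations

def legal_periods(p: int, e: int, d: int):
    """All legal slot words of one period for a task (p, e, d): e execution slots
    placed within the first d slots, followed by p - d idle slots."""
    for positions in combinations(range(d), e):     # interleavings of t^e and 0^(d-e)
        word = ["0"] * p
        for i in positions:
            word[i] = "t"
        yield "".join(word)

# e.g. a task with p = 5, e = 3, d = 4 has C(4,3) = 4 legal placements per period
assert len(list(legal_periods(5, 3, 4))) == 4
```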
The resource demands of a component C, explicated by the demand bound function dbf_C(Δ), can be safely over-approximated by any function f(Δ) with f(Δ) ≥ dbf_C(Δ). For example, in [START_REF] Shin | Periodic resource model for compositional real-time guarantees[END_REF] a linear function ldbf_C(Δ) is given for EDF scheduling that provides an upper bound for dbf_C(Δ). In our framework, over-approximations of the resource demands B_IC of a component translate into refinements of B_IC. As discussed in Section 3.2, any B′_IC ⊑ B_IC is also a segregation property for interface I_C, albeit a potential over-approximation of the resource demands defined by B_IC.
Resource Model and Schedulability
Consider an explicit deadline periodic resource model Ω = (Π, Θ, ∆). It characterizes a partitioned resource that repetitively supplies Θ units of resource within ∆ time units, with Π the period of repetition. The partitioned resource characterized by Ω, can also be characterized by the following slot reservation:
B_Ω = 1^{≤(Π-Θ)} [(0^Θ ||| 1^{Δ-Θ}) 1^{Π-Δ}]^ω
The resource supply of a resource is the amount of resource allocations it provides. Complementary to the demand bound function for components, the supply bound function sbf_Ω(Δ) computes the minimum resource supply of Ω in any time interval of length Δ.
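For a concrete slot-based view of sbf_Ω, one can unroll a periodic supply pattern and take the worst case over all windows of a given length. The sketch below does exactly that for a finite number of periods; the particular placement of the Θ supplied slots within the window of Δ is passed in as a concrete pattern, since B_Ω admits any such placement (the example values are our own, for illustration only).

```python
def sbf_from_pattern(period_pattern: str, delta: int, periods: int = 10) -> int:
    """period_pattern: one period of the supply as a string over '0'/'1'
    ('0' = slot supplied to the component). Returns the minimum number of
    supplied slots in any window of length delta over the unrolled pattern."""
    unrolled = period_pattern * periods
    return min(unrolled[i:i + delta].count("0")
               for i in range(len(unrolled) - delta + 1))

# e.g. an EDP-like supply with Π = 6, Θ = 2, Δ = 3, supplying the first two slots:
assert sbf_from_pattern("001111", delta=6) == 2
```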
The resource supply sbf_Ω(Δ) can be safely under-approximated by any function f(Δ) with f(Δ) ≤ sbf_Ω(Δ). Analogously, in our framework under-approximations of the resource supply are captured by the refinement relation: B′_Ω, with B_Ω ⊑ B′_Ω, is a potential under-approximation of the resource supply of a resource.
In the context of EDP resource models, schedulability is defined for a real-time component C = ({C_1, ..., C_n}, S) using an EDP resource model Ω. Exact schedulability conditions are given for the scheduling strategies RM, DM and EDF. We will not go into the details of the theorems here and refer to [START_REF] Easwaran | Compositional analysis framework using edp resource models[END_REF] instead. Basically, it must hold that ∀Δ : dbf_C(Δ) ≤ sbf_Ω(Δ). Schedulability of C under Ω can be formulated in our framework as refinement: given the segregation property B_IC and the resource supply B_Ω, then C is schedulable under Ω if B_Ω ⊑ B_IC. Sufficient conditions based on over-approximations of the resource demands and under-approximations of the resource supply are induced by the transitivity of the refinement relation: given a segregation property B_IC, a slot reservation B_Ω, and B′_IC ⊑ B_IC, B_Ω ⊑ B′_Ω, then B′_Ω ⊑ B′_IC ⟹ B_Ω ⊑ B_IC.
Conclusion
This paper proposes a formalization of segregation properties enabling compositional timing analysis based on ω-regular languages. By exploiting the formalism of real-time interfaces, segregation properties allow to abstract the concrete behavior of components, and provide conditions under which the composition of a set of components results in a schedulable system. The approach supports the verification process in two directions. Firstly, the abstraction helps to reduce verification complexity, which often prevents analysis of larger systems using model-checking techniques. Secondly, the approach subsumes well-known approaches in the domain of analytical resource models. This enables the elaboration of combined methods to further reduce analysis efforts.
While this initial approach supports only single resources, future work will allow for the expression of multiple resources. In this case, slot reservations are no longer defined over the alphabet {0, 1}, but over tuples (r_1, ..., r_n) for n resources, where r_i ∈ {0, 1}. Indeed, this requires modified definitions of composition and refinement. A further extension of this approach will allow supporting multiprocessor resources. In this case, segregation properties are no longer defined over the alphabet {0, 1}, but, for example, over sets {0, 1, ..., m} characterizing the number of available processing units of the resource.
Fig. 1. Exemplary Integration Scenario using Resource Segregation
Fig. 3. Illustration of Slot Reservation Composition
Fig. 4. Illustration of Segregation Property Refinement
instead. Basically, it must hold that ∀∆ : dbf_C(∆) ≤ sbf_Ω(∆). Schedulability of C under Ω can be formulated in our framework as refinement. Given the segregation property B_I_C and the resource supply B_Ω, then C is schedulable under Ω if B_Ω ⪯ B_I_C. Sufficient conditions based on over-approximation of resource demands and under-approximation of resource supplies are induced by transitivity of the refinement relation: given the segregation property B_I_C, the slot reservation B_Ω, and B′_I_C ⪯ B_I_C and B_Ω ⪯ B′_Ω, then B′_Ω ⪯ B′_I_C ⟹ B_Ω ⪯ B_I_C.
This work was partly supported by the Federal Ministry for Education and Research (BMBF) under support code 01IS11035M, 'Automotive, Railway and Avionics Multicore Systems (ARAMiS)', and by the German Research Council (DFG) as part of the Transregional Collaborative Research Center 'Automatic Verification and Analysis of Complex Systems' (SFB/TR 14 AVACS). | 32,029 | [
"1001374",
"1001375"
] | [
"303555",
"146984"
] |
01466675 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466675/file/978-3-642-38853-8_1_Chapter.pdf | Takuya Azumi
email: [email protected]
Yasaman Samei Syahkal
Yuko Hara-Azumi
email: [email protected]
Hiroshi Oyama
email: [email protected]
Rainer Dömer
email: [email protected]
TECSCE: HW/SW Codesign Framework for Data Parallelism Based on Software Component
This paper presents a hardware/software (HW/SW) codesign framework (TECSCE) which enables software developers to easily design complex embedded systems such as massive data-parallel systems. TECSCE is implemented by integrating TECS and SCE: TECS is a component technology for embedded software, and SCE provides an environment for system-on-a-chip designs. Since TECS is based on standard C language, it allows the developers to start the design process easily and fast. SCE is a rapid design exploration tool capable of efficient MPSoC implementation. TECSCE utilizes all these advantages since it supports transformation from component descriptions and component sources to SpecC specification, and lets the developers decide data partitioning and parallelization at a software component level. Moreover, TECSCE effectively duplicates software components, depending on their degree of data parallelizing, to generate multiple SpecC specification models. An application for creating a panoramic image removing objects, such as people, is illustrated as a case study. The evaluation of the case study demonstrates the effectiveness of the proposed framework.
Introduction
Increasing complexity of embedded systems and strict time-to-market schedules are critical issues in today's system-level design. Currently, various embedded systems incorporate multimedia applications, which require more and more complex functionalities. Meanwhile, the progress of semiconductor technology has placed a great amount of hardware resources on one chip, enabling the implementation of more functionalities as hardware in order to realize efficient systems. This widens the design space to be explored and makes system-level designs further complicated; to improve design productivity, designing systems at a higher abstraction level is necessary [START_REF] Sangiovanni-Vincentelli | Quo vadis, SLD? Reasoning about the Trends and Challenges of System Level Design[END_REF].
Hardware/software (HW/SW) codesign of these systems mainly relies on the following challenging issues: (1) data parallelism to improve performance, (2) support for software developers to implement such complicated systems without knowing system-level languages such as SystemC and SpecC, (3) implementation that directly uses existing code without modification, and (4) management of communication between functionalities. To the best of our knowledge, there is no work addressing all of the above issues.
This paper presents a system-level framework (TECSCE) to cope with the preceding issues. This framework aims at enabling even software developers to easily design complicated systems such as multimedia applications, which are rich in data parallelism. For this, we integrate a component technology for embedded software, TECS (TOPPERS Embedded Component System [START_REF] Azumi | A new specification of software components for embedded systems[END_REF]), and the system-on-a-chip environment SCE [START_REF] Dömer | System-on-Chip Environment: A SpecC-Based Framework for Heterogeneous MPSoC Design[END_REF], which is based on the SpecC language. Since TECS is based on conventional C language, it allows the developers to start the design process easily and quickly. SCE is a rapid design exploration tool capable of efficient MPSoC implementation.
The contribution of this work is to present a system-level design method for software developers to deal with massively parallel embedded systems using TECS. In existing HW/SW codesign technologies, a designer needs to manually add or modify HW/SW communication sources (e.g., their size, direction, and allocator) in the input behavioral descriptions, which is complex to specify and error-prone. In contrast, in the proposed framework, the developer can design the overall system at a software component level and has no need to specify the HW/SW communication in the input description, because TECS defines the interfaces between components and the communication sources are generated automatically. Moreover, a new mechanism of duplicating components realizes data partitioning at the software component level for an effective speedup of the applications.
The rest of this paper is organized as follows. Section 2 explains TECS, SCE, and the overview of the proposed framework. Section 3 depicts a case study of adapting the proposed framework. The evaluation of the case study is shown in Section 4. Related work is described in Section 5. Finally, Section 6 concludes this paper.
TECSCE
In this section, the overviews of TECS, SCE, and a system-level design framework (TECSCE) integrating TECS and SCE are presented.
TECS
In embedded software domains, software component technologies have become popular to improve the productivity [START_REF] Azumi | A new specification of software components for embedded systems[END_REF][START_REF]AUTOSAR: AUTOSAR Specification[END_REF][START_REF] Åkerholm | The SAVE approach to component-based development of vehicular systems[END_REF]. It has many advantages such as increasing reusability, reducing time-to-market, reducing software production cost, and hence, improving productivity [START_REF] Lau | Software component models[END_REF].
TECS adopts a static model that statically instantiates and connects components. The attributes of the components and the interface sources for connecting the components are statically generated by the interface generator. Furthermore, TECS optimizes the interface sources. Hence, no instantiation overhead is introduced at runtime, and the runtime overhead of the interface code is minimized [START_REF] Azumi | Optimization of component connections for an embedded component system[END_REF]. These properties make TECS suitable for system-level designs. Furthermore, in system-level designs, parallelism and pipeline processing should be considered. TECS supports parallelism and pipeline processing on a real-time OS for multiprocessors in embedded software [START_REF] Azumi | Memory allocator for efficient task communications by using RPC channels in an embedded component system[END_REF]. Oneway calling is provided to support parallelism: a caller component does not need to wait until a callee component finishes executing. Since parallelism is already supported in TECS at the software level for multiprocessor environments, it is possible to adapt this feature for system-level designs.
Component Model in TECS A cell is an instance of a component in TECS. Cells are properly connected in order to develop an appropriate application. A cell has entry port and call port interfaces. The entry port is an interface that provides services (functions) to other cells. Each service of the entry port, called an entry function, is implemented in C language. The call port is an interface for using the services of other cells. A cell communicates in this environment through these interfaces. To distinguish the call ports of caller cells, an entry port array is used; a subscript identifies the entry port array, and the developer decides the size of the entry port array. The entry port and the call port have signatures (sets of services). A signature is the definition of the interfaces of a cell. A celltype is the definition of a cell, comparable to a class in an object-oriented language. A cell is an entity of a celltype. Figure 1 shows an example of a component diagram. Each rectangle represents a cell. The dual rectangle depicts an active cell, which is the entry point of a program such as a task or an interrupt handler. The left cell is a TaskA cell, and the right cell is a B cell. Here, tTask and tB represent the celltype names. The triangle in the B cell depicts an entry port; the connection to the entry port of a cell originates from a call port.
Component Description in TECS The description of a component in TECS can be classified into three descriptions: a signature description, a celltype description, and a build description. An example for component descriptions is presented in Section 3 to briefly explain these three descriptions5 .
SCE
SCE implements a top-down system design flow based on a specify-explore-refine paradigm with support for heterogeneous target platforms consisting of custom hardware components, embedded software processors, dedicated IP blocks, and
Fig. 2. Design flow using the proposed framework.
complex communication bus architectures. The rest of the features and the design flow are explained in the next subsection.
Overview of TECSCE
Case study for proposed framework
In this section, the proposed framework is explained through a case study. First, the target application is described. Then, two kinds of mechanisms to generate specification models (Step 4 in Figure 2) are depicted.
Target Application
The target application, named MovingObjectRemoral, used as a case study for the framework is an application that generates a panoramic image with objects such as people removed. In panoramic image view systems such as Google Street View, a user can see images from the street based on omnidirectional images. Figure 3 illustrates the target application. The application creates the image without people, as shown in the right image of Figure 3, based on the algorithm of [START_REF] Hori | Removal of moving objects and inconsistencies in color tone for an omnidirectional image database[END_REF], by using a set of panoramic images which are taken at the same position.
Since creating an image by removing obstacles needs a large number of original images, each of which has many pixels, the original program was designed only for off-line use. Because the output image depends on the place and environment, we do not know in advance how many source images are needed to create the output image. Therefore, we currently need an enormously long time to take images at each place. Our final goal is to create the output image in real time by using our framework. Figure 5 shows a signature description between tReader and tMOR, and between tMOR and tWriter. The signature description is used to define a set of function heads. A signature name, such as sSliceImage, follows the signature keyword to define the signature. The initial character ("s") of the signature name sSliceImage indicates a signature. A set of function heads is enumerated in the body of this keyword. TECS provides the in, out, and inout keywords to distinguish whether a parameter is an input and/or an output. The in keyword is used to transfer data from a caller cell to a callee cell. The oneway keyword means that a caller cell does not need to wait for the callee cell to finish. Namely, the oneway keyword is useful when a caller cell and a callee cell are executed in parallel.
TECS components for the target application
Figure 6 describes a celltype description. The celltype description is used to define the entry ports, call ports, attributes, and variables of each celltype. The singleton keyword (Line 1 in Figure 6) represents that a singleton celltype is a particular cell, only one of which exists in a system to reduce the overhead. The active keyword (Line 1 in Figure 6) represents the entry point of a program such as a task and an interrupt handler. A celltype name, such as tReader, follows a celltype keyword to define celltype. The initial character ("t") of the celltype name tReader represents the celltype. To declare an entry port, an entry keyword is used (Line 6 and 9 in Figure 6). Two words follow the entry keyword: a signature name, such as sSliceImage, and an entry port name, such as eSliceImage. The initial character ("e") of the entry port name eSliceImage represents an entry port. Likewise, to declare a call port, a call keyword is used (Line 3 and 10 in Figure 6). The initial character ("c") of the call port name cSliceImage represents a call port.
The attr and var keywords that are used to increase the number of different cells are attached to the celltype and are initialized when each cell is created. The set of attributes or variables is enumerated in the body of these keywords. These keywords can be omitted when a celltype does not have an attribute and/or a variable.
Figure 7 shows a build description. The build description is used to declare cells and to connect between cells for constructing an application. To declare a cell, the cell keyword is used. Two words follow the cell keyword: a celltype name, such as tReader, and a cell name, such as Reader (Lines 3-5, Lines 7-9, and Lines 10-11 in Figure 7). In this case, eSliceImage (entry port name) of MOR 000 (cell name) is connected to cSliceImage (call port name) of Reader (cell name). The signatures of the call port and the entry port must be the same in order to connect the cells.
cellPlugin
At the component level (Step 3 in Figure 2), the proposed framework realizes data partitioning. A new plugin named cellPlugin is proposed to duplicate cells for data partitioning and connect the cells. There are two types of cellPlugin: RepeatCellPlugin and RepeatJoinPlugin.
RepeatCellPlugin supports the duplication of cells depending on the slice count, i.e., the number of data partitions, and the connection between the call ports of the duplicated cells and the entry ports of the connected cell in the original build description (Line 6 in Figure 7). RepeatJoinPlugin provides the connection between the call ports of the duplicated cells generated by RepeatCellPlugin (Line 2 in Figure 7). Note that this makes it easy to duplicate MOR cells to realize data partitioning and parallelization, as shown in Figure 4.
cd2specc
In this subsection, the policies for transforming a component description into a specification model in SpecC language are described. The basic policy of the transformation is that a cell corresponds to a behavior and an argument of a signature function corresponds to a channel in SpecC language, respectively. The tReader, tMOR, and tWriter celltypes correspond to the tReader, tMOR, and tWriter behaviors generated by cd2specc, respectively. The following pseudo code shows examples of the generated SpecC code.
Pseudo Code 1 tMOR behavior

Pseudo Code 1 shows the tMOR behavior. If a behavior has an entry function, the behavior receives parameters to call that entry function. In this case, the tMOR behavior receives sliced images through channels to call the entry function (eSliceImage_sendBlock) in Pseudo Code 1. Although there are several ways to realize tMOR, the pseudo code shows an algorithm that does so easily. It is based on sorting the pixels of the input images by brightness to find the background color of each pixel; the brighter color, depending on the rate value (Line 12 in Figure 6), is selected.
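The body of Pseudo Code 1 is not reproduced in this extraction. The following C fragment is only a rough sketch of the per-pixel selection described above; the slice type, its size, and all identifiers are assumptions made for illustration and do not reproduce the authors' implementation (which additionally takes the rate attribute into account).

#define SLICE_PIXELS 1024           /* assumed size of one image slice */

typedef struct {                    /* assumed layout of a sliced image */
    unsigned char r[SLICE_PIXELS];
    unsigned char g[SLICE_PIXELS];
    unsigned char b[SLICE_PIXELS];
} slice;

static slice background;            /* per-MOR-cell state (a var in TECS terms) */

/* Entry function called for every incoming slice: keep, per pixel,
 * the brighter color as the current background estimate. */
void eSliceImage_sendBlock(const slice *in)
{
    for (int i = 0; i < SLICE_PIXELS; i++) {
        int new_lum = in->r[i] + in->g[i] + in->b[i];
        int old_lum = background.r[i] + background.g[i] + background.b[i];
        if (new_lum > old_lum) {    /* the real celltype weights this decision
                                       by the rate attribute */
            background.r[i] = in->r[i];
            background.g[i] = in->g[i];
            background.b[i] = in->b[i];
        }
    }
}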
A SpecC program starts with execution of the main function of the root behavior which is named Main as shown in Pseudo Code 2. The roles of the main behavior are instantiation of behaviors, initialization of channels, connection of channels between behaviors, and management of execution of the other behaviors.
Behavioral synthesis tools typically do not support all possible C language constructs, such as recursion and dynamic memory allocation. Thus, TECS component sources obeying these restrictions can be synthesized. Since recursion and dynamic memory allocation are rarely used in embedded software, these restrictions are not critical.
Figure 8 shows a specification model in SpecC language when the slice count is two. The model consists of four behaviors and four communication channels. The numbers of channels and MOR instances depend on the slice count.
Pseudo Code 2 Main behavior
Evaluation
For the experimental evaluation of the TECSCE design flow, we used the application described in Section 3 to show effectiveness of cellPlugin and cd2specc for improving design productivity.
First, we measured the number of lines of each component description generated by cellPlugin and of each SpecC code generated by cd2specc. The values in Figure 9 represent the total number of lines of generated code. When the number of data partitions is zero, the value shows the lines of common code, e.g., definitions of channel types, template code of behaviors, and implementation code based on entry functions. As can be seen from Figure 9, the lines of code grow proportionally to the slice count. In TECSCE, the developers only change the parameter for the slice count in order to manage the data partitioning. The results indicate that the communication code between behaviors has a significant impact on productivity. Therefore, it can be concluded that cellPlugin and cd2specc are useful, particularly for large slice counts.
Next, we evaluated four algorithms to realize the MOR: Bubble, Insert, Average, and Bucket. Bubble is a basic algorithm for MOR based on a bubble sort to decide the background color. Insert is based on an insertion sort. Average assumes that the background color is the average color value. Bucket is based on a bucket sort.
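As an illustration of the simplest of these four variants, the following C sketch accumulates a per-pixel average over all processed images; the data layout and names are assumptions made for this example and are not the authors' code.

#define N_PIXELS 1024

static unsigned long sum_r[N_PIXELS], sum_g[N_PIXELS], sum_b[N_PIXELS];
static unsigned long n_images;

/* Accumulate one image slice (one call per processed image). */
void average_accumulate(const unsigned char *r, const unsigned char *g,
                        const unsigned char *b)
{
    for (int i = 0; i < N_PIXELS; i++) {
        sum_r[i] += r[i];
        sum_g[i] += g[i];
        sum_b[i] += b[i];
    }
    n_images++;
}

/* The assumed background red value of pixel i is the average over all images. */
unsigned char average_background_r(int i)
{
    return (unsigned char)(n_images ? sum_r[i] / n_images : 0);
}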
Each MOR behavior was mapped onto a different core based on an ARM7TDMI (100 MHz). The execution time for processing 50 images with 128x128 pixels on every core was measured with a slice count of eight. An ISS (Instruction Set Simulator) supported by SCE was used to measure the cycle counts for the estimation of the execution time. Table 1 shows the resulting execution times for each core when the slice count is eight. These results indicate that the generated SpecC descriptions are accurately simulatable.
Not all images of the series are necessary to collect the background color for the target application, because the images in the series are almost the same. Therefore, if a few input images can be obtained per second, this is enough to generate the output image. In our experiments, two images per second were enough to generate an output image. It is possible to use this application in real time when each input image with 256x512 pixels is used on this configuration (eight cores, ARM 100 MHz, and the Bucket algorithm). If the developers want to deal with bigger images in real time, there are several options: using a higher clock frequency, increasing the number of data partitions, using hardware IPs, and so forth.
Related Work
HW/SW codesign frameworks have been studied for more than a decade.
Daedalus [START_REF] Nikolov | Daedalus: Toward composable multimedia mp-soc design[END_REF] framework supports a codesign for multimedia systems. It starts from a sequential program in C, and converts the sequential program into a parallel KPN (Kahn Process Network) specification through a KPNgen tool.
SystemBuilder [START_REF] Honda | RTOS and codesign toolkit for multiprocessor systems-on-chip[END_REF] is a codesign tool which automatically synthesizes the target implementation of a system from a functional description. It starts with a system specification in C language, in which a designer manually specifies the system functionalities as a set of concurrent processes communicating with each other through channels.
SystemCoDesigner [START_REF] Keinert | Systemcodesigner an automatic esl synthesis approach by design space exploration and behavioral synthesis for streaming applications[END_REF] supports a fast design space exploration and rapid prototyping of behavioral SystemC models by using an actor-oriented approach.
The system-on-chip environment (SCE) [START_REF] Dömer | System-on-Chip Environment: A SpecC-Based Framework for Heterogeneous MPSoC Design[END_REF] is based on the influential SpecC language and methodology. SCE implements a top-down system design flow based on a specify-explore-refine paradigm with support for heterogeneous target platforms consisting of custom hardware components, embedded software processors, dedicated IP blocks, and complex communication bus architectures.
System-level designs based UML [START_REF] Vidal | UML design for dynamically reconfigurable multiprocessor embedded systems[END_REF][START_REF] Mischkalla | Closing the gap between UML-based modeling, simulation and synthesis of combined HW/SW systems[END_REF] are proposed to improve the productivity. One [START_REF] Vidal | UML design for dynamically reconfigurable multiprocessor embedded systems[END_REF] is for exploring partial and dynamic reconfiguration of modern FPGAs. The other [START_REF] Mischkalla | Closing the gap between UML-based modeling, simulation and synthesis of combined HW/SW systems[END_REF] is for closing the gap between UML-based modeling and SystemC-based simulation.
To the best of our knowledge, there is no work addressing all of the issues mentioned in Section 1. TECSCE solves all of these issues: cd2specc, which is a part of TECSCE, models the overall system at a software component level in order to hide many implementation details, such as the communication between functionalities. The framework users do not need to specify the HW/SW communication in the input description, because the communication sources are automatically generated from the component descriptions; TECS explicitly defines the interfaces between components. Therefore, TECSCE allows existing code to be used without modification and without knowledge of system-level languages such as SystemC and SpecC. Moreover, cellPlugin, which is a part of TECSCE, supports the duplication of components to realize data partitioning at the component level for an effective speedup of the applications.
Conclusions
This paper proposed a new codesign framework integrating TECS and SCE, which enables software developers to deal with massively parallel computing for multimedia embedded systems. The advantage of our framework is that developers can directly exploit software components for system-level design without modifying the input C sources (component sources). Moreover, since TECS supports data partitioning and SCE supports MPSoCs as target architectures, our framework can deal with more complex applications (such as MOR) and can help parallelize them for efficient implementation. The evaluation demonstrated the effectiveness of the proposed framework, including cellPlugin and cd2specc, and the capability of operating the MOR application in real time. Furthermore, almost all multimedia applications can be adapted to the same model of our framework. cellPlugin and cd2specc are open-source software and will be available for download from the website at [15].
Fig. 1. Component diagram.
Figure 2 represents the design flow using the proposed framework. The circled numbers in Figure 2 represent the order of the design steps.
-Step 1: A framework user (hereafter, a developer) defines signatures (interface definitions) and celltypes (component definitions).
-Step 2: The developer implements the celltype sources (component source code) in C language. The developer can use template code based on the signature and celltype descriptions.
-Step 3: The developer describes the application structure, including the definitions of cells (instances of components) and the connections between cells. In this step, the developer decides the degree of data partitioning. If it is possible to use existing source code (i.e., legacy code), the developer can start from Step 3.
-Step 4: The SpecC specification model based on the component description, including the definitions of behaviors and channels, is generated by a TECS generator. The specification model is a functional and abstract model that is free of any implementation details.
-Step 5: The designer can automatically generate system models (transaction-level models) based on design decisions (i.e., mapping the behaviors of the specification model onto the allocated PEs).
-Step 6: The hardware and software parts in the system model are implemented by the hardware and software synthesis phases, respectively.
Fig. 3. Target application. Left images are input images; the right image is the result image.
SCE supports generating a new model by integrating the design decisions into the previous model.
Figure 4 shows a TECS component diagram for the target application. Each rectangle represents a cell, which is a component in TECS. The left, middle, and right cells are a Reader cell, an MOR (MovingObjectRemoral) cell, and a Writer cell, respectively. The Reader cell reads image files, slices each image, and sends the sliced image data to the MOR cells. The MOR cell collects the background color (RGB) of each pixel based on the input images. The Writer cell creates the final image based on the data collected by the MOR cells. Here, tReader, tMOR, and tWriter represent the celltype names.
Fig. 4. Component diagram for the target application.
1 signature sSliceImage {
2   [oneway] void sendBlock([in] const slice *slice_image);
3 };
Fig. 5. Signature description for the target application.
Fig. 6. Celltype description for the target application.
Fig. 7. Build description for the target application.
Fig. 8. Specification model in SpecC language when the slice count is two.
Fig. 9. Total number of lines of generated code.
Table 1. Results of Execution Time (ms) (slice count is eight)
Algorithm CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 CPU8
Bubble 16465.0 16343.0 16215.2 16162.5 16276.9 16325.0 16372.7 16396.3
Insert 2261.3 2360.5 2423.9 2425.4 2423.2 2384.4 2345.2 2317.8
Average 942.1 973.4 997.9 997.7 997.6 997.8 997.8 997.5
Bucket 944.9 973.2 987.7 980.8 998.8 999.3 999.4 999.2
Please refer to [START_REF] Azumi | A new specification of software components for embedded systems[END_REF] for more detailed explanations.
Acknowledgments
This work was partially supported by JSPS KAKENHI Grant Number 40582036. We would like to thank Maiya Hori, Ismail Arai, and Nobuhiko Nishio for providing the MOR application. | 26,314 | [
"1001381",
"1001382",
"1001383",
"1001384",
"1001385"
] | [
"173774",
"445806",
"146890",
"484920",
"445806"
] |
01466677 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466677/file/978-3-642-38853-8_21_Chapter.pdf | Stefan Groesbrink
email: [email protected]
On the Homogeneous Multiprocessor Virtual Machine Partitioning Problem
This work addresses the partitioning of virtual machines with real-time requirements onto a multi-core platform. The partitioning is usually done manually through interactions between subsystem vendors and system designers. Such a proceeding is expensive, does not guarantee to find the best solution, and does not scale with regard to the upcoming higher complexity in terms of an increasing number of both virtual machines and processor cores. The partitioning problem is defined in a formal manner by the abstraction of computation time demand of virtual machines and computation time supply of a shared processor. The application of a branch-and-bound partitioning algorithm is proposed. Combined with a generation of a feasible schedule for the virtual machines mapped to a processor, it is guaranteed that the demand of a virtual machine is satisfied, even if independently developed virtual machines share a processor. The partitioning algorithm offers two optimization goals, required number of processors and the introduced optimization metric criticality distribution, a first step towards a partitioning that considers multiple criticality levels. The different outcomes of the two approaches are illustrated exemplarily.
Introduction
This work targets the hypervisor-based integration of multiple systems of mixed criticality levels on a multicore platform. System virtualization refers to the division of the resources of a computer system into multiple execution environments in order to share the hardware among multiple operating system instances. Each guest runs within a virtual machine-an isolated duplicate of the real machine. System virtualization is a promising software architecture to meet many of the requirements of complex embedded systems and cyber-physical systems, due to its capabilities such as resource partitioning, consolidation with maintained isolation, transparent use of multiple processor system-on-chips, and cross-platform portability.
The rise of multi-core platforms increases the interest in virtualization, since virtualization's architectural abstraction eases the migration to multi-core platforms [11]. The replacement of multiple hardware units by a single multi-core system has the potential to reduce size, weight, and power. The coexistence of mixed criticality levels has been identified as one of the core foundational concepts for cyber-physical systems [START_REF] Baruah | Towards the design of certifiable mixed-criticality systems[END_REF]. System virtualization implies it in many cases, since the applicability of virtualization is limited significantly if the integration of systems of different criticality level is not allowed.
Contribution This work addresses the partitioning of virtual machines with real-time requirements onto a multi-core platform. We define this design problem as the homogeneous multiprocessor virtual machine partitioning problem in a formal manner, specifying the computation time demand of virtual machines and the computation time supply of a shared processor. A mapping of a given set of virtual machines among a minimum number of required processors is achieved by a branch-and-bound algorithm, such that the capacity of any individual processor is not exceeded. This automated solution provides analytical correctness guarantees, which can be used in system certification. An introduced optimization metric is a first step towards a partitioning that considers multiple virtual machine criticality levels appropriately.
System Model
Task Model and Virtual Machine Model
According to the periodic task model, each periodic task τ i is defined as a sequence of jobs and characterized by a period T i , denoting the time interval between the activation times of consecutive jobs [START_REF] Liu | Scheduling algorithms for multiprogramming in a hardreal-time environment[END_REF]. The worst-case execution time (WCET) C i of a task represents an upper bound on the amount of time required to execute the task. The utilization U (τ i ) is defined as the ratio of WCET and period: U (τ i ) = C i /T i . A criticality level χ is assigned to each task [START_REF] Vestal | Preemptive scheduling of multi-criticality systems with varying degrees of execution time assurance[END_REF]. Only two criticality levels are assumed in this work, HI and LO.
A virtual machine V k is modeled as a set of tasks and a scheduling algorithm A, which is applied by the guest operating system. A criticality level χ is assigned to each virtual machine. If a virtual machine's task set is characterized by multiple criticality levels, the highest criticality level determines the criticality of the virtual machine.
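The task and virtual machine model can be captured directly in a few data structures. The following C sketch mirrors the definitions above (two criticality levels, utilization as the sum over the task set); the names are chosen for this illustration.

#include <stddef.h>

typedef enum { LO, HI } criticality_t;

typedef struct {
    double C;            /* worst-case execution time */
    double T;            /* period */
} task_t;

typedef struct {
    const task_t *tasks;
    size_t n_tasks;
    criticality_t chi;   /* highest criticality level of its tasks */
} vm_t;

static double task_utilization(const task_t *t) { return t->C / t->T; }

/* U(V_k) = sum of the utilizations of all tasks executed in V_k */
static double vm_utilization(const vm_t *v)
{
    double u = 0.0;
    for (size_t i = 0; i < v->n_tasks; i++)
        u += task_utilization(&v->tasks[i]);
    return u;
}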
Multi-core and Virtual Processor
The target platform is a homogeneous multi-core system consisting of m identical cores of equal computing power. This implies that each task has the same execution speed and utilization on each processor core. In addition, a shared memory architecture with uniform memory access is assumed.
A virtual processor is a representation of the physical processor to the virtual machines. A dedicated virtual processor P virt k is created for each virtual machine V k . It is in general slower than the physical processor core to allow a mapping of multiple virtual processors onto a single physical processor core. A virtual processor is modeled as a processor capacity reserve [START_REF] Mercer | Processor capacity reserves: Operating system support for multimedia applications[END_REF], a function Π(t) : N → {0, 1} defined as follows:
Π(t) = 1 if the resource is allocated, and 0 if the resource is not allocated.   (1)
The computation capacity of a physical processor core is partitioned into a set of reservations. Each reservation is characterized by a tuple (Q k , Υ k ): in every period Υ k , the reservation provides Q k units of computation time. α k = Q k /Υ k denotes the bandwidth of the virtual processor.
The computational service provided by a virtual processor P virt k can be analyzed with its supply function Z k (t), as introduced by Mok et al. [START_REF] Mok | Real-time virtual resource: A timely abstraction for embedded systems[END_REF]. Z k (t) returns the minimum amount of computation time (worst-case) provided by the virtual processor in an arbitrary time interval of length t ≥ 0:
Z_k(t) = min_{t_0 ≥ 0} ∫_{t_0}^{t_0 + t} Π(x) dx .   (2)
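For a periodic reservation (Q_k, Υ_k) the exact supply function Z_k is piecewise linear; a commonly used conservative simplification is the linear lower bound α(t − 2(Υ − Q)), clipped at zero. The C sketch below uses this bound; it is an approximation of Equation (2), not the exact supply function, and the identifiers are chosen for this example.

typedef struct {
    double Q;      /* guaranteed computation time per period */
    double Y;      /* period (Upsilon) of the reservation */
} vproc_t;

static double bandwidth(const vproc_t *p) { return p->Q / p->Y; }

/* Linear lower bound on Z(t): in the worst case the reservation delivers
 * nothing for up to 2*(Y - Q) time units before supplying at rate Q/Y. */
static double lsbf(const vproc_t *p, double t)
{
    double s = bandwidth(p) * (t - 2.0 * (p->Y - p->Q));
    return s > 0.0 ? s : 0.0;
}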
Notation
All parameters of the system (number of processors and computing capacity, number of virtual machines and parameters of all virtual machines, number of tasks and parameters of all tasks) are a priori known. The symbols in this paper are therefore defined as follows:
1. τ_i = (C_i, T_i): task i with WCET C_i, period T_i, and utilization U(τ_i) = C_i / T_i
2. P = {P_1, P_2, ..., P_m}: set of processors (m ≥ 2)
3. V = {V_1, V_2, ..., V_n}: set of virtual machines (n ≥ 2)
4. τ_i ∈ V_k: task τ_i is executed in V_k
5. U(V_k) = Σ_{τ_i ∈ V_k} U(τ_i): utilization of V_k
6. χ(V_k) ∈ {LO, HI}: criticality level of V_k
The Homogeneous Multiprocessor Virtual Machine Partitioning Problem
The scheduling problem for system virtualization on multi-core platforms consists of two sub-problems:
(i) partitioning: mapping of the virtual machines to processor cores
(ii) uniprocessor hierarchical scheduling on each processor core
Sub-problem (ii) is well understood and many solutions are available, e.g. [START_REF] Lipari | Resource partitioning among real-time applications[END_REF]. This work focuses on sub-problem (i) and refers to it as the homogeneous multiprocessor virtual machine partitioning problem. More precisely, the virtual processors P^virt executing the virtual machines V have to be mapped to the physical processors P:
V --f_1--> P^virt --f_2--> P .   (3)
f 1 is a bijective function: each virtual machine V k is mapped to a dedicated virtual processor P virt k . f 2 maps 0 to n = |P virt | virtual processors to each element of P . A solution to the problem is a partition Γ , defined as:
Γ = (Γ (P 1 ), Γ (P 2 ), ..., Γ (P m )) (4)
Such a mapping of virtual machines (equivalent to virtual processors) to physical processors is correct, if and only if the computation capacity requirements of all virtual processors are met; and by consequence the schedulability of the associated virtual machines is guaranteed.
The partitioning problem is equivalent to a bin-packing problem, as for example Baruah [START_REF] Baruah | Task partitioning upon heterogeneous multiprocessor platforms[END_REF] has shown for the task partitioning problem by transformation from 3-Partition. The virtual machines are the objects to pack with size determined by their utilization factors. The bins are processors with a computation capacity value that is dependent on the applied virtual machine scheduler of this processor. The bin-packing problem is known to be intractable (NP-hard in the strong sense) [START_REF] Garey | ): Applying multi-core and virtualization to industrial and safety-related applications[END_REF] and the research focused on approximation algorithms [START_REF] Coffman | Approximation algorithms for bin packing: a survey[END_REF].
Scheduling Scheme
It is an important observation that the hypervisor-based integration of independently developed and validated systems implies partitioned scheduling. As a coarse-grained approach, it consolidates entire software stacks including an operating system, resulting in scheduling decisions on two levels (hierarchical scheduling). The hypervisor schedules the virtual machines and the hosted guest operating systems schedule their tasks according to their own local scheduling policies. This is irreconcilable with a scheduling based on a global ready queue.
Virtual Machine Scheduling In the context of this work, n virtual machines are statically assigned to m < n processors. Although a dynamic mapping is conceptually and technically possible, a static solution eases certification significantly, due to the lower run-time complexity, the higher predictability, and the wider experience of system designer and certification authority with uniprocessor scheduling. Run-time scheduling can be performed efficiently in such systems and the overhead of a complex virtual machine scheduler is avoided.
For each processor, the virtual machine scheduling is implemented based on fixed time slices. Execution time windows within a repetitive major cycle are assigned to the virtual machines based on the required utilization and the maximum blackout time. As a formal model, the Single Time Slot Periodic Partitions model by Mok et al. [START_REF] Mok | Resource partition for real-time systems[END_REF] is applied. A resource partition is defined as N disjoint time intervals {(S_1, E_1), ..., (S_N, E_N)} and a partition period P_partition, so that a virtual machine V_i is executed during the intervals (S_i + j·P_partition, E_i + j·P_partition) with j ≥ 0. Kerstan et al. [START_REF] Kerstan | Full virtualization of real-time systems by temporal partitioning[END_REF] presented an approach to calculate such time intervals for virtual machines scheduled by either earliest deadline first (EDF) or rate-monotonic (RM), with S_0 = 0 and S_i = E_{i-1}:
E_i = S_i + U(V_i) · P_partition                          in case of EDF
E_i = S_i + (1 / U^RM_lub(V_i)) · U(V_i) · P_partition     in case of RM ,   with   (5)
U^RM_lub(V_i) = n_tasks · (2^(1/n_tasks) − 1)   (6)
In case of RM, a scaling relative to the least upper bound U RM lub is required. If the partition period is chosen as
P_partition = gcd({T_k | τ_k ∈ ∪_{i=1}^{n} V_i})
, no deadline will be missed [START_REF] Kerstan | Full virtualization of real-time systems by temporal partitioning[END_REF].
The virtual machine schedule is computed offline and stored in a dispatching table, similar to the cyclic executive scheduling approach [START_REF] Baker | The cyclic executive model and ada[END_REF]. The size of this table is bounded, since the schedule repeats itself after P partition . Such a highly predictable and at design time analyzable scheduling scheme is the de facto standard for scheduling high-criticality workloads [START_REF] Mollison | Mixed-criticality real-time scheduling for multicore systems[END_REF].
In terms of the resource reservation model of the virtual processor, the bandwidth α_k of the virtual processor P^virt_k that executes virtual machine V_k is equal to (E_k − S_k) / P_partition, with Υ_k = P_partition and Q_k = α_k · Υ_k. Note that this abstraction of the computation time demand of a virtual machine to a recurring time slot that is serviced by a virtual processor (Q_k, Υ_k) allows us to regard the virtual machine as a periodic task and transforms the virtual machine partitioning problem into the task partitioning problem.
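A direct transcription of Equation (5) for EDF-scheduled guests is sketched below (S_0 = 0, S_i = E_{i-1}); the RM case would additionally scale by the utilization bound of Equation (6). The helper names are chosen for this example only.

#include <stddef.h>

/* Compute the static windows (S_i, E_i) within one partition period for
 * EDF-scheduled virtual machines, following Equation (5). util[i] is U(V_i). */
static void compute_windows(const double *util, size_t n, double P_partition,
                            double *S, double *E)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        S[i] = s;
        E[i] = s + util[i] * P_partition;
        s = E[i];
    }
    /* The bandwidth of the virtual processor of V_i is then
     * alpha_i = (E[i] - S[i]) / P_partition. */
}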
Task Scheduling Any scheduling algorithm can be applied as the task scheduler, as long as it allows the computation time requirements of the task set to be abstracted in terms of a demand-bound function dbf(V_i, t), which bounds the computation time demand that the virtual machine could request to meet the timing requirements of its tasks within a specific time interval of length t [START_REF] Shin | Compositional real-time scheduling framework with periodic model[END_REF]. As a task set cannot possibly be schedulable according to any algorithm if the total execution that is released in an interval and must also complete in that interval exceeds the available processing capacity, the processor load provides a simple necessary condition for task set feasibility: A virtual machine V_k, applying A as its local scheduler and executed by a virtual processor P^virt_k characterized by the supply function Z_k, is schedulable if and only if ∀t > 0 : dbf_A(V_k, t) ≤ Z_k(t) (compare [START_REF] Shin | Compositional real-time scheduling framework with periodic model[END_REF]).
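For implicit-deadline periodic tasks under EDF, the demand-bound function reduces to dbf(t) = Σ ⌊t/T_i⌋·C_i, so the condition above can be checked numerically against a supply bound. The sketch below reuses the task_t and vproc_t structures and the lsbf() bound from the earlier fragments; it samples the condition on a coarse time grid and is a simplified illustration, not a complete analysis.

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

/* EDF demand-bound function for implicit-deadline periodic tasks. */
static double dbf_edf(const task_t *tasks, size_t n, double t)
{
    double d = 0.0;
    for (size_t i = 0; i < n; i++)
        d += floor(t / tasks[i].T) * tasks[i].C;
    return d;
}

/* Check dbf(t) <= supply bound on a coarse time grid up to the horizon. */
static bool vm_schedulable(const task_t *tasks, size_t n,
                           const vproc_t *vp, double horizon)
{
    for (double t = 0.0; t <= horizon; t += 1.0)
        if (dbf_edf(tasks, n, t) > lsbf(vp, t))
            return false;
    return true;
}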
Partitioning Algorithm
Common task set partitioning schemes apply Bin-Packing Heuristics or Integer-Linear-Programming (ILP) approaches in order to provide an efficient algorithm [START_REF] Carpenter | A categorization of real-time multiprocessor scheduling problems and algorithms[END_REF] [START_REF] Davis | A survey of hard real-time scheduling for multiprocessor systems[END_REF]. In the context of this work, however, the number of virtual machines is comparatively small and the partitioning algorithm is to be run offline and does not have to be executed on the embedded processor. Therefore, the algorithm performs a systematic enumeration of all candidate solutions following the branch-and-bound paradigm [START_REF] Land | An automatic method of solving discrete programming problems[END_REF]. The depth of the search tree is equal to the number of virtual machines n.
Two optimization goals are considered, according to which candidates are compared. Minimizing the number of processors is the basic optimization goal. In addition, the goal can be set to maximize the CriticalityDistribution, a metric defined as follows:
Definition. The CriticalityDistribution Z denotes for a partitioning Γ the distribution of the n crit ≤ n HI-critical virtual machines among the m processors:
Z(Γ) = (1 / n_crit) · Σ_{i=1}^{m} ζ(P_i) ,   with   (7)
ζ(P_i) = 1 if ∃ P^virt_j ∈ Γ(P_i) : χ(V_j) = HI, and 0 otherwise.   (8)
For example, assuming that n_crit = 4 and m = 4, Z equals 1 if every processor hosts at least one HI-critical virtual machine, and Z equals 0.75 if one processor does not host a HI-critical virtual machine. If the maximum number of processors is not limited, this results in a mapping of each HI-critical virtual machine to a dedicated processor, which is potentially shared with LO-critical virtual machines, but not with other HI-critical virtual machines.
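Computing Z(Γ) for a candidate partition is straightforward. The C sketch below reuses the vm_t structure from above and assumes the partition is represented as an array mapping each virtual machine to a processor index; the 64-processor limit is only an assumption of this example.

/* Z = (number of processors hosting at least one HI VM) / n_crit */
static double criticality_distribution(const vm_t *vms, const int *proc_of_vm,
                                        size_t n_vms, size_t m_procs)
{
    int hosts_hi[64] = {0};            /* assumes m_procs <= 64 */
    size_t n_crit = 0, covered = 0;

    for (size_t k = 0; k < n_vms; k++) {
        if (vms[k].chi == HI) {
            n_crit++;
            hosts_hi[proc_of_vm[k]] = 1;
        }
    }
    for (size_t i = 0; i < m_procs; i++)
        covered += (size_t)hosts_hi[i];
    return n_crit ? (double)covered / (double)n_crit : 1.0;
}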
The motivation for the optimization goal criticality distribution is the Criticality Inversion Problem, defined by de Niz et al. [START_REF] De Niz | On the scheduling of mixed-criticality real-time task sets[END_REF]. Transferred to virtual machine scheduling, criticality inversion occurs if a HI-critical virtual machine overruns its execution time budget and is stopped to allow a LO-critical virtual machine to run, resulting in a deadline miss for a task of the HI-critical virtual machine. By the definition of criticality, it is more appropriate to continue the execution of the HI-critical virtual machine, which can be done for highly utilized processors by stealing execution time from the budget of LO-critical virtual machines. It is in general easier to avoid criticality inversion if virtual machines of differing criticality share a processor. If the number of virtual machines does not exceed the number of physical processors, all critical virtual machines are mapped to different physical processors. The partitioning algorithm either minimizes the number of processors or maximizes the criticality distribution, while minimizing the number of processors among partitions with the same criticality distribution.
Before generating the search tree, the set of virtual machines V is sorted according to decreasing utilization. This is motivated by a pruning condition: if at some node, the bandwidth assigned to a processor is greater than 1, the computational capacity of the processor is overrun and the whole subtree can be pruned. Such a subtree pruning tends to occur earlier, if the virtual machines are ordered according to decreasing utilization.
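The enumeration itself can be written as a short recursion. The following C sketch shows only the skeleton, with the bandwidth-overrun pruning described above; it assumes the virtual machines are pre-sorted by decreasing utilization, that α_k = U(V_k) (EDF guests, no overhead), and it fixes the objective to the minimum number of processors, omitting the criticality-distribution goal and the handling of heavy virtual machines.

#define MAX_VMS   16
#define MAX_PROCS 8

static double util_vm[MAX_VMS];     /* U(V_k), sorted in decreasing order */
static size_t n_vms;
static double load[MAX_PROCS];      /* bandwidth already assigned per core */
static int    best_procs = MAX_PROCS + 1;

/* Call branch(0, 0) after filling util_vm[] and n_vms. */
static void branch(size_t k, int used_procs)
{
    if (used_procs >= best_procs)   /* bound: cannot improve this candidate */
        return;
    if (k == n_vms) {               /* leaf: complete partition found */
        best_procs = used_procs;
        return;
    }
    /* try all processors opened so far, plus at most one fresh processor */
    int limit = (used_procs < MAX_PROCS) ? used_procs + 1 : used_procs;
    for (int p = 0; p < limit; p++) {
        if (load[p] + util_vm[k] > 1.0)   /* prune: capacity overrun */
            continue;
        load[p] += util_vm[k];
        branch(k + 1, (p == used_procs) ? used_procs + 1 : used_procs);
        load[p] -= util_vm[k];
    }
}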
We term a virtual machine heavy if certification requires that this virtual machine is exclusively mapped to a dedicated processor, or if other virtual machines can only be scheduled in the background, i.e., the heavy virtual machine is executed immediately whenever it has a computation demand. Consequently, a heavy virtual machine cannot be mapped to the same processor as other HI-critical virtual machines.
Example
The different outcome dependent on the optimization goal of the algorithm is illustrated with the examplary virtual machine set of Table 1. EDF is assumed for all virtual machines, so that a scaling is not required and α k = U (V k ). Figure 1 depicts the virtual machine to processor mapping for three different goals, with a red virtual machine identifier denoting a HI-critical virtual machine. Subfigure (a) depicts the outcome for the optimization of the number of processors. The virtual machine set is not schedulable on less than four processors. The average utilization per processor is 0.775 and the criticality distribution Z is 3/5 = 0.6. Subfigure (b) depicts the outcome for the optimization of the criticality distribution, however with a maximum number of m max = 4 processors allowed. The allocation is therefore still characterized by the minimum number of processors. The criticality distribution Z improves to 4/5 = 0.8. From a criticality point of view, this mapping is more suitable, since the options to avoid criticality inversion on processor P 3 are very limited in the first solution. Subfigure (c) depicts an unrestricted optimization of the criticality distribution, resulting in an additional processor. The optimal criticality distribution Z = 1 is achieved, however at the cost of exceeding the minimum number of processors, which leads to a decrease of the average utilization per processor to 0.62. The last mapping is the correct choice, if the five HI-critical virtual machines are heavy.
Related Work
The related problem of partitioning a periodic task set upon homogeneous multiprocessor platforms has been extensively studied, both theoretically and empirically [START_REF] Carpenter | A categorization of real-time multiprocessor scheduling problems and algorithms[END_REF]. Buttazzo et al. address the partitioning of a task set with precedence constraints in order to minimize the required overall computational bandwidth [START_REF] Buttazzo | Partitioning real-time applications over multi-core reservations[END_REF]. Peng and Shin presented a branch-and-bound algorithm in order to partition a set of communicating tasks in a distributed system [START_REF] Peng | Assignment and scheduling communicating periodic tasks in distributed real-time systems[END_REF].
Kelly et al. proposed bin-packing algorithms for the partitioning of mixed-criticality real-time task sets [START_REF] Kelly | On partitioned scheduling of fixed-priority mixedcriticality task sets[END_REF]. Using a common mixed-criticality task model (characterized by the assignment of multiple WCET values, one per criticality level in the system), they experimentally compared different kinds of task ordering according to utilization and criticality and observed that the latter ordering results in a higher percentage of feasible schedules for randomly generated task sets.
Shin and Lee introduced a formal description of the component abstraction problem (abstracting the real-time requirements of a component) and the component composition model (composing independently analyzed, locally scheduled components into a global system) [START_REF] Shin | Compositional real-time scheduling framework with periodic model[END_REF]. Easwaran et al. introduced compositional analysis techniques for the automated scheduling of partitions and processes in the specific context of the ARINC-653 standard for distributed avionics systems [START_REF] Easwaran | A compositional scheduling framework for digital avionics systems[END_REF], but did not tackle the mapping of partitions to processors. As required by the ARINC specification and as done in this work, a static partition schedule is generated at design time. Both the partitions and the tasks within the partitions are scheduled by a deadline-monotonic scheduler.
Conclusion and Future Work
This work defined the partitioning problem of mapping virtual machines with real-time constraints to a homogeneous multiprocessor architecture in a formal manner. This is the prerequisite for an algorithmic solution. Formal models were adapted to abstract and specify the computation time demand of a virtual machine and the computation time supply of a shared processor, in order to analytically evaluate whether it is guaranteed that the demand of a virtual machine is satisfied. The application of a branch-and-bound algorithm is proposed with two optimization metrics. A brief introduction on how to generate a feasible virtual machine schedule after the partitioning was given. A highly predictable and at design time analyzable scheduling scheme based on fixed time slices was chosen as this is the de facto standard for scheduling high-criticality systems.
Partitioning and schedule generation together guarantee that all virtual machines obtain a sufficient amount of computation capacity and obtain it in time, so that the hosted guest systems never miss a deadline. This automated solution provides analytical correctness guarantees, which can help with system certification. In contrast to a manual partitioning, it guarantees to find the optimal solution and scales well with regard to an increasing number of both virtual machines and processor cores. The optimization metric criticality distribution is a first step towards a partitioning that considers multiple criticality levels appropriately. The different outcomes of the two approaches were illustrated exemplarily.
The presented algorithm serves as groundwork for further research on the partitioning problem. In particular, we are going to include the overhead of virtual machine context switching, since for most real implementations it is too large to be neglected. The partitioning directly influences the virtual machine scheduling, which in turn heavily influences the number of virtual machine context switches. In addition, communication between virtual machines should be included, since the communication latency depends on whether two virtual machines share a core or not. A further interesting question is whether a more detailed analysis of the timing characteristics of the virtual machines, in order to map guests with similar characteristics to the same processor, leads to better results.
Fig. 1. Mappings for different optimization goals.
Table 1. Example: Set of Virtual Machines
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
χ LO HI LO LO HI HI HI LO LO HI
U 0.6 0.5 0.5 0.3 0.25 0.2 0.2 0.2 0.2 0.15
Acknowledgments. This work was funded within the project ARAMiS by the German Federal Ministry for Education and Research with the funding IDs 01IS11035. The responsibility for the content remains with the authors. | 24,555 | [
"1001388"
] | [
"74348"
] |
01466679 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466679/file/978-3-642-38853-8_22_Chapter.pdf | Kay Klobedanz
email: [email protected]
Jan Jatzkowski
email: [email protected]
Achim Rettberg
email: [email protected]
Wolfgang Mueller
email: [email protected]
Fault-Tolerant Deployment of Real-Time Software in AUTOSAR ECU Networks
We present an approach for deployment of real-time software in ECU networks enabling AUTOSAR-based design of fault-tolerant automotive systems. Deployment of software in a safety-critical distributed system implies appropriate mapping and scheduling of tasks and messages to fulfill hard real-time constraints. Additional safety requirements like deterministic communication and redundancy must be fulfilled to guarantee fault tolerance and dependability. Our approach is built on AUTOSAR methodology and enables redundancy for compensation of ECU failures to increase fault tolerance. Based on AUTOSAR-compliant modeling of real-time software, our approach determines an initial deployment combined with reconfigurations for remaining nodes at design time. To enable redundancy options, we propose a reconfigurable ECU network topology. Furthermore, we present a concept to detect failed nodes and activate reconfigurations by means of AUTOSAR.
Introduction
Today's automotive vehicles provide numerous complex electronic features realized by means of distributed real-time systems with an increasing number of electronic control units (ECUs). Many of these systems implement safety-critical functions, which have to fulfill hard real-time constraints to guarantee dependable functionality. Furthermore, subsystems are often developed by different partners and suppliers and have to be integrated. To address these challenges, the AUTomotive Open System ARchitecture (AUTOSAR) development partnership was founded. It offers a standardization of the software architecture of ECUs and defines a methodology to support function-driven system design. Hereby, AUTOSAR helps to reduce development complexity and enables the smooth integration of third-party features and the reuse of software and hardware components [START_REF] Fennel | Achievements and exploitation of the AUTOSAR development partnership[END_REF]. Figure 1 illustrates the AUTOSAR-based design flow steps and the resulting dependencies for the deployment of provided software components [2]. Deployment implies task mapping and bus mapping, resulting in schedules that affect each other. The problem of mapping and scheduling tasks and messages in a distributed system is NP-hard [3]. Besides hard real-time constraints, safety-critical systems have to consider additional requirements to guarantee dependability. Hence, deterministic communication protocols and redundancy concepts shall be utilized to increase the fault tolerance of such systems [START_REF] Paret | Multiplexed Networks for Embedded Systems[END_REF]. AUTOSAR supports FlexRay, which is the emerging communication standard for safety-critical automotive networks. It provides deterministic behavior, high bandwidth capacities, and redundant channels to increase fault tolerance. To further increase fault tolerance, node failures shall also be compensated by redundancy.
Fig. 1. System design flow steps and their dependencies [2]
We present an approach for real-time software deployment built on the AUTOSAR methodology to design fault-tolerant automotive systems. Our approach determines an initial deployment combined with necessary reconfigurations and task replications to compensate for node failures. The determined deployment solution includes appropriate task and bus mappings, resulting in corresponding schedules that fulfill hard real-time constraints (cf. Fig. 1). In addition, we propose a modified version of the reconfigurable ECU network topology presented in [START_REF] Klobedanz | An approach for self-reconfiguring and fault-tolerant distributed real-time systems[END_REF] to enable flexible task replication and offer the required redundancy. Regarding AUTOSAR, we propose a flexible Runnable-to-task mapping for fault-tolerant systems and present a concept for an AUTOSAR-compliant integration of our fault-tolerant approach: We propose an AUTOSAR Complex Device Driver (CDD) to detect failed nodes and initiate the appropriate reconfiguration.
The remainder of this paper is structured as follows. After related work and an introduction to AUTOSAR we present our proposal for a reconfigurable ECU network topology in Section 4. Section 5 describes our fault-tolerant deployment approach and applies it to a real-world application before we introduce a concept for AUTOSAR integration in Section 6. The article is closed by the conclusion.
Related Work
In general, scheduling of tasks and messages in distributed systems is addressed by several publications [START_REF] Pop | Scheduling with optimized communication for time-triggered embedded systems[END_REF][START_REF] Pop | Bus access optimization for distributed embedded systems based on schedulability analysis[END_REF][START_REF] Eles | Scheduling with bus access optimization for distributed embedded systems[END_REF]. Other publications propose heuristics for the design of FlexRay systems [START_REF] Ding | A ga-based scheduling method for flexray systems[END_REF][START_REF] Ding | An effective ga-based scheduling algorithm for flexray systems[END_REF][START_REF] Kandasamy | Dependable communication synthesis for distributed embedded systems[END_REF]. In [START_REF] Brendle | Dynamic reconfiguration of flexray schedules for response time reduction in asynchronous fault-tolerant networks[END_REF], strategies to improve the fault tolerance of such systems are described. However, we propose an approach for fault-tolerant deployment of real-time software specific to the AUTOSAR-based design flow. AUTOSAR divides task mapping into two steps: mapping (i) software components (SWCs) encapsulating Runnables onto ECUs and (ii) Runnables to tasks that are scheduled by an OS. Since the number of tasks captured by AUTOSAR OS is limited, Runnable-to-task mapping is generally not trivial [START_REF][END_REF]. Although some approaches solve one [START_REF] Peng | Deployment optimization for autosar system configuration[END_REF] or even both [START_REF] Zhang | Optimization issues in mapping autosar components to distributed multithreaded implementations[END_REF] steps for an AUTOSAR-compliant mapping, to our knowledge, [START_REF] Kim | An autosar-compliant automotive platform for meeting reliability and timing constraints[END_REF] is the only one considering this in combination with fault tolerance. But unlike our approach, in that work only a subset of the software requires hard real-time guarantees and each redundant Runnable is mapped to a separate task.
AUTOSAR provides a common software architecture and infrastructure for automotive systems. For this purpose, AUTOSAR distinguishes between the Application Layer, which includes hardware-independently modeled application software, the Runtime Environment (RTE), which implements communication, and the Basic Software (BSW) Layer, which provides hardware-dependent software, e.g. the OS and bus drivers.
An Application Layer consists of Software Components (SWCs) encapsulating complete or partial functionality of the application software [START_REF][END_REF]. Each Atomic-SWC has an internal behavior represented by a set of Runnables. "Atomic" means that this SWC must be mapped entirely, i.e. with all its Runnables, to one ECU. Runnables model code and represent internal behavior. AUTOSAR provides RTE events, whose triggering is periodic or depends on communication activities. In response to these events, the RTE triggers Runnables, i.e. RTE events provide the activation characteristics of Runnables. Based on RTE events, all Runnables assigned to an ECU are mapped to tasks scheduled by AUTOSAR OS.
AUTOSAR Timing Extensions describe timing characteristics of a system related to the different views of AUTOSAR [18,[START_REF] Peraldi-Frati | Timing modeling with autosar -current state and future directions[END_REF]]. A timing description defines an expected timing behavior of timing events and timing event chains. Each event refers to a location of the AUTOSAR model where its occurrence is observed. An event chain is characterized by two events defining its beginning (stimulus) and end (response). Timing constraints are related to events or event chains. They define timing requirements which must be fulfilled by the system or timing guarantees that developers ensure regarding system behavior.
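To make these notions concrete, the following C sketch models an event chain by its stimulus and response events together with a latency constraint, and checks an observed occurrence pair against it. The type and function names are our own illustration and are not part of the AUTOSAR Timing Extensions; the 3000 µs bound anticipates the end-to-end constraint of the TC system's event chain used later.

#include <stdint.h>
#include <stdbool.h>

/* Illustrative model of a timing event chain: stimulus, response, constraint. */
typedef struct {
    const char *stimulus;        /* e.g. "input at R1"   */
    const char *response;        /* e.g. "output of R10" */
    uint32_t    max_latency_us;  /* maximum end-to-end delay */
} EventChainConstraint;

/* Returns true if an observed occurrence pair satisfies the constraint. */
static bool latency_ok(const EventChainConstraint *c,
                       uint32_t stimulus_time_us, uint32_t response_time_us)
{
    return (response_time_us - stimulus_time_us) <= c->max_latency_us;
}

int main(void)
{
    EventChainConstraint tc_chain = { "input at R1", "output of R10", 3000u };
    return latency_ok(&tc_chain, 0u, 2800u) ? 0 : 1;   /* constraint fulfilled */
}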
At the Virtual Function Bus (VFB) level, communication between SWCs is modeled by connected ports. We apply the Sender-Receiver paradigm in implicit mode, i.e. data elements are automatically read by the RTE before a Runnable is invoked and (different) data elements are automatically written after a Runnable has terminated [START_REF][END_REF]. AUTOSAR distinguishes Inter-ECU communication between two or more ECUs and Intra-ECU communication between Runnables on the same ECU [21]. For Inter-ECU communication, AUTOSAR supports the FlexRay protocol, which provides message transport in deterministic time slots [22]. FlexRay makes use of recurring communication cycles and is composed of a static and an optional dynamic segment. In the time-triggered static segment, each slot of a fixed, initially defined number of equally sized slots is statically assigned to one sender node. Changing this assignment requires a bus restart. Slot and frame size, cycle length, and several other parameters are defined by an initial setup of the FlexRay schedule. The payload segment of a FlexRay frame contains data in up to 127 2-byte words. Payload data can be divided into AUTOSAR Protocol Data Units (PDUs) composed of one or more words. Hence, different messages from one sender ECU can be combined by frame packing.
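As an illustration of frame packing, the C sketch below packs several one- or two-word PDUs of one sender ECU into the payload of a single frame. The data layout is a deliberately simplified assumption and does not reproduce the AUTOSAR COM or FlexRay Interface data structures.

#include <stdint.h>
#include <string.h>

#define WORDS_PER_FRAME 127u   /* FlexRay payload: up to 127 2-byte words */

typedef struct {
    uint16_t words[2];   /* 10- to 22-bit messages occupy one or two 16-bit words */
    uint8_t  n_words;    /* 1 or 2 */
} Pdu;

/* Packs the given PDUs into the frame payload; returns the number of words used,
 * or 0 if the PDUs do not fit into one frame. */
static uint8_t pack_frame(uint16_t payload[WORDS_PER_FRAME],
                          const Pdu *pdus, uint8_t n_pdus)
{
    uint16_t used = 0;
    for (uint8_t i = 0; i < n_pdus; ++i) {
        if (used + pdus[i].n_words > WORDS_PER_FRAME)
            return 0;
        memcpy(&payload[used], pdus[i].words, pdus[i].n_words * sizeof(uint16_t));
        used += pdus[i].n_words;
    }
    return (uint8_t)used;
}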
A Reconfigurable ECU Network Topology
To increase fault tolerance in an ECU network, node failures should be compensated by redundancy and software replication. In current distributed real-time systems, failures of hardwired nodes cannot be compensated by software redundancy, as connections to sensors and actuators get lost. We propose a modified network topology distinguishing two types of ECUs [START_REF] Klobedanz | An approach for self-reconfiguring and fault-tolerant distributed real-time systems[END_REF]: (i) peripheral interface nodes, which are wired to sensors and actuators and just read/write values from/to the bus, and (ii) functional nodes, which host the functional software and communicate over the bus. Since peripheral interface nodes do not execute complex tasks, they only require low hardware capacities, allowing cost-efficient hardware redundancy. Here, we focus on distributed functional ECUs that provide and receive data via the communication bus and can therefore be utilized for redundancy and reconfiguration. In the following, the term ECU refers to functional nodes.
Fault-Tolerant Deployment Approach
In this section we present our fault-tolerant deployment approach. It comprises (i) the definition and modeling of the given SW architecture and HW topology as input for the interdependent (ii) Runnable and task mappings and (iii) bus mappings. For better traceability, all steps of our approach are applied to a real-world application.
Modeling of Software Architecture
Figure 2 illustrates the functional components of a Traction Control (TC) and an Adaptive Cruise Control (ACC) system, shows data dependencies, and provides information about their timing properties [START_REF] Kandasamy | Dependable communication synthesis for distributed embedded systems[END_REF]: worst-case execution times (WCETs) and periods. In AUTOSAR these components are modeled as SWCs, whose functional behavior is represented by Runnables. Putting each Runnable into a separate SWC enables mapping of each Runnable to an arbitrary ECU. Thus, we use Runnable-to-ECU and SWC-to-ECU mapping as synonyms. The set of Runnables is modeled as
$$R = \{\, R_i(T_i, C_i, r_i, d_i, s_i, f_i) \mid 1 \le i \le n \,\}.$$
Each Runnable R_i is described by its period T_i, WCET C_i, release time r_i, deadline d_i, start time s_i, and finishing time f_i. For the TC system, Fig. 3 shows the resulting model on VFB level with the Runnables listed in Table 1. A VFB-level model represents the given software architecture with its communication dependencies independent of the given hardware architecture and acts as input for the AUTOSAR-based system design. AUTOSAR Timing Extensions are used to annotate timing constraints for the model by means of events and event chains.
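The Runnable tuple defined above maps directly onto a small data structure. The following C sketch is a minimal illustration; the type and field names are our own, and the two sample entries take their values from Table 1, assuming the 3000 µs period of the TC system.

#include <stdint.h>

/* R_i(T_i, C_i, r_i, d_i, s_i, f_i): all times in microseconds. */
typedef struct {
    uint32_t T;   /* period                                   */
    uint32_t C;   /* worst-case execution time                */
    uint32_t r;   /* release time                             */
    uint32_t d;   /* deadline                                 */
    uint32_t s;   /* start time (assigned by the mapping)     */
    uint32_t f;   /* finishing time (assigned by the mapping) */
} Runnable;

/* Two Runnables of the TC system, values taken from Table 1. */
static Runnable R1 = { .T = 3000, .C = 200, .r = 0,   .d = 2100 };
static Runnable R5 = { .T = 3000, .C = 300, .r = 200, .d = 2400 };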
Based on the event chain R_1 → R_5 → R_8 → R_10 in Fig. 3, a maximum latency requirement defines that the delay between the input at R_1 (stimulus) and the output of R_10 (response) must not exceed the given maximum end-to-end delay (period) of 3000 µs. Timing constraints are defined for each event chain. Dependencies between Runnables imply order and precedence constraints. Considering the maximum end-to-end delay for the event chains, we define an available execution interval
$$E_i = [r_i, d_i]$$
for each Runnable R_i with release time:
$$r_i = \begin{cases} 0 & \text{if } R_i \in R_{in} \\ \max\{\, r_j + C_j \mid R_j \in R_{directPre_i} \,\} & \text{else.} \end{cases}$$
If a Runnable has no predecessors (R_i ∈ R_in), the available execution interval of R_i starts at r_i = 0. Otherwise, r_i is calculated by means of r_j and C_j of the direct predecessors (R_j ∈ R_directPre_i). The deadline of R_i is calculated as:
$$d_i = \begin{cases} \text{max end-to-end delay} & \text{if } R_i \in R_{out} \\ \min\{\, d_k - C_k \mid R_k \in R_{directSucc_i} \,\} & \text{else.} \end{cases}$$
If a Runnable has no successors (R_i ∈ R_out), the available execution interval of R_i ends at d_i = max end-to-end delay. Otherwise, d_i depends on r_k and C_k of the direct successors of R_i (R_k ∈ R_directSucc_i). Table 1 summarizes the calculated available execution intervals E_i for the Runnables of the TC and ACC systems.
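A minimal sketch of how the two formulas above can be evaluated is given below in C. It assumes that the Runnables are processed in topological order of their precedence constraints (predecessors before successors); the graph representation and function names are our own simplification.

#include <stdint.h>

#define MAX_R 32

typedef struct {
    uint32_t C, r, d;                 /* WCET, release time, deadline (us) */
    uint8_t  n_pre,  pre[MAX_R];      /* direct predecessors (indices)     */
    uint8_t  n_succ, succ[MAX_R];     /* direct successors  (indices)      */
} Rnode;

/* Computes r_i and d_i; 'order' lists the Runnable indices in topological order. */
static void compute_intervals(Rnode *R, const uint8_t *order, uint8_t n,
                              uint32_t max_e2e_delay)
{
    /* r_i = 0 if no predecessors, else max(r_j + C_j) over direct predecessors */
    for (uint8_t k = 0; k < n; ++k) {
        Rnode *ri = &R[order[k]];
        ri->r = 0;
        for (uint8_t p = 0; p < ri->n_pre; ++p) {
            uint32_t cand = R[ri->pre[p]].r + R[ri->pre[p]].C;
            if (cand > ri->r) ri->r = cand;
        }
    }
    /* d_i = max end-to-end delay if no successors, else min(d_k - C_k) over successors */
    for (int k = (int)n - 1; k >= 0; --k) {
        Rnode *ri = &R[order[k]];
        ri->d = max_e2e_delay;
        for (uint8_t s = 0; s < ri->n_succ; ++s) {
            uint32_t cand = R[ri->succ[s]].d - R[ri->succ[s]].C;
            if (cand < ri->d) ri->d = cand;
        }
    }
}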
Algorithm 1 Initial mapping Minit (excerpt)
4:  ECUtmp ← GetEarliestFinish(Etmp)
5:  MapRunnable(Ri, ECUtmp)
6: end for
7: for all Ri ∈ R \ Rin do
8:  Etmp ← GetFeasibleECUs(E)
9:  ECUtmp ← GetMinimumDelay(Ri, Etmp)
10: MapRunnable(Ri, ECUtmp)
11: end for
12: return Minit : R → E
Runnable and Task Mapping
For a feasible SWC-to-ECU and Runnable-to-task mapping, the properties of the ECUs have to be considered. The set of ECUs is E = {ECU_j | 1 ≤ j ≤ m}. We consider a homogeneous network structure; hence, the Runnable WCETs provided in Table 1 are valid for all ECUs. The objective of our approach is to determine a feasible combined solution for an initial software deployment and all necessary reconfigurations and task replications for the remaining nodes of the network in case of a node failure. Thus, each configuration has to fulfill the deadlines of all Runnables and the end-to-end delay constraints for all event chains. Therefore, our approach iteratively analyzes and reduces the resulting execution delay for each SWC-to-ECU mapping to finally ensure minimized end-to-end delays for all event chains. It starts with the initial mapping M_init described by the pseudo-code in Alg. 1. The algorithm defines the mapping order of the Runnables by sorting them by deadline and release time. Before each mapping, a schedulability test has to determine the feasible ECUs in E. For this test, we propose our Extended Response Time Analysis for Rate (Deadline) Monotonic Scheduling, which is a common scheduling approach for AUTOSAR OS:
$$X_i = (C_i + \delta_i) + \sum_{j=1}^{i-1} \left\lceil \frac{X_i}{T_j} \right\rceil C_j.$$
It combines the WCET C_i and the resulting communication delay δ_i to calculate the response time X_i for each Runnable R_i.
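The recurrence can be solved with the usual fixed-point iteration, sketched below in C. The set of higher-priority Runnables already mapped to the candidate ECU (the indices 1 to i-1 in the formula) is passed in explicitly; the data layout and function name are our own assumptions, and the iteration aborts as soon as the deadline d_i is exceeded.

#include <stdint.h>
#include <stdbool.h>

/* Higher-priority Runnables already mapped to the candidate ECU. */
typedef struct { uint32_t C, T; } HpRunnable;

/* Iterates X = (C_i + delta_i) + sum_j ceil(X / T_j) * C_j until it converges
 * or exceeds the deadline d_i. Returns true if R_i is schedulable on the ECU. */
static bool response_time_ok(uint32_t C_i, uint32_t delta_i, uint32_t d_i,
                             const HpRunnable *hp, uint32_t n_hp, uint32_t *X_out)
{
    uint32_t X = C_i + delta_i;
    for (;;) {
        uint32_t next = C_i + delta_i;
        for (uint32_t j = 0; j < n_hp; ++j)
            next += ((X + hp[j].T - 1) / hp[j].T) * hp[j].C;  /* ceil(X / T_j) * C_j */
        if (next > d_i)  return false;                        /* deadline missed     */
        if (next == X) { *X_out = X; return true; }           /* fixed point reached */
        X = next;
    }
}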
The initial mapping begins with the Runnables R_in, which have no precedence constraints, and maps them iteratively to the ECU hosting the last Runnable with the earliest finishing time. Thus, in a network with n ECUs, the first n Runnables will be mapped to empty ECUs. For Runnables with predecessors (R ∈ R \ R_in), Alg. 2 returns the ECU with the minimum execution delay. It determines the direct predecessors of R, their hosting ECUs E_pre, and the last Runnables on these ECUs (R_last). If one or more of the direct predecessors of R are last Runnables, the algorithm maps R to the same ECU as the predecessor with the latest finishing time. This avoids additional Inter-ECU communication delay for the latest input of R. If there are Runnables mapped to E_pre after all direct predecessors, the Inter-ECU communication for the input to R can take place during their execution. In this case the algorithm determines the ECU_preMin with the earliest finishing time. If there are ECUs that do not host any of the direct predecessors of R, the one with the earliest finishing time (ECU_nonPreMin) is also considered. The algorithm compares the difference ∆ between these finishing times to the communication overhead resulting from a mapping to ECU_nonPreMin. The communication overhead depends on the number of slots needed and on the slot size defined for bus communication (cf. Section 5.3). If the communication overhead is smaller than ∆, the algorithm returns ECU_nonPreMin, else it returns ECU_preMin. By means of Alg. 1 and Alg. 2 our approach determines a feasible initial mapping with minimized execution delays considering timing, order, and precedence constraints.
In a network with n ECUs, the approach has to perform n redundancy mappings. Alg. 3 calculates the redundancy mapping M_red for a Runnable set R_fail to the feasible remaining ECUs E_rem. Beside R_fail and E_rem it takes M_init as an input, meaning that the set of Runnables initially mapped to E_rem is kept for each remaining ECU. This allows combining Runnables on E_rem to tasks and reusing messages and slots in different reconfigurations. Similar to the initial mapping, the algorithm iteratively inserts the Runnables from R_fail. Hence, in each mapping step the redundancy mapping M_red is complemented by the currently performed mapping.
Algorithm 3 RedundancyMapping(Rfail, Erem, Minit)
Input: Runnables Rfail, ECUs Erem, and Mapping Minit.
Output: Redundancy mapping Mred : Rfail → Erem.
1: Mred ← Minit
2: for all Ri ∈ Rfail do
3:  Etmp ← GetFeasibleECUs(Erem)
4:  ECUtmp ← GetECUMinE2E(Ri, Etmp, Mred)
5:  Mred ← Mred ∪ MapRunnable(Ri, ECUtmp)
6: end for
7: return Mred : Rfail → Erem
In Alg. 4, for each assignment our approach determines the Runnable-to-ECU mapping resulting in the minimum overall end-to-end delay, i.e. the longest end-to-end delay of all event chains. This algorithm checks each ECU_i ∈ E_rem based on its current mapping. It complements M_cur by inserting R while preserving order and precedence constraints by means of deadlines and release times. This insertion results in Runnable shiftings and growing execution delays due to the constraints on one or more of the ECUs. The algorithm calculates the overall end-to-end delay for all event chains implied by M_i and stores it with reference to ECU_i. This results in a set of end-to-end delays (E2E), one for each Runnable-to-ECU mapping. Finally, Alg. 4 compares these values and returns the ECU with the minimum overall end-to-end delay. Fig. 4 depicts Gantt charts for the TC and ACC systems in a network with 3 ECUs. It shows how our SWC-to-ECU approach preserves the initial order of Runnables on the remaining ECUs and inserts redundant Runnables. It also shows that our approach enables an efficient Runnable-to-task mapping to reduce the number of required tasks. For this purpose, Runnables that are assigned to the same ECU and stay connected in each redundancy mapping are encapsulated in one task. Summarized, this results in 13 tasks for the initial mapping and 18 tasks for the redundant Runnables. Although each redundant Runnable is mapped to a separate task here, our approach also supports the encapsulation of redundant Runnables in one task.
Fig. 4. Gantt Charts of Runnable and task mappings for TC and ACC systems
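The Runnable-to-task mapping illustrated in Fig. 4 can be automated along the following lines. The C sketch groups Runnables that share the same host ECU in every mapping into one task; the additional requirement that grouped Runnables stay connected in the execution order is omitted here for brevity, so this is a simplification of the rule described above.

#include <stdint.h>
#include <stdbool.h>

#define N_MAPPINGS 4    /* initial mapping + one per compensated node failure */
#define N_RUNNABLES 18

/* ecu[m][i]: ECU hosting Runnable i in mapping m. */
static bool same_host_in_all_mappings(const uint8_t ecu[N_MAPPINGS][N_RUNNABLES],
                                      uint8_t a, uint8_t b)
{
    for (uint8_t m = 0; m < N_MAPPINGS; ++m)
        if (ecu[m][a] != ecu[m][b]) return false;
    return true;
}

/* Assigns a task id to every Runnable: Runnables that share the same ECU in
 * every mapping get the same task id. Returns the number of tasks created. */
static uint8_t group_into_tasks(const uint8_t ecu[N_MAPPINGS][N_RUNNABLES],
                                uint8_t task_of[N_RUNNABLES])
{
    uint8_t n_tasks = 0;
    for (uint8_t i = 0; i < N_RUNNABLES; ++i) {
        task_of[i] = n_tasks;                   /* assume a new task ...            */
        for (uint8_t j = 0; j < i; ++j) {
            if (same_host_in_all_mappings(ecu, i, j)) {
                task_of[i] = task_of[j];        /* ... unless i can join j's task   */
                break;
            }
        }
        if (task_of[i] == n_tasks) ++n_tasks;
    }
    return n_tasks;
}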
Communication and Bus Mapping
The number of Inter-ECU messages depends on the Runnable mappings; their size depends on the given software architecture. The message sizes of the TC and ACC systems are 10 to 22 bits [START_REF] Kandasamy | Dependable communication synthesis for distributed embedded systems[END_REF] and require one or two words of a FlexRay frame. For each Inter-ECU message m_i, the Runnable mappings result in an available transmission interval Tx_i = [f_send, s_recv]. Thus, m_i may be transmitted in one slot θ of the slot set Θ_i in Tx_i. The number of slots in Θ_i depends on the slot size. Here, we consider a slot size of θ_size = 25 µs, i.e. up to 6 PDUs per frame. Alg. 5 describes our bus mapping approach. It adds the Inter-ECU messages M_Mi of all Runnable mappings to a common message set M. For each message it determines the sender ECU and the transmission interval per mapping and adds them to a common set Ω_mj. Afterwards, it performs an assignment of slots to messages respectively sender ECUs. For this, all Inter-ECU messages M are considered. By analyzing all messages with the same sender ECU, the corresponding transmission intervals Ω_mi,ECUj are identified. Since the initial mapping is kept, Inter-ECU messages can be sent by the same ECU in one or more Runnable mappings (cf. Fig. 4). Thus, our approach reduces the number of needed slots for the ECU assignments. It compares the determined transmission intervals. For overlapping intervals, the first available common slot θ_map is assigned to the sender ECU for the transmission of m_i. Thus, the same message and slot is reused in different reconfigurations. For non-overlapping intervals, m_i is mapped to the first available slot in each interval. It is also checked whether the current message can be mapped to the same slot as one of the other messages by utilizing frame packing. Table 2 provides an excerpt of the determined bus mappings. It shows transmission intervals and assigned slots for messages per sender and gives examples for reuse and frame packing. Fig. 4 depicts the end-to-end delays for the event chain R_1 → R_10 and shows that the end-to-end delay constraint is fulfilled for all mappings. The same holds for all other event chains.
Reconfiguration with AUTOSAR
Having a feasible AUTOSAR-compliant SWC-to-ECU and Runnable-to-task mapping, two challenges remain to be solved by means of AUTOSAR: detecting a failed ECU and activating the appropriate redundant tasks within the ECU network according to the fault-tolerant reconfiguration. While AUTOSAR specifies a BSW module called Watchdog Manager to manage errors of BSW modules and SWCs running on an ECU, there is no explicit specification regarding the detection of failed nodes within an ECU network. Therefore, we propose to extend the AUTOSAR BSW by means of a Complex Device Driver (CDD, [23]). Using FlexRay-specific functionality provided by the BSW of the AUTOSAR Communication Stack, it can be monitored whether valid frames are received. Combined with the static slot-to-sender assignment, each ECU can identify failed ECUs. When a failed ECU is detected, each remaining ECU has to activate its appropriate redundant tasks. For this purpose we propose using ScheduleTables: a statically defined activation mechanism provided by AUTOSAR OS for time-triggered tasks used with an OSEK Counter [START_REF][END_REF]. Here, we use the FlexRay clock to support the synchronization of ScheduleTables running on different ECUs within a network. Note that tasks are only activated, i.e. tasks require an appropriate priority to ensure that they are scheduled in time. For each ECU we define one single ScheduleTable for each configuration of this ECU, i.e. a ScheduleTable activates only those tasks that are part of its corresponding configuration. Utilizing the different states that each ScheduleTable can enter, e.g. RUNNING and STOPPED, the ScheduleTable with the currently required configuration is RUNNING while all the others are STOPPED. Since in this paper we consider periodic tasks, ScheduleTables have repeating behavior, i.e. a RUNNING ScheduleTable is processed in a loop. Having an AUTOSAR-compliant concept to detect a failed ECU within a network and to manage different task activation patterns on an ECU, we need to combine these concepts. This can be done by using the BSW Mode Manager. Defining one mode per configuration on a particular ECU, our CDD can request a mode switch when a failed ECU is detected. This mode switch enforces that the currently running ScheduleTable is STOPPED and, depending on the failed ECU, the appropriate ScheduleTable enters the state RUNNING.
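The following C fragment sketches this combination of node-failure detection and reconfiguration. FrameValidInSlot(), SlotOwner(), and RequestReconfiguration() are hypothetical helpers standing in for the FlexRay services of the Communication Stack and for the mode request towards the BSW Mode Manager (e.g. BswM_RequestMode() followed by stopping and starting the corresponding ScheduleTables); the actual service names, IDs, and parameters depend on the AUTOSAR release and configuration, and the failure threshold is an assumption.

#include <stdint.h>
#include <stdbool.h>

#define N_ECUS 3

/* Hypothetical helpers wrapping FlexRay services of the Communication Stack. */
extern bool    FrameValidInSlot(uint16_t slot);  /* valid frame received in this slot? */
extern uint8_t SlotOwner(uint16_t slot);         /* statically assigned sender ECU     */

/* Hypothetical wrapper around the BSW Mode Manager: in a real system this would
 * issue a mode request whose arbitration stops the RUNNING ScheduleTable and
 * starts the ScheduleTable of the configuration for the failed ECU. */
extern void RequestReconfiguration(uint8_t failed_ecu);

static uint8_t missed_cycles[N_ECUS];
#define FAIL_THRESHOLD 2   /* assumed number of tolerated silent cycles */

/* Called by the CDD once per communication cycle for every monitored static slot. */
void Cdd_MonitorSlot(uint16_t slot)
{
    uint8_t sender = SlotOwner(slot);
    if (FrameValidInSlot(slot)) {
        missed_cycles[sender] = 0;
    } else if (++missed_cycles[sender] >= FAIL_THRESHOLD) {
        RequestReconfiguration(sender);  /* activate redundant tasks on remaining ECUs */
    }
}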
Conclusion
We presented an approach for the fault-tolerant deployment of real-time software in AUTOSAR ECU networks and applied it to real-world applications. It offers methods for task and message mappings to determine an initial deployment combined with reconfigurations. To enable redundancy, we proposed a reconfigurable network topology. Finally, we introduced a CDD for detecting failed nodes and activating reconfigurations.
Fig. 2. Functional Components of a TC (a) and an ACC (b) system [START_REF] Kandasamy | Dependable communication synthesis for distributed embedded systems[END_REF]
Fig. 3. Traction Control system model on AUTOSAR VFB level
Table 1. Runnable properties for TC and ACC systems (values in µs)
          TC System                              |  ACC System
Ri  R1-4  R5    R6    R7    R8    R9    R10      |  R11   R12   R13   R14   R15   R16   R17   R18
Ci  200   300   150   175   400   150   200      |  300   150   300   175   200   250   200   150
ri  0     200   0     0     500   900   900      |  0     0     300   0     600   600   850   800
di  2100  2400  2400  2400  2800  3000  3000     |  2350  2350  2650  2550  2850  2800  3000  3000
Algorithm 2 GetMinimumDelay(R, E)
Input: A Runnable R and ECUs E.
Output: ECU with minimum delay.
1:  RdirectPre ← GetDirectPredecessors(R)
2:  Epre ← GetHostECUs(RdirectPre, E)
3:  Rlast ← GetLastRunnables(Epre)
4:  Rcap ← Rlast ∩ RdirectPre
5:  if Rcap ≠ ∅ then
6:    ECU ← GetHostECU(GetLatestFinish(Rcap), E)
7:  else
8:    ECUpreMin ← GetEarliestFinish(Epre)
9:    if E \ Epre ≠ ∅ then
10:     ECUnonPreMin ← GetEarliestFinish(E \ Epre)
11:     ∆ ← Diff(GetFinishTime(ECUpreMin), GetFinishTime(ECUnonPreMin))
12:     if ∆ > ComOverhead then
13:       ECU ← ECUnonPreMin
14:     else
15:       ECU ← ECUpreMin
16:     end if
17:   else
18:     ECU ← ECUpreMin
19:   end if
20: end if
21: return ECU
Algorithm 4 GetECUMinE2E(R, E, Mcur)
Input: Runnable R, ECUs E, and Mapping Mcur.
Output: ECU causing minimum overall E2E delay.
1: for all ECUi ∈ E do
2:   Mi ← Mcur ∪ MapRunnable(R, ECUi)
3:   E2E_ECUi ← OverallE2EDelay(Mi)
4:   E2E ← E2E ∪ E2E_ECUi
5: end for
6: ECU ← ECUMinE2E(E2E)
7: return ECU
Acknowledgements
This work was partly funded by the DFG SFB 614 and the German Ministry of Education and Research (BMBF) through the project SANITAS (01M3088I) and the ITEA2 projects VERDE (01S09012H), AMALTHEA (01IS11020J), and TIMMO-2-USE (01IS10034A).
"1001389",
"1001390",
"1001391",
"1001392"
] | [
"46049",
"46049",
"146984",
"46049"
] |
01466684 | en | [ "info" ] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466684/file/978-3-642-38853-8_26_Chapter.pdf | André Heuer
email: [email protected]
Tobias Kaufmann
email: [email protected]
Thorsten Weyer
email: [email protected]
Extending an IEEE 42010-compliant Viewpoint-based Engineering-Framework for Embedded Systems to Support Variant Management
The increasing complexity of today's embedded systems and the increasing demand for higher quality require a comprehensive engineering approach. The model-based engineering approach that has been developed in the project SPES 2020 (Software Platform Embedded Systems) is intended to comprehensively support the development of embedded systems in the future. The approach allows for specifying an embedded system from different viewpoints that are artefact-based and seamlessly integrated. It is compliant with the IEEE Std. 1471 for specifying viewpoints for architectural descriptions. However, the higher demand for individual embedded software necessitates the integration of variant management into the engineering process of an embedded system. A prerequisite for the seamless integration of variant management is the explicit consideration of variability. Variability allows for developing individual software based on a set of common core assets. Yet, variability is a crosscutting concern as it affects all related engineering disciplines and artefacts across the engineering process of an embedded system. Since the IEEE Std. 1471 does not support the documentation of crosscutting aspects, we apply the concept of perspectives to IEEE Std. 1471's successor (IEEE Std. 42010) in order to extend the SPES engineering approach to support continuous variant management.
1 Introduction
Embedded systems bear more and more functionality, must satisfy a growing number of crucial quality demands, and additionally have a higher degree of complexity and inter-system relationships. Key players of the German embedded systems community were involved in the project SPES 2020 (Software Platform Embedded Systems), which was a joint project funded by the German Federal Ministry of Education and Research 1 . SPES 2020 aimed at developing a model-based engineering approach that addresses the challenges mentioned above (cf. [START_REF] Broy | Introduction to the SPES Modeling Framework[END_REF]).
The project consortium represented important industrial domains in Germany: automation, automotive, avionics, energy, and healthcare. In the project, the partners from industry and academia jointly developed an artefact-centred, model-based engineering framework for embedded systems that is based on the IEEE Standard 1471 (cf. [START_REF]IEEE Recommended Practice for Architectural Description of Software Intensive Systems[END_REF]). This framework is called the SPES Modelling Framework (or short: SPES MF). The SPES MF focusses on the software within an embedded system (cf. [10]) and allows for a seamless engineering of embedded systems, from the requirements to the technical architecture of the system under development (SUD) across multiple abstraction layers (cf. [START_REF] Broy | Introduction to the SPES Modeling Framework[END_REF]).
Beside the need for seamless model-based engineering, there is a higher demand for the development of different variants of embedded systems. Variant management comprises activities to define variability, to manage variable artefacts, to resolve variability, and to manage the traceability information that is necessary to fulfil these activities (cf. [START_REF] Pohl | Software Product Line Engineering: Foundations, Principles and Techniques[END_REF]) in each step of the engineering process.
Thereby, variability is defined as the ability to adapt [START_REF]The ARTFL Project: Webster´s Revised Unabridged Dictionary[END_REF], i.e. a development artefact can exist in different shapes at the same time (cf. [START_REF] Pohl | Software Product Line Engineering: Foundations, Principles and Techniques[END_REF]). The current version of the SPES MF does not support the systematic consideration of variants. As a consequence, concepts and techniques are required for extending the SPES MF to support variant management in the engineering process of an embedded system. A prerequisite for that is the seamless consideration of variability across the engineering artefacts (cf. [START_REF] Broy | Outlook[END_REF]).
Variability may cause crosscutting changes, for example, in the requirements and the architecture by adapting a system for a specific variant (cf. [START_REF] Noda | Aspect-Oriented Modeling for Variability Management[END_REF]). A new requirement may impose changes to the architecture. Thus, variability can be seen as a crosscutting concern. Since variability affects all existing viewpoints of the SPES MF, the SPES MF needs to be adapted to deal with such crosscutting concerns. In [10], perspectives are recommended to address crosscutting concerns. We define a Variability perspective for the SPES MF that supports the development of different variants of systems in a systematic and comprehensive way.
The paper is structured as follows: Section 2 describes the fundamentals for extending the SPES MF with respect to the consideration of variability in the different engineering artefacts. Section 3 describes our extension of the SPES MF to integrate the Variability perspective in the SPES MF. Section 4 reviews the related work on integrating variability in architectural frameworks. Section 5 gives a conclusion and sketches the future research.
Fundamentals
In order to cope not only with the functionality and complexity of a single SUD but also with the variability of a number of similar embedded systems, this section describes the fundamentals to extend the SPES MF for supporting variant management.
Variant Management in the Engineering of Embedded Systems
Variability is defined as the ability to adapt. Thus, the variability of an embedded system is defined as the ability to adapt the system with regard to a specific context (e.g. context of use, cf. section 1).
It is widely accepted in industry and academia that variability should be documented explicitly in a variability model, which is already a well-proven paradigm in the software product line community (cf., e.g., [START_REF] Clements | Software Product Lines -Practices and Patterns[END_REF], [START_REF] Kang | Feature-oriented product line engineering[END_REF], [START_REF] Pohl | Software Product Line Engineering: Foundations, Principles and Techniques[END_REF]). This explicit documentation of variability is based on two ontological concepts and their relations. The variability subject is defined as a variable item of the real world or a variable property of such an item, e.g. the paint of a car (cf. [START_REF] Pohl | Software Product Line Engineering: Foundations, Principles and Techniques[END_REF]). The variability object is defined as a particular instance of a variability subject, e.g. red paint. A variant is a running system that is composed of a selected set of variability objects. Consequently, in the engineering of variability-intensive embedded systems, variant management can be characterized as a process that complements the original engineering process (e.g. requirements engineering, architectural design) by systematically considering variants in each of the engineering disciplines. Performing continuous variant management additionally implies that the relationships of variants are seamlessly documented on a semantic level across the engineering process.
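To make the terminology concrete, the following C sketch encodes a single variability subject, the paint of a car, together with its variability objects and a simple check whether a selection constitutes a valid binding. The data structures are our own minimal illustration and are not a full orthogonal variability model.

#include <stdbool.h>
#include <stdint.h>

/* Variability subject: a variable item or property, e.g. the paint of a car.
 * Variability objects: its particular instances, e.g. red or blue paint. */
typedef struct {
    const char  *subject;    /* variability subject                                 */
    const char **objects;    /* possible variability objects                        */
    uint8_t      n_objects;
    bool         mandatory;  /* must one object be bound in every variant?          */
} VariationPoint;

/* A variant binds one object (or none, if optional) per variation point. */
static bool binding_valid(const VariationPoint *vp, int8_t selected)
{
    if (selected < 0)  return !vp->mandatory;     /* nothing selected       */
    return selected < (int8_t)vp->n_objects;      /* selected object exists */
}

int main(void)
{
    const char *paints[] = { "red", "blue" };
    VariationPoint paint = { "paint of the car", paints, 2, true };
    return binding_valid(&paint, 0) ? 0 : 1;      /* variant with red paint */
}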
2.2 Viewpoint-Specifications based on IEEE Std. 1471 and IEEE Std. 42010
The IEEE Std. 1471 [START_REF]IEEE Recommended Practice for Architectural Description of Software Intensive Systems[END_REF] and its current successor IEEE Std. 42010 [10] introduce a conceptual framework for architectural descriptions (cf. section 1). The key concept of both frameworks is the architectural viewpoint (or short: viewpoint). To reduce the complexity, the architectural description of a system is typically divided into a number of interrelated views. A viewpoint can be characterized as a structured specification that supports the definition of such a view on the system. The specification of a viewpoint consists of the stakeholders' concerns (e.g. specifying the logical architecture) that are addressed by the view together with conventions for creating that view (e.g. the underlying ontology, the ontological relationships to other views, and rules for evaluating the quality of the corresponding views). Beside the different interrelated views of a system, typically, a system architecture also bears certain crosscutting properties, i.e. properties that have an ontological grounding in each view or an ontological relationship to each one of the views. According to IEEE Std. 42010, architectural models can be shared across multiple views expressing the ontological relationships of the views. This is one possible implementation of the concept of architectural perspectives (or short: perspectives) introduced by ROZANSKI and WOODS in [START_REF] Rozanski | Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives[END_REF].
2.3 The SPES 2020 Modelling Framework
The SPES MF supports the development of embedded systems by focussing on the following principles (cf. [START_REF] Broy | Introduction to the SPES Modeling Framework[END_REF]): distinguishing between problem and solution, explicitly considering system decomposition, seamless model-based engineering, distinguishing between logical and technical solutions, and continuous engineering of crosscutting system properties. These principles manifest themselves within the SPES MF in two orthogonal dimensions, the SPES viewpoints and the SPES abstraction layers.
The SPES MF Viewpoints. The different stakeholders (e.g. requirements engineers, functional analysts, solution architects) in the engineering process of an embedded system have different concerns. Based on the separation of concerns principle, the individual concerns of stakeholders are addressed by certain views that are, in accordance to IEEE Std. 1471, governed by viewpoints in the SPES MF. Each viewpoint addresses certain concerns in the engineering process of an embedded system. The SPES MF differentiates between the following four SPES viewpoints: the SPES Requirements Viewpoint addresses the structured documentation and analysis of requirements; the SPES Functional Viewpoint addresses the structured documentation and analysis of system functions; the SPES Logical Viewpoint addresses structured documentation and analysis of the logical solution, and the SPES Technical Viewpoint addresses the structured documentation and analysis of the technical solution.
The SPES MF Abstraction Layers. To reduce the complexity of the engineering process, a coarse-grained engineering "problem" is decomposed into a number of fine-grained engineering problems following the strategy of divide and conquer, i.e. the composition of the fine-grained solutions is a solution for the coarse-grained engineering problem. Each time a coarse-grained engineering subject is decomposed into a number of fine-grained engineering subjects, a new abstraction layer is created. Since the number of abstraction layers depends on the properties of the individual engineering context of an embedded system, the SPES MF does not define a certain number of abstraction layers. However, the SPES MF provides a mechanism to create new abstraction layers that can be used by engineers to decompose the overall engineering problem to a level of granularity at which the complexity of the fine-grained systems is manageable without the need of performing another step of decomposition.
Integrating Variability in the SPES MF
To extend the SPES MF for supporting continuous variant management, firstly, the nature of variability is analysed. Secondly, a general concept for extending the SPES MF is defined and thirdly the specification of the Variability perspective is presented.
An Insight into the Nature of Variability within the SPES MF Viewpoints
Variability can affect the SPES viewpoints in different ways. Within the SPES Requirements View, the requirements of the SUD are specified by using different types of models (e.g. goal models, scenario models). For instance, requirements in terms of system goals (cf. e.g. [START_REF] Daun | Requirements Viewpoint[END_REF]) specify the intention of the stakeholders with regard to the objectives, properties, or use of the system [START_REF] Daun | Requirements Viewpoint[END_REF]. A variable goal thus represents an objective that may only apply in a specific usage context of the system. Goals can also be contradictory, for example, if a goal of a certain stakeholder excludes a goal of another stakeholder in a specific usage context of the system. In this situation, these two goals can never be included together in the same variant of a system. Thus, variability in goals may originate in variability concerning the stakeholders that have to be considered or in variable intentions of one stakeholder with respect to a different usage context. In contrast, in the Technical View hardware components are defined which implement specific functions or realize logical components. Variability in the Technical View could be embodied, for example, by different pins of hardware components or by a different clock speed of a bus. It is obvious that the ontological meaning of variability in the Technical Viewpoint differs from the ontological meaning of variability in the Requirements Viewpoint. In the same way, we come to the general conclusion that the ontological meaning of variability differs across all of the SPES viewpoints.
General Concept for extending the SPES MF for Variant Management
A specific aspect where variability occurs in the Requirements View of the SUD differs on the ontological level from a variable aspect in the Technical View (cf. Fig. 1). Variability within the Requirements View can be modelled in an explicit variability artefact (i.e. a variability model) with a precise ontological relationship to the engineering artefacts of the Requirements View (cf. Fig. 1), whereas variability within the Technical Viewpoint can be documented in an explicit variability model that has precise ontological relationships to the engineering artefacts of the Technical Viewpoint. This concept can also be applied to the other viewpoints and results in distinct variability models for each of the SPES viewpoints. As already mentioned in Section 2.3, today's embedded software is engineered across different abstraction layers based on the SPES MF. Thus, most of the artefact types that are defined based on the underlying ontology of the viewpoints are used on each of the abstraction layers, but with a different level of granularity of the engineering subject. On a subsystem layer (cf. Fig. 1), for example, a component diagram models the structure of the SUD. On this level, an interface can be variable. However, on the sub-subsystem layer (cf. Fig. 1), the structure of the different components can also be modelled by a component diagram, interfaces can also be variable, and both variable interfaces are related to each other. Additionally, the definition of a variability subject on a higher abstraction layer may lead to different alternatives that impose new variability objects representing different decompositions on a lower abstraction layer. Thus, not only artefacts of different types, but also artefacts of the same type across different abstraction layers (cf. Fig. 1) are affected.
The general concept of integrating variability in the SPES MF is also based on the empirical findings and conceptualizations of AMERICA ET AL. [START_REF] America | Multi-view Variation Modeling for Scenario Analysis[END_REF] as well as THIEL and HEIN [START_REF]The Open Group: TOGAF Version 9.1. 10[END_REF]. AMERICA ET AL. argue for explicitly documenting the possible design decisions by documenting viewpoint-relevant variability. Furthermore, they argue that the explicit documentation of choices leads to an increased awareness of such choices, which in turn is beneficial for the stakeholder communication. THIEL and HEIN interpret variability as a kind of quality of the architecture of a system in terms of its configurability and modifiability. Variability is materialized in the artefacts by changes or adaptations of specific elements, e.g. interfaces.
Specifying Crosscutting Aspects in Conformance with IEEE Std. 42010
The SPES MF in its current version does not specify how variant management and thus variability should be addressed in its corresponding viewpoints and abstraction layers. As we already discussed, variant management potentially affects all artefacts and consequently crosscuts all viewpoints of the SPES MF. IEEE Std. 42010 itself provides a mechanism for realizing crosscutting concerns by allowing architectural models to be shared across multiple views and thereby focus on the relevant aspects of a view. Regarding variant management, we believe that this approach is not sufficient, because, as we discussed in section 3.2, the ontological meaning of variability significantly differs in each of the four viewpoints. As a consequence, a shared architectural model would need to be able to represent viewpoint-specific ontological concepts. ROZANSKI and WOODS [START_REF] Rozanski | Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives[END_REF] also recognized a need for addressing crosscutting aspects that fulfil specific concerns of the majority of a system's stakeholders. They identify qualities of the architecture (e.g. safety, security) that affect all views. To address these qualities, ROZANSKI and WOODS introduced the concept of perspectives, which are defined as [START_REF] Rozanski | Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives[END_REF]: "[…] a collection of architectural activities, tactics, and guidelines that are used to ensure that a system exhibits its particular set of related properties that require consideration across a number of the system's architectural views". Perspectives are therefore orthogonal to architectural views. In [START_REF] Rozanski | Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives[END_REF], a perspective specification template is proposed that addresses quality properties in an IEEE Std. 42010-based specification.
Specification of the Variability Perspective for the SPES MF
Since variability can be regarded as a quality property and therefore as a crosscutting concern of a system architecture, we extend the SPES MF by following the approach that is described in Section 3.3. To that end we use the template proposed in [START_REF] Rozanski | Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives[END_REF] for specifying the architectural perspective. An excerpt from the specification of the Variability perspective is shown in Table 1.
Table 1. Excerpt from the specification of the Variability perspective
Section Content
Applicability Each SPES MF view is affected: When applying the Variability perspective to the Requirements View, it guides the requirements engineering process of the SUD so that the variability of the requirements can be considered systematically. When applying the Variability perspective to the Functional View, it guides the functional design for the SUD so that the variability of the system functions can be considered systematically. When applying the Variability perspective to the Logical View, it guides the design of the logical architecture of the SUD so that the variability within of the logical architecture can be considered systematically. When applying the Variability perspective to the Technical View, it guides the design of the technical architecture of the SUD so that variability of the technical architecture can be considered systematically. Each SPES MF abstraction layer is affected: When applying the Variability perspective to an abstraction layer, it guides the systematic engineering of the engineering subjects within that layer so that the variability can be considered across all views of the engineering subject.
Concerns Variability: the ability of the SUD to be adapted to a different context, e.g. context of usage, technological context, economical context, legal context, or organizational context. Quality properties of variability: correctness, completeness, consistency, and traceability to its origin and to the corresponding engineering artefacts.
Activities Steps for applying the Variability perspective to the Requirements View:
Identification of variability in the requirements of the SUD: This step aims at identifying variability in the requirements that originates from variable context properties. Documentation of variability in the requirements of the SUD: This step aims at documenting the variability in the requirements. Analysis of variability in the requirements of the SUD: This step aims at analysing the variability in the requirements, e.g. with respect to correctness, completeness, and consistency. Negotiation of variability in the requirements of the SUD: This step aims at negotiating the variability in the requirements with the stakeholders of the SUD. Validation of the variability in the requirements of the SUD: This step aims at validating the variability in the requirements, e.g. with respect to correctness, completeness, and consistency.
Steps for applying the Variability perspective to the Functional View:
[…]
Steps for applying the Variability perspective to the Logical View:
[…]
Steps for applying the Variability perspective to the Technical View: Identification of variability in the technical architecture of the SUD: This step aims at identifying variability in the technical architecture that originates, for example, from variable technical resources (e.g. processors, communication infrastructure) as well as variable sensors or actuators.
Architectural tactics
Context Analysis and Documentation: for structured analysis and documentation of the context properties that are the origin of variability Orthogonal Variability Modelling: for explicit documentation of variability and its relationship to engineering artefacts Model Checking: […]
Problems and pitfalls
Problems and pitfalls that may arise: The increasing complexity of variable artefacts increases the effort required to keep the engineering artefacts consistent. Complex variability models tend to be ambiguous and confusing, for example, due to false optional features that are part of every product because of constraints.
[…]
Related Work
Today, multiple frameworks for designing a system's architecture exist. All these frameworks share the concept of multiple architectural views. In this context, crosscutting concerns are often considered as quality or system properties, or as non-functional and quality requirements of a SUD, which need special consideration when crafting a system's architecture.
In terms of documenting a system's architecture, the standards IEEE Std. 1471 [START_REF]IEEE Recommended Practice for Architectural Description of Software Intensive Systems[END_REF] and its successor IEEE Std. 42010 [10] provide a conceptual framework for specifying viewpoints governing views (cf. section 2.2). Another approach for documenting a system's architectural views is proposed in "Views and Beyond" [START_REF] Clements | Documenting software architectures: views and beyond[END_REF], which is compliant with IEEE Std. 1471 (cf. [START_REF] Clements | Comparing the SEI's Views and Beyond Approach for Documenting Software Architectures with ANSI[END_REF]).
ZACHMAN proposes a Framework [START_REF] Zachman | A Framework for Information Systems Architecture[END_REF], which makes use of six different architectural representations (viewpoints). This framework does not state how crosscutting concerns should be addressed in detail.
The Reference Model of Open Distributed Processing (RM-ODP) [START_REF]Information Technology -Open Distributed Processing -Reference Model -Architecture[END_REF] proposes five viewpoints, each focusing on particular concerns within a system. In addition, a set of system properties is defined, including quality of service, but it is not addressed how these properties should be considered during system development.
The Rational Unified Process (RUP) makes use of the 4+1 View Model of Architecture, which was introduced in [START_REF] Kruchten | The 4 + 1 View Model of architecture[END_REF]. In RUP, four different kinds of non-functional requirements are distinguished, which are subject to an iterative, scenario-based process determining the key drivers of architectural elements. However, no explicit guidelines are given on how non-functional requirements should be addressed in the architectural design phase. Attribute-Driven Design (ADD) [START_REF] Wojcik | Attribute-Driven Design (ADD), Version 2.0[END_REF] is a method that can be described as an approach for defining a software architecture based on the software's quality attribute requirements. Essentially, ADD promotes a recursive design process that decomposes a SUD making use of the architectural tactics introduced in [START_REF] Bass | Software Architecture in Practice[END_REF], resulting in views compliant with [START_REF] Clements | Documenting software architectures: views and beyond[END_REF], and consequently explicitly addressing crosscutting concerns.
The TOGAF framework uses the iterative Architecture Development Method (ADM), which contains an analysis of changes in terms of their cross-architectural impact. In its current version [START_REF]The Open Group: TOGAF Version 9.1. 10[END_REF], the TOGAF framework encourages the use of IEEE Std. 42010 in order to craft the necessary viewpoints and views.
ROZANSKI and WOODS [START_REF] Rozanski | Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives[END_REF] take the 4+1 View Model of Architecture as foundation and provide an IEEE Std. 42010 compliant viewpoint catalogue. The stakeholders' requirements as well as the architecture are subject to an iterative architecture definition process. But in contrast to RUP, crosscutting concerns are explicitly addressed in terms of perspectives.
As motivated in subsection 2.3, software-intensive embedded systems need special consideration during their engineering. The domain-independent model-based engineering methodology of SPES takes these special needs and challenges into account. In doing so, the IEEE Std. 1471-based viewpoints of the SPES MF are explicitly tailored to the needs of the development of software-intensive embedded systems. The frameworks described above are of a more general nature and are consequently not directly applicable in the context of such systems. As motivated in subsection 3.1, variability affects multiple viewpoints and their artefacts. Consequently, it is our firm belief that variability has to be addressed explicitly. Therefore, we decided to apply the approach by ROZANSKI and WOODS to the SPES MF in order to address variability explicitly.
Conclusion
The specification of the Variability perspective is an essential means for supporting continuous variant management across the whole engineering process of embedded systems that is based on the SPES MF. This is done by defining how to seamlessly integrate, among others, the identification, documentation, and analysis of variability and its relationships to the underlying engineering artefacts across the viewpoints and abstraction layers of the SPES MF. In our future work we will apply the extension of the SPES MF for variant management to three industrial case studies (a driver assistance system for vehicles, mission control software for unmanned aerial vehicles, and a desalination plant) to gain deeper insights concerning its applicability and usefulness for supporting continuous variant management in the engineering processes of embedded systems.
Fig. 1. Variability models in the different viewpoints and their relations
See http://spes2020.informatik.tu-muenchen.de/
Acknowledgement
This paper was partially funded by the BMBF project SPES 2020_XTCore under grant 01IS12005C and the DFG project KOPI grant PO 607/4-1.
"1001396",
"1001397",
"998693"
] | [
"300612",
"300612",
"300612"
] |
01466685 | en | [ "info" ] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466685/file/978-3-642-38853-8_27_Chapter.pdf | Katharina Gilles
Stefan Groesbrink
email: [email protected]
Daniel Baldin
email: [email protected]
Timo Kerstan
email: [email protected]@www.dspace.de
Proteus Hypervisor: Full Virtualization and Paravirtualization for Multi-Core Embedded Systems
System virtualization's integration of multiple software stacks with maintained isolation on multi-core architectures has the potential to meet high functionality and reliability requirements in a resource efficient manner. Paravirtualization is the prevailing approach in the embedded domain. Its applicability is however limited, since not all operating systems can be ported to the paravirtualization application programming interface. Proteus is a multi-core hypervisor for PowerPC-based embedded systems, which supports both full virtualization and paravirtualization without relying on special hardware support. The hypervisor ensures spatial and temporal separation of the guest systems. The evaluation indicates a low memory footprint of 15 kilobytes and the configurability allows for an application-specific inclusion of components. The interrupt latencies and the execution times for hypercall handlers, emulation routines, and virtual machine context switches are analyzed.
Introduction & Related Work
System virtualization refers to the division of the hardware resources into multiple execution environments [START_REF] Smith | The Architecture of Virtual Machines[END_REF]. The hypervisor separates operating system (OS) and hardware in order to share the hardware among multiple OS instances. Each guest runs within a virtual machine (VM), an isolated duplicate of the real machine (also referred to as a partition). The consolidation of multiple systems with maintained separation is well-suited to build a system-of-systems. Independently developed software such as third-party components, trusted (and potentially certified) legacy software, and newly developed application-specific software can be combined to implement the required functionality. The reusability of software components is increased, time-to-market and development costs can be reduced, and the lifetime of certified software can be extended. The rise of multi-core processors is a major enabler for virtualization. The replacement of multiple hardware units by a single multi-core system has the potential to reduce size, weight, and power [START_REF] Prisaznuk | Integrated Modular Avionics[END_REF]. Virtualization's architectural abstraction eases the migration from single-core to multi-core platforms [START_REF]Applying multi-core and virtualization to industrial and safety-related applications[END_REF] and supports the creation of a unified software architecture for multiple hardware platforms.
Primary use cases for this technology are security for open systems and OS heterogeneity. First, if a system allows the user to add software, the isolation of potentially faulty or malicious software in a VM protects the critical parts of the system from the resulting risks. Second, multiple different OSs can be hosted to provide each subsystem with a suitable interface. Industrial automation, medical, or mobile systems, for example, often require both a real-time operating system (RTOS) and a general-purpose operating system (GPOS) [START_REF]Applying multi-core and virtualization to industrial and safety-related applications[END_REF]. The deterministic and highly efficient RTOS executes critical tasks such as the control of actuators or the cellular communication of a mobile device. The feature-rich GPOS supports the development of the graphical user interface. The integration of a legacy component may require a third OS.
Since system virtualization gained significant interest in the embedded real-time world, multiple vendors have developed multi-core hypervisors for this domain, for example, Wind River's Embedded Hypervisor, LynuxWorks' LynxSecure Hypervisor, or Green Hills' Integrity Multivisor. See [START_REF] Gu | A State-of-the-Art Survey on Real-Time Issues in Embedded Systems Virtualization[END_REF] for a recently published survey of both commercial and academic real-time virtualization solutions. In the academic world, Xi et al. developed a real-time scheduling framework for the hypervisor Xen [START_REF] Xi | RT-Xen: Towards Real-time Hypervisor Scheduling in Xen[END_REF], which supports PowerPC multi-core architectures. Xen relies on either paravirtualization [START_REF] Barham | Xen and the Art of Virtualization[END_REF] or hardware assistance. Xen is not available for PowerPC without an additional hypervisor mode, although this has been on the project's roadmap since 2006 [START_REF] Blanchard | Xen on PowerPC[END_REF]. XtratuM by Masmano et al. is a paravirtualization hypervisor implemented on PowerPC [START_REF] Masmano | XtratuM: a Hypervisor for Safety Critical Embedded Systems[END_REF]. SParK by Ghaisas et al. is a hypervisor for PowerPC platforms without hardware assistance for virtualization [START_REF] Ghaisas | SParK: Safety Partition Kernel for Integrated Real-Time Systems[END_REF]. However, their solution requires paravirtualization and does not support multi-core platforms. Closest to our work, Tavares et al. presented an embedded hypervisor for the PowerPC 405, which supports full virtualization, but not multi-core architectures [START_REF] Tavares | A Customizable and ARINC 653 Quasi-compliant Hypervisor[END_REF].
None of these hypervisors provides full virtualization on multi-core PowerPC platforms without hardware assistance. They rely on either paravirtualization or processor virtualization extensions. Examples for processors with hardware assistance for virtualization are Intel VT-x or AMD-V for x86 architectures. Virtualization support was added to the PowerPC architecture with instruction set architecture Power ISA Version 2.06 [START_REF][END_REF], is however only available for high performance processors. Typical platforms for embedded systems do not feature hardware assistance and many OSs cannot be paravirtualized for legal or technical reasons. By consequence, the applicability of existing PowerPC hypervisors is limited significantly.
We present the first real-time hypervisor for multi-core PowerPC platforms, which features both paravirtualization and full virtualization without relying on explicit hardware assistance for virtualization. Proteus ensures VM separation and is characterized by a bare-metal approach, a symmetric use of the processor cores, and a synchronization mechanism that does not rely on special hardware support. The evaluation shows a low memory and execution time overhead.
Approach
In previous work, we developed a predecessor with the same name Proteus [START_REF] Baldin | Proteus, a Hybrid Virtualization Platform for Embedded Systems[END_REF], a hypervisor for 32-bit single-core PowerPC architectures. In this work, we present a redesign for multi-core platforms.
Design
A hosted hypervisor runs on top of a host OS [START_REF] Smith | The Architecture of Virtual Machines[END_REF], which leaves resource management and scheduling at the mercy of this OS. Moreover, the entire system is exposed to the safety and security vulnerabilities of the underlying OS. A bare-metal hypervisor runs directly on top of the hardware, facilitating a more efficient virtualization solution. The amount of code executed in privileged mode is smaller compared to a hosted hypervisor, since only a (preferably thin) hypervisor and no OS is incorporated in the trusted computing base. The attack surface is reduced, and both the overall security and the certifiability of functional safety are increased. Due to these performance and robustness advantages as well as the clearer and more scalable separation, the bare-metal approach is more appropriate for embedded systems and is followed by our design. The design of the Proteus hypervisor is depicted in Fig. 1. The PowerPC 405 [START_REF]PowerPC 405 Processor Core[END_REF] features two execution modes. In the problem mode for applications, only a subset of the instruction set can be executed. In the more privileged supervisor mode for system software, full access to machine state and I/O devices is available via privileged instructions. Only the minimal set of components is executed in supervisor mode: interrupt and hypercall handlers, VM scheduler, and inter-partition communication manager (IPCM). All other components such as I/O device drivers are placed inside a separate partition (untrusted VMP modules) and executed in problem mode.
Any occurring interrupt is delegated to the hypervisor. The hypervisor saves the context of the running VM and forwards the interrupt internally to the appropriate component or back to the OS. If the execution of a privileged instruction caused the interrupt, it is forwarded to the dispatcher to identify the corresponding emulation routine. In case of a hypercall, the hypercall handler invokes either the emulator, the inter-partition communication manager, or the VM scheduler. An external interrupt is forwarded to the responsible device driver.
Proteus is a symmetric hypervisor: all cores have the same role and execute guest systems. When a guest traps or calls for a service, the hypervisor takes over control and its own code is executed on that core. Different guests on different cores can perform this context switch from guest to hypervisor at the same time. An alternative design is the sidecore approach, with one dedicated core that exclusively executes the hypervisor [START_REF] Kumar | Re-architecting VMMs for Multicore Systems: The Sidecore Approach[END_REF]. When an interrupt occurs, the hypervisor on the sidecore handles it and no context switch is invoked. The hypervisor may be informed either via an inter-processor interrupt (not featured by the PowerPC 405) or by a notification from the guest OS, which requires paravirtualization. To reconcile the sidecore approach with full virtualization, a small fraction of the hypervisor could be executed on each core to forward interrupts. The guest OS could run unmodified, but each exception would involve a context switch and thereby a loss of the major benefit. If the sidecore is already serving the request of one guest, other guests have to wait, resulting in a varying interrupt processing time, which is inappropriate for real-time systems. For these reasons, we decided in favor of a symmetric design.
Multi-core Processor Virtualization
The virtualization of the processing unit is the crucial part of a hypervisor. An instruction is called sensitive if it depends on or modifies the configuration of resources. According to a criterion defined by Popek and Goldberg, an instruction set is efficiently virtualizable if the set of sensitive instructions is a subset of the set of privileged instructions [START_REF] Popek | Formal Requirements for Virtualizable Third Generation Architectures[END_REF]. The PowerPC fulfills this criterion and is fully virtualizable. In contrast to, for example, the x86 architecture, all sensitive instructions cause an exception (trap) if executed in problem mode.
Solely the hypervisor is executed in supervisor mode and the guests are executed in problem mode with no direct access to the machine state. This limitation of the guests' hardware access is mandatory in order to retain the hypervisor's control over the hardware and guarantee the separation between VMs. The PowerPC 405 does not provide explicit hardware support for virtualization such as an additional hypervisor execution mode. However, guest OSs rely themselves on an execution-mode differentiation. Therefore, the problem mode has to be subdivided into two logical execution modes: VM's privileged mode and VM's problem mode. By virtualizing the machine state register, the hypervisor creates the illusion that a guest OS is executed in supervisor mode, but runs it actually in problem mode. When a guest OS executes a privileged instruction in problem mode (e.g. an access to the machine state register) a trap is caused and the hypervisor executes the responsible emulation routine.
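As an illustration of this trap-and-emulate scheme, the sketch below emulates an access to the machine state register against a per-VM shadow copy. The structure and names (vm_context, shadow_msr, the MSR_PR constant, the decoding already done by the caller) are assumptions made for the sketch and not the actual Proteus data structures.

```cpp
/* Sketch: trap-and-emulate handling of a machine-state-register access.
 * The guest runs in problem mode, so the privileged access traps into this handler. */
#include <stdint.h>

const uint32_t MSR_PR = 0x4000;   /* problem-state bit (bit position assumed here) */

struct vm_context {
    uint32_t gpr[32];        /* saved general-purpose registers of the guest   */
    uint32_t shadow_msr;     /* virtualized machine state register             */
    uint32_t pc;             /* guest program counter                          */
    bool     vm_privileged;  /* VM's logical execution mode                    */
};

/* Called by the program-interrupt handler after decoding the faulting
 * mfmsr/mtmsr instruction (decoding not shown). */
void emulate_msr_access(vm_context* vm, bool is_read, unsigned rt) {
    if (is_read) {
        vm->gpr[rt] = vm->shadow_msr;          /* mfmsr: read the virtual MSR   */
    } else {
        vm->shadow_msr = vm->gpr[rt];          /* mtmsr: write the virtual MSR  */
        /* track the VM's logical mode so later traps can be classified */
        vm->vm_privileged = (vm->shadow_msr & MSR_PR) == 0;
    }
    vm->pc += 4;                               /* skip the emulated instruction */
}
```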
In a multi-core system, access to shared resources must be synchronized. A common solution is semaphores, which are accessed under mutual exclusion and assigned exclusively to one core at any time. The PowerPC 405 does not feature any hardware support to realize mutual exclusion in a multi-core architecture. Its instructions lwarx (load locked) and stwcx (store conditional) for atomic memory access do not work across multiple processor cores. Since interrupt disabling is likewise not feasible for multi-core systems, Proteus implements a software semaphore solution: Leslie Lamport's Bakery Algorithm [START_REF] Lamport | A new solution of Dijkstra's concurrent programming problem[END_REF]. It does not require atomic operations such as test-and-set, satisfies FIFO fairness, and excludes starvation, an advantage over Dijkstra's algorithm [START_REF] Dijkstra | Solution of a problem in concurrent programming control[END_REF].
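A plain rendition of the Bakery lock for N cores is sketched below; it uses only ordinary loads and stores. The actual Proteus routines (the mutex_start()/mutex_stop() and wait()/signal() split mentioned in the evaluation) are structured differently and written partly in assembly.

```cpp
/* Sketch: Lamport's Bakery Algorithm for NCORES cores, no atomic instructions needed.
 * Note: on real hardware, memory-ordering instructions (e.g. PowerPC sync/eieio)
 * would additionally be required around these accesses. */
#define NCORES 4

static volatile int choosing[NCORES];  /* core i is currently picking a ticket */
static volatile int ticket[NCORES];    /* 0 = not interested                   */

void bakery_lock(int me) {
    choosing[me] = 1;
    int max = 0;
    for (int i = 0; i < NCORES; i++)          /* take a ticket larger than all others */
        if (ticket[i] > max) max = ticket[i];
    ticket[me] = max + 1;
    choosing[me] = 0;

    for (int i = 0; i < NCORES; i++) {
        while (choosing[i]) { /* spin until core i has finished choosing */ }
        /* wait while core i holds a smaller ticket (ties broken by core id) */
        while (ticket[i] != 0 &&
               (ticket[i] < ticket[me] ||
                (ticket[i] == ticket[me] && i < me))) { /* spin */ }
    }
}

void bakery_unlock(int me) {
    ticket[me] = 0;
}
```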
Full Virtualization and Paravirtualization
Hypervisors are classified by their capability to host unmodified OSs. With full virtualization, unmodified guests can be hosted, whereas paravirtualization requires porting the guest OS to the hypervisor's paravirtualization application programming interface (API) [START_REF] Barham | Xen and the Art of Virtualization[END_REF]. The guest is aware of being executed within a VM and uses hypercalls to request hypervisor services, which can often be exploited to increase performance [START_REF] King | Operating System Support for Virtual Machines[END_REF]. The major drawback is the need to port an OS, which involves modifications of critical kernel parts. If legal or technical issues preclude this for an OS, it is not possible to host it. A specific advantage of paravirtualization for real-time systems is the possibility to apply dynamic real-time scheduling algorithms, which in general require passing scheduling information such as deadlines from the guest OS to the hypervisor.
Proteus supports both kinds because of the complementary characteristics of the two approaches: paravirtualization's efficiency but limited applicability on the one hand, and full virtualization's support of non-modifiable guests on the other. If the modification of an OS is possible, the system designer decides whether the effort of paravirtualization is justified. The concurrent hosting of both paravirtualized and fully virtualized guests is possible without restriction. Proteus is designed for the co-hosting of GPOS and RTOS, and the natural approach is to host a paravirtualized RTOS and a fully virtualized GPOS. In addition, bare-metal applications without an underlying OS can be hosted.
Each privileged instruction is associated with an emulating hypercall. Hypercalls are realized as system calls. A system call is identified as a hypercall if it is executed in the VM's logical privileged mode. A paravirtualized OS can use hypercalls to communicate with other guests, call I/O functionality, pass scheduling information to the hypervisor, or yield the CPU.
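A sketch of how the system-call handler might perform this classification is shown below. The hypercall identifiers, the register convention for passing the ID, and the helper functions are assumptions for illustration and do not describe the actual Proteus interface.

```cpp
/* Sketch: classifying a system-call trap as a hypercall or a guest system call. */
#include <stdint.h>

struct vm_context { uint32_t gpr[32]; uint32_t shadow_msr; uint32_t pc; bool vm_privileged; };

/* implemented elsewhere in the hypervisor */
void vm_scheduler_yield(vm_context*);
void vm_scheduler_set_param(vm_context*);
void ipcm_create_tunnel(vm_context*);
void emulator_dispatch(vm_context*, uint32_t id);
void forward_syscall_to_guest(vm_context*);

enum hypercall_id { HC_VM_YIELD = 1, HC_SCHED_SET_PARAM, HC_CREATE_COMM_TUNNEL };

void syscall_interrupt_handler(vm_context* vm) {
    if (vm->vm_privileged) {
        /* the guest kernel trapped: treat the system call as a hypercall */
        uint32_t id = vm->gpr[0];              /* assumed convention: id in r0 */
        switch (id) {
            case HC_VM_YIELD:            vm_scheduler_yield(vm);       break;
            case HC_SCHED_SET_PARAM:     vm_scheduler_set_param(vm);   break;
            case HC_CREATE_COMM_TUNNEL:  ipcm_create_tunnel(vm);       break;
            default:                     emulator_dispatch(vm, id);    break;
        }
    } else {
        /* a guest application issued an ordinary system call:
         * reflect the interrupt into the guest OS */
        forward_syscall_to_guest(vm);
    }
}
```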
Spatial and Temporal Separation
System virtualization for embedded real-time systems requires the guarantee of spatial and temporal separation of the guest systems. Spatial separation refers to the protection of the integrity of the memory space of both the hypervisor and the guests. Any possibility of a harmful activity going beyond the boundaries of a VM has to be eliminated. To achieve this, each VM operates in its own address space, which is statically mapped to a region of the shared memory. It is protected by the memory management unit (MMU) of the PowerPC 405. Communication between VMs is controlled by the IPCM. If the hypervisor authorizes the communication, it creates a shared-memory tunnel. Communication between VMs is mandatory if formerly physically distributed systems that have to communicate with each other are consolidated.
Temporal separation is fulfilled if all guest systems are executed in compliance with their timing requirements. A predictable, deterministic behavior of every single real-time guest has to be guaranteed. The worst-case execution times (WCET) of all routines are bounded and were analyzed (see section 3). These results make it possible to determine the WCET of a program that is executed on top of Proteus. System virtualization implies scheduling decisions on two levels: the hypervisor schedules the VMs, and the guest OSs schedule their tasks according to their own scheduling policies. Proteus manages a global VM scheduling queue and each VM can be executed on each core. If this is undesired, a VM can be bound to one specific core or a subset of cores, for example to assign a core exclusively to a safety-critical guest [START_REF]Applying multi-core and virtualization to industrial and safety-related applications[END_REF]. If the number of VMs n_guests exceeds the number of processor cores n_cores, then at each point in time n_guests - n_cores VMs are not executed. The cores have to be shared in a time-division multiplexing manner, and the VM scheduling is implemented as a fixed time-slice based approach. The guests' task sets have to be analyzed, and execution time windows within a repetitive major cycle are assigned to the VMs based on the required utilization and execution frequency. This static scheduling approach is, for example, applied in the aerospace domain and is part of the software specification ARINC 653 (Avionics Application Standard Software Interface) [START_REF] Prisaznuk | ARINC 653 Role in Integrated Modular Avionics (IMA)[END_REF] for space and time partitioning in avionics real-time systems in the context of Integrated Modular Avionics [START_REF] Garside | Integrating modular avionics: A new role emerges[END_REF]. See [START_REF] Kerstan | Full virtualization of real-time systems by temporal partitioning[END_REF] for guidance on designing a schedule that allows all guests to meet their timing constraints. The scheduler can be replaced by implementing an interface.
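Such a time-slice dispatcher could look like the sketch below. The table layout, tick values, and helper names (save_vm_context, program_timer, resume_vm) are illustrative assumptions, and the sketch is simplified to a single core, whereas Proteus manages a global queue with optional core binding.

```cpp
/* Sketch: static time-slice VM dispatching over a repetitive major cycle. */
#include <stdint.h>

void save_vm_context(int vm_id);       /* provided elsewhere in the hypervisor */
void resume_vm(int vm_id);
void program_timer(uint32_t ticks);    /* programmable interval timer          */

struct slot { int vm_id; uint32_t length_ticks; };

/* Example: a major cycle split among an RTOS, a GPOS and a bare-metal guest. */
static const slot schedule_table[] = {
    { 0, 4000 },   /* RTOS       */
    { 1, 4000 },   /* GPOS       */
    { 2, 2000 },   /* bare-metal */
};
static const int NSLOTS = sizeof(schedule_table) / sizeof(schedule_table[0]);
static int current_slot = 0;

/* Invoked from the timer interrupt at every slot boundary. */
void vm_scheduler_tick() {
    save_vm_context(schedule_table[current_slot].vm_id);
    current_slot = (current_slot + 1) % NSLOTS;      /* wraps at the major cycle */
    program_timer(schedule_table[current_slot].length_ticks);
    resume_vm(schedule_table[current_slot].vm_id);
}
```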
Experimental Results
Evaluation Platform: IBM PowerPC 405
The target architecture of our implementation is platforms with multiple IBM PowerPC 405 cores [START_REF]PowerPC 405 Processor Core[END_REF], a 32-bit RISC core providing up to 400 MHz. It is designed for low-cost and low-power embedded systems and features separate instruction and data caches as well as an MMU with a software-managed TLB. Specifications and a register-transfer level description are freely available to the research community. Due to the API compatibility within the PowerPC family, porting the results to other PowerPC processors should be fairly simple. In order to evaluate the software with low effort on different hardware configurations, the evaluation platform is a software simulator for PowerPC multi-cores [START_REF]IBM PowerPC 4XX Instruction Set Simulator (ISS)[END_REF]. The IBM PowerPC Multi-core Instruction Set Simulator can optionally include peripheral devices (e.g. an UART) and provides an interface for external simulation environments. Many components of the simulated hardware can be configured, for example, the number of cores or the cache sizes.
Memory Footprint
Proteus can be configured depending on the requirements of the actual system. The workflow is based on the modification of a configuration file by the system designer. According to these specifications, the preprocessor manipulates the implementation files and removes unneeded code.
Figure 2 lists the code and data size of the base functionality and the additional memory required by the individual components, differentiating between text segment (executable instructions) and data segment (static variables). The hypervisor is written in C and assembly language. The efficiency of a hypervisor is highly dependent on the execution times of the interrupt handling. For this reason, most of the components called by those handlers, and the handlers themselves, are written in assembly language. All executables are generated with compiler optimization level 2 (option -O2 for the GNU C compiler), which focuses on the performance of the generated code and not primarily on the code size. The base configuration, which supports only full virtualization, requires a total of about 11 kilobytes. The addition of paravirtualization support accounts for less than 1 kilobyte.
The system designer can decide on enabling TLB virtualization (TLB V), device driver support, and inter-partition communication. Innocuous register file mapping (IRFM) is a performance boost for paravirtualized guests. By mapping a specific set of privileged registers into VM's memory space, no trap to the hypervisor is required to access these registers. Previrtualization (Pre V) is an approach to paravirtualize guests automatically [START_REF] Levasseur | Previrtualization: Soft Layering for Virtual Machines[END_REF]. The source code is analyzed at compile time in order to identify privileged instructions. At load time, the hypervisor replaces privileged instructions by hypercalls. If all features are enabled, the memory requirement of the hypervisor sums up to about 15 kilobytes.
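As an illustration, such a configuration could be a header of preprocessor switches like the following; the macro names are invented and do not correspond to the actual Proteus configuration file.

```cpp
/* Sketch: hypothetical compile-time configuration of optional Proteus features. */
#define PROTEUS_NUM_CORES          4
#define PROTEUS_ENABLE_PARAVIRT    1   /* hypercall interface for ported guests     */
#define PROTEUS_ENABLE_TLB_VIRT    1   /* TLB virtualization (TLB V)                */
#define PROTEUS_ENABLE_DRIVERS     0   /* device driver support                     */
#define PROTEUS_ENABLE_IPC         1   /* inter-partition communication manager     */
#define PROTEUS_ENABLE_IRFM        1   /* innocuous register file mapping           */
#define PROTEUS_ENABLE_PREVIRT     0   /* automatic previrtualization at load time  */

#if PROTEUS_ENABLE_IPC
/* the preprocessor keeps the IPCM sources in the build, otherwise they are removed */
#endif
```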
Execution Time Overhead
The following performance figures denote worst-case execution times with the instruction cache enabled and hot, so that each instruction fetch takes one processor cycle, and at a clock speed of 300 MHz.
Virtual Machine Context Switch. If multiple VMs share a core, switching between them involves saving the context of the preempted VM, selecting the next VM, and resuming this VM, including the restoring of its context. Table 1 lists the execution times for a VM context switch. The overhead of accessing the semaphore that protects the ready queue accounts for a large part of the scheduling execution time.

Synchronized Shared Resource Access Routines. Figure 3 depicts the execution time of the subroutines of the Bakery Algorithm for synchronized shared resource access (semaphore operations wait() and signal()). The execution time increases linearly with the number of cores, since the included execution of the function mutex_start() has to iterate over an array of length equal to the number of cores. The operation wait() causes a blocking of the calling process if the resource is not available. In the case of four cores, the worst case occurs if the calling process is blocked by a process on each of the three other cores, as depicted in Fig. 4. The following formula calculates the worst-case waiting time, where wait_short (14 cycles), wait_long (46 cycles), signal_short (15 cycles) and signal_long (54 cycles) refer to the shortest and longest paths through the routines wait() and signal(). The critical section is equal to 3 · (wait_long + mutex_stop), which is the minimum influence of the critical section, since core 1 cannot perform the signal() before core 4 has completed its attempt to acquire the semaphore. As a result, the worst-case waiting time for synchronized shared resource access sums up to 1797 processor cycles or 5990 ns.

Interrupt Latency. Virtualization increases the interrupt latency. Any interrupt is first delivered to the hypervisor, analyzed, and potentially forwarded back to the guest. For example, the additional latency of a programmable timer interrupt is 497 ns (149 processor cycles) and that of a system call interrupt is 337 ns (101 processor cycles). To obtain the total interrupt latency, one has to add the interrupt latency of the guest OS. Timer interrupt handling takes longer, since the virtual interrupt timer has to be updated. Proteus avoids saving the complete VM context by saving only the registers that are needed by the emulation routine. The implementation in assembly language uses the fewest possible number of registers.
Emulation of Privileged Instructions
The emulation of privileged instructions is the core functionality of the hypervisor. The emulation service is requested via interrupt (full virtualization) or hypercall (paravirtualization). Table 2 lists the execution times of some exemplary emulation routines. Compared to full virtualization, paravirtualization speeds up the execution by 14% to 55%. The average speedup over all privileged instructions, not just the ones listed in this paper, is 39.25%. An analysis of the steps of an emulation routine helps to understand why paravirtualization can achieve such a significant speedup:
1. Re-enabling of the data translation and saving of the contents of those registers that are needed to execute the emulation routine.
2. Analysis of the exception in order to identify the correct emulation subroutine and jump to it (dispatching).
3. Actual emulation of the instruction.
4. Restoring of the register contents.

Table 3 lists the execution times of those steps exemplarily for the instruction mtevpr. The actual emulation accounts for the smallest fraction. Register saving and restoring are expensive, but equally so for both full virtualization and paravirtualization. The performance gain of paravirtualization is based on the significantly lower overhead for identifying the cause of the exception and dispatching to the correct subroutine. In the case of paravirtualization, only a register read-out is necessary in order to obtain the hypercall ID. The other two hypercalls return to the VM, and the execution time measurement is stopped when the calling VM resumes its execution.
Conclusion
Proteus is a hypervisor for embedded PowerPC multi-core platforms, which is able to host both paravirtualized and fully virtualized guest systems without hardware assistance for virtualization. It is a bare-metal hypervisor, characterized by a symmetric use of the processor cores. The synchronization mechanism for shared resource access does not rely on hardware support. This increases the execution time overhead, but extends the applicability of the hypervisor to shared-memory multiprocessor systems. The hypervisor ensures spatial and temporal separation among the guest systems. Due to its efficiency advantages, paravirtualization is the prevailing virtualization approach in the embedded domain. Its applicability is, however, limited: for legal or technical reasons, not all operating systems can be ported to the paravirtualization interface. Proteus can nevertheless host such operating systems, based on full virtualization's execution of unmodified guests. Paravirtualized operating systems can use hypercalls to communicate with other guests, call I/O functionality, pass scheduling information to the hypervisor, or yield the CPU.
The evaluation highlighted the low memory requirement and the application-specific configurability. The memory footprint is 11 kilobytes for the base functionality and 15 kilobytes for a configuration with all functional features. The interrupt latencies and the execution times for synchronization primitives, hypercall handlers, emulation routines, and the virtual machine context switch are all in the range of hundreds of processor cycles. The detailed WCET analysis of all routines makes it possible to determine the WCET of a hosted application.
The Proteus hypervisor is free software released under the GNU General Public License. The source code can be downloaded from https://orcos.cs.uni-paderborn.de/orcos/www.
Fig. 1: Design of the Proteus Hypervisor [START_REF] Baldin | Proteus, a Hybrid Virtualization Platform for Embedded Systems[END_REF]
Fig. 2: Impact of Individual Components on Memory Footprint
x = 3 (mutex_start + wait_short + mutex_stop + 3 (wait_long + mutex_stop) + mutex_start + signal_long + mutex_stop) = 3 (169 + 14 + 11 + 3 (46 + 11) + 169 + 54 + 11) = 1797.
Fig. 3: Linear Dependency of Execution Time of Routines for Exclusive Resource Access on Number of Cores
Table 1: Execution Time of a Virtual Machine Context Switch (4 cores)

Routine             Execution time in ns (processor cycles)
VM Context Saving   450 (135)
VM Scheduling       2270 (681)
VM Resume           800 (240)
Total               3520 (1056)
Table 2: Execution Time of Emulation Routines

Privileged     Execution time in ns (processor cycles)
instruction    Full virtualization   Paravirtualization   Speedup
rfi            527 (158)             410 (123)            28 %
wrteei         447 (134)             393 (118)            14 %
mtmsr          517 (155)             363 (109)            42 %
mtevpr         503 (151)             347 (104)            45 %
mtzpr          547 (164)             353 (106)            55 %
mfmsr          453 (136)             363 (109)            25 %
mfevpr         477 (143)             363 (109)            31 %
Table 3: Execution Time of Emulation Routine for mtevpr

Step of emulation      Execution time in ns (processor cycles)
routine                Full virtualization   Paravirtualization
Save registers         137 (41)              137 (41)
Analysis and dispatch  220 (66)              73 (22)
Emulate                20 (6)                20 (6)
Restore registers      127 (38)              117 (35)
Total                  503 (151)             347 (104)

Hypercalls. A guest OS can request hypervisor services via the paravirtualization interface. The hypercall vm_yield, which voluntarily releases the core, has an execution time of 507 ns (152 processor cycles). By calling sched_set_param, the guest OS passes information to the hypervisor's scheduler; the execution time of this hypercall is 793 ns (238 processor cycles). The hypercall create_comm_tunnel requests the creation of a shared-memory tunnel for communication between the calling VM and a second VM and is characterized by an execution time of 1027 ns (308 processor cycles). The hypercall vm_yield does not return to the VM, so its execution time is measured up to the start of the hypervisor's schedule routine.
"1001388",
"1001398",
"1001399"
] | [
"263107",
"263107",
"263107",
"439638"
] |
01466686 | en | info | 2013 | https://inria.hal.science/hal-01466686/file/978-3-642-38853-8_28_Chapter.pdf
Bruno Dal Bó Silva
Marcelo Götz
A Structural Parametric Binaural 3D Sound Implementation Using Open Hardware
Keywords: Binaural, Parametric, Beagleboard, Embedded, Digital Signal Processing
Most binaural 3D sound implementations use large databases of pre-recorded transfer functions, which are mostly prohibitive for real-time embedded applications. This article focuses on a parametric approach, leaving room for customization and for adding new processing blocks easily. In this work we show the feasibility of a parametric binaural architecture for dynamic sound localization on embedded platforms for mobile applications, ranging from multimedia and entertainment to hearing aids for individuals with visual disabilities. The complete solution is presented, ranging from algorithm analysis and suiting, through development on the Beagleboard platform for prototyping, to performance benchmarks.
Introduction
Virtual sound localization is an interesting feature for numerous types of applications, ranging from multimedia and entertainment to hearing aids for individuals with visual disabilities. Binaural sound localization is a technique that takes a single-channel input signal and, by proper manipulation, produces two different channels (left and right). By these means, a human listener perceives the source as located at a determined point in space.
The Head-Related Transfer Function (HRTF) is the well-known technique usually employed for binaural sound localization, but computing it is a hard problem and a computationally intensive task. Systems that enable a dynamic change of the azimuth angle usually require dedicated computing hardware (e.g. [START_REF] Kim | The real-time implementation of 3D sound system using DSP[END_REF][START_REF] Fohl | A System-On-Chip Platform for HRTF-Based Realtime Spatial Audio Rendering With an Improved Realtime Filter Interpolation[END_REF][START_REF] Sakamoto | DSP Implementation of Low Computational 3D Sound Localization Algorithm[END_REF]). Nowadays, execution platforms found in mobile devices usually offer notable computation capacity for audio processing, often provided by the inclusion of a Digital Signal Processing (DSP) processor. However, HRTF-related algorithms must be well designed for such platforms to take advantage of the available processing capacity.
In our work we aggregate various methods and models of parametric 3D binaural sound for dynamic sound localization, propose some algorithm suiting (especially for the Interaural Level Difference (ILD) filter) for embedded development, and validate them by prototyping on a low-cost embedded hardware platform. This work presents our first step towards a full-featured 3D binaural system, in which we have implemented parametric DSP blocks on real-world hardware for a feasibility evaluation.
The article is organized as follows. First, Sect. 2 introduces previous work in the field, followed by the models chosen for embedded development and some considerations in Sect. 3. Then, the platform itself, technological aspects, and the employed algorithm adaptations are explained in Sect. 4. The experimental set-up, including performance and localization quality results, is presented in Sect. 5. Finally, Sect. 6 gives the summary, conclusions, and future work.
Related Work
Many previous researchers have worked on binaural sound localization. Two main areas of interest stand out: interpreting a dual-channel audio input so as to discover the approximate position of the sound source (e.g. [START_REF] Raspaud | Binaural Source Localization by Joint Estimation of ILD and ITD. Audio, Speech, and Language Processing[END_REF][START_REF] Rodemann | Real-time Sound Localization With a Binaural Head-system Using a Biologically-inspired Cue-triple Mapping[END_REF]); and manipulating a given single-channel input signal so as to place it at a determined point in space as perceived by a human listener. These two areas can be seen as the inverse of each other, where the input of one is generally the output of the other. In this article we explore the second field of research, knowing that much can be learned from the common physical principles involved.
Considering the problem of localizing a mono sound source for the human ear, we can divide it into two main fronts: model-based and empirical localization. Many authors [START_REF] Kim | The real-time implementation of 3D sound system using DSP[END_REF][START_REF] Fohl | A System-On-Chip Platform for HRTF-Based Realtime Spatial Audio Rendering With an Improved Realtime Filter Interpolation[END_REF][START_REF] Sakamoto | DSP Implementation of Low Computational 3D Sound Localization Algorithm[END_REF] rely on pre-recorded impulse-response databases and implement complex interpolations and predictive algorithms, varying from DSP-chip based algorithms to full-fledged System-on-Chip (SoC) solutions. Hyung Jung et al. [START_REF] Kim | The real-time implementation of 3D sound system using DSP[END_REF] show a successful implementation of a 5.1 surround sound system using hand-chosen responses, which works perfectly when considering only 5 possible audio sources in space.
For more possible audio source positions in a virtual space, theoretically almost infinitely many, we must have a rather large database of known responses and very complex interpolation algorithms, above all when the effects of elevation are considered as well. Most of these systems rely on a public-domain database of high-spatial-resolution HRTFs [START_REF] Algazi | The CIPIC HRTF Database[END_REF]. Using such a database implies storing it on the target execution platform.
We have relied on the second approach: a parametric model. For this we assume it is possible to approximate the physical response of the medium the sound is travelling in, as well as the time and frequency response induced by the human shape (mainly the head and ears). We have searched for successful models of these physical concepts, starting with [START_REF] Rayleigh | The theory of sound[END_REF], where Rayleigh lays the foundations of psychoacoustics. Most of our model was inspired by the various works of Algazi, Brown and Duda [START_REF] Algazi | Motion-Tracked Binaural Sound for Personal Music Players[END_REF][START_REF] Algazi | The Cipic HRTF Database[END_REF][START_REF] Avendano | A Head-and-Torso Model for Low-Frequency Binaural Elevation Effects[END_REF][START_REF] Algazi | Estimation of a Spherical-Head Model from Anthropometry[END_REF][START_REF] Brown | An Efficient HRTF Model For 3-D Sound[END_REF][START_REF] Brown | A Sructural Model for Binaural Sound Synthesis[END_REF], who have contributed much to the field. Their models use simple filters well suited for low cost and satisfying results. For pinna-related effects we have relied on the responses found in [START_REF] Spagnol | Structural modeling of pinna-related transfer functions[END_REF] and the spherical-model approximations given by [START_REF] Miller | Modeling Interneural Time Difference Assuming a Spherical Head[END_REF].
Fundamentals
Most binaural 3D sound systems rely on a simple concept: the HRTF (Head-Related Transfer Function). This macro-block represents the shaping suffered by the sound as it travels from the source to the receiver's eardrums. The HRTF is usually divided into ILD, ITD and PRTF: Interaural Level Difference, Interaural Time Difference and Pinna-Related Transfer Function, respectively. We disregard in this model the shoulder and torso effects, since they contribute mostly to elevation perception and our focus is on azimuth. Fig. 1 shows the system's full conceptual block diagram.
ITD -Interaural Time Difference
ITD is conveyed by the difference in length of the paths to the left and right ears. This usually translates into a slight phase shift that is only perceivable for very low frequencies. By modelling the head as a sphere and considering the elevation angle φ to be always zero, that is, the sound source is always in the same plane as the listener's ears, we have the following model from [START_REF] Miller | Modeling Interneural Time Difference Assuming a Spherical Head[END_REF]:
$$D_L = \begin{cases} D_{LD}, & D_{LD} < L \\ L + D_{LA}, & D_{LD} \ge L \end{cases} \qquad D_R = \begin{cases} D_{RD}, & D_{RD} < L \\ L + D_{RA}, & D_{RD} \ge L \end{cases} \quad (1)$$

where ∆D = D_L − D_R is the path difference, related to the ITD through the sound speed c as ITD = ∆D/c. The geometric parameters are shown in Fig. 2. The total ITD will then be discrete and taken in samples, according to the sampling frequency F_s. Both the linear and arc distances are solved using simple trigonometry and depend mostly on one parameter: the radius of the head, a_h.
ILD -Interaural Level Difference
ILD is given by the head-shadowing effect, studied and modelled by Rayleigh [START_REF] Rayleigh | The theory of sound[END_REF] as the solution of the Helmholtz equation for a spherical head. Brown shows an approximate model by a minimum-phase filter in [START_REF] Spagnol | Structural modeling of pinna-related transfer functions[END_REF], also found in [START_REF] Zölzer | DAFX: digital audio effects[END_REF]:
$$H_{HS}(\omega, \theta) = \frac{1 + j\frac{\alpha\omega}{2\omega_0}}{1 + j\frac{\omega}{2\omega_0}} \quad (2)$$
where H_HS is the filter's frequency response, dependent on the input azimuth angle θ, and α is given by:
$$\alpha(\theta) = \left(1 + \frac{\alpha_{min}}{2}\right) + \left(1 - \frac{\alpha_{min}}{2}\right)\cos\left(\pi\frac{\theta}{\theta_0}\right) \quad (3)$$
The parameter θ_0 fixes the minimum-gain angle (α_min). Brown suggests that θ_0 should be 150°, for maximum match with Rayleigh's model, but we have found that this creates a discontinuity in the frequency spectrum that is perceived as a fast "warp" of the sound source, so we preferred to make θ_0 = 180° without major losses to the original model. Then, by simple variable mapping, we have the filter responses for the left and right ears defined by:
$$H^{l}_{HS}(\omega, \theta) = H_{HS}(\omega, \theta - \Theta_l) \quad (4)$$
$$H^{r}_{HS}(\omega, \theta) = H_{HS}(\omega, \theta - \Theta_r) \quad (5)$$
The ILD model result is shown in Fig. 3. We can see that it provides a valid approximation to Rayleigh's solution and at the same time allows a very simple digital filter implementation. Notice symmetrical responses to the ear's reference angle overlap.
PRTF -Pinna-Related Transfer Function
With the ILD and the ITD we have modelled the general sound waveshape arriving at the listener's ears. The two previous blocks cover most of the physical shaping, while the PRTF transforms the input wave even further, giving a more natural feeling to the listener. We have used the models suggested in [START_REF] Spagnol | Structural modeling of pinna-related transfer functions[END_REF], which approximate the pinna as a series of constructive and destructive interferences represented by notch and resonance filters. According to Spagnol et al., pinna effects give many spectral cues for perceiving elevation; we have found that adding their filter in the azimuth plane (φ = 0) creates a considerably better localization feeling. Spagnol's general filter solutions are shown in Eq. 6 and 7. Table 1 shows the coefficients derived from the mentioned study, listing the reflection (refl) and resonance (res) points with their central frequencies (f_C) and gains (G). The reflection coefficients generate local minima in the frequency spectrum, whereas resonance frequencies generate maxima, as can be seen in Fig. 4.
$$H_{res}(z) = V_0 \frac{(1 - h)(1 - z^{-2})}{1 + 2dhz^{-1} + (2h - 1)z^{-2}} \quad (6)$$
$$H_{refl}(z) = \frac{1 + (1 + k)\frac{H_0}{2} + d(1 - k)z^{-1} + \left(-k - (1 + k)\frac{H_0}{2}\right)z^{-2}}{1 + d(1 - k)z^{-1} - kz^{-2}} \quad (7)$$
RIR -Room Impulse Response
Considering the effects of the three previously mentioned models, we can already achieve an acceptable result, but it suffers from lateralization, where the listener sometimes perceives the sound as coming from inside the head. The digitally processed waveshape does not yet have the necessary cues for proper localization. Brown suggests that a very simple RIR can make a difference [START_REF] Brown | A Sructural Model for Binaural Sound Synthesis[END_REF]. As can be seen in Fig. 1, there is a direct signal path that does not depend on the input angle θ: a simple echo. By simply adding to the resulting binaural signals the original mono input with delay and attenuation, we can reduce lateralization considerably, increasing externalization. We have used a 15 ms delay with 15 dB attenuation.
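In code this amounts to little more than the following sketch, where the 15 dB attenuation becomes a gain of roughly 0.178 (10^(-15/20)); the buffer layout and names are illustrative.

```cpp
/* Sketch: adding the room echo (15 ms delay, -15 dB) to one binaural output.
 * mono_history is assumed to hold past input samples in a power-of-two ring buffer. */
const int   HISTORY_LEN = 2048;                       /* larger than the 720-sample delay */
const float RIR_GAIN    = 0.178f;                     /* 10^(-15/20)                       */
const int   RIR_DELAY   = (int)(0.015f * 48000.0f);   /* 15 ms at 48 kHz = 720 samples     */

void add_room_echo(float* out, const float* mono_history, int n, int write_pos) {
    for (int i = 0; i < n; i++) {
        int idx = (write_pos + i - RIR_DELAY + HISTORY_LEN) & (HISTORY_LEN - 1);
        out[i] += RIR_GAIN * mono_history[idx];
    }
}
```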
System Response
The system's final response is shown in Fig. 5. The visual representation uses frequency as the radius, azimuth as argument and the system's output is colorcoded in the z-axis. The polar plot clearly shows the different energy regions dependant on the azimuth and the reference angle -left or right ear. The distinctive "waves" are due to the Pinna effects, whereas the global energy difference is given by the Interaural Level Difference.
Development
Our main goal was to port the mentioned filters and models to an embedded platform. We have chosen the Beagleboard [START_REF] Golinharris | Beagleboard[END_REF], a community-supported open hardware platform which uses Texas Instruments' DM3730, containing one ARM Cortex-A8 and one DSP C64x. Being derived from the OMAP platforms, this chipset allows real-world performance validation using embedded Linux relying on the two available cores: a General Purpose Processor (GPP) and a DSP Processor. We have loaded the Beagleboard with a custom lightweight Angstrom distribution built using Narcissus [18]. Narcissus also generates a full-featured development environment ready for cross-compiling.
The building environment has three standing points: gcc-arm generates an ARM-compatible linux ELF executable; ti-cgt generates DSP-compatible functions, bundled into a library file; then we call an external tool available for easy co-processing on this hybrid chips -C6RunLib. This tool, supported by Texas Instruments, relies on a kernel module called dsplink that handles communication between processors. C6RunLib acts as a linker, putting together the ARM code with the DSP library and adding necessary code for remote procedure calls. When a function that is run on the DSP side is called from within ARM code, dsplink is called and uses shared memory to actually have the function operated by the other core. We have not entered the details of C6RunLib's inner workings, but one important feature was missing: asynchronous cross-processor function calls were not available -we have solved this issue by letting the ARM-side thread hang while the DSP is busy operating the entry-point function.
The program consists of three main parts: User Interface, Audio Output and Signal Processing. The latter runs on the DSP core, while the others are performed by the GPP (ARM processor). The User Interface allows the user to dynamically control the azimuth angle θ, the automatic azimuth change rate, and the output volume, to turn off the filters individually (so as to benchmark localization quality easily without the need to rebuild the solution), and to choose among three possible infinite-loop audio inputs: noise, an A tone (440 Hz), or a short phrase from Suzanne Vega's Tom's Diner (a benchmark track suggested in [START_REF] Zölzer | DAFX: digital audio effects[END_REF] for its pure-voice content). The Audio Output and Signal Processing blocks require real-time operation to run smoothly and are detailed in Sect. 4.2.
Algorithm Suiting
The four filter blocks were implemented as digital filters with some help from TI's DSPLIB library [START_REF]DSPLIB[END_REF] for better performance. Algorithm suiting for real-time digital processing was the first step to take. The delay filters (ITD and RIR) are simple circular-buffer filters; only the ITD has a variable delay given by the azimuth. The PRTF is modelled by a fixed-point 16-tap truncated FIR filter calculated from Eq. 6 and 7 using the coefficients from Table 1. Since the PRTF is not sensitive to azimuth changes, it is actually processed before the signal is split into two separate outputs, which is possible because all the filters are linear and the azimuth changes considerably more slowly than the system's response.
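A circular-buffer delay of this kind can be sketched as follows; the buffer size and names are illustrative, and the real implementation uses fixed-point arithmetic and DSPLIB routines on the DSP.

```cpp
/* Sketch: variable-delay circular buffer, as used for the ITD (and, with a
 * fixed delay, for the RIR echo path). */
#define DELAY_BUF_LEN 1024   /* power of two, larger than the maximum delay */

struct delay_line {
    float buf[DELAY_BUF_LEN];
    int   write_idx;
};

float delay_process(delay_line* d, float in, int delay_samples) {
    d->buf[d->write_idx] = in;
    int read_idx = (d->write_idx - delay_samples + DELAY_BUF_LEN) & (DELAY_BUF_LEN - 1);
    d->write_idx = (d->write_idx + 1) & (DELAY_BUF_LEN - 1);
    return d->buf[read_idx];
}
```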
The ILD algorithm is more complex because the filter's coefficients change with the azimuth. The minimum-phase filter presented in Eq. 2, when sampled, becomes:
$$H_{HS}(z, \theta) = \frac{\frac{\alpha(\theta)+\mu}{1+\mu} + \frac{\mu-\alpha(\theta)}{1+\mu} z^{-1}}{1 + \frac{\mu-1}{1+\mu} z^{-1}} \quad (8)$$
where µ = ω_0/F_s = c/(a_h · F_s). By solving Eq. 8 symbolically in the time domain, with h[n] being the related impulse response at sample n, we obtain the following generalization:
$$h[n] = \begin{cases} \frac{\alpha+\mu}{\mu+1}, & n = 0 \\ \frac{-2\mu(\alpha-1)}{(\mu+1)^2}, & n = 1 \\ h[n-1] \cdot \frac{1-\mu}{\mu+1}, & n > 1 \end{cases} \quad (9)$$
Using Eq. 9, the ILD is a time-variant fixed-point 16-tap FIR filter which is recalculated for every input buffer (provided the azimuth has changed). On top of the presented model we have also added a low-pass filter on the input azimuth to avoid angle "leaps".
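Recomputing the 16 taps from Eq. 9 is inexpensive, as the floating-point sketch below shows; the head radius and α_min value are illustrative (the paper does not state its exact constants), and the actual filter runs in fixed point on the DSP.

```cpp
/* Sketch: 16-tap ILD FIR coefficients from Eq. 9, given the azimuth relative to one ear. */
#include <cmath>

const float SPEED_OF_SOUND = 343.0f;   /* m/s                                  */
const float HEAD_RADIUS    = 0.0875f;  /* a_h in meters (illustrative value)   */
const float FS             = 48000.0f;
const int   ILD_TAPS       = 16;

/* alpha(theta) as in Eq. 3, theta in degrees relative to the ear, theta_0 = 180 deg. */
float ild_alpha(float theta_deg, float alpha_min = 0.1f) {
    const float PI = 3.14159265f;
    return (1.0f + alpha_min / 2.0f) +
           (1.0f - alpha_min / 2.0f) * std::cos(PI * theta_deg / 180.0f);
}

void ild_fir_coeffs(float theta_deg, float h[ILD_TAPS]) {
    float mu    = SPEED_OF_SOUND / (HEAD_RADIUS * FS);  /* mu = c / (a_h * Fs)   */
    float alpha = ild_alpha(theta_deg);
    h[0] = (alpha + mu) / (mu + 1.0f);                                 /* n = 0 */
    h[1] = -2.0f * mu * (alpha - 1.0f) / ((mu + 1.0f) * (mu + 1.0f));  /* n = 1 */
    for (int n = 2; n < ILD_TAPS; n++)                                 /* n > 1 */
        h[n] = h[n - 1] * (1.0f - mu) / (mu + 1.0f);
}
```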
Prototyping System
As discussed in Sect. 4, we have to build blocks that rely on real-time operation. For audio output we have used the Jack Audio Connection Kit [START_REF] Davis | Jack Audio Connection Kit[END_REF], an open library made for this kind of processing. It interfaces with the Advanced Linux Sound Architecture (ALSA) and tries to guarantee audio transmission without over- or underruns. Jack is used by instantiating a daemon that waits for a client to connect to its inputs and outputs. The client registers a callback function that has access to the daemon's buffers. In our application we have used 512-frame buffers at a 48 kHz sampling frequency, meaning each buffer is valid for around 10 ms, which is our processing deadline and output latency. We have not used Jack as the input source; instead the audio input is read from a file and kept in memory. To simulate real-time audio capture, this memory area is used to feed new input every time the callback interrupt is called for new output, thus validating the real-time premise. The Jack callback function then signals another thread that sends the input signal to the DSP and receives back (always in a shared memory area) the processed response. All 512-sample buffers are stamped with an incremental integer and constantly checked for skipped buffers, thus providing certainty that every input is processed and that every output is played back. Along with the signal buffers, the main application sends a structure containing the controllable parameters discussed previously.

Fig. 6 illustrates the system's building blocks and their mutual operation. The left and right trunks operate at application level (running on the GPP) and are ruled by POSIX semaphores. The middle trunk operates inside Jack's callback function, which is triggered by the Jack daemon once the client is properly installed. The rightmost branch is the C6RunLib function that is processed on the DSP side; remember that the Sync thread stays blocked during DSP operation.

Fig. 6. System thread schema

Fig. 7 shows the function which is operated on the DSP; it follows almost exactly the flowchart presented previously. When done filtering the input signal, the function stamps the output buffers and returns, letting the caller thread unblock, and renders the newly processed buffers available for the next audio output callback. We have found that the very first call to a function running on the DSP side takes considerably longer than the subsequent ones because of the time needed to load the application code to the other processor. This task is masked by the facilities provided by C6RunLib but must be taken into account; otherwise the system will always shut down on the first buffer because of this latency and the fact that the callback function kills the program if an underrun is detected.
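A minimal version of this Jack client structure is sketched below; the port names, the semaphore handshake, and the buffer bookkeeping are simplified assumptions rather than the prototype's exact code.

```cpp
/* Sketch: Jack process callback handing buffers to a worker (DSP sync) thread. */
#include <jack/jack.h>
#include <semaphore.h>
#include <cstring>

#define NFRAMES 512
static jack_port_t *out_l, *out_r;
static float proc_l[NFRAMES], proc_r[NFRAMES];   /* last buffers produced by the DSP */
static sem_t buffers_ready;                      /* wakes the DSP sync thread        */

static int process(jack_nframes_t nframes, void*) {
    float* l = (float*)jack_port_get_buffer(out_l, nframes);
    float* r = (float*)jack_port_get_buffer(out_r, nframes);
    std::memcpy(l, proc_l, nframes * sizeof(float));   /* play back the processed block */
    std::memcpy(r, proc_r, nframes * sizeof(float));
    sem_post(&buffers_ready);                          /* request the next block        */
    return 0;
}

int setup_jack() {
    sem_init(&buffers_ready, 0, 0);
    jack_client_t* client = jack_client_open("binaural3d", JackNullOption, nullptr);
    if (!client) return -1;
    jack_set_process_callback(client, process, nullptr);
    out_l = jack_port_register(client, "out_l", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    out_r = jack_port_register(client, "out_r", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    return jack_activate(client);
}
```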
Experimental Results
Fig. 7. Signal Processing Function

We have tested the system's localization quality and perceived accuracy with a set of user trials. Three main set-ups were tested. In one, another person controls the azimuth freely and the subject must grade the localization feeling and try to point out where the sound is coming from. In another test, subjects were free to change the azimuth themselves. The grades were "Good", "Average" and "Poor". In a last test, users were asked to point out where the sound source was in virtual space. The accuracy test is divided in two parts: guessing the azimuth angle of a "warping" sound source that instantly goes from one point to another, and of a "moving" source which moves slowly until coming to rest. In the accuracy tests we gathered the mean and maximum perceived azimuth errors µ and M. All users were given the same set of test points in random order.
We have noticed that most subjects feel a stronger localization when they have control over the azimuth, consistent with the fact that a person with no sensory disabilities relies on a combination of senses to pinpoint an object in space, usually failing when depending solely on sound, which creates somewhat of a placebo effect. When trying to point out where the sound source is in the virtual space, users always get the right quadrant if the sound source is "warping", but generally fail to pinpoint the angle, giving µ = 43°, M = 70°. When the subject is able to hear the virtual source moving to a resting position, accuracy increases to about 30 degrees without mistakes (µ = 21°, M = 30°), totalling 12 clear zones in the azimuth plane.
The quality tests were also repeated three times for each of the available input samples. It was observed that the pure tone does not have enough spectral cues for localization and generates confusion most of the time. Noise is next on the list, but, for its high frequency components, it is still not as clear as it could be since the complex frequency response of the implemented system is on the lower side of the spectrum.
The mentioned accuracy results are valid for the third sample: human voice. Because of its short bandwidth and, as some authors argue, an inherent human capability to localize voice, this third input sample grades enormously better than the previous ones, which is largely consistent with the mathematical model. Lastly, a few tests were run turning individual filtering stages on and off so as to observe the real necessity of each one. It was seen that the ITD and ILD work very closely together (although the ITD plays a void role with noise input, since the phase cue is basically non-existent because of the high frequencies). The PRTF helps in fixing the sound source to the azimuth plane; without it the system was described as "noisy and confusing" by subjects, suggesting that the PRTF wraps the spectrum to a more natural balance. Some subjects experienced lateralization when the RIR was turned off, completely losing the source in the virtual space.
As for performance, the system was built under the constraint that it must not miss deadlines, so the described implementation was validated. We have tested some other parameters to observe the system working under stress. It was observed that reducing the frame size to 256 samples (a 5 ms deadline) would cause overruns depending on the operating system's load. For example, the system would run perfectly using its own video board and an attached USB keyboard, but would fail through an SSH connection. Although the same number of samples is processed per second (the sampling frequency remains unchanged), the cross-processor calls create noticeable overhead, a problem that could be approached by getting into the inner workings of C6RunLib.
Also, we have made it possible for the program to run fully on the GPP side, leaving the DSP completely unused, and compared the CPU load reported by the operating system. When the filters are run by the second processor, the main CPU stays practically idle, reporting on average a 3% load. When running the complete system on the GPP side, the CPU goes almost to maximum load, floating most of the time around 97%. We were not able to sample the DSP load because the benchmarking functions provided by the libraries would crash the system. To approximate the DSP load, we made the processing functions run redundantly n times, reaching overall failure at n = 3, so we estimate the DSP core load between 30% and 50% of its full capacity. The load tests were performed with a constantly changing azimuth, so the filter parameters would be constantly changing and we could observe the worst-case scenario.
Conclusions
In this work we propose a full-featured 3D binaural system for sound localization. We successfully aggregated various methods and models of parametric 3D binaural sound, creating parametric DSP blocks. Furthermore, we have shown their feasibility by implementing and evaluating these blocks on a low-cost embedded platform. In our tests the azimuth angle was dynamically changed and clearly perceived by the user with sufficient accuracy, especially for a human voice sound source.
Depending on the operating system's load, some buffer overruns occur, which decreases the output quality. This is probably caused by cross-processor calls, which rely on the C6RunLib library. The next steps in our work will therefore be to analyze the C6RunLib internals and propose implementation solutions for this problem. We will also improve the model to consider elevation and more complex room responses.
Fig. 1. Full localization model
Fig. 2. ITD input parameters.
Fig. 3. ILD model result for left and right ears with θ = {0, 180}
Fig. 5. System's final response in polar plot.
Table 1. PRTF filter coefficients.

Singularity   Type   fC [kHz]   G [dB]
1             res    4          5
2             res    12         5
3             refl   6          5
4             refl   9          5
5             refl   11         5
"1001400",
"1001401"
] | [
"302610",
"302610"
] |
01466688 | en | info | 2013 | https://inria.hal.science/hal-01466688/file/978-3-642-38853-8_29_Chapter.pdf
Zhenkai Zhang
email: [email protected]
Xenofon Koutsoukos
email: [email protected]
Modeling Time-Triggered Ethernet in SystemC/TLM for Virtual Prototyping of Cyber-Physical Systems
Keywords: TTEthernet, SystemC, TLM, Virtual Prototyping
When designing cyber-physical systems (CPS), virtual prototyping can discover potential design flaws at early design stages to reduce the difficulties at the integration stage. CPS are typically complex real-time distributed systems which require networks with deterministic end-to-end latency and bounded jitter. Time-triggered Ethernet (TTEthernet) integrates time-triggered and event-triggered traffic, and has been used in many CPS domains, such as automotive, aerospace, and industrial process control. In this paper, a TTEthernet model in SystemC/TLM is developed to facilitate the design and integration of CPS. The model realizes all the necessary features of TTEthernet, and can be integrated with the hardware platform model for design space exploration. We validate the model by comparing latency and jitter with those obtained using a commercial software-based implementation. We also compare our model with the TTEthernet modeled in OMNeT++ INET framework. Our model provides startup and restart services that are necessary for maintaining synchronized operations in TTEthernet. We evaluate these services and also the efficiency of the simulation.
Introduction
Cyber-physical systems (CPS) are complex heterogeneous systems whose design flow includes three layers: the software layer, the network/platform layer, and the physical layer [START_REF] Sztipanovits | Toward a Science of Cyber-Physical System Integration[END_REF]. The interactions within and across these layers are complex. The physical layer interacts with the hardware platform through sensors and actuators. Embedded software runs on the hardware platform and communicates via a network to realize the desired functionalities. Due to the high degree of complexity, design flaws often appear at the integration stage. In order to discover potential design flaws at early stages, a virtual prototyping development approach is required.
In virtual prototyping of CPS, modeling the hardware platform in a System-Level Design Language (SLDL) is essential to quickly evaluate the interactions between the platform and the software layer and the physical layer at early design stages. Since CPS are typically distributed real-time systems, the network also plays an important role in design and integration.
In many CPS domains that require known and bounded network latency, such as automotive, aerospace, and industrial process control, time-triggered Ethernet (TTEthernet) has been used for real-time communication. Traditional Ethernet cannot be used, since it suffers from cumulative delay and jitter. TTEthernet integrates time-triggered traffic and event-triggered traffic together, and provides the capability for deterministic, synchronous, and lossless communication while supporting best-effort traffic service of Ethernet at the same time [START_REF]Time-Triggered Ethernet[END_REF].
SystemC, which has become a de facto SLDL [START_REF]Standard SystemC Language Reference Manual[END_REF], is proposed to be one main part of virtual prototyping of CPS [START_REF] Mller | Virtual Prototyping of Cyber-Physical Systems[END_REF]. It allows system modeling and simulation at various levels of abstraction. In addition, the concept of transaction-level modeling (TLM) is adopted in SystemC to separate the computation and communication. A TLM communication structure abstracts away low-level communication details while keeping certain accuracy. Thus, both the software layer and the network/platform layer can be modeled in SystemC/TLM at early design stages making it suitable for virtual prototyping.
In this paper, we describe a TTEthernet model in SystemC/TLM for virtual prototyping in order to take into account the network effects in a CPS. The model in SystemC/TLM offers many advantages: (1) it is easy to acquire at early design stages; (2) it is scalable to a large number of nodes; (3) the model can be integrated with the hardware platform model in a straightforward manner; (4) it provides efficient and accurate simulation.
The main contribution of this work is a TTEthernet model in SystemC/TLM that realizes all the necessary features for facilitating the design and integration of CPS. The model is validated by comparing latency and jitter with those obtained using a commercial software-based implementation and the model in OMNeT++ INET framework [START_REF] Steinbach | An Extension of the OM-NeT++ INET Framework for Simulating Real-time Ethernet with High Accuracy[END_REF]. The model is also evaluated for its startup and restart services and the simulation efficiency.
The rest of this paper is organized as follows: Section 2 gives the related work including TTEthernet and related modeling efforts. Section 3 describes the model in detail. Section 4 validates the model against a real implementation and the model in OMNeT++ INET framework and also evaluates the services in the model and the simulation efficiency. Section 5 concludes this paper.
Related Work
Time-triggered architecture (TTA) has been widely used in safety-critical CPS, which require reliable time-triggered communication systems, such as TTP/C, FlexRay, and TTEthernet [START_REF] Kopetz | The Time-Triggered Architecture[END_REF]. Compared to the maximum bandwidth of TTP/C (25Mbit/s) and FlexRay (10Mbit/s), the bandwidth of TTEthernet can reach 100Mbit/s or even 1Gbit/s, making it very attractive in many CPS domains. As mentioned in [START_REF] Steiner | Time-Triggered Ethernet: TTEthernet[END_REF], there are two versions of TTEthernet. The academic version uses preemption mechanism and only supports time-triggered (TT) and eventtriggered (ET) traffic, while the industrial version uses non-preemptive integration of TT and ET and divides ET into rate-constrained and best-effort traffic classes. In [START_REF] Kopetz | The Time-Triggered Ethernet (TTE) Design[END_REF], the academic version of TTEthernet is introduced to integrate TT and ET traffic together. In [START_REF] Steinhammer | A Time-Triggered Ethernet (TTE) Switch[END_REF], an academic version of TTEthernet switch is developed which preempts ET message transmission when a TT message arrives to guarantee a constant transmission delay of TT messages caused by the switch regardless of the load of ET traffic on the network. In [START_REF] Steinhammer | Hardware Implementation of the Time-Triggered Ethernet Controller[END_REF], a prototypical TTEthernet controller is described and implemented in an FPGA. TTTech Computertechnik AG company issued the TTEthernet specification in [START_REF] Steiner | TTEthernet Specification[END_REF] and also developed industrial products [START_REF]TTEthernet Products[END_REF]. Finally, the TTEthernet specification is standardized by SAE in [START_REF]Time-Triggered Ethernet[END_REF].
Modeling TTEthernet has been used to simulate in-vehicle communication systems. In [START_REF] Steinbach | An Extension of the OM-NeT++ INET Framework for Simulating Real-time Ethernet with High Accuracy[END_REF], an extension to the OMNeT++ INET framework is made to support simulation of TTEthernet. The model is based on the standard Ethernet model in the INET framework. Although the evaluation shows the model is in good agreement with a real implementation, the model does not consider different protocol state machines for the different synchronization roles, which results in some TTEthernet services being simplified or not supported.
In order to support Ethernet networks in the system-level design in a SLDL, various models/approaches have been proposed. In [START_REF] Banerjee | Transaction Level Modeling of Best-Effort Channels for Networked Embedded Devices[END_REF], a half-duplex Ethernet based on CSMA/CD MAC protocol is simply modeled using SpecC and TLM techniques. In [START_REF]Ethernet Communication Protocol using TLM 2.0[END_REF], an Ethernet interface in SystemC/TLM-2.0 is modeled for virtual platform or architectural exploration of Ethernet controllers. Another approach is to integrate network simulators with simulation kernels of SLDLs. In [START_REF] Bombieri | TLM/Network Design Space Exploration for Networked Embedded Systems[END_REF], the NS-2 network simulator is integrated into the SystemC/TLM design flow. The advantage of this approach is that network simulators have a good support for almost every commonly used network. However, such an approach requires the integration of two discrete-event simulation kernels, which can greatly reduce the simulation efficiency.
Modeling TTEthernet in SystemC/TLM
Framework
Our TTEthernet model in SystemC/TLM aims at facilitating the design and integration of the network/platform layer in a CPS, especially if the system is a distributed mixed time-triggered/event-triggered real-time system. As shown in Fig. 1, the network/platform layer consists of several computational nodes which communicate with each other through a TTEthernet network. The TTEthernet model includes two separate parts: the TTEthernet controller and the TTEthernet switch. The network is deployed in star topology or cascaded star topology which uses switches to integrate each star topology.
In each node of the system, a TTEthernet controller communicates with other designed hardware components through a memory-mapped bus. Standard TLM-2.0 sockets are used for this purpose. As a target of TLM, the TTEthernet controller implements a blocking transport interface method for fast but looselytimed simulation and non-blocking transport interface methods for slow but approximately-timed simulation.
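As an illustration, the controller's target side could expose a blocking transport along these lines; the buffer helpers and the access latency are invented for the sketch and do not reflect the actual controller model.

```cpp
/* Sketch: TTEthernet controller as a TLM-2.0 target with a blocking transport. */
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct tte_controller : sc_core::sc_module {
    tlm_utils::simple_target_socket<tte_controller> bus_socket;

    SC_CTOR(tte_controller) : bus_socket("bus_socket") {
        bus_socket.register_b_transport(this, &tte_controller::b_transport);
    }

    /* Loosely-timed access from the CPU model: read/write TX and RX buffers. */
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        unsigned char* data = trans.get_data_ptr();
        sc_dt::uint64  addr = trans.get_address();
        if (trans.is_write())
            write_tx_buffer(addr, data, trans.get_data_length());
        else
            read_rx_buffer(addr, data, trans.get_data_length());
        delay += sc_core::sc_time(10, sc_core::SC_NS);   /* illustrative access latency */
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }

    void write_tx_buffer(sc_dt::uint64, unsigned char*, unsigned) { /* ... */ }
    void read_rx_buffer (sc_dt::uint64, unsigned char*, unsigned) { /* ... */ }
};
```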
In order to simulate the bidirectional communication link between two ports of TTEthernet devices, a specific Ethernet socket is used to model a port. Like the TLM-2.0 Ethernet socket introduced in [START_REF]Ethernet Communication Protocol using TLM 2.0[END_REF], our Ethernet socket is a class derived from both the initiator and target sockets of TLM-2.0. In order to distinguish different ports of a TTEthernet device, tagged initiator and target sockets are used as base classes of the Ethernet socket. For binding two Ethernet ports, bind() and operator() are overridden to bind the initiator socket of each port to the target socket of the other port. For invoking transport interface methods, operator-> determines which socket of a port should be accessed according to the called method. Since our TTEthernet model uses Ethernet rather than a memory-mapped bus, interoperability is not a concern when introducing a new transaction type for Ethernet, which is similar to the TLM Ethernet payload type introduced in [START_REF]Ethernet Communication Protocol using TLM 2.0[END_REF].
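The cross-binding idea can be sketched as follows; the class is reduced to its binding logic, the member names are assumptions, and the registration of the tagged transport callbacks with the owning module (as well as the custom Ethernet payload type) is omitted.

```cpp
/* Sketch: an Ethernet port socket bundling a tagged initiator and a tagged
 * target socket, cross-bound to its peer to model a full-duplex link. */
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <string>

template <typename MODULE>
struct eth_socket {
    tlm_utils::simple_initiator_socket_tagged<MODULE> out;  /* transmit side */
    tlm_utils::simple_target_socket_tagged<MODULE>    in;   /* receive side  */

    explicit eth_socket(const std::string& name)
        : out((name + "_out").c_str()), in((name + "_in").c_str()) {}

    /* Cross-bind: my initiator to the peer's target and vice versa. */
    template <typename PEER>
    void bind(eth_socket<PEER>& peer) {
        out.bind(peer.in);
        peer.out.bind(in);
    }
    template <typename PEER>
    void operator()(eth_socket<PEER>& peer) { bind(peer); }
};
```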
Clock Model
In TTEthernet, a synchronized global time is the basis for all time-triggered operations. Each TTEthernet device (controller/switch) is driven by a clock having a clock drift. Thus, the clock synchronization service is crucial for the correct operation. In order to simulate its synchronization service, each TTEthernet device needs to have an independent clock with its own drift and offset. However, SystemC uses a discrete event simulation kernel which maintains a global time. If we simulate every tick of a clock with a drift, the simulation overhead will be too large, which can seriously slow down the simulation. Instead, we model the clock as follows: a random ppm value is assigned to each clock in the interval [-MAX_PPM, -MIN_PPM] ∪ [MIN_PPM, MAX_PPM]. According to the time-triggered schedule, the duration in clock ticks from the current time to the time when the next time-triggered action needs to take place is calculated. After that, we can get the duration in simulation time by taking into account the clock drift: duration in simulation time = duration in clock ticks × (tick time ± drift), and then we can arrange a clock event with this amount of time by using the notification mechanism of sc_event in SystemC.
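As an illustration, the following minimal sketch shows how the drift-affected conversion from clock ticks to simulation time and the arrangement of a clock event could look; the names drifting_clock, to_sim_time and arrange are illustrative and the code is a simplified approximation of the model.

```cpp
#include <systemc>
#include <cstdint>

class drifting_clock
{
public:
    drifting_clock(sc_core::sc_time tick, double ppm)
        : tick_time(tick), drift_ppm(ppm) {}

    // duration in simulation time = duration in clock ticks x (tick time +/- drift)
    sc_core::sc_time to_sim_time(uint64_t ticks) const
    {
        const double factor = 1.0 + drift_ppm * 1e-6;        // ppm may be negative
        return tick_time * (static_cast<double>(ticks) * factor);
    }

    // arrange a clock event for the next time-triggered action
    void arrange(sc_core::sc_event& ev, uint64_t ticks_from_now) const
    {
        ev.notify(to_sim_time(ticks_from_now));
    }

private:
    sc_core::sc_time tick_time;   // nominal tick length, e.g. sc_time(10, SC_NS)
    double           drift_ppm;   // drawn from [-MAX_PPM,-MIN_PPM] U [MIN_PPM,MAX_PPM]
};
```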
Because the clock will be adjusted periodically by the synchronization service, the arranged clock events will be affected (their occurrence in simulation time becomes earlier or later). In order to simulate this properly, each arranged clock event and its occurrence time in clock ticks are stored in a linked list ordered by occurrence time. When a clock event occurs or its time has passed due to a clock adjustment, it is deleted from the linked list and the processes pending on it are resumed. When the clock is corrected, the notifications of the arranged clock events are canceled and new simulation times for the notifications of the events are recalculated based on the corrected clock.
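A possible way to re-arm the arranged clock events after a clock correction is sketched below; it reuses the to_sim_time() conversion of the drifting_clock sketch above, and the container and field names are again illustrative.

```cpp
#include <systemc>
#include <cstdint>
#include <list>

struct arranged_event
{
    sc_core::sc_event* ev;
    uint64_t           due_ticks;   // occurrence time expressed in local clock ticks
};

class clock_event_list
{
public:
    // called whenever the synchronization service corrects the local clock
    void on_clock_corrected(uint64_t corrected_now_ticks, const drifting_clock& clk)
    {
        for (auto it = events.begin(); it != events.end(); )
        {
            it->ev->cancel();                              // drop the old notification
            if (it->due_ticks <= corrected_now_ticks)
            {
                it->ev->notify(sc_core::SC_ZERO_TIME);     // its time has already passed
                it = events.erase(it);
            }
            else
            {
                it->ev->notify(clk.to_sim_time(it->due_ticks - corrected_now_ticks));
                ++it;
            }
        }
    }

private:
    std::list<arranged_event> events;   // kept ordered by due_ticks
};
```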
A timer model is also built on the clock model, which uses the drift of the clock model to calculate the duration in simulation time and is used for timeout events. In contrast to clock events, timeout events are not affected by clock synchronization and only depend on how many ticks should pass before they occur.
TTEthernet Traffic Classes
TTEthernet supports three traffic classes: time-triggered (TT), rate-constrained (RC), and best-effort (BE). In order to recognize which traffic class a frame belongs to, either encoding it in the Ethernet MAC destination address or using the EtherType field of the Ethernet frame header is feasible [START_REF]Time-Triggered Ethernet[END_REF]. In our model, we divide the destination address into two parts to identify critical traffic (CT), i.e., TT and RC. The first part (32 bits) of the destination address shows whether a frame belongs to CT; it is checked against the result of a bitwise AND of the CT marker and the CT mask. The second part (16 bits) gives the CT ID, which is used for further checking and scheduling.
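The check can be illustrated with the following small helper functions; the exact masking of the address part may differ in a concrete configuration, so the code should be read as a sketch of the described comparison rather than as the normative rule.

```cpp
#include <cstdint>

struct ct_config
{
    uint32_t ct_marker;
    uint32_t ct_mask;
};

// first part (upper 32 bits of the 48-bit destination address)
inline bool is_critical_traffic(uint64_t dest_mac, const ct_config& cfg)
{
    const uint32_t first_part = static_cast<uint32_t>(dest_mac >> 16);
    // compared against the bitwise AND of CT marker and CT mask; depending on the
    // configuration, the address part itself may additionally be masked
    return first_part == (cfg.ct_marker & cfg.ct_mask);
}

// second part (lower 16 bits) gives the CT ID used for further checks and scheduling
inline uint16_t ct_id(uint64_t dest_mac)
{
    return static_cast<uint16_t>(dest_mac & 0xFFFFu);
}
```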
TT messages are used for applications with strict requirements like deterministic end-to-end latency and bounded jitter. RC messages, compliant with ARINC 664 standard part 7, are used for applications with less strict requirements, for which sufficient bandwidth should be allocated. BE messages, using the remaining bandwidth of the network, form the standard Ethernet traffic which has no guarantee of delivery and transmission latency.
TTEthernet also has a transparent traffic used for its synchronization protocol. The synchronization message is called protocol control frame (PCF), and has three types: coldstart frames (CS) and coldstart acknowledgment frames (CA) are used for startup and restart services, and integration frames (IN) are used for synchronization service. In our model, the PCF traffic also uses the MAC destination address to encode its identity.
TTEthernet Device
The TTEthernet controller and switch have several common functions/services. We extract all the common ones and implement them in a class named tte_device. tte_device is the abstract base class of tte_controller and tte_switch; it has pure virtual functions that need to be implemented by tte_controller and tte_switch to define the different behaviors of these two devices. Fig. 2 shows the main SystemC processes in tte_device. There is an init_method() SystemC method process which is sensitive to a power-on event and initializes the device. This is used to model the different power-on times of the different devices, given in a configuration file. After power-on, the startup service of TTEthernet tries to bring the device into synchronized operation mode.
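A condensed sketch of such a base class is given below; the member and process names follow the description above, but the code is a simplified illustration (ports, scheduler and state machine processes are omitted) and not the actual implementation.

```cpp
#include <systemc>

struct eth_frame;   // plain Ethernet frame representation (not shown here)

class tte_device : public sc_core::sc_module
{
public:
    SC_HAS_PROCESS(tte_device);

    tte_device(sc_core::sc_module_name name, sc_core::sc_time power_on)
        : sc_core::sc_module(name), power_on_delay(power_on)
    {
        SC_METHOD(init_method);
        sensitive << power_on_event;
        dont_initialize();
    }

protected:
    // traffic handling differs between controller and switch
    virtual void process_tt_frame(const eth_frame& f, int port) = 0;
    virtual void process_rc_frame(const eth_frame& f, int port) = 0;
    virtual void process_be_frame(const eth_frame& f, int port) = 0;

    void start_of_simulation() override
    {
        power_on_event.notify(power_on_delay);   // per-device power-on time
    }

    void init_method()
    {
        // initialize configuration tables, then let the startup service try to
        // bring the device into synchronized operation
        startup_event.notify(sc_core::SC_ZERO_TIME);
    }

    sc_core::sc_time  power_on_delay;
    sc_core::sc_event power_on_event;
    sc_core::sc_event startup_event;
};
```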
Ports: An Ethernet socket is used to realize the Ethernet ports. The TLM-2.0 transport interface methods are implemented to transmit standard Ethernet frames. The Ethernet socket has both a blocking and a non-blocking transport interface. Due to the star or cascaded star network topology of TTEthernet, the collision domain is segmented and only two directly connected TTEthernet devices may contend for the use of the medium. We model TTEthernet working in full-duplex mode so that collisions become impossible; moreover, the non-preemptive integration of TT and ET traffic is used, which is compliant with the products in [START_REF]TTEthernet Products[END_REF]. Thus, the efficient blocking transport method becomes accurate enough to model the communication between two TTEthernet devices.
Each TTEthernet device (controller/switch) can have several Ethernet ports according to its configuration. For a controller, multiple ports provide redundancy: they send the same frame in order to realize fault tolerance. For a switch, each port can be connected to a controller or a switch to create a separate collision domain. Each port is associated with three dynamic thread processes, send_thread(), recv_thread(), and release_ET(). The send_thread() and recv_thread() processes together with the scheduler model the data link layer of TTEthernet, which uses a TDMA MAC protocol. The send_thread() process is responsible for starting a frame transmission and is controlled by the scheduler process and the release_ET() process via events. The release_ET() process knows the schedule and is responsible for signaling the send_thread() process to send an ET frame if there is a large enough gap for this frame before the next TT frame dispatching time. The recv_thread() process waits for an incoming frame delivered by the b_transport() method registered with the Ethernet socket. When a frame is transmitted through the b_transport() method, it is processed by the recv_thread() process. According to the analysis of the destination address, either a PCF handler process is dynamically spawned, or one of the traffic processing functions (TT, RC, or BE) is called. The traffic processing functions are pure virtual functions which need to be implemented by the different TTEthernet devices to realize their different behaviors.
Scheduler: Every TTEthernet device sends packets according to a static schedule that relies on synchronized global time. The static schedule is generated by an off-line scheduling tool and used by the TTEthernet device through a configuration file. We use the off-line scheduling tool provided by TTTech [START_REF]TTEthernet Products[END_REF], which guarantees two TT frames never contend for transmission.
The exec_sched_thread() process implements the function of the scheduler and is responsible for signaling the send_thread() processes of the ports to start a TT frame transmission according to the static schedule. It pends on a synchronization event that occurs when the device enters the synchronized states, and starts executing the schedule when the event happens. If the device goes out of the synchronized states, the exec_sched_thread() process is signaled to stop executing the schedule. If the device is a synchronization master, the scheduler process also signals the send_thread() processes to send out an integration PCF when the PCF's dispatching time is reached (the dispatching time is 0 in our model).
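The following fragment sketches the behavior of such a scheduler process; it reuses the drifting_clock conversion from the clock model sketch, assumes that each schedule entry stores the tick delay to the next dispatch point, and omits the re-arrangement of events after clock corrections, so it is only an approximation of the described process.

```cpp
#include <systemc>
#include <vector>
#include <cstdint>

struct sched_entry
{
    uint64_t delay_ticks;   // ticks from the previous dispatch point to this one
    int      port;          // whose send_thread() to trigger
};

struct scheduler_ctx
{
    std::vector<sched_entry> schedule;          // from the off-line scheduling tool
    sc_core::sc_event*       tt_dispatch_event; // one event per port
    sc_core::sc_event        in_sync_event;     // notified on entering a synchronized state
    bool                     in_sync = false;
    const drifting_clock*    clk = nullptr;
};

// body of an SC_THREAD process
void exec_sched_thread(scheduler_ctx& c)
{
    for (;;)
    {
        sc_core::wait(c.in_sync_event);               // device became synchronized
        while (c.in_sync)
        {
            for (const sched_entry& e : c.schedule)
            {
                sc_core::wait(c.clk->to_sim_time(e.delay_ticks));
                if (!c.in_sync) break;                // left the synchronized states
                c.tt_dispatch_event[e.port].notify(); // start a TT frame transmission
            }
        }
    }
}
```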
Protocol State Machine: Each TTEthernet device executes exactly one of the protocol state machines to maintain its role for synchronization, which are formulated in [START_REF]Time-Triggered Ethernet[END_REF]. All TTEthernet devices can be classified into three different roles: synchronization masters (SMs), synchronization clients (SCs), and compression masters (CMs). Startup service of the protocol state machines tries to establish an initial synchronized global time to make devices operate in synchronized mode. When a device detects it is out of synchronization, restart service of the protocol state machines will try to resynchronize itself.
The model has three SystemC thread processes to realize the different protocol state machines, namely psm_sm_thread() for SM, psm_sc_thread() for SC, and psm_cm_thread() for CM, as shown in Fig. 2. Each state has its own sc_event object on which the state pends. If a state has a transition fired by a timeout, it also sets up an event notification using the timer model and pends on the event "OR" list of its own sc_event object and the timeout sc_event object. The sc_event object is notified when any one of the transitions of this state is enabled, and the corresponding transition flag is set to show that the guard of this transition is met. By checking the flags in the order defined in [START_REF]Time-Triggered Ethernet[END_REF], priorities of concurrently enabled transitions are enforced in the protocol state machines, which guarantees determinism. Since concurrent sc_event notifications do not queue up, events enabling concurrent transitions do not queue up during the execution of the protocol state machines either; the transition flags record them instead.
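The waiting pattern of a single state can be sketched as follows; the flag and state names are illustrative and do not reproduce the exact state machines of the standard, the point being the combined wait on the state event and the timeout event followed by a fixed-priority check of the transition flags.

```cpp
#include <systemc>

// transition flags set by the PCF handler, the clique detection processes and the
// timer model; names are illustrative, not the standard's identifiers
struct sm_flags { bool sync_hit = false; bool tentative = false; bool timeout = false; };

enum sm_state { SM_INTEGRATE, SM_TENTATIVE_SYNC, SM_SYNC, SM_UNSYNC };

sm_state wait_for_transition(sc_core::sc_event& state_event,
                             sc_core::sc_event& timeout_event,
                             sm_flags& flags, sm_state current)
{
    // the state pends on the "OR" list of its own event and the timeout event
    sc_core::wait(state_event | timeout_event);

    // concurrently enabled transitions are resolved by checking the flags in a
    // fixed priority order, which keeps the state machine deterministic
    if (flags.sync_hit)  return SM_SYNC;
    if (flags.tentative) return SM_TENTATIVE_SYNC;
    if (flags.timeout)   return SM_UNSYNC;
    return current;   // no enabled transition: remain in the current state
}
```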
Clique detection is used in TTEthernet to detect clique scenarios where different synchronized time bases are formed within one synchronization domain. When cliques are detected, the protocol state machines try to reestablish synchronization. The detect_clique_sync() method process is responsible for synchronous clique detection and is sensitive to an event that is notified when the acceptance window for receiving scheduled PCFs is closed. The detect_clique_async() method process is responsible for asynchronous clique detection and is sensitive to an event that is notified when the acceptance window for receiving scheduled PCFs is closed in CMs or when the clock reaches the dispatching time in SMs or SCs.
Synchronization Service: When operating in synchronized mode, TTEthernet uses a two-step synchronization mechanism: SMs dispatch PCFs to CMs, and CMs calculate the global time from the PCFs (i.e. "compress") and dispatch "compressed" PCFs to SMs and SCs. SMs and SCs receive "compressed" PCFs and adjust their clocks to integrate into the synchronized time base.
When a PCF arrives, a dynamic PCF handler process (process_PCF()) is spawned to cope with this PCF. Concurrent PCF handler processes may exist due to multiple PCFs arriving with a small time difference. The permanence function [START_REF]Time-Triggered Ethernet[END_REF] is used to reestablish the temporal order of the received PCFs. The process_PCF() process implements the permanence function by using the timer model. By checking the PCFs, the process also enables those transitions in the protocol state machines whose guards only depend on PCFs.
If the TTEthernet device is a CM, a dynamic compression process (compression()) may be spawned if there is no process handling the corresponding integration cycle of the PCF. The integration cycle field of a PCF shows which round of synchronization this PCF belongs to. The compression function [START_REF]Time-Triggered Ethernet[END_REF] is used to collect PCFs having the same integration cycle number within a configurable interval and to compress these PCFs for calculating the synchronized global time. The compression() process also uses the timer model to realize all the time delays needed by its collection and delay phases.
When the acceptance window for receiving PCFs ends, the sync_thread() process is resumed to calculate the clock correction from the PCFs that are in-schedule. After a fixed delay (at least greater than half of the acceptance window), the clock is adjusted by the calculated correction value.
TTEthernet Controller & Switch
Both the TTEthernet controller and switch are derived from TTEthernet device, and implement the pure virtual functions to realize different behaviors of processing the traffic.
The TTEthernet controller also acts as a TLM-2.0 target which receives transactions containing Ethernet frames via a target socket. Extensions are made to the generic payload to show which traffic class the Ethernet frame belongs to. Each traffic class has its own transmission and reception buffers. In the case of a write command, the controller puts the extracted Ethernet frame into the corresponding traffic transmission buffer. When the controller receives a frame, it will signal the processor to read it via interrupt, and it puts the received frame into a transaction in response to a read command and sets the traffic class extension.
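Such an extension can follow the standard TLM-2.0 tlm_extension pattern, as sketched below with illustrative names.

```cpp
#include <tlm>

enum class tte_traffic_class { TT, RC, BE };

struct traffic_class_ext : tlm::tlm_extension<traffic_class_ext>
{
    tte_traffic_class tc = tte_traffic_class::BE;

    tlm::tlm_extension_base* clone() const override
    {
        return new traffic_class_ext(*this);
    }
    void copy_from(const tlm::tlm_extension_base& other) override
    {
        tc = static_cast<const traffic_class_ext&>(other).tc;
    }
};

// usage on the initiator (processor) side:
//   auto* ext = new traffic_class_ext;
//   ext->tc = tte_traffic_class::TT;
//   payload.set_extension(ext);
// and on the controller (target) side:
//   traffic_class_ext* ext = nullptr;
//   payload.get_extension(ext);
```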
The TTEthernet switch uses a critical traffic table and a schedule table to route and forward CT (TT and RC) frames, and uses static/dynamic routing tables for BE frames. It also acts as a temporal firewall for TT traffic to segregate faulty controllers if they are babbling. For RC traffic, it uses a token bucket algorithm to enforce a bandwidth allocation gap between two consecutive RC frames.
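A minimal sketch of such a token bucket check is shown below, assuming one policer instance per RC virtual link; the structure and field names are illustrative.

```cpp
#include <systemc>
#include <algorithm>

// credit accumulates at a rate of one frame per BAG and is capped at one frame
struct rc_policer
{
    sc_core::sc_time bag;                                  // bandwidth allocation gap
    double           credit      = 1.0;
    sc_core::sc_time last_update = sc_core::SC_ZERO_TIME;

    bool accept(const sc_core::sc_time& now)
    {
        credit = std::min(1.0, credit + (now - last_update) / bag);
        last_update = now;
        if (credit >= 1.0) { credit -= 1.0; return true; } // forward the RC frame
        return false;                                      // police (drop) the frame
    }
};

// example: rc_policer p{sc_core::sc_time(1, sc_core::SC_MS)};
//          bool ok = p.accept(sc_core::sc_time_stamp());
```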
Experimental Results
In this section, we compare our TTEthernet model with the model in the OMNeT++ INET framework [START_REF] Steinbach | An Extension of the OM-NeT++ INET Framework for Simulating Real-time Ethernet with High Accuracy[END_REF] and a real TTEthernet implementation from TTTech for validation. We also evaluate the startup and restart services as well as the efficiency of the simulation.
Validation
We set up a star topology which has four nodes connected to a central TTEthernet switch with 100Mbit/s links, as shown in Fig. 3 (a). Node 1 sends both TT traffic and BE traffic to Node 2, and both Node 3 and Node 4 send only BE traffic to Node 2. All the traffic goes through the TTEthernet switch. The communication period is 10ms, and the time slot is 200µs. The maximum clock drift is set to 200ppm for the models. Node 1 sends a TT frame at a 1ms offset in each period. The configuration files, including the corresponding XML files for the nodes and the switch, are generated by the TTTech toolchain [START_REF]TTEthernet Products[END_REF]. From the generated XML files, we extract parameters such as the critical traffic table and the schedule table to configure our model and the model in OMNeT++. In this setup, the switch dispatches the TT frame sent by Node 1 at a 1.4ms offset in each period. The metrics we measure are the average end-to-end latency and the jitter of TT frames, which are important factors for real-time communication systems. We measure these metrics for different TT frame sizes under full link utilization of BE traffic. Fig. 4 shows the results of our model in SystemC/TLM, the model in the OMNeT++ INET framework, and the software stack implementation in Linux from TTTech [START_REF]TTEthernet Products[END_REF].
From the figure we can see the model in SystemC/TLM and the model in OMNeT++ INET framework give very similar results. In [START_REF] Bartols | Performance Analysis of Time-Triggered Ether-Networks Using Off-the-Shelf-Components[END_REF], the method of measuring end-to-end latency of software-based implementation of TTEthernet is stated. According to [START_REF] Bartols | Performance Analysis of Time-Triggered Ether-Networks Using Off-the-Shelf-Components[END_REF], the measured latency gap (90µs) between frame size of 123 and 124 bytes of the software-based implementation is caused by the measuring port driver configuration. The measured jitter of the software-based implementation is bounded by 30µs [START_REF] Bartols | Performance Analysis of Time-Triggered Ether-Networks Using Off-the-Shelf-Components[END_REF]. The hardware-based implementations will bound the jitter more tightly [START_REF]TTEthernet Products[END_REF].
Evaluation
We set up the network as shown in Fig. 3 (b) to evaluate the startup and restart services implemented in our model. In this cluster, Node 1, Node 2, Node 5, and Node 6 are SMs; Switch 1, Switch 2, and Switch 3 are CMs; the rest are SCs. The integration cycle is 10ms, and the parameters are generated by the TTTech toolchain [START_REF]TTEthernet Products[END_REF].
With different power-on times, we record in Tab. 1 the time when every powered device in the cluster enters its synchronized state. Since different power-on times of Switch 2 may cause cliques in the cluster, we also record the time when every powered device is resynchronized back to its synchronized state due to clique detection and the restart service. Since in this setup the SCs only passively receive PCFs, we set their power-on times to 0s. When all devices start at approximately the same time, they are synchronized quickly, as shown in the first two cases. When the CMs are powered later than the SMs, the time at which all devices are synchronized is delayed, as shown in the third case. In the fourth case, Node 1, Node 2, and Switch 1 establish a synchronized time base at about 29.8ms; likewise, Node 5, Node 6, and Switch 3 establish the other synchronized time base. Switch 2, which is the CM connected to the other two CMs, is powered just a little later than the time when the two synchronized time bases are established. Since during this small time interval the clock drifts have not caused the two time bases to differ too much, Switch 2 joins the synchronization quickly and the two time bases are merged into one. In the fifth case, two separate synchronized time bases are also established before Switch 2 is powered. However, this time Switch 2 is powered much later (about 49.07s) than the time when the two time bases are established. The clock drifts have caused the two subsets of devices not to be synchronized over the subset boundaries. When Switch 2 is started, the asynchronous clique detection and restart service implemented in our model result in a new synchronized global time, and at 50.0256s every device is synchronized.
Finally, we evaluate the scalability and simulation efficiency of our approach. We set up the evaluation using a central switch, and all the nodes are connected to the switch. The simulated time is 1000s, and we incrementally add pairs of nodes to the network. Each pair of nodes, such as Node 1 and Node 2, communicates with each other using TT, RC, and BE traffic. Each node sends out a TT frame, an RC frame, and a BE frame every 10ms. Thus, 300,000 × (number of nodes) frames are transmitted in total. The result is shown in Fig. 5. From the results we can see that the model in SystemC/TLM has good simulation efficiency when the number of nodes increases. The simulation speed of the model in the OMNeT++ INET framework is also evaluated under the same computation environment (2.50GHz dual-core CPU and 6GB memory). We simulate the same topology and traffic by using the fastest mode in OMNeT++ to eliminate the influence of animation and text outputs.
Conclusions
Due to the complex interactions between different layers of a CPS, virtual prototyping has become an important approach to discover potential design flaws before the last integration stage. SystemC/TLM has been adopted for virtual prototyping because of its capability of modeling both the software layer and the network/platform layer of a CPS. TTEthernet has been used in many CPS domains and provides bounded end-to-end latency and jitter.
In order to take into account the network effects caused by TTEthernet when designing a CPS, a model in SystemC/TLM is proposed in this paper. The developed model considers all the necessary features of TTEthernet and can be integrated into the hardware platform model in a straightforward manner. We validate the model against the model in OMNeT++ INET framework and the software-based implementation from TTTech, and evaluate the startup and restart services which are used to maintain the synchronization by powering on the devices at different times.
Future work focuses on integrating this model into a mixed TT/ET distributed CPS simulation framework and on using timed automata to verify the model.
Fig. 1. Network/Platform Layer Design Using TTEthernet Model in SystemC/TLM.
Fig. 2. TTEthernet Device Main Structure.
Fig. 3. Experiment Scenarios.
Fig. 4. Average End-to-End Latency and Jitter of Different Frame Sizes.
Fig. 5. Used CPU Time of Different Number of Nodes.
Table 1. Startup and Restart Service Evaluation
N1 & N2 & N5 & N6 | SW1 & SW2 & SW3 | Sync | Resync
0s/0s/0s/0s | 0s/0s/0s | 29.834ms | -
0.1ms/1ms/0.5ms/1.2ms | 1.1ms/0.8ms/1.5ms | 30.845ms | -
2ms/4ms/8ms/6ms | 30ms/10ms/40ms | 79.856ms | -
0s/0s/0s/0s | 0s/30ms/0s | 38.677ms | -
0s/0s/0s/0s | 0s/50s/0s | 29.776ms | 50.0256s
Acknowledgments. This work has been partially supported by the National Science Foundation (CNS-1035655). | 32,424 | [
"1001402",
"1001403"
] | [
"168363",
"168363"
] |
01466689 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466689/file/978-3-642-38853-8_30_Chapter.pdf | Gang Li
email: [email protected]
Søren Top
email: [email protected]
Mads Clausen
I/O Sharing in a Multi-core Kernel for Mixed-criticality Applications
Keywords: I/O sharing, multi-core systems, mixed-criticality, safety-critical real-time kernel, safe inter-core communication
In a mixed-criticality system, applications with different safety criticality levels are usually required to be implemented upon one platform for several reasons( reducing hardware cost, space, power consumption). Partitioning technology is used to enable the integration of mixedcriticality applications with reduced certification cost. In the partitioning architecture of strong spatial and temporal isolation, fault propagation can be prevented among mixed-criticality applications (regarded as partitions). However, I/O sharing between partitions could be the path of fault propagation that hinders the partitioning. E.g. a crashed partition generates incorrect outputs to shared I/Os, which affects the functioning of another partition. This paper focuses on a message-based approach of I/O sharing in the HARTEX real-time kernel on a multi-core platform. Based on a simple multi-core partitioning architecture, a certifiable I/O sharing approach is implemented based on a safe message mechanism, in order to support the partitioning architecture, enable individual certification of mixed-criticality applications and thus achieve minimized total certification cost of the entire system.
Introduction
A computer-based system in critical applications should guarantee to be safe, without making harm to humans or environment around, even in case of the corruption of some certain parts of safety-critical applications. In the real life, applications have different natural criticality levels. E.g. a KONE escalator1 comprises a dual-channel safety-critical control logic system (an application) and non-safety-critical systems such as display. The criticality level of an application is defined as a Safety Integrity Level (SIL, a level of risk reduction) in the IEC 61508 [START_REF]Functional safety of electrical/electronic/programmable electronic safety related systems[END_REF], ranging from the least dependable SIL 1 to the most dependable SIL 4. The certification cost of an application with a SIL is in the direct proportion to the SIL since the development and certification of higher SIL applications has more rigorous requirements. When a system integrates different SIL applications on one platform with resources sharing(processor, memory or I/O) and no isolation mechanism is taken into account, all the applications have to be certified to the highest SIL, aiming at ensuring the dependability of the highest SIL applications. Otherwise, lower SIL applications with higher failure probability would corrupt higher SIL applications. The entire system being certified to the highest SIL definitely leads to unacceptable increase in the development and certification cost. The basic approach introduces partitioning between mixed-criticality applications in terms of logical and temporal behaviour, which can prevent the propagation of a number of faults between applications. Each application is presumed to reside on a partition in the system.
Besides the traditional partitioning approach (federated architecture), the ongoing partitioning trends are to share a processing element for different SIL applications, or implement them on individual cores on a multi-core platform or the combination of both, by using partitioning mechanisms that provides isolation between applications. E.g. ARINC651 [START_REF]Arinc specification 651: Design guidance for integrated modular avionics[END_REF] standards proposed Integrated Modular Avionics (IMA) architecture which enforces system-level spatial and temporal isolation to integrate mixed-criticality applications onto a processor. [START_REF] Ernst | Certificationn of trusted mpsoc platforms[END_REF] implements mixed-criticality applications into different isolated cores on the trusted MPSoC platforms. Due to such partitioning mechanisms, different SIL applications on one processing element or platform can be certified individually according to their own SILs, and the total cost of development and certification is subsequently reduced. This is the exact objective of the ARTEMIS project Reduced Certification Costs Using Trusted Multi-core Platforms(RECOMP) 2 .
Besides performing partitioning mechanisms regarding processors and memory, I/O partitioning is also a big challenging issue when I/Os have to be shared by mixed-criticality applications. Firstly, mixed-criticality applications are not completely isolated if partitions can operate on shared I/Os directly. Shared I/Os are usually related to the functions of both high SIL applications and low SIL applications. A low SIL application could fail and possibly masquerade as a high SIL application to perform incorrect input or output on its accessible I/Os, which therefore results in the failure of the high SIL application. Additionally, even if the low SIL application doesn't masquerade as the high SIL application, it also possibly puts I/Os out of operation and thus the high SIL application is unable to perform its critical operations. Secondly, even if each partition works well from its own point of view, the I/O operation could also fail. As seen in Fig. 1, a safety-critical application (Partition 1) reaches the end point of its statically-allocated time slice and has to handover I/O resources to another partition when it's in the critical section of operating on the shared I/Os. This could be unacceptable for I/O operations. Furthermore, another application (Partition 2) takes the turn and possibly changes the I/O configuration or perform its desired operations on I/Os. In the next main frame (a periodical interval), Partition 1 obtains the turn again and continues to perform the rest of the operations. This raises the problem that Partition 2 maybe has modified the I/O configuration or has performed some input or output in the critical sec- In this paper, Section 2 discusses safety requirements of message-based I/O sharing approach on the basis of safety standards and our experiences. Section 3 introduces a simple multi-core partitioning architecture and then Section 3 investigates the design of message-based I/O sharing approach in our real-time kernel (HARTEX) [START_REF] Angelov | Hartex: a safe real-time kernel for distributed computer control systems[END_REF]. Section 5 gives an example how this I/O sharing approach is used in an industrial safe stop component. Section 6 discusses related work and Section 7 concludes.
Safety requirements
To enable the integration of mixed-criticality applications onto a multi-core platform, robust partitioning [START_REF]Integrated modular avionics (ima) development guidance and certification considerations[END_REF] is employed to provide an assurance of intended isolation of independent applications (partitions) implemented upon one platform. Regarding I/O sharing in the partitioning, informal requirements at the architectural level are proposed:
1. A partition must be unable to masquerade as another partition to operate on shared I/Os. 2. A partition must be unable to affect the shared I/O operations of another partition in terms of logical and temporal behaviour. 3. A partition must be unable to affect the logical and temporal behaviour of another partition through shared I/Os. 4. Any attempt of the incorrect access to I/Os must not lead to a unsafe state and can be detected and recognized, and results in a corresponding action to handle the violation.
Note that the second requirement enforces that a partition must be unable to affect the I/O operations of another partition, since a partition could change the I/O configuration without notifying the other partition or disturb its I/O operations.
The third one requires a partition to be isolated from another partition even when using shared I/Os. Since message passing is used to transfer data between partitions within one core or across multiple cores for I/O sharing, a set of safety requirements shall be proposed to ensure safe communication according to the safety standard IEC 61508.
1. Protection shall be provided to message-based communication for detecting message loss, repetition, insertion and resequencing. 2. Protection shall be provided to message-based communication for detecting message data corruption, ensuring data integrity. 3. Protection shall be provided to message-based communication for detecting message transmission delay, ensuring temporal behaviour. 4. Protection shall be provided to message-based communication for detecting message masquerade, ensuring correct identification of messages. 5. The message-based communication shall support mixed criticality levels. 6. The message-based communication shall support the interaction across mixed-criticality partitions. 7. Any fault of the message-based communication shall not lead to an unsafe state and shall be handled after being detected.
The first to fourth requirements ensure safe message-based communication, which is the basis of the I/O sharing. The fifth one requires that communication services of different SILs are provided to serve applications of different SILs. The sixth one enables the communication between mixed-criticality partitions, which is required by the nature of the applications. E.g. a high SIL control application needs to send control results to a low SIL display application.
System design
To fulfill the requirements presented above and achieve easier certification, one rule of thumb should be kept in mind while designing the partitioning multi-core system architecture and the I/O sharing in the HARTEX real-time kernel: simplification.
System architecture
The proposed system architecture supports two levels of partitioning in the context of a configurable multi-core platform (e.g. a System on Chip (SoC) platform): the inter-core level and the intra-core level, as shown in Fig. 2. At the inter-core level, the physical separation of processing cores is exploited to a great extent for core independence and isolation. Each core has a private memory (or a virtual private one), and the multi-core HARTEX kernel employs the asymmetric multi-processing (AMP) multikernel architecture proposed in [START_REF] Baumann | The multikernel: a new os architecture for scalable multicore systems[END_REF]. The proposed multi-core architecture is similar to a federated architecture but resides on a chip, in order to achieve partitioning that is relatively easy to certify and to exploit the methodologies and frameworks used in distributed systems. Additionally, each core supports intra-core partitioning similar to IMA if mixed-criticality applications have to be allocated to one processing core. However, intra-core partitioning incurs a higher certification cost since it shares more resources and complicates the architecture of the kernel and the hardware (involving a Memory Management Unit or a Memory Protection Unit). Therefore, partitions with different SILs are recommended to be allocated to isolated cores at the inter-core level. If the inter-core isolation can satisfy the system partitioning needs, intra-core isolation is no longer needed, avoiding the certification cost of the intra-core isolation design. Therefore, the intra-core isolation mechanism is designed as a configurable component in the kernel. A violation handling mechanism of the partitioning is required to process violations in a unified manner.
In this two-level partitioning architecture, I/O sharing among partitions can be addressed by a simple message-based approach, as shown in Fig. 3. Each shared I/O has a dedicated actor that can perform all possible operations on the I/O. Tasks belonging to different partitions that need to share the I/O are no longer able to access the I/O directly. The I/O is only accessible to its hosting actor. The I/O with its actor works in an isolated partition, and the other partitions can perform their desired operations on the I/O by sending a request via a safe message. The actor works as a server and processes all the requests buffered in a waiting queue when the actor partition gets its turn to run. All the execution of this approach is determined at system integration time. This approach has two advantages. Firstly, less software has to be certified to the highest SIL among the mixed-criticality partitions. Each partition is isolated from the I/O partition and the other partitions, so the low SIL partitions are only certified to their corresponding SILs. Of course, the I/O actor has to be certified to the highest SIL, which is essentially inevitable. Secondly, this is a simpler solution for I/O sharing on a multi-core platform compared to multi-core mutual exclusion management. Therefore, it is easier to certify.
Safe message passing
Fig. 4 presents the communication mechanism of the HARTEX multikernel in general. Each core has a message manager and a communication stack (COMM stack) for safe intra-core and inter-core communication. The message manager and the communication stack are dedicated to implementing the safety layer and the communication protocol, respectively. The message manager takes the responsibility of managing messages and applying safety-related communication measures. If a message passing failure occurs, the message manager reports the failure to the Violation Manager in the HARTEX multikernel, which handles all kinds of violations in a unified manner. Additionally, the message manager pushes incoming I/O sharing requests into the I/O waiting queue. The communication stack is a typical layered network protocol for embedded systems, which takes care of message routing for intra-core and inter-core communication. It also provides time-triggered communication that enables the communication subsystem to transfer a message at a specific time instant.
The communication process is fully controlled by the trusted kernel executing in supervisor mode. The message manager provides a number of safety measures in the kernel space, listed in Table 1. A message can be configured to have several of these safety measures applied according to its SIL requirement. E.g. the Cyclic Redundancy Check, Check Sum and Message Acceptance Filtering measures can be applied to one message by putting the IDs of these measures into the message configuration. More safety-related measures can be added to the HARTEX multikernel. As shown in Table 2, all the potential communication faults presented in IEC 61508 can be alleviated by the listed safety measures. Note that a pad (•) in the table means that all the measures labelled by pads should be applied together to cover this kind of fault.
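To illustrate the idea, the following sketch shows how a message configuration could select a subset of the measures of Table 1 and how a received message could be validated against it; the identifiers are illustrative and do not reflect the actual HARTEX interfaces, and the CRC computation is only declared.

```cpp
#include <cstdint>

enum msg_measure : uint8_t
{
    MEAS_CRC           = 1 << 0,   // Cyclic Redundancy Check
    MEAS_DUAL_CHANNEL  = 1 << 1,   // message via dual channels
    MEAS_CHECKSUM      = 1 << 2,
    MEAS_DOUBLE_SEND   = 1 << 3,
    MEAS_ACCEPT_FILTER = 1 << 4,   // message acceptance filtering
    MEAS_SEQ_INDEX     = 1 << 5,   // message sequence index
    MEAS_TIMED_SCHED   = 1 << 6    // timed message scheduling
};

struct msg_config
{
    uint8_t  measures;         // bitwise OR of the measure IDs applied to this message
    uint16_t expected_sender;  // for acceptance filtering
    uint8_t  next_seq;         // expected sequence index
};

struct message
{
    uint16_t       sender_id;
    uint8_t        seq;
    uint32_t       crc;
    const uint8_t* data;
    uint32_t       len;
};

uint32_t compute_crc(const uint8_t* data, uint32_t len);   // implementation not shown

bool validate_message(const message& m, msg_config& cfg)
{
    if ((cfg.measures & MEAS_ACCEPT_FILTER) && m.sender_id != cfg.expected_sender)
        return false;                         // possible masquerade
    if ((cfg.measures & MEAS_SEQ_INDEX) && m.seq != cfg.next_seq)
        return false;                         // loss, repetition or resequencing
    if ((cfg.measures & MEAS_CRC) && m.crc != compute_crc(m.data, m.len))
        return false;                         // data corruption
    cfg.next_seq = static_cast<uint8_t>(cfg.next_seq + 1);
    return true;   // on failure the caller would report to the Violation Manager
}
```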
I/O sharing management
Fig. 5. Server-Client model of shared resource access mechanism
In a multi-core system, some I/Os are dedicated to a specific core, but some can be connected to several cores (e.g. an LCD I/O interface connected to multiple cores by a common bus). This raises issues from the multi-core point of view. Firstly, some dedicated resources possibly need to be accessed by the other cores. Secondly, multiple cores could use a shared exclusive resource concurrently. This contention can reduce system scalability and performance, since in the conventional way all the relevant states of a resource (e.g. free, being occupied) should be maintained for state consistency among all the cores. This complicated interaction causes the system to have a higher certification cost. Moreover, partitioning is not taken into account while sharing I/Os. Our approach improves the localizability of shared resources. Each shared I/O is designed to form a partition on a specified core, and all the possible operations on the shared I/O are only performed by this partition. A client-server model for a shared I/O is introduced such that the operations on the I/O by all the other partitions (clients) are achieved by sending request messages to the specified I/O partition. An I/O actor (as a server), integrated with the resource manager on the specified host core, receives request messages, validates the messages, performs the corresponding operations on the shared resource, and optionally feeds back operation results to the requesting cores. The I/O actor is isolated from the other partitions, and the message-based requests are guaranteed by the aforementioned safety measures applied to the messages. Additionally, the upper limit on the number of requests from a partition in one Main Frame is predetermined in the actor partition, as well as the specific I/O operation services that may be used by the partition. Fig. 5 illustrates that tasks on the same core or on different cores can send I/O requests to the resource server via the local message manager. The execution of the actor in the separation kernel depends on the allocated time slices of the actor partition.
This approach has several advantages. Firstly, the relevant states of shared resources do not need to be replicated among multiple cores, which directly facilitates higher system scalability and simplifies the system architecture. Secondly, an I/O resource accessible only to one core is enabled to be accessed by the other cores in a uniform manner. Thirdly, the core-level subsystem with localized I/Os has less logical interference with the other cores besides the validated message requests, which facilitates the partitioning architecture. Therefore, the system abstraction model is more understandable and also more easily analysed and certified. Finally, one crashed partition which could use a shared I/O is unable to affect the I/O resource utilization of the other cores, as long as the I/O actor partition works well and the I/O message-based requests are well validated. Tasks with different SILs are enabled to access the I/O without hindering the partitioning architecture, when the I/O actor is certified to the highest SIL of these tasks.
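The following sketch illustrates a possible shape of such a request message and of the per-partition quota check in the actor's waiting queue; the structure names, field sizes and limits are illustrative assumptions, not the actual HARTEX data structures.

```cpp
#include <cstdint>

struct io_request
{
    uint8_t partition_id;   // requesting partition (validated by the message manager)
    uint8_t service_id;     // which abstract I/O operation is requested
    uint8_t param_len;
    uint8_t params[16];     // value parameters of a configurable (bounded) size
};

struct io_waiting_queue
{
    static const int MAX_PENDING = 32;
    io_request slots[MAX_PENDING];
    int        count = 0;
    uint8_t    used_this_frame[8] = {0};                     // requests issued per partition
    uint8_t    limit_per_frame[8] = {4, 4, 2, 2, 1, 1, 1, 1}; // fixed at integration time

    bool enqueue(const io_request& r)
    {
        if (r.partition_id >= 8 ||
            used_this_frame[r.partition_id] >= limit_per_frame[r.partition_id] ||
            count >= MAX_PENDING)
            return false;                 // rejected and reported as a violation
        ++used_this_frame[r.partition_id];
        slots[count++] = r;
        return true;
    }
};
```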
Fig. 6. A safety-related module for a frequency converter
To achieve the server-client model of resource sharing, each I/O resource needs to be abstracted into an I/O actor model, which provides clients with all the operation functions. These functions are invoked by the resource manager according to the validated requests. E.g. a monitor can be simply abstracted into four functions: a Monitor_config() function that initializes and configures the monitor, a Monitor_on() function that turns on the monitor, a Monitor_off() function that turns off the monitor, and a Monitor_display() function that displays values on the monitor. Each function can get value parameters of a configurable size from the message-based request. All the functions corresponding to the different operations have to take the current I/O configuration and state into consideration before executing their desired operations.
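A sketch of the corresponding actor-side dispatch, reusing the io_request structure from the previous sketch, is shown below; the service identifiers and function signatures are illustrative, and the operation functions are only declared since their bodies would access the actual monitor hardware.

```cpp
#include <cstdint>

enum monitor_service : uint8_t
{
    MONITOR_CONFIG  = 0,
    MONITOR_ON      = 1,
    MONITOR_OFF     = 2,
    MONITOR_DISPLAY = 3
};

// each function checks the current I/O configuration and state before acting
void monitor_config (const uint8_t* params, uint8_t len);
void monitor_on     (void);
void monitor_off    (void);
void monitor_display(const uint8_t* params, uint8_t len);

void io_actor_dispatch(const io_request& r)
{
    switch (r.service_id)
    {
        case MONITOR_CONFIG:  monitor_config (r.params, r.param_len); break;
        case MONITOR_ON:      monitor_on();                           break;
        case MONITOR_OFF:     monitor_off();                          break;
        case MONITOR_DISPLAY: monitor_display(r.params, r.param_len); break;
        default: /* unknown service: report to the Violation Manager */ break;
    }
    // an optional result message is then returned to the requesting partition
}
```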
A case study: a safety-related module for a frequency converter
A typical safety-critical application from the industrial domain [START_REF] Berthing | A taxonomy for modelling safety related architectures in compliance with functional safety requirements[END_REF] is a safety-related module for a frequency converter used to control the speed of an electric motor, e.g. driving a rotating blade in a manufacturing machine. A failure to stop the motor safely results in harm to people or equipment. Therefore, a safety-related module implementing a safe stop is highly required. In Fig. 6, the inputs of the safety-related module are a safe field bus (PROFISafe) and the local emergency stop button. In addition, a reset switch is used to recover the module into an initial state. According to standard IEC 61800-5-2, a safe stop can be ensured by the two functions Safe Torque Off and Safe Stop 1.
The safety function Safe Torque Off is achieved through the safety-related interface terminal 37, which removes the power from the electronics so that the motor coasts. Safe Stop 1 requests an internal function of the frequency converter to ramp down the speed of the motor via terminal 27. Both functions work in conjunction to achieve a safe stop.
To achieve SIL 3 according to IEC 61508, a 1oo2D structure [START_REF]Functional safety of electrical/electronic/programmable electronic safety related systems[END_REF] is selected to obtain a hardware fault tolerance of 1; it has two redundant isolated channels with mutual diagnostics. An initial model of the safety-related module at a high abstraction level is proposed in Fig. 7. The model only takes care of the high-level components, their implementation in hardware or software, and their interaction. It is represented intuitively by the extended safety architecture taxonomy [START_REF] Berthing | A taxonomy for modelling safety related architectures in compliance with functional safety requirements[END_REF]. Black color represents components implemented in hardware and grey color components implemented in software. A solid line represents safety-related components, a dotted line represents diagnostics components, and a dashed line represents non-safety-related components.
Fig. 7. A high-level abstraction model of a safety-related module for a frequency converter on a multi-core chip
As illustrated in Fig. 7, only one multi-core chip comprising three isolated cores is used to implement the two safety-related channels and a non-safety-related channel, respectively. Core 1 performs one safety-related channel and diagnostics, and similarly Core 2 performs the redundant safety-related channel and diagnostics. Core 3 executes the non-safety-related reset handler as well as the non-safety-related gateway. "A" in the grey dotted circles on the left are actuators that monitor whether their own emergency switches are in the operational state or not, and "A" in the black line circles on the right are actuators that react to the external environment. Here we focus on the partitions on Core 1 as an example, aiming at investigating the sharing of the output port in the partitioning architecture. The safety-related channel (a component) executes functional operations and the diagnostics component monitors the channels, sensors and actuators, which naturally constitutes two partitions. Both partitions share the redundant output 1 but have to be isolated. A fault taking place in the safe channel partition must not be propagated to the diagnostics partition, to avoid the fault affecting the judgement of the diagnostics partition and subsequently possibly going undetected.
In Fig. 8, three partitions should be implemented upon Core 1: the Safe Channel partition, the Diagnostics partition and the I/O partition. The Safe Channel partition performs the safety-related functions and sends safe messages to the Diagnostics and I/O partitions. The Diagnostics partition checks the correctness of the safe channel output, as well as the states of its objective sensor and actuator. If any fault takes place, the Diagnostics partition sends safe messages to the I/O partition to act on the actuator, aiming at switching the system into a safe state in case of the fault. All the possible operations on output 1 are abstracted into a well-defined model including the functional operations and the handlers for the different faults. This architecture to a great extent simplifies the development and certification of this case study.
Related work
Exclusive shared I/O operations are usually executed under the protection of mutual exclusion management, which is widely used in real-time operating systems. However, the management of shared I/Os has to take into account the isolation in a partitioning architecture that comprises mixed-criticality applications. The traditional I/O sharing approach of Time Division Multiple Access (TDMA) to I/Os cannot fulfil the isolation requirements between partitions since it only solves temporal isolation. I/O virtualization is a new approach that ensures the division of I/O, where each virtual machine (partition) with its own system image instance can operate on I/Os independently. In the embedded system world, the XtratuM hypervisor is in charge of providing virtualization services to partitions, and all I/Os are virtualized as a secure I/O partition which can receive and handle I/O operation requests from the other partitions [START_REF] Masmano | Io virtualisation in a partitioned system[END_REF]. This is very similar to our separation kernel but still a little different. The XtratuM hypervisor is a virtualization layer that is oriented towards high-performance applications compared to the HARTEX multikernel. The partition instance of XtratuM can be a bare-machine application, a real-time operating system or a general-purpose operating system. The HARTEX multikernel focuses on fine-grained systems and its partition instance is a set of basic executable tasks. Therefore, the HARTEX multikernel leads to a small code size (4300 LOC) and a simple architecture, and thus has a lower certification cost. It has advantages when it comes to small applications. Additionally, the HARTEX multikernel supports a multi-core architecture with safe communication. A strongly partitioned real-time system in [START_REF] Shah | Sharing i/o in strongly partitioned real-time systems[END_REF] proposes a Publish-Subscribe architecture in a microkernel for I/O sharing. Each partition has a Pseudo-Device Driver to access the I/Os by sending requests to the microkernel. The microkernel layer has device queues to buffer requests, physical device drivers and a device scheduler to handle the requests. This gives the system a high I/O bandwidth since I/Os can be operated on from all authorized partitions. However, the approach results in poor portability and a higher certification cost, due to the fact that, besides the more complicated design, the code size of the kernel increases since the I/O drivers are added into the kernel and have to be certified together with the kernel.
When the kernel is moved to another platform, the entire kernel has to be certified again together with the drivers of the new platform, which is required by the safety standard IEC 61508 and thus leads to poor portability.
Conclusion
This paper targets the simplification and reduced certification cost of mixed-criticality applications on a multi-core platform in the context of I/O sharing. This is a part of our deliverables for the RECOMP project. Following the guidelines of IEC 61508, this paper contributes a set of requirements to enable I/O sharing in a mixed-criticality system. Based on these requirements, a simplified partitioning multi-core architecture has been proposed with several advantages such as physical isolation between inter-core partitions, easy development and reduced certification. Furthermore, the safe message-based communication in the kernel can guarantee, with different levels of dependability in the kernel space, that the requests for accessing I/Os in the I/O partition are safe. The simple I/O sharing approach has been fully explored to support mixed-criticality partitions without breaking the partitioning architecture. Therefore, a mixed-criticality system of isolated partitions that have to share I/Os can be allocated onto one platform, the partitions can be certified individually according to their own SILs, and consequently the total certification cost is reduced. Since it takes the overall hardware and software architecture into consideration, our I/O sharing approach in the context of partitioning systems is simple, flexible and certifiable.
Fig. 1. Preempted I/O access by partitions
Fig. 2. System architecture
Kernel design for I/O sharing
I/O sharing in the HARTEX multikernel enforces the existence of a safe message passing manager and a certifiable I/O resource manager. A message across partitions has to be validated to a level of dependability by applying a set of safety measures.
Fig. 4. Producer-consumer communication in the HARTEX multikernel
Fig. 8. I/O sharing in a safety-related module for a safe stop
Table 1. Safety communication measures in the HARTEX
Exploitation of interconnection hardware redundancy: Cyclic Redundancy Check; Message via dual channels
Exploitation of time redundancy: Check sum; Double-sending of a message
Other measures: Message Acceptance Filtering; Message sequence index; Timed Message Scheduling
Table 2. Communication faults are covered by specific safety measures
Failure types / Hardware redundancy / Time redundancy / Message filtering / Sequence index / Time-triggered scheduling
Repetitions √
Deletion √
Insertion √
Resequence √
Corruption √ √
Delay √
Masquerade • • •
http://www.kone.com
http://www.recomp-project.eu | 28,886 | [
"1001404",
"1001405"
] | [
"50786",
"50786"
] |
01466690 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466690/file/978-3-642-38853-8_31_Chapter.pdf | Sunil Malipatlolla
email: [email protected]
Ingo Stierand
Evaluating the Impact of Integrating a Security Module on the Real-Time Properties of a System
Keywords: Security, FPGA, Interfaces, Real-Time Systems
With a rise in the deployment of electronics in today's systems especially in automobiles, the task of securing them against various attacks has become a major challenge. In particular, the most vulnerable points are: (i) communication paths between the Electronic Control Units (ECUs) and between sensors & actuators and the ECU, (ii) remote software updates from the manufacturer and the in-field system. However, when including additional mechanisms to secure such systems, especially real-time systems, there will be a major impact on the realtime properties and on the overall performance of the system. Therefore, the goal of this work is to deploy a minimal security module in a target real-time system and to analyze its impact on the aforementioned properties of the system, while achieving the goals of secure communication and authentic system update. From this analysis, it has been observed that, with the integration of such a security module into the ECU, the response time of the system is strictly dependent on the utilized communication interface between the ECU processor and the security module. The analysis is performed utilizing the security module operating at different frequencies and communicating over two different interfaces i.e., Low-Pin-Count (LPC) bus and Memory-Mapped I/O (MMIO) method.
Introduction and Related Work
Real-time applications such as railway signaling control and car-to-car communication are becoming increasingly important. However, such systems require a high quality of security to assure the confidentiality and integrity of the information during their operation. For example, in a railway signaling control system, the control center must be provided with data about the position and speed of the approaching train so that a command specifying which track to follow may be sent back. In such a case, it must be assured that the messages exchanged between the two parties are not intercepted and altered by a malicious entity, to avoid possible accidents. Additionally, it is mandatory to confirm that the incoming data to the control center is in fact from the approaching train and not from an adversary. Similar requirements exist in a car-to-car communication system. Thus, there is a need to integrate a security mechanism inside such systems to avoid possible attacks on them. Furthermore, the systems considered above are highly safety-relevant, and thus the real-time properties typically play an important role in them. In general, the goal of a real-time system is to satisfy its real-time properties, such as meeting deadlines, in addition to guaranteeing functional correctness. This raises the question of what the impact on these properties of such a system will be when including, for example, security as an additional feature. To understand this, we integrate a minimal security module in the target real-time system and evaluate its impact on the real-time properties as a part of this work.
There exists some work in the literature which addresses the issue of including security mechanisms inside real-time applications. For example, Lin et al. [START_REF] Lin | Static security optimization for real-time systems[END_REF] have extended the real-time scheduling algorithm Earliest Deadline First (EDF) with security awareness features to achieve a static, schedulability-driven security optimization in a real-time system. For this, they extended the EDF algorithm with a group-based security model to optimize the combined security value of the selected security services while guaranteeing the schedulability of the real-time tasks. In a group-based security model, security services are partitioned into several groups depending on the security type and their individual quality, so that a combination of both results in a better quality of security. However, this approach had a major challenge: how to define a quality value for a certain security service and how to compute the overhead due to those services. In another work, Wolf et al. have designed and implemented a vehicular security module, which provides trusted computing [START_REF]Trusted Platform Module (TPM) specifications[END_REF] like features in a car [START_REF] Wolf | Design, implementation, and evaluation of a vehicular hardware security module[END_REF]. This security module protects the in-vehicle ECUs and the communication between them, and is designed for a specific use in e-safety applications such as emergency braking and emergency call. Further, the authors have given technical details about the hardware design and prototypical implementation of the security module, in addition to comparing its performance with existing similar security modules on the market. Additionally, the automotive industry consortium AUTOSAR specified a service, referred to as the Crypto Service Manager (CSM), which provides cryptographic functionality in an automobile, based on a software library or on a hardware module [START_REF]Autosar Organization: Specification of Crypto Service Manager[END_REF]. Though the CSM is a service based on a software library, it may be supported by cryptographic algorithms at the hardware level for securing the applications executing on the application layer. However, to the best of our knowledge, none of the aforementioned approaches addresses the issue of analyzing the impact on the real-time properties of a system when integrating a hardware security module inside it.
System Specifications
System Model
The system under consideration is depicted in Figure 1. It comprises a sensor, an actuator, an electronic control unit (ECU) with a processor & a security module, and an update server. The system realizes a simple real-time control application, where sensor data are processed by the control application in order to operate the plant via an actuator. The concrete control application is not of interest in the context of this paper. It might represent the engine control of a car, or a driver assistance system such as an automatic braking system (ABS).
The scenario depicted in Figure 1 consists of the following flow: the Sensor periodically delivers data from the plant over the bus (1), in encrypted form to prevent its interception and cloning by an attacker. The data is received by the input communication task ComIn, which is part of the operating system (OS). Each time the input communication task receives a packet, it calls the security service (SecSrv), which is also part of the OS, for decryption of the packet (2). The security service provides the hardware abstraction for security operations and schedules service calls. The decryption call from the communication task is forwarded to the security module (3), which processes the packet data. The cryptographic operations of the security module, modeled by Dec, Enc, and Auth, are realized as hardware blocks. The decrypted data is sent back to the security service, which in turn returns it to ComIn. Now the data is ready for transmission to the application (4), which is modeled by a single task App. The application task is activated by the incoming packet and processes the sensor data. The controller implementation of the task calculates the respective actuator data and sends it to the communication task ComOut (5) for transmission to the Actuator. However, before sending the data to the Actuator, the communication task again calls the security service (6), which in turn accesses the security module for data encryption (7). After Enc has encrypted the data, it is sent back to the communication task ComOut via SecSrv, which delivers the packet to the Actuator (8). It is required that the control application finishes the described flow within a single control period, i.e., before the next sensor data arrives.
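To make the described flow more concrete, the following C sketch walks through one control period as a plain sequence of the steps above. It is only an illustration of the task interaction: the function names and the 128-bit buffers are assumptions made for this example, and the placeholder Dec/Enc functions merely copy data instead of performing real cryptography.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Placeholder stand-ins for the Dec/Enc hardware blocks of the security
 * module; in the real system they are FPGA blocks reached through SecSrv. */
static void sec_module_dec(const uint8_t in[16], uint8_t out[16]) { memcpy(out, in, 16); }
static void sec_module_enc(const uint8_t in[16], uint8_t out[16]) { memcpy(out, in, 16); }

/* One control period: ComIn -> SecSrv(Dec) -> App -> SecSrv(Enc) -> ComOut. */
static void control_period(const uint8_t sensor_pkt[16], uint8_t actuator_pkt[16])
{
    uint8_t plain_in[16], plain_out[16];

    /* steps (1)-(3): ComIn receives the encrypted packet, SecSrv forwards it to Dec */
    sec_module_dec(sensor_pkt, plain_in);

    /* steps (4)-(5): App computes the actuator command from the sensor data (toy controller) */
    for (int i = 0; i < 16; i++)
        plain_out[i] = (uint8_t)(plain_in[i] ^ 0xA5);

    /* steps (6)-(8): ComOut asks SecSrv/Enc to encrypt the command and sends it to the Actuator */
    sec_module_enc(plain_out, actuator_pkt);
}

int main(void)
{
    uint8_t sensor_pkt[16] = {0}, actuator_pkt[16];
    control_period(sensor_pkt, actuator_pkt);   /* invoked every 100 us in the evaluated set-up */
    printf("first actuator byte: 0x%02X\n", actuator_pkt[0]);
    return 0;
}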
Additionally, the system implements a function for software updates. To update the system with new software, the UpdateServer sends the data to the Upd task (a) via a communication medium to the outside world (e.g., over the Internet). The received data must be verified for authenticity and decrypted before being loaded into the system. For this, Upd forwards the data to the SecSrv, which utilizes the Auth block of the security module (b). Only after successful authentication is the data decrypted and loaded into the system; otherwise it is rejected.
The security module, integrated into the ECU, is a hardware module comprising cryptographic hardware blocks for performing operations such as encryption, decryption, and authenticity verification. Though these operations are denoted as tasks in the system view, they are implemented as hardware blocks. Furthermore, a controller (a state machine), a memory block, and an I/O interface are included inside the security module (not depicted in Figure 1). Whereas the controller executes the commands for the aforementioned cryptographic operations, the memory block acts as a data buffer. The commands arrive as requests from the SecSrv on the processor, and the responses from the security module are sent back. In essence, the SecSrv acts as a software abstraction of the hardware security module, providing the required cryptographic operations to the tasks executing on the processor.
The security module is equipped with particular support for the update functionality, i.e., authenticity verification. In normal operation, the update data is temporarily stored in the memory block of the security module in order to compare the Hash-based Message Authentication Code (HMAC) value attached by the update server with the HMAC value computed in the security module, before decrypting and loading the data. This kind of operation, where all update data is stored in the memory of the security module and authentication is applied to the data at once, is however not appropriate in the context of the considered real-time application. This is because, while the security module is performing authentication, other operations such as decryption of incoming sensor data or encryption of outgoing actuator data are blocked. Consequently, large update data can block the device for a long time span, resulting in a violation of the delay allowed by the control algorithm.
To avoid such a situation in the considered scenario, the HMAC for authenticity verification is calculated in two steps. In the first step, a checksum of the update data is calculated using a public hash algorithm such as the Secure Hash Algorithm (SHA-1). In the second step, the HMAC is computed on this checksum. The Upd task thus calculates and sends only the checksum of the data, instead of the whole data itself, to the security module via the SecSrv. Since the update process is not time critical, the Upd task is executed with low priority, preventing any undesired interference with the real-time application. Therefore, only the interference caused by authenticating the single checksum has to be considered. However, since the update data is encrypted, the Upd task still needs to access the security module for its decryption. To this end, the data is split into packets and decrypted piece-wise. The impact of these operations has to be considered in the real-time analysis.
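The two-step verification described above can be sketched as follows. This is a simplified, self-contained illustration only: toy_digest stands in for a public hash such as SHA-1, toy_hmac for the keyed HMAC computed inside the security module, and the key, data sizes, and function names are assumptions made for this example.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define PKT 16  /* assumed packet size handled by the security module (128 bit) */

/* Stand-in for a public hash such as SHA-1 (step 1, done by the Upd task). */
static uint32_t toy_digest(const uint8_t *data, size_t len)
{
    uint32_t h = 0x811C9DC5u;
    for (size_t i = 0; i < len; i++) h = (h ^ data[i]) * 16777619u;
    return h;
}

/* Stand-in for the keyed HMAC computed inside the security module (step 2). */
static uint32_t toy_hmac(uint32_t digest, uint32_t key) { return digest ^ key; }

int main(void)
{
    uint8_t update_data[64] = {0};                 /* toy encrypted firmware image            */
    const uint32_t key = 0xC0FFEEu;                /* shared secret (assumed for the example) */
    /* In reality this MAC arrives attached to the update; here we compute it for the demo.  */
    uint32_t mac_from_server = toy_hmac(toy_digest(update_data, sizeof update_data), key);

    /* Upd task: hash the whole image, send only the digest for authentication. */
    uint32_t digest = toy_digest(update_data, sizeof update_data);
    if (toy_hmac(digest, key) != mac_from_server) {   /* check done by the Auth block */
        puts("update rejected");
        return 1;
    }
    /* Only after successful authentication: decrypt piece-wise, packet by packet,
     * so that pending Dec/Enc requests of the control application are not blocked for long. */
    for (size_t off = 0; off < sizeof update_data; off += PKT) {
        /* secsrv_decrypt(&update_data[off]);  -- one 128-bit request per packet */
    }
    puts("update authenticated and decrypted");
    return 0;
}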
Another possibility to verify the authenticity of the update data would be to compute the HMAC iteratively within the security module. For this, the update data from the Upd task is sent to the security module block by block via the SecSrv. The HMAC computed over the received block is stored inside the memory block of the security module. Before sending the next block, the SecSrv checks for any pending encryption or decryption requests from other, higher-priority tasks. If such a request exists, it is executed before the next block of data is sent for HMAC computation. In order to handle this procedure, the security module would have to be equipped with an additional hardware block performing the scheduling of cryptographic operations. Furthermore, the communication interface between the SecSrv and the security module would have to be modified. Though this method is currently not supported by our security module, it is definitely a desirable feature.
The goal of the security module is to provide a secure communication path between the sensor and the actuator and to provide authentic updates of the system. For this, the cryptographic blocks of the security module utilize standardized algorithms to provide cryptographic operations such as encryption, decryption, hash computation, and HMAC generation and verification. All the aforementioned cryptographic operations performed by the security module in the considered system are based on a single block cipher, the Advanced Encryption Standard (AES) [START_REF]Advanced Encryption Standard[END_REF]. AES is a symmetric key algorithm, i.e., it utilizes a single secret key for both encryption and decryption. In addition to being standardized by the National Institute of Standards and Technology (NIST), AES-based security mechanisms consume comparatively few computational resources, which is essential in resource-constrained embedded systems.
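As an illustration of the primitive that the hardware blocks implement, the short sketch below encrypts and decrypts one 128-bit block with AES-128 in software. It assumes OpenSSL's legacy low-level AES API (<openssl/aes.h>, link with -lcrypto) is available; the key and plaintext values are arbitrary examples, and in the real system this operation is performed by the FPGA blocks rather than in software.

#include <openssl/aes.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    /* 128-bit key and one 128-bit plaintext block (arbitrary example values). */
    const unsigned char key[16]   = "0123456789abcdef";
    const unsigned char plain[16] = "sensor-packet-01";
    unsigned char cipher[16], decrypted[16];

    AES_KEY enc_key, dec_key;
    AES_set_encrypt_key(key, 128, &enc_key);
    AES_set_decrypt_key(key, 128, &dec_key);

    AES_encrypt(plain, cipher, &enc_key);      /* what the Enc block does per 128-bit block */
    AES_decrypt(cipher, decrypted, &dec_key);  /* what the Dec block does */

    printf("round-trip ok: %d\n", memcmp(plain, decrypted, 16) == 0);
    return 0;
}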
Adversarial Model
To describe all possible attack points in the considered system, an adversarial model is formulated as depicted in Figure 2 (Adversarial Model). The model highlights all the components (with a simplified ECU block) and the corresponding internal and external communication paths (i.e., numbered circles) of the original system (cf. Figure 1). The adversary considered in the model is an active eavesdropper (cf. the Dolev-Yao model [START_REF] Dolev | On the security of public key protocols[END_REF]), i.e., someone who first taps the communication line to obtain messages and then tries everything in order to discover the plain text. In particular, he is able to perform different types of attacks, such as classical cryptanalysis and implementation attacks, as defined in the taxonomy of cryptographic attacks by Popp [START_REF] Popp | An Introduction to Implementation Attacks and Countermeasures[END_REF]. While classical cryptanalysis attacks include cloning by interception, replay, and man-in-the-middle attacks, the implementation attacks include side-channel analysis, reverse engineering, and others.
For our analysis, we assume that the attacker is only able to perform classical cryptanalytic attacks on the external communication links (indicated by the thick arrows coming from the adversary), i.e., from the sensor to the ECU, from the ECU to the actuator, and from the update server to the ECU. Specifically, in a cloning-by-interception attack, the adversary is capable of reading the packets being sent to the ECU and storing them for use in a replay attack. In a man-in-the-middle attack, the adversary can either pose as the ECU to authenticate himself to the update server or vice versa. In the former case, he would learn the content of the update data; in the latter, he could update the ECU with malicious data to destroy the system. To protect the system against classical cryptanalytic attacks, strong encryption and authentication techniques need to be utilized. With reference to this, the security module considered here provides confidentiality, integrity, and authenticity, which counter these attacks. We rule out the possibility of the attacker eavesdropping on the ECU's internal communication (indicated by the dotted arrow coming from the adversary), because such an attack implies that the attacker has physical access to the ECU and thus controls the running OS and the tasks themselves.
System Analysis
To analyze the impact of including the security feature on the real-time properties of the system, we consider three different test cases, as detailed in the sequel. A brief description of the system set-up and the utilized tools is given before delving into the results obtained with the test cases.
The control application is executed with a frequency of 10 kHz, i.e., the Sensor sends a data packet to the ECU every 100 µs. The update service is modeled as a sporadic application with typically very large time spans between individual invocations. All tasks on the processor are scheduled by a fixed-priority scheduling scheme with preemption, where lower-priority tasks can be interrupted by higher-priority tasks. Furthermore, all tasks belonging to the OS (depicted by the dark gray shaded area of Figure 1) are given higher priority than the application tasks. The priorities in descending order are ComIn, ComOut, SecSrv, App, and Upd. The operations of the security module are not scheduled, and the module can be considered a shared resource. The SecSrv task processes incoming security operation requests for the security module in first-in-first-out (FIFO) order.
For all test cases, the processor of the ECU is a 50 MHz processor (20 ns cycle time) that is equipped with internal memory for storing data and code. Internal memory is accessed by reading and writing 16 Bit words within a single processor cycle. Communication between the ECU, the sensor, and the actuator is realized by a controller area network (CAN) bus. For simplicity, we assume that all data are transferred between the processor and the CAN bus interface via I/O registers of 16 Bit width, with a delay of four processor cycles per access. Communication over CAN is restricted to 64 Bit of user data, and we assume that this is also the size of the packets transmitted between the ECU and the sensor/actuator. In order to transmit the 128 Bit of data required by the operation of the security module, each transmission consists of two packets. Receiving and transmitting data thus requires a 16 Byte data transfer between the CAN bus controller and the processor, summing up to 64 processor cycles (1.28 µs). Storing the packet into the OS-internal memory costs an additional 320 ns. Bus latencies are not further specified in our setting, as we concentrate on the timing of the ECU application.
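As a sanity check of these figures, the small program below recomputes the communication cost from the stated parameters (20 ns cycle time, 16 Bit I/O registers, four cycles per access, 128 Bit payloads). It merely reproduces the arithmetic of the paragraph above and is not part of the analyzed system.

#include <stdio.h>

int main(void)
{
    const double cycle_ns      = 20.0;   /* 50 MHz processor              */
    const int    word_bits     = 16;     /* I/O register width            */
    const int    cycles_per_io = 4;      /* delay per register access     */
    const int    payload_bits  = 128;    /* two 64-bit CAN packets        */

    int words_per_dir  = payload_bits / word_bits;        /* 8 words      */
    int cycles_per_dir = words_per_dir * cycles_per_io;   /* 32 cycles    */
    int cycles_rx_tx   = 2 * cycles_per_dir;              /* 64 cycles    */

    printf("receive+transmit: %d cycles = %.2f us\n",
           cycles_rx_tx, cycles_rx_tx * cycle_ns / 1000.0);  /* 1.28 us   */
    printf("store packet:     %d cycles = %.0f ns\n",
           16, 16 * cycle_ns);                               /* 320 ns    */
    return 0;
}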
The AES algorithm utilized inside the security module operates on 128 Bit blocks of input data at a time. Thus, all blocks of the security module (Dec, Enc, and Auth) operate on the same data size because they utilize the same algorithm. The security module is implemented as a proof-of-concept on a Xilinx Virtex-5 Field Programmable Gate Array (FPGA) [START_REF] Inc | X.: Xilinx[END_REF] platform. The individual cryptographic blocks of the security module are simulated and synthesized utilizing the device-specific tools. With an operating frequency of 358 MHz for the FPGA, the execution time for each of the encryption, decryption, and authentication operations is determined (by simulation) to be 46 ns for a 128 Bit block of input data. The timing parameters for other operating frequencies of the security module are obtained by simple scaling. The utilized FPGA device supports storage in the form of block RAM of 36 kb size, which is large enough to be used as the memory block of the security module.
We apply timing analysis in order to find the worst-case end-to-end response time of the control application, starting from the reception of sensor data up to the sending of actuator data (shown in Figure 3). Various static scheduling analysis tools are available for this task (e.g. [START_REF] Hamann | A framework for modular analysis and exploration of heterogeneous embedded systems[END_REF][START_REF] Anssi | chronVAL/chronSIM: A Tool Suite for Timing Verification of Automotive Applications[END_REF]). The system, however, is sufficiently small for a more precise analysis based on real-time model-checking [START_REF] Dierks | Efficient Model-Checking for Real-Time Task Networks[END_REF]. To this end, the system is translated into a Uppaal model [START_REF] Behrmann | A Tutorial on Uppaal 2004-11-17[END_REF]. The worst-case response time is obtained by a binary search on the value range of the respective model variable.
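The binary search over the response-time bound can be sketched as follows. The function holds_within stands for a query to the model checker ("does the end-to-end response time always stay below the bound?"); here it is only a placeholder predicate with an assumed value, since the actual check is a Uppaal model-checking run.

#include <stdio.h>

/* Placeholder for the Uppaal query "response time <= bound always holds".
 * In the real flow this is a model-checking run, not a C function. */
static int holds_within(int bound_ns)
{
    const int assumed_wcrt_ns = 56700;       /* e.g. Scenario 3 with 50 MHz MMIO */
    return bound_ns >= assumed_wcrt_ns;
}

int main(void)
{
    int lo = 0, hi = 100000;                 /* search range: 0..100 us (one control period) */
    while (lo < hi) {                        /* narrow down the smallest bound that holds    */
        int mid = lo + (hi - lo) / 2;
        if (holds_within(mid)) hi = mid;     /* property holds: tighten the upper bound      */
        else                   lo = mid + 1; /* counterexample: the WCRT is larger           */
    }
    printf("worst-case response time: %d ns\n", hi);
    return 0;
}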
Target system without any security features
In the first scenario, which is shown in Figure 3, the communication tasks send the data directly to the control application and to the communication bus, without encryption or decryption. The resulting end-to-end response time is shown in column "Scenario 1" of Table 1. As expected, the analysis shows that the execution times of the involved tasks are simply summed up, since no further interference occurs in this simple setting. The situation would indeed be different if multiple application tasks were executed on the same ECU, which would cause additional interference.
Target system with secure communication feature
In the second scenario, only the secure communication feature between the ECU and the sensor and actuator is enabled. The update service is switched off. This implies that the security module has to perform only encryption and decryption, but no authentication. For this scenario, we assume that the security module communicates with the processor via a Low-Pin-Count (LPC) bus [START_REF]Intel: Low Pin Count (LPC) Interface Specification[END_REF]. LPC is a 4 Bit wide serial bus defined with a clock rate of 33 MHz. According to the specification [START_REF]Intel: Low Pin Count (LPC) Interface Specification[END_REF], the transfer of 128 Bit of data plus a 16 Bit command requires about 1.46 µs when the bus operates with typical timing parameters. Each invocation of the SecSrv involves a transfer of the data to or from the security module, plus the execution time of the task of 80 ns for internal copying operations. The security module operating at a clock rate of 50 MHz results in an individual execution time of 358 ns for encryption and decryption (from simulation results). With this set-up, the timing analysis shows (column "Scenario 2" of Table 1) that enabling only the secure communication feature results in a significant rise in the response time of the system, i.e., about 13% more than in the previous scenario.
Target system with secure communication and secure update features
For the third scenario, both secure communication and authentic update features are enabled. This scenario has been analyzed with three different sets of timing parameters.
The first setting assumes the same parameters as Scenario 2. Hence the security module operates at a clock rate of 50 MHz and communicates with the processor via LPC. The results show a further rise in the end-to-end response time of the application, because the update service may call the authentication service, which adds execution time on top of the pending encryption and decryption operations of the security module. The end-to-end response time is thus around 16% higher than in the first scenario.
For the second setting, we assume that the security module communicates with the processor via a Memory-Mapped I/O (MMIO) interface. Memory transfers are assumed to operate with 16 Bit words, and a delay of four processor cycles, resulting in transfer times of 80 ns. A transfer between the processor and the security module now sums up to 400 ns.
The final setting works with a very fast security module and memory transfers with a delay of only two cycles. The security module operates at 358 MHz, which results in an execution time of 46 ns for all cryptographic operations.
Surprisingly, in the second and the final setting the end-to-end response time is reduced, being only around 6.5% and 3.5% higher than in the first scenario, respectively. This implies that the type of communication interface between the processor and the security module has a significant impact on the resulting overall response time of the system.
In all settings, the software update function is assumed to perform the operations discussed in Section 2. After calculating the checksum of the update data, the task Upd sends an authentication request to the SecSrv. When the authentication is successful (which is always the case in the considered scenario), the task successively sends decryption requests for each packet of the update data, waiting for the reply before sending a new request. The results shown in the table represent the worst-case behavior obtained with various values of the execution time (between 10 ns and 1 µs) needed by Upd between successive decryption requests. The impact of the operation of Upd remains rather small, which can be explained by the fact that the task is executed with low priority. However, the selection of the execution times was not exhaustive and thus does not guarantee the absence of race conditions. To enforce a limited impact of the update function, the SecSrv should be modified to run with priority inheritance, where requests are executed with the same priority as the calling task. A more comprehensive analysis of this issue is the subject of future work.
In this work, a real-time system integrated with a security module is analyzed to determine the impact of the latter on the worst-case response time of the system. For this, different communication interfaces, such as the LPC bus and the MMIO method, are utilized between the security module and the control unit processor of the system for performing cryptographic operations such as encryption, decryption, and authentication. It is observed that the worst-case response time of the system is high for a slower interface (i.e., LPC) and decreases drastically for a faster interface (i.e., MMIO). Thus, when including a security mechanism in real-time systems, it is necessary to consider the type of communication interface being utilized. Though the target system considered here has a single ECU, one sensor, and one actuator, typical systems have multiple such components, which requires further investigation.
Fig. 1. System Scenario
Fig. 3. Scenario 1 w/o security feature
Table 1. Analysis Results

                   Scenario 1   Scenario 2   ------------ Scenario 3 ------------
Task                            50MHz LPC    50MHz LPC    50MHz MMIO   358MHz MMIO
App                50.0 µs      50.0 µs      50.0 µs      50.0 µs      50.0 µs
ComIn              1.6 µs       1.6 µs       1.6 µs       1.6 µs       1.6 µs
ComOut             1.6 µs       1.6 µs       1.6 µs       1.6 µs       1.6 µs
SecSrv             -            80 ns        80 ns        80 ns        80 ns
Comm. CPU/SM       -            1.46 µs      1.46 µs      400 ns       200 ns
Dec                -            358 ns       358 ns       358 ns       46 ns
Enc                -            358 ns       358 ns       358 ns       46 ns
Auth               -            -            358 ns       358 ns       46 ns
Response Time      53.2 µs      60.1 µs      62.0 µs      56.7 µs      54.96 µs
This work was supported by the Federal Ministry for Education and Research (BMBF) under support code 01IS11035M, 'Automotive, Railway and Avionics Multicore Systems (ARAMiS) | 27,396 | [
"1001406",
"1001375"
] | [
"303555",
"146984"
] |
01466692 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466692/file/978-3-642-38853-8_4_Chapter.pdf | Berkenbrock
Marco A Wehrmeister
email: [email protected]
Gian R Berkenbrock
Automatic Execution of Test Cases on UML Models of Embedded Systems
Keywords: Model-Driven Engineering, UML, testing, test cases execution, simulation
Introduction
The design of embedded systems is a very complex task. The engineering team must cope with many distinct requirements and constraints (including timing constraints), while the project schedule shrinks due to time-to-market pressure. Moreover, modern embedded and real-time systems are expected to deliver an increasing number of services, which directly affects the complexity of their design. As the system size increases in terms of the number of functions, the number of potential errors or bugs also increases. As a consequence, additional resources (e.g. extra time, money and people) are needed to fix such problems before delivering the final system.
A common approach to deal with system complexity is to hierarchically decompose a complex problem into smaller sub-problems, increasing the abstraction level [START_REF] Parnas | On the criteria to be used in decomposing systems into modules[END_REF]. Model-Driven Engineering (MDE) [START_REF] Schmidt | Guest editor's introduction: Model-driven engineering[END_REF] has been seen as a suitable approach to cope with the design complexity of embedded systems. It advocates that specifications with higher abstraction levels (i.e. models) are the main artifacts of the design. These models are successively refined until the system implementation is achieved, using components (hardware and software) available in a target execution platform. Specifically, the engineers specify a Platform Independent Model (PIM) that is refined and mapped into a Platform Specific Model (PSM), from which source code can be generated automatically. CASE tools help with such a transformation process.
As one can infer, the quality and correctness of the generated code is directly related to the information provided by the created models and their transformations. Thus, as the system implementation relies strongly on the created models, an undetected error in any model is easily propagated to later design phases. An error introduced in an early design stage and detected only in an advanced stage leads to a higher repair cost. The total cost would be lower (from 10 to 100 times lower [START_REF] Broekman | Testing Embedded Software[END_REF]) if the error had been detected and fixed in the phase in which it was introduced. Taking into account that MDE approaches strongly rely on the PIM and its transformation to a PSM, it is very important to provide techniques to test and verify the produced models. Consequently, the automation of these tasks (e.g. the automatic execution of test cases on the system models) is required.
Aspect-oriented Model-Driven Engineering for Real-Time systems (AMoDE-RT) [START_REF] Wehrmeister | An aspect-oriented approach for dealing with non-functional requirements in a model-driven development of distributed embedded real-time systems[END_REF] [START_REF] Wehrmeister | Aspect-oriented model-driven engineering for embedded systems applied to automation systems[END_REF] has been successfully applied to the design of embedded and real-time systems. The engineers specify the system's structure and behavior using UML 3 models annotated with stereotypes of the MARTE profile 4 . In [START_REF] Wehrmeister | Support for early verification of embedded real-time systems through UML models simulation[END_REF], AMoDE-RT was extended to include a verification activity in the specification phase, in order to enable the engineers to simulate the execution of the behavior specified in the UML model. However, such an approach was not adequate, since considerable effort is required to create and run new tests.
This work extends that previous work by proposing the repeatable and automatic verification 5 of embedded and real-time systems based on their high-level specifications. The Automated Testing for UML (AT4U) approach proposes the automatic execution of test cases on the behavioral diagrams of a UML model. The set of test cases exercises parts of the system behavior (specified in a UML model) via simulation. The test results can be analyzed to check whether the system has behaved as expected. To support the proposed approach, the AT4U tool executes the set of test cases on the UML model automatically. Each test case describes a runtime scenario (on which the system is exercised) and the behavior selected to be tested. The Framework for UML Model Behavior Simulation (FUMBeS) [START_REF] Wehrmeister | Support for early verification of embedded real-time systems through UML models simulation[END_REF] simulates the indicated behavior, using the input arguments specified in the test case. The obtained result is compared with the expected result to indicate whether the test case succeeded or not.
It is important to highlight that this automated testing is already done in the specification phase, so that the UML model being created (or its parts) is verified as soon as possible, even if the model is still incomplete. The proposed approach has been validated with a real-world case study. The experiments show encouraging results on the use of AT4U and FUMBeS to simulate and test the system behavior in early design phases, without any concrete implementation.
This paper is organized as follows: section 2 discusses related work; section 3 provides an overview of the AT4U approach; section 4 presents the experiments performed to assess AT4U and their results; and finally, section 5 draws some conclusions and discusses future work.
Related Work
This section discusses some recent related work regarding Model-Based Testing (MBT) and test automation. In [START_REF] Pretschner | One evaluation of model-based testing and its automation[END_REF], the authors discuss MBT and its automation. Several characteristics are evaluated, e.g. the quality of MBT versus hand-crafted tests in terms of coverage and number of detected failures. Among other conclusions, the authors state that MBT is worth using, as it detects from two to six times more requirements errors.
In both [START_REF] Baker | Testing UML2.0 models using TTCN-3 and the UML2.0 testing profile[END_REF] and [START_REF] Zander | From u2tp models to executable tests with TTCN-3 -an approach to model driven testing[END_REF], UML models annotated with stereotypes of the UML Testing Profile (UTP) are used to generate tests in the TTCN-3 language (Testing and Test Control Notation, version 3) in order to perform black-box testing of software components. The work presented in [START_REF] Baker | Testing UML2.0 models using TTCN-3 and the UML2.0 testing profile[END_REF] uses sequence diagrams to generate the TTCN-3 code for the test case behavior. Additionally, test case configuration code (written in TTCN-3) is generated from composite structure diagrams. Similarly, in [START_REF] Zander | From u2tp models to executable tests with TTCN-3 -an approach to model driven testing[END_REF], TTCN-3 code is generated from sequence and/or activity diagrams decorated with UTP stereotypes. However, that approach uses MDE concepts and creates a mapping between the UML/UTP and TTCN-3 meta-models. A TTCN-3 model is obtained from the UML model by using a model-to-model transformation.
In [START_REF] Iyenghar | Towards model-based test automation for embedded systems using UML and UTP[END_REF], MBT is applied to Resource-Constrained Real-Time Embedded Systems (RC-RTES). UML models are annotated with stereotypes of the UTP to generate a test framework comprising a proxy test model, UTP artifacts (i.e. test drivers and test cases), and a communication interface. The test cases are executed directly on the RC-RTES; however, the produced results are mirrored to the design level. In that approach, the test cases are manually specified as sequence diagrams.
The automated generation of test cases is addressed in [START_REF] Javed | Automated generation of test cases using model-driven architecture[END_REF]. MDE techniques are used to generate test cases from sequence diagrams. The Eclipse Modeling Framework (EMF) was used to create two PIMs: the sequence of method calls (SMC) model and the xUnit model. By using the Tefkat engine, a model-to-model transformation translates an SMC model into an xUnit model. Thereafter, a model-to-text transformation combines the xUnit model, test data and code headers to generate test cases for a given target unit testing framework. The experiment conducted in [START_REF] Javed | Automated generation of test cases using model-driven architecture[END_REF] generated test cases for SUnit and JUnit.
Comparing the proposed verification approach with the cited works, the main difference is the possibility of executing the test cases directly on the high-level specification. This work proposes a PIM to represent test case information, like [START_REF] Javed | Automated generation of test cases using model-driven architecture[END_REF] and [START_REF] Zander | From u2tp models to executable tests with TTCN-3 -an approach to model driven testing[END_REF]. However, the proposed PIM seems to be more complete, since it represents both the test case definitions and the results produced during testing. The test cases are manually specified (as in [START_REF] Iyenghar | Towards model-based test automation for embedded systems using UML and UTP[END_REF]) in XML files rather than using UML diagrams as in [START_REF] Iyenghar | Towards model-based test automation for embedded systems using UML and UTP[END_REF], [START_REF] Baker | Testing UML2.0 models using TTCN-3 and the UML2.0 testing profile[END_REF] and [START_REF] Zander | From u2tp models to executable tests with TTCN-3 -an approach to model driven testing[END_REF]. Finally, to the best of our knowledge, AT4U is the first approach that allows the automated execution of test cases on UML models.
Automated Testing for UML models
The Automated Testing for UML (AT4U) approach has been created to aid engineers in verifying the behavior of embedded and real-time systems in earlier design stages. AMoDE-RT was extended to include a verification activity: the execution of a set of test cases upon the behavior described in the UML model. The proposed approach relies on two techniques: (i) the automatic execution of many test cases, including test case scenario setup, execution of selected system behaviors, and the evaluation of the produced results; and (ii) the execution of the specified behavior via simulation.
The proposed approach is based on the ideas of the family of code-driven testing frameworks known as xUnit [START_REF] Beck | Simple smalltalk testing[END_REF]. However, instead of executing the test cases on the system implementation, the test cases are executed on the UML model during the specification phase in an iterative fashion (i.e. a closed loop comprising specification and simulation/testing) until the specification is considered correct by the engineering team. For that, AT4U executes a set of test cases upon both individual elements (i.e. unit test) and groups of dependent elements (i.e. component test) that have been specified in the UML model. Further, as the execution of test cases is performed automatically by a software tool named the AT4U tool, the testing process can be repeated at every round of changes on the UML model, allowing regression testing. If inconsistencies are detected, the UML model can be fixed, and hence the problem is not propagated to the next stages of the design. An overview of the AT4U approach is depicted in figure 1.
Usually, high-level specifications such as UML models are independent of any implementation technology or execution platform. The use of a platform-specific testing technology is not desirable, since engineers would need to translate the high-level specification into code for the target execution platform before performing any kind of automated testing. In this situation, the testing results may be affected both by the errors introduced in the specification and by the errors of this translation process.
In order to allow testing automation for platform independent specifications, AT4U provides: (i) a platform independent description of test cases, and (ii) a mechanism to execute these platform independent test cases. AT4U proposes a test suite model, whose meta-model is based on the concepts and ideas of the xUnit framework [START_REF] Beck | Simple smalltalk testing[END_REF] and the UML Testing Profile 6 . The AT4U meta-model represents the following information: (i) the test cases used to exercise the system behavior; and (ii) the test case results, which are produced during the execution of each test case. For details on the AT4U meta-model, see [START_REF] Wehrmeister | Early verification of embedded systems: Testing automation for UML models[END_REF]. It is important to note that this test suite model is platform independent, and thus it could be used later in the design cycle to generate test cases for the chosen target platform.
The AT4U tool automates the execution of test cases on high-level specifications. It takes as input a DERCS model 7 (created from the UML model that represents the embedded real-time system under test) and an XML file containing the description of all test cases that must be executed. Once the system model and the test suite model are loaded, the test cases are executed on the model as follows. For each test case, the AT4U tool performs: (i) the setup of the initial scenario; (ii) the simulation of the selected methods; and (iii) the evaluation of the results obtained from the simulated execution of the set of methods.
In the scenario initialization phase, the information provided by the AT4U meta-model is used to initialize the runtime state of the DERCS objects. Each object described in the input scenario provides the values used to initialize the DERCS objects, i.e. these values are directly assigned to the runtime information of the corresponding attributes.
Thereafter, in the method testing phase, AT4U executes the methods specified within the test case, checking whether the associated assertions are valid or not. This phase is divided into two parts: (i) method setup and execution; and (ii) the evaluation of the assertions associated with the method under test. Once all input arguments are set, FUMBeS simulates the execution of the behavior associated with the method under test. Each individual action associated with the simulated behavior is executed, and its outcomes eventually modify the runtime state of the DERCS model (for details see [START_REF] Wehrmeister | Support for early verification of embedded real-time systems through UML models simulation[END_REF]).
After the simulation, FUMBeS returns the value produced during the execution of the method under test. Then, the evaluation phase takes place. The assertions associated with the method under test are evaluated. The AT4U tool compares the expected result with the one obtained after the method execution, using the specified comparison operation. In addition, the AT4U tool evaluates the assertions related to the expected scenario of the whole test case. For that, it compares the expected scenario described in the test case with the scenario obtained after executing the set of methods. Each object of the expected scenario is compared with its counterpart in the DERCS model. If the states of both objects are similar, the next object is evaluated, until all objects described in the expected scenario are checked. The test case is marked as successful if all assertions are valid, i.e. those related to each individual method must be valid, along with those related to the whole test case. If any of these assertions is not valid, the test case fails and is marked accordingly.
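The overall execution loop of a test case can be summarized by the sketch below. It is written in C purely for illustration (the actual AT4U tool is implemented in Java on top of DERCS and FUMBeS), and the data types and function names, such as setup_scenario and simulate_method, are assumptions made for this example.

#include <stdio.h>

#define MAX_METHODS 8

typedef struct {                      /* one method invocation within a test case */
    const char *name;
    int input;                        /* simplified: a single integer argument    */
    int expected;                     /* expected return value (assertion)        */
} MethodTest;

typedef struct {
    const char *id;
    int n_methods;
    MethodTest methods[MAX_METHODS];
} TestCase;

/* Placeholder for FUMBeS: simulates the behavior of a method on the model. */
static int simulate_method(const MethodTest *m) { return m->input + 1; /* toy behavior */ }

/* Placeholders for scenario handling (object states before/after execution). */
static void setup_scenario(const TestCase *tc)          { (void)tc; }
static int  expected_scenario_holds(const TestCase *tc) { (void)tc; return 1; }

static int run_test_case(const TestCase *tc)
{
    setup_scenario(tc);                                   /* (i)   scenario initialization   */
    int ok = 1;
    for (int i = 0; i < tc->n_methods; i++) {             /* (ii)  method setup + simulation */
        int result = simulate_method(&tc->methods[i]);
        if (result != tc->methods[i].expected) ok = 0;    /* (iii) per-method assertion      */
    }
    if (!expected_scenario_holds(tc)) ok = 0;             /* (iii) whole-test-case scenario  */
    printf("%s: %s\n", tc->id, ok ? "SUCCESS" : "FAILED");
    return ok;
}

int main(void)
{
    TestCase tc = { "TC-1", 1, { { "getValue", 41, 42 } } };
    return run_test_case(&tc) ? 0 : 1;
}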
It is worth noting that an important issue of any automated testing approach is to provide feedback on the executed test cases. The AT4U approach reports the outcomes of the test cases by means of an XML file. This file is generated from the output elements provided by the AT4U model.
The root of the XML document (TestSuite) is divided into various nodes, each one reporting the results of one test case. Each TestCase node has two subtrees. The Scenario subtree reports the input, expected and result scenarios. Each of these scenarios reports a snapshot of the object states at the following moments: Input describes the objects before the execution of any method; Expected indicates the objects specified in the expected scenario of the test case; and Result reveals the objects after the execution of all methods within the test case. The Methods subtree presents information on the methods exercised during the test case. For each Method, the following information is reported: InputParameters describes the values used as input to simulate the execution of the method's behavior; ReturnedValue indicates the expected and the obtained value returned after the execution of the method; Scenario reports the input, expected and result scenarios (the snapshots are taken after the method execution).
However, although a comprehensive amount of data is provided by this report, the XML format is not appropriate for reading by human beings. This XML file could be processed (e.g. using XSLT -eXtensible Stylesheet Language Transformations 8 ) to produce another document in a human-readable format, such as HTML or RTF, so that the test case results are better visualized. Such a feature is a subject for our future work. However, it is important to highlight that, by generating an XML file containing the results of the execution of each test case, other software tools could process such information to analyze the obtained results. For instance, test coverage could be evaluated, or software documentation could be generated based on such information.
AT4U Validation: UAV Case Study
This section presents a case study conducted to validate the AT4U approach. This case study has two main goals. The first one is to check whether the proposed approach is practicable, i.e. whether engineers can specify a set of test cases and execute them automatically on the UML model of an embedded and real-time system. The second one is to evaluate the performance of the AT4U tool, in order to assess how much time the AT4U tool takes to execute the set of test cases.
The AT4U approach was applied to the movement control system of an Unmanned Aerial Vehicle (UAV) presented in [START_REF] Wehrmeister | Aspect-oriented model-driven engineering for embedded systems applied to automation systems[END_REF]. More precisely, 20 test cases have been created to test parts of the movement control system in 4 different situations: (i) the system operating under normal environmental conditions (4 test cases); (ii) the sensing subsystem running while the helicopter is powered down and lies on the ground (4 test cases); (iii) the helicopter powered on but lying on the ground (7 test cases); (iv) the system operating under hostile environmental conditions (5 test cases).
The set of test cases exercises a total of 31 distinct methods, including simple get/set methods and methods with more complex computations. Some of these methods have been simulated more than once (in different situations), and hence 143 different simulations have been performed to execute the complete set of test cases. Both normal and abnormal values have been chosen as inputs for the test cases. The complete set of test cases was executed and all assertions were evaluated as true, indicating that all test cases were successful.
Figure 2 shows a small fragment of the report generated by the AT4U tool regarding the test case for the method TemperatureSensorDriver.getValue(). Lines 102-105 show that the expected and returned values are equal. Hence, the assertion on the behavior of this method was evaluated as true (assertResult="true"). The set of test cases has been repeated 10 times and the results remained the same. Thereafter, the behavior of TemperatureSensorDriver.getValue() was modified in the UML model, in order to check whether its test case would fail. Now, this method returns the value of sTemperature.Value plus one instead of returning this value directly. The complete set of test cases was executed again. As expected, this test case failed, since its assertion is no longer valid.
The first goal of this case study was achieved, as various test cases have been specified and executed repeatedly on the high-level specification of the embedded and real-time system. By using the AT4U approach, it is possible to perform regression tests. Modifications on the UML model can be automatically evaluated by the previously developed test cases. An error introduced in the system's parts that are covered by the test cases is quickly detected; at least one of the corresponding test cases will eventually fail, as illustrated in the previous paragraph.
As mentioned, the second goal of this case study is to evaluate the performance of the AT4U tool. The reused technologies (i.e. DERCS, FUMBeS, and EMF) were implemented in Java, and thus the AT4U tool was developed using JDK 1.6. The experiments have been conducted on a MacBook equipped with an Intel Core 2 Duo processor running at 2.16 GHz, 2 GB of RAM, Snow Leopard as operating system, and the Java SE Runtime Environment version 1.6.0_26 (build 1.6.0_26-b03-384-10M3425). To run the experiment, the system was rebooted, all background applications were closed, and the experiments were executed in a shell session.
The set of test cases was executed 100 times. The complete set was executed in 3712 ms on average, and the average execution time per test case was 185.6 ms (25.96 ms per method). It is important to note that this performance is highly dependent on the tested behaviors. In fact, depending on the complexity of the tested behaviors (e.g. the number of loop iterations, branches, or executed actions), this execution time can be longer or shorter. Hence, the numbers presented in this case study show only that it is feasible to run test cases that simulate the behavior specified in a UML model.
Intuitively, one may claim that this execution time is longer than that required by a native implementation. However, AT4U has been created to allow automated testing of high-level specifications. Considering that there is no implementation available in the specification phase, the AT4U execution time is not a problem, since engineers do not need to spend time implementing a functionality in order to verify a solution. Hence, errors in the specification may be quickly detected without having any implementation or the complete UML model. This performance allows the execution of the whole set of test cases at every round of changes on the UML model, to assess whether errors were introduced in the specification.
Conclusions and Future Work
To decrease the overall cost, it is very important to provide methods and tools to check the created artifacts as soon as possible in the design cycle. The later an error is detected, the higher the cost and effort to fix it [START_REF] Broekman | Testing Embedded Software[END_REF]. This work proposes an additional activity in the AMoDE-RT approach. Specifically, the AT4U approach supports the automation of the verification activities of system specifications, more precisely, UML models. The specification phase becomes an iterative process comprising modeling, model-to-model transformations, and model testing and simulation. By using the AT4U verification approach, engineers specify and execute a set of test cases during the creation of the UML model. The AT4U tool supports the proposed approach by automating the execution of the test cases on the UML model. It uses the FUMBeS framework to simulate (parts of) the system behavior. In other words, AT4U executes (parts of) the embedded and real-time system under the controlled and specific situations that have been defined within the test cases. Test cases are represented in a platform independent fashion by means of a test suite model. Engineers write test cases in an XML file, which is used to instantiate the mentioned testing model. Similarly, the AT4U tool reports the results of the test case execution in an XML file. This report provides information such as the initial, expected and resulting scenarios used in each test case, as well as the data produced by the executed behaviors and the results of the assertion evaluation.
Although this work is an initial step towards the automatic execution of test cases on both models and system implementations, it already presents encouraging results. The proposed approach has been validated with a real-world application from the embedded and real-time systems domain. The experiments indicate that the AT4U approach is suitable for the purpose of an early assessment of system behavior. Engineers may verify the system specification, and also evaluate different solutions, while the specification is being created in the early stages of design. It is worth pointing out that there is no need to implement any part of the system under design to check the suitability of a solution.
Moreover, AT4U enables regression testing of UML models. This helps the engineers to identify whether an error has been introduced into the specification in later refinement steps. Despite the benefits already mentioned in this paper, AT4U is not intended to be the only technique used to verify the system. It should be used along with other verification methods and tools, including code-driven testing automation frameworks, in order to improve the confidence in the design and also to decrease the number of specification errors.
Future work directions include the improvement of AT4U to support the use of the UML Testing Profile to specify the set of test cases, instead of using an XML file. Two new tools are needed to support AT4U: one to facilitate the specification of the test cases XML file, and another to facilitate the visualization of the testing report. A test case generation tool is also important. Such a tool would use the information provided by the AT4U PIM to generate the corresponding test cases for a selected target platform. In addition, this tool could create the test cases automatically, based on the execution flow of the system behavior. A UML virtual machine using FUMBeS is also envisaged. It should simulate the whole embedded and real-time system, i.e. the execution of its active objects and their concurrent execution, respecting the time constraints.
Fig. 1. Overview of AT4U verification approach
http://www.omg.org/spec/UML/2.4
http://www.omg.org/spec/MARTE/1.1
In this paper, "verification" means checking the specification using a finite set of test cases to exercise system behavior, instead of the exhaustive and formal verification.
http://www.omg.org/spec/UTP/1.1/
Distributed Embedded Real-time Compact Specification (DERCS) is a PIM suitable for code generation and simulation. Unfortunately, due to space constraints, a discussion on the reasons that led to the creation and use of DERCS is out-of-scope for this text. Interested readers should refer to[START_REF] Wehrmeister | Aspect-oriented model-driven engineering for embedded systems applied to automation systems[END_REF] and[START_REF] Wehrmeister | Support for early verification of embedded real-time systems through UML models simulation[END_REF].
http://www.w3.org/TR/xslt
This work is being supported by National Council for Scientific and Technological Development (CNPq -Brazil) through the grant 480321/2011-6. | 27,995 | [
"1001409",
"1001410"
] | [
"300860",
"484937"
] |
01466693 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466693/file/978-3-642-38853-8_5_Chapter.pdf | Rafael B Parizi
email: [email protected]
Ronaldo R Ferreira
email: [email protected]
Luigi Carro
email: [email protected]
Álvaro F Moreira
email: [email protected]
Compiler Optimizations Do Impact the Reliability of Control-Flow Radiation Hardened Embedded Software
Keywords: compiler optimization, compiler orchestration, embedded systems, fault tolerance, LLVM, radiation, reliability, soft errors, tuning
Introduction
Compiler optimizations are taken for granted in modern software development, enabling applications to execute more efficiently on the target hardware architecture. Modern architectures have complex inner structures designed to boost performance, and if software developers had to be aware of all those inner details, performance optimization would jeopardize the development process. Compiler optimizations are transparent to the developer, who picks the ones appropriate to the results s/he wants to achieve or, as is more common, leaves this task to the compiler itself by flagging whether it should be more or less aggressive in terms of performance.
Industry already offers microprocessors built with 22 nm transistors, with the prediction that transistor size will reach 7.4 nm by 2014 [START_REF] Moreira | ITRS 2009 Roadmap. International Technology Roadmap for Semiconductors[END_REF]. This aggressive technology scaling creates a big challenge concerning the reliability of microprocessors built with the newest technologies. Smaller transistors are more likely to be disrupted by transient sources of errors caused by radiation, known as soft errors [START_REF] Borkar | Designing reliable systems from unreliable components: the challenges of transistor variability and degradation[END_REF]. Radiation particles originating from cosmic rays induce bit flips during software execution when they strike a circuit, and since transistors are becoming smaller, there is a higher probability that they will be disrupted by a single radiation particle, as smaller transistors require a smaller amount of charge to disrupt their stored logical value. The newest technologies are so sensitive to radiation that their usage will be compromised even at sea level, as predicted in the literature [START_REF] Normand | Single event upset at ground level[END_REF]. In [START_REF] Rech | Neutron-induced soft-errors in graphic processing units[END_REF] it is shown that modern 22 nm GPU cards are susceptible to an error rate that makes their usage unfeasible in critical embedded systems. However, industry is already investing in GPU architectures as the platform of choice for high performance and low power embedded computing, such as the ARM Mali® embedded GPU [START_REF]ARM Mali Graphics Hardware[END_REF].
The classical solution to harden systems against radiation is the use of spatial redundancy, i.e. the replication of hardware modules. However, spatial redundancy is prohibitive for embedded systems, which usually cannot afford the extra costs in hardware area and power. The increase in power is a severe problem, because it is expected that 21% of the entire chip area must be turned off during operation to meet the available power budget, and an impressive 50% of the chip area at 8 nm [START_REF] Esmaeizadeh | Dark silicon and the end of multicore scaling[END_REF]. This creates the dark silicon problem [START_REF] Esmaeizadeh | Dark silicon and the end of multicore scaling[END_REF]: a huge area of the circuit cannot be used during its lifecycle. This problem gets worse when the microprocessor has redundant units, because the system's reliability could be compromised if redundant units were turned off. The current solution to this problem is to use radiation hardened microprocessors, which are designed to endure radiation. The problem with this approach is the low availability and high pricing of those radiation hardened components. For instance, a 25 MHz radiation hardened microprocessor has a unit price of U$ 200,000.00 [START_REF] Mehlitz | Expecting the unexpectedradiation hardened software[END_REF]. This high pricing makes the use of radiation hardened microprocessors unfeasible for embedded systems used in aircraft, let alone in cars and low-end medical devices such as pacemakers. For these critical embedded systems, where cost is the major constraint, a cheaper yet effective approach to reliability against radiation is necessary.
Software-Implemented Hardware Fault-Tolerance (SIHFT) [START_REF] Goloubeva | Software-Implemented Hardware Fault Tolerance[END_REF] is an approach to radiation reliability that adds redundancy to the application in terms of extra instructions or data, keeping the hardware unchanged. SIHFT techniques work by modifying the original program, adding checking mechanisms to it. SIHFT techniques are classified either as control-flow or as data-flow techniques. The former are designed to detect when an illegal jump has occurred during application execution, to possibly proceed with the resolution of the correct jump address or at least signal that such an error has occurred. The latter check whether a data variable being read is correct or not. While the effects of data-flow SIHFT methods are clear (usually the duplication of program variables or the addition of variable checksums solves the problem), the impact of the control-flow ones is not yet well understood. Because control-flow methods modify the program's control-flow graph (CFG), which happens to be the same artifact used by compiler optimizations, the efficiency of control-flow reliability techniques might be influenced by the optimizations in an unpredictable way.
In this paper we evaluate how the cumulative usage of compiler optimizations influences the reliability of applications hardened with the state-of-the-art Automatic Correction of Control-flow Errors (ACCE) [START_REF] Vemu | ACCE: Automatic correction of control-flow errors[END_REF] control-flow SIHFT technique, which was chosen because it is currently the most efficient method in terms of reliability, attaining an error correction rate of ~70%. The application set we use in this paper is drawn from the MiBench [START_REF] Guthaus | MiBench: A free, commercially representative embedded benchmark suite[END_REF] suite. For the sake of clarity, the ACCE technique is briefly reviewed in Section 2. Section 3 presents the fault model we assume and the methodology used in this paper. Finally, Section 4 presents the impact of individual and cumulative optimization passes, using LLVM [START_REF] Lattner | LLVM: A compilation framework for lifelong program analysis & transformation[END_REF] as the production compiler.
Automatic Correction of Control-flow Errors
ACCE [START_REF] Vemu | ACCE: Automatic correction of control-flow errors[END_REF] is a software technique for reliability that detects and corrects control-flow errors (CFEs) caused by random and arbitrary bit flips that might occur during software execution. The hardening of an application with ACCE is done at compilation time, since it is implemented as a transformation pass in the compiler. ACCE modifies the application's basic blocks by inserting extra instructions that perform the error detection and correction during software execution. In this section we briefly explain how ACCE works, with one subsection dedicated to error detection and another to error correction (subsections 2.1 and 2.2, respectively). The reader should refer to the ACCE article for a detailed presentation and experimental evaluation [START_REF] Vemu | ACCE: Automatic correction of control-flow errors[END_REF]. The fault model that ACCE assumes is further described in Section 3.
Control-Flow Error Detection
ACCE performs online detection of CFEs by checking signatures at the beginning and at the end of each basic block of the control-flow graph; thus, ACCE is classified as a signature checking SIHFT technique, as termed in the literature. The basic block signatures are computed and generated during compilation; the signature generation is critical because it needs to compute non-aliased signatures for the basic blocks, i.e. each block must be unambiguously identified. In addition, for each basic block found in the CFG, two additional code regions are added, the header and the footer. The signature checking during execution takes place inside these code regions. Fig. 1 shows two basic blocks (labeled N2 and N6) with the additional code regions. The top region corresponds to the header and the bottom to the footer. Still at compilation, ACCE creates for each function in the application two additional blocks, the function entry block and the Function Error Handler (FEH). For instance, Fig. 1 depicts a portion of two functions, f1 and f2, both owning entry blocks, labeled F1 and F2, and function error handlers, labeled FEH_1 and FEH_2, respectively. Finally, ACCE creates a last extra block, the Global Error Handler (GEH), which can only be reached from an FEH block. The role of these blocks is presented below.
At runtime ACCE maintains a global signature register (represented as S), which is constantly updated to contain the signature of the basic block that the execution has reached. During the execution of the header and footer code regions of each basic block, the value of the signature register is compared with the signatures generated during compilation for those code regions; if the values do not match, a control-flow error has just been detected and control is transferred to the FEH block of the function where execution currently is. ACCE also maintains the current function register (represented as F), which stores the unique identifier of the function currently being executed. The current function register is only assigned in the extra function entry block. This process encompasses the detection of an illegal and erroneous jump caused by a soft error.
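To make this mechanism concrete, the following C-style sketch shows what an instrumented basic block could look like; the signature constants, variable names and the simplified error path are assumptions made for illustration and do not reproduce the exact code that ACCE emits.

    #include <cstdio>

    // Sketch of ACCE-style header/footer checking around one basic block.
    // SIG_N2 and SIG_N2_NEXT are compile-time signatures assumed for this
    // example; the real error path would branch to the function's FEH.
    static unsigned S = 0;        // global signature register
    static unsigned F = 0;        // current function register

    enum : unsigned { SIG_N2 = 0x7, SIG_N2_NEXT = 0x9 };

    int f1(int x) {
        F = 1;                    // set in the function entry block of f1
        // --- header of basic block N2 ---
        S = SIG_N2;               // signature assigned when the block is entered
        x = x + 1;                // original instructions of N2
        // --- footer of basic block N2 ---
        if (S != SIG_N2) {        // mismatch: a control-flow error was detected
            std::puts("CFE detected, branching to FEH_1");
        }
        S = SIG_N2_NEXT;          // signature expected by the successor block
        return x;
    }

    int main() { return f1(1) == 2 ? 0 : 1; }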
Fig. 1 depicts an example of the signature checking and updating performed at execution time in a basic block. In this example, the control-flow error occurs in block N2 of function F1, where an illegal jump incorrectly transfers the control flow to basic block N6 of function F2. When the execution reaches the footer of block N6, the signature register S is checked against the signature generated at compilation. In this case, S = 0111 (i.e. the value previously assigned in the header of block N2). The branch test in the N6 footer therefore detects that the expected signature does not match the value of S, and the CFE must be signaled (step 1 in Fig. 1). In this example, the application branches to the address f2_err and enters the FEH_2 block (since the error was detected by a block owned by function F2, the function error handler invoked is FEH_2). At this point the CFE has been detected and ACCE can proceed with its correction.
2.2 Control-Flow Error Correction
The correction process starts as soon as an illegal jump is detected by the procedure described in subsection 2.1, with the control flow transferred to the FEH corresponding to the function where the CFE was found. The FEH checks whether the illegal jump originated in the function it is responsible for by comparing the function's identifier (F1 or F2, in the example of Fig. 1) with the current function register F. If the error happened in the function stored in the F register, the FEH evaluates the current value of the signature register and transfers control to the basic block that is the origin of the illegal jump (this origin is stored in the S register). On the other hand, if the illegal jump did not originate in the function where the detection occurred, the FEH transfers the control flow to the GEH. In this case, the GEH is responsible for identifying the function where the CFE occurred and for transferring the control flow back to this function, so that the error is correctly treated by that function's FEH. The GEH searches for the function where the error occurred and transfers control to its entry block, which then sends the control flow to the proper FEH so that the error can be corrected, i.e. branching the control to the basic block where the CFE occurred.
Recalling the example depicted in Fig. 1, after the CFE is detected and control is transferred to FEH_2 (step 1), the F register is matched against the identifier of the function from where the control came. However, since the CFE originated in basic block N2 of function F1, F = 1. Therefore, FEH_2 is not capable of finding the basic block where the CFE originated, and it transfers control to the GEH so that the correct FEH can be found (step 2). The GEH searches for the function identifier stored in F, until it finds that it should branch to F1 (step 3). Upon reaching the entry block F1, the variable err_flag = 1, because it was set to 1 in the GEH, meaning that there is an error that should be fixed; thus, control branches to FEH_1 (step 4). Now, since F = 1, FEH_1 knows that it is the FEH capable of handling the CFE and, as such, sets the variable err_flag back to 0. Finally, it searches for the basic block whose signature equals the register S. Upon finding it, control branches to this basic block, i.e. N2 in Fig. 1 (step 5). This last branch restores the control flow to the point of the program right before the occurrence of the CFE. Notice that inside all the FEHs and the GEH there is a variable num_error counting how many times control has passed through a FEH or the GEH. It acts as a threshold on how many times correction may be attempted, which is necessary to avoid an infinite loop in case the registers F or S get corrupted for any reason. This process concludes the correction of a CFE with ACCE.
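The dispatch logic of the error handlers can be summarized as follows; the helper functions, the threshold value and the identifiers are assumptions of this sketch and only mirror the behavior described above, not ACCE's generated code.

    // Sketch of the FEH/GEH dispatch described above (illustrative only).
    static unsigned S = 0, F = 0;     // signature and current function registers
    static int err_flag = 0;
    static int num_error = 0;
    static const int MAX_ATTEMPTS = 3;  // bound on correction attempts

    // Assumed helpers: resume execution at the block whose signature is sig,
    // or re-enter the entry block of function fn (stubs for the sketch).
    static void branch_to_block_with_signature(unsigned sig) { (void)sig; }
    static void branch_to_entry_block_of_function(unsigned fn) { (void)fn; }

    static void GEH() {
        if (++num_error > MAX_ATTEMPTS) return;   // avoid an infinite loop
        err_flag = 1;                             // tell the entry block to
        branch_to_entry_block_of_function(F);     // route control into its FEH
    }

    static void FEH(unsigned my_function_id) {
        if (++num_error > MAX_ATTEMPTS) return;
        if (F == my_function_id) {                // the CFE originated here
            err_flag = 0;
            branch_to_block_with_signature(S);    // resume at the origin block
        } else {
            GEH();                                // wrong function: ask the GEH
        }
    }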
Fault Model and Experimental Methodology
The fault model we assume in the experiments is the single bit flip, i.e. only one bit of a word is changed when a fault is injected. ACCE is capable of handling multiple bit flips as long as the flipped bits are within the same word. Since the fault injection procedure, discussed later, guarantees that every injected fault ultimately turns into a manifested error, it does not matter how many bits are flipped: there is no silent data corruption, i.e. faults that change the value of a word without changing the behavior of the program or its output. This could happen, for instance, when a fault flips the bits of a dead variable.
The ACCE technique was implemented as a transformation pass in the LLVM [START_REF] Lattner | LLVM: A compilation framework for lifelong program analysis & transformation[END_REF] production compiler, which performs all the modifications to the control-flow graph described in Section 2 on the LLVM Intermediate Representation (LLVM-IR). The ACCE transformation pass was applied after the set of compiler optimizations, since applying them in the opposite order would allow a compiler optimization to invalidate the code and semantics generated by ACCE.
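For reference, the skeleton below shows how a transformation pass is typically registered with LLVM's legacy pass manager (as used in the 2.x series); the pass name and the class are placeholders, and the ACCE-specific rewriting is only hinted at in comments.

    // Skeleton of a function pass for the legacy LLVM pass manager; the
    // ACCE-specific CFG rewriting is omitted and only sketched in comments.
    #include "llvm/Pass.h"
    #include "llvm/Function.h"

    using namespace llvm;

    namespace {
    struct ACCEPass : public FunctionPass {
        static char ID;
        ACCEPass() : FunctionPass(ID) {}

        virtual bool runOnFunction(Function &F) {
            // For every basic block: assign a signature and insert the
            // header/footer checks; add the entry block logic and the FEH;
            // one GEH is created per module.
            return true;    // the IR was modified
        }
    };
    }

    char ACCEPass::ID = 0;
    static RegisterPass<ACCEPass> X("acce-sketch", "ACCE hardening pass (sketch)");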
Since ACCE is a SIHFT technique to detect and correct control-flow errors, the adopted fault model simulates three distinct control-flow disruptions that might occur due to a control-flow error. Recall that a CFE is caused by the execution of an illegal branch to a possibly wrong address. The branch errors considered in this paper are:
1. Branch creation: the program counter is changed, transforming an arbitrary instruction (e.g. an addition) into an unconditional branch;
2. Branch deletion: the program counter is set to the next program instruction to execute, regardless of whether the current instruction is a branch;
3. Branch disruption: the program counter is disrupted to point to a distinct and possibly wrong destination instruction address.
To perform the fault injection campaigns we implemented a software fault injector using GDB (the GNU Debugger), in a similar fashion as [START_REF] Krishnamurthy | A design methodology for software fault injection in embedded systems[END_REF], which is an accepted fault injection methodology in the embedded systems domain. The steps of the fault injection process are the following:
1. The LLVM-IR program resulting from the compilation with a set of optimizations and with ACCE is translated to the assembly language of the target machine;
2. The execution trace in assembly language is extracted from the program execution with GDB;
3. A branch error (branch creation, deletion or disruption) is randomly selected. On average, each branch error type accounts for 1/3 of the injected errors;
4. One of the instructions from the trace obtained in step 2 is chosen at random for fault injection. In this step a histogram of the instructions is computed, because instructions that execute more often have a higher probability of being disrupted;
5. If the instruction chosen in step 4 executes n times, an integer number k with 1 ≤ k ≤ n is chosen at random;
6. Using GDB, a breakpoint is inserted right before the k-th execution of the instruction selected in step 4;
7. During program execution, upon reaching the breakpoint inserted in step 6, the program counter is intentionally corrupted by flipping one of its bits to reproduce the branch error chosen in step 3;
8. The program continues its execution until it finishes.
A fault is only considered valid if it generated a CFE, i.e. silent data corruption and segmentation faults were not considered when measuring the impact of the compiler optimizations on reliability. All the experiments in this paper were performed on a 64-bit Intel Core i5 2.4 GHz desktop with 4 GB of RAM and the LLVM compiler version 2.9. For every program version, where each version corresponds to the program compiled with a set of optimizations plus the ACCE pass, 1,000 faults were injected using the aforementioned fault injection scheme. In the experiments we considered ten benchmark applications from the MiBench [START_REF] Guthaus | MiBench: A free, commercially representative embedded benchmark suite[END_REF] embedded benchmark suite: basicmath, bitcount, crc32, dijkstra, fft, patricia, quicksort, rijndael, string search, and susan (comprising susan corners, edge, and smooth).
Impact of Compiler Optimizations on Control-Flow Reliability of Embedded Software
This section looks at the impact on software reliability when an application is compiled with a set of compiler optimizations and further hardened with the ACCE method. Throughout this section the baseline for all comparisons is the application compiled with the ACCE method and without any other compiler optimization. ACCE performs detection and correction of control-flow errors, thus all data discussed in this section uses the error correction rate to compute the efficiency metric. In this analysis we use 58 optimizations provided by the LLVM production compiler. Finally, the results were obtained using the fault model and fault injection methodology described in Section 3.
The impact of the compiler optimizations when compiling for reliability is measured in this paper using the metric Relative Improvement Percentage (RIP) [START_REF] Pan | Fast and Effective Orchestration of Compiler Optimizations for Automatic Performance Tuning[END_REF]. The RIP is presented in Eq. 1, where F i is a compiler optimization, E(F i ) is the error correction rate obtained for a hardened application compiled with F i , and E B is the error correction rate obtained for the baseline, i.e. the application compiled only with ACCE and without any optimization.
RIP(F_i) = ((E(F_i) - E_B) / E_B) × 100%    (1)
Fig. 2 shows a scatter plot of the obtained RIP for each application, with each of the 58 LLVM optimizations being a point along the y-axis. Each point represents the hardened application compiled with a single LLVM optimization at a time; thus, each application has 58 different versions (points in the chart). Fig. 2 shows that several optimizations increase the RIP considerably, sometimes reaching a RIP of ~10%. This is a remarkable result, showing that reliability can be increased for free just by picking appropriate optimizations that facilitate ACCE's error detection and correction. However, we also see that some optimizations totally jeopardize reliability, reaching a RIP of -73.27% (bottom filled red circle for bitcount).
It is also possible to gather evidence that the structure of the application influences how an optimization impacts the RIP of reliability. Let us consider the block-placement optimization, which is represented by the white diamond in Fig. 2. In the case of the qsort application, block-placement has a RIP of -42.75%, whereas for other applications it reaches a RIP of +11.68%. The reader can notice that other optimizations exhibit the same behavior (increasing the RIP for some applications and decreasing it for others). It also happens that some hardened applications are less sensitive to compiler optimizations, as is the case of crc_32, where the RIP stays within the ±5% interval around the baseline.
Fig. 3 depicts the RIP of a selected subset of the 58 LLVM optimizations, making it clear that even within a small subset the variation in RIP for reliability is far from negligible. Usually compiler optimizations are applied in bulk, using several of them during compilation. Therefore, it is important to also examine whether successive optimization passes can compromise or increase the software reliability of a hardened application. Fig. 4 presents the error correction rate RIP where the hardened application was compiled with a subset of the 58 LLVM optimizations. In this experiment we used six subset sizes: 10, 20, 30, 40, 50, and 58. The RIP shown in Fig. 4 is the average over five random subsets of the same size. Taking the average and picking the optimizations at random reproduces the effect of picking compiler optimizations indiscriminately or, at least, of choosing optimizations with the objective of optimizing performance without prior knowledge of how the chosen optimizations jointly influence software reliability.
It is possible to see that the cumulative effect of compiler optimizations on the error correction RIP is in most cases deleterious, with a few exceptions. Fig. 4 confirms that some applications are less sensitive to the effects of compiler optimizations, e.g. crc32 has its RIP within the interval [-1.11%, 0.73%]. On the other hand, basicmath, bitcount, and patricia are jeopardized. It is interesting to notice that the RIP in the case of picking a subset of optimizations is not subject to the much more severe reductions measured when only a single optimization was used (Fig. 2), evidencing that the composition of distinct optimizations may be beneficial for reliability.
Based on the data and experiments discussed in this section, it is clear that choosing compiler optimizations requires the software designer to take into consideration that some optimizations may not be adequate in terms of reliability for a given application. Moreover, the data show that a given optimization is not by itself the only source of reliability reduction; reliability also depends on the application being hardened and on how a given optimization facilitates (or not) the work of the ACCE technique.
Related Work
Much attention has been devoted in the literature to the impact of compiler optimizations on program performance. However, the understanding of how those optimizations work together and how they influence each other is a rather recent research topic. Combined Elimination (CE) [START_REF] Pan | Fast and Effective Orchestration of Compiler Optimizations for Automatic Performance Tuning[END_REF] is an analysis approach to identify the best sequence of optimizations for a given application set using the GCC compiler. The authors show that simple orchestration schemes between the optimizations can achieve near-optimal results, as if an exhaustive search had been performed over the whole design space created by the optimizations. CE is a greedy approach that first compiles the programs with a single optimization, using this version as the baseline. From those baseline versions the set of Relative Improvement Percentages (RIP) is calculated, which is the percentage by which the program's performance is reduced or increased (Section 4 discusses the RIP in detail). With the RIP at hand for all baselines, CE starts removing the optimizations with negative RIP, until the total RIP of all optimizations applied to a program does not reduce any further. CE was evaluated on different architectures, achieving an average RIP of 3% for SPEC2000, and up to 10% in the case of the Pentium IV for the floating point applications.
Compiler Optimization Level Exploration (COLE) [START_REF] Hoste | Cole: compiler optimization level exploration[END_REF] is another approach to achieve performance increases by selecting a proper optimization sequence. COLE uses a population-based multi-objective optimization algorithm to construct optimal sets of optimizations for a given application using the GCC compiler. The data found with COLE give some insightful results about how the optimizations interact. For instance, 25% of the GCC optimizations appear in at most one Pareto set, and some of them appear in all sets. Therefore, 75% of all optimizations do not contribute to improving the performance, meaning that they can be safely ignored! COLE also shows that the quality of an optimization is highly tied to the application set.
The Architectural Vulnerability Factor (AVF) [START_REF] Mukherjee | A systematic methodology to compute the architectural vulnerability factors for a high-performance microprocessor[END_REF] is a metric to estimate the probability that the bits in a given hardware structure will be corrupted by a soft error when executing a certain application. The AVF is calculated from the total time the vulnerable bits remain in the hardware structure. For example, the register file has a 100% AVF, because all of its bits are vulnerable in case of a soft error. This metric is influenced by the application due to liveness: for instance, a dead variable has a 0% AVF because it is not used in a computation. The authors in [START_REF] Jones | Evaluating the Effects of Compiler Optimisations on AVF[END_REF] evaluate the impact of the GCC optimizations on the AVF metric by trying to reduce the AVF-delay-square product (ADS) introduced by those authors. The ADS considers a linear relation between the AVF and the square of the performance in cycles, clearly prioritizing performance over reliability. It is reported that the -O3 optimization level is detrimental both to the AVF and to performance, because the benchmarks considered (MiBench) increased the number of loads executed. Again, the patricia application was the one with the highest reduction in the AVF, at 13%.
In [START_REF] Bergaoui | Impact of Software Optimization on Variable Lifetimes in a Microprocessor-Based System[END_REF] the authors analyze the impact of compiler optimizations on data reliability in terms of variable liveness. The liveness of a variable is the time period between when the variable is written and when it is last read before a new write operation. The authors conclude that liveness is not related only to the compiler optimization, but also depends on the application being compiled, which is in accordance with the discussion we make in Section 4. The paper shows that some optimizations tend to extend the time a variable is stored in a register instead of memory. The goal behind this is obvious: it is much faster to fetch the value of a variable from a register than from memory. However, memory is usually better protected than registers because of cheap and efficient Error Correction Code (ECC) schemes; thus, from a reliability point of view it is not a good idea to expose a variable in a register for a longer time. A solution could be the application of an ECC such as Hamming to the program variables themselves. Decimal Hamming (DH) [START_REF] Argyrides | Decimal hamming: a novel softwareimplemented technique to cope with soft errors[END_REF] is a software technique that does exactly that for a class of programs where the program's output is a linear function of the input. The generalization of efficient data-flow SIHFT techniques such as DH (i.e. ECC of program variables) is still an open research problem.
Conclusions and Future Work
In this paper we characterized the problem of compiling embedded software for reliability, given that compiler optimizations do impact the coverage rate. The study presented in this paper makes clear that choosing optimizations indiscriminately can decrease software reliability to unacceptable levels, possibly preventing the software from being deployed as originally planned. Embedded software and systems deployed in space applications must always be certified, evidencing that they withstand harsh radiation environments, and given the increasing technology scaling, other safety-critical embedded systems might have to tolerate radiation-induced errors in the near future. Therefore, the embedded software engineer must be very careful when compiling safety-critical embedded software.
Design space exploration (DSE) for embedded systems usually considers "classical" non-functional requirements, such as energy consumption and performance. However, this paper has shown the need for automatic DSE methods to consider reliability when pruning the design space of feasible solutions. This could be realized with the support of compiler orchestration during the DSE step. As future work we are studying how to efficiently extend automatic DSE algorithms to implement compiler orchestration for reliability against radiation induced errors.
Fig. 1. Depiction of how the control is transferred from a function to the basic blocks that ACCE has created when a control-flow error occurs during software execution. In this figure, a control-flow error (dashed arrow) causes the execution to jump from the block N2 of function F1 to the block N6 of function F2.
Fig. 2. Relative Improvement Percentage for the error correction rate of applications hardened with ACCE under further compiler optimization. Each hardened application was compiled with a single optimization at a time, but all applications were compiled with the 58 LLVM optimizations, thus, each hardened application has 58 versions. The baseline (RIP = 0%) is the error correction rate of the hardened application compiled without any LLVM optimization. Each point in the chart represents the application with one optimization protected with ACCE.
Fig. 3. Relative Improvement Percentage of a selected subset of the 58 LLVM optimizations. The baseline (RIP = 0%) is the error correction rate of the hardened application compiled without any LLVM optimization.
Fig. 4. Relative Improvement Percentage of random subsets of the 58 LLVM optimizations with a varying number of optimizations for each different subset: 10, 20, 30, 40, 50, 58 optimizations. The RIP for each subset size was measured taking the average of 6 random subsets of that size; hence, distinct possible optimization subsets were considered. The baseline (RIP = 0%) is the error correction rate of the hardened application compiled without any LLVM optimization.
Acknowledgments
This work is supported by CAPES foundation of the Ministry of Education, CNPq research council of the Ministry of Science and Technology, and FAPERGS research agency of the State of Rio Grande do Sul, Brazil. R. Ferreira was supported with a doctoral research grant from the Deutscher Akademischer Austauschdienst (DAAD) and from the Fraunhofer-Gesellschaft, Germany.
| 32,706 | ["1001411", "1001412", "840971", "1001413"] | ["302610", "302610", "302610", "302610"] |
01466694 | en | ["info"] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466694/file/978-3-642-38853-8_6_Chapter.pdf | Douglas P B Renaux
email: [email protected]
Fabiana Pöttker
email: [email protected]
Power Reduction in Embedded Systems Using a Design Methodology Based on Synchronous Finite State Machines
Keywords: Synchronous finite state machine, energy consumption, ARM microcontroller, Tankless water heater, Model-Based-Development
To achieve the highest levels of power reduction, embedded systems must be conceived as low-power devices, since the early stages of the design process. The proposed Model-Based-Development process uses Synchronous Finite State Machines (SFSM) to model the behavior of low-power devices. This methodology is aimed at devices at the lower-end of the complexity spectrum, as long as the device behavior can be modeled as SFSM. The implementation requires a single timer to provide the SFSM clock. The energy reduction is obtained by changing the state of the processor to a low-power state, such as deep-sleep. The main contribution is the use of a methodology where energy consumption awareness is a concern from the early stages of the design cycle, and not an afterthought to the implementation phase.
Introduction
The reduction of power consumption by embedded devices has become a major concern for designers for three main reasons: (1) global electrical energy consumption is growing at a rate close to 40% per decade [START_REF]2012 Key World Energy Statistics[END_REF] and embedded devices, numbering in the order of 100 billion, contribute to this consumption; (2) many embedded devices, mainly due to mobility requirements, are battery powered (using both rechargeable and non-rechargeable batteries), and lower consumption results in a longer usage time and less pollution to the environment; (3) in a futuristic view, embedded devices will harvest energy from the environment, and will therefore operate on a very limited energy budget.
Bohn [START_REF] Bohn | Social, Economic and Ethical Implications of Ambient Intelligence and Ubiquitous Computing[END_REF] presents a futuristic scenario of Ambient Intelligence where a large number of embedded devices will be present in our surroundings, including clothes, appliances, and most of the equipment that we use daily. This ubiquitous network of devices, many of very small dimensions, will benefit from the approach of Energy Harvesting, i.e. the use of the energy sources in our surroundings, including moving objects, vibrating machine parts, changes in temperature, light and other electromagnetic waves. Since the amount of energy obtainable by Energy Harvesting is usually very small, such devices will be required to have extremely low power consumption levels, in the order of tens to hundreds of µW [START_REF] Strba | Embedded Systems with Limited Power Resources[END_REF].
To achieve the lowest possible levels of energy consumption, the low-power requirements must be a major concern from the early stages of development. The proposed methodology addresses this concern by the use of Synchronous Finite State Machines (SFSM), as this modeling approach is well suited to a low-power implementation. A straightforward implementation that puts the processor in deep-sleep mode between clock ticks of the SFSM results in very low consumption levels combined with low resource usage: just a timer is required.
2 Problem Domain
Approaches for the Reduction of Energy Consumption in Embedded Systems
There are many approaches to achieve energy consumption reduction in Embedded Systems [START_REF] Inführ | Hard-and Software Strategies for Reducing Energy Consumption in Embedded Systems[END_REF]. These can be categorized in four classes:
• Hardware: hardware design techniques including lower supply voltages, lower processor clock frequencies, low-power functional block design and low-power operating modes.
• Computer Architecture and Compilers: concerns the instruction-set design techniques and the appropriate use of the instruction-set by the compilers [START_REF] Venkatachalam | Power Reduction Techniques for Microprocessor Systems[END_REF], [START_REF] Ortiz | Highl Level Optimization for Low Power Consumption on Microprocessor-Based Systems[END_REF].
• RTOS: the kernel can manage the system's energy consumption by changing to low-power modes or reducing the processing power [START_REF] Wiedenhoft | Um Gerente de Energia para Sistemas Profundamente Embarcados[END_REF], [START_REF] Huang | Adaptive Dynamic Power Management for Hard Real-Time Systems[END_REF].
• Application: the applications can change state to low-power requirement modes whenever their processing needs allow so [START_REF] Inführ | Hard-and Software Strategies for Reducing Energy Consumption in Embedded Systems[END_REF].
The methodology proposed in this paper falls into the Application class. Our approach uses a very straightforward mechanism of changing the state of the processor to sleep-state, where power consumption is minimal and the CPU is not operating. Another approach is used in Chameleon [START_REF] Liu | Chameleon: Application-Level Power Management[END_REF], an application level power management system that controls DVFS settings.
Synchronous Finite State Machines
A Synchronous Finite State Machine performs state transitions only when its clock ticks. Hence, its transitions are synchronous, as opposed to Asynchronous Finite State Machines where transitions may occur as soon as an external event triggers them.
One important difference between SFSM and AFSM is depicted in Fig. 1, where a SFSM is presented. The initial state is A. The occurrence of the event ev1 enables a transition to state B. This transition occurs only on the next clock tick. Between two consecutive clock ticks, more than one event may occur. This situation is illustrated by the transition from state B to state A: this transition is enabled if both events ev1 and ev2 occurred between two consecutive clock ticks.
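One possible coding of these semantics is sketched below: events are merely latched when they occur and are only evaluated when the SFSM clock ticks. The state and event names follow Fig. 1; the class itself is an illustrative assumption, not code from the paper.

    // Sketch of the SFSM of Fig. 1: events are latched between ticks and
    // transitions are evaluated only when the SFSM clock ticks.
    enum class State { A, B };

    struct Sfsm {
        State state = State::A;
        bool ev1 = false, ev2 = false;   // events latched since the last tick

        void on_ev1() { ev1 = true; }    // called whenever the event occurs
        void on_ev2() { ev2 = true; }

        void tick() {                    // called once per SFSM clock period
            switch (state) {
            case State::A: if (ev1)        state = State::B; break;
            case State::B: if (ev1 && ev2) state = State::A; break;
            }
            ev1 = ev2 = false;           // clear the latches for the next period
        }
    };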
Related Work
The reduction of energy consumption in embedded systems has been the research subject for several decades already. All the approaches listed in Section 2.1 have been analyzed. Ishihara [START_REF] Ishihara | System-Level Techniques for Estimating and Reducing Energy Consumption in Real-Time Embedded Systems[END_REF] presents techniques for analysis and measurements in embedded systems, so that energy consumption reduction techniques can be evaluated. Tiwari [START_REF] Tiwari | Power Analysis of Embedded Software: A First Step Towards Software Power Minimization[END_REF] presents a methodology for embedded software energy consumption analysis.
Venkatachalam [START_REF] Venkatachalam | Power Reduction Techniques for Microprocessor Systems[END_REF] presents a survey of the available techniques for energy consumption reduction in embedded systems. These techniques can be applied at several levels: circuit, systems, architecture and applications.
At circuit level, most of the power consumption is due to charging the intrinsic capacitances in digital circuits. Power is a function of the capacitance, clock frequency and the square of the supply voltage. The Dynamic Voltage Scaling (DVS) technique is frequently used to reduce power consumption. It consists of the reduction in the supply voltage and clock frequency whenever the system does not require its full processing speed.
Other circuit level techniques comprise the reduction of transistor sizes in the IC fabrication process, thus, diminishing the intrinsic capacitances, as well as logic gates restructuring to reduce the amount of switching needed. Concerning the buses, some of the applied techniques are the reduction of switching frequency, crosstalk, and signal amplitude, as well as bus segmentation and bus precharging.
At compiler level, energy consumption reduction is obtained by code optimization (fewer instructions to produce the same logical result) and by reducing the number of memory accesses.
At the applications level, the application and its runtime environment (e.g. RTOS) exchange information concerning opportunities for energy reduction, such as availability of resources and processing power requirements.
The proposed methodology is concerned with design techniques at the applications level. Future improvements in the proposed methodology will include the interactions with the RTOS and the required support to be implemented in the RTOS.
Proposed Methodology
The proposed methodology is applicable at the applications level (Section 2.1), therefore, it follows the usual application software development process. In each phase, decisions are taken so that the system can be modeled by SFSM and implemented as such.
1. Operational Concept: the product must be conceived to be amenable to a SFSM modeling approach; as such, it must deal with discrete-time events and actions. Inputs may represent continuous values, but they will be sampled synchronously to the SFSM clock. Outputs will be generated on every SFSM clock period, hence with delays when compared to a system that operates continuously.
2. Software Requirements: the requirements must consider the discrete-time operation of the system, hence delays in input signal detection and output signal generation should be allowed. The larger the allowed delays, the higher the possibilities for reduction of energy consumption. At this phase, if functional and temporal requirements are too stringent then the solution space is reduced, as well as the attainable low-power levels.
3. Software Architecture: the current version of our proposed methodology concerns lower-complexity devices that can be modeled as a single SFSM or as a collection of SFSM. If several SFSM are used, from an energy consumption point of view it is preferable that all the SFSM clocks have the same period or are multiples of the same master period, thereby reducing the number of times that the processor has to switch from deep-sleep mode to operating mode. The selection of the SFSM clock period is significant both for the reduction of consumption and for its effect on the control algorithm of the embedded device. In the remainder of this section, several considerations regarding the determination of the SFSM clock period are presented.
4. Software Design and Implementation: to improve flexibility, reusability, and portability, the control algorithm design should be parameterized with respect to the SFSM clock periods, such that changes to the clock period do not require code modifications except for a possible change in a defined constant. For the implementation, either a hardware timer or an RTOS timer is required to tick at the end of each SFSM clock period. On each tick the processor is awakened from deep-sleep mode and executes a simple procedure: (1) reading inputs, (2) identifying if a transition is enabled and firing it, (3) generating new outputs, and (4) returning to deep-sleep mode. A minimal sketch of such a tick handler is shown after this list.
5. Software Testing: special care must be taken in the testing phase with respect to the input detection delays and output delays due to the SFSM operation.
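The sketch below illustrates the tick procedure of item 4 for an ARM Cortex-M class device. The __WFI() intrinsic (wait-for-interrupt, provided by CMSIS), the timer interrupt name and the three application hooks are assumptions of this sketch; a concrete implementation would additionally configure the low-power timer and select the deep-sleep mode of the specific microcontroller.

    // Sketch of the periodic execution scheme: deep sleep between SFSM ticks.
    extern void read_inputs(void);    // application hooks (assumed)
    extern void fsm_step(void);       // evaluate enabled transitions and fire one
    extern void write_outputs(void);
    extern void __WFI(void);          // CMSIS wait-for-interrupt intrinsic

    static volatile int tick_pending = 0;

    void TIMER_IRQHandler(void) {     // fires once per SFSM clock period
        tick_pending = 1;
    }

    int main(void) {
        // timer and deep-sleep configuration omitted (device specific)
        for (;;) {
            while (!tick_pending) {
                __WFI();              // sleep until the next timer interrupt
            }
            tick_pending = 0;
            read_inputs();            // (1) sample the inputs
            fsm_step();               // (2) fire an enabled transition, if any
            write_outputs();          // (3) generate the new outputs
        }                             // (4) loop back to deep sleep
    }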
Considerations on the Proposed Technique
Applicability.
The proposed technique is aimed at lower-complexity embedded systems, particularly reactive systems. It is feasible to apply this technique to systems composed of several SFSM; however, a more significant reduction of energy consumption is achieved when all SFSM operate with the same clock period or with multiples of the same clock period.
SFSM Clock Period Selection.
In general, most embedded applications allow for a range of the SFSM clock period. The higher the SFSM clock period, the higher the reduction in energy consumption, at the expense of a larger output delay.
The energy consumption (in Wh) over a given time of operation (much larger than the SFSM clock period) is given by (1). The same type of equation (2) is used to calculate the power consumption (in W). Equation (2) can be simplified to (3) when the computation time is considerably smaller than the SFSM clock period. Figure 2 shows the asymptotes of (3) and their crossing point Tm. As may be noticed, there is no benefit in increasing the SFSM clock period above Tm because the maximum consumption reduction has already been achieved. Hence, the optimal SFSM clock period to achieve the maximum reduction in power consumption is given by (4).
Figure 3 presents the effect of the SFSM clock period increase on the power consumption of two embedded systems with different power consumptions when the processor is operating. Embedded system 1 (Consumption 1) has an operating power consumption (C_ON) five times lower than embedded system 2 (Consumption 2), while both have the same power consumption in deep-sleep mode (C_DS). In both cases the reduction in power consumption is significant with the increase of the clock period, up to their Tm (0.021 s for system 1 and 0.1 s for system 2, obtained from Equation 4). Higher SFSM clock periods than these would only lead to an increased delay in the response time of the system without significant reduction in power consumption.
E = C_ON × t_c × (t_op / T) + C_DS × (T − t_c) × (t_op / T)    (1)

C = C_ON × (t_c / T) + C_DS × ((T − t_c) / T)    (2)

C ≅ C_ON × (t_c / T) + C_DS    (3)

where C_ON is the power consumption in operating mode, C_DS is the power consumption in deep-sleep mode, t_c is the computation time within one SFSM clock period, T is the SFSM clock period, and t_op is the total time of operation.
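Assuming that Tm denotes the clock period at which the two asymptotes of (3) cross, i.e. where C_ON × (t_c / T) equals C_DS, a consistent form for Equation (4) would be

    Tm = (C_ON × t_c) / C_DS    (4)

With the supply currents measured later (21 mA in operating mode and 100 uA in deep sleep), this form would reproduce the 21 ms optimum quoted in the implementation if the computation time per period were roughly 0.1 ms.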
Fig. 3. Effect of the SFSM clock period on the energy consumption
Effects on Real-Time Systems.
When a system is modeled as a SFSM, a computational delay is added, and this delay depends on the SFSM clock. The effect of this delay, as well as the maximum admissible clock period, must be taken into account in the design process of the system.
Experimental Results
To illustrate the proposed methodology and validate the results the control unit of a tankless gas water heater was developed and measured. In the literature there are other experiments performed with tankless gas water heaters, such as the fuzzy logic control unit presented by Vieira [START_REF] Vieira | Modeling and Control of a Water Gas Heater with Neuro Fuzzy Techniques[END_REF].
Tankless Gas Water Heater Description and Requirements
A tankless gas water heater is composed of a heat exchanger, a gas burner, a control unit (gas heater controller), several sensors (water flow, water outlet temperature, and pilot flame detector) and one actuator (gas valve). As soon as water is flowing through the heat exchanger, the gas valve opens and the gas is ignited by the pilot flame. Water is heated while it flows through the heat exchanger; there is no storage of hot water in this system. As soon as the usage of hot water stops, the gas valve is closed and the flame is extinguished.
The gas heater controller is the electronic module responsible for controlling the temperature at the water outlet by controlling the gas flow valve. The controller needs to sense the water temperature (Tw), the water flow (W) and the presence of the pilot flame (P), as well as the desired temperature (Ts) selected by the user.
The functional requirements concern the opening of the gas valve to allow for the adequate gas flow to maintain the water temperature within 2 degrees Celsius from the desired temperature. The safety requirements are that the gas flow valve must be closed if the water flow is below the safety limit (0.5 liters per minute) or the pilot flame is off.
Figure 4 depicts the tankless gas water heater block diagram. The controller receives signals from the water flow sensor (W), the pilot light sensor (P) and the water temperature sensor (Tw). The first two sensors are digital (on/off) while the water temperature sensor is analog. The desired temperature (Ts) is given by a 10-position selector. Table 1 presents the desired temperature corresponding to each position of the selector switch. The water outlet temperature sensor is connected to a 4-bit analog-to-digital converter. The temperatures corresponding to each value of the ADC are also presented in Table 1. The controller output V controls the gas flow valve in the range from 0% (valve totally closed) up to 100% (valve totally open).
Control Algorithm Design
The control algorithm is implemented by the SFSM presented in Fig. 5. In the Idle state, no water is flowing and the gas flow valve is closed. When the water flow is above the minimum level of 0.5 liters/min and the pilot flame is on, the SFSM transitions to the Init Delay state and, after 3 seconds, it transitions to the T state with the gas flow valve at 50%. The control algorithm consists of making no correction to the gas flow valve output whenever the measured temperature is within 2 degrees of the desired temperature, making small corrections (2% per 350 ms control cycle) when the temperature difference is small (up to one setting of the temperature selector), and making larger corrections (10% per 350 ms control cycle) when the temperature difference is larger.
Due to the safety requirement, whenever the water flow is below the minimum level or the pilot flame is off, the SFSM transitions to the Idle state.
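The correction law of the T state can be written compactly as below; the step sizes and the ±2 degree dead band are taken from the description above, while the 4-degree boundary for a "small" difference (one selector setting), the clamp helper and the variable names are assumptions of this sketch.

    // Correction law executed once per 350 ms control cycle in the T state.
    static int valve = 50;               // gas valve opening in percent

    static int clamp(int v, int lo, int hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    void temperature_control_step(int t_measured, int t_desired) {
        int diff = t_desired - t_measured;          // in degrees Celsius
        int step = 0;
        if (diff > 2 || diff < -2) {                // outside the +/-2 degree band
            int magnitude = (diff >= -4 && diff <= 4) ? 2 : 10;  // small vs. large
            step = (diff > 0) ? magnitude : -magnitude;
        }
        valve = clamp(valve + step, 0, 100);        // 0% closed, 100% fully open
    }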
Control Algorithm Simulation
The tankless gas heater was simulated in Matlab/Simulink. The block diagram of the simulation model is depicted in Fig. 6. The thermal transfer is modeled as a second-order system with a time constant of 1 second. This block models the thermal transfer from the flame to the heat exchanger followed by the thermal transfer from the heat exchanger to the water flow. The change in temperature is given by (5), derived from the formulae presented by Henze [START_REF] Henze | Development of a Model Predictive Controller for Tankless Water Heaters[END_REF]:
ΔT [K] = P [kW] / (flow [kg/s] × c_pw [kJ/(kg·K)])    (5)
where P is the power (in kW) generated by the gas burning, flow is the water flow (in kg/s) through the heat exchanger, and c_pw is the specific heat capacity of water (4.18 kJ/kg.K).
The gas heater controller has an input for the timer tick and two parameterized inputs to accommodate changes in the SFSM clock period. The simulation evaluates the desired-temperature step response (desired temperature changed to 40 degrees). Fig. 7 shows the response of the simulation: after 12 seconds the system stabilizes within ±1 degree of the desired temperature.
The simulations were performed with SFSM clock periods ranging from 25 ms to 350 ms. Due to the parameterization of the correction values, the responses are nearly identical for all values of the SFSM clock period.
Implementation
The gas heater controller was implemented following the proposed methodology (Section 4). On this processor, 1. the required supply current is 21 mA in operating mode and 100 uA in deep-sleep mode; 2. the supply voltage is 3.3 V.
The optimal SFSM clock period (Section 4.1) for this implementation is 21 ms. One can observe that changing the SFSM clock period from 25 ms to 350 ms (a 14-fold increase) results in a power reduction from 606 uW to 350 uW, a reduction of only 42%.
The effects of the changes in the SFSM clock period on the water temperature are not noticeable, as predicted by the simulation results. This is mainly due to the time constants of the physical system (heat transfer from flame to heat exchanger and then to water flow) being larger than the maximum SFSM clock periods used in this evaluation.
Conclusion
The proposed methodology advocates the use of Synchronous Finite State Machine models from the early phases of development, as well as a concern, during requirements elicitation, for delay tolerances. In this way, a straightforward implementation technique of short execution bursts followed by periods of low-power deep sleep results in a significant reduction of energy consumption.
A tankless gas water heater controller was implemented with the proposed methodology. This controller is powered from a 4000 mAh non-rechargeable battery whose life is extended from 190 hours (if the processor is in continuous operating mode) to 2.25 years (using deep-sleep mode).
One could argue that the total amount of energy consumed by a gas water heater renders useless the small amount of energy that is saved in the given example. However, there are two important considerations: (1) currently the gas water heater has two energy sources -gas and electricity; the potential for significant reduction in energy consumption from the battery will extend battery life, reducing cost and environmental waste. (2) Since gas water heaters produce waste heat, one could apply energy harvesting to power the electronics, provided a start-up mechanism is available.
The proposed methodology is currently aimed at devices of lower complexity. A future research direction is broadening the scope to include more complex devices; these are likely to require the use of an RTOS that should manage the entry into deep-sleep mode. Another research direction is the application of the technique to Asynchronous Finite State Machines, aiming at devices whose functionality is not adequately modeled by SFSM.
Fig. 1. Example of a State Diagram of a SFSM
Fig. 2. Asymptotic behavior of equation (3) - power consumption versus SFSM clock period
Fig. 4. Tankless gas water heater block diagram
Fig. 5. Tankless gas water heater controller state diagram
Fig. 6. Tankless gas heater simulation model
Fig. 7. Step-response obtained by simulation
Table 1. Desired and Measured Temperature Ranges
Position / ADC value | Desired Temperature (°C) - Ts | Measured Temperature (°C) - Tw
0 | - | below 38°
1 | 40° | 38° to 42°
2 | 44° | 42° to 46°
3 | 48° | 46° to 50°
4 | 52° | 50° to 54°
5 | 56° | 54° to 58°
6 | 60° | 58° to 62°
7 | 64° | 62° to 66°
8 | 68° | 66° to 70°
9 | 72° | 70° to 74°
10 | 76° | 74° to 78°
11 | - | 78° to 82°
12 | - | 82° to 86°
13 | - | 86° to 90°
14 | - | 90° to 94°
15 | - | 94° to 98°
Table 2. Measured Power Consumption
Scenario | Supply Current | Power Consumption
Continuously in operating mode | 21 mA | 69 mW
SFSM clock period of 25 ms | 184 uA | 606 uW
SFSM clock period of 350 ms | 106 uA | 350 uW
| 21,890 | ["1001414", "1001415"] | ["300860", "300860"] |
01466696 | en | ["info"] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466696/file/978-3-642-38853-8_9_Chapter.pdf | Marcel Pockrandt
email: [email protected]
Paula Herber
email: [email protected]
Verena Klös
email: [email protected]
Sabine Glesner
email: [email protected]
Model Checking Memory-Related Properties of Hardware/Software Co-designs
Introduction
Concurrent HW/SW systems are used in many safety-critical applications, which imposes high quality requirements. At the same time, the demands on multi-functionality and flexibility are steadily increasing. To meet the high quality standards and to satisfy the rising quantitative demands, complete and automatic verification techniques such as model checking are needed. Existing techniques for HW/SW co-verification do not support pointers or memory. Thus, they cannot be used to verify memory-related properties and they are not applicable to a wide range of practical applications, as many HW/SW co-designs rely heavily on the use of pointers.
In this paper, we present a novel approach for model checking memory-related properties of digital HW/SW systems implemented in SystemC/TLM [START_REF]IEEE Standards Association: IEEE Std. 1666-2005, Open SystemC Language Reference Manual[END_REF][START_REF]Open SystemC Initiative (OSCI): TLM 2.0 Reference Manual[END_REF]. SystemC/TLM is a system level design language which is widely used for the design of concurrent HW/SW systems and has gained the status of a de facto standard during the last years. The main idea of our approach is to formalize a clean subset of the SystemC memory model using multiple typed arrays. We incorporate this formal memory model into our previously proposed SystemC to timed automata transformation [START_REF] Herber | Model Checking SystemC Designs Using Timed Automata[END_REF][START_REF] Herber | Transforming SystemC Transaction Level Models into UPPAAL Timed Automata[END_REF][START_REF] Pockrandt | Model Checking a SystemC/TLM Design of the AMBA AHB Protocol[END_REF]. With that, we enable the complete and automatic verification of safety and timing properties of SystemC/TLM designs, including memory safety, using the Uppaal model checker. Our approach can handle all important elements of the SystemC/TLM language, including port and socket communication, dynamic sensitivity and timing. Thus, we can cope with both hardware and software and their interplay. We require our approach for model checking of memory-related properties of SystemC/TLM designs to fulfill the following criteria:
1. The SystemC memory model subset must be clearly defined.
2. The automatic transformation from SystemC/TLM to Uppaal must cover the most important memory related constructs, at least the use of pointer variables and call-by-reference.
3. The resulting representation should produce as little overhead as possible on verification time and memory consumption for the Uppaal model checker.
4. To ease debugging, the automatically generated Uppaal model should be easy to read and understand.
Note that our main goal is to make the theoretical results from formal memory modeling applicable for practical applications, and in particular, to transfer the results from the verification of C programs (with pointers) to the verification of HW/SW co-designs written in SystemC. We do not aim at supporting the full C memory model, including inter-type aliasing and frame problems. Instead, we focus on a small, clean subset of the SystemC memory model that is sufficient for most practical examples and can be verified fully automatically.
The rest of this paper is structured as follows: In Section 2, we briefly introduce the preliminaries. In Section 3, we summarize related work. In Section 4, we present our approach for the formalization of the SystemC memory model with Uppaal timed automata. Then, we show how we incorporated the memory model into our previously proposed automatic transformation from SystemC/TLM to Uppaal. In Section 5, we describe the verification of memory safety with our approach. Finally, we present the results of this transformation for our case study in Section 6 and conclude in Section 7.
Preliminaries
In this section, we briefly introduce the preliminaries that are necessary to understand the remainder of the paper. First, we give an overview of the system level design language SystemC/TLM and of Uppaal timed automata (UTA). Then we give a brief introduction to our transformation from SystemC to timed automata.
SystemC/TLM
SystemC [START_REF]IEEE Standards Association: IEEE Std. 1666-2005, Open SystemC Language Reference Manual[END_REF] is a system level design language and a framework for HW/SW co-simulation. It allows modeling and execution of hardware and software at various levels of abstraction. It is implemented as a C++ class library, which provides the language elements for the description of hardware and software, and an event-driven simulation kernel. A SystemC design is a set of communicating processes, triggered by events and interacting through channels. Modules and channels represent structural information. SystemC also introduces an integer-valued time model with arbitrary time resolution. The execution of the design is controlled by the SystemC scheduler. It controls the simulation time and the execution of processes, handles event notifications and updates primitive channels.
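To recall what such a design looks like at the source level, the following minimal module uses a thread process, an event with a delta-delayed notification, and timed waits; the module itself is a toy example written for this overview, not code taken from the paper or from our case study.

    // Minimal SystemC module: one thread process, an event, and timed waits.
    #include <systemc.h>

    SC_MODULE(Producer) {
        sc_event data_ready;

        SC_CTOR(Producer) {
            SC_THREAD(run);
        }

        void run() {
            for (;;) {
                wait(10, SC_NS);                    // consume simulation time
                data_ready.notify(SC_ZERO_TIME);    // delta-delayed notification
            }
        }
    };

    int sc_main(int argc, char* argv[]) {
        Producer p("p");
        sc_start(100, SC_NS);                       // simulate for 100 ns
        return 0;
    }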
Transaction Level Modeling (TLM) is mainly used for early platform evaluation, performance analysis, and fast simulation of HW/SW systems. The general idea is to use transactions as an abstraction for all kind of data that is transmitted between different modules. This enables simulations on different abstraction levels, trading off accuracy and simulation speed. The TLM standard [START_REF]Open SystemC Initiative (OSCI): TLM 2.0 Reference Manual[END_REF] and its implementation are an extension of SystemC, which provide interoperability between different transaction level models. The core of the TLM standard is the interoperability layer, which comprises standardized transport interfaces, sockets, and a generic payload.
Uppaal Timed Automata
Timed automata [START_REF] Alur | A Theory of Timed Automata[END_REF] are finite-state machines extended by clocks. Two types of clock constraints are used to model time-dependent behavior: Invariants are assigned to locations and restrict the time the automaton can stay in this location. Guards are assigned to edges and enable progress only if they evaluate to true. Networks of timed automata are used to model concurrent processes, which are executed with an interleaving semantics and synchronize on channels.
Uppaal [START_REF] Behrmann | A Tutorial on Uppaal[END_REF] is a tool suite for modeling, simulation, and verification of networks of timed automata. The Uppaal modeling language extends timed automata by bounded integer variables, a template mechanism, binary and broadcast channels, and urgent and committed locations. Binary channels enable a blocking synchronization between two processes, whereas broadcast channels enable non-blocking synchronization between one sender and arbitrarily many receivers. Urgent and committed locations are used to model locations where no time may pass. Furthermore, leaving a committed location has priority over non-committed locations.
A small example Uppaal timed automaton (UTA) is shown in Figure 1. The initial location is denoted by •, and request? and ack! denote receiving and sending on channels, respectively. The clock variable x is first set to zero and then used in two clock constraints: the invariant x <= maxtime denotes that the corresponding location must be left before x becomes greater than maxtime, and the guard x >= mintime enables the corresponding edge at mintime. The symbols ∪ and c depict urgent and committed locations.
Fig. 1: Example Timed Automaton
Transformation from SystemC to UPPAAL
In previous work [START_REF] Herber | Model Checking SystemC Designs Using Timed Automata[END_REF][START_REF] Herber | Transforming SystemC Transaction Level Models into UPPAAL Timed Automata[END_REF][START_REF] Pockrandt | Model Checking a SystemC/TLM Design of the AMBA AHB Protocol[END_REF], we have presented an approach for the automatic transformation of the informally defined semantics of SystemC/TLM designs
into the formal semantics of UTA. The transformation preserves the (informally defined) behavioral semantics and the structure of a given SystemC design and can be applied fully automatically. It requires two major restrictions. First, we do not handle dynamic process or object creation. This hardly narrows the applicability of the approach, as dynamic object and process creation are rarely used in SystemC designs. Second, the approach only supports data types that can be mapped to (structs and arrays of) int and bool.
In our transformation, we use predefined templates for SystemC constructs such as events, processes and the scheduler. Then, each method is mapped to a single UTA template. Call-return semantics is modeled with binary channels. Process automata are used to encapsulate the method automata and care for the interactions with event objects, the scheduler, and primitive channels. Our transformation is compositional in the sense that we transform each module separately and compose the system in a final instantiation and binding phase. For detailed information on the transformation of SystemC/TLM designs to UTA we refer to [START_REF] Herber | A Framework for Automated HW/SW Co-Verification of SystemC Designs using Timed Automata[END_REF].
Related Work
In the past ten years, there has been a lot of work on the development of formal memory models for C and C-like languages and in particular on the verification of pointer programs. Three main approaches to reason about memory in C (cf. [START_REF] Tuch | Formal Memory Models for Verifying C Systems Code[END_REF]) exist: First, semantic approaches regard memory as a function from some kind of address to some kind of value. Second, there exist approaches that use multiple typed heaps in order to avoid the necessity of coping with inter-type aliasing. In these approaches, a separate heap is used for each language type that is present in a given program or design. In [START_REF] Bornat | Proving pointer programs in Hoare Logic[END_REF], Bornat describes under which restrictions such a memory model is semantically sound. Third, approaches based on separation logic (an extension of Hoare logic) [START_REF] Reynolds | Separation logic: A logic for shared mutable data structures[END_REF] are able to cope with aliasing and frame problems. The main idea of separation logic is to provide inference rules that allow for the expression of aliasing conditions and local reasoning.
With our approach, we mainly adapt the idea of multiple typed heaps [START_REF] Bornat | Proving pointer programs in Hoare Logic[END_REF] by providing a separate memory array for each datatype used in a given design.
There also have been several approaches to provide a formal semantics for SystemC in order to enable automatic and complete verification techniques. However, many of them only cope with a synchronous subset of SystemC [START_REF] Müller | SystemC: Methodologies and Applications, chap. An ASM based SystemC Simulation Semantics[END_REF][START_REF] Ruf | The Simulation Semantics of SystemC[END_REF][START_REF] Salem | Formal Semantics of Synchronous SystemC[END_REF][START_REF] Große | HW/SW Co-Verification of Embedded Systems using Bounded Model Checking[END_REF], cannot handle dynamic sensitivity or timing, and do not consider pointers or memory. Other approaches which are based on a transformation from SystemC into some kind of state machine formalism [START_REF] Habibi | Generating Finite State Machines from Sys-temC[END_REF][START_REF] Traulsen | A SystemC/TLM semantics in Promela and its possible applications[END_REF][START_REF] Zhang | SystemC Waiting-State Automata[END_REF][START_REF] Niemann | Formalizing TLM with Communicating State Machines[END_REF], process algebras [START_REF] Man | An Overview of SystemCFL[END_REF][START_REF] Garavel | Verification of an industrial SystemC/TLM model using LOTOS and CADP[END_REF] or sequential C programs [START_REF] Cimatti | Verifying SystemC: A software model checking approach[END_REF][START_REF] Cimatti | Kratos -A Software Model Checker for SystemC[END_REF] do not cope with pointers or memory as well. Furthermore, most of these approaches lack some important features (e.g., no support for time, no exact timing behavior, no automatic transformation). To the best of our knowledge, the only approach that can cope with pointers and memory is the work of [START_REF] Kroening | Formal Verification of SystemC by Automatic Hardware/Software Partitioning[END_REF][START_REF] Blanc | Scoot: A Tool for the Analysis of SystemC Models[END_REF]. There, a labeled Kripke structurebased semantics for SystemC is proposed and predicate abstraction techniques are used for verification. However, the main idea of this approach is to abstract from the hardware by grouping it into combinational and clocked threads, which are then combined into a synchronous product for the overall system. They do neither address timing issues nor inter-process communication via sockets and channels. Thus, it remains unclear how they would cope with deeply integrated hardware and software components and their interplay. Several approaches exists for the verification of memory safety properties. However, these approaches focus on pure C (e.g., BLAST [START_REF] Beyer | Checking Memory Safety with Blast[END_REF] and VCC/Z3 [START_REF] Cohen | VCC: A Practical System for Verifying Concurrent C[END_REF]) and cannot cope with the special semantics of SystemC/TLM.
Formalization and Transformation of the SystemC Memory Model
In this paper, we present a novel approach for model checking memory-related properties of HW/SW systems implemented in SystemC/TLM. The main idea of our approach is to formalize a clean subset of the SystemC memory model using separate memory arrays for each type present in a given design (cf. [START_REF] Bornat | Proving pointer programs in Hoare Logic[END_REF]).
In order to enable model checking of memory-related properties, we incorporate this formal memory model into our SystemC to Timed Automata Transformation Engine (STATE) [START_REF] Herber | Model Checking SystemC Designs Using Timed Automata[END_REF][START_REF] Herber | Transforming SystemC Transaction Level Models into UPPAAL Timed Automata[END_REF][START_REF] Pockrandt | Model Checking a SystemC/TLM Design of the AMBA AHB Protocol[END_REF]. To this end, we define a set of transformation rules, which covers all memory-related constructs that are relevant for our subset of the SystemC memory model. For each memory-related construct, we define a UTA representation. With that, we can automatically transform a given SystemC/TLM design that makes use of pointers and memory into a UTA model.
In the following, we first state a set of assumptions that define a subset of the SystemC memory model. Then, we present our representation of the SystemC memory model within Uppaal. Finally, we present the transformation itself.
Assumptions
Our memory model covers many memory-related constructs like call-by-reference of methods, referencing of variables, dereferencing of pointers, and pointers to pointers of arbitrary depth. However, we require a given SystemC/TLM model to fulfill the following assumptions:
1. No typecasts are used.
2. No direct hardware access of memory addresses (e.g., int *p; p = 0xFFFFFF;).
3. Structs are only referenced by pointers of the same type as the struct. This also means that there are no direct references to struct members.
4. No pointer arithmetic is used.
5. No dynamic memory allocation.
6. No recursion is used.
7. No function pointers are used.
Assumptions 1, 5, and 6 are necessary as Uppaal does not support typecasting, dynamic memory allocation or recursion. The second assumption is necessary because we do not model the memory bytewise and can only access it per variable. The third assumption is due to the fact that we do not flatten structs and therefore struct members do not have an address of their own. As we only model the data memory, assumption 7 is necessary. Assumptions 1-4 can be considered as minor ones and hardly restrict the expressiveness of our memory model. As most SystemC/TLM models make use of neither dynamic memory allocation nor recursion, Assumptions 5 and 6 are acceptable as well. With the assumptions above, we have a clear definition of the subset of the SystemC memory model that we want to support with our approach.
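To make the assumptions concrete, the hypothetical C++ snippet below contrasts constructs that stay inside the supported subset with constructs that the assumptions exclude; all names and types are illustrative and do not come from the paper.

// Supported by the considered subset: static data, typed pointers,
// referencing, dereferencing, call-by-reference.
struct data { int x; int y; };

int  counter = 0;          // plain variable, may be referenced
data packet;               // struct, referenced only via data* pointers
int *ip = &counter;        // typed pointer to an int
data *dp = &packet;        // typed pointer to a struct (not to a member)

void increment(int &v) { v++; }   // call-by-reference is allowed

// Excluded by Assumptions 1-7 (illustrative examples):
// int *raw = (int *) dp;          // (1) typecast
// int *hw  = (int *) 0xFFFFFF;    // (2) direct hardware address
// int *mx  = &packet.x;           // (3) reference to a struct member
// ip = ip + 1;                    // (4) pointer arithmetic
// int *dyn = new int;             // (5) dynamic memory allocation
// void f() { f(); }               // (6) recursion
// void (*fp)() = &f;              // (7) function pointer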
Representation
The main idea of our representation is to model the memory of the System-C/TLM design with multiple typed arrays. As Uppaal does not support polymorphic datatypes, we create a separate array for each datatype used in the design. Pointers then can be modeled as integer variables, which point to a position in the array for the corresponding type. Array variables are interpreted as pointers to the first array element. All other variables are modeled as constant pointers if they are ever referenced (e.g., call-by-reference or direct referencing).
Figure 2 shows a small example of our Uppaal representation for the SystemC memory model. While pointers, integers and struct variables are arbitrarily spread over the memory in the SystemC memory model, we group them together in our Uppaal representation. In our example, there exist an integer variable i and two objects s and t of type data. Furthermore, there is an integer pointer p, pointing to i, and a pointer q of type data, pointing to t. In the resulting Uppaal model, i is placed in the array intMem and s and t are placed in the array dataMem. The pointers are transformed from real addresses into the corresponding array indices. Note that the arrays have a finite and fixed size which cannot be altered during the execution of the model. However, the pointers can point to all existing data of their type.
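The following C++-style sketch illustrates how the example of Fig. 2 could look on both sides of the mapping; the array sizes, the idx_* naming and the concrete index values are our own illustrative assumptions, not taken from the paper.

// SystemC/C++ side (original design, simplified):
struct data { int a; int b; };
int   i = 42;
data  s, t;
int  *p = &i;    // integer pointer, points to i
data *q = &t;    // data pointer, points to t

// Uppaal-side representation (one typed array per data type):
const int NULLPTR = -1;        // -1 encodes the null pointer
int  intMem[1];                // all referenced ints live here
data dataMem[2];               // all referenced data objects live here

const int idx_i = 0;           // constant index of i in intMem
const int idx_s = 0;           // constant index of s in dataMem
const int idx_t = 1;           // constant index of t in dataMem

int p_idx = idx_i;             // pointer p becomes an index into intMem
int q_idx = idx_t;             // pointer q becomes an index into dataMem

// Dereferencing *p then corresponds to intMem[p_idx],
// and q->a corresponds to dataMem[q_idx].a.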
Transformation
For the transformation of a given design, we sort all variables into three disjoint sets: PTR, containing all pointers; REF, containing all referenced nonpointer variables and all arrays; and VAR, containing all other nonpointer variables.
The result can be used to extract the memory model of the SystemC/TLM model and to transform it into a Uppaal memory model as proposed in Section 4.2. Figure 3 shows a small example of the transformation. Table 1 shows the transformation rules we use, with var ∈ REF, arr ∈ REF ∧ isArray(arr), p, q ∈ PTR, arbitrary data types T, U, V and W, and an arbitrary expression E. In general, every referenced variable is converted into a typed array index. While for nonpointer variables this index is constant, pointers can be arbitrarily changed. Direct accesses to these variables and pointer dereferencing operations can be modeled by typed array accesses. Direct pointer manipulation and variable referencing can be performed without any typed array access.
For all variable types in the REF and PTR sets, we generate a typed array representing the memory for this type. The size of each typed array is determined by the total amount of referenced variables of this type. For all members of the REF set, we reserve one field in the typed array per variable per module instance and generate a constant integer with the name of the variable and the index of the reserved field. For arrays, we reserve one field per element and set the constant integer to the index of the first element in the array. We replace every variable access with an array access to the typed array and every referencing of the variable by a direct access. If the variable is an array, the index is used as an offset. For all members of the PTR set we generate an integer variable with the initial value NULL (-1) or the index of the variable the pointer points to. As -1 is not a valid index of the typed array, all accesses to uninitialized pointers result in an array index error. Furthermore, we replace every dereferencing operation to the pointer with an array access to the corresponding typed array.
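As an illustration of these rules, the short sketch below emulates in plain C++ what the generated Uppaal model encodes for a small, invented SystemC fragment; intMem plays the role of the typed memory array for int, and the index values are made up for the example.

#include <cassert>

// intMem is the typed memory array for all referenced ints;
// pointers are represented by indices into it, and -1 encodes NULL.
int intMem[2];

int main() {
    const int x = 0;              // "int x = 5;"   -> reserved index for x
    intMem[x] = 5;

    int p = x;                    // "int *p = &x;" -> pointer holds the index of x

    intMem[p] = intMem[p] + 1;    // "*p = *p + 1;" -> dereference becomes an array access

    const int y = 1;              // "int y = *p;"  -> new index, value copied via the array
    intMem[y] = intMem[p];

    p = -1;                       // "p = NULL;"    -> NULL is encoded as -1
    assert(intMem[x] == 6 && intMem[y] == 6);
    return 0;
}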
We implemented the transformation rules in our previously proposed transformation from SystemC to Uppaal and thus can transform a given SystemC/TLM model with pointers fully automatically. Currently, our implementation does not support pointers to pointers and arrays of pointers, though both can be added with little effort. As our transformation from SystemC to Uppaal is able to cope with pointers and other memory-related constructs, the Uppaal model checker can now be used to verify memory safety properties. In general, we can verify all properties that can be expressed within the subset of CTL supported by Uppaal [START_REF] Behrmann | A Tutorial on Uppaal[END_REF] (e.g., safety, liveness and timing properties as shown in [START_REF] Pockrandt | Model Checking a SystemC/TLM Design of the AMBA AHB Protocol[END_REF]). For convenience, our verification framework generates two memory safety properties automatically:
Table 1: Transformation Rules

Declarations:
  T var;                     ⇒  const int var = newIndex(T);
  T var = E;                 ⇒  const int var = newIndex(T); TMEM[var] = E;
  U arr[E];                  ⇒  const int arr = newIndex(U, E);
  U arr[] = {v0,...,vn-1};   ⇒  const int arr = newIndex(U, n); UMEM[arr+0] = v0; ...; UMEM[arr+(n-1)] = vn-1;
  V *p;                      ⇒  int p = -1;
  W *q = E;                  ⇒  int q = E;

Variable Access:
  var        ⇒  TMEM[var]
  arr[E]     ⇒  UMEM[arr+E]
  &(E)       ⇒  E
  &arr[E]    ⇒  arr+E

Pointer Access:
  *(E)       ⇒  TMEM[E]   (with E of type T)
  NULL       ⇒  -1
(a) All pointers in the design are always either null, or they point to a valid part of the memory array corresponding to their type. (b) The design never tries to access memory via a null pointer.
To verify the first property, it is necessary to check for all pointers $p_0, \dots, p_n$ that they are either null or have a value within the range of their typed array. If the function $u(p_i)$ yields the size of the typed array of the type of $p_i$, property (a) can be formalized as follows:
$$\mathbf{AG}\;\big((p_0 = \mathit{null} \lor 0 \le p_0 \le u(p_0)-1) \land \dots \land (p_n = \mathit{null} \lor 0 \le p_n \le u(p_n)-1)\big)$$
The second property cannot be captured statically, as it needs the dynamic information where in the program a pointer is used to access memory. To solve this problem, we have developed an algorithm identifying all memory accesses in all processes $Proc_0, \dots, Proc_n$. For each transition comprising a memory access, a unique label $l_i$ is assigned to its source location. With these labels, the property that a memory access $ma_i$ that uses a pointer $p_j$ is valid can be formalized as follows:
$$\mathit{safe}(ma_i) \equiv \big(\mathit{Proc}(ma_i).l_i \implies (p_j \neq \mathit{null})\big)$$
Using this abbreviation, the second property can be formalized as follows:
$$\mathbf{AG}\;\big(\mathit{safe}(ma_0) \land \dots \land \mathit{safe}(ma_n)\big)$$
Both memory safety properties described above are automatically generated for all pointers in a given design within our verification framework.
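A small C++ sketch of how such queries could be assembled mechanically from the list of pointers and labeled memory accesses is given below; the A[] and imply keywords in the generated strings are standard Uppaal query syntax, but the data structures and the identifier naming scheme are our own assumptions.

#include <string>
#include <vector>
#include <iostream>

// Hypothetical description of a pointer and of a labeled memory access.
struct Pointer   { std::string name; int typedArraySize; };   // u(p)
struct MemAccess { std::string process, label, pointer; };    // Proc, l_i, p_j

int main() {
    std::vector<Pointer>   ptrs     = {{"p0", 4}, {"p1", 2}};
    std::vector<MemAccess> accesses = {{"Master1", "l0", "p0"},
                                       {"Slave1",  "l1", "p1"}};

    // Property (a): every pointer is NULL (-1) or inside its typed array.
    std::string qa = "A[] (";
    for (size_t i = 0; i < ptrs.size(); ++i) {
        if (i) qa += " && ";
        qa += "(" + ptrs[i].name + " == -1 || (" + ptrs[i].name + " >= 0 && "
            + ptrs[i].name + " <= " + std::to_string(ptrs[i].typedArraySize - 1) + "))";
    }
    qa += ")";

    // Property (b): no memory access is ever performed through a NULL pointer.
    std::string qb = "A[] (";
    for (size_t i = 0; i < accesses.size(); ++i) {
        if (i) qb += " && ";
        qb += "(" + accesses[i].process + "." + accesses[i].label
            + " imply " + accesses[i].pointer + " != -1)";
    }
    qb += ")";

    std::cout << qa << "\n" << qb << "\n";
    return 0;
}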
Evaluation
In this section we evaluate our approach with an industrial case study, namely a TLM implementation of the AMBA AHB, provided by Carbon Design Systems.
The original model consists of about 1500 LOC. To meet the assumptions of our approach, we performed the following modifications: (1) we changed the sockets to TLM standard sockets, (2) we replaced the generic payload type with a specific one, (3) we replaced operators for dynamic memory management (e.g., new, delete) by static memory allocation and (4) we only transfer constant data through the bus. The latter modification drastically simplifies the verification problem. However, our focus is on verifying the correct concurrent behavior, synchronization, timing, and memory safety, which do not depend on the data that is transferred over the bus. The modified model consists of about 1600 LOC. We also performed experiments on a pointer-free variant of the AMBA AHB design to evaluate the additional verification effort produced by our memory model. Therefore, we manually removed all memory-related constructs from the original design and tried to keep the resulting design completely side-effect free. In the following, we compare the results of two different experiments: transformation and verification of (1) the pointer-free design and (2) a design featuring pointers and other memory-related constructs (like call-by-reference).
For both designs, we verified the following properties: (1) deadlock freedom, (2) a bus request is always eventually answered with a grant signal, (3) the bus is only granted to one master at a time, (4) a transaction through the bus is always finished within a given time limit. For the CTL formulae we refer to [START_REF] Pockrandt | Model Checking a SystemC/TLM Design of the AMBA AHB Protocol[END_REF]. For the design with pointers and other memory-related constructs, we additionally verified memory safety, as described in Section 5. All experiments were run on a 64bit Linux system with a dual core 3.0 GHz CPU and 8 GB RAM. To evaluate the scalability of our approach we used different design sizes (from 1 master and 1 slave, 1M1S, to 2 master and 2 slaves, 2M2S). The results of the verification are shown in Table 2.
All properties have been proven to be satisfied at the end of the verification phase. During the verification, we detected a bug in the original design which led to a deadlock situation. When a transaction is split into several separate transfers, a counter variable is used to store the number of successful transfers before the split occurs. This variable was not reset in the original design. As a consequence, all split transactions besides the first one failed. This is a typical example which is both difficult to detect and to correct with simulation alone. With our approach, the generation of a counter example took only a few minutes. Due to the structure preservation of our transformation and the graphical visualization in Uppaal, it was easy to understand the cause of the problem.
Our results show that the verification effort, in terms of CPU time and memory consumption, is drastically increased if pointers and other memory-related constructs are taken into account. This is due to the fact that the memory model introduces an additional integer variable for each variable in the design. However, formal verification via model checking, if successful, is only performed once during the whole development cycle. At the same time, the generation of counter examples only takes a few minutes. Most importantly, we are not aware of any other approach that can cope with the HW/SW interplay within SystemC/TLM models and at the same time facilitates the verification of memory-related properties, for example memory safety.
Conclusion and Future Work
We presented a novel approach for model checking of memory-related properties on HW/SW systems implemented in SystemC/TLM. We formalized a clean subset of the SystemC memory model with UTA. We use this formalization for a fully-automatic transformation of SystemC/TLM into equivalent Uppaal timed automata. This enables the use of the Uppaal model checker to verify memory-related properties. For convenience, we generate two memory safety properties, namely that all pointers only point to valid memory locations or null and that no null pointer accesses are used, automatically within our verification framework.
We implemented our approach and showed its applicability with an industrial design of the AMBA Advanced High Performance Bus (AHB). We were able to verify deadlock freedom, timing, and memory safety. We detected a deadlock situation in the AMBA AHB design, which could easily be resolved with the help of the counter-example generated by the Uppaal model checker. Our memory model produces a significant overhead to verification time and memory consumption. However, this overhead is compensated with the possibility to verify memory-related properties and the drastically increased practical applicability of our approach.
In our case study, we manually modified the design such that only constant data is transferred over the bus. For future work, we plan to extend our approach with automatic data abstraction techniques to enable the verification of even larger SystemC/TLM designs without manual interaction.
Fig. 2: Memory Representation in SystemC and UPPAAL
Table 2: Results from the AMBA AHB Design (verification time in [h:]min:sec)

                       Pointer-free design                  Design with pointers
                       1M1S    1M2S    2M1S    2M2S         1M1S     1M2S     2M1S      2M2S
 transformation time   0:04    0:04    0:04    0:04         0:03     0:03     0:04      0:05
 deadlock freedom      < 1     < 1     0:27    1:08         6:17     12:06    37:28     1:24:25
 only one master       -       -       0:14    0:34         -        -        24:07     54:32
 bus granted to M1     < 1     < 1     0:15    0:37         3:56     7:47     24:10     54:06
 bus granted to M2     -       -       0:15    0:38         -        -        24:10     54:02
 timing                < 1     < 1     0:22    0:54         11:37    22:24    1:09:15   2:32:04
 memory safety (a)     -       -       -       -            4:36     9:16     27:57     1:04:07
 memory safety (b)     -       -       -       -            6:36     13:19    39:12     1:27:29
 # states              5K      9K      537K    1M           15M      26M      64M       127M
 memory usage          < 1 mb  2 mb    61 mb   112 mb       873 mb   1.5 gb   2.2 gb    3.9 gb
"1001416",
"1001417",
"1001418",
"1001419"
] | [
"86624",
"86624",
"86624",
"86624"
] |
01466703 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01466703/file/978-3-642-38853-8_7_Chapter.pdf | Leonardo Steinfeld
Marcus Ritt
Fernando Silveira
Luigi Carro
Low-power processors require effective memory partitioning
Keywords: banked memory, event-driven applications, power management, wireless sensor network
The ever increasing complexity of embedded systems demands rising memory sizes, and larger memories increase the power drain. In this work, we exploit banked memories with an independent low-leakage retention mode in event-driven applications. The resulting energy saving for a given number of banks is close to the maximum achievable value, since the memory bank access pattern of event-driven applications presents a high temporal locality, leading to a low saving loss due to wake-up transitions. Results show an energy reduction of up to 77.4% for a memory of ten banks with a partition overhead of 1%.
Introduction
In the last years, there has been a lot of research dealing with processing power optimization, resulting in a variety of ultra-low power processors. In these processors, the embedded SRAM poses a primary energy limitation, since it consumes most of the total processor power [START_REF] Verma | Analysis Towards Minimization of Total SRAM Energy Over Active and Idle Operating Modes. Very Large Scale Integration (VLSI) Systems[END_REF]. Partitioning an SRAM memory into multiple banks that can be independently accessed reduces the dynamic power consumption, and since only one bank is active per access, the remaining idle banks can be put into a low-leakage sleep state to also reduce the static power [START_REF] Golubeva | Architectural leakage-aware management of partitioned scratchpad memories[END_REF]. However, the power and area overhead due to the extra wiring and the duplication of address and control logic prohibits an arbitrarily fine partitioning into a large number of small banks. Therefore, the final number of banks should be carefully chosen at design time, taking into account this partitioning overhead. The memory organization may be limited to equally-sized banks, or it can allow any bank size. Moreover, the strategy for the bank state management may range from a greedy policy (as soon as a memory bank is not being accessed it is put into the low-leakage state) to the use of more sophisticated prediction algorithms [START_REF] Calimera | Design Techniques and Architectures for Low-Leakage SRAMs[END_REF].
Memory banking has been applied for code and data using scratch-pad and cache memories in applications with high performance requirements (e.g. [START_REF] Golubeva | Architectural leakage-aware management of partitioned scratchpad memories[END_REF], [START_REF] Ozturk | ILP-Based energy minimization techniques for banked memories[END_REF]). We follow the methodology employed in [START_REF] Ozturk | ILP-Based energy minimization techniques for banked memories[END_REF], in which a memory access trace is used to solve an optimization problem for allocating the application memory divided in blocks to memory banks. However, to the best of our knowledge, this is the first time SRAM banked memories are considered for event-driven applications code, and the use of such characteristics leads to meaningful power savings, as it will be shown.
The main contribution of this work is to show that, thanks to our new problem formulation, one can find the optimum partitioning of memory banks in the very common event-driven applications. We derive expressions for energy savings in the case of equally sized banks based on a detailed model for different power management strategies. The maximum achievable energy saving is found, and the limiting factors are clearly determined. We show that it is possible to find a near optimum number of banks at design time, irrespective of the application and the access pattern to memory, provided that the memory energy parameters are given, such as energy consumption characteristics and the partition overhead as a function of the number of banks. We show that using our approach in a banked memory leads to aggressive (close to 80%) energy reduction in event-driven applications.
The remainder of this paper is organized as follows. In Section 2, we present a memory energy model, and in Section 3 we derive expressions for the energy savings of a banked memory. The experiments are presented in Section 4 and in Section 5 we discuss the results. Finally Section 6 contains concluding remarks.
Banked memory energy model
In this section we present a memory energy model for deriving expressions for the energy consumption of an equally-sized banked memory with different power management strategies.
Memory energy model
The static power consumed by an SRAM memory depends on its actual state: ready or sleep. During the ready state, read or write cycles can be performed, but not in the sleep state. Since the memory remains in one of these states for a certain number of cycles, the static energy consumed can be expressed in terms of energy per cycle ($E_{rdy}$ and $E_{slp}$) and the number of cycles in each state. Each memory access, performed during the ready state, consumes a certain amount of energy ($E_{acc}$). The ready period during which the memory is accessed is usually called the active period, and the total energy spent corresponds to the sum of the access and the ready energy ($E_{act} = E_{acc} + E_{rdy}$), i.e., the dynamic and static energy. On the other hand, the ready cycles without access are called idle cycles, consuming only static energy ($E_{idl} = E_{rdy}$). Each state transition from sleep to active (i.e., the wake-up transition) has an associated energy cost ($E_{wkp}$) and a latency, considered later. Based on the parameters defined above, the total energy consumption of a memory can be defined as

$$E = E_{act}\, n_{act} + E_{idl}\, n_{idl} + E_{slp}\, n_{slp} + E_{wkp}\, n_{wkp}, \qquad (1)$$

where $n_{act}$, $n_{idl}$ and $n_{slp}$ are the numbers of cycles in which the memory is in the active, idle and sleep state, respectively, and $n_{wkp}$ is the number of times the memory switches from sleep to active state.
The energy values in Eq. ( 1) depend on the size of the memory, and generally energy is considered proportional to it [START_REF] Golubeva | Architectural leakage-aware management of partitioned scratchpad memories[END_REF]. Using the CACTI tool [START_REF] Thoziyoor | A Comprehensive Memory Modeling Tool and Its Application to the Design and Analysis of Future Memory Hierarchies[END_REF], we simulated a pure RAM memory, one read/write port, 65 nm technology and a high performance ITRS transistor type, varying its size from 512 B to 256 KB. CACTI outputs the dynamic and leakage energy, corresponding to the access and idle of our model. The active energy is directly computed (dynamic plus leakage). The access, active, and idle energy were fitted to a linear function as a function of the memory size to determine the energy coefficients. The energy consumed per cycle in the sleep state is a fraction of the idle energy, since we suppose that a technique based on reducing the supply voltage is used to exponentially reduce the leakage [START_REF] Qin | SRAM leakage suppression by minimizing standby supply voltage[END_REF]. We considered a reduction factor of leakage in sleep state of 0.1, which is generally accepted in the literature [START_REF] Rabaey | Low power design essentials[END_REF]. Finally, before a memory bank could be successfully accessed, the memory cells need to go back from the data retention voltage to the ready voltage, which involves the loading of internal capacitances. Since the involved currents in this process are similar to those in an access cycle, the associated wake-up energy cost is proportional to the access energy, ranging the proportionality constant from about 1 [START_REF] Calimera | Design of a Flexible Reactivation Cell for Safe Power-Mode Transition in Power-Gated Circuits[END_REF] to hundreds [START_REF] Loghi | Architectural Leakage Power Minimization of Scratchpad Memories by Application-Driven Sub-Banking[END_REF]. We adopt an intermediate value of 10. Table 1 shows the different coefficients used in the remainder of this work.
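As a quick illustration, the coefficients of Table 1 can be plugged directly into Eq. (1); in the sketch below the cycle and wake-up counts are arbitrary example values, not measurements from the paper.

#include <cstdio>

// Per-cycle/per-access energy coefficients from Table 1.
const double E_act = 1.78e-6;
const double E_idl = 3.28e-7;
const double E_slp = 3.28e-8;   // 0.1 x the idle energy (low-leakage retention)
const double E_wkp = 7.95e-6;   // energy per sleep-to-active transition

// Total energy of Eq. (1) for given cycle and wake-up counts.
double totalEnergy(long n_act, long n_idl, long n_slp, long n_wkp) {
    return E_act * n_act + E_idl * n_idl + E_slp * n_slp + E_wkp * n_wkp;
}

int main() {
    // Example: 1e6 cycles with 1% active, 4% idle, 95% sleep, 100 wake-ups.
    double E = totalEnergy(10000, 40000, 950000, 100);
    std::printf("E = %.3e\n", E);
    return 0;
}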
Finally, partitioning a memory into N equally sized banks reduces the per-bank energy by a factor of N,

$$E_k^N = \frac{E_k}{N}, \qquad (2)$$

for $k \in \{act, idl, slp, wkp\}$, where $E_k$ is the corresponding energy consumption per cycle of the whole memory.
Energy savings
In this section we derive expressions for the energy savings of a memory of equally sized banks for two different management schemes. The first general expression corresponds to any power management by means of which a bank may remain in idle state even if it is not accessed. The decision algorithm may range from a simple fixed time-out policy to dynamic and sophisticated prediction algorithms.
The second expression corresponds to the simplest policy, greedy, in which a bank is put into the sleep state as soon as it is not being accessed, and is determined as a special case of the former.
The total energy consumption per cycle of the whole banked memory after n cycles have elapsed is
$$\bar{E}_N = E_{act} \sum_{i=1}^{N} \frac{n_{act_i}}{n} + E_{idl} \sum_{i=1}^{N} \frac{n_{idl_i}}{n} + E_{slp} \sum_{i=1}^{N} \frac{n_{slp_i}}{n} + E_{wkp} \sum_{i=1}^{N} \frac{n_{wkp_i}}{n}. \qquad (3)$$
Since the total number of cycles is $n = n_{act_i} + n_{idl_i} + n_{slp_i}$ for all banks, and there is only one bank active per cycle,
$$\sum_{i=1}^{N} n_{act_i} = n, \qquad (4)$$
then we obtain
$$\bar{E}_N = E_{act} + (N-1)\, E_{slp} + (E_{idl} - E_{slp}) \sum_{i=1}^{N} \frac{n_{idl_i}}{n} + E_{wkp} \sum_{i=1}^{N} \frac{n_{wkp_i}}{n}. \qquad (5)$$
The first two terms of the sum are the consumption of having only one bank in active state and the remaining N -1 banks in sleep state. The third term, related to the idle energy, depends on the fraction of idle cycles performed by each bank i. The last term of the sum represents the wake-up energy as a function of the average wake-up rate of each memory bank, that is the average number of cycles elapsed between two consecutive bank transitions from sleep to active (for example, one transition in 1000 cycles).
We define the energy saving of a banked memory as the relative deviation from the energy consumption of an equivalent single-bank memory ($E_1 = N E_{act}$, always active),
$$\delta E = \frac{N E_{act} - \bar{E}_N}{N E_{act}}. \qquad (6)$$
The energy saving of a banked memory of N uniform banks is
$$\delta E_N = \frac{N-1}{N}\left(1 - \frac{E_{slp}}{E_{act}}\right) - \frac{1}{N}\,\frac{E_{idl}-E_{slp}}{E_{act}} \sum_{i=1}^{N} \frac{n_{idl_i}}{n} - \frac{1}{N}\,\frac{E_{wkp}}{E_{act}} \sum_{i=1}^{N} \frac{n_{wkp_i}}{n}. \qquad (7)$$
If a greedy power management is considered, a memory bank is put into the sleep state as soon as it is not being accessed; hence there are no idle cycles and Eq. (7) simplifies to
$$\delta E_N^{grdy} = \frac{N-1}{N}\left(1 - \frac{E_{slp}}{E_{act}}\right) - \frac{1}{N}\,\frac{E_{wkp}}{E_{act}} \sum_{i=1}^{N} \frac{n_{wkp_i}}{n}. \qquad (8)$$
In this case, the application blocks allocation to memory banks must minimize the accumulated wake-up rate in order to maximize the energy saving. Note that the energy saving does not depend on the access profile among the banks, since the access to every bank costs the same as all banks have the same size.
Still, the allocation of blocks to banks must consider the bank size constraints. Finally, the energy saving can be improved by increasing N while at the same time keeping the accumulated wake-up rate low. The maximum achievable saving is determined by the sleep-to-active energy ratio and is equivalent to having the whole memory in the sleep state. Even so, the partition overhead limits the maximum number of banks. Compared to Eq. (8), the general expression Eq. (7) has an additional term, which is related to the energy increase caused by the idle cycles. This does not mean that the energy saving is reduced, since the accumulated wake-up ratio may decrease.
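For a given allocation, Eq. (8) can be evaluated directly from the sequence of banks accessed along the trace; the short sketch below does this for an invented toy trace (the trace values and bank count are purely illustrative).

#include <vector>
#include <cstdio>

// Energy saving of Eq. (8) under greedy power management, computed from the
// sequence of banks accessed at each cycle (one access per cycle).
double greedySaving(const std::vector<int>& bankTrace, int N,
                    double E_act, double E_slp, double E_wkp) {
    long wakeups = 0;
    for (size_t t = 1; t < bankTrace.size(); ++t)
        if (bankTrace[t] != bankTrace[t - 1]) ++wakeups;   // bank switch = wake-up
    double n = static_cast<double>(bankTrace.size());
    return (N - 1.0) / N * (1.0 - E_slp / E_act)
         - (1.0 / N) * (E_wkp / E_act) * (wakeups / n);
}

int main() {
    // Invented access trace with high temporal locality over 3 banks.
    std::vector<int> trace = {0,0,0,0,1,1,1,1,1,2,2,2,0,0,0,0};
    double s = greedySaving(trace, 3, 1.78e-6, 3.28e-8, 7.95e-6);
    std::printf("greedy saving = %.1f%%\n", 100.0 * s);
    return 0;
}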
Effective energy saving
As mentioned previously, the wake-up transition from sleep to active state of a memory bank has an associated latency. This latency forces the microprocessor to stall until the bank is ready. The microprocessor may remain idle for a few cycles each time a new bank is woken up, increasing the energy drain. This extra microprocessor energy can be included with the bank wake-up energy and for simplicity we will not consider it explicitly. Moreover, if the wake-up rate is small and the active power of the microprocessor is much higher than its idle power, this overhead can be neglected. Additionally, the extra time due to the wake-up transition is not an issue in low duty-cycle applications, since it simply increases the duty cycle slightly.
On the other hand, the partitioning overhead must be considered to determine the effective energy saving. Previous work characterized the partitioning overhead as a function of the number of banks for a partitioned memory of arbitrary bank sizes [START_REF] Loghi | Architectural Leakage Power Minimization of Scratchpad Memories by Application-Driven Sub-Banking[END_REF]. In that case the hardware overhead is due to an additional decoder (to translate addresses and control signals into the multiple control and address signals) and the wiring to connect the decoder to the banks. As the number of memory banks increases, the complexity of the decoder is roughly constant, but the wiring overhead increases [START_REF] Loghi | Architectural Leakage Power Minimization of Scratchpad Memories by Application-Driven Sub-Banking[END_REF]. The partition overhead is proportional to the active energy of an equivalent monolithic memory and roughly linear in the number of banks, as can be clearly seen by inspecting the data of the aforementioned work (3.5%, 5.6%, 7.3% and 9% for 2-, 3-, 4-, and 5-bank partitions, resulting in an overhead factor of approximately 1.8% per bank). Consequently, the relative overhead energy can be modeled as:
$$\delta E_N^{ovhd} = k_{ovhd}\, N. \qquad (9)$$
In this work, the memory is partitioned into equally-sized banks. As a result, the overhead is expected to decrease, leading to a lower value of the overhead factor.
Energy savings limits
The energy savings in the limit, as the wake-up and idle contributions tend to zero, is
$$\delta E_N^{max} = \frac{N-1}{N}\left(1 - \frac{E_{slp}}{E_{act}}\right). \qquad (10)$$
If the partition overhead is considered (Eq. 9), the maximum effective energy saving is
$$\delta E_{N,eff}^{max} = \frac{N-1}{N}\left(1 - \frac{E_{slp}}{E_{act}}\right) - k_{ovhd}\, N. \qquad (11)$$
$\delta E_{N,eff}^{max}$ is maximized for

$$N_{opt} = \sqrt{\frac{1}{k_{ovhd}}\left(1 - \frac{E_{slp}}{E_{act}}\right)}. \qquad (12)$$
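The closed-form expressions (10)-(12) can be checked numerically; the sketch below recomputes N_opt and the corresponding saving limit for the overhead factors considered in Table 2, using the 97.1% saving limit quoted in Section 5 (the exact printed values are close to, but may differ slightly from, the rounded values in the table).

#include <cmath>
#include <cstdio>

int main() {
    // Saving limit 1 - E_slp/E_act; the paper quotes 97.1% for its memory model.
    const double A = 0.971;
    const double overheads[] = {0.01, 0.02, 0.03, 0.05};   // k_ovhd values of Table 2

    for (double k : overheads) {
        double n_opt  = std::sqrt(A / k);                        // Eq. (12)
        double saving = (n_opt - 1.0) / n_opt * A - k * n_opt;   // Eq. (11) at N_opt
        std::printf("k_ovhd = %.0f%%  N_opt = %4.1f  dE_max = %.1f%%\n",
                    100.0 * k, n_opt, 100.0 * saving);
    }
    return 0;
}
// Prints N_opt of roughly 9.9, 7.0, 5.7 and 4.4, and savings of roughly
// 77.4%, 69.2%, 63.0% and 53.0%, consistent with Table 2 after rounding.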
Experiments
In this section we present experiments comparing the predicted energy savings by our model to the energy savings obtained by solving an integer linear program (ILP).
The criteria for selecting the case study application were: public availability of the source files and a realistic, ready-to-use application. We chose a wireless sensor network application (data collection) from the standard distribution of TinyOS (version 2.1.0, www.tinyos.net). The application, MultihopOscilloscope, was compiled for nodes based on an MSP430 microcontroller. Each node of the network periodically samples a sensor and the readings are transmitted to a sink node using a network collection protocol.
We simulated a network composed of 25 nodes using COOJA [START_REF] Eriksson | COOJA/MSPSim: interoperability testing for wireless sensor networks[END_REF] to obtain a memory access trace of one million cycles or time steps.
For the sake of simplicity, the block set was selected as the blocks defined by the program functions and the compiler-generated global symbols (user and library functions, plus those created by the compiler). The size of the blocks ranges from tens to hundreds of bytes, in accordance with the general guideline of writing short functions, considering the run-to-completion characteristic of TinyOS and of any non-preemptive event-driven software architecture. The segment sizes of the application are 3205 bytes of text, and 122 and 3534 bytes of zero-valued and initialized data, respectively. The number of global symbols is 261.
The problem of allocating the code to equally sized banks was solved using an ILP solver for up to eight banks, for the greedy power management, using a trace segment of 5000 cycles. The total memory size was considered 10% larger than the application size, to ensure the feasibility of the solution. The average energy consumption is calculated over the whole trace using the memory energy model and the block-to-bank allocation map. The energy saving is determined by comparison with a single-bank memory with no power management.
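The ILP itself is not spelled out here; one plausible formulation of the block-to-bank allocation under the greedy policy, consistent with the model of Section 3, is sketched below. Binary variables $x_{b,j}$ assign block $b$ to bank $j$, $s_b$ are block sizes, $S$ the bank capacity, $b(t)$ the block accessed at cycle $t$, and the objective counts bank wake-ups along the access trace; this formulation is an assumption, not the authors' exact model.

$$
\begin{aligned}
\min \;& \sum_{t=2}^{n}\sum_{j=1}^{N} w_{t,j} \\
\text{s.t. } & \textstyle\sum_{j} x_{b,j} = 1 \quad \forall b, \qquad
               \textstyle\sum_{b} s_b\, x_{b,j} \le S \quad \forall j, \\
             & w_{t,j} \ge x_{b(t),j} - x_{b(t-1),j} \quad \forall t,j, \qquad
               x_{b,j} \in \{0,1\},\; w_{t,j} \ge 0.
\end{aligned}
$$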
Results and Discussion
The optimum number of banks estimated using Eq. (12) (after rounding) as a function of k_ovhd (1%, 2%, 3% and 5%) is shown in Table 2. The energy saving is limited by the partition overhead, reaching a value of 77.4% for an overhead of 1%. The energy saving limit, as the partition overhead tends to zero and N to infinity, is 97.1% (the corresponding value of 1 - E_slp/E_act). Table 3 compares the energy saving results as a function of the number of banks and the partition overhead. It can be observed that the maximum energy saving for the greedy strategy with 2%, 3% and 5% of partition overhead is achieved for seven, six and five banks, respectively (these maxima appear in Table 3). The optimum number of banks for an overhead of 5% differs from what arises in the previous limit case (see Table 2). This means that the saving loss due to wake-up transitions shifts the optimum number of banks. For a given partition overhead, the energy saving for the estimated optimum number of banks is within about 4% of the saving limit (comparing saving values in Table 2 to the corresponding values in Table 3, for four, six and seven banks and partition overheads of 5%, 3% and 2%, respectively). This difference comes from the wake-up transition losses. In this case study, due to its event-driven nature, the code memory access patterns are caused by external events. Each event triggers a chain of function calls starting with the interrupt subroutine. This chain may include the execution of subsequent function calls starting with a queued handler function called by a basic scheduler. The allocation of highly correlated functions to the same bank leads to a bank access pattern with a high temporal locality. Hence, the total wake-up fraction across the banks is very low. This explains the modest difference from the limit value.
Conclusions
We have found that aggressive energy savings can be obtained using a banked memory, up to 77.4% (estimated) for a partition overhead of 1% with a memory of ten banks, and 65.9% (simulated) for a partition overhead of 2% with a memory of seven banks. The energy saving is maximized properly allocating the program memory to the banks in order to minimize the accumulated wake-up rate. The saving increases as a function of the number of banks, and is limited by the partition overhead. The derived model gives valuable insight into the particular factors (coming from the application and the technology) critical for reaching the maximum achievable energy saving. Moreover, at design time the optimum number of banks can be estimated, considering just energy memory parameters. The resulting energy saving for a given number of banks is close to the derived limit, since event-driven applications present access patterns to banks with a high temporal locality.
Table 1. Memory energy consumption coefficients.
  E_act = 1.78 × 10⁻⁶    E_idl = 3.28 × 10⁻⁷    E_slp = 3.28 × 10⁻⁸    E_wkp = 7.95 × 10⁻⁶
Table 2. Optimum number of banks as a function of partition overhead.
  k_ovhd (%)            0      1      2      3      5
  N_opt                 ∞      10     7      6      4
  δE^max_N,eff (%)      97.1   77.4   69.2   62.9   52.8
Table 3. Energy saving (%) for greedy power management.
                          number of banks
  k_ovhd (%)     2      3      4      5      6      7      8
      1         43.7   58.1   64.8   68.9   71.4   72.9   73.7
      2         41.7   55.1   60.8   63.9   65.4   65.9   65.7
      3         39.7   52.1   56.8   58.9   59.4   58.9   57.7
      5         35.7   46.1   48.8   48.9   47.4   44.9   41.7
| 19,564 | [
"759154"
] | [
"471945",
"302610",
"471945",
"302610"
] |
01456768 | en | [
"info"
] | 2024/03/04 23:41:44 | 2016 | https://theses.hal.science/tel-01456768v2/file/TH2016NADERGEORGES.pdf | Keywords: 2AFC Two Alternative Forced Choice BPC Bits Per Coordinates CSF Contrast Sensitivity Function DAME Dihedral Angle Mesh Error FMPD
In the following few lines, I would like to thank everyone who contributed directly or indirectly to this thesis, hoping not to forget anyone.
First of all, I would like to thank my thesis supervisors, Prof. Florent Dupont, Dr. Franck Hétroy-Wheeler and Dr. Kai Wang, for trusting me and for creating a perfect working environment during the three years of this thesis.
I would also like to thank Prof. Patrick Le Callet and Prof. Tamy Boubekeur for agreeing to review my manuscript, as well as the members of the jury for accepting this role. I truly appreciated their remarks, their questions, and the discussions we had on the day of the defense. This allowed me to take a step back from my work and to further broaden my professional horizons.
I would also like to thank everyone who took part in the numerous measurements and user experiments conducted during this thesis. Their patience in the experiment room and the time they gave me were essential for obtaining reliable and accurate results.
Since my thesis was split between Lyon and Grenoble, I thank the members of the AGPIG team at Gipsa-lab in Grenoble as well as those of the M2Disco team at LIRIS in Lyon for the friendly atmosphere, which is important for easing the stress of a thesis. A special thank you to my office mates Anton Andreev and Bahram Ehsandoust for putting up with me during these three years. In addition, I thank the administrative staff of both laboratories (LIRIS and Gipsa-lab) for handling the numerous trips between Grenoble and Lyon.
Of course, I must not forget to thank all my friends in Grenoble and elsewhere for the good times we spent together, whether around a meal or during the many hikes in the mountains.
A very, very special thank you to Talia for all the beautiful memories.
Finally, nothing can express my gratitude towards my parents. I thank them for the trust they placed in me and for their endless support. I am certain that this work would not exist without them.
Résumé: Computing the visibility threshold of a local geometric distortion on a mesh and its applications. Geometric operations applied to 3D meshes introduce geometric distortions that may be visible to a human observer. In this thesis, we study the perceptual impact of these distortions. More precisely, our goal is to compute the threshold beyond which local geometric distortions become visible. To reach this goal, we first define perceptual features for 3D meshes. We then carried out an experimental study of the properties of the human visual system (contrast sensitivity and visual masking) while observing a 3D mesh. The results of these experiments are finally used to propose an algorithm that computes the visibility threshold relative to a local distortion. The proposed algorithm adapts to the different display conditions (screen resolution and size), to the illumination conditions and to the type of rendering. Finally, we show the usefulness of such an algorithm by integrating the visibility threshold into the pipeline of several geometric operations (e.g., simplification, adaptive subdivision).
Context and Motivation
Three-Dimensional (3D) objects, most commonly represented by triangular meshes, are nowadays more and more used in a large number of applications spanning different fields such as digital entertainment, cultural heritage, scientific visualization, and medical imaging. Moreover, the popularity of 3D objects is bound to drastically increase in the near future with the release of affordable and commercial virtual reality headsets and the current evolution of the web which revolves around the development of web3D technologies. 3D objects are usually created by artists via 3D modeling and sculpting tools or, more recently, obtained by scanning a real-world object. In both cases, the raw 3D data cannot be directly used in a practical application. It is therefore common for 3D data to undergo various lossy operations in order to accommodate the needs of these applications. For example, in a video game, the detailed 3D mesh created by the artist is simplified so that a real-time and interactive visualization of the 3D world becomes possible. On the other hand, in a 3D web application, the 3D data usually need to be compressed in order to limit the bandwidth usage and reduce the data transfer time. Finally, a watermarking operation might be applied to the 3D mesh in order to limit any illegal or unauthorized duplication of the 3D object in question. These operations introduce geometric distortions in the form of perturbations of vertex coordinates which might be visible to a human observer (Fig. 1.1). This is a key issue for human-centered applications, as the visibility of these geometric distortions can directly impact the quality of experience of the user. It is therefore important to be able to predict or control the visibility of such geometric distortions.
Although the importance of predicting the visibility of geometric distortions has been recognized within the computer graphics community [OHM + 04, Fer08, FS09, MMBH10, LC10, TFCRS11, CLL + 13], most existing geometry processing algorithms are driven and/or evaluated by geometric metrics like the Hausdorff distance [START_REF] Aspert | MESH: Measuring error between surfaces using the Hausdorff distance[END_REF] or the root mean square error (RMS) [START_REF] Cignoni | Metro: Measuring error on simplified surfaces[END_REF], which do not correlate with human perception [CLL + 13]. Figure 1.1 shows two distorted versions of the Rabbit model with the same RMS value that are perceptually different. Recently, a number of perceptually driven algorithms have been proposed [FSPG97, RPG99, WLC + 03, CB05, CDGEB07, QM08, Lav09, MKRH11, TWC14, DFLS14, DFL + 15]. The goal of these methods is to evaluate the perceptual impact of geometric distortions or to guide geometric operations. However, existing methods are usually based on assumptions about the general behavior of the Human Visual System (HVS) instead of taking advantage of the characteristics of its internal mechanism. Moreover, in most cases, the perceptual analysis of existing methods is carried out using geometric features such as surface curvature and surface roughness which are not necessarily perceptually relevant attributes. Consequently, these methods are in general neither easily applicable to models of different properties (size, details and density) nor capable of adapting to varying circumstances.
Early studies of the physiology of the Human Visual System (HVS) have shown that human vision is primarily sensitive to variation in light energy, i.e., contrast, rather than its absolute magnitude [START_REF] Wandell | Foundations of Vision[END_REF]. In other words, a visual pattern is only visible if its contrast value is above a certain threshold. Therefore, in the case of geometric distortions, a vertex displacement is visible to a human observer if it causes a change in contrast that is large enough for the HVS to detect its presence. In the field of image processing, the analysis of contrast information has been the basis of many studies related to the visibility of pixel-based distortions [START_REF] Wang | Modern image quality assessment[END_REF][START_REF] Lin | Perceptual visual quality metrics: A survey[END_REF][START_REF] Beghdadi | A survey of perceptual image processing methods[END_REF]. In this thesis, we focus on using perceptual properties, such as contrast, to evaluate whether a geometric distortion is visible or not, through an experimental study of the characteristics of the HVS, in particular Contrast Sensitivity and Visual Masking.
Objectives and Methodology
The goal of this thesis is to compute the threshold beyond which a local geometric distortion becomes visible. The visibility threshold should take into account the various parameters that affect the visibility of geometric distortions such as the display specification (size and resolution), the rendering algorithm, the scene illumination, etc. The computed threshold can then be used to either predict the visibility of local geometric distortions or guide geometric operations. In order to achieve this goal we will:
1. Define perceptually relevant features for 3D meshes that are sensitive to the different parameters affecting the perception of 3D objects.
2. Perform a series of psychophysical experiments in order to study the properties of the HVS while observing a 3D mesh.
3. Use the results of these experiments to derive an algorithm that is able to compute the visibility threshold relative to local geometric distortions.
Thesis Organization
The work presented in this thesis focuses on computing the visibility threshold of local distortions on the surface of 3D meshes. In Chapter 2 we present the background on the properties of the HVS that are relevant to the research presented in this thesis. In Chapter 3 we discuss existing work on perceptually driven graphics techniques. Chapter 4 explains how perceptual properties are evaluated on a 3D mesh and presents a series of psychophysical experiments that were carried out in order to measure the visibility threshold and their results. Chapter 5 describes our method to evaluate the threshold beyond which a vertex displacement becomes visible. Finally, in Chapter 6 we showcase how our perceptual method can be integrated in geometric applications such as mesh simplification and adaptive subdivision.
Chapter 2
Background on the Human Visual System
Over the last two decades, perceptually driven methods have drawn more and more attention in both the computer graphics [OHM + 04, Fer08, CLL + 13] and image/video processing [WB06, LK11, BLBI13] communities. Much of the existing work is based on the visual mechanisms of the human vision. It is therefore important to understand the basic properties of the Human Visual System (HVS) relevant to these methods. This chapter is divided into two parts. The first part gives a brief overview about the process of vision which consists of analyzing information derived from incident light (Section 2.1.1). In particular, we focus on the physiology of the retinal receptive fields which gives us an understanding about the type of visual information that reaches the brain (Section 2.1.2). The second part focuses on the major characteristics of the HVS that are of relevance to our perceptual study (Section 2.2). More precisely, we discuss the contrast sensitivity and the visual masking characteristics of the HVS. For a more detailed discussion about human vision, we refer the reader to [START_REF] Wandell | Foundations of Vision[END_REF][START_REF] Stephen | Vision Science: Photons to Phenomenology[END_REF].
The Process of Vision
Overview
Vision is the process of extracting and analyzing the information that is contained in the incident light. This process (Fig. 2.1) starts in the eyes as the outside light enters through the pupil and gets focused on the retina, a light-sensitive tissue, with the help of the lens. The retina is composed of two types of photoreceptor cells: the rods and the cones. The rods are extremely sensitive to light as they can be excited by as little as one photon and provide the visual system with the necessary information for achromatic vision at low illumination levels. The cones, on the other hand, are less sensitive to light but allow the HVS to have color vision. This is due to the existence of three types of cones which are sensitive to distinct (yet overlapping) intervals of light frequencies. The stimulation of these photoreceptor cells by the incident light creates an electrical signal which then reaches the receptive fields of the ganglion cells. These cells play a crucial role in visual perception as their goal is to encode the visual signal so that it can be processed more efficiently. The visual information then travels through the visual pathways which lead to the lateral geniculate nucleus (LGN) and finally on to the visual cortex which is responsible for all the higher-level aspects of vision. While the details of the properties of the LGN and visual cortex are not within the scope of this thesis, it is interesting to know that one of the roles of the LGN is to control the amount of information that is allowed to pass to the visual cortex. In this work, we are mainly interested in the role of the receptive fields of the ganglion cells which provides us with the basic understanding of the characteristics of the HVS that are relevant to our perceptual study.
The Receptive Fields
The range of light intensity that we experience is huge. For example, the intensity of light coming from the sun is approximately 10 million times bigger than the intensity of moonlight. The first challenge the HVS faces is to be able to cope with this wide range of light intensity. More precisely, the challenge is to represent the visual information in an effective and meaningful way. This problem is solved in the early stages of the visual system at the receptive fields of the ganglion cells. Their role is to pre-process the visual information before passing it on to the brain. Early studies of the physiology of the HVS [CK66, [START_REF] Campbell | Application of Fourier analysis to the visibility of gratings[END_REF][START_REF] Blakemore | On the existence of neurones in the human visual system selectively sensitive to the orientation and size of retinal images[END_REF] have shown that the receptive fields have a center/surround organization (Fig. 2.2). In other words, the light reaching the center of a ganglion cell's receptive field can The center-surround organization makes the human visual system sensitive to patterns of light rather than its absolute value. The neural response is at its peak when the visual signal lines up with the size of the receptive field.
either excite or inhibit the cell while the light in the surrounding region will have an opposite effect. This means that for a uniform visual stimulus, the inhibitory and excitatory signals will neutralize each other resulting in a weak neural response. On the other hand, if the visual stimulus consists of a pattern of dark and bright light, then the receptive fields will produce a strong neural response. The center/surround organization of the retinal receptive fields implies that information about the absolute value of light intensity is less important to the visual system than contrast information, i.e., variation in light energy information. This sensitivity to patterns of light rather than to its absolute value is at the heart of the properties of the HVS that most perceptual methods rely on. In the following section we will detail these characteristics of the HVS.
Characteristics of the Human Visual System
As we mentioned earlier, the center/surround organization of the receptive fields of the ganglion cells makes the HVS primarily sensitive to variation of light energy rather than to its absolute value. This difference of light energy in a visual pattern is generally represented by its contrast value. Ultimately, a high-contrast visual pattern should generate a strong neural response and a low-contrast visual pattern should generate a weak neural response. So studying the characteristics of the early stages of the HVS boils down to studying the perception of the visual pattern's contrast.
Contrast Sensitivity
The contrast threshold is the contrast value beyond which a visual stimulus becomes visible to a human. It represents the minimal amount of contrast required to generate an excitatory neural response. The value of that threshold mainly depends on three factors: spatial frequency, global luminance and retinal velocity.
Spatial Frequency. The spatial frequency is related to the size of the visual stimulus with respect to the size of one degree of the visual angle. It is expressed in terms of cycles per degree (cpd) which represents the number of times a visual stimulus can be repeated within one degree of the visual angle. The effect of the spatial frequency on the visibility threshold can be derived from the size of the receptor fields. While a pattern of dark and bright light will excite the ganglion cells, the neural response will be at its strongest if the size of the visual pattern lines up with the size of the center and surround region of the receptive field (Fig. 2.2). This means that the human visual system will be more sensitive to the spatial frequencies that correspond to the size of the receptive fields, generally between 2 and 5 cpd for humans, and less sensitive to the other spatial frequencies.
Global Luminance. The average energy of the light, i.e., luminance, illuminating the observed scene also affects the contrast visibility threshold. In dark environments, the low-energy light reaching the retina will trigger the rod photoreceptors, as the energy is not sufficient to stimulate the cones. The difference in the source of the visual signal between low and bright lights, i.e., rods in low light and cones in bright light, causes the change in contrast visibility threshold when the light energy changes. In summary, physiological experiments show that at low luminance levels, the contrast threshold increases when the average luminance decreases, while it becomes relatively stable for luminance levels above 100 cd/m² [START_REF] Peter | The Square Root Integral (SQRI): A new metric to describe the effect of various display parameters on perceived image quality[END_REF].
Retinal Velocity. When the visual stimulus is in motion, its image on the retina will also move. The retinal velocity is therefore defined as the velocity of the retinal image of an object. This velocity is affected by the movement of the object and the eyes whose job is to track the moving stimulus in an attempt to stabilize the retinal image. The experiments of Kelly [START_REF] Kelly | Motion and vision. i. stabilized images of stationary gratings[END_REF][START_REF] Kelly | Motion and vision. II. stabilized spatio-temporal threshold surface[END_REF] have shown that the contrast visibility threshold is altered when the retinal velocity increases. For example, the range of sensitive spatial frequencies varies in general from between 2 and 5 cpd for a stationary stimulus to around 0.2 and 0.8 cpd for a stimulus whose image is moving at a 11 deg/s speed on the retina.
The reciprocal of the contrast visibility threshold is the contrast sensitivity. The Contrast Sensitivity Function (CSF) is a mathematical model that describes the evolution of the visibility threshold with respect to the aforementioned three parameters (spatial frequency, global luminance and retinal velocity). It was first introduced by Campbell and Robson [START_REF] Campbell | Orientational selectivity of the human visual system[END_REF], whose CSF model takes into consideration only the effects of spatial frequency, and was later extended to consider global luminance levels [START_REF] Peter | The Square Root Integral (SQRI): A new metric to describe the effect of various display parameters on perceived image quality[END_REF][START_REF] Peter Gj Barten | Contrast Sensivity of the Human Eye and its Effects on Image Quality[END_REF]. The effects of object motion on the contrast sensitivity are taken into account in the model proposed by Kelly [START_REF] Kelly | Motion and vision. II. stabilized spatio-temporal threshold surface[END_REF]. The CSF represents the visual system's band-pass filter characteristics when it comes to contrast sensitivity (Fig. 2.3). In general, it exhibits a peak between 2 and 5 cpd when the visual stimulus is stationary. The shape of the CSF (peak location and drop-off slope) depends on the nature of the visual stimulus. For example, at high spatial frequencies the HVS is more sensitive to aperiodic visual patterns than to periodic ones [START_REF] Blakemore | Adaptation to spatial stimuli[END_REF].
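For illustration only, the sketch below codes a simple visibility test of the kind described above, using the Michelson definition of contrast and a classical normalized CSF approximation (Mannos and Sakrison, 1974); this is not the Barten/Kelly model used in the thesis, it ignores luminance and retinal velocity, and the assumed peak sensitivity value is an arbitrary order-of-magnitude choice.

#include <cmath>
#include <cstdio>

// Normalized band-pass CSF shape (Mannos & Sakrison, 1974); peak (~1.0) near 8 cpd.
double csfShape(double f_cpd) {
    double x = 0.114 * f_cpd;
    return 2.6 * (0.0192 + x) * std::exp(-std::pow(x, 1.1));
}

// Michelson contrast, one common contrast definition for periodic patterns.
double michelsonContrast(double L_max, double L_min) {
    return (L_max - L_min) / (L_max + L_min);
}

int main() {
    const double PEAK_SENSITIVITY = 250.0;               // assumed absolute peak sensitivity
    double c = michelsonContrast(110.0, 100.0);          // example luminances (cd/m^2)
    double f = 4.0;                                      // spatial frequency (cpd)
    double threshold = 1.0 / (PEAK_SENSITIVITY * csfShape(f));   // threshold = 1 / sensitivity
    std::printf("contrast %.4f vs threshold %.4f, visible: %d\n",
                c, threshold, c > threshold);
    return 0;
}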
Visual Masking
Visual masking is a very important characteristic of human vision as it describes how the HVS handles interactions between different visual stimuli. Masking occurs when a stimulus (target) that is visible on its own cannot be detected due to the presence of another visible stimulus (mask). Figure 2.4 illustrates this effect. A visual signal, the target, that is visible on its own might become hard to notice when it is added to a visual pattern containing a visible visual stimulus, the mask. The effects of visual masking are caused by several factors. In particular, the visibility of the target visual stimulus is dependent on the contrast and the visual complexity of the mask.
Legge and Foley [START_REF] Gordon | Contrast masking in human vision[END_REF] studied the contrast threshold necessary to detect the target when varying the contrast and frequency of the mask. One important observation from their experimental study is that the visibility threshold increases almost linearly with the contrast of the mask. The effects of masking can be mathematically described by a curve (Fig. 2.5) which possesses two asymptotic regions: the first one with a slope of zero and the second one with a positive slope of about 0.6 to 1 (depending on the stimulus) [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF]. The zero slope occurs for mask contrast values below the mask's visibility threshold as given by the CSF, indicating that there is no masking effect. When the mask is visible (its contrast above the value given by the CSF), the threshold for detecting the target lies on the second asymptotic region.
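A common way to turn these observations into numbers is a threshold-elevation model in which the masked threshold stays at the unmasked CSF threshold until the mask itself becomes visible, and then grows as a power of the mask contrast. The sketch below is such a generic model; the exponent, the unmasked threshold and the example mask contrasts are arbitrary values chosen within the ranges quoted above and do not come from the thesis.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Generic contrast-masking model: below the mask's own visibility threshold
// there is no masking (slope 0); above it, the detection threshold of the
// target rises as (c_mask / c_thresh)^epsilon with epsilon in [0.6, 1].
double maskedThreshold(double unmaskedThreshold, double maskContrast,
                       double epsilon = 0.7) {
    double elevation =
        std::max(1.0, std::pow(maskContrast / unmaskedThreshold, epsilon));
    return unmaskedThreshold * elevation;
}

int main() {
    double c_t = 0.005;   // assumed unmasked threshold given by the CSF
    for (double c_m : {0.001, 0.005, 0.05, 0.5})
        std::printf("mask %.3f, target threshold %.4f\n",
                    c_m, maskedThreshold(c_t, c_m));
    return 0;
}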
The experiments of Legge and Foley focused on the visual masking effect of simple sinusoidal patterns. This means that their results do not account for the impact of the mask's visual complexity on the visibility threshold. Visual complexity is an equally important factor, as a complex mask, i.e., one that is visually irregular, introduces uncertainty into the observer's judgement and thus increases the masking effect [WBT97]. For instance, Fig. 2.6 shows a visual stimulus that is visible on its own or when added to a relatively simple mask. However, when added to an irregular mask, the visual stimulus becomes harder to notice, which indicates a change in the visibility threshold. The influence of the mask's visual complexity on the visibility threshold can be explained by the free-energy principle theory [START_REF] Friston | A free energy principle for the brain[END_REF][START_REF] Friston | The free-energy principle: a unified brain theory?[END_REF]. By analyzing the incoming visual information, the HVS helps us understand the outside world. However, due to the sheer amount of input, the HVS cannot fully process all of the visual information [START_REF] David | The bayesian brain: the role of uncertainty in neural coding and computation[END_REF]. Instead, the HVS tries to predict the visual information with an internal generative mechanism [START_REF] David | The bayesian brain: the role of uncertainty in neural coding and computation[END_REF][START_REF] Karl J Friston | Reinforcement learning or active inference?[END_REF]. The underlying idea behind the free-energy principle theory is that all adaptive biological agents have a tendency to resist disorder [START_REF] Friston | The free-energy principle: a unified brain theory?[END_REF]. In other words, the HVS tries to extract as much information as possible from the incoming visual signal in order to minimize uncertainty and avoid surprises (i.e., information with high uncertainty, which is often found in visually complex stimuli). This means that visual patterns with obvious statistical regularities are easier to predict and understand than those without any regularity (i.e., complex patterns). As a result, a change in information (i.e., contrast) in a regular visual pattern can be easily detected, while it would be difficult to detect in an irregular, complex one [WBT97].
Summary
To summarize, the human visual system is sensitive to variation of light intensity (i.e., contrast) rather than its absolute value. This is primarily due to the center/surround organization of the receptive fields of the ganglion cells as discussed in Section 2.1.2 (Fig. 2.2). As a consequence, the contrast value of a visual stimulus is at the heart of the early properties of the HVS, in particular contrast sensitivity and visual masking. Contrast sensitivity refers to the threshold beyond which a contrast becomes visible for a human observer. This threshold is affected by the spatial frequency, global luminance and retinal velocity of the visual stimulus and can be mathematically modelled by the contrast sensitivity function.
Visual masking, on the other hand, explains how the HVS handles interactions between different visual stimuli. In other words, visual masking describes the change in visibility threshold caused by the presence of another visible visual stimulus (the mask). The amount by which the visibility threshold changes is proportional to the contrast of the mask and is affected by its visual regularity. Both contrast sensitivity and especially visual masking are at the center of many perceptual methods in computer graphics. The next chapter presents a detailed description of these methods.
Chapter 3
Perceptual Methods for Computer Graphics Applications
In the previous chapter, we have presented the theoretical and fundamental background about the early properties of the human visual system. This chapter focuses on the existing perceptual methods that have been proposed in the past two decades. We start by discussing the benefits of taking into account the perceptual characteristics of human vision in the design of computer graphics systems (Section 3.1). We then review in detail the most important methods in the field of perceptually adaptive graphics. Based on the general approach taken to perform the perceptual analysis, we group these methods into two categories: Image-Based (Section 3.2) and Model-Based (Section 3.3). Finally we compare these two approaches and discuss the limitations of current methods (Section 3.4).
The Role of Perception in Computer Graphics
One of the goals of computer graphics is to generate a 2D image for a human observer from the description of a 3D scene. In most cases, the scene consists of the following elements: (1) a geometric representation of the surface of an object, most commonly a triangular mesh, (2) the material properties attributed to that surface and (3) the illumination information. In general, a computer graphics pipeline starts with an acquisition step where the scene's data are created. The 3D geometric data are then usually subject to various processing algorithms (compression, level-of-detail generation, . . . ) in order to accommodate the needs of the target application. Finally, the scene's elements are passed on to a rendering algorithm which computes a 2D image. Ideally, we would like this pipeline to generate a "perfect" image, i.e., one without any distortion, which is in practice highly unlikely as visual artifacts are bound to appear in the rendered image. The source of these artifacts may be the geometric operations applied to the 3D data or the rendering algorithm, which simplifies its computations in order to cope with hardware limitations. Consequently, in practice, the best that we can hope for is that the computer graphics system is capable of generating a perceptually effective image, that is, an image that effectively provides the intended visual information [START_REF] Ferwerda | Three varieties of realism in computer graphics[END_REF][START_REF] Thompson | Visual Perception from a Computer Graphics Perspective[END_REF]. The success of a computer graphics method, whether it is a rendering algorithm or a geometric operation, therefore depends on the perceptual effectiveness of the resulting image. In order to improve the perceptual effectiveness of computer graphics, one approach consists in taking advantage of the characteristics of human vision to guide computer graphics algorithms. The perceptual properties of the human visual system can thus be used as an optimization criterion in the design of computer graphics operations, in particular geometric ones.
Over the last two decades, the computer graphics community has recognized the importance of exploiting the perceptual properties of the HVS [OHM + 04, Fer08, CLL + 13], as perceptually motivated techniques have proven to be useful for several practical applications. The goal of this research is to propose a method that predicts the visibility of a geometric distortion on a 3D triangular mesh. This is important since, before rendering a 3D model, almost all raw geometry data are subject to several operations (e.g., compression, watermarking, . . . ) that introduce geometric distortions in the form of vertex displacement, which might be visible in the final 2D image. The perceptual analysis techniques related to the perception of surface material [START_REF] Fleming | Visual perception of materials and their properties[END_REF][START_REF] Havran | Perceptually motivated brdf comparison using single image[END_REF], non-photorealistic rendering [SD04, RBD06, CSD + 09], physical simulation [HK15, HK16] and character animation [MNO07, DM08, LO11] are therefore not within the scope of this research. Our work focuses on the perceptual methods that aim to guide or evaluate the output of the geometric operations applied to a 3D triangular mesh. These methods can be grouped into two categories: Image-Based and Model-Based methods [START_REF] Lavoué | Quality Assessment in Computer Graphics[END_REF]. The first category concerns algorithms where the perceptual analysis is carried out by analyzing the rendered 2D image. On the contrary, the perceptual methods in the second category rely on a perceptual analysis performed on the geometric surface of the 3D object. In the rest of this chapter we detail and discuss the most notable methods belonging to these two approaches.
Image-Based Methods
Since 3D models are visualized on 2D displays, it seems logical to use the 2D rendered image to carry out the perceptual analysis. Using an image-based method for studying the perceptual impact of 3D distortions has its advantages. It allows researchers to adapt the already established perceptual methods from the field of image processing to computer graphics. More importantly, applying the perceptual analysis to the rendered 2D image implicitly takes into consideration the rendering and illumination methods [START_REF] Lavoué | Quality Assessment in Computer Graphics[END_REF]. In this section we first present the most important perceptual methods in the field of image processing that were adapted to computer graphics. We then describe how image-based perceptual methods have been used for computer graphics applications.
Perceptual Methods in the Field of Image Processing
Two main approaches exist for designing a perceptual analysis of 2D images: a top-down approach, which relies on hypotheses about the global behavior of the HVS, and a bottom-up approach, which tries to model the visual process of the HVS. Ultimately, a bottom-up method tries to build a computational system that mimics the way the HVS works [START_REF] Wang | Modern image quality assessment[END_REF]. In a top-down approach, on the contrary, only the relationship between the input and the output matters. This means that a top-down approach takes the visual signal as input and produces a result that is in agreement with the general behavior of the HVS, without having to explicitly deal with the inner workings of the HVS.
Top-Down Perceptual Methods
Top-down approaches do not attempt to simulate the HVS. Instead, they are only concerned with producing a result that is in accordance with its general behavior. The advantage of these approaches is that they allow researchers to take into consideration complex aspects of the visual system that would be difficult to simulate in a bottom-up approach. Having the freedom to focus on the general behavior of the HVS rather than on its internal mechanisms has led to the development of many top-down perceptual methods, which either study whether the structure between a reference and a distorted image has changed [WB02, ?, WBSS04] or analyze whether the distortion has caused a disruption in the information contained in the image [START_REF] Field | Relations between the statistics of natural images and the response properties of cortical cells[END_REF][START_REF] Eero | Statistical Modeling of Photographic Images[END_REF][START_REF] Hamid | Image information and visual quality[END_REF]. In this section we focus on the methods that study the structure of an image, as they have been popular in many computer graphics methods.
Structural Similarity. The visual signal contains information about the outside world that is analyzed by the visual system. This makes the HVS highly adapted and effective at extracting the key features of a natural image, i.e., an image that represents the natural world. The idea behind structural similarity is that a change in the structure of an image, caused by a distortion, will be easily detected by the HVS. Wang et al. [START_REF] Zhou Wang | Image quality assessment: from error visibility to structural similarity[END_REF] proposed a metric (the SSIM index) whose goal is to measure the perceptual quality of a distorted image by comparing its similarity to the reference image. Given two images x and y, the reference and the distorted image respectively, the SSIM algorithm defines the similarity as a combination of comparisons of luminance, contrast and structure between x and y. The SSIM index is therefore defined as:
SSIM(x, y) = l(x, y) α • c(x, y) β • s(x, y) γ , (3.1)
where l(x, y), c(x, y) and s(x, y) are respectively the luminance, contrast and structure components and α > 0, β > 0 and γ > 0 are parameters that control their relative importance. Wang et al. [START_REF] Zhou Wang | Image quality assessment: from error visibility to structural similarity[END_REF] defined the luminance, contrast and structure components as follows:
l(x, y) = 2μ x μ y + C 1 μ 2 x + μ 2 y + C 1 , c(x, y) = 2σ x σ y + C 2 σ 2 x + σ 2 y + C 2 , s(x, y) = σ xy + C 3 σ x σ y + C 3 , (3.2)
where μ x , σ x and σ xy represent respectively the mean, the standard deviation and the covariance of the pixel intensities. C 1 , C 2 and C 3 are three constants added to avoid numerical instability when the compared images are dark (μ 2 x + μ 2 y close to 0) or contain a uniform visual stimulus (σ x and σ y close to 0). In practice, it is preferable to apply the SSIM method locally on image patches (e.g., on an 11 × 11 window) rather than on the entire image. This results in a quality map of the compared image, which can then be aggregated into a single score. The SSIM approach has proven to be quite useful in a number of image processing applications [START_REF] Wang | Local phase coherence and the perception of blur[END_REF][START_REF] Wang | Translation insensitive image similarity in complex wavelet domain[END_REF][START_REF] Wang | Information content weighting for perceptual image quality assessment[END_REF]. More importantly, the idea behind the SSIM index has inspired the development of perceptual metrics for 3D meshes [LDGD + 06, Lav11], which we will detail later in this chapter (Section 3.3).
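The following Python sketch illustrates the local SSIM computation of Eqs. (3.1)-(3.2) on two grayscale patches, using the common choice α = β = γ = 1 and C3 = C2/2; the constants K1 and K2 are the usual defaults, and production implementations would additionally use Gaussian windowing and aggregate the per-patch values into a quality map.

```python
import numpy as np

def ssim_patch(x, y, L=255, K1=0.01, K2=0.03):
    """SSIM index between two grayscale patches x and y (Eqs. 3.1-3.2),
    with alpha = beta = gamma = 1 and C3 = C2 / 2."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    mu_x, mu_y = x.mean(), y.mean()
    sig_x, sig_y = x.std(), y.std()
    sig_xy = ((x - mu_x) * (y - mu_y)).mean()
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)   # luminance
    c = (2 * sig_x * sig_y + C2) / (sig_x ** 2 + sig_y ** 2 + C2)  # contrast
    s = (sig_xy + C3) / (sig_x * sig_y + C3)                    # structure
    return l * c * s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 255, size=(11, 11))
    dist = np.clip(ref + rng.normal(0, 10, size=(11, 11)), 0, 255)
    print("identical patches :", round(ssim_patch(ref, ref), 3))   # 1.0
    print("noisy patch       :", round(ssim_patch(ref, dist), 3))  # < 1.0
```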
Although top-down methods offer a simple way of taking into account complex properties of the HVS, they also have some drawbacks. First, the success of top-down methods is heavily dependent on the validity of the hypotheses they are based on, which are most of the time difficult to justify. For instance, while the SSIM index has proven to correlate well with the HVS [START_REF] Hamid R Sheikh | A statistical evaluation of recent full reference image quality assessment algorithms[END_REF], there is no physiological evidence that would justify combining the contrast, luminance and structure terms through a multiplication [START_REF] Wang | Modern image quality assessment[END_REF]. Second, designing a top-down algorithm usually requires the inclusion of several abstract parameters in the model that are difficult to calibrate. For example, while the inclusion of the three parameters α, β and γ in the SSIM model (Eq. (3.1)) has its benefits, it is difficult to manually find values that work best on any type of image. These issues are much less likely to occur in bottom-up methods, since these approaches rely on simulating the different components of the HVS.
Bottom-Up Perceptual Methods
Bottom-up perceptual methods focus on studying each relevant component or feature of the HVS, such as contrast sensitivity (Section 2.3) and contrast masking (Section 2.4), and then combining them together in a computational model that mimics how the HVS works. This computational model can then be integrated in various image processing algorithms [CL95, LKW06, Lin06, LLP + 10]. Bottom-up methods rely heavily on experimental results and physiological studies about the aspects of the human visual system [CK66, Kel79b, LF80, Wan95, WB06].
In general, most existing bottom-up methods try to compute a threshold map using mathematical models describing the characteristics of the HVS. This threshold refers to the maximum change in contrast a distortion may introduce before it becomes visible. Most bottom-up algorithms follow a framework similar to the one presented in Fig. 3.1. The first step consists of computing the perceptual properties, i.e., luminance, contrast and spatial frequency, from a reference image. The luminance is generally obtained by converting the pixel values using a non-linear function. While there are many methods for estimating the contrast of natural images [START_REF] Peli | Contrast in complex images[END_REF], the Michelson contrast [START_REF] Abraham | Studies in optics[END_REF] is still adopted in most bottom-up methods. Finally, the spatial frequency is usually evaluated using a channel decomposition method such as a Fourier decomposition [START_REF] Mannos | The effects of a visual fidelity criterion of the encoding of images[END_REF], a local block-DCT transform [?] or a cortical transform [START_REF] Andrew B Watson | The cortex transform: rapid computation of simulated neural images[END_REF][START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF]. In the second step, the computed perceptual properties are passed to a computational model representing the different properties of the HVS, which then outputs a threshold that can be used to guide various image operations [CL95, LKW06, Lin06, LLP + 10]. In most cases, bottom-up algorithms take into consideration the contrast sensitivity, modeled by the CSF, and the visual masking aspects of the HVS. One of the most widely used masking models is the one proposed by Daly [Dal93]. This model, however, does not account for the effects of the signal's complexity on visual masking (see Section 2.4). Recently, several methods [WSL + 13, DFL + 15] have started to include the free-energy principle theory [START_REF] Friston | The free-energy principle: a unified brain theory?[END_REF] in the perceptual analysis for a more accurate simulation of visual masking.
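The sketch below gives a schematic view of this threshold-map framework, combining a CSF-based base threshold with a power-law masking elevation. It is a simplification under several assumptions: a Mannos-Sakrison CSF shape scaled by a nominal peak sensitivity is used as a placeholder, a single frequency band is considered, and a crude 3x3 Michelson-like estimate stands in for the local contrast; real implementations use a proper channel decomposition such as the cortex transform.

```python
import numpy as np

def csf(f, peak_sensitivity=200.0):
    """Placeholder CSF: Mannos-Sakrison shape scaled by a nominal peak sensitivity."""
    shape = 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
    return peak_sensitivity * shape

def threshold_map(luminance, freq_cpd, slope=0.7):
    """Per-pixel contrast threshold: CSF base threshold elevated by contrast masking."""
    # crude local Michelson-like contrast over a 3x3 neighborhood (the "mask")
    pad = np.pad(luminance, 1, mode="edge")
    h, w = luminance.shape
    win = np.stack([pad[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    local_contrast = (win.max(0) - win.min(0)) / (win.max(0) + win.min(0) + 1e-9)
    base = 1.0 / csf(freq_cpd)                         # unmasked threshold for this band
    elevation = np.maximum(1.0, (local_contrast / base) ** slope)
    return base * elevation

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = 0.5 + 0.1 * rng.standard_normal((64, 64))    # noisy mid-gray patch
    T = threshold_map(img, freq_cpd=4.0)
    print("threshold range: %.4f - %.4f" % (T.min(), T.max()))
```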
A large number of bottom-up algorithms have been introduced in the past few decades. Most notably, we mention Daly's Visible Differences Predictor (VDP) [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF] and the Visual Discrimination Model (VDM) [START_REF] Lubin | A visual discrimination model for imaging system design and evaluation[END_REF], since they are the basis of many image-based perceptual methods in computer graphics, which we detail in the next section. Both methods, although different in their technical details, aim to predict whether a difference between two images is visible or not. In summary, this is done by comparing the difference in contrast between a reference image and a distorted one with the threshold computed by the perceptual model.
Applying Image Processing Tools to Computer Graphics
One of the purposes of computer graphics is to render an image from a description of a 3D scene. However, since the 3D data are subject to various geometric operations that introduce geometric distortions to the model, and due to the computational limitations of the hardware, it is practically impossible to obtain a "perfect" image. Consequently, computer graphics systems generally aim at generating a perceptually acceptable image by taking advantage of the properties of the HVS.
Early attempts to use perceptual elements consisted in adapting perceptual methods that were designed for image processing applications to computer graphics. Ferwerda et al. [START_REF] Ferwerda | A model of visual masking for computer graphics[END_REF] first showed how Daly's VDP can be used to hide geometric visual artifacts with a texture. Furthermore, Bolin and Meyer [?] used a simplified version of the VDM metric [START_REF] Lubin | A visual discrimination model for imaging system design and evaluation[END_REF] to optimize the sampling operation in a ray-tracing algorithm. In the same context of perceptually guided rendering, Ramasubramanian et al. [START_REF] Ramasubramanian | A perceptually based physical error metric for realistic image synthesis[END_REF] presented an iterative perceptual framework (Fig. 3.2) to reduce the computational cost of global illumination. In this case, the proposed perceptual method is able to define an automatic stopping criterion for the computationally demanding global illumination operation. The idea is to stop the rendering when the current iteration cannot produce a visible change in the image. In other words, the algorithm stops when the physical difference between the image at the current iteration and the previous one is below the threshold map evaluated using Daly's VDP. The results of this method can further be improved by using a more complex perceptual model [START_REF] Mantiuk | HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions[END_REF]. However, this approach can be over-conservative, as it tends to overestimate the perceptual impact of non-disturbing visible distortions [START_REF] Ganesh Ramanarayanan | Visual equivalence: towards a new standard for image fidelity[END_REF].
For that purpose, Ramanarayanan et al. [START_REF] Ganesh Ramanarayanan | Visual equivalence: towards a new standard for image fidelity[END_REF] later introduced the concept of visual equivalence, which considers two images equivalent if the viewer cannot tell them apart. In this work, the authors presented an experimental study aimed at defining the elements that contribute to the definition of equivalence between two images. Using the results of this experiment, they then proposed a top-down perceptual metric, the visual equivalence predictor (VEP), based on machine-learning techniques, to evaluate the equivalence of two images.
For the task of selecting the best level-of-detail (LOD) version of a 3D mesh, Reddy [START_REF] Reddy | Perceptually modulated level of detail for virtual environments[END_REF] analyzed pre-rendered images using the contrast sensitivity function. Later, Dumont et al. [START_REF] Dumont | Perceptually-driven decision theory for interactive realistic rendering[END_REF] proposed a system based on a decision-theory approach that is capable of selecting, in real time, the best LOD and texture resolution with the help of Daly's VDP. Another interesting approach is that of Zhu et al. [START_REF] Zhu | Quantitative analysis of discrete 3d geometrical detail levels based on perceptual metric[END_REF], which consists of studying the visibility of fine geometric details using perceptual image metrics such as the VDP and SSIM in order to design a discrete LOD scheme for visualizing 3D buildings.
Image-based perceptual methods have also been used for guiding mesh simplification. Lindstrom and Turk [START_REF] Lindstrom | Image-driven simplification[END_REF] first presented an image-based simplification method. This algorithm works by rendering the model being simplified from various viewpoints and uses image-based metrics such as the VDM [START_REF] Lubin | A visual discrimination model for imaging system design and evaluation[END_REF] to guide the simplification. Luebke and Hallen [START_REF] Luebke | Perceptually Driven Simplification for Interactive Rendering[END_REF] proposed a perceptual mesh simplification algorithm that uses a CSF model to estimate whether a local simplification operation will cause a visible change in contrast and frequency. Williams et al. [WLC + 03] later extended this method to textured models. In both of these methods the simplification result depends on the chosen viewpoint. Qu and Meyer [QM08] used the masking function in [START_REF] Zeng | An overview of the visual optimization tools in jpeg[END_REF] to compute a masking map taking into account the bump map and texture attached to the 3D mesh. This masking map is then used to drive the simplification process. Finally, Menzel and Guthe [START_REF] Menzel | Towards perceptual simplification of models with arbitrary materials[END_REF] combined a perceptual metric that takes into account the contrast and frequency of the 3D mesh in the rendered image with a geometric distance to decide whether or not to perform an edge collapse operation. Interestingly, their method is able to handle different materials since the visual masking analysis is performed on an image-based bidirectional texture function (BTF).
Model-Based Methods
Apart from image-based methods, many algorithms have been developed that use the 3D geometry information for their perceptual analysis. Existing model-based perceptual methods for 3D meshes are based on observations about the general behavior of the human visual system when observing 3D models. These approaches rely on the 3D information of the surface geometry in order to perform the perceptual analysis, mostly through roughness and curvature measures.
Roughness-Based Methods
Relation to Visual Masking
The earliest methods for evaluating the magnitude of geometric distortions were simple geometric distances such as the Hausdorff distance [START_REF] Aspert | MESH: Measuring error between surfaces using the Hausdorff distance[END_REF] or the root mean square error (RMS) [START_REF] Cignoni | Metro: Measuring error on simplified surfaces[END_REF]. These measures ignore the working principles of the HVS and only reflect the physical variation of the mesh geometry. They do not correlate with human vision [CLL + 13] (Fig. 1.1) and thus cannot be used to predict whether a geometric distortion is visible or not.
Motivated by the need to evaluate their compression algorithm, Karni and Gotsman [START_REF] Karni | Spectral compression of mesh geometry[END_REF] combined the RMS measure with the average distance of the geometric Laplacian to obtain a visual metric capable of comparing two 3D objects A and B:
GL_1(A, B) = \alpha \cdot \mathrm{RMS}(A, B) + (1 - \alpha) \left( \sum_{i=1}^{n} \left\| GL(v_i^A) - GL(v_i^B) \right\|^2 \right)^{1/2}, \qquad (3.3)
where α = 0.5 and GL is the geometric Laplacian computed as follows:
GL(v) = v - \frac{\sum_{i \in n(v)} l_i^{-1} v_i}{\sum_{i \in n(v)} l_i^{-1}}, \qquad (3.4)
where n(v) is the set of indices of the neighbors of v, and l i the Euclidean distance from v to v i .
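To make Eqs. (3.3)-(3.4) concrete, the following Python sketch computes the geometric Laplacian per vertex and the GL_1 distance between two meshes sharing the same connectivity. The array layout, the neighbor lists and the exact normalization of the RMS term are assumptions made for illustration; the original metric should be consulted for its precise definitions.

```python
import numpy as np

def geometric_laplacian(verts, neighbors):
    """GL(v) = v - (sum_i l_i^-1 v_i) / (sum_i l_i^-1) over the 1-ring (Eq. 3.4)."""
    gl = np.zeros_like(verts)
    for i, nbrs in enumerate(neighbors):
        vi = verts[i]
        nb = verts[nbrs]
        inv_len = 1.0 / (np.linalg.norm(nb - vi, axis=1) + 1e-12)
        gl[i] = vi - (inv_len[:, None] * nb).sum(0) / inv_len.sum()
    return gl

def gl1_distance(va, vb, neighbors, alpha=0.5):
    """Visual distance in the spirit of Eq. (3.3) between two meshes with
    identical connectivity (alpha = 0.5 as in the original formulation)."""
    rms = np.sqrt(((va - vb) ** 2).sum(1).mean())
    gl_a = geometric_laplacian(va, neighbors)
    gl_b = geometric_laplacian(vb, neighbors)
    lap_term = np.sqrt((np.linalg.norm(gl_a - gl_b, axis=1) ** 2).sum())
    return alpha * rms + (1 - alpha) * lap_term

if __name__ == "__main__":
    # toy example: a unit square with each vertex linked to its two edge neighbors
    va = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
    neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
    vb = va.copy(); vb[2, 2] += 0.05          # displace one vertex
    print(round(gl1_distance(va, vb, neighbors), 4))
```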
The idea behind mixing the geometric Laplacian with the RMS distance is that the former represents a local measure of smoothness. This means that the visual distance between two versions of a 3D model (original and distorted) given by GL_1 is higher when the distortion causes a change in smoothness. Sorkine et al. [START_REF] Sorkine | High-pass quantization for mesh encoding[END_REF] later proposed a small modification to the GL_1 distance: setting the value of α to 0.15 instead of 0.5 gives geometric distortions in smooth regions a higher impact on the visual distance. Despite being better than simple geometric measures for computing the visual distance between two 3D models, the geometric Laplacian metric GL_1 does not correlate well with human perception [CLL + 13]. However, the GL_1 distance and several other observations of visual artifacts produced by 3D watermarking techniques [START_REF] Gelasca | Objective evaluation of the perceptual quality of 3D watermarking[END_REF] suggest that the visibility of geometric distortions is related to the roughness of the surface. In other words, it was noted that geometric distortions are more visible on a smooth surface than on a rough one [START_REF] Lavoué | A local roughness measure for 3D meshes and its application to visual masking[END_REF] (Fig. 3.3). This observation can be explained by the visual masking effect of the human visual system (Section 2.2.2). For instance, the rough regions of a 3D model are more likely to generate a visually complex pattern in a computer graphics image and thus cause a visual masking effect. These observations have led to the development of many perceptual methods that rely on an estimation of surface roughness as their main perceptual tool.
Surface Roughness Measures
Following the idea that the effects of visual masking can be taken into consideration using the roughness of the surface of a 3D object, many roughness estimation techniques have been proposed.
In order to evaluate the quality of the output of 3D watermarking algorithms, Corsini et al. [START_REF] Corsini | Watermarked 3D mesh quality assessment[END_REF] presented two perceptual metrics, each based on a different method for estimating surface roughness. The first method builds on the work of Wu et al. [START_REF] Wu | An effective feature-preserving mesh simplification scheme based on face constriction[END_REF], which consists of using the angles between adjacent faces to evaluate surface roughness. This method proceeds as follows. At first, a per-face roughness measure is computed using the angles between adjacent faces, i.e., the dihedral angles. The idea here is that the face normal varies slowly when the surface is smooth, while the opposite is true for a rough surface. Consequently, a smooth surface can be detected by the value of the dihedral angles, which should be close to 0. Finally, a per-vertex value is obtained by combining the per-face roughness of the N-ring adjacent faces. The number of rings taken into account for computing the per-vertex roughness value controls the scale at which the surface roughness is evaluated. The second method, first introduced in [START_REF] Gelasca | Objective evaluation of the perceptual quality of 3D watermarking[END_REF], is based on the idea that the difference between a detailed and a smoothed version of a 3D model is higher in rough regions than in smooth regions. Therefore, in this approach, computing the local per-vertex roughness boils down to two steps (Fig. 3.4). First, a smoothed version of the 3D model is computed (for example by using Taubin's smoothing operator [START_REF] Gabriel Taubin | A signal processing approach to fair surface design[END_REF]). Second, the per-vertex difference is computed as:
d(v, v_s) = \mathrm{proj}_{n_s}(v - v_s), \qquad (3.5)
where v_s is the smoothed vertex and proj() denotes the projection of the vector (v - v_s) onto the vertex normal n_s of the smoothed surface. The local surface roughness is finally evaluated as the variance of this difference over an N-ring neighborhood.
Lavoué [START_REF] Lavoué | A local roughness measure for 3D meshes and its application to visual masking[END_REF] presented a roughness evaluation algorithm based on computing the difference between the original and smoothed version of a 3D model. The method of Lavoué can be summarized by the following steps:
1. A smoothed version of the 3D mesh is generated.
2. The maximum curvature for each vertex of the original and the smoothed mesh is computed.
3. The average curvature over a local window is evaluated for each vertex.
4. The local roughness value is computed as the difference between the average curvature values of the original and smoothed models.
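A minimal Python sketch of steps 3 and 4 is given below. It assumes that steps 1 and 2 (smoothing and per-vertex maximum curvature estimation) are performed beforehand, e.g., with a geometry-processing library, and it uses the 1-ring as the local window, whereas the actual measure uses a window whose size is a parameter of the method.

```python
import numpy as np

def local_roughness(curv_orig, curv_smooth, neighbors):
    """Windowed-curvature difference between the original and smoothed meshes.

    curv_orig, curv_smooth : per-vertex maximum curvature of the original and
                             smoothed meshes (assumed precomputed).
    neighbors              : list of 1-ring vertex indices per vertex.
    """
    def window_average(curv):
        avg = np.empty_like(curv)
        for i, nbrs in enumerate(neighbors):
            idx = [i] + list(nbrs)
            avg[i] = curv[idx].mean()
        return avg

    return np.abs(window_average(curv_orig) - window_average(curv_smooth))

if __name__ == "__main__":
    # toy values: vertex 2 lies in a "rough" area (its curvature changes a lot
    # after smoothing), the other vertices lie in smooth areas
    curv_orig = np.array([0.10, 0.12, 2.50, 0.11])
    curv_smooth = np.array([0.10, 0.11, 0.40, 0.10])
    neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
    print(np.round(local_roughness(curv_orig, curv_smooth, neighbors), 3))
```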
The local roughness measure of Lavoué improved upon the classification of the surface type. It is able to efficiently differentiate between three types of regions (smooth regions, rough regions and edge regions) on a 3D object (Fig. 3.5), each of which can be attributed a masking level. Furthermore, in his paper Lavoué demonstrated the utility of the proposed roughness measure for two geometric operations: mesh compression and 3D watermarking. In summary, the roughness value is used to locally adapt the quantization operation of a compression algorithm. Instead of applying the same quantization level to the entire model, a coarser level can be applied to the rough parts, as they can tolerate more geometric distortion. A similar idea was adopted to drive the 3D watermarking process, where the strength of the watermark depends on the local roughness value.
In the interest of objectively evaluating the perceptual quality of a distorted mesh, Wang et al. [START_REF] Wang | A fast roughness-based approach to the assessment of 3D mesh visual quality[END_REF] proposed a perceptual metric (FMPD) that accounts for the visual masking effect using the roughness of the surface. First, the local roughness value is computed at each vertex as the Laplacian of the discrete Gaussian curvature, which indicates whether the curvature is locally varying or not. The local roughness value is then modulated using a series of simple mathematical operations. The idea behind this modulation is to produce a small perceptual distance when a geometric distortion is located in a rough region and a large perceptual distance when the geometric distortion causes a smooth region to become rough, which mimics the effects of visual masking. The local roughness value is then integrated over the surface of the 3D mesh to obtain a global roughness measure reflecting the overall roughness of the surface. Finally, the perceptual score of a distorted mesh is defined simply as the difference between the global roughness scores of the original and the distorted mesh.
Finally, similarly to the first roughness measure proposed by Corsini et al. [START_REF] Corsini | Watermarked 3D mesh quality assessment[END_REF], Váša and Rus [START_REF] Váša | Dihedral angle mesh error: a fast perception correlated distortion measure for fixed connectivity triangle meshes[END_REF] also rely on the values of the dihedral angles to measure the perceptual distance between two meshes. Their metric (DAME) is simply based on a weighted difference of the corresponding dihedral angles. Since the angle between two adjacent faces is an indicator of surface roughness, giving more weight to the difference of a dihedral angle when its original value is small, i.e., when the surface is smooth, simulates the effects of visual masking. Recently, DAME was integrated into a 3D mesh compression algorithm [START_REF] Marras | Perception-driven adaptive compression of static triangle meshes[END_REF] in order to perceptually drive the compression process by minimizing as much as possible the visible error caused by the vertex displacements.
Curvature-Based Methods
In addition to surface roughness, the curvature of the surface has been another important geometric feature in the design of perceptually driven methods, as several observations [START_REF] Decarlo | Suggestive contours for conveying shape[END_REF][START_REF] Rusinkiewicz | Exaggerated shading for depicting shape and detail[END_REF] have led to the conclusion that curvature information affects the intensity of the rendered image and thus the visual characteristics of a 3D model. In fact, it has been noted that the human visual system is sensitive to strong variations in surface curvature [START_REF] Kim | Discrete differential error metric for surface simplification[END_REF]. Surface curvature has, in particular, been used for assessing the visual quality of 3D models [Lav11, WTM12, TWC14, DFLS14] and for mesh simplification through an estimation of visual saliency [HHO04, LVJ05, SLMR14].
Objective Quality Assessment for 3D Models
Following the concept of the structural similarity (SSIM) index [START_REF] Zhou Wang | Image quality assessment: from error visibility to structural similarity[END_REF], which consists of measuring the degradation of structural information in a 2D image, Lavoué et al. [LDGD + 06] introduced a method for measuring the perceptual quality of distorted 3D meshes, MSDM. Instead of extracting the structural information from the luminance values of a 2D image, the MSDM metric uses a statistical analysis of surface curvature for that task. In order to compute the perceptual distance between two meshes X and X', a local perceptual distance between two corresponding local patches x and x' on the two meshes is first evaluated as follows (Fig. 3.6):

LMSDM(x, x') = \left( 0.4 \cdot L(x, x')^{3} + 0.4 \cdot C(x, x')^{3} + 0.2 \cdot S(x, x')^{3} \right)^{1/3}, \qquad (3.6)

where L, C and S respectively correspond to the luminance, contrast and structure terms of the SSIM index and are computed as:

L(x, x') = \frac{|\mu_x - \mu_{x'}|}{\max(\mu_x, \mu_{x'})}, \quad C(x, x') = \frac{|\sigma_x - \sigma_{x'}|}{\max(\sigma_x, \sigma_{x'})}, \quad S(x, x') = \frac{|\sigma_x \sigma_{x'} - \sigma_{xx'}|}{\sigma_x \sigma_{x'}}, \qquad (3.7)

where \mu_x, \sigma_x and \sigma_{xx'} are respectively the mean, variance and covariance of the surface curvature over a local window around the vertex. The authors recommend a window size equivalent to 0.5% of the size of the model's bounding box. Finally, the perceptual score between two models X and X' is obtained via a Minkowski pooling over the vertices of the 3D mesh:
\mathrm{MSDM}(X, X') = \left( \frac{1}{N} \sum_{i=1}^{N} \mathrm{LMSDM}(x_i, x_i')^{3} \right)^{1/3}, \qquad (3.8)
where N is the number of vertices. The MSDM metric was later improved by integrating a multiscale analysis of the perceptual distance and allowing the comparison of two meshes that do not share the same connectivity information [START_REF] Lavoué | A multiscale metric for 3D mesh visual quality assessment[END_REF].
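The following Python sketch illustrates Eqs. (3.6)-(3.8) on synthetic data. It assumes that per-vertex curvature values and the correspondence between local windows on the two meshes are already available; a full MSDM implementation additionally handles curvature estimation, window construction and, for MSDM2, multiscale analysis and meshes with different connectivities.

```python
import numpy as np

def lmsdm(curv_x, curv_y):
    """Local MSDM distance (Eqs. 3.6-3.7) between two corresponding patches,
    given their per-vertex curvature samples."""
    eps = 1e-12
    mu_x, mu_y = curv_x.mean(), curv_y.mean()
    sig_x, sig_y = curv_x.std(), curv_y.std()
    sig_xy = ((curv_x - mu_x) * (curv_y - mu_y)).mean()
    L = abs(mu_x - mu_y) / (max(mu_x, mu_y) + eps)
    C = abs(sig_x - sig_y) / (max(sig_x, sig_y) + eps)
    S = abs(sig_x * sig_y - sig_xy) / (sig_x * sig_y + eps)
    return (0.4 * L ** 3 + 0.4 * C ** 3 + 0.2 * S ** 3) ** (1.0 / 3.0)

def msdm(patches_x, patches_y):
    """Global score (Eq. 3.8): Minkowski pooling of the local distances."""
    local = np.array([lmsdm(px, py) for px, py in zip(patches_x, patches_y)])
    return (np.mean(local ** 3)) ** (1.0 / 3.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = [rng.uniform(0.5, 1.5, 30) for _ in range(100)]     # reference patches
    dist = [p + rng.normal(0, 0.1, p.size) for p in ref]      # distorted patches
    print("identical :", round(msdm(ref, ref), 3))            # 0.0
    print("distorted :", round(msdm(ref, dist), 3))           # > 0.0
```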
Unlike MSDM and MSDM2, which only consider the amplitude of the surface curvature, the TPDM perceptual metric [START_REF] Torkhani | A curvaturetensor-based perceptual quality metric for 3D triangular meshes[END_REF] proposed by Torkhani et al. makes use of the principal curvature directions in its perceptual analysis. The surface normal varies the fastest in the direction of maximum curvature and the slowest in the direction of minimum curvature. Consequently, taking into consideration both the direction and the amplitude of the principal curvatures provides more detailed information about the structure of the surface, which proves beneficial for the assessment of the perceptual quality of 3D triangular meshes by the TPDM metric.
Mesh Saliency and its Application to Mesh Simplification
There has been a large interest in the past few years in estimating the visual saliency of a 3D mesh, as it has proven to be useful for several applications, especially mesh simplification [HHO04, LVJ05, SLMR14]. Mesh saliency is a measure that tries to capture the visual importance of a region of a 3D mesh. In other words, a region with a high saliency value is more likely to attract the attention of a human viewer than a region with a low saliency value. Motivated by the experimental results of Howlett et al. [START_REF] Howlett | An experimental approach to predicting saliency for simplified polygonal models[END_REF], in which the authors demonstrated the potential advantage of a saliency measure for geometric operations, Lee et al. [START_REF] Ha | Mesh saliency[END_REF] proposed an algorithm for computing the saliency of a 3D mesh. Their method is based on the idea that a region with a high saliency value stands out relative to its neighbors. The saliency estimation algorithm proceeds by computing a series of saliency maps at different scales, which are then combined in a normalized non-linear sum (Fig. 3.7). In order to compute the saliency map at a scale σ, the mean curvature is first evaluated for each vertex; the saliency of a vertex is then defined as the absolute difference between the Gaussian-weighted averages of the mean curvature computed at a fine and a coarse scale (standard deviations σ and 2σ).
Image-Based vs. Model-Based Methods
In the last few years, perceptual methods have seen a rise in popularity [OHM + 04, Fer08, CLL + 13, LM15] as they have proven to be quite effective for a large number of applications. For instance, as detailed in this chapter, these approaches have been useful in the context of mesh rendering by either providing a criterion for selecting the best LOD for a certain scene [Red97, LH01, CB05, CSYB06] or by allowing for a more efficient management of resources in a physically-based rendering system [START_REF] Ramasubramanian | A perceptually based physical error metric for realistic image synthesis[END_REF][START_REF] Ganesh Ramanarayanan | Visual equivalence: towards a new standard for image fidelity[END_REF]. Perceptual methods have also provided the computer graphics community with several objective quality metrics [Lav11, WTM12, VR12, DFLS14, TWC14] that can be used to evaluate and debug existing geometric operations. Moreover, perceptually motivated approaches have proven to be especially helpful for the task of mesh simplification [WLC + 03, LVJ05, QM08, SLMR14] as they are able to preserve the visually important features of a 3D model. Despite all the aforementioned benefits a major issue remains when designing a perceptually oriented algorithm: Is it better to perform the perceptual analysis on the 3D model or on the 2D rendered image?
There are two sources of visual artifacts in a computer graphics system: the geometric operations applied to the 3D model, which might introduce a visible geometric distortion, and the rendering algorithm, which may also cause pixel-based artifacts. In theory, model-based approaches allow a more accurate analysis of geometric artifacts since the perceptual study is independent of any artifacts caused by the rendering algorithm. In this case, the perceptual analysis is carried out before generating the image, so the visual impact of geometric and rendering distortions is not mixed. Image-based methods have some advantages over current model-based ones. First and most importantly, since the perceptual analysis is carried out on the 2D image after the rendering step, it can easily cope with different rendering pipelines. This means that, since the analysis is done on the 2D rendered image, its results adapt to the lighting conditions, material properties, textures and rendering algorithm without having to alter the perceptual analysis procedure. In addition, these approaches also offer the choice between a view-dependent perceptual analysis, by taking one snapshot of the model, and a view-independent one, by taking multiple snapshots around the model, each of which can be useful for a range of applications.
However, in a comparative study testing the efficiency of perceptual image-based techniques on computer-generated images [CHM + 12], Čadík et al. concluded that image metrics are too sensitive for evaluating the visibility of distortions generated by a computer graphics pipeline. This is probably due to the difference in the type of both visual artifacts and images (artificial vs. natural) between the field of computer graphics and the field of image processing, for which these methods were designed. Moreover, several subjective studies have tried to compare these two classes of methods in order to find out which is more suitable for estimating the perceptual impact of geometric distortions. This started with the experiments of Rogowitz and Rushmeier [START_REF] Rogowitz | Are image quality metrics adequate to evaluate the quality of geometric objects?[END_REF], which concluded that image-based metrics might not be suited for evaluating the quality of 3D models. This conclusion came as the result of a subjective experiment in which the authors noticed that users rated the artifacts caused by a simplification procedure differently when they observed the 3D model as opposed to 2D still images. This difference is theorized to be due to the interactions involved in manipulating a 3D model. On the contrary, Cleju and Saupe conducted a similar experiment [START_REF] Cleju | Evaluation of supra-threshold perceptual metrics for 3d models[END_REF] but obtained conflicting results: they noticed that 2D metrics such as SSIM performed better than model-based ones when the simplification artifacts are above the visibility threshold (the supra-threshold regime). Nevertheless, both of these experiments have some limitations, as they only consider simplification artifacts and compare perceptual image metrics with non-perceptual geometric ones (Hausdorff [START_REF] Aspert | MESH: Measuring error between surfaces using the Hausdorff distance[END_REF], RMS [START_REF] Cignoni | Metro: Measuring error on simplified surfaces[END_REF]), since at that time more sophisticated geometric metrics (MSDM2 [START_REF] Lavoué | A multiscale metric for 3D mesh visual quality assessment[END_REF], FMPD [START_REF] Wang | A fast roughness-based approach to the assessment of 3D mesh visual quality[END_REF] and DAME [START_REF] Váša | Dihedral angle mesh error: a fast perception correlated distortion measure for fixed connectivity triangle meshes[END_REF]) had not yet been developed. Recently, and in the interest of providing a conclusive answer to this issue, Lavoué et al. [START_REF] Lavoué | On the efficiency of image metrics for evaluating the visual quality of 3D models[END_REF] carried out a large study that compared the performance of state-of-the-art image-based methods with that of state-of-the-art model-based methods for the task of evaluating the perceptual effects of a geometric distortion. In this study the authors took into consideration a large number of variables that affect the appearance of the 3D object in order to determine the parameters for which image-based methods perform best. For instance, they considered 2 rendering algorithms and 4 lighting conditions. In addition, they used a large number of 3D models from the 3D Mean Opinion databases [LDGD + 06, Lav09, VR12]. These databases contain 3D models with different distortion types (simplification, compression, filtering, . . . ) along with their corresponding mean subjective scores.
These databases have been used throughout the literature to evaluate the effectiveness of perceptual metrics for computer graphics by computing the correlation between the metric results and the subjective scores. This study showed that, under the best parameters, the SSIM-inspired metric of [START_REF] Wang | Information content weighting for perceptual image quality assessment[END_REF] along with the HDR-VDP2 metric [START_REF] Mantiuk | HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions[END_REF] performed best among image-based methods, with a correlation greater than 60% for every database and scenario. Moreover, in simple scenarios, where each type of distortion or class of model is considered independently, image-based approaches are at their best, with correlations close to 80% and even slightly above 90% for simplification artifacts. However, this study also shows that perceptual image-based methods fall short of state-of-the-art model-based metrics. Indeed, the latter have a correlation that hovers around 85% regardless of the scenario. Despite being shown to be superior to image-based methods when it comes to evaluating the effects of geometric distortions on 3D objects, existing model-based methods have some issues. For instance, these metrics try to account for perceptual characteristics such as visual masking and saliency using geometric measures defined on the 3D surface (surface roughness, curvature). The problem is that these geometric features are not necessarily perceptually relevant attributes, and their relation to human perception is based on general observations. Moreover, relying only on the surface geometry for the perceptual analysis makes these methods unable to adapt effectively to a number of important parameters that play a role in the perception of rendered 3D objects, such as the illumination conditions, the rendering procedure or the display size and resolution.
Our Approach
In this thesis, different from all the methods mentioned in this chapter, which either conduct the visibility analysis in a 2D space or rely entirely on 3D geometric features, we present original algorithms whose goal is to compute the threshold beyond which a vertex displacement becomes visible, i.e., the so-called Just Noticeable Distortion (JND) profile. Our approach is inspired by the image-based bottom-up framework (Section 3.2), which consists of trying to predict whether a change in contrast is visible or not. Hence, we start by computing appropriate perceptual properties (contrast, spatial frequency and visual regularity) on the mesh surface of a 3D object. These perceptual attributes take into consideration the various parameters of mesh display (rendering, illumination, scale and display resolution) that generally affect their appearance. We then perform a series of psychophysical experiments to study the effects of contrast sensitivity and visual masking of the human visual system while observing a 3D model. The results of these experiments allow us to propose a perceptual model that is able to predict whether a change in local contrast on a 3D mesh, induced by a local geometric distortion, is visible or not. This visibility model can then be used to compute the threshold beyond which a vertex displacement becomes visible.
Chapter 4 Experimental Studies and Threshold Model
Existing model-based methods use measures such as surface roughness [CDGEB07, Lav09, WTM12, VR12] and surface curvature [LDGD + 06, Lav11, TWC14] in order to carry out the perceptual analysis on a 3D mesh. However, these measures are not necessarily perceptually relevant. Our approach is inspired by the image-based bottom-up framework (Section 3.2), which consists of trying to predict whether a change in contrast is visible or not with the help of a mathematical model that simulates the perceptual properties of the HVS (Section 2.2). In this chapter, we define local perceptual properties for 3D meshes (i.e., local contrast, spatial frequency and visual regularity) that are appropriate for a bottom-up evaluation of the visibility of vertex displacements. These perceptual properties allow us to study the effects of contrast sensitivity and visual masking in the 3D setting. In the following, we start by discussing the main experimental methods that have been used throughout the literature to measure a perceptual threshold (Section 4.1). We then present both the proposed perceptual attributes and our experimental study of the visibility threshold in the case of flat-shaded (Section 4.2) and smooth-shaded (Section 4.3) 3D models.
Measuring the Visibility Threshold
The task of measuring the visual threshold is essential in vision science [CKT + 99, WA05, Fer08, PB13] as it provides us with important information about the visual system. The results of such experiments have helped us understand and model the basic characteristics of the HVS which then can be used in several fields spanning from clinical studies [HHL + 10] which usually consists in judging the visibility of some stimuli. For example, in a typical Yes/No task the subject is shown one image containing a visual stimulus and is asked to tell whether he can see it or not. On the other hand, in a two alternative forced choice (2AFC) type task the observer is presented with two images, one of which contains the stimulus. The observer is then asked to identify the image containing the stimulus. In order to measure the visibility threshold, the observer is tested over many trials. In each trial the stimulus is presented at a different contrast and the observer's answer is labeled either positive or negative. When the answer given by the observer indicates that he has seen the stimulus, for example a Yes answer in a yes/no type task or choosing the distorted stimulus in a 2AFC task, then it is labeled as positive. Otherwise, it is labeled as negative. The proportion of positive answers with respect to negative ones is then used to determine the contrast threshold.
There are many ways to select the contrast of the stimulus at which an observer is tested. They can be grouped into two categories: (1) non-adaptive threshold methods, where a set of contrasts is predetermined by the experimenter, and (2) adaptive methods, where the contrast of the stimulus in a given trial is computed from the observer's responses to the previous trials. While non-adaptive methods are easy to implement, their main problem is their inefficiency. Before the experiment begins, the designer is required to perform a series of tests in order to choose the set of contrasts at which the stimulus will be presented. In addition, measuring an accurate threshold with this method usually requires a large set of contrasts to be tested, which makes the experiments long and therefore tiring for the subject. Adaptive methods, on the other hand, are more efficient and can converge to the threshold in as few as 20 trials. Many adaptive threshold methods have been developed throughout the literature, starting with the staircase method [START_REF] Tom | The staircase-method in psychophysics[END_REF], where the contrast of the stimulus is either reduced or increased by a certain step after, respectively, a positive or negative answer. One of the biggest challenges of using this method is to determine the step by which the contrast is increased or decreased after each trial. This value plays an important role in the accuracy of the resulting threshold and the efficiency of the experiment: a large value reduces the experiment time while a small value makes the resulting threshold more precise. Many strategies have been proposed for dealing with this issue, such as using a fixed step [START_REF] Tom | The staircase-method in psychophysics[END_REF] or adapting its value according to the trial number [START_REF] Gb Wetherill | Sequential estimation of points on a psychometric function[END_REF]. Another type of adaptive method, such as PEST (Parameter Estimation by Sequential Testing) [START_REF] Taylor | Pest: Efficient estimates on probability functions[END_REF], groups the trials into several blocks. In these methods, instead of changing the contrast value after each trial, it is changed after each block with respect to the ratio of positive answers in that block. The motivation is to reduce the effects of false positive answers on the final threshold; this is an important issue for 2AFC tasks, where the subject has a 50% chance of choosing the distorted mesh even if he has not seen it. However, due to the large number of trials, these methods are time consuming. Finally, there are methods based on the maximum likelihood principle [START_REF] David M Green | A maximum-likelihood method for estimating thresholds in a yes-no task[END_REF], such as QUEST [START_REF] Watson | QUEST: a Bayesian adaptive psychometric method[END_REF]. In this class of methods, the contrast value and the response label are passed to a statistical model after each trial, which then outputs the next contrast value to test. This statistical model analyzes the user's responses and outputs the most likely contrast threshold value so that the experiment is efficient. It also takes into account the probability of false positive answers so that the measured threshold is as accurate as possible. In our experimental study we have used the QUEST method for measuring the visibility threshold, which we detail in the next section.
The QUEST method
The idea of the QUEST method is to test at each trial the contrast value that is most likely to be the threshold. To do so, a probability density function (PDF) is first initialized on the contrast axis. In general, the PDF is assumed to be a Gaussian function whose parameters (mean and variance) are determined by the experimenter's prior knowledge about the threshold [START_REF] Watson | QUEST: a Bayesian adaptive psychometric method[END_REF]. King-Smith et al.
[KSGV + 94] have also proposed to use a modified hyperbolic secant function as the PDF of the contrast threshold. The first stimulus is then presented to the observer with a contrast corresponding to the mode of the initial PDF, i.e., the most likely threshold value. If the observer responds positively (i.e., reports seeing the stimulus at the tested contrast), then the PDF is shifted towards lower contrast values; otherwise it is shifted towards higher contrast values. Watson and Pelli [START_REF] Watson | QUEST: a Bayesian adaptive psychometric method[END_REF] have demonstrated that the shift of the PDF can be obtained by applying Bayes' theorem. In consequence, the new PDF is computed as:
P n+1 (T ) = Q(r, T ) • P n (T ) , (4.1)
where P n is the PDF of the contrast threshold T at the n-th trial and Q(r, T) represents the likelihood of getting a positive answer (r = 1) or a negative one (r = 0) for the tested contrast value T. This likelihood function is defined as:
Q(r, T) = \begin{cases} 1 - \psi(T) & \text{if } r = 0 \\ \psi(T) & \text{if } r = 1 \end{cases}, \qquad (4.2)
where ψ(T ) is the standard Weibull psychometric function [Wei51] (i.e., 1-δ-(1γδ) • e -10 3.5T ) whose parameters γ and δ reflect the probability of respectively false positive and false negative answers for a certain task. For example, in a 2AFC-type task the probability of false positive answers is 50% and therefore γ is set to 0.5 while in a Yes/No-type experiment γ is usually initialized at 0.03 [START_REF] David M Green | A maximum-likelihood method for estimating thresholds in a yes-no task[END_REF][START_REF] Treutwein | Adaptive psychophysical procedures[END_REF]. Multiplying the PDF with the likelihood function will shift position of the
FIGURE 4.3: The contrast between adjacent faces is computed using the angle between their normals and the spherical coordinates of the light direction in the local coordinate system defined by the face normals.
PDF on the contrast axis and reduce its variance. The QUEST procedure usually stops when the PDF shift becomes negligible and the variance becomes too small. This is usually around 20 trials in a Yes/No-type experiment.
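As an illustration of Eqs. (4.1) and (4.2), the following Python sketch maintains a discretized PDF over log-contrast and updates it after each trial. The grid bounds, the Gaussian prior and the offset parameterization of the Weibull function follow the standard QUEST formulation; the numerical values are illustrative assumptions, not those used in our experiments.

```python
import numpy as np

# Discretized prior over the (log10) contrast threshold T: a Gaussian PDF whose
# mean and standard deviation encode the experimenter's prior guess (assumed values).
T_grid = np.linspace(-3.0, 0.0, 301)
pdf = np.exp(-0.5 * ((T_grid - (-1.5)) / 0.5) ** 2)
pdf /= pdf.sum()

gamma, delta, beta = 0.03, 0.01, 3.5   # Yes/No task: false positive/negative rates, slope

def psi(stim, T):
    """Weibull psychometric function at stimulus level `stim` (log10 contrast)
    for a hypothetical threshold T (Eq. 4.2, offset parameterization)."""
    return 1 - delta - (1 - gamma - delta) * np.exp(-10 ** (beta * (stim - T)))

def next_stimulus():
    """Test at the most likely threshold value, i.e. the mode of the current PDF."""
    return T_grid[np.argmax(pdf)]

def update(stim, response):
    """Bayes update of Eq. (4.1): multiply the PDF by the likelihood Q(r, T)."""
    global pdf
    likelihood = psi(stim, T_grid) if response == 1 else 1 - psi(stim, T_grid)
    pdf = pdf * likelihood
    pdf /= pdf.sum()

# One simulated trial: present the stimulus, record the answer, update the PDF.
stim = next_stimulus()
update(stim, response=1)   # the observer reported seeing the stimulus
print(next_stimulus())     # contrast to test on the next trial
```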
Measuring Contrast Threshold for Flat-Shaded Models
In this section we consider 3D meshes that are rendered with a flat-shading algorithm. We start by describing a method for estimating the Michelson contrast (Section 4.2.1) and the spatial frequency (Section 4.2.2) locally on a 3D mesh. We then present the psychophysical experiments (Section 4.2.3) that were aimed at measuring the visibility threshold in a flat-shaded setting. In this section, we limit our study to perfectly diffuse untextured surfaces that are illuminated by a directional white light.
Contrast Evaluation
The human visual system is primarily sensitive to variations in light energy, i.e., contrast, due to the center/surround organization of the receptive fields [START_REF] Wandell | Foundations of Vision[END_REF].
In general, the most used contrast definition is that of Michelson where the contrast c is computed as:
c = \frac{L_{max} - L_{min}}{L_{max} + L_{min}} , \qquad (4.3)
where L max and L min correspond to the luminance of the pixel with respectively the highest and lowest luminance value in a certain neighborhood.
In the case of a flat-shaded rendering, each face of the 3D mesh is attributed a single luminance value proportional to the cosine of the angle between its normal and the light direction. This means that the luminance value of each pixel belonging to a certain face is given by:
L = max (l • n, 0) , (4.4)
where n is the unit face normal and l is the light direction. In that setting, the local contrast is defined between two adjacent faces, since the contrast inside a face is 0 (all pixels have the same luminance level). The Michelson contrast between two adjacent faces is therefore defined by:
c = \frac{L_1 - L_2}{L_1 + L_2} = \frac{\max(l \cdot n_1, 0) - \max(l \cdot n_2, 0)}{\max(l \cdot n_1, 0) + \max(l \cdot n_2, 0)} , \qquad (4.5)
where n_1 and n_2 are the normals of the two adjacent faces. Under the circumstances where the inner products between the light direction and the two face normals are both positive, the above equation reduces to:
c = \cos\alpha \cdot \tan\theta \cdot \tan\frac{\phi}{2} , \qquad (4.6)
where α and θ are the spherical coordinates of the light direction in the local coordinate system defined by n_1 - n_2, n_1 + n_2 and their outer product (see Fig. 4.3). φ is the angle between the normals of the two faces. A detailed explanation of the transition between Eqs. (4.5) and (4.6) can be found in Appendix A.
Equation (4.6) shows how the contrast is affected by the surface geometry and the scene illumination. The term tan(φ/2) indicates the impact of the surface geometry on the local contrast. On the one hand, if the surface is locally smooth (φ ≈ 0°), then the local contrast is minimal. On the other hand, if the surface is locally rough (φ ≫ 0°), then the local contrast tends to be high. In addition, the term cos α × tan θ describes how the light direction affects the local contrast. A grazing light direction maximizes the value of the contrast, where θ is close to 90° and α is close to 0° or 180°, while a light direction close to the normal direction (θ ≈ 0°) makes the contrast minimal.
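For illustration, a direct implementation of Eqs. (4.4) and (4.5) for a pair of adjacent flat-shaded faces might look as follows; the absolute value enforces the Michelson convention L_max - L_min, and the guard against a zero denominator is an implementation detail assumed here.

```python
import numpy as np

def flat_contrast(n1, n2, light, eps=1e-12):
    """Michelson contrast between two adjacent flat-shaded faces (Eq. 4.5).

    n1, n2 : unit normals of the two faces
    light  : unit light direction (pointing from the surface towards the light)
    """
    L1 = max(np.dot(light, n1), 0.0)   # per-face luminance, Eq. (4.4)
    L2 = max(np.dot(light, n2), 0.0)
    if L1 + L2 < eps:                  # both faces back-facing: no visible contrast
        return 0.0
    return abs(L1 - L2) / (L1 + L2)

# Example: two faces separated by a 10 degree dihedral angle under oblique light.
phi = np.radians(10.0)
n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([np.sin(phi), 0.0, np.cos(phi)])
light = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
print(flat_contrast(n1, n2, light))
```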
Spatial Frequency
The spatial frequency is related to the size of the visual pattern with respect to the size of one degree of the visual angle (Fig. 4.4) and is expressed in cycles per degree (cpd). It is affected by the physical size of the object and the observer's distance to the object. In a flat-shaded environment, the visual stimulus is displayed on a screen and consists of the difference in luminance between a pair of adjacent faces. The perceived size of this stimulus then depends on the display's properties (resolution and size), the observer's distance to the display, the position of the model in the virtual 3D world and the size of the faces. Estimating the spatial frequency in the 3D setting therefore requires first converting the size of the visual stimulus from its virtual value in the 3D world to its physical size on the display. As a consequence, we first evaluate the number of pixels that are occupied by the visual pattern. To do so, we start by computing the size of the visual stimulus in the virtual 3D world. It corresponds to the distance between the opposite vertices of two adjacent faces in a flat-shaded mode. We then compute the number of pixels that the visual pattern occupies on the screen by applying a perspective projection.
FIGURE 4.4: One degree of visual angle (ω = 1°) viewed from a distance d_eye subtends a size d_1cpd on the display; d_ph denotes the physical size of the visual pattern.
Having evaluated the number of pixels, the physical size of the visual pattern is then computed using the display's properties (resolution and size) as:
d_{ph} = \frac{n_{px}}{\sqrt{r_h^2 + r_v^2}/s} , \qquad (4.7)
where n px is the number of pixels of the displayed visual pattern, r h and r v are the display's horizontal and vertical resolution and s its diagonal size. Finally the spatial frequency is estimated by:
f = \frac{d_{1cpd}}{d_{ph}} , \qquad d_{1cpd} \approx d_{eye} \cdot \pi/180 , \qquad (4.8)
where d 1cpd is the size of one degree of the visual angle on the display and d eye is the distance between the eye and the display. It is interesting to note the effects of the vertex density of a 3D mesh on the perceived spatial frequency. While a dense model will most likely exhibit high frequency stimuli, a coarse model will show low frequency ones (Fig. 4.5).
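The conversion of Eqs. (4.7) and (4.8) can be sketched as follows; the perspective projection yielding the pixel count n_px is assumed to have been performed by the renderer, so the count is passed in directly, and the example numbers are illustrative.

```python
import math

def spatial_frequency(n_px, res_h, res_v, diag_inch, d_eye_cm):
    """Spatial frequency (cycles per degree) of a visual pattern that covers
    `n_px` pixels on screen, following Eqs. (4.7)-(4.8)."""
    diag_cm = diag_inch * 2.54
    pixels_per_cm = math.sqrt(res_h ** 2 + res_v ** 2) / diag_cm
    d_ph = n_px / pixels_per_cm                  # physical size on the display, Eq. (4.7)
    d_1cpd = d_eye_cm * math.pi / 180.0          # size of one visual degree, Eq. (4.8)
    return d_1cpd / d_ph

# Example: a 40-pixel pattern on a 23-inch 1920x1080 display viewed from 1 m.
print(spatial_frequency(40, 1920, 1080, 23, 100.0))
```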
Experimental Study
As the coordinates of a vertex on the surface of a 3D mesh change, the local contrast of the surface around this vertex also changes. The purpose of our experimental study is to measure the contrast threshold relative to the displacement of a vertex on the surface of a 3D model by studying the effects of contrast sensitivity and visual masking while observing a 3D model. We start by describing the experimental protocol used for the task of measuring the visibility threshold relative to the contrast sensitivity and visual masking of the visual system.
Experimental Protocol
Our first group of psychophysical experiments concerns measuring the visibility threshold relative to the effects of contrast sensitivity and visual masking in a flat shading rendering. To do so, we have proposed an experimental protocol that is similar to the usual visibility threshold measurement procedure (Fig. 4.1).
We have designed the task given to the participants to be both as precise and as efficient as possible. The efficiency of the task is an important criterion since long experiments are time consuming and, more importantly, tiring for the participants, which might affect their performance. Hence, we have used the QUEST method to adjust the intensity of the stimulus after each trial and designed the task as follows. In a typical Yes/No-type task the subjects are shown one stimulus and are asked to indicate whether they are able to see it or not. The presence of only one stimulus means that the observer will judge it as visible (Yes answer) if its intensity is above a certain internal criterion [START_REF] Green | Signal Detection Theory and Psychophysics[END_REF]. The issue here is that this internal criterion might shift over time and is different for each subject, and therefore could lead to a less accurate threshold. The solution for reducing the effects of the subjective criterion is to show two objects, one acting as a reference, so that the subject is able to base his answer on a comparison with this reference. For example, in a 2AFC-type task, the subject is required to identify which of the two displayed objects is the distorted one. However, due to the high probability of false-positive answers in a 2AFC task, the QUEST procedure converges slowly to the threshold, which would make the experiments long and tiring [START_REF] Jäkel | Spatial four-alternative forcedchoice method is the preferred psychophysical method for naïve observers[END_REF]. Therefore, in our experiments, the task given to the observers was based on a slightly altered Yes/No procedure. Instead of showing one object, we displayed two objects side by side on the screen, one of which exhibits a displaced vertex in its central area. The subjects were then asked to respond by Yes or No to the following question: Are the two objects different? If the vertex displacement is visible, then the objects will appear to be different and thus a Yes answer is expected. If the vertex displacement is not visible, then both objects will appear identical and a No answer is expected. Having a Yes/No-type task rather than a 2AFC-type one makes measuring the threshold faster, as the probability of a false-positive response is low [KSGV + 94, JW06]. In addition, having a response based on a comparison of two objects increases the accuracy of the measured threshold compared to a typical Yes/No procedure, as it reduces the effects of the internal criterion on which the subjects base their answers.
The experiments took place in a low illuminated laboratory environment (Fig. 4.6). The stimuli were displayed on a 23-inch Asus display with a resolution of 1920 × 1080. The stimuli were observed from a distance of 1 m, in order to allow us to measure the threshold for frequencies between 1 and 16 cpd (a closer screen would make high-frequency stimuli smaller than 1 pixel). 5 subjects participated in our experiments. All had normal or corrected-to-normal vision and were 22 to 26 years old. One of the participants was experienced in perceptual subjective evaluations and the other 4 were inexperienced. The participants repeated the experiment 4 times, each time on a different day and at a different time of day (morning, afternoon). No user interaction was allowed.
Contrast Sensitivity
Visual Stimulus
In order to measure the CSF in the 3D setting, the natural visual stimulus consists of a vertex displaced from the surface of a regular plane whose local contrast is 0 (Fig. 4.7). The displacement of the vertex alters the normal of the adjacent faces and thus changes the contrast. In order to measure the threshold of different frequencies we change the vertex density of the plane, which alters the size of its faces. The threshold is measured for 8 spatial frequencies (1.12, 2, 2.83, 4, 5.66, 8, 11.30 and 16 cpd). An additional "dummy" frequency, whose data were not taken into account, was included at the beginning of each session to stabilize the subject's answers. In order to avoid any bias, frequency order was randomized for each observer in each session. The plane is tilted by 20 • to give the observer a 3D feel.
Results
The results of this experiment are shown in Fig. 4.8. The displacement of a vertex causes a variation in contrast for multiple face pairs. We save the maximum contrast between the affected face pairs. The left panel of Fig. 4.8 plots the mean sensitivity for each observer over each frequency. The plot shows a high consistency between the participants: All of them exhibit a peak in sensitivity at 2 cpd and the drop off in sensitivity on either side of the peak is similar for all participants. The right panel of Fig. 4.8 shows the subjects' mean sensitivity over each frequency, fitted using Mannos and Sakrison's mathematical model [START_REF] Mannos | The effects of a visual fidelity criterion of the encoding of images[END_REF] that is defined by:
\mathrm{csf}(f) = \left(1 - a + \frac{f}{f_0}\right) e^{-f^p} , \qquad (4.9)
with a = -15.13, f 0 = 0.0096 and p = 0.64. The fit predicts a peak in sensitivity at around 2 cpd that drops rapidly at high frequencies. At low frequencies the drop in sensitivity is much slower than the one measured with a 2D contrast grating [START_REF] Blakemore | On the existence of neurones in the human visual system selectively sensitive to the orientation and size of retinal images[END_REF][START_REF] Watson | A standard model for foveal detection of spatial contrast ModelFest experiment[END_REF]. This is probably due to the aperiodic nature of the visual stimulus [START_REF] Blakemore | On the existence of neurones in the human visual system selectively sensitive to the orientation and size of retinal images[END_REF].
Visual Masking
Visual masking occurs when the visibility of a stimulus (the target) is reduced due to the presence of another visible stimulus (the mask). Since the visibility of a visual pattern depends on its spatial frequency, the masking threshold is different at each frequency. However, if we normalize the contrast values by the mask's CSF value, then the resulting threshold will be independent of the stimulus's spatial frequency [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF]. This normalization is achieved by the following:
c' = c \cdot \mathrm{csf}(f) , \qquad (4.10)
where c and f are respectively the contrast and spatial frequency of a visual pattern. Ultimately, c' ≥ 1 means that the contrast is above the visibility threshold given by the CSF, otherwise (c' < 1) the contrast is considered not visible. Therefore, measuring the masking effect can be done by changing the contrast value of a mask signal without the need to pay much attention to its spatial frequency. We have verified this hypothesis through a preliminary experiment where the contrast masking threshold for three different frequencies was almost the same after normalization by the corresponding CSF value. The results of this preliminary experiment can be found in Appendix B. Visual Stimulus In order to measure the threshold relative to the masking effect, the initial visual stimulus needs to exhibit a visible contrast (i.e., c' ≥ 1). We then increase the initial contrast and measure the value needed to notice that change.
In other words, if c is the initial contrast (mask signal) and c' is the increased value, we measure Δc = c' - c (target signal) needed to discriminate between c and c'. The stimulus consists of a vertex displaced from a sphere approximated by a subdivided icosahedron (Fig. 4.9). The icosahedron is subdivided 3 times, which makes the contrast between two adjacent faces (a stimulus of about 2 cpd) visible for an observer. This initial contrast represents the mask signal. Varying the light direction modifies the value of the initial contrast between two adjacent faces. We measured the threshold relative to 7 normalized contrasts that were log-linearly spaced from 0.6 to 4.
Results
The results of this experiment are shown in Fig. 4.10. The left panel plots for every participant the mean normalized threshold over the normalized contrast mask. For mask contrasts below the visibility threshold (normalized contrast of the mask lower than 1), the measured normalized threshold is close to 1. This indicates that the measured threshold refers to the one given by the CSF and that no masking has occurred. For mask contrasts above the visibility threshold, the measured normalized threshold is above the one given by CSF and lies close to the asymptotic region with a slope near 0.7. The right panel of Fig. 4.10 shows the subjects' mean threshold over each mask contrast fitted using Daly's mathematical masking model [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF] that is defined by:
\mathrm{masking}(c) = \left(1 + \left(k_1 \times (k_2 \times c)^s\right)^b\right)^{1/b} , \qquad (4.11)
Contrast Threshold
Having performed a series of psychophysical experiments in order to study the effects of contrast sensitivity and visual masking in a 3D setting, we can now derive a computational model to evaluate the threshold T beyond which a change in contrast becomes visible for the human observer as follows:
T(c, f) = \frac{\mathrm{masking}(c \cdot \mathrm{csf}(f))}{\mathrm{csf}(f)} , \qquad (4.12)
where c is the original contrast and f the spatial frequency. The proposed threshold T can adapt to various parameters. When computing the local spatial frequency, it takes into consideration the size and resolution of the display as well as the vertex density of the mesh. The threshold T also adjusts to the scene's illumination, since the illumination influences the initial contrast.
Furthermore, for estimating the probability of detecting a change in contrast, it is common in the field of visual perception to use a psychometric function (Eq. (4.13)) with a slope set to 3.5 [START_REF] Melanie | Invariance of the slope of the psychometric function with spatial summation[END_REF].
p(\Delta c, T) = 1 - e^{-(\Delta c / T)^{3.5}} , \qquad (4.13)
where T is the contrast threshold and Δc is the change in contrast, which corresponds to the contrast difference before and after the displacement of a vertex. Δc is evaluated as:
\Delta c = \begin{cases} c' - c & \text{if } \mathrm{sgn}(n_1 \cdot (v_4 - v_3)) \text{ does not change,} \\ c' + c & \text{if } \mathrm{sgn}(n_1 \cdot (v_4 - v_3)) \text{ changes,} \end{cases} \qquad (4.14)
where c and c' are respectively the contrast of the adjacent faces before and after the vertex displacement. We test whether the vertex displacement causes a change in convexity, which is reflected by a change in the sign of n_1 · (v_4 - v_3), in order to detect, for instance, the ambiguous case shown in Fig. 4.11, where the displacement does not induce a change in the "conventional" contrast between the adjacent faces.
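The complete flat-shading threshold model (Eqs. (4.9), (4.11), (4.12) and (4.13)) can be sketched as follows. The CSF parameters are the fitted values reported above; the fitted masking parameters of Eq. (4.11) are not listed above, so the values used below are placeholders (loosely borrowed from the smooth-shading fit of Eq. (4.24) and from the asymptotic slope of about 0.7 observed in Fig. 4.10) and should be replaced by the actual fit.

```python
import math

# Fitted CSF parameters of Eq. (4.9).
A, F0, P = -15.13, 0.0096, 0.64

# Masking parameters of Eq. (4.11): placeholders only (the fitted values are not
# listed above); k1, k2, b are borrowed from Eq. (4.24) and s from the ~0.7 slope.
K1, K2, S, B = 0.015, 392.5, 0.7, 4.0

def csf(f):
    """Contrast sensitivity function, Eq. (4.9)."""
    return (1 - A + f / F0) * math.exp(-f ** P)

def masking(c_norm):
    """Contrast masking function, Eq. (4.11), applied to CSF-normalized contrast."""
    return (1 + (K1 * (K2 * c_norm) ** S) ** B) ** (1.0 / B)

def threshold(c, f):
    """Contrast visibility threshold T(c, f), Eq. (4.12)."""
    return masking(c * csf(f)) / csf(f)

def detection_probability(delta_c, T):
    """Probability of detecting a change in contrast, Eq. (4.13)."""
    return 1 - math.exp(-(delta_c / T) ** 3.5)

# Example: a 0.05 contrast change on a 2 cpd pattern whose initial contrast is 0.01.
T = threshold(0.01, 2.0)
print(T, detection_probability(0.05, T))
```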
The method proposed in this section works only for models illuminated by a directional light and rendered with a flat-shading algorithm, as a result of the limited contrast estimation method. In addition, the perceptual model used for computing the visibility of the geometric distortions takes into account neither the regularity of the visual pattern nor the effects of global luminance on the CSF value. This is due to the simplified CSF and masking functions (Eqs. (4.9) and (4.11)) used in the threshold model and will most likely result in overestimating the perceptual impact of distortions in complex or dark regions of a 3D mesh. These limitations are addressed in the second stage of our experimental study, where we present a more complete threshold model for smooth-shaded meshes.
FIGURE 4.11: 2D representation of the ambiguous case of Eq. (4.14), involving the vertices v_1 to v_4.
Measuring Contrast Threshold for Smooth-Shaded Models
Building on the aforementioned work on computing the contrast visibility threshold in a flat-shaded environment, we now extend it to smooth-shaded models. To do so, we generalize the method of estimating the local contrast on a 3D model to smooth-shaded algorithms and different illumination types (directional and point light). We also extend our study of the contrast sensitivity to include the effects of the scene's global luminance. Moreover, based on the free-energy principle, we propose a method to compute the visual regularity of a rendered mesh which allows us to take into account its influence over the visibility threshold.
Contrast for a Smooth-Rendering Setting
In a smooth-shaded rendering algorithm, each point on a triangular face surface is attributed a luminance value. In consequence, each face of the triangular mesh exhibits a local contrast. Hence, in order to compute the contrast of a face, we need to find the points corresponding to the highest and lowest luminance values, L max and L min respectively.
In this section, we propose an analytical method to compute these points. This will allow us to estimate the Michelson contrast (Eq. (4.3)) for a given face. Let F = {v 1 , v 2 , v 3 } be a face of a 3D mesh and let x i be a point belonging to F . The surface normal at x i is obtained using a barycentric interpolation of vertex
FIGURE 4.12: The projection of the normals [n_1, n_2, n_3] onto the unit sphere and then onto the tangent plane allows us to compute the barycentric coordinates of the closest and farthest points to the light direction L.
normals:
n_{x_i} = \frac{h_{x_i}}{\|h_{x_i}\|} ; \qquad h_{x_i} = N \times b_{x_i} , \qquad (4.15)
where N = [n_1, n_2, n_3] is the matrix of vertex normals and b_{x_i} = [\alpha_i, \beta_i, 1 - \alpha_i - \beta_i]^T is the vector of barycentric coordinates of x_i. In the case of a diffuse surface, the luminance attributed to x_i is proportional to the cosine of the angle between the surface normal n_{x_i} at x_i and the light direction L. So finding the brightest and darkest points of a face boils down to finding the points with respectively the smallest and biggest angle between the corresponding normal and the light direction. This task can be achieved by computing their barycentric coordinates as explained below.
We first map the normals of all the points x_i ∈ F and the light direction L onto the unit sphere (Fig. 4.12). It is easy to prove that the set of normals of F forms a spherical triangle on the unit sphere, as the normals along each edge of F correspond to a geodesic on the unit sphere. Let n'_{x_i} be the gnomonic projection of n_{x_i} onto the tangent plane of the unit sphere at the centroid of the spherical triangle (Fig. 4.12) and let L' be the projection of L. The gnomonic projection is especially useful for our purposes since it projects geodesics to straight lines. In consequence, the points n'_{x_i} determine a Euclidean triangle F' in the tangent plane. This means that finding the barycentric coordinates of the points with the smallest and biggest angles between the normal and the light direction can be achieved by computing the barycentric coordinates of the closest and farthest points between F' and L'. For x_i ∈ F, the distance between the corresponding n'_{x_i} and L' can be expressed as:
d_{x_i}(\alpha, \beta)^2 = \left\| \alpha\, \overrightarrow{n'_3 n'_1} + \beta\, \overrightarrow{n'_3 n'_2} + \overrightarrow{L' n'_3} \right\|^2 , \qquad (4.16)
where α, β are the barycentric coordinates of x i . The barycentric coordinates relative to the point with the highest and lowest luminance value can finally be obtained by solving the following systems:
argmin {d x i (α, β)}, α + β ≤ 1 and α, β ∈ [0, 1] ; argmax {d x i (α, β)}, α + β ≤ 1 and α, β ∈ [0, 1] . (4.17)
A detailed description of the solution of Eq. (4.17) can be found in Appendix A.
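Since the closed-form solution of Eq. (4.17) is deferred to Appendix A, the following sketch approximates the same quantity numerically: it samples the face in barycentric coordinates, interpolates the normal as in Eq. (4.15), shades it with a directional light and takes the Michelson contrast of the extreme luminances. It is a brute-force stand-in for the analytical method, not the method itself.

```python
import numpy as np

def face_michelson_contrast(vertex_normals, light, samples=40):
    """Numerical stand-in for the analytical solution of Eqs. (4.15)-(4.17):
    sample the face in barycentric coordinates, interpolate the normal,
    shade with a directional light and return the Michelson contrast.

    vertex_normals : (3, 3) array, one unit normal per row
    light          : unit light direction
    """
    N = np.asarray(vertex_normals, dtype=float).T           # columns n1, n2, n3
    lums = []
    for i in range(samples + 1):
        for j in range(samples + 1 - i):
            a, b = i / samples, j / samples
            h = N @ np.array([a, b, 1.0 - a - b])            # interpolated normal, Eq. (4.15)
            n = h / np.linalg.norm(h)
            lums.append(max(np.dot(light, n), 0.0))
    L_max, L_min = max(lums), min(lums)
    return 0.0 if L_max + L_min == 0 else (L_max - L_min) / (L_max + L_min)

# Example: a gently curved face under oblique directional lighting.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.2, 0.0, 0.98],
                    [0.0, 0.2, 0.98]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
print(face_michelson_contrast(normals, light))
```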
Having computed the brightest and darkest points of a face, it is now possible to evaluate its Michelson contrast. The contrast computed according to the method described above is compatible with directional light sources. It is also possible to extend this method to point light sources by assigning to each point x i ∈ F a light direction according to:
l x i = g x i ||g x i || ; g x i = x i -p = M × b x i -p (4.18)
where l_{x_i} is the light direction at x_i, p is the light position, M = [v_1, v_2, v_3] is the matrix of vertex positions and b_{x_i} is the vector of barycentric coordinates of x_i. For the same reason, the mapping of the light directions onto the unit sphere forms a spherical triangle, as the light directions assigned to the edges of the face correspond to geodesics, thus creating a Euclidean triangle when projected onto the tangent plane. Finally, the distance between n'_{x_i} and l'_{x_i} on the tangent plane can be evaluated as:
d_{x_i}(\alpha, \beta)^2 = \left\| \alpha \left( \overrightarrow{n'_3 n'_1} - \overrightarrow{l'_3 l'_1} \right) + \beta \left( \overrightarrow{n'_3 n'_2} - \overrightarrow{l'_3 l'_2} \right) + \overrightarrow{l'_3 n'_3} \right\|^2 . \qquad (4.19)
By solving Eq. (4.17) for the distance in Eq. (4.19) we can evaluate the Michelson contrast for 3D models illuminated by a point light.
Mapping the vertex normals onto the unit sphere makes it easy to understand how the shape of the surface affects the local contrast. As the curvature of the surface increases, the area of the spherical triangle increases. This makes the contrast value attributed to that face potentially high, since for a given light direction the distance between the closest and farthest points of the triangle is likely to be large. It is also important to note that the presented method is capable of adapting to simple rendering algorithms where the luminance is computed on a per-pixel basis, as long as the contrast inside a face remains the dominant local contrast. For example, Fig. 4.13 shows the contrast computed on a 3D mesh rendered with two different shadings: a regular smooth shading algorithm and a cell shading one. Notice how the contrast of the faces relative to the cell shaded rendering of the 3D model is 0 except for the ones where a transition in luminance occurs. This method is also independent of the normal evaluation algorithm used to compute the normals of the vertices of the 3D model.
Regularity of the Visual Signal
The regularity of a visual pattern plays an important role in the ability of the visual system to distinguish between two visible contrasts. The effects of the visual regularity of a pattern on the contrast threshold can be explained by the free-energy principle [START_REF] Friston | A free energy principle for the brain[END_REF][START_REF] Friston | The free-energy principle: a unified brain theory?[END_REF]. The brain can easily and successfully predict the visual patterns of a regular stimulus, while irregular visual stimuli are difficult to predict [START_REF] David | The bayesian brain: the role of uncertainty in neural coding and computation[END_REF][START_REF] Karl J Friston | Reinforcement learning or active inference?[END_REF]. Based on this fact, we can relate the visual regularity to the prediction error of a visual pattern.
We propose a computational model that aims to predict the local contrast value from the contrast information of its surroundings. The visual regularity can then be estimated from the residue between the actual contrast value and the predicted one. We suppose that the local contrast of a triangular face F, denoted by c, can be estimated using a linear combination of the local contrasts of the three surrounding faces sharing an edge with F:
\tilde{c} = x_1 c_1 + x_2 c_2 + x_3 c_3 , \qquad (4.20)

where \tilde{c} is the estimated contrast and c_1, c_2 and c_3 are the contrast values of the adjacent faces organized in descending order. So in order to evaluate \tilde{c} we must estimate the linear coefficients [x_1, x_2, x_3]. This can be achieved by solving the following linear system using the least squares regression method:
\begin{bmatrix} c_{1,1} & c_{1,2} & c_{1,3} \\ \vdots & \vdots & \vdots \\ c_{i,1} & c_{i,2} & c_{i,3} \\ \vdots & \vdots & \vdots \\ c_{n,1} & c_{n,2} & c_{n,3} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} c_1 \\ \vdots \\ c_i \\ \vdots \\ c_n \end{bmatrix} , \qquad (4.21)
where c i is the contrast value of the ith face within a predefined neighborhood centered at the current face F , c i,1 , c i,2 , c i,3 are the contrast values of the corresponding adjacent faces and n is the total number of faces in the neighborhood.
In practice we have used a neighborhood of a size equivalent to 3.5 cpd which corresponds to the most sensitive spatial frequency according to the contrast sensitivity function in order to estimate the value of [x 1 , x 2 , x 3 ] for each face. Finally the visual regularity (closer to 0 means more regular) assigned to a face F is obtained by computing the absolute difference between the actual contrast and the estimated one:
r = |c - \tilde{c}| . \qquad (4.22)
Figure 4.14 shows the visual regularity for the Lion-vase model. Notice how the region containing the lion's mane is considered as visually irregular while the smooth face is visually regular.
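A rough sketch of Eqs. (4.20)-(4.22) is given below. It assumes that the contrast of every face in the neighborhood and the (descending-sorted) contrasts of its three edge-adjacent faces have already been gathered; the handling of mesh connectivity and of the 3.5 cpd neighborhood size is omitted.

```python
import numpy as np

def visual_regularity(face_contrast, neighbor_contrast):
    """Visual regularity per face, Eqs. (4.20)-(4.22).

    face_contrast     : (n,)   contrasts of the faces in the neighborhood
    neighbor_contrast : (n, 3) contrasts of each face's three edge-adjacent
                        faces, sorted in descending order per row
    Returns the prediction residues |c - c~| for the faces in the neighborhood.
    """
    C = np.asarray(neighbor_contrast, dtype=float)
    c = np.asarray(face_contrast, dtype=float)
    x, *_ = np.linalg.lstsq(C, c, rcond=None)    # least-squares fit of Eq. (4.21)
    c_pred = C @ x                               # predicted contrasts, Eq. (4.20)
    return np.abs(c - c_pred)                    # residue r, Eq. (4.22)

# Toy example: 6 faces with their (sorted) neighbor contrasts.
rng = np.random.default_rng(0)
neigh = np.sort(rng.random((6, 3)), axis=1)[:, ::-1]
faces = 0.5 * neigh[:, 0] + 0.3 * neigh[:, 1] + 0.2 * neigh[:, 2] + 0.01 * rng.random(6)
print(visual_regularity(faces, neigh))
```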
Spatial Frequency
In the case of a smooth-shaded model, the local visual stimulus consists of a luminance pattern inside a triangular face. The frequency of this pattern is related to the distance between its brightest and darkest points. These points can be obtained using the method described in Section 4.3.1. Having computed the virtual distance between the brightest and darkest points of a face, we proceed similarly to the case of flat-shaded models. First we apply the perspective projection to get the number of pixels corresponding to this distance and then apply Eqs. (4.7) and (4.8) to obtain the frequency in cycles per degree.
Having defined local perceptual attributes for 3D models (contrast, frequency and visual regularity), we now present our experimental study that aims to measure the contrast threshold required to detect a change in the geometry of the mesh.
Experimental Study
In our second batch of psychophysical experiments, we measured the contrast threshold relative to the displacement of a vertex in the case of a smooth-shaded environment. Compared with our previous experiments, we consider this time a more complex CSF model that takes into account not only the spatial frequency of the local stimulus but also the global luminance value of the rendered scene.
In addition, we extend the masking model to include the effects of the regularity of a visual pattern on the contrast threshold.
Experimental Procedure
For these threshold measurements we have kept almost the same experimental procedure as in our previous experiments, described in Section 4.2.3. Two objects were displayed side by side on a screen, one of which exhibits a displaced vertex or a series of displaced vertices. The subjects were instructed to answer by Yes or No to whether the two objects on the screen appear identical. The magnitude of the vertex displacement was then adjusted after each response according to the QUEST procedure [START_REF] Watson | QUEST: a Bayesian adaptive psychometric method[END_REF]. The experiments took place in the same experimental environment as the previous ones. 12 subjects took part in this series of experiments, 4 of whom had participated in the previous measurements. No user interaction was allowed.
FIGURE 4.15: The stimulus plane at different vertex densities (16 cpd, 8 cpd and 2 cpd).
Contrast Sensitivity
Visual Stimulus Similarly to the flat-shaded experiments, we measured the visibility threshold relative to the contrast sensitivity using a regular plane (Fig. 4.15). The difference this time is that we consider the effects of both spatial frequency and global luminance on the contrast threshold. To do so, we alter the mesh density to change the spatial frequency of the stimulus and we vary the lighting conditions (light energy) to change the global luminance level of the scene. The threshold was measured for 7 spatial frequencies (0.5, 2, 4, 5.66, 8, 11.3 and 16 cpd) and for 3 luminance levels (180, 110 and 33 cd/m 2 ).
Results
The results of these experiments are shown in Fig. 4.16. The plot shows a peak in sensitivity at around 3.5 cpd and a drop in sensitivity on either side of the peak. Additionally, we can see that there is a decrease in sensitivity for low luminance, while the sensitivity is relatively stable for luminance levels above 100 cd/m². The mean sensitivity over each frequency and luminance was then fitted to Barten's model [START_REF] Peter | The Square Root Integral (SQRI): A new metric to describe the effect of various display parameters on perceived image quality[END_REF], which takes into consideration both the frequency and the luminance and is defined by:

\mathrm{csf}(f, l) = A(l)\, f\, e^{-B(l) f} \sqrt{1 + c\, e^{B(l) f}} , \qquad A(l) = a_0 (1 + 0.7/l)^{a_1} , \quad B(l) = b_0 (1 + 100/l)^{b_1} . \qquad (4.23)

Visual Masking

Visual Stimulus Measuring the threshold relative to the visual masking aspect of the HVS requires a visual stimulus that exhibits a visible initial contrast (i.e., above the CSF value) and a certain visual regularity. We then gradually increase this initial contrast and measure the value needed to notice a change. As in the previous experiments, we displace a series of vertices from a sphere approximated by a subdivided icosahedron. The icosahedron is subdivided 3 times, which makes the contrast in each face visible for a human observer. By changing the light direction we can control the initial contrast value, and by adding uniform noise to the sphere we can change its visual regularity (Fig. 4.17). We measure the masking threshold relative to 5 levels of visual regularity and 4 initial contrast values for each regularity level.
Results
The results of these experiments are shown in Fig. 4.18. The plot shows the subjects' mean threshold for each of the visual regularity levels and initial contrast values. For visible initial contrasts whose normalized value is greater than 1 (normalization means multiplying by the corresponding CSF value, see Eq. (4.23)), the measured threshold lies close to an asymptote with a slope increasing with the value of r. This means that the less the human visual system is capable of predicting the observed surface, the higher the slope of the asymptote. This result is consistent with the analysis of Daly [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF], which relates the value of the slope to the observer's familiarity with the observed stimulus. It also agrees with the observation that geometric distortions are more visible in smooth regions of the mesh than in rough ones [START_REF] Lavoué | A local roughness measure for 3D meshes and its application to visual masking[END_REF]. In order to take into consideration the visual regularity of a 3D mesh, we altered Daly's visual masking model by mapping the value of the visual regularity to the value of the slope using an S-shaped psychometric function [Wei51], s(r):
\mathrm{masking}(c, r) = \left(1 + \left(k_1 \cdot (k_2 \cdot c)^{s(r)}\right)^b\right)^{1/b} , \qquad s(r) = (1 - \delta) - (1 - \gamma - \delta) \cdot e^{-10^{\beta(-\log(r) - \varepsilon)}} , \qquad (4.24)

with c the normalized contrast, r the visual regularity and the fitted values k_1 = 0.015, k_2 = 392.5, b = 4, γ = 0.63, δ = -0.23, β = -0.12 and ε = -3.5.
Contrast Threshold
With the results of these psychophysical experiments, we can compute the contrast threshold similarly to the case of flat-shaded rendering. However, since we have carried out a more precise threshold measurement in the smooth-shaded rendering mode by taking into account more parameters (luminance and visual regularity), the computed threshold T is now a function of four variables:
T(c, f, l, r) = \frac{\mathrm{masking}(c \cdot \mathrm{csf}(f, l), r)}{\mathrm{csf}(f, l)} , \qquad (4.25)
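A sketch of the smooth-shading threshold model (Eqs. (4.23)-(4.25)) is given below. The fitted coefficients of Eq. (4.23) are not listed above, so the CSF falls back on Barten's published simplified approximation, which is an assumption of this sketch; the base-10 logarithm in s(r) is also assumed.

```python
import math

# Fitted masking parameters of Eq. (4.24).
K1, K2, B_EXP = 0.015, 392.5, 4.0
GAMMA, DELTA, BETA, EPS = 0.63, -0.23, -0.12, -3.5

def csf(f, l):
    """Luminance-dependent CSF of Eq. (4.23). The fitted a_i, b_i, c coefficients
    are not listed above, so Barten's simplified published approximation is used
    here instead (an assumption of this sketch)."""
    A = 540.0 * (1.0 + 0.7 / l) ** -0.2
    B = 0.3 * (1.0 + 100.0 / l) ** 0.15
    c = 0.06
    return A * f * math.exp(-B * f) * math.sqrt(1.0 + c * math.exp(B * f))

def slope(r):
    """S-shaped mapping from visual regularity r to the masking slope s(r),
    Eq. (4.24); a base-10 logarithm is assumed."""
    return (1 - DELTA) - (1 - GAMMA - DELTA) * math.exp(
        -10.0 ** (BETA * (-math.log10(r) - EPS)))

def masking(c_norm, r):
    """Regularity-aware masking function of Eq. (4.24)."""
    return (1.0 + (K1 * (K2 * c_norm) ** slope(r)) ** B_EXP) ** (1.0 / B_EXP)

def threshold(c, f, l, r):
    """Contrast visibility threshold T(c, f, l, r), Eq. (4.25)."""
    s = csf(f, l)
    return masking(c * s, r) / s

# Example: 4 cpd pattern at 110 cd/m2 in a moderately irregular region (r = 0.01).
print(threshold(0.02, 4.0, 110.0, 0.01))
```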
Discussion
The results of this experimental study of the visibility threshold on 3D meshes, and the relation between the defined local perceptual attributes (contrast, frequency and visual regularity) and the mesh properties (density, shape), give us some interesting insights about the behavior of this threshold. First, from the shape of the CSF and the relation between mesh density and spatial frequency, we can deduce that a 3D mesh becomes more sensitive to geometric distortions when its density increases from a low value, since it becomes easier for the human visual system to detect them. However, if the density of the model passes a certain value, then it becomes hard to detect the local geometric distortions, because at high spatial frequencies the sensitivity of the visual system with respect to contrast decreases. This result is in part in line with previous observations in computer graphics, where it was noted that coarse meshes are better at hiding compression artifacts than dense ones [START_REF] Sorkine | High-pass quantization for mesh encoding[END_REF]. This low sensitivity for coarse meshes will normally be located at curved coarse regions of a 3D mesh. This can be explained by two factors. The first is that the low frequency stimuli caused by the low density make the human visual system less sensitive to contrast. The second is the curved shape of the surface, which will potentially be reflected by high contrast values and thus create an important masking effect, where a big change in contrast would be needed to notice the difference. In addition, our results justify the relation between mesh roughness and noise visibility on which most existing model-based methods are based. In fact, complex visual patterns are more likely to be encountered in rough regions, which increases the visibility threshold due to the increasing slope of the masking function.
Moreover, while we have used Barten's CSF mathematical model, which is quite popular in the image/video processing communities, the sensitivity values and peak frequency positions that we have obtained are different from, for example, the ones computed with Daly's model [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF]. We think that this is due to the fact that the models used in image-based methods are usually fitted using data from experiments where the visibility threshold is measured using a continuous sinusoidal signal. This is also in accordance with the observations of [CHM + 12], in which the authors' main objective is to test the efficiency of perceptual image-based techniques in the case of computer generated images. They concluded that image metrics are too sensitive for evaluating the visibility of computer graphics distortions. This is probably due to the difference in the types of visual artifacts caused by a geometric operation on the 3D mesh compared to the ones produced by an image processing operation.
The defined perceptual attributes and the experimental study have allowed us to propose a model whose goal is to compute the contrast visibility threshold. However, this threshold does not reflect the amount of displacement a vertex can tolerate but it rather indicates the maximum change in contrast a distortion can cause before it becomes visible. In the next chapter we will present an algorithm that will compute the vertex displacement threshold using the contrast visibility threshold model presented in this chapter.
Chapter 5
Just Noticeable Distortion Profile
The just noticeable distortion (JND) profile refers to the threshold beyond which a change in contrast becomes visible for the average observer [START_REF] Lin | Computational models for just-noticeable difference[END_REF]. In the 3D setting, the JND is evaluated by computing the maximum displacement each vertex can tolerate. The displacement of a vertex in a given direction will probably cause a change in the normals of the adjacent faces and a change in the local density. This means that the displacement of a vertex probably alters the local perceptual properties (contrast, frequency), so that at some point the displacement becomes visible. In this chapter, we present a numerical method for computing the maximum displacement beyond which the local vertex distortion can be detected by the average human observer (Section 5.1). We then validate this computed threshold with a series of subjective experiments (Section 5.2).
Vertex Displacement Threshold
Visibility of a Vertex Displacement
The displacement of a vertex v_1 in a certain direction dir with a magnitude d alters the normals of the faces belonging to its one-ring neighborhood (Fig. 5.1). In addition, it will also cause a rotation of the normals of the vertices belonging to the one-ring neighborhood of v_1. As a result, if the 3D model is rendered in a flat-shaded mode, then this displacement is likely to alter the contrast and frequency of the surrounding pairs of adjacent faces (Fig. 5.1(b)). Similarly, in the case where the model is rendered with a smooth-shading algorithm, this vertex displacement also causes a change in contrast and frequency of the faces that have at least one vertex in the 1-ring neighborhood of v_1 (Fig. 5.1(c)). This alteration of the local perceptual attributes of a 3D mesh might cause the vertex displacement to be visible for a human observer. To estimate the visibility of a certain displacement, we start by evaluating the new normals of each of the affected faces. For instance, we express the new normal n_1 of the face {v_1, v_3, v_2} (see Fig. 5.1(a)) after displacing v_1 in a direction dir with a magnitude d by:
FIGURE 5.1: (a) Vertex displacement, (b) flat shading, (c) smooth shading.
\tilde{n}_1 = (v_1 - v_2) \times (v_3 - v_2) + d \cdot (dir \times (v_3 - v_2)) , \qquad n_1 = \frac{\tilde{n}_1}{\|\tilde{n}_1\|} . \qquad (5.1)
Since none of the vertices of the second face {v_2, v_3, v_4} in Fig. 5.1(a) is displaced, its normal direction does not change. We note that in the case of smooth shading, an additional step is required, which consists of computing the new normals of the 1-ring vertices using the new normals of the 1-ring faces. Having computed the new normals, we now evaluate the new contrast c' using the methods described in Sections 4.2.1 and 4.3.1, for respectively the set of affected face pairs in the case of a flat-shaded rendering and the set of affected faces in the case of a smooth-shaded rendering. The change in contrast Δc_i (Eq. (4.14)) along with the contrast threshold T_i is evaluated for each of the affected elements using respectively Eqs. (4.12) and (4.25) in the case of a flat or smooth shaded rendering. This change in contrast and the contrast threshold are then passed to a probability function (Eq. (4.13)), which outputs the visibility likelihood for each affected element. The visibility of the displacement of v_1 in a direction dir with a magnitude d is then computed as:
visibility(v 1 , dir, d) = max{p(Δc i , T i )}, (5.2)
where p(Δc i , T i ) is the likelihood of detecting the change in contrast for the i th affected pair of adjacent faces in the case of a flat shading or the i th affected face in the case of smooth shading. Ultimately, the displacement of a vertex is considered visible when the change in contrast in at least one of the elements affected by this displacement becomes noticeable. In the following, we will describe an algorithm that allows us to efficiently find the threshold beyond which the displacement of a vertex becomes visible.
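As a toy illustration, the following sketch evaluates the visibility of a displacement for a single flat-shaded face pair, combining Eqs. (5.1), (4.14), (4.13) and (5.2). The contrast threshold T of the pair is assumed to be precomputed with the model of Chapter 4, and the geometry of the example is arbitrary.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def flat_contrast(n1, n2, light):
    """Michelson contrast of a flat-shaded face pair (Eq. 4.5)."""
    L1, L2 = max(np.dot(light, n1), 0.0), max(np.dot(light, n2), 0.0)
    return 0.0 if L1 + L2 == 0 else abs(L1 - L2) / (L1 + L2)

def displacement_visibility(v1, v2, v3, v4, light, direction, d, T):
    """Detection probability when v1 moves by d along `direction`, for the single
    pair of faces {v1, v3, v2} and {v2, v3, v4} sharing the edge (v2, v3).
    T is the contrast threshold of this pair, assumed to be precomputed."""
    n1 = unit(np.cross(v1 - v2, v3 - v2))
    n2 = unit(np.cross(v4 - v3, v2 - v3))                        # unchanged face
    # New normal of the displaced face, Eq. (5.1).
    n1_new = unit(np.cross(v1 - v2, v3 - v2) + d * np.cross(direction, v3 - v2))
    c_before = flat_contrast(n1, n2, light)
    c_after = flat_contrast(n1_new, n2, light)
    # Convexity test of Eq. (4.14).
    flips = np.sign(np.dot(n1, v4 - v3)) != np.sign(np.dot(n1_new, v4 - v3))
    delta_c = c_after + c_before if flips else abs(c_after - c_before)
    # Detection probability, Eq. (4.13); the overall visibility is the maximum
    # over all affected elements (Eq. 5.2) -- a single pair in this toy example.
    return 1.0 - np.exp(-(delta_c / T) ** 3.5)

# Toy configuration: a nearly flat quad split into two triangles.
v1, v2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
v3, v4 = np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.05])
light = unit(np.array([0.3, 0.3, -1.0]))
print(displacement_visibility(v1, v2, v3, v4, light,
                              direction=np.array([0.0, 0.0, 1.0]), d=0.1, T=0.05))
```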
Evaluating the Vertex Displacement Threshold
In order to compute the threshold beyond which the displacement of a vertex v in a direction dir is visible, we proceed by the following steps. First, a list of the elements whose contrast is affected by the displacement is built (adjacent pairs of faces in the case of flat shading, and faces having a vertex in the 1-ring neighborhood of v in the case of smooth shading). For each affected element, we start by computing its original perceptual properties (contrast, frequency and visual regularity) and the corresponding contrast threshold using Eq. (4.12) in the case of flat shading and Eq. (4.25) in a smooth shading setting. Then we gradually increase the displacement magnitude of v and compute its visibility as described in Section 5.1.1. Note that when the displacement causes a change in spatial frequencies (e.g., in the case of a displacement in the local tangent plane of the vertex), we take into account the most sensitive frequency, i.e., the one that results in a higher detection probability. Finally, the threshold is attributed to the magnitude where the vertex displacement visibility reaches a certain threshold. In practice we set the probability threshold to 0.95. To better understand this process, let us consider the two vertices v_1 and v_2 in Fig. 5.2 in a flat-shaded setting. Both vertices are displaced in their normal direction. The first vertex v_1 is situated on a rough region (initial contrast of all surrounding pairs of adjacent faces > CSF threshold) and the second vertex v_2 on a smooth region (initial contrast < CSF threshold). The displacement of v_1 and v_2 barely affects the spatial frequency of the surrounding face pairs, as can be seen in the plots of the first row. The plots in the second row show how displacing v_1 and v_2 in the normal direction affects the local contrast. The probability of detecting this change in contrast is shown in the plots of the third row. These plots show that v_2 is more sensitive and can tolerate less displacement than v_1. This is due to the different initial contrasts of the two vertices. The initial contrast around v_1 is above the CSF threshold. This implies that the visibility threshold is increased due to the masking effect, which explains the slow increase in detection probability. For v_2 all initial contrasts are below the CSF threshold. No masking should occur, which means that once the contrast is above the CSF threshold the displacement should be visible. This is exactly what we observe: when the contrast of "face pair 4" reaches the CSF level, the detection probability becomes close to 1. For the case of smooth shading, the exact same process is used to compute the displacement threshold. The only change in this case resides in the method used to compute the local perceptual properties and the contrast threshold.
In the description above, we explained how to compute the displacement threshold by a brute-force incremental search only for clarity purposes. However, it is important to note that as the vertex displacement increases, the contrast difference always increases as well. In addition, the psychometric function used to compute the probability of detecting a change in contrast (Eq. (4.13)) is a monotone function. Therefore, in our implementation, we instead use a half-interval search algorithm to find the threshold (as described in Algorithm 1), which is simple yet efficient; a generic sketch of such a search is given at the end of this section.

The set of all possible light directions belongs to the local sphere around a vertex. However, the local contrast is well defined only when the dot product between the light direction and the normals is positive, since otherwise its value is 0. This means that the set of all possible lights can be reduced to the local half sphere in the direction of the unit normal. Furthermore, we can intuitively say that the local contrast increases as the light direction gets close to the local tangent plane. This means that if the light direction is close to the base of the local half sphere, then a small displacement of a vertex will most likely cause a big change in contrast, which might be visible. On the contrary, if the light direction is close to the normal direction of the displaced vertex, then even a big vertex displacement can only cause a small change in contrast, which makes the displacement threshold value high. Figure 5.3 shows the vertex displacement threshold obtained for different light directions belonging to the half sphere of a vertex v. As previously explained, we notice that as the light direction approaches the base of the half sphere, the threshold gets smaller. This implies that the worst possible illumination is most of the time found near the base of the half sphere. Therefore, it is actually not necessary to densely sample the half sphere in order to obtain an accurate solution. It is more efficient to concentrate the light samples near the base of the local half sphere.
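Algorithm 1 is not reproduced here, so the following is only a generic half-interval search consistent with the description above; the upper bound and precision are illustrative parameters, while the target probability of 0.95 is the one stated in the text.

```python
import math

def displacement_threshold(visibility, x_max=1.0, x_precision=1e-5, p_target=0.95):
    """Half-interval search for the smallest displacement magnitude whose
    visibility (a monotone function of the magnitude, Eq. 5.2) reaches p_target."""
    lo, hi = 0.0, x_max
    if visibility(hi) < p_target:      # even the largest displacement stays invisible
        return x_max
    while hi - lo > x_precision:
        mid = 0.5 * (lo + hi)
        if visibility(mid) >= p_target:
            hi = mid                   # visible: the threshold lies below mid
        else:
            lo = mid                   # invisible: the threshold lies above mid
    return hi

# Example with a synthetic monotone visibility curve (Eq. 4.13 with T = 0.01).
print(displacement_threshold(lambda d: 1 - math.exp(-(d / 0.01) ** 3.5)))
```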
Results
By computing the vertex displacement threshold relative to a certain direction for each vertex of a 3D mesh, we obtain the just noticeable distortion profile. Figure 5.4 shows the JND profile for a mesh under various circumstances. Figure 5.4(a) displays the JND profile relative to a displacement in the normal direction in a light-independent mode. Due to the effects of contrast masking, the rough region of the model can tolerate more noise than the smooth part. This is not the case when the JND is computed relative to a displacement in the tangent direction (Fig. 5.4(b)), where the smooth part can tolerate more displacement. This is because a displacement in the tangent plane for a vertex in a smooth region will barely alter the normals of the surrounding faces, and thus the local contrast will not be affected by the displacement, leading to a higher displacement threshold than in a rough region. We note here that in the case of a displacement in the tangent plane we make sure that the computed displacement threshold does not cause a self-intersection. Figure 5.4(c) shows the JND profile relative to a displacement in the normal direction when the light source is fixed at the viewpoint.
As expected, we can see that the obtained threshold is maximal when the surface normals are aligned with the light direction, as the contrast then increases more slowly than when the light direction is close to the local tangent plane.
Figure 5.5 presents side by side the JND profile for the Bunny model in a smooth shading mode and in a flat shading mode. In general, we have observed that the displacement threshold relative to a smooth-shaded rendering is 5 to 10 times bigger than the one relative to a flat-shaded rendering. This difference is due to the way the surface normals and the contrast are computed in each mode. In a flat shading mode, the contrast is computed using the normals of a pair of faces, while in a smooth shading mode the contrast is evaluated using the interpolated vertex normals within a triangular face. In general, the displacement of a vertex causes a bigger rotation of the normal direction of the faces adjacent to the displaced vertex in a flat-shaded rendering, compared to the rotation of the normal direction of the vertices in the 1-ring neighborhood computed in a smooth-shaded rendering. In consequence, for the same displacement magnitude, the change in contrast is higher in a flat shading mode than in a smooth shading mode, which explains the lower visibility threshold for the former.
Figure 5.6 compares the vertex displacement threshold profiles when the resolution of the 3D model changes. In general, as the density of a triangular mesh decreases, the magnitude of the displacement that a vertex can tolerate increases. In a dense mesh, the triangular faces are small compared to the ones belonging to a coarse mesh. As a result, the rotation of the normals caused by a vertex displacement of the same magnitude is larger for a dense mesh. This means that, in general, as the displacement increases, the contrast varies more slowly in a low resolution mesh. In addition, the difference in density also affects the spatial frequency of the visual stimulus, which further affects the visibility of a vertex displacement. As can be deduced from the contrast sensitivity properties of the visual system modeled by the CSF, when the density of a 3D mesh increases from a low value it becomes easier for the visual system to detect the change in contrast caused by the displacement of a vertex. Both of these reasons make the vertex displacement threshold in high resolution models, in general, smaller than the threshold in low resolution models.
Finally, in Fig. 5.7 we show how the intensity of the scene's illumination can affect the vertex displacement threshold, in the case where a 3D scene is illuminated by a point light whose energy decreases proportionally to the distance between the light source and the illuminated object. If an object is far from the point light source (Fig. 5.7(b)), the global luminance around that object is reduced, which causes an increase in the magnitude of the vertex displacement threshold. This boost in the value of the visibility threshold is mainly due to the CSF, which describes a reduction in contrast sensitivity when the global luminance is low.
Performance Analysis
Here we present some theoretical and practical results concerning the accuracy and the execution time of the vertex displacement threshold computation.

Threshold Accuracy

In a light-independent mode, we compute the vertex displacement threshold relative to several light directions sampled from a local half sphere around a vertex. We have observed that the algorithm begins to converge to an accurate displacement threshold value with 8 samples, as can be seen in Fig. 5.8, where the root mean square error (RMS), computed with regard to the displacement threshold obtained with 64 light direction samples (shown in the leftmost corner of each graph), starts to stabilize beyond this point. In practice, we have used a 12-point sampling, similar to the one in Fig. 5.3 (excluding the point (0, 0)), which ensures a very good trade-off between threshold accuracy and algorithm speed according to our tests.

Theoretical Computational Complexity
A theoretical analysis of the proposed algorithm for computing the vertex displacement threshold shows that the complexity of computing the light-independent mode threshold for one vertex is:
O\left(L \times \log\frac{x_{max}}{x_{precision}}\right) , \qquad (5.3)
where L is the number of light samples and x max and x precision are respectively the upper displacement bound and the precision used in the half-interval search algorithm (Algorithm 1 of this chapter). This means that the complexity for computing this threshold for an entire mesh is:
O\left(V \times L \times \log\frac{x_{max}}{x_{precision}}\right) , \qquad (5.4)
where V is the number of vertices. This shows that the execution time increases linearly with the number of vertices at a rate relative to the number of light samples and the precision of the search procedure.
Execution Time
Having adopted a half-interval search algorithm makes finding the JND threshold a very efficient operation. On average, computing the vertex displacement threshold for a vertex in the light-independent mode takes about 7 × 10^{-4} s. We have used an HP EliteBook 8570w with an Intel i7-3740QM CPU (4 cores / 8 threads) and 16 GB of RAM for our computations. As suggested by Eq. (5.4), when the number of vertices or light samples increases, we have observed that the execution time increases approximately linearly (Fig. 5.9). Figure 5.9 also shows that the computation of the vertex displacement threshold for a flat-shaded rendering is faster than for a smooth-shaded one. For instance, for a model with approximately 200k vertices, the computation of the vertex displacement threshold on the entire mesh took about 50 s in a flat shading mode and about 95 s in a smooth shading mode. This difference in computation time is due to the different normal update after each displacement iteration. In the case of a flat shading mode, the update of the normal direction of the 1-ring adjacent faces is straightforward, using Eq. (5.1). However, in a smooth shading mode, the normal update operation requires an additional step, which consists of evaluating the normals of the vertices in the 1-ring neighborhood. In addition, since the computed displacement threshold of a vertex is independent of the thresholds of the other vertices, this operation can be computed on the entire mesh in a parallel way (with respect to each vertex). Using OpenMP and 8 threads, the algorithm performs about three to five times faster. For a model with 237k vertices, the vertex displacement threshold can be computed in about 18 s instead of 52 s.
Subjective Validation
In order to test the performance of a Just Noticeable Distortion profile, it is common in the image or video JND context to perform a subjective experiment where the visibility of a JND modulated random noise added to a series of images or videos is rated by several subjects [LLP + 10, ZCZ + 11, WSL + 13]. A JND model should be able to maximize the amount of noise injected into the image or video while keeping it invisible; the best JND model being the one that is able to add the largest amount of invisible noise. We have conducted a series of subjective experiments where we have tested the performance of the proposed JND model in the flat and smooth shaded settings. We have compared the visibility and the quantity of vertex noise on altered 3D meshes, which were obtained by modulating the vertex noise in three different ways. The three types of noise modulation are:
• uniform random noise, i.e., without any modulation;
• random noise modulated by the surface roughness;
• random noise modulated by the proposed JND model.

Surface roughness is an important candidate to test our JND model against since it is accepted in the computer graphics community that noise is less visible in rough regions [START_REF] Lavoué | A local roughness measure for 3D meshes and its application to visual masking[END_REF]. As discussed in the previous chapter, this observation can also be justified through the results of our experimental study.
Mesh Alteration
We injected noise into 3D meshes according to the following equation:
v'_i = v_i + rnd \times M(v_i) \times dir_i , \qquad (5.5)
where v_i is the i-th vertex of the initial mesh and v'_i is the corresponding noisy vertex. dir_i is the noise direction, rnd is a random value equal to either +1 or -1 and M(v_i) represents the magnitude of the noise for v_i. It is defined as:
M(v_i) = \begin{cases} \beta_{unif} & \text{uniform noise,} \\ \beta_{rough} \times lr(v_i) & \text{roughness-modulated noise,} \\ \beta_{jnd} \times jnd(v_i) & \text{JND-modulated noise,} \end{cases} \qquad (5.6)
where β_unif, β_rough and β_jnd regulate the global noise energy for each of the noise injection methods. lr(v_i) is the local surface roughness as defined in [START_REF] Wang | A fast roughness-based approach to the assessment of 3D mesh visual quality[END_REF] and jnd(v_i) is the JND value computed as explained in Section 5.1. In order to allow user interaction during the experiments, the JND value was computed independently from any light direction. It is important to note that in the case of JND-modulated noise, the value β_jnd of the global noise energy has a direct implication for the visibility of the injected noise: if β_jnd > 1 the noise injected into the 3D mesh should be visible, while if β_jnd < 1 the noise should remain undetected by the human observer. The following mesh models were used in our experiments:
• Bimba is a coarse model with a smooth and a rough part;
• Horse is a smooth model with a varying vertex density. Its head is densely sampled while its body is coarse.
• Lion-vase is a dense model with a mix of smooth and rough surfaces.
• Venus is a dense model with a smooth surface.
• Dinosaur is a dense model with a rough surface.
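The noise-injection procedure of Eqs. (5.5) and (5.6) can be sketched as follows. This Python fragment is only an illustration of the equations, not the code used for the experiments; local_roughness and jnd_threshold are hypothetical callbacks standing in for the roughness measure and the JND profile described above.

    import numpy as np

    def inject_noise(vertices, directions, mode, beta,
                     local_roughness=None, jnd_threshold=None):
        """Return a noisy copy of `vertices` following Eqs. (5.5)-(5.6)."""
        noisy = vertices.copy()
        for i, (v, d) in enumerate(zip(vertices, directions)):
            rnd = np.random.choice([-1.0, 1.0])          # random sign in Eq. (5.5)
            if mode == "uniform":
                magnitude = beta                         # beta_unif
            elif mode == "roughness":
                magnitude = beta * local_roughness(i)    # beta_rough * lr(v_i)
            elif mode == "jnd":
                magnitude = beta * jnd_threshold(i, d)   # beta_jnd * jnd(v_i)
            else:
                raise ValueError("unknown noise modulation mode")
            noisy[i] = v + rnd * magnitude * d           # v'_i = v_i + rnd * M(v_i) * dir_i
        return noisy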
Validation of Flat-Shaded Vertex Displacement Threshold
Inspired by the literature on validation of image JND profiles, we have performed two subjective experiments whose goal is to test the accuracy of the computed vertex displacement threshold in the flat-shaded setting.
Experiment 1
The goal of the first experiment is to rate the visibility of noise on 3D models according to several noise modulation types and two global energy levels (β_jnd = 1 and β_jnd = 2). These levels correspond to a near-threshold noise and to a suprathreshold noise, respectively. For β_jnd = 1 the injected noise is supposed to be difficult to notice, while for β_jnd = 2 the noise is expected to be visible. We then fix β_unif and β_rough such that, for each noise level, the meshes altered using our JND model have the largest maximum root mean square error (MRMS) [CRS98, ASCE02], a widely used purely geometric distance, among the three methods. Indeed, the objective here is to show that our JND model is able to inject the highest amount of noise onto the mesh among the three methods, while producing the least visible one. In addition, we tested the performance of the JND model both for noise in a random direction for each vertex and for noise in the normal direction of each vertex. To see the effects of light direction, we ran the experiment twice: once with the light source in front of the model and another time with the light at the top left of the model.
Experimental Protocol
The first subjective experiment followed the adjectival categorical judgement method [START_REF]Methodology for the Subjective Assessment of the Quality of Television Pictures[END_REF]. This procedure consists of displaying two 3D meshes side by side, the reference on the left and the noisy one on the right. The participants were asked to rate the visibility of the noise on a discrete scale from 0 to 5, 0 being the score attributed when the noise cannot be seen and 5 when the noise is clearly visible. 5 "dummy" models were included at the beginning of each session to stabilize subjective scores. The models were presented in a randomized order. To avoid any memory-based bias, two meshes derived from the same reference model were never displayed consecutively.
The experiment was conducted in a low illuminated environment. We used a 23-inch Asus screen with a 1920 × 1080 resolution to display the 3D models. The participants viewed the models from a distance of 50 cm. During the experiment, the two displayed meshes had a synchronized viewpoint and subjects could freely rotate around the displayed meshes. To encourage close examination of the displayed mesh, no score could be registered before 10 seconds of interaction had occurred. The initial viewpoint was manually set for all models. The light source was fixed with reference to the camera position. Front and top-left light directions were used. 12 subjects participated in these experiments. All of them had normal or corrected-to-normal vision and were between the age of 20 and 29.
Results After collecting the subjective scores, we have computed the mean score over each of the noise types. "JND 1" and "JND 2" refer to the models obtained by modulating the random noise with our JND model for near-threshold and suprathreshold levels, respectively. "Rough 1" and "Rough 2" refer to the ones obtained using the surface roughness measure and "Unif 1" and "Unif 2" to the ones with uniform random noise. Figure 5.10 displays the results of the subjective experiments. Plots (a) to (c) present the results for the noise in the normal direction and plots (d) to (f) the results for the noise in a random direction. Figures 5.10(a) and 5.10(d) show that the noise on the "JND 1" models was indeed difficult to detect as the mean subjective score is about 0.45. Interestingly, the participants rated "Unif 1" and "Rough 1" models similarly to "JND 2", which refers to the suprathreshold noise level models that contain approximately twice the noise of "Unif 1" and "Rough 1". Plots (b) and (e) also show that "JND 1" models were perceived almost identically under both front and top-left illumination conditions. This is not the case for "Unif 1" and "Rough 1" models, where the grazing light direction of the top-left illumination made the noise more apparent. It is also important to note that the visibility of the noise for "JND 1" models was almost identical for all models. This is not the case for "Rough 1" and "Unif 1", where the visibility of noise varied a lot between models (see Figs. 5.10(c) and 5.10(f)). This is mainly due to the difference in mesh density between the models; high density models are in general more sensitive to noise than low density ones.
These results show that the proposed JND model is indeed able to add the largest amount of invisible noise onto the mesh surface among the three methods. Furthermore, the proposed JND model can accurately predict the visibility threshold for 3D meshes, taking into account the noise direction, the mesh characteristics and the scene illumination. However, the proposed model cannot accurately describe how the supra-threshold noise visibility (or annoyance) is perceived since it has not been designed for this purpose; the noise was perceived differently for each model in "JND 2" (Figs. 5.10(c) and (f)).
Experiment 2
The first experiment showed that the models with a JND modulated noise were rated the lowest on the visibility scale and could tolerate the biggest amount of distortions. In the following experiment we measure the global noise energy threshold beyond which the injected noise becomes visible for a 3D model. The idea behind this experiment is to find the minimum noise intensity (β_unif, β_rough and β_jnd) starting from which the participants notice the noise in the model and then compare their respective MRMS value. The JND modulated noise should have the highest amount of geometric distortion which is an indication that the proposed JND model is capable of effectively hiding a large amount of noise.
Experimental Protocol For this second experiment, we have adopted the same experimental procedure that we used to measure the local contrast threshold in the studies of contrast sensitivity and visual masking (see Chapter 4). Two models were displayed on the screen, one of which had noise injected. The subjects had to answer by either Yes or No whether they saw the noise on one of the models. The intensity of the noise (β_unif, β_rough and β_jnd) was then adjusted using the QUEST procedure [START_REF] Watson | QUEST: a Bayesian adaptive psychometric method[END_REF]. The subjects were allowed to interact with the displayed models by rotating the camera around them. 5 new subjects participated in the experiment.
Results Table 5.1 and Fig. 5.11 display the results of this subjective experiment. Table 5.1 shows the mean measured intensity required to make JND modulated noise visible on a 3D mesh. We see that the measured β_jnd is close to 1 for all of the models, meaning that the proposed JND profile is able to accurately detect the threshold beyond which a noise is visible. Figure 5.11 shows that the MRMS value of the mesh model with JND modulated noise of just noticeable level is higher than those of the corresponding models with uniform noise or roughness modulated noise at the same visibility level. This means that the JND model is able to tolerate the highest amount of noise among the three candidates, which is what we expected.
Validation of Smooth-Shaded Vertex Displacement Threshold
In order to test the accuracy of the computed vertex displacement threshold in the smooth-shading setting, we have performed a subjective experiment similar to the second one of the flat-shading setting presented in Section 5.2.2.2. The goal here is to measure the global noise energy threshold beyond which the injected noise becomes visible on the 3D model.
Experimental Protocol
We have followed the same protocol as described in Section 5.2.2.2. 12 new subjects participated in the experiment. The experiment was carried out with two lighting conditions: front directional light and front point light whose energy decreases proportionally to the square of the distance to the model.
Results
Table 5.2 and Fig. 5.12 show the results of this subjective experiment. It is important to note that the value of the global noise intensity relative to the JND (β_jnd) is on average close to 1 for both types of illumination, which indicates that the perceptual model was able to accurately predict the vertex displacement threshold and adapt to the change in illumination conditions. Additionally, plots (a) and (b) in Fig. 5.12 show the MRMS (maximum root mean square error [START_REF] Cignoni | Metro: Measuring error on simplified surfaces[END_REF]) value of each model for the three noise intensity types. In all cases, the JND modulated models have the highest MRMS value, indicating that our perceptual model is able to inject the highest amount of tolerable noise into the meshes. In addition, we point out that the models illuminated with the point light can tolerate more noise than the ones illuminated with the high energy directional one, since the far distance of the point light reduces the global luminance of the scene and thus reduces the sensitivity to contrast. We also note that the MRMS value of the models relative to the measured global energy threshold is higher when using a smooth-shaded rendering mode than under a flat-shaded rendering mode. The reason is that the change in contrast relative to the displacement magnitude increases more slowly for smooth-shaded surfaces. On average we have observed that smooth-shaded surfaces can tolerate 5 to 10 times (depending on the mesh's properties) more displacement noise than flat-shaded ones.
Further Comparisons and Examples
The series of subjective experiments that we have performed has shown that the proposed vertex displacement threshold algorithm is capable of accurately finding the maximum displacement magnitude a vertex can tolerate. This allows our perceptual JND model to inject the largest amount of tolerable noise onto a 3D mesh compared to modulating the vertex displacement magnitude with surface roughness or using a uniform displacement magnitude. The main advantage that our proposed JND model has over surface roughness measures is that it adapts to the mesh characteristics (density, size), the noise direction and the scene illumination.

Figure 5.13 illustrates the visibility of vertex noise for three versions of a 3D model with the same RMS value, injected with respectively a JND modulated noise, a roughness modulated noise and a uniform noise, in a smooth shaded setting and a flat shaded setting. In the case of the Venus model (Fig. 5.13(a)), a roughness modulated noise concentrates the vertex distortions in the rough parts of the model and neglects its smooth parts, while a uniform noise blindly displaces all the vertices of the model by the same amount, making the noise visible in the smooth areas first. On the other hand, the JND model takes advantage of all the vertices of the 3D mesh by adapting the magnitude of a vertex displacement to the local perceptual properties of this vertex, thus allowing a greater quantity of invisible noise to be added. The Horse (Fig. 5.13(b)) is a model with mostly smooth regions; the rough regions are packed in the head's features. In addition, the head is densely sampled while the body is coarsely sampled. The JND model avoids adding noise in the dense head and takes advantage of the coarse body, while surface roughness measures are not able to detect the difference in sampling. The noise is thus rather injected in the dense head features, which makes it visible.

By changing the perceptual model used to compute the contrast threshold (Eqs. (4.12) and (4.25)), the computed vertex displacement threshold can adapt to either a flat shaded rendering or a smooth shaded rendering. For instance, in Fig. 5.14, the vertex displacement threshold computed for the smooth shaded mode will cause the JND modulated noise to be extremely visible if rendered with a flat shading algorithm. As we mentioned earlier, a 3D model rendered with a smooth shading method is capable of tolerating up to 10 times the amount of noise compared to a flat shading rendering.
Discussion
Computing the Vertex Displacement Threshold for an Interactive Scene
In an interactive scene, the user can manipulate the model displayed on the screen. Therefore, the light direction and the position of the model are likely to change over the course of the viewing session. Changing the light direction will change the local contrast. In addition, changing the position of the model can lead to a change in either local contrast or local spatial frequency. If the light is fixed relative to the viewpoint and the model is rotated or translated along the X and Y axes of the view frame, i.e., the distance from the camera is not changed, then this will cause a change in the angles between the surface normals and the light direction. This is the same as fixing the model and changing the light direction, and therefore causes a change in contrast. If the model is scaled or translated along the Z axis of the view frame, i.e., the distance from the camera is changed, then this will cause a change in the perceived spatial frequency as the size of the visual stimulus is altered. Regardless of the situation, a varying local contrast and/or local spatial frequency will change the threshold beyond which a displacement becomes visible.
First, to account for the change in contrast caused by changing the light direction with respect to the surface normal, either by fixing the light and changing the model's position or by fixing the model and changing the light direction, we compute the JND profile in the light independent mode. As explained in Section 5.1.2.1, this is done by computing the displacement threshold for a number of light directions sampled from a hemisphere around the local vertex and then choosing the lowest displacement value. The idea here is that the light independent threshold corresponds to the one relative to the "worst possible" light configuration. Figure 5.15 shows the Venus model injected with a vertex noise equivalent to the JND level computed with a light dependent and a light independent mode. Notice how the noise relative to the light independent threshold remains invisible, while the one related to the light dependent mode becomes visible when the light is changed from a front direction (first row) to a top-right one (second row).
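A minimal sketch of this worst-case computation is given below. It is illustrative only: threshold_for_light stands for the light-dependent threshold computation described in Section 5.1, and the uniform hemisphere sampling used here is an assumption; any sampling strategy could be substituted.

    import numpy as np

    def light_independent_threshold(vertex, direction, normal,
                                    threshold_for_light, n_samples=32):
        """Worst-case (minimum) displacement threshold over sampled light directions."""
        best = float("inf")
        for _ in range(n_samples):
            # draw a random unit vector and flip it into the hemisphere around the normal
            light = np.random.normal(size=3)
            light /= np.linalg.norm(light)
            if np.dot(light, normal) < 0.0:
                light = -light
            best = min(best, threshold_for_light(vertex, direction, light))
        return best   # threshold for the "worst possible" light configuration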
Second, if in the interactive session the user is allowed to change the distance between the viewpoint and the object, by either changing the view distance or scaling the model, then the perceived spatial frequency is affected. Similarly to the first case, computing a threshold that works with any distance boils down to computing a threshold that works with the worst distance. Since a change in distance (or scale) affects only the spatial frequency, computing the JND relative to the worst possible frequency results in a threshold that is adaptive to various distances (or scales). This worst possible frequency is the one corresponding to the peak of the CSF curve, since it is by definition the most sensitive frequency (i.e., the frequency with the lowest visibility threshold). According to our experimental study, we have found that the CSF peaks at around 3.5 cpd under smooth shading. Figure 5.16 shows the Venus model injected with a vertex noise at the JND threshold level computed according to the most sensitive frequency and with a JND computed under a certain fixed distance. The Venus model is a dense mesh, therefore the frequencies relative to the first row of Fig. 5.16 are high (≈ 10 cpd). As the camera approaches the model, the size of the visual stimuli becomes bigger and therefore the frequency decreases. This means that as the model becomes closer to the camera, the frequency becomes more sensitive (according to the CSF) and thus the value of the visibility threshold should be reduced. Indeed, when injecting in the model a vertex noise relative to a JND computed according to the original distance, it becomes visible when zooming in on the model (see Fig. 5.16(c)). On the contrary, when computing the JND according to the most sensitive frequency, the noise remains invisible regardless of the distance since the worst possible distance has already been taken into account (see Fig. 5.16(b)).
Comparison with an Image-Based Method: HDR-VDP2 [MKRH11]
In this section we compare the proposed vertex displacement threshold method to the HDR-VDP2 [START_REF] Mantiuk | HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions[END_REF] image-based method. HDR-VDP2 is a popular perceptual metric that extends Daly's VDP to HDR images, and its code is freely available at http://hdrvdp.sourceforge.net/wiki/. Figure 5.17 shows the visibility map given by the HDR-VDP2 method when comparing images of a reference Lion model with a distorted one from various viewpoints. The distorted model was obtained by injecting in the Lion model a vertex displacement noise that is below the threshold given by our method. To be more precise, the noise was injected according to the method described in Section 5.2.1 of this manuscript with β_jnd = 0.85. Therefore, according to our method the injected noise should be invisible, which is the case if we visually compare the first two rows of Fig. 5.17. However, when looking at the visibility map given by the HDR-VDP2 method, we notice that the distortion visibility is particularly exaggerated in the rough regions of the Lion model.

This exaggeration of distortion visibility on surface regions with high roughness and curvature can be attributed to a limitation of image-based approaches for estimating visibility. A local geometric distortion changes the geometry of the object, which is reflected in a slight change of pixel positions in addition to the change in contrast. This change in pixel positions is considerable when the surface is curved, because it can cause a local displacement of pixels without changing the global shape. Since in image-based methods the perceptual analysis is in general done on a per-pixel basis, these local pixel displacements caused by the geometric distortions are detected as being of high amplitude, leading to an exaggeration in visibility. Furthermore, image-based methods such as HDR-VDP2 have an intrinsic limitation compared to our method: it is necessary to generate the rendered 2D image before applying any perceptual analysis. This ultimately makes the integration of these methods into geometric operations a more complicated task, and such methods also mix the visual effects of geometric distortions with rendering noise. Therefore we think that image-based methods are more suited for evaluating and/or guiding rendering algorithms rather than geometric operations.
Summary
In this chapter we have presented an algorithm that uses the perceptual model presented in Chapter 4 in order to compute the maximum displacement a vertex can tolerate before becoming visible. This algorithm takes as parameters the display setting (i.e., screen resolution, size and brightness), the scene's illumination and the rendering method (flat or smooth shading). The computed vertex displacement threshold therefore adapts to these various parameters. As demonstrated in our subjective validations, the proposed perceptual model can effectively guide the injection of vertex noise into the 3D mesh while keeping it invisible. This can have a direct application in 3D watermarking and geometric compression algorithms (e.g., vertex coordinates quantization), as their performance usually relies on the degree of tolerable change in vertex coordinates. In the following chapter we showcase how the vertex displacement threshold and the proposed perceptual models can be integrated into various mesh processing operations. More precisely, we use the computed vertex displacement threshold to detect the optimal vertex quantization level for a given mesh and to guide the simplification process, and we finally use the perceptual models, i.e., the CSF, to perform an adaptive subdivision of a coarse 3D model.
Chapter 6
Applications
The JND models of 2D images and videos have been used extensively throughout the literature to guide and perceptually optimize several image and video processing algorithms [START_REF] Chou | A perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile[END_REF][START_REF] Liu | JPEG2000 encoding with perceptual distortion control[END_REF][START_REF] Wei | Spatio-temporal just noticeable distortion profile for grey scale image/video in DCT domain[END_REF]. In this chapter, we show how the proposed JND profile as well as the perceptual models can be integrated into several mesh processing algorithms. First, we use the JND profile to compare the global visibility of the geometric distortions caused by a vertex coordinates quantization operation and then use this comparison to automatically select the optimal vertex coordinates quantization level for a given mesh. Second, we use the computed vertex displacement threshold to guide the simplification of 3D meshes. Finally, we integrate the CSF model into an adaptive mesh subdivision pipeline.
Automatic Selection of Optimal Vertex Quantization Level
Vertex coordinates quantization is an important step in many mesh processing algorithms, especially in the case of 3D mesh compression. This operation introduces geometric distortions onto the original mesh that might be visible to a human observer. It would thus be useful to evaluate whether a vertex quantization noise is visible or not. This would allow us to find the optimal quantization level (in bits per coordinate, bpc) for any mesh. We consider the optimal quantization level to be the one with the lowest bpc, i.e., the highest noise energy, for which the quantization noise remains invisible: the distorted mesh should remain visually indistinguishable from the original one. It is important to note that the optimal quantization level is different for each mesh due to differences in geometric complexity, details and density.
The proposed JND model can provide a simple and automatic way to determine the optimal quantization level. The idea is to compute a global score which compares the model's JND profile to the magnitude of introduced noise. To do so, we start by computing the local displacement vectors caused by the quantization operation as:
d_i = v'_i − v_i ,    (6.1)
where v'_i and v_i are the i-th vertices of respectively the distorted mesh and the original one. The direction of d_i represents the noise direction relative to the i-th vertex. We then compute the JND profile of the original mesh with respect to the computed displacement direction at each vertex. This allows us to evaluate the visibility of the vertex displacement by comparing its magnitude to the computed displacement threshold as follows:
r_i = ||d_i|| / jnd(v_i, d_i/||d_i||) ,    (6.2)
where jnd(v_i, d_i/||d_i||) represents the vertex displacement threshold of v_i in the direction of the local noise displacement d_i. Finally, we aggregate the local ratio values into a global visibility score using a Minkowski pooling technique as:
S = ( (1/n) Σ_{i=1}^{n} r_i^p )^(1/p) ,    (6.3)
where n is the number of vertices in the mesh and p = 2 is the Minkowski power. This score allows us to test whether the distortion introduced by the vertex quantization operation is globally visible. If S ≤ 1, the noise magnitude is globally below the visibility threshold, which means that the distortion is not visible. On the contrary, if S > 1, the distortion becomes visible as the noise magnitude is in general above the visibility threshold. The optimal quantization level then corresponds to the one with the lowest bpc for which the global visibility score remains below 1.

Figure 6.1 shows the global perceptual score versus the level of coordinates quantization for three meshes. According to the defined score, the optimal quantization level is 10 bpc for the Lion model, 11 for the Rabbit model and 12 for the Feline model when rendered with a smooth shading algorithm. When the models are displayed with a flat shading algorithm, the optimal quantization level, computed with the JND in flat shading mode, is 13 bpc for the Feline model and 12 bpc for the Rabbit and Lion models. This shows how our proposed JND profile can adapt to the shading algorithm, as the optimal quantization level adjusts accordingly. In general, the lower optimal quantization level for the smooth shading algorithm indicates that it is less sensitive to vertex noise than flat shading rendering. This is consistent with the results of our subjective experiments, where we noted that the vertex displacement threshold was higher for smooth shading than for flat shading (see Section 5.2). These results are compatible with human observation, as shown in Figs. 6.2 and 6.3. In addition, the proposed global perceptual score can adapt to different circumstances of mesh usage such as view distance, light energy and mesh resolution (Fig. 6.4(c)). By reducing the resolution of the Feline model, the optimal quantization level goes down from 12 bpc to 9 bpc, while a distant view or a low energy light makes the optimal quantization level become 11 bpc.

By contrast, we cannot obtain all these results by thresholding the output of state-of-the-art mesh perceptual metrics used in computer graphics such as MSDM2 [START_REF] Lavoué | A multiscale metric for 3D mesh visual quality assessment[END_REF] and HDR-VDP2 [MKRH11] (Fig. 6.5). In particular, the output of MSDM2 remains the same under different light energies, viewing distances and renderings (flat or smooth shading) as it relies entirely on geometric attributes in its perceptual analysis. As for HDR-VDP2, its score is exaggerated for low-resolution meshes, indicating that it was not able to accurately adapt to the change in mesh density. This is probably due to the fixed pixel window width used to perform the perceptual analysis, suggesting that the visibility of all distortions is evaluated relative to nearly the same spatial frequency. This is not necessarily the case for a 3D mesh, where the vertex density might be variable, leading to the presence of visual distortions of different spatial frequencies.
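A minimal sketch of this global visibility score (Eqs. (6.1)-(6.3)) and of the resulting selection of the quantization level is given below. It is illustrative only: jnd_threshold and quantize are hypothetical callbacks standing for the JND profile and the coordinate quantization, and the loop over candidate bpc values is an assumption about how the selection could be automated.

    import numpy as np

    def visibility_score(original, distorted, jnd_threshold, p=2):
        """Global visibility score S of Eq. (6.3); S <= 1 means globally invisible."""
        ratios = []
        for i, (v, v_prime) in enumerate(zip(original, distorted)):
            d = v_prime - v                                     # Eq. (6.1)
            mag = np.linalg.norm(d)
            if mag == 0.0:
                ratios.append(0.0)                              # no displacement at this vertex
            else:
                ratios.append(mag / jnd_threshold(i, d / mag))  # Eq. (6.2)
        return float(np.mean(np.array(ratios) ** p) ** (1.0 / p))  # Eq. (6.3)

    def optimal_quantization_level(vertices, quantize, jnd_threshold, levels=range(8, 17)):
        """Lowest bpc whose quantization noise stays globally invisible."""
        for bpc in sorted(levels):
            if visibility_score(vertices, quantize(vertices, bpc), jnd_threshold) <= 1.0:
                return bpc
        return max(levels)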
JND Driven Mesh Simplification
The goal of mesh simplification algorithms is to reduce the number of vertices in a mesh to a certain degree by iteratively applying a simplification step (often via an edge collapse or vertex removal operation). Mesh simplification is usually used to efficiently display highly detailed models or to create multiple levels of detail (LOD) of a mesh. It is therefore required that the simplified mesh preserves the geometric features of the model as much as possible. To do so, most simplification methods try to simplify the less important regions, i.e., the ones with no geometric features, more often than the ones containing the geometric details. More specifically, a popular class of mesh simplification algorithms achieves this by the steps described below:
1. compute a simplification cost for each of the mesh edges (or vertices). This cost indicates whether a certain edge (or vertex) should be removed or not.
2. apply the simplification step, i.e., edge collapse, to the edge with the lowest cost.
3. update the simplification cost.
4. go back to step 2 until a certain number of vertices, edges or faces is obtained.

Figure 6.6: If the vectors v_1v_2 and v'_1v'_2 are in opposite directions, then the edge (v_1, v_2) can be collapsed to v_n without causing any visible distortion.
Ideally, the best visual quality can be obtained if the edge collapse operations are carried out starting from the one with the least visual impact. Throughout the literature, several perceptual and non-perceptual methods have been proposed to compute a simplification cost that would best preserve the shape of the mesh after simplification. However, existing perceptual methods either carry out the perceptual analysis on the rendered image [WLC+03, QM08, MG10] or rely on a top-down estimation of saliency [START_REF] Ha | Mesh saliency[END_REF][START_REF] Wu | Mesh saliency with global rarity[END_REF][START_REF] Song | Mesh saliency via spectral processing[END_REF]. Moreover, none of the existing algorithms proposes a method to automatically control the quality of the resulting output; the simplification is usually carried out until a manually prescribed number of edges, vertices or faces is reached. In this section, we show how we can use our proposed JND model to define both the simplification cost for each edge and, more importantly, a stopping criterion that is able to automatically control the quality of the simplified mesh.
Edge Cost In an edge collapse operation, an edge (v_1, v_2) is removed and replaced by a vertex v_n (Fig. 6.6). This can be seen as if the vertices v_1 and v_2 moved towards the new vertex v_n. Using our JND model we analyze the visibility of displacing v_1 and v_2 along the edge (v_1, v_2). Let A (resp. B) be the part of (v_1, v_2) bounded by v_1 and v'_1 (resp. v_2 and v'_2) (see Fig. 6.6), where v'_1 (resp. v'_2) is the vertex obtained by displacing v_1 (resp. v_2) by exactly the JND value in the direction of v_1v_2 (resp. v_2v_1). This means that replacing v_1 (resp. v_2) by a vertex belonging to A (resp. B) will not cause any visible distortion. In order to apply an edge collapse that is invisible to a human observer, we need to find a new vertex v_n such that v_n ∈ A ∩ B. This requires that the vectors v_1v_2 and v'_1v'_2 be in opposite directions, so that A ∩ B ≠ ∅. Otherwise, if v_1v_2 and v'_1v'_2 are in the same direction, then A ∩ B = ∅, making the distortion caused by the edge collapse visible. This analysis leads us to define the simplification cost, which reflects the perceptual impact of an edge collapse, by:
c = ( v_1v_2 · v'_1v'_2 ) / ||v_1v_2||² .    (6.4)
The value of our simplification cost c varies within [-1, 1]. If c < 0, the collapse operation does not affect the visual fidelity of the model. If c > 0, the edge collapse will be visible. Figure 6.7 shows the simplification cost on a cube under flat shading, where we have injected a random noise of different intensity on each of its sides. The noise on the top side is below the JND threshold. On the right side, the noise is barely visible as it is just above the JND threshold, and on the left side a clearly visible noise is injected. The simplification cost of the edges belonging to the top side is below 0, as the injected noise is under the JND threshold, while it is above 0 on the right and left sides of the cube. The simplification algorithm guided by our simplification cost will keep the visible features on the right and left sides of the cube and simplify the non-visible features on the top side. Figure 6.8 shows some additional results concerning the proposed perceptual cost. Notice how, for both the Feline and Lion models, the proposed perceptual simplification cost is high for the parts where there are geometric details on the surface mesh. This suggests that the simplification algorithm guided by this cost will tend to preserve these details by prioritizing the simplification of regions that do not contain fine geometric features.
Vertex Placement Having defined the simplification cost of an edge, we should now decide how the position of the new vertex v_n is computed. In order to get the "optimal" position, we have found that minimizing the following energy produces very good results:
arg min_{v_n}  ( ||v_1v_n|| / jnd_{v_1} )^p + ( ||v_2v_n|| / jnd_{v_2} )^p ,    (6.5)
where jnd_{v_1} (resp. jnd_{v_2}) is the JND threshold of v_1 (resp. v_2) in the direction of v_1v_2 (resp. v_2v_1) and p is the energy's order. For p = 2, solving Eq. (6.5) yields:
||v_1v_n|| = ||v_1v_2|| × jnd_{v_1}² / ( jnd_{v_1}² + jnd_{v_2}² ) ,    (6.6)
where ||v_1v_n|| and ||v_2v_n|| represent respectively the distances by which v_1 and v_2 are being displaced. The idea behind minimizing this energy is to make the displacement of v_1 and v_2 adaptive to their corresponding JND values.
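The edge cost of Eq. (6.4) and the placement of Eq. (6.6) can be sketched as follows. This is an illustrative Python fragment, not the thesis code; jnd_threshold(v, direction) is a hypothetical callback returning the displacement threshold of a vertex along a given direction.

    import numpy as np

    def collapse_cost(v1, v2, jnd_threshold):
        """Perceptual edge-collapse cost of Eq. (6.4); c < 0 means the collapse is invisible."""
        e = v2 - v1
        length = np.linalg.norm(e)
        u = e / length
        v1p = v1 + jnd_threshold(v1, u) * u     # v'_1: v_1 displaced by its JND towards v_2
        v2p = v2 - jnd_threshold(v2, -u) * u    # v'_2: v_2 displaced by its JND towards v_1
        return float(np.dot(e, v2p - v1p) / length**2)

    def new_vertex_position(v1, v2, jnd_threshold):
        """Placement of the collapsed vertex on the edge, Eq. (6.6)."""
        e = v2 - v1
        u = e / np.linalg.norm(e)
        j1 = jnd_threshold(v1, u)
        j2 = jnd_threshold(v2, -u)
        t = j1**2 / (j1**2 + j2**2)             # ||v_1 v_n|| = t * ||v_1 v_2||
        return v1 + t * e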
Stopping Criterion
The value of the defined simplification cost varies within [-1, 1]. A negative value of the perceptual cost indicates that the collapse operation will remain unnoticed by a human observer, while a positive value means that the collapse will be visible. So if we collapse the edges in increasing order of perceptual impact, then we can control the visual quality of the output mesh. Intuitively, by stopping the simplification process when all the edges have a positive perceptual impact, we obtain the most simplified mesh that is still visually similar to the detailed input.

For instance, in Fig. 6.9 we show a simplified tetrahedron obtained using the proposed perceptual simplification method in the case of flat shading and smooth shading, and compare it with the traditional simplification method of [START_REF] Lindstrom | Fast and memory efficient polygonal simplification[END_REF]. In the case of flat shading, our method preserves the hard edges of the model so that the resulting mesh looks identical to the original one. The same result can also be achieved with [START_REF] Lindstrom | Fast and memory efficient polygonal simplification[END_REF]; however, the major difference is that this algorithm requires the user to explicitly input the number of edges beyond which the simplification stops. This is not the case with our method, as the simplification operation is stopped automatically according to the defined perceptual stopping criterion. In addition, our method can adapt to the rendering algorithm. When rendering in smooth shading, the simplification process does not collapse the edges that are adjacent to the hard edges of the tetrahedron, as they have a big influence on the interpolation of luminance and thus have a high perceptual impact. This is however not the case for [START_REF] Lindstrom | Fast and memory efficient polygonal simplification[END_REF], which focuses on preserving the geometry of the object without taking the shading into consideration.

Figure 6.10 shows a very dense 3D mesh rendered in flat shading which is then simplified with the JND-driven simplification method; the resulting simplified mesh is shown in the same figure. Since the JND threshold adapts to the parameters of the display (size, resolution and brightness) and to the distance between the model and the viewpoint, the perceptual impact of an edge will adapt as well. In consequence, the degree of simplification applied to a detailed mesh will depend on those parameters. This is important as it makes it possible to generate multiple LODs by simply specifying the distances between the 3D model and the viewpoint (see Fig. 6.11).
Perceptual Adaptive Mesh Subdivision
In an adaptive mesh refinement setting, the subdivision operation is applied to the faces that fulfil a certain condition. In most cases, the subdivision is applied to the faces that are relatively close to the viewpoint or are part of the mesh silhouette. Moreover, the subdivision process is usually stopped when a certain polygon budget is reached. In general, the goal of adaptive mesh refinement methods is to display a coarse model in a way that appears visually smooth. Intuitively, this can be achieved if the normal vectors used for the shading computation produce a smooth visual pattern. In other words, we may consider that visual smoothness is achieved if the interpolation between the brightest and darkest luminance level inside a face is unnoticeable to a human observer under a smooth shading rendering. As a result, we can use the proposed perceptual model in order to test whether this interpolation is visible or not. This test can therefore be done by simply normalizing the contrast value by the CSF value:
c* = c · csf(f, l) ,    (6.7)

where c* is the normalized contrast, c is the face's contrast value, f is the corresponding frequency and l is the global luminance. Hence, the criterion to subdivide a certain face can be defined as the corresponding c* value being above a certain threshold:

c* > α .    (6.8)
One major advantage of this perceptual criterion is that the value of c* will change if the spatial frequency relative to a face changes. This means that the proposed subdivision criterion will automatically adapt to the original density of the 3D model and to its distance to the viewpoint. In a conservative way, a mesh appears to be visually smooth if the local contrast of its faces is not visible, as this means that the interpolation between the darkest point and the brightest point of the face will not be noticed by the observer. This implies that the subdivision operation is applied as long as c* is greater than 1.

We have tested this perceptual subdivision criterion using Boubekeur and Alexa's Phong tessellation method [START_REF] Boubekeur | Phong tessellation[END_REF] with a Loop subdivision scheme. Figure 6.12 shows a coarse version of the Bimba model that was subdivided using the proposed perceptually driven method. The subdivided version exhibits an increase in density in rough regions, which usually contain faces with visible contrast, i.e., c* > 1. The subdivision process automatically stops around 30k vertices, when the local contrast in all the faces is invisible, and the resulting mesh appears to be visually smooth (Fig. 6.12(a)). By increasing the view distance, the spatial frequency of the observed visual stimuli is increased. Consequently, this causes a change in the visibility threshold and thus affects the degree of subdivision needed to obtain a visually smooth model. Therefore, in this case, the adaptive subdivision automatically stops after fewer subdivisions (Fig. 6.12(b)).

In addition, we have compared our perceptual subdivision criterion to a silhouette-based and a proximity-based subdivision criterion. We have stopped the subdivision process when they reached the same number of vertices as in Fig. 6.12(a). The silhouette subdivision criterion (Fig. 6.12(c)) consists of subdividing the faces that are part of the mesh's contour so that it appears smooth. In that case, the subdivisions are concentrated in the silhouette and could potentially leave rough parts of the model untouched. In a proximity subdivision criterion (Fig. 6.12(d)), the faces that are closer to the viewpoint are subdivided. This will most likely result in a waste of computing resources, as the algorithm will subdivide faces that are already visually smooth and the operation will therefore not have much visual impact.
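As a sketch, the per-face test of Eqs. (6.7)-(6.8) could look as follows; face_contrast, face_frequency and csf are hypothetical callbacks standing for the local contrast of a face, its spatial frequency and the CSF model of Chapter 4, and the conservative choice α = 1 corresponds to subdividing as long as the local contrast is visible.

    def needs_subdivision(face, luminance, face_contrast, face_frequency, csf, alpha=1.0):
        """Return True if the face exhibits a visible local contrast (Eqs. (6.7)-(6.8))."""
        c = face_contrast(face)                  # local contrast of the face
        f = face_frequency(face)                 # spatial frequency in cycles per degree
        c_star = c * csf(f, luminance)           # Eq. (6.7): normalized contrast
        return c_star > alpha                    # Eq. (6.8)

    # The adaptive refinement loop then subdivides every face for which
    # needs_subdivision(...) is True and stops once no such face remains.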
Summary
In this chapter we have integrated the proposed perceptual model into several mesh processing algorithms. For instance, we have used the JND profile to compare the geometric distortions caused by a vertex coordinates quantization operation, which allowed us to detect the optimal quantization level for a given mesh. In addition, we have used our perceptual method to guide mesh simplification. This is done by computing an edge collapse cost, based on the JND threshold, that indicates whether the collapse operation is visible or not. Finally, we have used the CSF model to define a perceptual criterion for the adaptive mesh subdivision task. This series of applications showcases the usefulness of our perceptual models for guiding geometric operations. In particular, we think that the main advantage our method offers is that the output of a geometric operation can adapt to the various parameters involved (screen resolution and size, viewpoint distance, rendering type, illumination, mesh density). This advantage is due to the fact that our proposed perceptual analysis takes these parameters into consideration while computing the displacement threshold.
Chapter 7 Conclusion
It is common that a 3D mesh undergoes some lossy operations such as 3D compression, simplification and watermarking which introduce geometric noise to the underlying 3D object in the form of vertex displacement. In some cases, this noise can be visible to the human end user. Controlling the visibility of these geometric distortions is an important issue as, for human-centered applications, it can have a direct impact on the quality of experience of the user. It is therefore interesting to be able to evaluate whether the vertex displacement caused by the geometric operations is visible or not.
Summary of Contributions
In this thesis, we have presented our work which focused on evaluating the threshold beyond which a local geometric distortion becomes visible on a 3D mesh. This was achieved with the help of an experimental study of the properties of the human visual system. To reach our goal, we have started by evaluating perceptually oriented attributes (such as local contrast and spatial frequency) on 3D models. We have then performed a series of psychophysical experiments where we have measured the threshold needed for a human observer to detect a change in contrast induced by a change in vertex position under different viewing circumstances. The results of these experiments allowed us to derive a computational model that evaluates the threshold beyond which a change in contrast on a 3D mesh becomes visible. This model can adapt to the various display parameters (resolution, size and brightness), to the size and density of the triangular mesh as well as to directional and point light illumination. Using this perceptual model we have then presented an algorithm that computes the vertex displacement threshold relative to a certain direction, i.e., the Just Noticeable Distortion (JND) profile. Finally, we have illustrated the utility of the proposed JND profile and the perceptual model in several applications such as vertex coordinates quantization, mesh simplification and adaptive mesh subdivision.
The contributions of this thesis are summarized as follows.
Evaluating Local Perceptual Attributes on 3D Triangular Meshes
The properties related to the contrast are essential when studying the visibility of a certain visual pattern. Therefore we have started our perceptual study by evaluating the perceptual attributes related to this aspect of the human perception. More specifically, we have proposed a method to compute the local contrast in the case of flat and smooth shaded rendering. In both cases, our methods for computing the contrast highlight the effects of surface geometry and lighting condition on the value of the local contrast. Moreover, we have evaluated the spatial frequency and shown that it is affected by the size and vertex density of the observed mesh. Finally, inspired by the free energy principle, we have computed the regularity of a visual pattern on the 3D mesh. These perceptually related attributes allowed us to take into consideration the contrast sensitivity and visual masking effects of the visual system when it comes to its capacity of discriminating between two visual patterns.
Measuring the Visibility Threshold
The displacement of a vertex causes a change in contrast in the region surrounding the vertex being displaced. Therefore we have performed a series of psychophysical experiments that aim at measuring the thresholds related to the contrast sensitivity and visual masking aspects of the visual system. For measuring the Contrast Sensitivity Function (CSF), we have measured the contrast threshold required for observing a displaced vertex on the surface of a plane. By changing the density of the displayed plane, we have been able to measure the threshold at different frequencies. Measuring the threshold related to visual masking requires detecting the threshold beyond which a human observer can notice a difference between two visible contrasts. For these measurements we have therefore displaced a vertex on the surface of a sphere, as its curved surface generates a visible contrast. These experiments were carried out both in the case where the displayed mesh is rendered with a flat shading algorithm and with a smooth shading one. In the latter case, we have taken into account in our measurements the effects of luminance on the CSF threshold and the effects of visual regularity on visual masking. Finally, the results of these experiments have allowed us to propose a perceptual model to obtain the contrast threshold, which is then used to compute the probability of noticing a difference given a certain change in contrast.
Computing the Just Noticeable Distortion Profile for 3D Meshes
Having computed the probability of detecting a change in contrast caused by a vertex displacement, we have proposed an algorithm for the evaluation of the vertex displacement threshold, i.e., JND profile, for 3D meshes. The computed threshold is capable of adapting to the varying circumstances of mesh usage such as the display parameters, light direction and viewing distance since they affect the local perceptual properties. In the case of an interactive scene where the light and the viewing distance are likely to be changed through the course of the viewing session, we have proposed a way to compute this threshold independently from the light direction and with respect to the most sensitive spatial frequency. We have tested the performance of the computed JND profile via a series of subjective experiments where the participants had to rate the visibility of JND modulated random noise added to a number of 3D models with different geometric features. The results of these experiments show that our perceptual model can accurately predict the visibility threshold of local vertex distortions.
Integrating the Perceptual Model into Various Geometry Processing Operations
In Chapter 6 of this thesis we have demonstrated the utility of the proposed perceptual model in a number of geometric applications. First, we have used the JND profile to compare two meshes: a reference and a distorted version. The idea was to compute the distortion on the distorted model in terms of JND units and then to deduce a global score that is indicative of the visibility of the vertex noise introduced by the distortion. This mesh comparison method was used to select the optimal quantization level (in bits per coordinate) for any mesh. Second, we used the vertex displacement threshold to guide edge-collapse-based mesh simplification. More precisely, the vertex JND allowed us to compute a simplification cost that indicates whether the edge collapse operation would be visible or not. The most important aspect of this application is that, since the JND value adapts to the display parameters and view distance, the cost adapts as well. In addition, using our perceptual cost, we can define a stopping criterion that automatically terminates the simplification operation. This aspect is particularly useful for generating LODs. We have also shown that not only the JND threshold can be useful for geometric applications, but that there is also potential in using the local contrast value in combination with the proposed perceptual models such as the CSF. For instance, we have presented a perceptual criterion for adaptively subdividing a 3D triangular mesh, which uses the normalized local contrast value. Similarly to the simplification operation, the main advantage of our perceptual criterion is that it makes it possible to automatically stop the subdivision once further subdivision would not add any visual smoothness to the displayed mesh.
Perspectives
In this thesis, we have presented a perceptual framework for studying the visual impact of geometric distortions on the surface of a 3D triangular mesh. In the future, we think that this work can evolve along three axes.
Perceptual Attributes
Computing the visibility of vertex noise relies heavily on perceptual attributes defined on the 3D mesh. In its current state, our method works for untextured diffuse surfaces that are illuminated by one light source and rendered with either a simple flat-shading or smooth-shading algorithm. This is due to restrictions that we put on the algorithms used to compute the contrast. In addition, since contrast is evaluated locally, i.e., between two faces in a flat shading setting and on a single face in the case of smooth shading, our visibility prediction is also limited to local distortions. Therefore, we think that in order to support more advanced lighting such as environment maps, complex surface materials and large-scale geometric distortions, our method would require a more general definition of contrast that could be based on a thorough and non-trivial analysis of the rendering algorithm and that can be evaluated at multiple scales. Textures, on the other hand, can be taken into account by combining the contrast due to the shading with the contrast of the texture.

Additionally, it would be interesting to rethink the way we define the spatial frequency on a 3D mesh. In its current state, we have used the traditional unit of cpd, which represents the number of cycles of dark and bright patterns of light that can fit in one degree of the visual angle, to express the spatial frequency. While this frequency unit is more natural in the case of images with sinusoidal patterns, we explained in Section 4.2.2 how it can relate to the density of a 3D mesh. For instance, a noise injected onto a dense mesh will exhibit visual patterns with higher frequencies than when injected onto a coarse mesh. This leads to the idea of expressing the spatial frequency in terms of vertices per degree (vpd), which represents the number of vertices in one degree of the visual angle and thus can be more natural in the case of 3D meshes.
Our proposed perceptual framework is now capable of evaluating whether a local distortion is visible or not. It was not designed to test whether a visible distortion is perceptually more annoying than another visible one. For instance, the value of our proposed perceptual simplification cost is bounded between -1 and 1, where a positive value indicates that the edge collapse operation would be visible. However, since our perceptual analysis is focused on the visibility of a distortion, it is possible to obtain a simplification cost of value 1 both for an edge whose collapse is visible but not annoying and for an edge whose collapse is visible and not tolerable. This is due to our focus on low-level perceptual attributes such as contrast in this work. This is why it would be important and interesting to exploit the proposed contrast algorithm in order to define high-level visual attributes such as mesh saliency or visual attention.
Finally, it was important to estimate the regularity of a visual pattern in order to have a more accurate modeling of the visual masking of human vision. Our method for estimating that measure was inspired by the free energy principle, which suggests that the visual system actively tries to predict visual patterns in order to minimize surprise. We have therefore used a simple linear system to predict the local contrast from its surroundings and thus compute a value of visual regularity. However, even if this simple linear system works well in practice, it would be interesting to test whether higher-order methods would give better results.
Perceptual Model
A further development of the perceptual models would make the perceptual analysis more accurate. For instance, modifying Daly's masking model to incorporate the effects of visual regularity on the visibility threshold (Section 4.3) allowed for a better treatment of visual masking and resulted in a more precise visibility threshold. An interesting extension of the existing perceptual model would be to include the effects of velocity on the computed visibility threshold. As explained in Section 2.2, the contrast sensitivity of the HVS is affected by the velocity of a moving visual stimulus. We have conducted a preliminary study, which can be found in Appendix C, to measure the contrast sensitivity for moving 3D objects. This would allow us to expand the current perceptual model to support dynamic moving 3D meshes in addition to static ones.

Moreover, as it stands, the proposed framework does not take into account the color attributed to the mesh or to the illumination, as it focuses on white luminance levels. Extending this method to color would require conducting psychophysical experiments to measure the contrast threshold for each color channel, as the sensitivity is different for various light frequencies [START_REF] Kelly | Spatiotemporal variation of chromatic and achromatic contrast thresholds[END_REF]. We can then apply the same perceptual analysis as the one described in this manuscript for each channel in order to compute the visibility threshold. In the long term, we also think that expanding the perceptual model to take into account the characteristics of the human visual system related to orientation and scale selectivity would be interesting. These aspects would be important especially for developing methods that take into consideration the higher-level properties of human vision. However, this step first requires a multi-scale definition of contrast for 3D meshes.
Applications
In the end, the goal behind developing a perceptual method for 3D meshes is to integrate it into geometry processing so that it becomes possible to control the quality of the output or to evaluate and compare existing geometric methods. The applications showcased in this thesis present an example of the potential of using perceptual methods in geometric applications and can be further improved. For instance, the presented simplification cost reflects whether an edge collapse would be visible or not only if the position of the new vertex is located on the collapsed edge. We think it would be more interesting to develop a perceptual simplification cost that overcomes this limitation. Further improvements can also be made to the perceptual criterion proposed for the adaptive subdivision operation, which assumes that the mesh is visually smooth if the local contrast is not visible. While this assumption can be considered correct, it is too conservative, as it is possible to have a visible local contrast that is visually smooth. One possible direction to improve this application is to adapt the threshold for applying the subdivision to the visual regularity of the surface. Intuitively, if a surface is visually regular (usually a smooth surface), then the threshold for subdividing can be high, while for a visually complex surface (usually a rough surface) it is better to have a low threshold value.
Moreover, it would be interesting to use this perceptual method to develop an objective perceptual quality measure for 3D meshes. A geometric distortion generally introduces a change in local perceptual attributes, most notably contrast, to the 3D mesh. Using our method, it is possible to evaluate this change in contrast and then convert it into visibility threshold units. For example, for the vertices where the change in contrast is visible the value would be greater than 1; otherwise it would be lower than 1. The same can also be done for visual regularity, as a distortion would also cause a change in that attribute (e.g., a noise on a smooth surface makes it more complex). Having a measure for a change in contrast and another for a change in visual regularity can be indicative of a change in structure and would allow us to develop, in the future, a perceptual quality metric for 3D meshes that is similar to the SSIM metric for images. A preliminary test of a quality metric using our perceptual method is presented in Appendix D and it shows some promising results.
Results
Figure B.2 shows the results of this preliminary experiment, with both the mask contrast and the measured threshold normalized by the corresponding CSF value. When the mask contrast is not visible (normalized value < 1), the measured threshold lies on a horizontal line, while when the mask contrast increases beyond the visibility threshold (normalized value > 1), the measured threshold increases almost linearly. Normalizing the measured threshold by multiplying it with the corresponding CSF value brings the data from different frequencies very close to each other. Therefore, it would be possible to measure the threshold relative to the visual masking effect independently from the spatial frequency of the displayed stimulus, just as suggested by Daly in his seminal paper [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF].
Little attention has been given to modeling the movement of the eyes in the CSF-modeling literature. Nevertheless, it is suggested that eye movements can be divided into three categories [START_REF] William | Eye-movements and visual perception[END_REF]:
• The natural eye drift. This corresponds to the minimum velocity of the eyes.
It has been shown that even if the eyes are intentionally fixated on a certain position, they will still drift at a very slow velocity of about 0.15 deg/s [START_REF] Kelly | Motion and vision. i. stabilized images of stationary gratings[END_REF].
• The smooth eye pursuit. This corresponds to movement of the eye when tracking a moving object.
• The saccadic movement. This corresponds to the almost instantaneous movements of the eye when the moving object is fast enough so that it cannot be tracked. It is considered that the contrast sensitivity during the saccadic movement is 0.
In [START_REF] Daly | Engineering Observations from Spatiovelocity and Spatiotemporal Visual Models[END_REF], Daly has proposed the following model to evaluate the velocity of the eyes:
v_eye = min(α · v + v_min, v_max),    (C.2)
where α is the tracking efficiency and v_min = 0.15 deg/s, v_max = 80 deg/s are respectively the minimum eye velocity due to the drift and the maximum eye velocity before transitioning to saccadic movements. To the best of our knowledge, there has been no consensus about the value of α in perceptual methods for video processing. Despite a measured value of around 0.82 [START_REF] Daly | Engineering Observations from Spatiovelocity and Spatiotemporal Visual Models[END_REF], many algorithms for computing the JND of a video use a value of 0.90 or 0.98 [JLK06, WN09, AvMS10]. In addition, Yee et al. [START_REF] Yee | Spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments[END_REF] adapted the value of α to the saliency value, arguing that a salient object would attract the attention of an observer and therefore make its tracking more efficient.
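To make the relation between object, eye and retinal velocities concrete, the short Python sketch below implements Eq. (C.2) together with Eq. (C.1); the measured average eye velocities of the preliminary experiment are reused as an example, and the function names are only illustrative.

```python
def eye_velocity(v_object, alpha=0.82, v_min=0.15, v_max=80.0):
    # Daly's eye-movement model, Eq. (C.2); all velocities in deg/s.
    return min(alpha * v_object + v_min, v_max)

def retinal_velocity(v_object, v_eye):
    # Eq. (C.1): the retinal velocity is the part of the object velocity
    # that the eye fails to track.
    return max(v_object - v_eye, 0.0)

# Measured average eye velocities from the preliminary experiment (deg/s).
measured_eye = {1.0: 0.76, 10.0: 8.26, 25.0: 18.61}
for v, v_eye in measured_eye.items():
    # Compare the retinal velocity from the measurements with the one
    # predicted by Daly's model.
    print(v, retinal_velocity(v, v_eye), retinal_velocity(v, eye_velocity(v)))
```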
C.2 Experimental Study
Measuring the dynamic aspect of the Contrast Sensitivity Function (CSF) clearly requires measuring the velocity at which the eye is moving as well. This allows us to obtain a relatively accurate estimate of the retinal velocity of the observed moving 3D object.
Experimental Protocol
We have adapted the experimental protocol presented in Section 4.1 of the thesis to the dynamic setting. Instead of displaying the two models side by side on the screen, we now display them one on top of the other. The reason for that is to allow a large horizontal movement of the 3D object. The head of each subject was fixed on a head-stand to restrict its movement. An eye tracker was placed in front of the display to register the position the observer is looking at. This allows us to evaluate the eye movement of each subject and therefore to compute the retinal velocity using Eq. (C.1).
Results
We have used an eye tracker in order to measure the corresponding retinal velocity of the moving planes. Our data indicated that the observers' eyes were tracking the planes with an average velocity of 0.76 deg/s, 8.26 deg/s and 18.61 deg/s for respectively the 1 deg/s, 10 deg/s and 25 deg/s plane velocities. This leads to retinal velocities of 0.24 deg/s, 1.74 deg/s and 6.39 deg/s. It is also interesting to note that the average tracking efficiency deduced from these measurements is α = 0.76, which is close to the value reported by Daly [START_REF] Daly | Engineering Observations from Spatiovelocity and Spatiotemporal Visual Models[END_REF]. We think that the difference lies in the nature of the task presented to the subjects: in Daly's experiments, the subjects were looking at one moving stimulus, while in our case the subjects were looking at two planes in order to compare their appearances. The more complex task given to subjects in our preliminary experiments might explain the slight reduction in the value of the tracking efficiency. The CSF results of these experiments are shown in Fig. C.1. The plot shows the subjects' mean contrast sensitivity for each frequency at the three different retinal velocities. As expected, as the velocity increases the CSF shifts to the left; the CSF peak is shifted from about 4 cpd at 0.24 deg/s to about 2.9 cpd at 6.39 deg/s. Moreover, for the relatively high velocity of 6.39 deg/s a considerable loss in contrast sensitivity is observed.
géométriques comme la courbure [START_REF] Lavoué | A local roughness measure for 3D meshes and its application to visual masking[END_REF][START_REF] Wang | A fast roughness-based approach to the assessment of 3D mesh visual quality[END_REF]. Puisque les méthodes perceptuelles actuelles basent leur analyse sur des propriétés géométriques, sans signification perceptuelle formellement justifiée, il est donc difficile d'adapter le résultat de ces méthodes aux différents facteurs affectant la perception des maillages 3D (taille, densité, illumination, affichage, ...).
Différent de toutes les méthodes actuelles, on a décidé d'adopter une approche qui se base sur une analyse perceptuelle où on cherche à modéliser le fonctionnement interne du système visuel humain relatif à la visibilité des stimuli visuels afin d'atteindre notre but. Ce type de méthodes a été prouvé être efficace en traitement d'image [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF][START_REF] Lin | Perceptual visual quality metrics: A survey[END_REF]. D'une manière simplifiée, les rayons lumineux provenant d'un objet arrivent dans l'oeil puis passent au cerveau à travers la rétine et le nerf optique [START_REF] Wandell | Foundations of Vision[END_REF]. Ce qui est intéressant et important dans ce processus concernant notre but est ce qui se passe au niveau de la rétine. Plus précisément, la structure physiologique des cellules ganglionnaires de la rétine fait en sorte que le système visuel humain soit plutôt sensible aux stimuli visuels ayant une large variation de luminance. Cette variation de luminance est généralement exprimée en matière de contraste. Donc plus le contraste est fort, plus les détails d'un stimulus visuel seront visibles. Cela fait que l'étude des propriétés reliées à la perception du contraste est essentielle au calcul du seuil de visibilité. En particulier, on se base sur les deux aspects suivants du système visuel humain :
• La sensibilité au contraste (CSF) qui indique le seuil de contraste à partir duquel un stimulus visuel devient visible [START_REF] Campbell | Orientational selectivity of the human visual system[END_REF]. Notamment, le seuil de visibilité dépend principalement de la fréquence spatiale et la luminance globale du stimulus observé.
• Le masquage visuel qui reflète la capacité du système visuel humain de distinguer entre deux stimuli visibles. Cela est principalement en fonction de la valeur du contraste de ces stimuli [START_REF] Gordon | Contrast masking in human vision[END_REF]
Figure Captions

FIGURE 1.1: The simplified and noisy versions of the Rabbit model have the same RMS value, but are not perceptually equivalent.
FIGURE 1.2: (a) The noise injected onto the original 3D mesh is slightly visible. (b) Increasing the viewing distance makes that same noise invisible. (c) Changing the light direction from front to top-left increases the perceived intensity of the noise.
FIGURE 2.1: The light reflected by the observed objects enters the eyes and is focused onto the retina. The visual information captured by the retina is then transferred by the optic nerve to the brain where it is finally processed.
FIGURE 2.2: The center-surround organization makes the human visual system sensitive to patterns of light rather than to its absolute value. The neural response is at its peak when the visual signal lines up with the size of the receptive field.
FIGURE 2.3: Left: The Campbell and Robson chart [CR68] that shows a sinusoidal visual stimulus whose contrast decreases vertically and frequency increases horizontally. Right: An example of the Contrast Sensitivity Function.
FIGURE 2.4: (a) A visible sinusoidal stimulus on a grey background. (b) Having another visible sinusoidal stimulus as a background makes the original one less visible.
FIGURE 2.5: An example of a curve that describes the visual masking characteristic. When the contrast of the mask signal is visible (greater than 1) then the contrast threshold increases almost in a linear fashion.
FIGURE 2.6: The complexity of the background affects the visibility of a visual stimulus.
FIGURE 3.1: The general framework of bottom-up perceptual methods.
FIGURE 3.2: Overview of the perceptual rendering framework proposed by Ramasubramanian et al. [RPG99] (illustration extracted from [RPG99]).
FIGURE 3.3: The roughness of a surface can affect the visibility of a geometric distortion. The geometric distortion injected into the smooth surface is visible while the one injected into the rough one is not visible (image extracted from [Lav09]).
FIGURE 3.4: Overview of the second roughness estimation method presented in [CDGEB07].
FIGURE 3.5: Surface roughness on the Armadillo model obtained with the roughness estimation method of Lavoué (image extracted from [Lav09]).
FIGURE 3.6: Local distortion maps of the Lion model obtained with the Hausdorff distance (left) and the MSDM2 metric (right) (image extracted from [Lav11]).
FIGURE 3.7: Overview of the method for computing the saliency as proposed by Lee et al. [LVJ05] (illustration extracted from [LVJ05]).
FIGURE 3.8: Results of the saliency guided simplification [LVJ05] (image extracted from [LVJ05]).
FIGURE 4.1: Diagram of a typical visibility threshold measurement experiment.
FIGURE 4.2: Example of an iteration according to the QUEST procedure. The PDF of the contrast threshold at trial n is multiplied with the corresponding likelihood function in order to get the PDF for the trial n + 1.
FIGURE 4.3: Contrast between two adjacent faces.
FIGURE 4.4: The spatial frequency is related to the size of the observed visual pattern (d_ph), with respect to the size of one degree of the visual angle (d_1cpd) on the display.
FIGURE 4.5: The vertex density has an impact on the spatial frequency.
FIGURE 4.6: Experimental setup.
FIGURE 4.7: Visual stimulus presented to the subjects for measuring the Contrast Sensitivity Function in the case of flat shading. Left: the reference plane. Right: a vertex is displaced in the central area of the plane.
FIGURE 4.8: Left: plot of the mean sensitivity for each observer over each frequency. Right: plot of the subjects' mean sensitivity over each frequency fitted using Mannos and Sakrison's mathematical model.
FIGURE 4.9: Visual stimulus for measuring the threshold relative to the aspects of contrast masking in the case of flat shading. Left: a sphere approximated by an icosahedron subdivided 3 times from which a vertex is displaced. Right: the reference sphere.
FIGURE 4.10: Left: plot of the normalized mean threshold for each observer over the normalized mask contrast. Right: plot of the subjects' mean normalized threshold over each normalized mask contrast, fitted using Daly's mathematical contrast masking model.
Equation (4.11): contrast masking fit, with c the normalized threshold and the fitted values k1 = 0.0078, k2 = 88.29, s = 1.00 and b = 4.207.
FIGURE 4.11: When the displacement of a vertex alters the convexity of two adjacent faces, the contrast might remain the same as long as the angle between the light direction (yellow arrow) and the face normal (red arrow) does not change.
FIGURE 4.13: The Michelson contrast computed for each face of the Bimba model for a regular smooth shading algorithm (left) and cell shading rendering (right).
FIGURE 4.14: Visual regularity on the Lion-vase model.
FIGURE 4.15: Increasing the vertex density of the plane would increase the spatial frequency of the visual stimulus.
FIGURE 4.16: Left: plot of the subjects' mean sensitivity over each frequency and luminance level. Right: plot of the 3D fit of Barten's CSF model [Bar89].
FIGURE 4.17: Visual stimuli for measuring the visual masking threshold at different visual regularity levels.
FIGURE 4.18: Left: plot of the subjects' mean threshold over each initial contrast value and visual regularity value. Right: plot of the 3D fit of the contrast masking model (Eq. (4.24)).
Equation (4.25): visibility threshold model, where c is the original value of the local contrast, f the local frequency, and l and r correspond respectively to the global luminance of the scene and the regularity of the visual pattern. Estimating the visibility of a change in local contrast is performed using Eq. (4.13): first, the change in contrast Δc (Eq. (4.14)) and the contrast threshold T (Eqs. (4.23) and (4.24)) are evaluated; they are then passed to a psychometric function which outputs the visibility probability.
FIGURE 5.1: (a) The displacement of a vertex v1 in a direction dir causes the normals of the surrounding 1-ring faces (highlighted in red) to rotate. (b) In the case of a flat-shaded rendering, it will cause a change in contrast for surrounding pairs of faces sharing a common edge in the 1-ring and 2-ring of the displaced vertex. (c) In the case of a smooth-shaded rendering, it will cause a change in contrast in the faces having a vertex in the 1-ring neighborhood of v1.
FIGURE 5.2: The evolution of the local perceptual properties and visibility for two displaced vertices v1 and v2 on the Bimba model. Plots in the first row show the change in frequency, middle ones show the change in contrast and the bottom ones show the detection probability, for different pairs of affected adjacent faces of the two vertices. Note that some of the faces have the same spatial frequency, so the color curves overlap in the plots of the first row.
FIGURE 5.4: The vertex displacement profile for the Bimba model under different circumstances. (a) and (b) show color maps representing the displacement threshold in a light-independent mode with respect to a displacement in the normal direction and tangent plane respectively. (c) shows the displacement threshold according to a displacement in the normal direction in a light-dependent mode.
FIGURE 5.5: The vertex displacement profile for the Bunny model in a smooth shading mode (a) and flat shading mode (b).
FIGURE 5.6: The vertex displacement threshold computed with the proposed algorithm is capable of adapting to the resolution of the 3D model: (a) 65k, (b) 16k and (c) 1k versions of the Feline model.
FIGURE 5.7: In a smooth shading mode, the proposed threshold model takes into consideration the energy of the light illuminating the scene. For a point light source whose energy decreases proportionally to the square distance to the object, the scene becomes darker as the light source becomes farther. This increases the value of the displacement threshold.
FIGURE 5.8: Effects of the number of light samples on the accuracy of the vertex displacement threshold in a light-independent mode.
FIGURE 5.9: Vertex displacement threshold execution time.
FIGURE 5.10: Mean subjective score values versus MRMS distance values. Plots (a) and (d) present, for different noise injections, the mean subjective scores over all test models and the two illumination settings. Plots (b) and (e) show the difference in mean subjective scores between the experiments in the two illumination settings. Plots (c) and (f) compare the mean subjective scores for the different models used in the experiments.
FIGURE 5.12: Plot of the MRMS induced by noise injection for three different types of noise at the same visibility level (under smooth shading).
FIGURE 5.13: The visibility of vertex noise in three models having the same RMS value and injected with respectively a JND modulated noise, roughness modulated noise and uniform noise in smooth (for Venus) and flat shading (for Horse).
FIGURE 5.14: (a) JND modulated noise computed for a smooth shaded mode added to the Bimba model and rendered in smooth shading. (b) JND modulated noise computed for a smooth shaded mode added to the Bimba model and rendered in flat shading. (c) JND modulated noise computed for a flat shaded mode added to the Bimba model and rendered in flat shading.
FIGURE 5.15: A vertex noise equivalent to the JND level computed in a light-dependent mode might become visible when the light direction is altered between the two rows. This is not the case for the light-independent JND.
FIGURE 5.16: A vertex noise equivalent to the JND computed according to (b) the most sensitive frequency and (c) a certain fixed distance (that of the first row).
FIGURE 5.17: The HDR-VDP2 visibility map for a distorted Lion model whose noise is below the JND threshold.
FIGURE 6.1: The perceptual score versus the quantization levels (in bpc) of three models rendered in (a) smooth shading and (b) flat shading. (c) shows the effects of the model resolution, object distance and light energy on the perceptual score of the Feline model when rendered in smooth shading.
FIGURE 6.2: Quantized meshes with different quantization levels in a smooth shading setting. The middle column corresponds to the optimal quantization level (12, 11 and 10 bpc for respectively Feline, Rabbit and Lion). For better comparison between the models please refer to the electronic version of this manuscript.
FIGURE 6.5: (a) The MSDM2 and (b) the HDR-VDP2 score versus the quantization levels (in bpc) in a smooth shading setting.
FIGURE 6.7: The simplification algorithm guided by our simplification cost, computed for a flat shading rendering, keeps the visible features on the right and left sides of the cube and simplifies the invisible features on the top side.
FIGURE 6.9: (a) The perceptually driven simplification is able to prevent perceptually relevant edges from being collapsed as it outputs a model that is visually similar to the original one in the case of flat and smooth shading. (b) The output model using the method of Lindstrom and Turk [START_REF] Lindstrom | Fast and memory efficient polygonal simplification[END_REF] with the same number of vertices is visually different from the original mesh under smooth shading.
FIGURE 6.10: (a) The JND-driven mesh simplification process outputs a model that is visually very similar to the flat shaded original model. (b) Removing 5% more vertices will introduce slightly visible distortions to the simplified model. (c) The simplified model obtained by using the method of Lindstrom and Turk [LT98] down to the same number of vertices as the JND-driven simplification. (b) and (c) contain slightly visible distortions, especially on the belly and thighs.
FIGURE 6.12: The perceptual subdivision process converges at around 30K vertices when the model is (a) close to the viewpoint and at 11K when it is (b) far away. (c) A silhouette subdivision criterion would concentrate the subdivisions in the contour region leaving the rough part of the model intact. (d) A proximity subdivision criterion would subdivide the faces in already visually smooth areas where the subdivision will not have any visual impact.
FIGURE B.1: By changing the subdivision level of an icosphere we were able to measure the masking threshold at different spatial frequencies.
FIGURE C.1: As the retinal velocity of the 3D object increases, the contrast sensitivity is reduced and the CSF curve shifts to the left.
TABLE 5.2: Global noise energy value relative to JND modulated noise (β_jnd) in a smooth shading setting.
Model:                       Lion   Bimba   Horse   Dino   Venus
β_jnd (directional light):   0.87   0.91    1.06    1.15   1.09
β_jnd (point light):         0.91   0.89    0.97    1.11   1.05

TABLE D.1: Performance comparisons of our proposed perceptual quality metric prototype, existing model-based methods and image-based ones.
et leur régularité visuelle [WBT97].Une distorsion géométrique locale consiste à faire déplacer un sommet du maillage dans une certaine direction. Ce déplacement cause un changement de la direction des normales des faces adjacentes ce qui provoque un changement des propriétés perceptuelles locales, à savoir le contraste et la fréquence spatiale sur un maillage. En se basant sur l'analyse des aspects du système visuel humain présentés ci-dessus, on a proposé dans cette thèse une analyse perceptuelle qui a pour but de calculer le seuil de visibilité d'une distorsion géométrique. En d'autre terme, on a présenté un algorithme qui a pour but de calculer le déplacement maximal qu'un sommet peut tolérer avant que ce déplacement devienne visible. Notre méthode se résume de la manière suivante :1. On évalue les attributs perceptuels(contraste, fréquence, luminance globale et régularité visuelle) relatifs à l'étude de la visibilité d'un stimulus visuel sur un maillage (Sections 4.2.1 et 4.3.1). 2. Les attributs perceptuels ainsi calculés, sont utisés comme entrés à un modèle perceptuel qui calcule le seuil de visibilité et qui est obtenu suite à une étude expérimentale (Sections 4.2.3 et 4.3.4). Le modèle perceptuel proposé prend en considération la sensibilité au contraste et le masquage visuel. 3. Ayant calculé le seuil de visibilité, on a proposé un algorithme efficace afin d'évaluer le déplacement maximal qu'un sommet peut tolérer (Chapitre 5). Le seuil de déplacement d'un sommet ainsi calculé s'adapte aux différents facteurs pouvant affecter la visibilité des distorsions dans un environnement 3D (distance au point de vue, illumination,...). Ayant réussi à calculer le seuil à partir duquel le déplacement d'un sommet devient visible, on a ensuite intégré l'analyse perceptuelle proposée dans différents algorithmes de traitement géométrique afin de montrer son potentiel (Chapitre 6). Premièrement, on a utilisé le seuil de visibilité calculé pour étudier la visibilité d'un "edge collapse" lors de la simplification d'un maillage 3D. Ceci nous a permis de proposer un coût perceptuel de simplification qu'on a utilisé pour privilégier la simplification des arêtes n'ayant pas un effet perceptuel sur la qualité visuelle du maillage. En plus, ceci a encore permis de proposer un critère d'arrêt automatique à l'opération de simplification ce qui a un large potentiel dans la gestion des différents niveaux de détails d'un objet 3D. Deuxièmement, on a utilisé le modèle perceptuel proposé pour définir un critère de subdivision adaptative de maillage 3D. L'idée ici est de subdiviser une face seulement si le contraste local de celle-ci est visible. Vue la visibilité d'un contraste est en fonction de la fréquence spatiale, notre méthode est capable d'adapter le degré de subdivision en fonction de la distance au point de vue et de l'illumination d'une manière automatique. Pour résumer, dans cette thèse, nous avons proposé une méthode perceptuelle qui permet de calculer le déplacement maximal qu'un sommet puisse tolérer avant qu'il ne devienne visible pour un observateur humain. Différent de toutes les méthodes existantes en informatique graphique, notre approche se base sur un modèle perceptuel provenant d'une étude expérimentale sur le système visuel humain lors de l'observation d'un maillage 3D. Nous avons ensuite intégré ce modèle dans plusieurs algorithmes de traitements géométriques pour montrer l'utilité d'une telle analyse perceptuelle en informatique graphique. 
Cependant, dans son état actuel, notre méthode est limitée à l'étude de la visibilité des distorsions géométriques locales se trouvant sur des maillages éclairés par une lumière simple et affichés à l'aide d'un rendu "flat" ou "smooth" basique. Cela est dû aux algorithmes simplifiés utilisés pour estimer les propriétés perceptuelles locales sur un maillage comme le contraste ou la fréquence. L'extension de notre analyse perceptuelle pour les cas complexes nécessite donc une généralisation des méthodes d'évaluation des propriétés perceptuelles. Un autre point important est que notre modèle perceptuel proposé est plutôt adapté à l'étude de la visibilité des distorsions locales et n'est pas capable d'évaluer son impact perceptuel une fois cette distorsion est au-dessus du seuil de visibilité. Cela vient du fait que notre analyse perceptuelle se base sur des propriétés de bas niveau du système visuel humain comme la sensibilité au contraste et le masquage visuel. Il serait donc intéressant d'étendre notre modèle aux aspects perceptuels de haut niveau pour permettre une analyse plus rigoureuse des distorsions visibles.
Remerciements
Au cours des deux dernières décennies, les méthodes perceptuelles sont devenues de plus en plus populaires dans la communauté graphique [CLL + 13]. Ces méthodes ont prouvé leurs utilités dans l'évaluation de la qualité visuelle des modèles 3D [CDGEB07, Lav11, VR12, TWC14] et pour optimiser des processus de plusieurs traitements géométriques comme la compression [MVBH15] et la simplification [LVJ05,QM08,MG10]. Les techniques perceptuelles existantes se basent sur des hypothèses sur le comportement global du système visuel en observant des objets 3D. Par exemple, il est accepté dans la communauté graphique que les artefacts visuels sont moins visibles dans les régions rugueuses que dans les régions lisses d'un maillage 3D [Lav09]. Plusieurs méthodes ont donc été développées pour estimer la rugosité d'une surface en utilisant des propriétés
Algorithm 1: Half-interval search algorithm.
Data: v: vertex, dir: noise direction, l: light, th: visibility threshold, p: precision
Result: dist: displacement threshold
min = 0; max = very_high_value; dist = max;
visibility = compute_visibility(v, dir, l, dist);
while || visibility − th || > p do
    dist = (max − min) / 2 + min;
    visibility = compute_visibility(v, dir, l, dist);
    if visibility > th then
        max = dist;
    else
        min = dist;
    end
end
This half-interval search is very fast and accurate. In our tests we have set the visibility threshold th to 0.95, the precision p to 0.005 and the parameter very_high_value to 1/10th of the mesh bounding box.
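For readers who prefer a runnable version, the sketch below reproduces Algorithm 1 in Python. The function compute_visibility is a placeholder for the perceptual model of Chapter 4 (it must return the detection probability of the displacement and be monotonically increasing with the displacement magnitude); the iteration cap is a safeguard added for the sketch and is not part of the original pseudocode.

```python
def displacement_threshold(compute_visibility, v, direction, light,
                           th=0.95, p=0.005, very_high_value=1.0, max_iter=100):
    """Half-interval (bisection) search for the vertex displacement threshold.

    compute_visibility(v, direction, light, dist) -> detection probability of
    displacing vertex v by dist along direction under the given light.
    very_high_value plays the role of the upper bound (e.g. 1/10 of the
    bounding box of the mesh).
    """
    lo, hi = 0.0, very_high_value
    dist = hi
    visibility = compute_visibility(v, direction, light, dist)
    for _ in range(max_iter):
        if abs(visibility - th) <= p:
            break
        dist = lo + (hi - lo) / 2.0
        visibility = compute_visibility(v, direction, light, dist)
        if visibility > th:
            hi = dist  # displacement too visible: shrink from above
        else:
            lo = dist  # displacement not visible enough: grow from below
    return dist
```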
Computing the displacement threshold requires an estimation of the spatial frequency, the local contrast and the visual regularity. This makes the computed displacement threshold capable of adapting to various display parameters. In particular, size and resolution of the display as well as the observer's distance to the screen and the model's distance to the viewpoint are inputs to the frequency estimation operation. In addition, the scene's illumination and the rendering mode affect the local contrast of the model. However, in an interactive setting where the light source is usually fixed relative to the viewpoint, the light direction varies with respect to the 3D mesh. It is therefore important to compute the displacement threshold independently of the light direction. We hereby propose a light independent mode for computing the displacement threshold.
Light Independent Mode
The algorithm presented in the previous section computes the vertex displacement threshold for a given light direction. However, it is useful to compute the displacement threshold independently from the light direction as in an interactive setting, the light direction usually varies with respect to the 3D mesh. To do so, we compute the threshold according to multiple light directions and then choose the smallest one. The light independent threshold can then be seen as the one corresponding to the worst possible illumination (i.e., the light direction that makes the local vertex distortions the most visible).
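As a concrete illustration of this light-independent mode, the sketch below samples light directions roughly uniformly on the unit sphere and keeps the smallest (worst-case) threshold. The number of samples trades accuracy for computation time, as discussed for Figure 5.8; the threshold_fn argument stands for the light-dependent search above and is an assumption of the sketch.

```python
import math

def sample_light_directions(n):
    # Roughly uniform directions on the unit sphere (Fibonacci spiral).
    golden = math.pi * (3.0 - math.sqrt(5.0))
    directions = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = golden * i
        directions.append((r * math.cos(phi), r * math.sin(phi), z))
    return directions

def light_independent_threshold(threshold_fn, v, direction, n_lights=32):
    # Worst possible illumination = smallest threshold over the sampled lights.
    return min(threshold_fn(v, direction, light)
               for light in sample_light_directions(n_lights))
```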
Appendix A
Computational Details for Contrast Estimation
A.1 Contrast Estimation for Flat-Shaded Surfaces
We provide here the details of the transition between Eqs. (4.5) and (4.6) (see Section 4.2.1).
In the case where the inner products between the light direction and the two face normals are both positive, the first equation above becomes:
The ratio between the norms of the vectors n1 − n2 and n1 + n2 is evaluated by
||n1 − n2|| / ||n1 + n2|| = tan(φ/2),
where φ is the angle between n1 and n2 (see Fig. 4.3). The cosines of the angles between l and n1 − n2 and between l and n1 + n2 are respectively equivalent to:
where α and θ are the spherical coordinates of the light direction in the local coordinate system defined by n1 − n2, n1 + n2 and their outer product (see Fig. 3). This finally leads to:
A.2 Contrast Estimation for Smooth-Shaded Surfaces
We provide the details on how the optimization problem in Eq. (4.17) of the manuscript can be solved. First we rewrite Eq. (4.16) as:
where
By developing Eq. (A.1) we obtain:
Equation (A.2) represents a paraboloid, so solving Eq. (4.17) boils down to finding the minimum and maximum points on that paraboloid such that α + β ≤ 1 and α, β ∈ [0, 1]. The minimum point is computed by:
If the computed α and β do not respect the minimization constraints, then their values are adjusted accordingly:
It is easy to show that the maximum distance will always correspond to (α, β) = (0, 1), (1, 0) or (0, 0). Having computed α and β, it is simple to compute the position or normal of the corresponding point using the barycentric coordinates (α, β, 1 − α − β).
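Since the coefficients of Eqs. (A.1)–(A.2) are not reproduced here, the following sketch only illustrates the structure of the solution for a generic paraboloid f(α, β) = Aα² + Bβ² + Cαβ + Dα + Eβ + G over the barycentric domain α, β ≥ 0, α + β ≤ 1: the unconstrained stationary point is adjusted (clamped/projected) back into the domain, as described above, and the maximum is taken over the three corners. The coefficient names are placeholders, and the clamping mirrors the simple adjustment mentioned in the text rather than an exact constrained minimizer.

```python
import numpy as np

def f(coeffs, a, b):
    A, B, C, D, E, G = coeffs
    return A*a*a + B*b*b + C*a*b + D*a + E*b + G

def min_point(coeffs):
    A, B, C, D, E, _ = coeffs
    # Stationary point of the paraboloid: solve grad f = 0.
    a, b = np.linalg.solve([[2*A, C], [C, 2*B]], [-D, -E])
    # Adjust the values if the barycentric constraints are violated.
    a, b = max(a, 0.0), max(b, 0.0)
    if a + b > 1.0:
        t = min(max((a - b + 1.0) / 2.0, 0.0), 1.0)  # project onto a + b = 1
        a, b = t, 1.0 - t
    return a, b

def max_point(coeffs):
    # The maximum always corresponds to one of the corners (0,0), (1,0), (0,1).
    corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    return max(corners, key=lambda ab: f(coeffs, *ab))
```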
Appendix B
Measuring the Masking Threshold Independently from the Spatial Frequency
As we mentioned in Chapter 4, the effects of the spatial frequency on the visibility of a visual pattern makes the masking threshold value different at each frequency. However, it is not practical and is time consuming to measure the masking threshold at different frequencies. In addition, it would be more difficult to model this effect based on per-frequency data. Therefore if possible, it would be more interesting to measure the masking threshold independently from the spatial frequency of the visual signal. In his paper [START_REF] Daly | The visible differences predictor: An algorithm for the assessment of image fidelity[END_REF], Daly has suggested that it would be possible to model the masking effect regardless of the frequency if the contrast values were normalized by the CSF value. The idea here is that instead of dealing with absolute contrast values which will depend on the frequency, the masking model will handle normalized contrast values that are independent from the frequency. To test whether it is possible to follow this idea by normalizing the contrast value by the CSF (Eq. (4.10)), we have performed a preliminary experiment in which we measured the masking threshold at three different frequencies (1.1cpd, 2.63cpd and 5.59cpd) and in a flat shading setting.
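To make the normalization idea concrete, the sketch below combines a CSF (left as a placeholder for the fitted model of Chapter 4) with a Daly-style threshold-elevation masking term using the fitted constants reported for Eq. (4.11). The functional form of the elevation term and the toy CSF are assumptions of the sketch, not the exact model used in the experiments.

```python
import math

def masking_threshold(mask_contrast, frequency, csf,
                      k1=0.0078, k2=88.29, s=1.00, b=4.207):
    """Contrast threshold in the presence of a masking signal.

    The mask contrast is first normalized by the CSF so that the masking
    model becomes independent of the spatial frequency; the normalized
    threshold is then converted back to an absolute contrast.
    """
    c_norm = mask_contrast * csf(frequency)                 # normalized mask contrast
    t_norm = (1.0 + (k1 * (k2 * c_norm) ** s) ** b) ** (1.0 / b)
    return t_norm / csf(frequency)                          # absolute contrast threshold

# Purely illustrative CSF shape (placeholder for the fitted model):
toy_csf = lambda f: 100.0 * f * math.exp(-0.6 * f)
for f in (1.1, 2.63, 5.59):
    print(f, masking_threshold(0.05, f, toy_csf))
```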
Experimental Protocol For this preliminary experiment we followed the same experimental procedure as described in Section 4.2.3 of the manuscript. To summarize, two icospheres were displayed on the screen side by side, one of which exhibits a vertex displacement. The subjects responded by Yes or No to whether they can see a difference between the two displayed spheres which will then affect the displacement magnitude using the QUEST method. In order to measure the masking threshold at three different frequencies we have used three icospheres each with a different subdivision level (Fig B .1).
Appendix C
Measuring the Dynamic Aspects of the Contrast Sensitivity Function
As detailed in Chapter 2, the threshold related to the contrast sensitivity aspect of the HVS is affected by the velocity at which an object is moving. However, in our experimental study (see Chapter 4) we did not take this effect into account.
The reason is that we were more focused on computing the visibility threshold for static meshes rather than dynamic ones. Nevertheless, during the last few months of the thesis we have carried out a preliminary study that aims at measuring the contrast sensitivity for a moving 3D object.
C.1 Retinal Velocity and Eye Movement
Early physiological studies in human vision stated that the visibility threshold is affected by the retinal velocity of the object, i.e., the velocity of the object's image on the retina [START_REF] Kelly | Motion and vision. i. stabilized images of stationary gratings[END_REF][START_REF] Kelly | Motion and vision. II. stabilized spatio-temporal threshold surface[END_REF]. These studies showed that as the retinal velocity increases the CSF curve is translated to the left. In other words, this means that the HVS becomes less sensitive to mid and high frequencies as a visual stimulus moves faster across the retina. Before measuring the CSF for a moving object, it is important to differentiate between the retinal velocity and the physical velocity of a moving object. In general, the HVS will try to track a moving object by following it with the eyes so that its image on the retina is as stable as possible. Therefore, it is important to take into account the movement of the eyes in the measurements of the dynamic CSF. Having the value of the eye's velocity, the retinal velocity can be evaluated by the following:
v_r = v − v_eye,    (C.1)
where v is the physical velocity of the object and v_eye is the eye's velocity.
Appendix D
Towards a Contrast-Based Perceptual Metric for 3D Meshes
As discussed in Chapter 3, the goal of many existing perceptual methods is to evaluate the perceptual quality of a distorted mesh. This can be a useful tool to evaluate and compare the performance of geometric algorithms. However, all existing methods rely on a purely geometric analysis for their perceptual evaluation, which makes them unable to adapt to various display parameters. Therefore, we have also tried to use our perceptual analysis to evaluate the quality of a distorted mesh. In this method we have used the perceptual model that is suited for smooth shading, as state-of-the-art subjective quality evaluation experiments were conducted by rendering the models with smooth shading. The idea for evaluating the perceptual quality is relatively simple: having a reference and a distorted model, we first evaluate the change in contrast (Eq. (4.14)) caused by the geometric distortion on a per-face basis. We then compute the per-face threshold using Eq. (4.25). Computing the ratio between the change in contrast and the threshold gives us a local distortion map. We then compute this distortion map for different light directions; in practice we have sampled the light directions from a sphere around the model. We finally aggregate the local distortion maps into a single score using a Minkowski pooling.
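A minimal sketch of this scoring pipeline is given below. The per-face change in contrast and the per-face threshold are assumed to have been computed with Eqs. (4.14) and (4.25) for each sampled light direction; the pooling exponent and the averaging of the per-light scores are assumptions of the sketch, since the text only specifies that a Minkowski pooling is used.

```python
import numpy as np

def light_score(delta_contrast, thresholds, p=3.0):
    # Per-face visibility ratios for one light direction; a ratio above 1
    # means the local distortion is visible.
    ratios = np.asarray(delta_contrast, dtype=float) / np.asarray(thresholds, dtype=float)
    return float(np.mean(ratios ** p) ** (1.0 / p))  # Minkowski pooling

def perceptual_quality_score(distortion_maps, p=3.0):
    # distortion_maps: list of (delta_contrast, thresholds) pairs, one per
    # light direction sampled on a sphere around the model.
    return float(np.mean([light_score(dc, th, p) for dc, th in distortion_maps]))
```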
Table D.1 shows the Pearson correlation of our proposed contrast-based perceptual metric along with existing model-based methods (MSDM2 [START_REF] Lavoué | A multiscale metric for 3D mesh visual quality assessment[END_REF], FMPD [START_REF] Wang | A fast roughness-based approach to the assessment of 3D mesh visual quality[END_REF] and DAME [START_REF] Váša | Dihedral angle mesh error: a fast perception correlated distortion measure for fixed connectivity triangle meshes[END_REF]) and image-based ones (IW-SSIM [START_REF] Wang | Information content weighting for perceptual image quality assessment[END_REF] and HDR-VDP2 [START_REF] Mantiuk | HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions[END_REF]). We notice that, compared to image-based methods, our method performs better for all databases. However, this is not the case compared to model-based methods. It is clear that MSDM2 and FMPD are better when it comes to the Masking and General Purpose databases. A closer look at the results reveals some interesting insights that can explain the behavior of our method and would help improve its performance. In the General Purpose database there are two types of distortions: a random noise type and a surface smoothing type. By isolating the noisy models from the smoothed ones and computing the Pearson correlation scores of our method, we get a correlation of 0.88 for the vertex noise type distortion and 0.80 for the smoothing one. Looking more closely at the scores given by our method, we notice that there seems to be a difference in the range of the objective score between a vertex noise type distortion and a vertex smoothing type distortion. More precisely, it appears that the score for the surface smoothing distortion is being exaggerated. There are several hypotheses that could explain this behavior. First, our method is designed to handle local distortion types, which is not the case for the smoothing distortion. A multi-scale contrast method might therefore help with this issue. Second and more importantly, in its current state our method is solely based on a measured difference of contrast. However, as argued in [START_REF] Haun | Is image quality a function of contrast perception?[END_REF], the quality attributed to an image is not just a function of the change in contrast but is also related to the clarity of information, which is generally conveyed by its structure. Intuitively, a change in the visual regularity can be indicative of a change in structure. Indeed, upon further investigation, we notice that a weak vertex noise and a weak surface smoothing will have an almost similar effect on the local contrast, but only the vertex noise will cause a change in the structure of the surface. Therefore, we think that it would be interesting to test whether integrating the visual regularity into our method, in a similar fashion to SSIM-based methods [WBSS04, WL11], would improve the performance of this quality metric. As for the Compression database, our method performs better, but remains slightly behind DAME. The reason for this good performance comes from the magnitude of the compression distortions included in this database. In fact, we have noticed that a large number of meshes in that database have a compression distortion that is barely visible, i.e., close to the visibility threshold. This explains the high performance of our perceptual model, as it is designed to compute the visibility threshold and therefore should be accurate in comparing distortions whose magnitude is close to it.
Appendix E
Résumé en Français
Les objets 3D deviennent de plus en plus utilisés dans des applications diverses allant des jeux vidéo à la visualisation scientifique. Avant que ces objets 3D soient affichés sur un écran, ils sont souvent soumis à plusieurs opérations géométriques. Par exemple, dans les applications interactives, les modèles géométriques détaillés sont simplifiés afin de garantir un affichage rapide des données 3D. Cependant, ces traitements géométriques introduisent des distorsions sous forme de perturbation des positions des sommets. Vu que ces distorsions peuvent affecter la qualité visuelle de l'objet 3D affiché, il est donc important de quantifier leur visibilité. La difficulté de ce problème est que la visibilité des distorsions géométriques sur un objet ne peut pas être déterminée en fonction de leur magnitude vue qu'il y a plusieurs paramètres externes qui entrent en jeu comme : illumination de la scène, résolution et taille de l'écran, point de vue, etc. Cela rend les métriques géométriques traditionnelles, comme la RMS ou la distance de Haussdorf, inefficace pour accomplir cette tâche. Le but du travail effectué dans cette thèse est d'évaluer le seuil de visibilité d'une distorsion géométrique. En d'autres termes, on cherche à évaluer le déplacement maximal qu'un sommet peut tolérer avant de devenir visible tout en prenant en compte les différents paramètres qui contribuent à la visibilité de ce déplacement.
Author's Papers
Papers Related to the Thesis

01466807 | en | ["info"] | 2024/03/04 23:41:44 | 2016 | https://hal.science/hal-01466807/file/doc00026420.pdf
Kiswendsida Abel Ouedraogo, Julie Beugin, El-Miloudi El-Koursi, Joffrey Clarhaut, Dominique Renaux, Frédéric Lisiecki
Safety integrity level allocation shared or divergent practices in the railway domain
INTRODUCTION
Rail system safety remains a major concern in the railway domain; the design and operating conditions of railway systems are governed, in Europe, by rules described in legal texts (directives, regulations, decrees, etc.) and by a normative reference framework that requires system safety demonstration. Member states have developed their own rules and safety standards, mainly at national level, based on national technical and operational concepts. Therefore, differences exist and can affect the optimum functioning of rail transport in the EU. Some steps have been taken to support the harmonization of the safety process, such as the adoption of subsystem Technical Specifications for Interoperability (TSI), the definition of the Common Safety Targets (CST) and the definition of the Common Safety Method (CSM). The harmonization of railway methods and safety targets continues in agreement with standard references such as EN 5012X (under revision). These railway safety standards describe the safety aspects to be applied at the various levels of the railway system life cycle, based on a risk management process which implies SIL allocation to the system's various safety-related functions in order to control the complete system residual risk. The generic concept of SIL was introduced for Electrical/Electronic/Programmable Electronic (E/E/PE) safety-related systems, taking into account the system requirement specification.
Nevertheless, various methodologies are adopted to perform the SIL allocation to system safety-related functions. The basic difference between these methodologies usually stems from the risk evaluation procedure, which varies from a rigorous quantitative estimation to a simple qualitative evaluation. Furthermore, several issues motivate the need to harmonize SIL allocation methodologies; they are inherent to the way safety integrity levels are used, such as:
• The poor harmonization of definitions across the different standards which utilize the SIL concept;
• The derivation of SIL based on reliability estimates and system complexity.
This article presents the results of discussions stemming from consultations with various rail stakeholders on their SIL use and/or allocation practices. In particular, this research summarizes shared and divergent practices in SIL allocation, leading to the proposal of a homogeneous allocation methodology intended to provide the French national rail authority (EPSF) with a guide for the different actors concerned by the SIL allocation problem in railway safety systems. As the methodology description and its implementation are presented in detail in [START_REF] Ouedraogo | Harmonized methodology for Safety Integrity Level allocation in a generic TCMS application[END_REF], some of the retained principles are only briefly recalled in this article, in order to focus primarily on the shared and divergent practices.
FROM THE ALLOCATION OF SAFETY TARGETS TO SAFETY INTEGRITY LEVELS ALLOCATION WITHIN A RAILWAY RISK MANAGEMENT PROCESS
The approach for designing a safe global rail system or a defined subsystem includes a risk analysis phase and a hazard control phase [START_REF] Blas | How to enhance the risk analysis methods and the allocation of THR, SIL and other safety objectives[END_REF]. For any railway technical system, an acceptable safety level must be ensured; the so-called Hourglass Model has therefore been introduced (cf. Figure 1), offering a supportive viewpoint to the sequence of lifecycle phases. The Hourglass Model provides an overview of the major safety-related activities that are needed during the development of a technical system, including the corresponding responsibility areas. Risk analysis in railways is generally based on the roles of the distinct actors involved. The purpose of this model is to enhance cooperation between the relevant stakeholders, clarifying responsibilities and interfaces. The technical system is developed by one or several suppliers, while the functional risk assessment is the responsibility of the operator. The hazard control, for hazards associated purely with the technical system, is the responsibility of the supplier. These two stakeholders shall comply with the prevailing legal requirements [START_REF]Railways applications -The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS)[END_REF][START_REF] Braband | Allocation of safety integrity requirements for railway signalling applications[END_REF].
In practice, and according to the context of each project, several entities intervene. The division of responsibilities between these entities is not rigidly and definitively defined, in order to allow some design latitude (different objectives, different rolling stock operating rules) and to facilitate innovation. The intervening entities are classified into two groups, the first one being the customer of the other one, and this customer-supplier relationship is found again at the various levels of definition of the system:
-The project owner is the entity carrying the need. It defines the objective of the project, its schedule and the budget dedicated to the project.
-The project manager is the entity chosen by the project owner, under a contract, to carry out the project in compliance with the deadlines, quality conditions and costs fixed by the aforementioned project owner.
Risk assessment
Risk assessment is a process covering risk analysis and risk evaluation [START_REF][END_REF] for the system under consideration. The risk analysis includes the identification of hazardous situations (conditions leading to an accident) with the associated operating environments, the consequence analysis and the selection of risk acceptance principles. An accident is defined as an event or series of unexpected events leading to death, injury or environmental damage [START_REF]Railways applications -The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS)[END_REF]. The consequence analysis identifies the links between hazards and accidents for every accident scenario (through cause-consequence diagrams and event trees, for example). The risk acceptability of the system under assessment shall be evaluated by using one or more of the following risk acceptance principles:
-Principle n°1: Applying Codes of Practice (CoP). The acceptability is proven by complying with codes or rules such as the TSI and associated standards, the national rules, or mutual recognition agreement protocols.
-Principle n°2: Comparison with similar systems (reference systems). The system acceptability is based on the "proven in use" argument for the parts that are similar to a railway reference system. This principle is known as the GAME ("Globalement Au Moins Equivalent", for globally at least equivalent) principle.
-Principle n°3: Explicit risk estimation: the applicant needs to establish a complete demonstration in order to set explicit risk estimation (quantitative or qualitative). Explicit risk estimation is usually executed [START_REF] Schüller | Guide for the application of the commission regulation on the adoption of a common safety method on risk evaluation and assessment as referred to in article 6(3)(a) of the railway safety directive[END_REF]:
-When Codes of Practice (CoP) or reference systems cannot be fully applied to control risks to an acceptable level. This occurs typically when the system under assessment is entirely new or where there are deviations from a code of practice or a similar reference system;
-When the chosen design strategy does not allow the use of codes of practice or of a similar reference system, for instance because of the desire to design a novel and more economical system that has never been used before (e.g., train positioning with an onboard satellite localization system that is less expensive to maintain than distributed trackside beacons).
This risk assessment phase allows the system requirements to be specified by establishing the list of identified hazards and a set of requirements on safety-related functions, subsystems or operating rules.
Hazard control
The second phase of the risk management process is hazard control and consists in ensuring/demonstrating that the specified system is in compliance with safety requirements. For that, the system internal causes are analyzed and determined and then the appropriate measures are implemented.
Risks related to the failures of safety-related functions, and of the systems that implement these functions, are controlled by complying with a set of technical measures required by regulations or standards. Risks induced by E/E/PE systems are processed by a risk assessment followed by safety target allocations (often associated with frequency categories). Risks related to mechanical or pneumatic systems are processed by CoP or reference systems. The concept of risk acceptability level has been developed in the industrial IEC 61508 and railway EN 50126 standards. A matrix combining the accident severities and occurrence frequencies is used to set the risk acceptability levels. An acceptable maximum rate of occurrence of the considered hazard, noted THR (Tolerable Hazard Rate), is expressed as a number of events per hour [START_REF] Blas | How to enhance the risk analysis methods and the allocation of THR, SIL and other safety objectives[END_REF].
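As an illustration of how such a matrix is used, the sketch below encodes a frequency x severity lookup. The category names follow the usual EN 50126 vocabulary, but the calibration of the cells is an example only, since each project or authority defines its own acceptability matrix.

```python
SEVERITIES = ["insignificant", "marginal", "critical", "catastrophic"]

# Example calibration only: rows are frequency categories, columns severities.
RISK_MATRIX = {
    "incredible": ["negligible",  "negligible",  "negligible",  "negligible"],
    "improbable": ["negligible",  "negligible",  "tolerable",   "tolerable"],
    "remote":     ["negligible",  "tolerable",   "undesirable", "undesirable"],
    "occasional": ["tolerable",   "undesirable", "undesirable", "intolerable"],
    "probable":   ["tolerable",   "undesirable", "intolerable", "intolerable"],
    "frequent":   ["undesirable", "intolerable", "intolerable", "intolerable"],
}

def risk_class(frequency, severity):
    """Return the risk acceptability class for a hazard."""
    return RISK_MATRIX[frequency][SEVERITIES.index(severity)]

# e.g. risk_class("remote", "catastrophic") -> "undesirable"
```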
In principle, the operator should specify the THR so that the system designer is able to determine whether their system design is capable of meeting the target. In practice, the operator will define targets at the railway system level and may need to work together with the railway suppliers to define THR at the technical hazard level [START_REF]Railways applications -The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS)[END_REF]. The next step is to determine the SIL of the safety-related functions based on the safety targets initially allocated using THR. A THR is associated with each hazard.
SIL concept
The Safety Integrity Levels are discrete levels defined to specify safety requirements for safety-related functions carried out by E/E/PE systems. A given SIL between SIL 1 and SIL 4 is linked to qualitative and quantitative requirement specifications for a safety-related function that are defined according to the random and the systematic failures related to the E/E/PE safety-related systems that perform the function [START_REF]Railways applications -The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS)[END_REF]. SIL 4 is related to the most demanding requirements to counteract the hazard causes arising from these two kinds of failures. SIL 0 is occasionally defined for functions with no safety requirements.
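To make the link between a quantitative safety target and a SIL explicit, the sketch below encodes the THR-to-SIL correspondence commonly quoted from EN 50129 / IEC 61508 for continuously operating functions. These bands come from those standards rather than from this article, so they should be treated as an assumption of the sketch.

```python
def sil_from_thr(thr_per_hour):
    """Map a Tolerable Hazard Rate (events per hour) to a SIL."""
    bands = [
        (1e-9, 1e-8, 4),   # 1e-9 <= THR < 1e-8  -> SIL 4
        (1e-8, 1e-7, 3),
        (1e-7, 1e-6, 2),
        (1e-6, 1e-5, 1),
    ]
    for low, high, sil in bands:
        if low <= thr_per_hour < high:
            return sil
    if thr_per_hour < 1e-9:
        return 4   # more demanding than SIL 4; the quantitative target is kept as such
    return 0       # no SIL band applies: additional risk reduction is needed

# e.g. sil_from_thr(5e-9) -> 4, sil_from_thr(3e-7) -> 2
```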
The next section presents the SIL use/allocation practices stemming from consultations with various railway actors carried out during the development of our harmonized methodology for SIL allocation. The obtained methodology is a research project proposal and not a national rule; this work was not intended to develop yet another method (many methods are already available) but to propose a methodology based on several existing and widely used practices, while formalizing the implicit principles they rely on.
SIL USE/ALLOCATION PRACTICES ACCORDING TO RAILWAY ACTORS
In Table 1, three main points (among others) describing a particular SIL use are presented. In Table 2, each line presents practices providing guidance to the allocation approaches (the columns are merged when there is consensus on the practice). These points of view and practices often differ and are sometimes contradictory, depending on the choices made by the railway stakeholders involved, whether they are rail duty holders, manufacturers, or notified bodies for railway certification.
The current standards attempt to harmonize risk analysis processes and their associated activities (hazard, accident scenario and hazard cause identification; safety target allocation, etc.) in order to address the weaknesses of existing methods, for instance by taking into account changes in the railway domain (e.g., new distribution of responsibilities; development of new technologies onboard trains, on track or at control stations). These changes aim at more efficient systems but also result in more complex systems, especially for the safety analysis [START_REF] Blas | How to enhance the risk analysis methods and the allocation of THR, SIL and other safety objectives[END_REF]. This results in many discussions on the evolution of the approach to be adopted in the revised standards, especially for the latest prEN 50126 draft, which is currently under investigation.
Table 1 header: Description of a particular SIL use | Point of view 1 | Point of view 2 | Remarks
SIL 0 use additionally to other levels (SIL 1 to SIL 4)
SIL 0 is allocated to non-safety related functions. These functions, however, are considered as a first step to risk reduction. This type of function, although developed with a low level of confidence, brings a minimum but useful risk reduction (e.g., reduction of the accident occurrence less than or equal to a factor of 10).
Functions that have an impact on safety (safety-related) should be allocated to a minimum SIL1.
-Standard EN 50128-2001 uses SIL 0 for non-safety related functions performed by software while the 2011 version uses the SIL 0 for functions that have an impact on safety, although this impact is low.
-Standard prEN 50126 introduced the concept of basic integrity (not yet adopted). This notion is based on point of view 1.
SIL for a function combining two dependent or independent subfunctions among each other
The THR logic only is considered. Then a SIL is allocated according to THR range associated to the function regarding the independence of its sub-functions.
Functions with a low-level of SIL can be combined to obtain a function with a higher SIL level (e.g., a SIL 4 function can be obtained by two independent SIL2 subfunctions)
The concept of independence is not clearly achieved yet (in standard prEN 50126) because if there is dependency, the model that fits it is needed. The approach of EN50126 is still under discussion and might evolve.
Function involving a human operator
Human operator is taken into account in the studies (impact on SIL allocation) by considering it as a reliable (resilient) or, in contrast, unreliable.
Human operator is excluded.
In "acquire an emergency break request" function case, a set of solutions is possible as, request triggered by the driver after an alarm in the cab or by an automatic detection mechanism. The corresponding SIL might be the same regardless the solution. -Practice 1 tends to be banned.
-Practice 2 can be illustrated by the following example: the overspeed protection is not critical if there is no overspeed situation.
Table 2 reports, for each allocation approach characteristic, the corresponding practices (Practice 1, Practice 2) and the related remarks and examples.
Characteristic 1: consequence severity associated to the function failure for SIL allocation. Practice 1: allocation approaches show a direct link between the SIL and the severity of the functional failure. Practice 2: the function demand rate (depending on the hazard occurrence frequency), associated with the severity if the function fails, allows a SIL determination.
Characteristic 2: level of breakdown of accident causes into functional causes for SIL allocation (i.e., at which level to stop). Practice 1: identification of all the functional failure causes leading to the hazard (hazard upstream causes, i.e., events which, when combined, lead to the hazard). Practice 2: identification of each scenario from a given accident (hazard downstream causes, i.e., events following the hazard occurrence until an accident) in which event combinations of technical, human or operational origin can jointly occur. Remarks: in practice 2, a preliminary step is to use the risk graph as a method allowing a prior SIL allocation ('conservative' results), i.e., it leads to levels for which the associated safety requirements are more constraining than actually needed; in this approach, if the risk graph result identifies the need to implement a very high-level safety-related function, another tool for a more detailed decomposition might be used (such as an event graph or a fault tree); in addition, the risk graph is limited because it considers only one possibility of harm avoidance although several others could exist.
Characteristic 3: item concerned by a safety target allocation (target obtained prior to the SIL). Practice 1: allocating a target on the functions identified from the system under consideration (e.g., rolling stock), i.e., the weight that is distributed to the failures of functions initially intended to reduce the risk; due to their failure, they no longer provide this reduction. Practice 2: allocation of a safety target related to the hazard (in a specific accident scenario) by apportioning the risk reduction weight on the human, operational or technical components which perform a safety-related function. Remarks: example for practice 2: for the overspeed hazard, there will be a risk part supported by the infrastructure, another by the operator and another by the rolling stock. Remark associating demonstration to allocation concepts: allocation can be seen as only defining safety requirements related to the barriers (technical/human/organizational) handling a hazard, these barriers being defined following the accident scenario analysis (practice 2); allocating a risk reduction weight to the system safety-related functions (to comply with the hazard safety requirements, practice 1) can be seen as a demonstration approach rather than an allocation one, since we seek to show whether the system meets the requirements or not. The boundary between allocation and demonstration does not appear so clearly in the practices; indeed, at the European level, a safety target can be allocated to a hazard (dangerous situation), or a contractor can directly ask for a function with a SIL x. Remark on the SSIL: the notion of Software SIL has disappeared in standard EN 50128-2011 and in prEN 50126, as the SIL is allocated to a safety-related function.
Characteristic 4: allocation practices in various accident scenarios involving the same function. Practice 1: for a specific accident scenario (when a trigger event such as an overspeed may lead to an accident with an unacceptable risk if a safety component is not involved), the implementation of a technical component, among others facing the risk generated by the trigger event, allows a final tolerated residual risk to be handled. Practice 2: if the same function is active in several scenarios, the most constraining requirement from all scenarios is used. Remarks: to illustrate, consider two accident scenarios (e.g., the accident being an obstacle collision) with a different context but involving the same function (automatic emergency braking) for the same dangerous situation (presence of obstacles): in the first case, the train driver can trigger the emergency brake when he sees the obstacle; in the second case, braking can be triggered as soon as the train loses its catenary power supply, cut off by the control center for example. The safety weight assigned to the human and technical components will be different from one case to another (the driver can support a risk reduction weight). Each case will lead, regarding the retained weight, to a different allocation of the safety requirement on the function; the most constraining target is then retained.
Ref. 1 (Practice 2): depending on the severity of the hazard consequences, a safety target associated to the hazard is defined in terms of occurrence. If the accident is catastrophic, given the European regulation 402/2013 on the Common Safety Method, a function failure leading directly to the hazard occurrence has to be 10^-9 per hour; if it is critical, the occurrence has to be 10^-7 per hour (these values refer to the CSM Design Targets, which exclude human factors and operating rules as safety measures).
Ref. 2. Remark: for the operator, the SIL allocations provided by the manufacturers include a large heterogeneity in the details provided; but regardless of the modeling tree view used, what is essential is that everyone understands each other. The necessary breakdown level is the one that ensures the demonstration; the sufficient level depends on the manufacturer (need or not of detailed subsystems, for example with actuator mechanisms related to brake control). Practice 1: there is an activity prior to the THR determination made by the infrastructure manager or the operator for a given function failure mode (some THR are defined by European legal texts such as TSI); from a THR, the manufacturer will analyze how to design its system and select its product in order to meet this target. In a functional allocation approach, the requirement is on the function: before defining a SIL, the requirement is defined regardless of the system technology in use; the requirement on the function is common and can be seen as being achieved by a "box"; if an E/E/PE system is used to perform this "box", the requirement is expressed in terms of SIL, and in this case it involves specific steps for the control of systematic failures. Practice 2: the system actor at the highest level can only allocate functional requirements to lower-level actors; it is then the latter's responsibility, by their design choices, to perform a safety analysis in order to identify whether their system is safe or not (e.g., energy storage by hydrogen or electricity); the manufacturer, from its product, demonstrates a target achievement (demonstration approach rather than allocation). The actor managing the whole railway system actually defines the weight to allocate to the safety-related function based on technical, human or operational features (not only technical). There are particular cases: for example, a SIL 4 function achieved through track beacons also requires the same safety level for the onboard part in the rolling stock, the information support, etc.
Ref. 3. Practice 1: the infrastructure manager or the operator has the responsibility to control external events (especially the risk reduction brought by barriers external to the system). Indeed, the rolling stock is not subject to the same external events according to the operated lines (conventional line, automated line, driverless line with specific procedures). The possibility to release a THR target regarding external events is the responsibility of the operator or the infrastructure manager. Practices 1 and 2, based on observations at the European level: a safety target can be allocated to a hazard (dangerous situation), or an operator may sometimes directly claim SIL x for a function; in particular this is observed for urban guided transport, depending on the rail network size and on the operator engineering expertise. One might specify only the highest accident risk level (e.g., number of injuries per year); in this case, the manufacturer, who is involved in the preliminary safety study, can perform a fault tree beginning with the individual risk until specifying weights in the tree. The German SIRF (Sicherheitsrichtlinie Fahrzeug, rolling stock safety regulations) approach is based on the THR apportionment from the accident analysis. Practice 2: THR safety targets should be assigned to a hazard considering the accident implying this hazard (scenario), and then the different actors have to reach this target at the system level (in the overall rail system); the actors might then show that the THR are achieved. Remark: the Safe Down Time (SDT) intervenes as soon as there is an "AND" gate in a fault tree; it is part of the items that must be allocated. The SDT has a direct impact on the THR choice: the lower the SDT is, the higher the tolerable rate is.
Ref. 4. Specifications on accident scenarios: these scenarios are jointly defined between the manufacturer and its suppliers to fix a safety target. At the rolling stock level, the manufacturer receives information on the safety performance of the suppliers' equipment in order to verify whether the proposed equipment performance can be selected or whether new, more robust equipment should be developed.
Table 3. Different actors' reactions to the SIL allocation practices mentioned in Table 2 (columns: Ref.; Operators; Notified Bodies; Manufacturers).
Given these uses and practices related to SIL, the following section presents the choices retained in the methodology for SIL allocation to safety-related functions.
CHOICES RETAINED IN THE METHODOLOGY
1. Methodology general aspects
The proposed methodology is established through two processes and their different steps are based on practical rules and hypotheses to be tested. Its implementation begins with the use of THR quantitative safety targets. These quantitative targets allow taking into account the Common Safety Method -Design Targets (CSM-DT) values associated to technical systems design recently defined by European Railway Agency (ERA). Although THR is a quantitative criterion, it is a target measure with respect to both systematic and random failure integrity.
It is accepted that only random failure integrity will be possible to quantify; qualitative measures and judgements will be necessary to justify that the systematic integrity requirements are met. This is mainly covered by the SIL (and the measures derived from the SIL) (see standard EN50129 and prEN50126).
The methodology is illustrated by the overview in Figure 2. This is a macro view highlighting two main processes.
In process 1, the THR apportioning rules are applied to the safety-related functions. On the one hand, these rules are based on the logical combinations of these functions. On the other hand, to take into account technical conditions (last safety weak link, functional dependencies, technological complexity, etc.), specific rules implicitly used in existing practices are defined for readjusting some THR values. The SIL allocation based on the apportioned and validated THR values is finally established in process 2. Fault Tree Analysis explicitly expresses how equipment failures, operation errors and external factors lead to system failures, and is therefore commonly used to analyse system safety. It is used here as a quantitative method because it shows the cause-consequence links between the system functions, but in certain cases other methods can be used.
Process 1 based on the THR
The THR is associated with a particular hazard; it is a main criterion for SIL allocation in the railway domain. A hazard with a tolerable rate results from combinations or sequences of failures kept under control within the system in a particular operational context.
In this process, the system elements are considered from the functional point of view as several hardware/software architectures are possible. In the railway standards there is no explicit indication/rule or provided guidance on how to reduce or manage the SIL allocation considering dependable architectural solutions, as is done in the IEC 61508 generic standard. For a particular sub-function with a specific SIL, supplier architecture solutions may be different but equally satisfactory [START_REF] Blanquart | Criticality categories across safety standards in different domains[END_REF].
THR (safety target associated to hazard occurrence in the considered accident scenario) apportioning rules are applied to the safety-related functions or sub-functions. On the one hand, these rules are based on the logical combinations of these safety-related functions. On the other hand, to take into account technical conditions (last safety weak link, functional dependencies, technological complexity, etc.), specific rules implicitly used in existing practices, are defined for readjusting some THR values.
After the THR values have been readjusted based on these specific rules, a "down-top" (bottom-up) quantitative analysis and validation is performed to verify compliance with the THR (safety target) apportionment for each corresponding hazard. This validation may eventually lead to changes in the SIL allocation process by considering specific technical architectures. When the safety target THR is not achieved, the risk acceptability needs to be demonstrated (expert arguments, GAME, etc.). The SIL allocation based on the apportioned and validated THR values is finally established in process 2.
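To illustrate process 1, the following short Python sketch shows how a hazard-level THR could be apportioned over the safety-related functions of a fault tree, using the classical quantification rules (failure rates add up under an OR gate; under an AND gate the combined rate involves the product of the sub-function rates and the safe down time). It is only an illustration written for this guide, not an extract of the standards; the example THR of 10^-9/h, the weights and the safe down time value are hypothetical.

def apportion_or(thr_hazard, weights):
    """Split a hazard THR (per hour) over OR-ed sub-functions according to weights."""
    total = sum(weights)
    return [thr_hazard * w / total for w in weights]

def and_gate_rate(lambda_1, lambda_2, sdt_hours):
    """First-order top rate of an AND gate: lambda_1 * lambda_2 * SDT."""
    return lambda_1 * lambda_2 * sdt_hours

# Hazard with THR = 1e-9/h handled by two independent sub-functions (AND gate):
# with a safe down time of 1 h, each sub-function may be allocated about 3.2e-5/h.
thr_hazard, sdt = 1e-9, 1.0
lam = (thr_hazard / sdt) ** 0.5
print(f"AND gate: each sub-function ~ {lam:.1e}/h, top ~ {and_gate_rate(lam, lam, sdt):.1e}/h")

# Same hazard reached through an OR of three functional causes weighted 50/30/20 %:
print("OR gate split:", [f"{x:.1e}/h" for x in apportion_or(thr_hazard, [5, 3, 2])])

A lower safe down time would allow higher sub-function rates for the same hazard THR, which is the remark made about the SDT in Table 3.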
Process 2 for SIL allocation
A safety target refers to a function failure rate, while a SIL refers to a function: each failure mode of a given function can be assigned a safety target as a THR, and a SIL is then allocated to this function based on the most restrictive THR. The SIL allocation to safety-related functions is performed through the THR => SIL correspondence (see Table A.1 of standard EN 50129).
How train functions and/or subsystems are implemented (technical design of the functions on the hardware/software architecture) also has an impact on the SIL allocation. Specific allocation rules taking into account these implementation conditions (complex technical solutions, mutual intrusion of implemented functions, more or less restrictive safety requirements) are also defined in this latter process. They cover, for instance, the case of a function with quantitative requirements more constraining than 10^-9/h, the need to involve methods and technical or operational measures applicable to SIL 4, or the allocation of at least SIL 1 to a function with low quantitative safety requirements (THR >= 10^-5/h).
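To make the THR => SIL correspondence of process 2 concrete, the sketch below encodes the commonly cited decade bands of Table A.1 of EN 50129 (one SIL per decade of THR between 10^-9/h and 10^-5/h) and allocates the SIL of a function from the most restrictive THR of its failure modes; the boundary handling and the example failure rates are illustrative assumptions, not a normative implementation.

def sil_from_thr(thr_per_hour):
    """Return the SIL corresponding to a THR expressed in failures per hour."""
    if thr_per_hour < 1e-9:
        # More constraining than 10^-9/h: see the specific rules discussed above.
        return "more constraining than SIL 4"
    if thr_per_hour < 1e-8:
        return "SIL 4"
    if thr_per_hour < 1e-7:
        return "SIL 3"
    if thr_per_hour < 1e-6:
        return "SIL 2"
    if thr_per_hour < 1e-5:
        return "SIL 1"
    # THR >= 10^-5/h: low quantitative requirement, at least SIL 1 if safety-related.
    return "at least SIL 1"

def sil_for_function(thr_of_failure_modes):
    """Allocate the SIL of a function from the most restrictive THR of its failure modes."""
    return sil_from_thr(min(thr_of_failure_modes))

# Hypothetical function with two failure modes; the most restrictive THR (3e-8/h) drives the SIL.
print(sil_for_function([3e-8, 7e-7]))  # -> SIL 3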
Methodology possible evolutions according to the changes in regulations
The prEN 50126 standard draft (under review and subject to changes) advocates the use of the Tolerable Functional Failure Rate (TFFR) concept for safety-related functions and of the THR for hazards, separating the hazard layer from the function layer. The proposed generic methodology can be adapted by setting a (non-modifiable) THR for each identified hazard and by apportioning it in terms of TFFR to the safety-related functions and sub-functions.
CONCLUSION
This article has highlighted and focused on the practices for SIL allocation in railways, based on the experience of the actors concerned, a review of the related literature and the work done in past research projects such as MODUrban or MODTRAIN. Different points of view related to SIL uses, the different identified SIL allocation practices and the associated actors' reactions to these practices have been described with examples. The retained practices are included in a guide describing a methodology for a harmonized SIL allocation.
Figure 1. The hourglass model for risk management within railway, adapted from [START_REF]Railways applications -The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS)[END_REF].
Figure 2. Overview of processes 1 & 2: THR apportionment and SIL allocation.
Table 1. Different identified points of view related to SIL uses.
Table 2. Different identified SIL allocation practices.
ACKNOWLEDGEMENTS
The authors would like to thank various operators, manufacturers and notified bodies for their relevant and detailed feedbacks and remarks on the proposed methodological guide. It allows improving the methodology through their professional expertise and highlighting the still open various points. | 31,299 | [
"1110519",
"890516"
] | [
"222119",
"222119",
"222119",
"1303",
"1303",
"485509"
] |
01281209 | en | [
"phys",
"sdv",
"sdu"
] | 2024/03/04 23:41:44 | 2016 | https://hal.science/hal-01281209v2/file/NBDaJaJoLu16-ESPL.pdf | Amina Nouhou Bako
Frédéric Darboux
Francois James
Christophe Josserand
Carine Lucas
Pressure and shear stress caused by raindrop impact at the soil surface: Scaling laws depending on the water depth
Keywords: Raindrop, Navier-Stokes equations, pressure, shear stress
Pressure and shear stress caused by raindrop impact at the soil surface: Scaling laws depending on the water depth
Introduction
Raindrop impact is a major driver of soil erosion and is acting through a wide range of processes [START_REF] Terry | A rainsplash component analysis to define mechanisms of soil detachment and transportation[END_REF][START_REF] Planchon | A physical model for the action of raindrop erosion on soil microtopography[END_REF]: the raindrop impacts break down aggregates, leading to soil detachment and crust formation [START_REF] Bresson | Role of compaction versus aggregate disruption on slumping and shrinking of repacked hardsetting seedbeds[END_REF]. They also cause splashes, i.e. the transport of soil material in the air over distances of a few decimeters [START_REF] Leguédois | Splash projection distance for aggregated soils. theory and experiment[END_REF]. Also, raindrop impacts are essential in shallow overland flow (i.e. sheet flow) for the detachment of particles. Indeed, sheet flow by itself does not have the ability to detach particles because of its limited velocity and thus weak shear stress [START_REF] Kinnell | The effect of flow depth on sediment transport induced by raindrops impacting shallow flows[END_REF]. The impacts of the raindrops can detach the material that is then transported by the sheet flow.
Drop impact effects can differ strongly depending on whether the soil is dry or wet because both the shear strength of the soil and the shear stress caused by the drops depend on the soil humidity. Rapidly, for raindrops, the soil can be considered wetted so that we will focus here on the impact on a thin liquid film. The presence of a thin water layer at the soil surface modifies the effect of raindrop impacts [START_REF] Kinnell | The effect of flow depth on sediment transport induced by raindrops impacting shallow flows[END_REF]. The consequences of drop impacts depend primarily on drop properties. However, the drops of concern for soil erosion have a narrow range of features: raindrops are considered at terminal velocity, leading to a clear relationship between diameter and velocity [START_REF] Atlas | Doppler radar characteristics of precipitation at vertical incidence[END_REF]. This contrasts with other usual applications in fluid mechanics (e.g. ink-jet printing where the ink drop impacts the paper or the coating of a surface by multiple drop impacts) where drops vary in viscosity, density, surface tension, velocity and diameter [START_REF] Marengo | Drop collisions with simple and complex surfaces[END_REF].
Raindrop-driven erosion depends also on soil properties such as soil resistance to shear stress [START_REF] Sharma | Soil detachment by single raindrops of varying kinetic-energy[END_REF][START_REF] Mouzai | Shear strength of compacted soil: Effects on splash erosion by single water drops[END_REF], hydrophobicity [START_REF] Ahn | Effects of hydrophobicity on splash erosion of model soil particles by a single water drop impact[END_REF] and roughness [START_REF] Erpul | Wind effects on sediment transport by raindrop-impacted shallow flow: A wind-tunnel study[END_REF]. While raindrop impacts cause splashes, the quantity of eroded material is controlled mostly by the shear created by the impact, which is not strongly affected by the splash itself [START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF]. Indeed, it has been argued that the erosion, in term of bedload transport rate, is controlled by the shear stress affected at the soil boundary, usually measured through the dimensionless Shields number [START_REF] Parker | Surface-based bedload transport relation for gravel rivers[END_REF][START_REF] Charru | Erosion and deposition of particles on a bed sheared by a viscous flow[END_REF][START_REF] Houssais | Bedload transport of a bimodal sediment bed[END_REF]. Although these results have been deduced precisely for gravel river beds made of non-cohesive granular materials with a narrow granulometric distribution, it is believed that this bed shear stress and to a smaller extent the bed pressure, are the main ingredients of most of erosion processes. Therefore, if the transport of eroded materials can be influenced by the splashing itself, which is always present for raindrop impact, the bedload transport rate is primarily due to the shear stress created by the impact.
The influence of the water layer on the erosion process has also drawn attention: at first, one could argue that the erosion is limited by the shielding of the soil surface by the water layer. Raindrop energy is absorbed by the water layer, which lowers the pressure and shear stress exerted on the soil. It has been in fact documented that a water layer can maximize sheetflow erosion rate in comparison to a drained surface and that such erosion depends mostly on the ratio between the water depth and the raindrop diameter [START_REF] Singer | Soil erosion under simulated rainfall and runoff at varying cover levels[END_REF]. In fact, there is a critical depth h c at which the splash transport rate is maximum: beyond h c the transport rate decreases strongly. However, different values for h c have been proposed in the literature as shown in [START_REF] Dunne | A rain splash transport equation assimilating field and laboratory measurements[END_REF]: for instance it can vary from h c = D [START_REF] Palmer | The influence of a thin water layer on waterdrop impact forces[END_REF][START_REF] Palmer | Waterdrop impact forces[END_REF] to h c = 0.2D in [START_REF] Torri | Some aspects of soil erosion modeling[END_REF], and even with 0.14D ≤ h c ≤ 0.2D according to [START_REF] Mutchler | Soil detachment by raindrops[END_REF]. Finally, [START_REF] Ghadiri | The risk of leaving the soil surface unprotected against falling rain[END_REF] showed a reduction of soil splash once a water layer covers the soil surface while [START_REF] Moss | Movement of solids in air and water by raindrop impact. effects of drop-size and water-depth variations[END_REF] and [START_REF] Kinnell | The effect of flow depth on sediment transport induced by raindrops impacting shallow flows[END_REF] found that the outflow rate of raindrop-induced flow transport reaches its maximum value when the flow depth equals two to three drop diameters. For three drop diameters (and above), detachment by raindrops becomes quite limited but drop energy still allows for particle suspension, leading to a significant transportation rate (Ferreira and Singer, 1985).
Raindrop interaction with the soil surface has been investigated using numerical simulations since the 1970s [START_REF] Wang | The mechanics of a drop after striking a stagnant water layer[END_REF]. They allow for the computation of pressure and shear stress fields at the soil surface [START_REF] Huang | A numerical study of raindrop impact phenomena -the elastic-deformation case[END_REF]Ferreira et al., 1985;[START_REF] Hartley | Numerical study of the maximum boundary shear-stress induced by raindrop impact[END_REF][START_REF] Hartley | Boundary shear-stress induced by raindrop impact[END_REF]. All these simulations considered a rigid soil surface, hence not accounting for the elasticity of the soil or its granular nature. According to [START_REF] Ghadiri | The risk of leaving the soil surface unprotected against falling rain[END_REF], the soil behaves like a solid during the short time of the impact, justifying the simplification. These simulations have enabled the determination of the critical variables. For example, the maximum shear stress was found to depend mostly on the Reynolds number and on the water layer thickness-drop diameter ratio [START_REF] Hartley | Numerical study of the maximum boundary shear-stress induced by raindrop impact[END_REF][START_REF] Hartley | Boundary shear-stress induced by raindrop impact[END_REF]. However, due to limitations in computer and algorithm performance, simulations were carried out with critical parameters (such as the Reynolds number) well out of the natural range, moderating confidence in the results.
The present paper takes advantage of the recent developments of detailed and direct simulations in fluid mechanics to study the impact of single raindrops on a soil surface covered by a water layer (see the reviews on drop impacts in the fluid mechanics literature: [START_REF] Rein | Phenomena of liquid drop impact on solid and liquid surfaces[END_REF]; [START_REF] Yarin | Drop impact dynamics: Splashing, spreading, receding, bouncing[END_REF]; [START_REF] Marengo | Drop collisions with simple and complex surfaces[END_REF]; [START_REF] Josserand | Drop impact on a solid surface[END_REF]). The pressure field inside the water layer, the pressure field at the soil surface and the shear stress at the soil surface are analyzed for a raindrop of diameter 2 mm and terminal velocity 6.5 m s^-1, varying the thickness of the water layer. Short time scales are considered, i.e. the development of stresses before particle splash initiation. A self-similar approach valid for a thin liquid layer is used to analyze the results, showing that scaling laws recently proposed in fluid mechanics apply to natural raindrops too. It confirms that the ratio between the water depth and the raindrop diameter is critical to understanding the effect of raindrop impact.
Materials and Methods
Problem configuration
We consider the normal impact of a liquid drop of diameter D on a thin liquid film of thickness h in the paradigm of rainfall (Figure 1). The liquid has a density and dynamic viscosity denoted ρ l and µ l . The density and viscosity of the surrounding gas are denoted ρ g and µ g . The drop impacts on the ground at velocity U = -U 0 e z which corresponds to the terminal velocity for a raindrop. We will assume here for the sake of simplicity that the raindrop has a spherical shape, even though it is known that raindrops can have a deformed shape, particularly for large diameters [START_REF] Villermaux | Single-drop fragmentation determines size distribution of raindrops[END_REF]. However, it is not expected to significantly impact the dynamics and the effect of the specific shape of the impacting drop is postponed to future work. The gravity is denoted g = -ge z , and the liquid-gas surface tension γ.
Different dimensionless parameters can be constructed in this configuration. Two of them are commonly used in drop impact problems, since they characterize the balance between the inertia of the drop with the viscous and capillary forces respectively: firstly, the Reynolds number Re which is the ratio between inertia and viscous forces:
Re = ρ_l U_0 R / μ_l.
For raindrops, Re ranges from 6500 to 23000 [START_REF] Hartley | Boundary shear-stress induced by raindrop impact[END_REF]. Secondly the Weber number We which is defined as the ratio between inertia and capillary forces:
We = ρ_l U_0^2 R / γ.
The Weber number ranges from 50 (for a raindrop diameter of 0.5 mm and a velocity of 2 m/s) to 12000 (for a raindrop diameter of 6 mm and a velocity of 9 m/s) for natural rainfall. The problem also depends on the aspect ratio of the problem geometry, i.e. the ratio between the thickness of the liquid film and the drop diameter, h/D.
Additional dimensionless numbers are present in this problem, but are of limited interest, either because they do not depend on the raindrop impact configuration, or because they characterize a physical mechanism that can be neglected here. It is the case of the Froude number
Fr = U_0^2 / (g D),
which quantifies the ratio between inertia and gravity forces and can take values between 800 and 1400 for natural rainfall. Indeed, although gravity is crucial to accelerate the raindrop to its terminal velocity, gravity itself plays quite a limited role in the impact dynamics and hence is usually not accounted for in the modeling and numerical simulation of drop impacts [START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF]. For instance, for a drop falling from a height H, the free-fall velocity gives U_0^2 ∼ 2gH, so that the Froude number is simply the ratio 2H/D. Since the terminal velocity of a raindrop is equivalent to a free-fall height H of several meters, the Froude number will always be very high in the present problem. Note finally that while the Reynolds and Weber numbers are also high in the present problem, this does not mean that viscous and capillary effects can be neglected: the formation of a thin liquid layer means that viscous and capillary effects will be important in some regions of the flow, something that is not valid for gravity, which can thus be safely neglected.
The two dimensionless numbers related to the liquid/gas properties, namely the density ratio ρ g /ρ l and the viscosity ratio µ g /µ l , present a limited interest too: while the surrounding gas can sometime influence the splashing properties [START_REF] Xu | Drop splashing on a dry smooth surface[END_REF], in particular through the entrapment of an air bubble beneath the drop at the impact [START_REF] Thoroddsen | Air entrapment under an impacting drop[END_REF][START_REF] Thoroddsen | The air bubble entrapped under a drop impacting on a solid surface[END_REF], this effect is negligible for the impact of a raindrop [START_REF] Hartley | Numerical study of the maximum boundary shear-stress induced by raindrop impact[END_REF]. Moreover, these two dimensionless numbers are only related to the gas and liquid characteristics and not to the impact conditions. Therefore they do not vary significantly with the raindrop radius and velocity.
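For reference, these dimensionless numbers are easy to evaluate; the short script below (a simple back-of-the-envelope check using the fluid properties listed later in the simulation set-up) recovers Re = 6500 and We = 2112.5 for the 2 mm raindrop impacting at 6.5 m/s studied here, together with a Froude number of a few thousand.

# Dimensionless numbers for the simulated raindrop case (D = 2 mm, U0 = 6.5 m/s).
rho_l = 1.0e3     # water density (kg/m^3)
mu_l = 1.0e-3     # water dynamic viscosity (kg/m/s)
gamma = 0.02      # liquid-gas surface tension used in the simulations (kg/s^2)
g = 9.81          # gravity (m/s^2)

D = 2.0e-3        # drop diameter (m)
R = D / 2.0       # drop radius (m)
U0 = 6.5          # terminal (impact) velocity (m/s)

Re = rho_l * U0 * R / mu_l       # inertia / viscous forces
We = rho_l * U0**2 * R / gamma   # inertia / capillary forces
Fr = U0**2 / (g * D)             # inertia / gravity forces

print(f"Re = {Re:.0f}, We = {We:.1f}, Fr = {Fr:.0f}")
# -> Re = 6500, We = 2112.5 and Fr of order 2000: inertia dominates at impact.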
Based on the water hammer pressure, that is the pressure created by the inertia of the drop hitting a solid surface, it was first suggested in soil erosion literature that the amplitude of the stress on the soil surface can be quite large (2 -6 MPa), with a limited duration of about 50 microseconds [START_REF] Ghadiri | A study of soil splash using cine-photography[END_REF], this duration increasing with the depth of the water layer [START_REF] Ghadiri | The risk of leaving the soil surface unprotected against falling rain[END_REF]. However, such high water hammer pressures have not been observed experimentally [START_REF] Ghadiri | The risk of leaving the soil surface unprotected against falling rain[END_REF][START_REF] Hartley | Boundary shear-stress induced by raindrop impact[END_REF][START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF] and, following the argument of [START_REF] Ghadiri | A study of soil splash using cine-photography[END_REF], it can be shown that this pressure should arise only during a very short time of the order of D/c (where D is the drop diameter and c the sound speed) leading to a typical time scale of the order of one microsecond as found by [START_REF] Ghadiri | The risk of leaving the soil surface unprotected against falling rain[END_REF]. Then, the pressure decreases rapidly with time as shown in numerical simulations [START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF]. Moreover, as discussed by [START_REF] Dep | Measurement of force vs time relations for waterdrop impact[END_REF] and [START_REF] Nearing | Measurement of waterdrop impact pressure on soil surfaces[END_REF], compressible effects can be neglected since they will only influence the very early time of contact and a very small region of the impacted zone. This is in agreement with former theoretical and experimental studies on drop impacts where compressible effects were shown to appear only at much higher drop velocities, typically of the order of a fraction of the sound velocity in water [START_REF] Lesser | The impact of compressible liquids[END_REF]. Therefore, as shown and as used in the recent studies, the liquid can be assumed incompressible during the impact [START_REF] Rein | Phenomena of liquid drop impact on solid and liquid surfaces[END_REF][START_REF] Yarin | Drop impact dynamics: Splashing, spreading, receding, bouncing[END_REF][START_REF] Marengo | Drop collisions with simple and complex surfaces[END_REF][START_REF] Josserand | Drop impact on a solid surface[END_REF].
The two-fluid Navier-Stokes equations
Both the gas and the liquid obey the incompressible Navier-Stokes equations (with their respective densities and viscosities), with jump conditions at the interface. This complete dynamics can be described within the one-fluid formulation of the incompressible Navier-Stokes equation, which reads:
ρ (∂u/∂t + u·∇u) = -∇p + ρg + ∇·[μ (∇u + (∇u)^T)] + γκδ_s n    (1)
to which is added the equation of mass conservation, which for incompressible fluid yields
∇·u = 0,    (2)
where u is the fluid velocity vector, ∇ is the usual differential operator and p the pressure field, all functions of space x and time t. In these equations, the density ρ(x, t) and viscosity μ(x, t) are discontinuous fields of space and time: ρ(x, t) (respectively μ(x, t)) is ρ_l or ρ_g (respectively μ_l or μ_g) depending on whether we are in the liquid or gas phase. The term γκδ_s n represents the surface tension force, proportional to the curvature κ and localized on the interface (the Dirac term δ_s), whose normal is n.
The curvature is defined by the divergence of this vector:
κ = ∇ • n.
An additional equation has to be considered for the motion of each phase (gas and liquid) leading eventually to the movement of the interface. Indeed, considering the characteristic function χ(x, t) which is equal to one in the liquid phase and zero in the gas phase, the volume conservation of both phases implies that χ is solution of the advection equation:
∂χ/∂t + u·∇χ = 0.    (3)
Within this framework, both fluids satisfy the incompressible Navier-Stokes equation with the applicable density and viscosity.
From here on the soil surface is taken to be rigid. This simplification comes from the unavailability of a realistic deformation law for soils at the scale of a raindrop.
Numerical Method and dimensionless version
The Navier-Stokes equations (1,2,3) are solved by the open source Gerris flow solver (version 2013/12/06) [START_REF] Popinet | Gerris flow solver[END_REF]. Gerris uses the Volume of Fluid method on an adaptive grid [START_REF] Popinet | Gerris: a tree-based adaptive solver for the incompressible euler equations in complex geometries[END_REF][START_REF] Popinet | An accurate adaptive solver for surface-tension-driven interfacial flows[END_REF]. The rotational symmetry of the problem around the vertical axis is used to perform 2D numerical simulations using cylindrical coordinates (called 3D-axisymmetric coordinates).
The discretization of the equations is made on a quadtree structure for square cells. The quadtree structure allows for a dynamic mesh refinement: when needed, a "parent" cell of the mesh is divided into four identical square "children" cells (which length is half the one of the parent cell), up to a maximum level n of refinement. Similarly, a cell merging is performed whenever the precision of the computation is below a user-defined threshold. The refinement/merging criterion is based on a mix of high values of the density and velocity gradients. Hence, smaller cells are used at the gas/liquid interface and at locations showing large changes in velocity.
The interface between the gas and liquid phases is tracked using a color function C which corresponds to the integral of the characteristic function in each grid cell. C is taken as the fraction of liquid phase inside the cell. This allows for the interface to be reconstructed using the piecewise linear interface calculation [START_REF] Li | Calcul d'interface affine par morceaux (piecewise linear interface calculation)[END_REF], leading to a conservative advective scheme for the advection of the interface [START_REF] Brackbill | A continuum method for modeling surface tension[END_REF][START_REF] Lafaurie | Modelling merging and fragmentation in multiphase flows with SURFER[END_REF]. For each phase, the viscosities (µ g or µ l ) and the densities (ρ g or ρ l ) are constant because the fluids are assumed incompressible. Hence, each cell crossed by the interface has a viscosity µ and a density ρ determined by the relative volume fraction of each phase, following:
ρ = C ρ_l + (1 - C) ρ_g   and   μ = C μ_l + (1 - C) μ_g.    (4)
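A minimal sketch of this mixture rule is given below; it simply applies equation (4) cell by cell to a field of liquid volume fractions (the variable names and the small one-dimensional example are illustrative and are not taken from the Gerris source code).

import numpy as np

RHO_L, RHO_G = 1.0e3, 1.0      # liquid and gas densities (kg/m^3)
MU_L, MU_G = 1.0e-3, 2.0e-5    # liquid and gas dynamic viscosities (kg/m/s)

def mixture_properties(C):
    """Cell density and viscosity from the liquid volume fraction C (0 = gas, 1 = liquid)."""
    C = np.asarray(C, dtype=float)
    rho = C * RHO_L + (1.0 - C) * RHO_G
    mu = C * MU_L + (1.0 - C) * MU_G
    return rho, mu

# Small 1-D example: a column of cells crossing the gas/liquid interface.
C = np.array([0.0, 0.0, 0.3, 1.0, 1.0])
rho, mu = mixture_properties(C)
print(rho)  # the mixed cell (C = 0.3) gets rho = 300.7 kg/m^3
print(mu)   # and mu close to 3.1e-4 kg/m/s, between the gas and liquid values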
Finally, the Navier-Stokes equations are solved in Gerris in a dimensionless form to lower numerical errors. The domain length has a size of one. The lengths are rescaled by a factor λ defined from the numerical diameter of the raindrop D' = 0.3 used in Gerris (so λ = D/D'), while the velocities, densities, time and pressure are rescaled by U_0, ρ_l, λ/U_0 and ρ_l U_0^2, respectively. Hence, the effective Navier-Stokes equation to solve reads:
ρ' (∂u'/∂t' + u'·∇u') = -∇P' + ρ'g' + ∇·[μ' (∇u' + (∇u')^T)] + γ'κ'δ_s n    (5)
where the primes represent dimensionless variables.
Simulated cases and conditions
We performed numerical simulations for typical raindrop impacts on a water layer. All computations were done for spherical raindrops of diameter D = 2 mm. Considering the numerical diameter D' = 0.3, this leads to a domain of 6.67 mm in both width and height. The raindrop velocity was set to its terminal velocity, U_0 = 6.5 m s^-1. The thickness of the water film h varied from D/10 (i.e. 0.2 mm) to 2D (i.e. 4 mm), with the intermediate cases D/5, D/3, D/2 and D.
Standard air and water properties were used: ρ_l = 10^3 kg m^-3, μ_l = 10^-3 kg m^-1 s^-1, ρ_g = 1 kg m^-3, μ_g = 2 × 10^-5 kg m^-1 s^-1, with a surface tension γ = 0.02 kg s^-2. In this configuration, the Reynolds number Re was 6500 and the Weber number We was 2112.5. These large values indicate that inertia dominates a priori over the other forces. Preliminary testing confirmed that the effect of gravity was negligible during a raindrop impact. Consequently, gravity was not included in the simulations.
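Since the simulation outputs discussed below are dimensionless, the following small helper (an illustrative sketch, not part of the Gerris toolchain) converts code units back to physical units with the scales defined above; with D = 2 mm, D' = 0.3 and U_0 = 6.5 m/s, a dimensionless time of 10^-3 corresponds to about 1 µs and a dimensionless pressure of 1.85 to about 78 kPa, consistent with the values quoted in the Results section.

D = 2.0e-3        # physical drop diameter (m)
D_NUM = 0.3       # numerical drop diameter in code units
U0 = 6.5          # impact velocity (m/s)
RHO_L = 1.0e3     # water density (kg/m^3)

LAMBDA = D / D_NUM         # length scale: 6.67e-3 m (also the domain size)
T_SCALE = LAMBDA / U0      # time scale: about 1.03e-3 s
P_SCALE = RHO_L * U0**2    # pressure scale: 42.25 kPa

def to_physical(t_code=None, p_code=None, x_code=None):
    """Convert dimensionless time, pressure or length to SI units."""
    out = {}
    if t_code is not None:
        out["t [s]"] = t_code * T_SCALE
    if p_code is not None:
        out["p [Pa]"] = p_code * P_SCALE
    if x_code is not None:
        out["x [m]"] = x_code * LAMBDA
    return out

print(to_physical(t_code=1e-3, p_code=1.85))
# -> t of about 1e-6 s and p of about 7.8e4 Pa, i.e. the 1 microsecond and
#    78.2 kPa quoted for the first snapshot of the impact.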
At high velocities, drop impacts develop angular instabilities leading to the famous pictures of splashing, popularized for instance in commercials. Splash is one of the key issues of drop impacts, identified already by [START_REF] Worthington | On the form assumed by drops of liquids falling vertically on a horizontal plate[END_REF] in the first studies on drop impacts, and leading for instance to secondary droplet breakups [START_REF] Rein | Phenomena of liquid drop impact on solid and liquid surfaces[END_REF]. These splashing dynamics can be important in soil erosion because they can transport eroded material over large distances, as shown by [START_REF] Planchon | A physical model for the action of raindrop erosion on soil microtopography[END_REF]. In the present case, the axisymmetric geometry can be used because 1) we are focusing on the erosion mechanism itself and not on the transport of particles, and 2) such instabilities become relevant for time scales much larger than the typical time scale of the pressure and shear stress development at the soil surface. Consequently, an axial boundary condition was imposed on the symmetry axis (r = 0). At the soil surface (z = 0), a zero-velocity boundary condition (also known as a Dirichlet condition) was set. This ensured that both 1) no infiltration (u_z = 0) and 2) no slip (u_r = 0) occurred. For the top (z = H_max = 1) and radial (r = R_max = 1) boundaries, either Neumann (no slip) or Dirichlet (zero velocity) boundary conditions could potentially be used. Preliminary testing showed that the type of boundary condition did not influence the results because the simulated domain was large enough compared to the area of interest. For the simulations, a Dirichlet condition was used at z = H_max and r = R_max.
During the simulation of a raindrop impact, the water height h can become zero (especially for thin initial water depths). The occurrence of cells with h = 0 requires special attention, because it involves the motion of the contact line separating the water and the air along the soil surface (i.e. a triple-point occurs). In general, a specific boundary condition should be applied at the moving contact point to account for the high viscous stresses involved [START_REF] Afkhami | A mesh-dependent model for applying dynamic contact angles to vof simulations[END_REF]. In our case, an alternative approach can be used by acknowledging that a real soil surface is not exactly smooth but involves some roughness that can be crucial for the dynamics of the impact. This roughness can be taken into account by imposing a Navier slip boundary condition on the soil surface with a slip length of the order of the roughness [START_REF] Barrat | Large slip effect at a nonwetting fluid-solid interface[END_REF]. Technically, since the usual no-slip boundary condition imposed by the numerical scheme corresponds to a Navier slip condition with a slip length of the order of the mesh size, one has to simply take the no-slip boundary condition here with a mesh size similar to the surface roughness. Therefore, the numerical no-slip boundary conditions imposed for a constant level of refinement can be interpreted as a natural model for the soil roughness. In that framework, throughout the simulations, we can consider that a surface roughness equal to 65 µm was used (level of refinement n = 10).
Results and discussion
Overall dynamics
The phenomenology of a drop impact on a thin water layer is illustrated for the case h = D/10 in Figure 2, where the interface, the velocity and the pressure fields are shown together for different times. In the following, the initial time t = 0 is taken as the theoretical time of impact, defined by a sphere falling at the dimensionless velocity U'_0 = 1 onto the undeformed flat liquid layer.
At t' = 10^-3 (i.e. 1 µs after the impact initiation), the drop and the water layer are still separated by a narrow sheet of air (Figure 2a). Nevertheless, the pressure has started to increase in the water, mediated by the high lubrication pressure created in the cushioning air layer located between the drop and the liquid film. The maximum pressure (P'_max = 1.85, i.e. P_max = 78.2 kPa) is located between the drop and the water layer.
At t' = 10^-2 (i.e. 10 µs after the impact initiation), the drop and the water layer have started to merge and some air is trapped inside the water (Figure 2b) due to the air cushioning [START_REF] Thoroddsen | Air entrapment under an impacting drop[END_REF][START_REF] Korobkin | Trapping of air in impact between a body and shallow water[END_REF]. A high pressure field is created, with a maximum pressure of P'_max = 2.68 (i.e. P_max = 113.2 kPa) now located close to the wedge formed by the intersection between the drop and the liquid layer. At t' = 0.03 (i.e. 31 µs after the impact initiation), most of the water that belonged to the raindrop still moves at its terminal velocity (Figure 2c). It is only in the impact region that the velocity vectors rotate away from the vertical. In this same area, the velocities are smaller than the terminal velocity, but in the small wedge region one can see the formation of a high-speed jet created by the high pressure peak. Indeed, the maximum pressure is still located near the wedge but has started to decrease (P'_max = 1.47, i.e. P_max = 62.1 kPa). A few droplets are emitted from the wedge.
At t' = 0.08 (i.e. 82 µs after the impact initiation), a complex velocity field is formed (Figure 2d). Firstly, a jet has been emitted by the impact, leading to a splash whose specific dynamics would be fully three-dimensional and which is not at the heart of the present study. Secondly, close to the soil surface, the velocity field expands mostly radially due to the spreading of the raindrop into the water layer. Together with the no-slip boundary condition on the soil surface, this leads to a radial velocity field depending both on the radius r and on the vertical coordinate z. In fact, the no-slip boundary condition imposed at z = 0 induces the formation of a viscous boundary layer between the substrate and the radial flow created by the impact [START_REF] Roisman | Inertia dominated drop collisions. ii. an analytical solution of the navierstokes equations for a spreading viscous film[END_REF][START_REF] Eggers | Drop dynamics after impact on a solid wall: Theory and simulations[END_REF]. Hence, the soil is subjected to a significant shear stress, which is crucial for erosion processes. The pressure field is now maximum near the soil surface, directly under the impact region, but its maximum value has decreased to P'_max = 0.8 (i.e. P_max = 33.8 kPa).
This general description is in agreement with the previous publications on raindrop impacts on a water layer [START_REF] Wang | The mechanics of a drop after striking a stagnant water layer[END_REF][START_REF] Ghadiri | Raindrop impact stress and the breakdown of soil crumbs[END_REF][START_REF] Ghadiri | A study of soil splash using cine-photography[END_REF], 1986;[START_REF] Hartley | Numerical study of the maximum boundary shear-stress induced by raindrop impact[END_REF][START_REF] Hartley | Boundary shear-stress induced by raindrop impact[END_REF][START_REF] Marengo | Drop collisions with simple and complex surfaces[END_REF].
Since the erosion rate depends mostly on the shear stress applied on the soil surface, a detailed analysis of the dynamical evolution of the stress tensor during the impact is needed. In particular, the Meyer-Peter and Müller equation is often used, relating the erosion rate q s to the shear stress τ through [START_REF] Meyer-Peter | Formulas for bed-load transport[END_REF][START_REF] Houssais | Bedload transport of a bimodal sediment bed[END_REF]:
q*_s = c (τ* - τ*_c)^(3/2),
where the dimensionless erosion rate and shear stress are defined by:
q*_s = q_s / √((ρ_s/ρ_l - 1) g d^3)   and   τ* = τ / ((ρ_s - ρ_l) g d),
where d is the typical size of the grains composing the soil, ρ_s their density and c an empirical constant fitted on experimental data. In the following, we will use the numerical simulations performed for raindrop conditions to deduce scaling laws for the shear stress induced by the impact, which we will compare with simple formulas obtained using a self-similar model. Prior to the shear stress itself, we will investigate the pressure field created by the impact, where self-similar behavior has already been observed [START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF]. Here, self-similarity means that the pressure field depends only on a quantity that is time dependent. In particular, it means that the pressure field keeps the same shape with time, with only its amplitude and size varying with time.
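As an illustration of how these dimensionless groups are used, the sketch below evaluates the Shields number and the corresponding Meyer-Peter and Müller transport rate for an arbitrary grain size and bed shear stress; the grain diameter, the sediment density, the critical Shields number of 0.047 and the prefactor c = 8 are common textbook choices used here as assumptions, not values fitted in the present study.

import math

RHO_L = 1000.0   # water density (kg/m^3)
RHO_S = 2650.0   # quartz-like sediment density (kg/m^3)
G = 9.81         # gravity (m/s^2)

def shields_number(tau, d):
    """Dimensionless shear stress for a bed shear stress tau (Pa) and grain size d (m)."""
    return tau / ((RHO_S - RHO_L) * G * d)

def mpm_transport(tau, d, tau_star_c=0.047, c=8.0):
    """Volumetric bedload transport rate per unit width q_s (m^2/s)."""
    tau_star = shields_number(tau, d)
    if tau_star <= tau_star_c:
        return 0.0
    q_star = c * (tau_star - tau_star_c) ** 1.5
    return q_star * math.sqrt((RHO_S / RHO_L - 1.0) * G * d**3)

# Example: a 200-micron grain under a 5 Pa bed shear stress.
d, tau = 200e-6, 5.0
print(f"tau* = {shields_number(tau, d):.2f}, q_s = {mpm_transport(tau, d):.2e} m^2/s")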
Pressure evolution inside the water and self-similar approach
In fluid mechanics, scaling laws have been deduced from numerical simulations of the pressure evolution inside the water during the impact of a droplet on a solid surface or in the limit of thin liquid films, using a self-similar approach [START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF][START_REF] Eggers | Drop dynamics after impact on a solid wall: Theory and simulations[END_REF]. However, their validity has not been studied in the context of raindrop impacts yet, in particular when the liquid film thickness varies. The self-similar approach is based on a theory first developed by [START_REF] Wagner | über stoss und gleitvorgänge und der oberfläshe von flüssigkeiten[END_REF], which takes as the typical length scale involved in the impact the intersection between the falling spherical drop and the unperturbed liquid layer surface. In other words, the pertinent length scale of the impact r_c(t) (or r'_c(t') in dimensionless form) follows:
r_c ∝ √(D U_0 t)   or   r'_c ∝ √t',    (6)
where t (respectively t') is the time after the contact of the falling drop with the surface. This formula corresponds to the intersection of the drop (taken as a circle of radius D/2) with the water surface, the contact occurring at time t = 0. The self-similar theory takes advantage of the observation, made on the numerical simulations, that the perturbed region of the drop at short times after impact is governed by r_c [START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF]. Such a self-similar approach is possible when no specific length scale dominates the dynamics: this is precisely the situation at short times for high Reynolds and Weber numbers, where only the intersection between the falling drop and the impacted liquid layer plays a role. Indeed, Figure 3 shows the evolution with time of the radial position r_c of the maximum pressure in the water for different liquid layer thicknesses. This evolution can be separated into three stages. For t' < 2×10^-3 (i.e. for durations smaller than 2.1 µs), the evolution of r_c depends on the ratio h/D. This is also true for t' > 2×10^-2 (i.e. for durations larger than 20.5 µs), where it can also be noticed that, at the beginning of this period, r_c is of the order of one raindrop radius (i.e. r'_c = 0.15 in Figure 3). In the intermediate stage, all the values of r_c collapse onto a single straight line (in log-log scale), meaning that the relationship between the location of the maximum pressure and time is independent of the ratio h/D. Over this period, the position of the maximum pressure r'_c(t') is in good agreement with the former square-root law (6), yielding quantitatively:
r'_c = 0.65 √t'.
Remarkably, and as predicted by the theory, this law is independent of the layer thickness, in addition to having been found independent of the Reynolds and Weber numbers in previous studies (as long as these numbers are high enough) [START_REF] Thoroddsen | The ejecta sheet generated by the impact of a drop[END_REF][START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF].
As geometrically deduced, this law should not be valid for r_c > D/2. However, it is well known that the square-root law for r_c is in fact observed for much larger values; in the figure, the law is typically valid up to r_c ∼ 2D. Indeed, it has been argued that such a square-root law is also the cylindrical shock solution of the shallow-water equations, as explained in [START_REF] Yarin | Impact of drops on solid surfaces: self-similar capillary waves, and splashing as a new type of kinematic discontinuity[END_REF], so that the geometric law matches this shock solution at longer times. The limitation of this regime at short and long times can be explained by two distinct arguments. At short times, the cushioning of the air layer delays the contact between the drop and the liquid layer. At long time scales, numerical limitations can also be present: because the drop spreading has a large spatial extent, finite-size effects coming from the size of the numerical box start to affect the dynamics.
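In physical units, the fitted law r'_c = 0.65 √t' translates directly into the position of the pressure maximum as a function of time; the small conversion below (an illustrative sketch using the scales of the simulated case) gives r_c of a few tenths of a millimetre within the first tens of microseconds, reaching the order of the drop radius when the self-similar stage ends.

import math

D, D_NUM, U0 = 2.0e-3, 0.3, 6.5   # drop diameter (m), code diameter, velocity (m/s)
LAMBDA = D / D_NUM                # length scale (m)
T_SCALE = LAMBDA / U0             # time scale (s)

def r_c(t_seconds):
    """Radial position of the pressure maximum (m) during the self-similar stage."""
    t_code = t_seconds / T_SCALE
    return 0.65 * math.sqrt(t_code) * LAMBDA

for t_us in (2.1, 5.0, 10.0, 20.5):
    print(f"t = {t_us:4.1f} us -> r_c ~ {r_c(t_us * 1e-6) * 1e3:.2f} mm")
# Around 20 us, r_c becomes of the order of the drop radius (1 mm here) and the
# self-similar stage described by equation (6) ends.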
Similar regimes are observed for the maximum pressure in the water P_c, as shown in Figure 4. Firstly, the pressure varies only slightly at short times t' < 2×10^-2 (i.e. shorter than 20.5 µs) and does not depend on the h/D ratio. At this stage the contact between the raindrop and the water layer is weak and we attribute this effect to the lubrication pressure created in the gas layer. At long times, typically corresponding to r_c > D/2, the pressure drops rapidly to very small values. In between, corresponding roughly to 2×10^-2 ≤ t' ≤ 10^-1, the pressure decreases with time following power-law behaviors. However, two distinct regimes are observed depending on the ratio h/D: for small aspect ratios (typically h/D < 1/4) the maximum pressure is found to decrease like the inverse of the square root of time (Figure 4a), as usually observed for the impact on a thin liquid layer [START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF]. For thicker water layers (i.e. for h/D ≥ 1/4), another regime is observed where the maximum pressure decreases first like the inverse of time (Figure 4b), while the inverse of the square root of time seems to remain valid at larger times. The crossover between these two time dependences increases with the aspect ratio h/D.
As detailed in [START_REF] Josserand | Droplet splashing on a thin liquid film[END_REF], the thin water layer behavior can be understood using a simple momentum balance in the self-similar impact region. Indeed, it has been observed that in this regime the pressure field is perturbed only in the impacted region, defined by the characteristic length r_c(t). Therefore, one can develop a self-similar approach using this length and perform the vertical momentum balance in the self-similar volume of radius r_c(t) (the volume being that of a half-sphere of radius r_c, namely 2πr_c^3/3), yielding:
d(2ρ_l π r_c(t)^3 U_0 / 3)/dt ∼ π r_c(t)^2 P_c(t),
where P c (t) is the typical amplitude of the pressure field created by the impact. This equation balances the variation of vertical momentum in the self-similar half-sphere of radius r c (t) with the force of the pressure on the liquid layer, giving thus:
P_c(t) ∝ ρ_l U_0 dr_c(t)/dt ∼ ρ_l U_0^2 √(D/(U_0 t)).
This regime is in good agreement with the observed maximum pressure evolution for thin liquid films (Figure 4a). Moreover, this regime starts to fail at longer times (typically corresponding to r_c(t) ∼ D/2) and the dynamics can then be described by the shallow-water equations [START_REF] Yarin | Impact of drops on solid surfaces: self-similar capillary waves, and splashing as a new type of kinematic discontinuity[END_REF][START_REF] Lagubeau | Flower patterns in drop impact on thin liquid films[END_REF].
For thick water layers, the former vertical momentum balance does not work since the liquid layer dynamics have to be taken into account. However, inspired by the former balance, one can deduce a simple model: considering that the radial characteristic length is still r c but that the vertical one is now h, we obtain
d(ρ_l π r_c(t)^2 h U_0)/dt ∼ π r_c(t)^2 P_c(t),
which gives the observed scaling for the pressure:
P_c(t) ∼ ρ_l U_0 (h/r_c(t)) dr_c(t)/dt ∼ ρ_l U_0^2 h/(U_0 t).
The former thin layer regime is retrieved at larger time in this configuration and one can argue that it comes from the fact that the liquid contained in the layer has been pushed away by the impact so that only a thin residual liquid layer remains beneath the drop.
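The two regimes can also be compared numerically; the sketch below evaluates the time at which the thick-layer law ρ_l U_0^2 h/(U_0 t) meets the thin-layer law ρ_l U_0^2 √(D/(U_0 t)), i.e. roughly U_0 t = h^2/D (the proportionality constants are set to one, an assumption made only for this comparison). The crossover grows like (h/D)^2, consistent with the observation that it increases with the aspect ratio.

import math

RHO_L, U0, D = 1.0e3, 6.5, 2.0e-3

def p_thin(t):
    """Thin-layer scaling, P ~ rho_l U0^2 sqrt(D / (U0 t)), i.e. a t^(-1/2) decay."""
    return RHO_L * U0**2 * math.sqrt(D / (U0 * t))

def p_thick(t, h):
    """Thick-layer scaling, P ~ rho_l U0^2 h / (U0 t), i.e. a t^(-1) decay."""
    return RHO_L * U0**2 * h / (U0 * t)

def crossover_time(h):
    """Both scalings coincide when U0 t = h^2 / D."""
    return h**2 / (D * U0)

for h_over_D in (0.25, 0.5, 1.0, 2.0):
    t_c = crossover_time(h_over_D * D)
    print(f"h/D = {h_over_D:4.2f} -> crossover time ~ {t_c * 1e6:7.1f} us")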
Therefore, we have exhibited that the pressure field due to the impact follows self-similar laws involving the spreading radius r c (t). However, as explained above, the crucial quantity for the erosion process is not the maximum pressure in the liquid but rather the shear stress at the soil surface. The self-similar approach can be used a priori to compute the soil surface quantities, but one has to notice that the soil surface does not coincide with the self-similar geometry (which involves the liquid layer interface rather than the solid surface). In conclusion it leads to the difficult challenge of determining the shear stress at the intersection between the self-similar geometry and the soil surface. Moreover, the shear stress is also a consequence of the boundary layer created by the large scale flow and the no slip boundary condition imposed at the surface [START_REF] Roisman | Inertia dominated drop collisions. ii. an analytical solution of the navierstokes equations for a spreading viscous film[END_REF][START_REF] Eggers | Drop dynamics after impact on a solid wall: Theory and simulations[END_REF][START_REF] Lagubeau | Flower patterns in drop impact on thin liquid films[END_REF], which makes its prediction even more difficult.
Scaling laws for stresses onto the soil surface
We thus now investigate the pressure and shear stress fields onto the soil surface, quantities of interest for understanding and modeling erosion processes, keeping in mind the underlying self-similar structure of the impact.
Dependence of the maximum shear stress on the water depth
First of all, let us mention that the dependence of the shear stress on the water depth has already been studied using numerical simulations by [START_REF Hartley | Numerical study of the maximum boundary shear-stress induced by raindrop impact[END_REF] and [START_REF Hartley | Boundary shear-stress induced by raindrop impact[END_REF], leading to a fitted algebraic relationship for the maximum (over time and space) shear stress at the soil surface, as a function of the Reynolds number and the water depth:
\tau_{max} = 2.85\, \rho_l U_0^2 \left(\frac{h}{D/2} + 1\right)^{-3.16} Re^{-0.55}\, C_1. \qquad (7)
The prefactor C_1 is almost constant, varying only slightly with the impact parameters between 0.91 and 1 and deviating from one only for slow drops and thick water layers. However, this relationship was based on simulations that included only Reynolds numbers within the range 50-500 and Weber numbers in the range 18-1152 [START_REF Hartley | Numerical study of the maximum boundary shear-stress induced by raindrop impact[END_REF][START_REF Hartley | Boundary shear-stress induced by raindrop impact[END_REF], values much lower than the range of natural raindrops (6500 ≲ Re ≲ 23000, 50 ≲ We ≲ 12150). Hence, their simulations underestimated the inertia forces compared to both viscous and capillary forces.
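The fitted relation (7) is easy to evaluate; the sketch below (added here as an illustration) implements it and applies it at a Reynolds number typical of natural raindrops, i.e. outside the range for which it was fitted. The density and viscosity values are assumptions.

```python
import numpy as np

def tau_max_hartley(rho_l, U0, h_over_R, Re, C1=1.0):
    """Fitted relation (7): tau_max = 2.85 rho_l U0^2 (h/(D/2) + 1)^-3.16 Re^-0.55 C1.
    h_over_R is the water depth divided by the drop radius D/2; C1 is between 0.91 and 1."""
    return 2.85 * rho_l * U0**2 * (h_over_R + 1.0)**(-3.16) * Re**(-0.55) * C1

rho_l, mu_l = 1000.0, 1e-3          # assumed water density [kg/m^3] and viscosity [Pa s]
D, U0 = 2e-3, 6.5                   # drop diameter [m] and terminal velocity [m/s]
Re = rho_l * U0 * D / mu_l          # ~ 1.3e4, inside the natural-raindrop range
for h_over_R in [0.0, 0.2, 1.0, 2.0]:
    tau = tau_max_hartley(rho_l, U0, h_over_R, Re)
    print(f"h/(D/2) = {h_over_R:3.1f}   tau_max ~ {tau:7.1f} Pa")
```

Using (7) at such Re values is an extrapolation; this is precisely the limitation that motivates the present simulations.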
In the present study, we have performed numerical simulations for realistic Re and We numbers for raindrops, varying only the water depth. Figure 5 shows the maximum shear stress τ max as a function of the liquid depth plus the drop radius normalized by the drop radius, which can be written as: 1 + 2h/D. It is well fitted by the following relationship:
\tau_{max} \propto \left(\frac{h}{D/2} + 1\right)^{-2.6} \qquad (8)
The maximum shear stress is observed around the time (D/2 + h)/U_0 that would correspond to the penetration of half of the unperturbed drop through the whole liquid layer. This relationship was fitted by varying only the ratio h/D, so that the physical prefactor involves ρ_l U_0^2 multiplied by some function of the dimensionless numbers (in particular the Reynolds number). As far as the aspect ratio h/D is concerned, we observe that our fitted law differs slightly from the Hartley law (7), the exponent of the power law being smaller in magnitude. However, given the range of 1 + 2h/D studied here, one should remark that the quantitative differences between our results and those of Hartley are not very large. In order to suggest an explanation for this dependence, a detailed study of the evolution of the stress tensor at the soil surface is first needed.
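A two-line numerical comparison (added here, not from the paper) makes the last remark explicit: over the range of water depths studied, the two exponents give power laws that differ by less than a factor of two.

```python
import numpy as np

# Ratio of the present fit (8), exponent -2.6, to Hartley's exponent -3.16 in (7),
# over x = 1 + 2h/D for h/D between 0 and 1.
for x in np.linspace(1.0, 3.0, 5):
    print(f"1+2h/D = {x:3.1f}   (x**-2.6)/(x**-3.16) = {x**0.56:.2f}")
```

At x = 3 the ratio is only about 1.85, consistent with the statement that the quantitative differences are not very large.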
Self-similar evolution of the stress tensor on the soil surface
Pressure
The pressure field on the soil surface shows a bell-shaped curve with a maximum on the symmetry axis (r = 0) at all times, the amplitude of this curve decreasing with time while its width increases. We thus define the characteristic length for the pressure field on the soil surface as the radius r_{P1/2}(t) where the pressure is half the pressure on the axis. The width r_{P1/2}(t') is shown in Figure 6 as a function of time for different aspect ratios h/D. For short times after impact, the evolution of r_{P1/2}(t) can be fitted by a power law, the exponent m decreasing with the aspect ratio. For the thinnest simulated water layers, r_{P1/2}(t) evolves as the square root of t (m ∼ 0.5), which is consistent with the law obtained for thin films. The exponent m of the power law decreases when the water layer thickness increases (see the inset of Figure 6). Thus the water layer can be seen as a shield protecting the soil surface against the disturbance caused by raindrop impacts, and it is only for the thinnest water layers, i.e. when the shielding is the lowest, that the disturbance (here the pressure) is similar inside the water layer and at the soil surface.
For deeper water layers, the shielding is more efficient, leading to a disconnection between the behavior of the pressure inside the liquid and the behavior of the pressure at the soil surface. This disconnection becomes quite significant for a water layer equal to the radius.
However, a self-similar structure of the pressure field on the soil surface can also be exhibited for the different aspect ratios h/D. Indeed, rescaling the pressure on the soil surface P(r, z = 0, t) at different times by the maximum value P(0, 0, t), and the coordinate r by r_{P1/2}(t), we observe a good collapse of the different pressure curves onto a single one for t < 10^{-1} (i.e. smaller than 102.6 µs) (Figure 7). However, these self-similar curves vary with the aspect ratio h/D (Figure 8), the width of the self-similar curve increasing with h/D.
Shear stress
The shear stress at the soil surface is computed numerically from the shear rate, yielding:
\tau = \mu_l \frac{d u_r(r, z = 0)}{dz},
and it exhibits a ring shape whose radius r_{τmax}(t'), corresponding to the location of the maximum shear stress, increases with time. For thin liquid layers, r_{τmax}(t) again evolves approximately as the square root of t (Figure 9a), while the situation is more complex for h/D ≳ 1, where no tendency could be extrapolated (Figure 9b). We rescaled the dimensionless shear stress τ(r, z = 0, t) by its maximum value τ_max(t) and plotted it as a function of the rescaled radius r/r_{τmax}(t) (where r_{τmax}(t) is the position of the maximum value of τ at time t) for small times t < 10^{-1} (i.e. smaller than 102.6 µs) (Figure 10). The collapse of the profiles is reasonable for small (< 1/3) and large (> 1) aspect ratios h/D, but the situation is much more complex for intermediate cases, where only a partial collapse is found. The shear stress has a quasi-linear profile for small radius (r < r_{τmax}(t)) and then relaxes to zero for large r. Such behavior for small radius can be understood within the dynamics of thin films [START_REF Yarin | Impact of drops on solid surfaces: self-similar capillary waves, and splashing as a new type of kinematic discontinuity[END_REF][START_REF Lagubeau | Flower patterns in drop impact on thin liquid films[END_REF]. This regime is valid after the self-similar regime of the impact, for which the pressure is high. Then, assuming a small gradient of the interface, the dynamics follows the so-called thin-film equations, for which the radial velocity yields:
u = \frac{r}{t},
for r < r_{τmax}(t). This radial velocity is not consistent with the no-slip boundary condition on the solid boundary z = 0, so that a viscous boundary layer of thickness l_v = \sqrt{\mu t/\rho} grows from the solid [START_REF Roisman | Inertia dominated drop collisions. ii. an analytical solution of the navierstokes equations for a spreading viscous film[END_REF][START_REF Eggers | Drop dynamics after impact on a solid wall: Theory and simulations[END_REF]. Therefore, one obtains for the shear stress on the solid (in dimensionless form):
\tau(r, z = 0, t) \propto \frac{r}{t^{3/2}\, Re^{1/2}},
which is consistent with the linear behavior at small r. On the other hand, this regime can also explain the dependence of the maximum shear stress. Indeed, assuming that, at the time of maximum shear stress t = 1 + 2h/D, the radial momentum of the thin film is equal to the vertical momentum of the impacting drop, one obtains approximately:
D^3 U_0 \sim h_c D^2 U_0 \frac{(r_{\tau max})^3}{t_{\tau max}},
where h c is the film height in the impacted zone, a priori different than the unperturbed film height h. Then, we obtain, taking the time of maximum shear stress at
t_{\tau max} \propto 1 + 2h/D:
\frac{r_{\tau max}}{1 + 2h/D} \sim \frac{1}{(r_{\tau max})^2\, h_c/D}.
Then, using the observed relation r_{\tau max} \propto \sqrt{t_{\tau max}}, we have
\tau_{max} \propto \frac{1}{(1 + 2h/D)^{3/2}\, (h_c/D)}\, Re^{-1/2}.
Assuming the same scaling relation for the film height, h_c/D \propto 1 + 2h/D, we obtain
\tau_{max} \propto \frac{1}{(1 + 2h/D)^{5/2}}\, Re^{-1/2},
which is in good agreement with the numerical results. Note that the exponent 5/2 is obtained from assumptions that are quite speculative and would need further studies to be validated. However, it is very close to the fitted exponent 2.6 of relation (8), shedding light on the underlying mechanism for the film thickness that is at play in the shear stress formula. In particular, this exponent combines the contribution of the thin-film velocity field with that of the viscous boundary layer. The Re^{-1/2} factor is a direct consequence of the boundary layer structure and thus has better scientific grounds, although it has not been tested in our numerics. Notice that it is in good agreement with the previously observed behavior [START_REF Hartley | Numerical study of the maximum boundary shear-stress induced by raindrop impact[END_REF][START_REF Hartley | Boundary shear-stress induced by raindrop impact[END_REF]. We would also like to emphasize that such an analytical formula is important, since it could be implemented in macroscopic models coupling raindrops and erosion.
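The scaling chain above is simple enough to be tabulated directly; the sketch below (an illustration added here, not from the paper) evaluates the analytic prediction (1 + 2h/D)^{-5/2} next to the fitted exponent -2.6 of relation (8).

```python
# Compare the analytic exponent -5/2 obtained from the momentum argument with the
# exponent -2.6 fitted from the simulations, as a function of x = 1 + 2h/D.
for x in [1.0, 1.5, 2.0, 3.0]:
    analytic = x**(-2.5)   # tau_max ~ (1+2h/D)^(-5/2) Re^(-1/2), Re-factor omitted
    fitted   = x**(-2.6)   # exponent of relation (8)
    print(f"1+2h/D = {x:3.1f}   x**-5/2 = {analytic:.3f}   x**-2.6 = {fitted:.3f}")
```

The two columns stay within a few percent of each other over the simulated range, which is why the speculative derivation is considered a plausible explanation of the fit.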
Conclusions
Using present-day numerical methods to solve the Navier-Stokes equations for liquid-gas dynamics, we have studied raindrop impacts on water layers for realistic configurations. Quantities of interest for soil erosion, such as the pressure and shear stress at the soil surface, have therefore been accurately computed, paving the way for a quantitative understanding of soil erosion driven by rain.
The simulations confirm that the maximum shear stress at the soil surface depends in particular on the ratio between the water depth and the drop size. The variation of the pressure inside the water layer during the raindrop impact is well explained by a self-similar approach where the self-similar length is the spreading radius. The position of this radius corresponding to the maximum pressure moves as the square-root of time after impact. Such a relationship comes from very general geometrical arguments and it was in fact previously observed numerically and experimentally for a wider range of drop impacts (especially with drop velocities different from the terminal velocity). Importantly, the present study shows that this relationship is independent of the ratio h/D.
At the soil surface, the maximum pressure is located at the center of the impact. Considering the radius at which the pressure is half of this maximum, it was found to move radially as the square root of the time after impact only for thin water layers, h/D < 1/5. For low h/D ratios, the location of the maximum pressure inside the water and that of the half pressure at the soil surface follow the same law because the shielding caused by the water layer is minimal. The shielding becomes significant for larger h/D ratios, especially for h/D ≳ 1, leading to a disconnection between the pressure behaviors inside the water and at the soil surface. Nevertheless, for all h/D ratios, a self-similarity was found for the pressure rescaled by its central value P(r, z = 0, t) as a function of the radius rescaled by the half-pressure radius. The existence of this self-similarity shows that the dynamics of the pressure at the soil surface is quite similar for different h/D ratios (even though the rescaling depends on h/D).
The shear stress at the soil surface was also rescaled, but the self-similarity was not as consistent as the one for the pressure. This indicates that the dynamics of the shear stress is more complex, and that additional variables may have to be taken into account. In particular, one would need in further study to elucidate the interplay between the growth of the viscous boundary layer and the spreading dynamics.
By clarifying the dynamics of the raindrop impact on a water layer, these results could foster experimental and numerical studies of soil erosion by raindrops. By identifying the variables of interest, it will simplify the design of these studies. More precisely, the equations for the maximum shear stress could be implemented in macroscopic model of erosion to estimate the quantities of materials eroded. New insight could also come from theoretical developments carried out in fluid mechanics, such as the influence of the air cushioning prior to the impact on the interface deformation [START_REF] Xu | Drop splashing on a dry smooth surface[END_REF] or the changes in the flow due to the granular structure of the soil.
Indeed, the biggest drawback of current numerical modeling is probably the hypothesis of a rigid soil surface, which comes from the unavailability of a suitable deformation law. To be realistic, such a law should account for the aggregated status of soils. Finding such a law remains a challenge for soil physicists.
Figure 1: Schematic configuration of a raindrop (diameter D and terminal velocity U) impacting a water layer of depth h.
Figure 2: Pressure and velocity fields in the water (colors and arrows respectively) as a function of time for a film layer of h = D/10. The drop diameter is 2 mm with a terminal velocity of 6.5 m s^-1.
Figure 3: Position of the maximal dimensionless pressure as a function of time after impact for different water depth/drop diameter ratios. Note: the exponent was not fitted.
Figure 4: Maximum dimensionless pressure as a function of time for (a) small and (b) large water depth to drop diameter ratios. Note: the exponents were not fitted.
Figure 5: Maximum physical shear stress at the soil surface as a function of (1 + 2h/D).
Figure 6: Dimensionless radial position r_{P1/2}(t') of half the maximum dimensionless pressure (located at r = 0) at the soil surface for different ratios h/D shown in the inset. The fitted power law is traced on the figure for each value of h/D.
Figure 8: Comparison of the rescaled pressure for different aspect ratios h/D.
Figure 9:
Figure 10: Rescaled dimensionless shear stress for different aspect ratios h/D for t < 10^-1.
Figure 7: Rescaled pressure on the substrate for different aspect ratios h/D (indicated on each panel) for t < 10^-1. The panels plot P'(r, z = 0, t)/P'(0, 0, t) as a function of r'/r'_{P1/2}(t) for h/D = 1/10, 1/5, 1/3, 1/2, 1 and 2.
Acknowledgments
It is our pleasure to thank Leon Malan for his help. C.J. wants to acknowledge the Agence Nationale de la Recherche through the grant ANR "TRAM" 13-BS09-0011-01.
| 54,499 | [
"739236",
"462",
"12008",
"6072"
] | [
"14556",
"98",
"14556",
"37895",
"98",
"33993",
"98"
] |
01466979 | en | [
"phys"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01466979/file/existenceV3.pdf | Victor A Eremeyev
email: [email protected]
Francesco Dell'isola
email: [email protected]
Claude Boutin
email: [email protected]
David Steigmann
email: [email protected]
Linear pantographic sheets: existence and uniqueness of weak solutions
Keywords: strain gradient elasticity, linear pantographic sheets, existence, uniqueness, anisotropic Sobolev's space
the equilibrium of pantographic lattices is studied via a homogenised second gradient deformation energy and the predictions obtained with such a model are successfully compared with experiments. This energy is not strongly elliptic in its dependence on second gradients. This circumstance motivates the present paper, where we address the well-posedness for the equilibrium problem for V.
Introduction
Mechanical scientists have recently been attracted to the formulation of design and construction criteria for new materials whose behaviour is established a priori. One can say that the aim of this stream of research is to produce Materials on Demand. More precisely: once the peculiar behaviour of a material which is desirable for optimising its use in a given application has been fixed, the aim of the aforementioned research is to find a way to construct such a material. Materials designed in order to obtain a specific behaviour are often called metamaterials.
The role of mathematical sciences in the design and constructions of metamaterials recently increased for two reasons: i) the development of the technology of 3D printing allowed for the transformation of mathematically conceived structures, geometries and material properties into the reality of precisely built specimens; ii) the way in which one specifies the set of properties to be realised is specifically mathematical, as it consists in choosing the equations one assumes are governing the mechanical response of the conceived metamaterial.
Once more we can say that mathematics is shaping our world, as it is allowing us to design new technological solutions and tools. The present paper deals with a mathematical problem arising in a specific context involving the design of second gradient metamaterials. More precisely: in order to find a class of materials whose deformation energy depends on both first and second gradient of placement field in [START_REF] Dell'isola | Large deformations of planar extensible beams and pantographic lattices: Heuristic homogenisation, experimental and numerical examples of equilibrium[END_REF] a microstructured (pantographic) fabric is introduced and its homogenised continuum model (which we call pantographic sheet) is determined. Various aspects of modelling of pantographic lattices are considered in [START_REF] Battista | Frequency shifts induced by large deformations in planar pantographic continua[END_REF][START_REF] Placidi | A review on 2D models for the description of pantographic fabrics[END_REF][START_REF] Placidi | A second gradient formulation for a 2D fabric sheet with inextensible fibres[END_REF][START_REF] Scerrato | Three-dimensional instabilities of pantographic sheets with parabolic lattices: numerical investigations[END_REF][START_REF] Turco | Non-standard coupled extensional and bending bias tests for planar pantographic lattices. part i: numerical simulations[END_REF][START_REF] Turco | Non-standard coupled extensional and bending bias tests for planar pantographic lattices. Part II: comparison with experimental evidence[END_REF][START_REF] Turco | Fiber rupture in sheared planar pantographic sheets: Numerical and experimental evidence[END_REF][START_REF] Turco | Large deformations induced in planar pantographic sheets by loads applied on fibers: experimental validation of a discrete Lagrangian model[END_REF][START_REF] Turco | Pantographic structures presenting statistically distributed defects: numerical investigations of the effects on deformation fields[END_REF] where discrete and homogenized models are considered. Let us note that one of the sources of generalized continua and models of metamaterials is the homogenization of heterogeneous materials, see e.g. [START_REF] Del Vescovo | Dynamic problems for metamaterials: review of existing models and ideas for further research[END_REF][START_REF] Placidi | Euromech 563 Cisterna di Latina 1721 March 2014 Generalized continua and their applications to the design of composites and metamaterials: A review of presentations and discussions[END_REF][START_REF] Reda | Wave propagation in 3D viscoelastic auxetic and textile materials by homogenized continuum micropolar models[END_REF][START_REF] Trinh | Evaluation of generalized continuum substitution models for heterogeneous materials[END_REF] and reference therein. Homogenization may lead to strain gradient models [START_REF] Challamel | Higher-order gradient elasticity models applied to geometrically nonlinear discrete systems[END_REF][START_REF] Cordero | Second strain gradient elasticity of nano-objects[END_REF]. 
While the ideas underlying the definition of pantographic microstructures have been exploited up to now only in the context of purely mechanical phenomena, it is expected that when introducing multiphysics effects (as the piezoelectric coupling phenomena exploited as explained in [START_REF] D'annibale | Linear stability of piezoelectric-controlled discrete mechanical systems under nonconservative positional forces[END_REF][START_REF] D'annibale | On the failure of the 'Similar Piezoelectric Control' in preventing loss of stability by nonconservative positional forces[END_REF][START_REF] Dell'isola | Piezo-ElectroMechanical (PEM) structures: passive vibration control using distributed piezoelectric transducers[END_REF][START_REF] Giorgio | Piezo-electromechanical smart materials with distributed arrays of piezoelectric transducers: current and upcoming applications[END_REF][START_REF] Pagnini | The three-hinged arch as an example of piezomechanic passive controlled structure[END_REF] including surface-related phenomena [START_REF] Eremeev | Natural vibrations of nanodimensional piezoelectric bodies with contact-type boundary conditions[END_REF][START_REF] Nasedkin | Harmonic vibrations of nanosized piezoelectric bodies with surface effects[END_REF]) the designed meta materials could have even more interesting features. We expect a fortiori that the mathematical tools used in the present context will be of use also in the envisioned more general context.
The linearised equilibrium equations valid in the neighbourhood of a stress-free configuration for such pantographic sheets cannot be immediately studied by using the results available in the literature. However, the standard strategy involving the use of the Poincaré inequality, the Lax-Milgram theorem and the coercivity of the bilinear deformation energy form does apply also in the present more general context.
What has to be modified is the Energy space where the solutions, relative to suitably well-posed boundary conditions, are looked for, and the Sobolev space which includes this Energy space.
Indeed the concept of Anisotropic Sobolev Space, whose definition was conceived on purely logical grounds by Sergei M. Nikol'skii, see [START_REF] Nikol'skii | On imbedding, continuation and approximation theorems for differentiable functions of several variables[END_REF], has to be used in order to apply the abstract Hilbertian setting of solution strategy.
We expect that further developments will lead us to study the complete nonlinear problem of deformation of pantographic sheets.
Postulated deformation energy for "long-fibers" pantographic sheets
Pantographic sheets are bidimensional continua whose microstructure is constituted by a lattice of extensible and continuous fibers having bending stiffness and interconnected by pivots (i.e. pin joints). It has to be explicitly remarked that, in general, we are not considering trusses. A truss, by definition, is assumed to comprise a set of independent beams that are connected by means of pin joints connecting only ending points of the beams. This means that if the truss is loaded only with concentrated forces applied to pin joints then each beam (or fiber) can only be either in compression or in extension. We call lattices of beams the most general beams structure involving pin joints (but also possibly clamping devices, or rollers or glyphs).
Roughly speaking, pantographic sheets can be characterized as those lattices of fibers whose microstructure, once pivots are assumed to be ideal and no external constraints are applied, allows for the existence of some homogeneous deformations which do not store deformation energy. These deformations are sometimes called "floppy-modes". In Fig. 3 such a structure is schematically described, while in Fig. 2 a picture of a 3D printed specimen in polyamide is shown.
The main feature of the considered pantographic structure is the presence of "long" continuous fibers constituting two arrays: at each intersection point of one fiber with a fiber of the other array there is a pin joint which does not interrupt the mechanical and geometrical continuity of the two interconnecting fibers.
We assume that in the reference configuration the two arrays of fibers are initially orthogonal and we denote D α , α = 1, 2, the unit vectors of their current directions.
As a macro model of the system described before we consider a continuum whose reference configuration is given by a (suitably regular) domain ω ⊂ R 2 . By assuming planar motion, the actual configuration of ω is described by the planar macro-placement
χ : ω → R 2 (1)
whose gradient ∇χ will be denoted by F.
For considered "long fibers" pantographic sheets a possible expression for deformation energy is given by (the two possible methods for getting this expression are described in [START_REF] Dell'isola | Large deformations of planar extensible beams and pantographic lattices: Heuristic homogenisation, experimental and numerical examples of equilibrium[END_REF] or in [START_REF] Boutin | Linear pantographic sheets. Part I: Asymptotic micro-macro models identification[END_REF]):
U(\chi(\cdot)) = \int_\omega \sum_\alpha \frac{K^\alpha_e}{2}\left(\|F D_\alpha\| - 1\right)^2 d\omega + \int_\omega \sum_\alpha \frac{K^\alpha_b}{2}\left[\frac{\nabla F : D_\alpha \otimes D_\alpha \cdot \nabla F : D_\alpha \otimes D_\alpha}{\|F D_\alpha\|^2} - \left(\frac{F D_\alpha}{\|F D_\alpha\|} \cdot \frac{\nabla F : D_\alpha \otimes D_\alpha}{\|F D_\alpha\|}\right)^2\right] d\omega + \int_\omega \frac{K_p}{2}\left[\arccos\left(\frac{F D_1}{\|F D_1\|} \cdot \frac{F D_2}{\|F D_2\|}\right) - \frac{\pi}{2}\right]^2 d\omega \qquad (2)
which accounts for the stretching (first integral) and bending (second integral) deformations of the fibers, as well as for the resistance to shear distortion (third integral) related to the variation of the angle between the fibers. Twisting deformations of the fibers are not considered here. The coefficients K^α_e > 0 and K^α_b > 0 are related respectively to the extensional and bending stiffnesses of the fibers at the inter-pivot scale, while the coefficient K_p ≥ 0 models, at the macro level, the shear stiffness of the pantographic sheet and is related to the interaction between the two arrays of fibers via their interconnecting pivots: when these pivots are perfect this interaction vanishes and K_p vanishes. Here ‖·‖ is the Euclidean norm in R^2 and : is the double dot product.
In this paper we will start considering small deformations of the sheet in the neighborhood of the reference configuration. Therefore we calculate the second order Taylor expansion for the energy U (χ (•)) in terms of the small parameter η controlling the amplitude of the displacement u starting from the reference configuration. In formulas
χ (X) = X + ηu(X), X ∈ ω. (3)
By introducing the notations H := ∇u , H =: E+W , where E is symmetric and W is skew-symmetric we get formally (where I denotes the identity tensor)
F = I + ηH, ∇F = η∇H.
As a consequence
F D_\alpha = D_\alpha + \eta H D_\alpha, \qquad F D_\alpha \cdot F D_\alpha = 1 + 2\eta H D_\alpha \cdot D_\alpha + \eta^2 H D_\alpha \cdot H D_\alpha,
\|F D_\alpha\| = \sqrt{1 + 2\eta E D_\alpha \cdot D_\alpha + \eta^2 H D_\alpha \cdot H D_\alpha} \simeq 1 + \eta H D_\alpha \cdot D_\alpha,
\frac{1}{\|F D_\alpha\|} \simeq 1 - \eta H D_\alpha \cdot D_\alpha, \qquad \frac{1}{\|F D_\alpha\|^2} \simeq 1 - 2\eta H D_\alpha \cdot D_\alpha,
\frac{K^\alpha_e}{2}\left(\|F D_\alpha\| - 1\right)^2 \simeq \frac{K^\alpha_e}{2}\left(\eta H D_\alpha \cdot D_\alpha\right)^2,
\frac{F D_1}{\|F D_1\|} \cdot \frac{F D_2}{\|F D_2\|} \simeq \eta D_1 \cdot H D_2 + \eta H D_1 \cdot D_2,
\frac{K_p}{2}\left[\arccos\left(\frac{F D_1}{\|F D_1\|} \cdot \frac{F D_2}{\|F D_2\|}\right) - \frac{\pi}{2}\right]^2 \simeq \frac{K_p}{2}\left(\eta D_1 \cdot H D_2 + \eta H D_1 \cdot D_2\right)^2,
\frac{\nabla F : D_\alpha \otimes D_\alpha \cdot \nabla F : D_\alpha \otimes D_\alpha}{\|F D_\alpha\|^2} \simeq \eta^2\left(\nabla H : D_\alpha \otimes D_\alpha \cdot \nabla H : D_\alpha \otimes D_\alpha\right),
\frac{F D_\alpha}{\|F D_\alpha\|} \cdot \frac{\nabla F : D_\alpha \otimes D_\alpha}{\|F D_\alpha\|} \simeq \left(1 - \eta H D_\alpha \cdot D_\alpha\right)^2\left(D_\alpha + \eta H D_\alpha\right) \cdot \eta \nabla H : D_\alpha \otimes D_\alpha \simeq D_\alpha \cdot \eta \nabla H : D_\alpha \otimes D_\alpha.
Since the D_α are unit orthogonal vectors, we may replace them by the standard basis vectors i_α, D_α = i_α. As a consequence, if the displacement u is represented in the basis D_α by means of its components
u = u_1 i_1 + u_2 i_2, \qquad u_\alpha = u_\alpha(x_1, x_2), \qquad X = x_1 i_1 + x_2 i_2,
then the second order Taylor expansion of the energy (2) is given by:
U(u(\cdot)) = \int_\omega \left[\frac{K^1_e}{2} u_{1,1}^2 + \frac{K^2_e}{2} u_{2,2}^2 + \frac{K_p}{2}\left(u_{1,2} + u_{2,1}\right)^2 + \frac{K^1_b}{2} u_{1,22}^2 + \frac{K^2_b}{2} u_{2,11}^2\right] d\omega, \qquad (4)
see also [START_REF Boutin | Linear pantographic sheets. Part I: Asymptotic micro-macro models identification[END_REF] for a direct derivation of this energy by a rigorous homogenization procedure. Here indices after a comma denote derivatives, so f_{,α} is the partial derivative of f with respect to x_α, f_{,α} = ∂_α f ≡ ∂f/∂x_α. A mathematically interesting case is represented by pantographic structures whose shear stiffness vanishes: this singular limit case will be addressed in the following sections.
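As a quick cross-check of the quadratic expansion (not part of the original derivation), the sympy sketch below expands the stretching and shear measures of energy (2) for F = I + ηH with D_1 = i_1, D_2 = i_2; squaring the printed leading terms reproduces the u_{1,1}^2 and (u_{1,2} + u_{2,1})^2 contributions of (4). The symbol names H11, H12, H21, H22 are illustrative placeholders for the entries of H = ∇u.

```python
import sympy as sp

eta = sp.symbols('eta', positive=True)
H11, H12, H21, H22 = sp.symbols('H11 H12 H21 H22', real=True)
H = sp.Matrix([[H11, H12], [H21, H22]])
F = sp.eye(2) + eta * H                       # F = I + eta H
D1, D2 = sp.Matrix([1, 0]), sp.Matrix([0, 1])  # fiber directions aligned with the axes

# stretching measure of the first fiber family: ||F D1|| - 1
stretch1 = sp.sqrt((F * D1).dot(F * D1)) - 1
# argument of arccos in the shear term: (F D1/||F D1||) . (F D2/||F D2||)
cos_angle = (F * D1).dot(F * D2) / (sp.sqrt((F * D1).dot(F * D1)) * sp.sqrt((F * D2).dot(F * D2)))

# leading terms in eta: H11*eta for the stretch, (H12 + H21)*eta for the shear argument
print(sp.series(stretch1, eta, 0, 2))
print(sp.series(cos_angle, eta, 0, 2))
```

Since arccos(x) - π/2 ≈ -x for small x, squaring these leading terms gives exactly the quadratic stretching and shear terms appearing in (4).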
Energy for pantographic sheets and equilibrium conditions
Let us consider the deformation energy relative to pantographic structures having vanishing shear stiffness. The deformation energy becomes
U(u(\cdot)) = \int_\omega W \, d\omega, \qquad (5)
where the strain energy density W is given by
W = \frac{K^1_e}{2} u_{1,1}^2 + \frac{K^2_e}{2} u_{2,2}^2 + \frac{K^1_b}{2} u_{1,22}^2 + \frac{K^2_b}{2} u_{2,11}^2.
Fig. 3 Deformation of a pantographic sheet.
For derivation of the equilibrium conditions we consider the first variation of U . First we obtain
\delta U = \int_\omega \left(K^1_e u_{1,1}\, \delta u_{1,1} + K^2_e u_{2,2}\, \delta u_{2,2} + K^1_b u_{1,22}\, \delta u_{1,22} + K^2_b u_{2,11}\, \delta u_{2,11}\right) d\omega.
Then, integrating by parts we transform δU into
\delta U = \int_\omega \left(-K^1_e u_{1,11} + K^1_b u_{1,2222}\right)\delta u_1\, d\omega + \int_\omega \left(-K^2_e u_{2,22} + K^2_b u_{2,1111}\right)\delta u_2\, d\omega + \int_{\partial\omega} \left[\left(n_1 K^1_e u_{1,1} - n_2 K^1_b u_{1,222}\right)\delta u_1 + n_2 K^1_b u_{1,22}\, \delta u_{1,2}\right] ds + \int_{\partial\omega} \left[\left(n_2 K^2_e u_{2,2} - n_1 K^2_b u_{2,111}\right)\delta u_2 + n_1 K^2_b u_{2,11}\, \delta u_{2,1}\right] ds. \qquad (6)
Here
n α = i α • n, t α = i α • t,
δu 1,2 = i 2 • ∇δu 1 , δu 2,1 = i 1 • ∇δu 2
with ∇ defined at the boundary through normal and tangent derivatives
\nabla = n \frac{\partial}{\partial n} + t \frac{\partial}{\partial s},
where ∂/∂s and ∂/∂n are derivatives with respect to arc length s and normal coordinate, respectively, we obtain that
\delta u_{1,2} = n_2 \frac{\partial \delta u_1}{\partial n} + t_2 \frac{\partial \delta u_1}{\partial s}, \qquad \delta u_{2,1} = n_1 \frac{\partial \delta u_2}{\partial n} + t_1 \frac{\partial \delta u_2}{\partial s}.
Substituting the latter formulae into (6) and integrating again by parts with respect to s, we obtain that
\delta U = \int_\omega \left[\left(-K^1_e u_{1,11} + K^1_b u_{1,2222}\right)\delta u_1 + \left(-K^2_e u_{2,22} + K^2_b u_{2,1111}\right)\delta u_2\right] d\omega + \int_{\partial\omega} \left[n_1 K^1_e u_{1,1} - n_2 K^1_b u_{1,222} - \frac{\partial}{\partial s}\left(n_2 t_2 K^1_b u_{1,22}\right)\right]\delta u_1\, ds + \int_{\partial\omega} \left[n_2 K^2_e u_{2,2} - n_1 K^2_b u_{2,111} - \frac{\partial}{\partial s}\left(n_1 t_1 K^2_b u_{2,11}\right)\right]\delta u_2\, ds + \int_{\partial\omega} \left[K^1_b u_{1,22}\, n_2^2\, \frac{\partial}{\partial n}\delta u_1 + K^2_b u_{2,11}\, n_1^2\, \frac{\partial}{\partial n}\delta u_2\right] ds. \qquad (7)
Here, for simplicity, we assumed that the boundary contour ∂ω is a plane curve which is smooth enough, i.e. differentiable and without corner points. The form of δU implies that only a certain class of external loads can be applied: indeed, the virtual work of the external loads must be consistent with it. So, we must assume that the virtual work of external loads δA is given in the following form:
\delta A = \int_\omega \left(f_1\, \delta u_1 + f_2\, \delta u_2\right) d\omega + \int_{\partial\omega} \left(\varphi_1\, \delta u_1 + \varphi_2\, \delta u_2\right) ds + \int_{\partial\omega} \left(n_2 \mu_1 \frac{\partial}{\partial n}\delta u_1 + n_1 \mu_2 \frac{\partial}{\partial n}\delta u_2\right) ds. \qquad (8)
Here f_α are surface loads. Moreover, φ_α and μ_α are forces and couples, respectively, assigned on the parts of the boundary ∂ω where u_α and/or ∂u_α/∂n are not assigned. Therefore we introduce a suitably regular partition of ∂ω into two disjoint subsets ∂_eω_α and ∂_nω_α (or ∂_eω^⊥_α and ∂_nω^⊥_α) on which either displacements (or normal derivatives of displacements) are assigned, or their dual quantities are assigned, respectively (the index α = 1, 2 refers to the displacement component u_α).
Finally, from the principle of virtual action δU -δA = 0, and by assuming the following essential boundary conditions,
u_1 = u^0_1, \quad (x_1, x_2) \in \partial_e\omega_1, \qquad (9)
u_2 = u^0_2, \quad (x_1, x_2) \in \partial_e\omega_2, \qquad (10)
n_2\, \partial_n u_1 = \vartheta_1 n_2, \quad (x_1, x_2) \in \partial_e\omega^\perp_1, \qquad (11)
n_1\, \partial_n u_2 = \vartheta_2 n_1, \quad (x_1, x_2) \in \partial_e\omega^\perp_2, \qquad (12)
where u 0 1 , u 0 2 , ϑ 1 , and ϑ 2 are given functions at ∂ω and ∂ n = ∂/∂n, we obtain the equilibrium equations and natural (static) boundary conditions
-K^1_e u_{1,11} + K^1_b u_{1,2222} - f_1 = 0, \quad (x_1, x_2) \in \omega, \qquad (13)
-K^2_e u_{2,22} + K^2_b u_{2,1111} - f_2 = 0, \quad (x_1, x_2) \in \omega; \qquad (14)
n_1 K^1_e u_{1,1} - n_2 K^1_b u_{1,222} - \frac{\partial}{\partial s}\left(n_2 t_2 K^1_b u_{1,22}\right) = \varphi_1, \quad (x_1, x_2) \in \partial_n\omega_1, \qquad (15)
n_2 K^2_e u_{2,2} - n_1 K^2_b u_{2,111} - \frac{\partial}{\partial s}\left(n_1 t_1 K^2_b u_{2,11}\right) = \varphi_2, \quad (x_1, x_2) \in \partial_n\omega_2, \qquad (16)
K^1_b u_{1,22}\, n_2^2 = n_2 \mu_1, \quad (x_1, x_2) \in \partial_n\omega^\perp_1, \qquad (17)
K^2_b u_{2,11}\, n_1^2 = n_1 \mu_2, \quad (x_1, x_2) \in \partial_n\omega^\perp_2. \qquad (18)
It is interesting that (13) and (14) contain partial derivatives of different orders. For example, (13) contains a second derivative with respect to x_1 and a fourth derivative with respect to x_2.
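To illustrate this anisotropy of the operator (an illustration added here, not part of the paper), the following minimal periodic spectral sketch solves (13) on a square with periodic boundary conditions: in Fourier space the operator reduces to division by the symbol K^1_e k_1^2 + K^1_b k_2^4, which is quadratic in k_1 but quartic in k_2. The stiffness values and the load are arbitrary illustrative choices.

```python
import numpy as np

K1e, K1b = 1.0, 1.0                       # illustrative stiffnesses
n = 64
x = np.linspace(0.0, 2*np.pi, n, endpoint=False)
X1, X2 = np.meshgrid(x, x, indexing='ij')
f1 = np.cos(3*X1) * np.cos(2*X2)          # a smooth, zero-mean load

k = np.fft.fftfreq(n, d=1.0/n)            # integer wavenumbers on [0, 2*pi)
K1, K2 = np.meshgrid(k, k, indexing='ij')
symbol = K1e*K1**2 + K1b*K2**4            # Fourier symbol of -K1e d11 + K1b d2222
symbol[0, 0] = 1.0                        # the k = 0 mode is undetermined; exclude it

u1_hat = np.fft.fft2(f1) / symbol
u1_hat[0, 0] = 0.0
u1 = np.real(np.fft.ifft2(u1_hat))

# residual check: (13) should be satisfied up to round-off
residual = np.real(np.fft.ifft2(symbol * np.fft.fft2(u1))) - f1
print("max |residual| =", np.abs(residual).max())
```

The different powers of k_1 and k_2 in the symbol are the spectral counterpart of the reduced boundary conditions discussed next.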
Since the energy has a reduced form (that is, it does not contain all the derivatives), we also have reduced boundary conditions. For example, for a fixed boundary, unlike the classical case, instead of ∂_n u_1 = 0 we have n_2 ∂_n u_1 = 0, which is a genuine condition only if n_2 ≠ 0. Let us consider the rectangle ABCD shown in Fig. 4. Note that here the fibers are oriented exactly along the directions of the rectangle sides. Two sides are free and on the two other sides the displacements are zero. The corresponding boundary conditions are
along AB: K^1_b u_{1,22} = 0, \quad K^1_b u_{1,222} = 0, \quad K^2_e u_{2,2} = 0;
along BC: K^1_e u_{1,1} = 0, \quad K^2_b u_{2,111} = 0, \quad K^2_b u_{2,11} = 0;
along CD: u_1 = 0, \quad u_2 = 0, \quad \partial_2 u_1 = 0;
along DA: u_1 = 0, \quad u_2 = 0, \quad \partial_1 u_2 = 0.
Clearly, this rectangle gives an example of degenerate boundary conditions, since instead of the four conditions of the general case we have only three. This happens when the boundary, or a part of it, is parallel to one of the coordinate axes, that is, parallel to a fiber direction.
Heuristics
It is evident that an immediate application of the classical methods used for proving existence and uniqueness of the solution of the elastic problem is not possible [START_REF Ciarlet | Mathematical Elasticity. Vol. I: Three-Dimensional Elasticity[END_REF][START_REF Eremeyev | Existence of weak solutions in elasticity[END_REF][START_REF Fichera | Existence theorems in elasticity[END_REF][START_REF Lebedev | Functional Analysis[END_REF], as coercivity could seem, at first sight, a condition which is not verified, as mentioned in [START_REF Boutin | Linear pantographic sheets. Part I: Asymptotic micro-macro models identification[END_REF]. Moreover, the results for second gradient continua proven by Healey et al. [START_REF Healey | Injective weak solutions in second-gradient nonlinear elasticity[END_REF][START_REF Mareno | Global continuation in second-gradient nonlinear elasticity[END_REF] are also not applicable here, as this energy is not coercive with respect to the highest order derivatives.
Before framing the problem in the appropriate energy space, we present here some heuristic preliminary considerations. First of all: assume that for a displacement field u* the energy (5) vanishes. It is trivial to check that, as u*_{1,1} = 0,
u*_1 = f(x_2), while, as u*_{1,22} = f_{,22} = 0, f = a_1 x_2 + b_1
and, finally,
u*_1 = a_1 x_2 + b_1,
where a_1 and b_1 are constants. In the same way we have that
u*_2 = a_2 x_1 + b_2
with constants a_2 and b_2 independent of a_1 and b_1.
Note that in the case of plane infinitesimal deformations the rigid body motion is u r = φ × X + b, where φ = φi 3 is a constant rotation vector, i 3 = i 1 × i 2 , × is the cross product, and b = b 1 i 1 + b 2 i 2 is a constant vector. So in components the rigid body motion has the form
u 1 = φx 2 + b 1 , u 2 = -φx 1 + b 2 .
It is therefore evident that the "kernel" or "null-space" of the strain energy not only includes rigid (infinitesimal) motions (corresponding to a_1 = -a_2) but also the pure shear that corresponds to elongation/contraction in the directions at an angle ±π/4 with respect to the coordinate axes (when a_1 = a_2). The null-space of the strain energy density consists of four linearly independent modes and their linear combinations
u * 1 = i 1 , u * 2 = i 2 , u * 3 = i 3 × X, u * 4 = x 2 i 1 .
Instead of u*_4 one can use the equivalent mode x_1 i_2 or the symmetric mode x_2 i_1 + x_1 i_2. Unlike in classical elasticity, the fourth mode corresponds to shear in certain directions.
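A one-line symbolic check (added here for illustration, not part of the paper) confirms that all four modes annihilate the strain energy density W of the K_p = 0 sheet; the stiffness symbols are kept generic.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
K1e, K2e, K1b, K2b = sp.symbols('K1e K2e K1b K2b', positive=True)

def W(u1, u2):
    """Strain energy density of the K_p = 0 pantographic sheet (quadratic form above)."""
    return (K1e/2*sp.diff(u1, x1)**2 + K2e/2*sp.diff(u2, x2)**2
            + K1b/2*sp.diff(u1, x2, 2)**2 + K2b/2*sp.diff(u2, x1, 2)**2)

# two translations, one infinitesimal rotation, and the shear mode x2*i1
modes = [(sp.S(1), sp.S(0)), (sp.S(0), sp.S(1)), (-x2, x1), (x2, sp.S(0))]
print([sp.simplify(W(u1, u2)) for (u1, u2) in modes])   # expected: [0, 0, 0, 0]
```

The zero output for the fourth mode is the distinctive "floppy" feature that classical elasticity does not have.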
Clearly well-posedness results must take into account such a property. Second. On the other hand it has to be recalled that boundary conditions producing well-posed problems in the case of second gradient continua are more general than when dealing with first gradient continua (see e.g. [START_REF] Germain | La méthode des puissances virtuelles en mécanique des milieux continus. première partie: théorie du second gradient[END_REF][START_REF] Germain | The method of virtual power in continuum mechanics. part 2: Microstructure[END_REF][START_REF] Mindlin | On first strain-gradient theories in linear elasticity[END_REF][START_REF] Toupin | Theories of elasticity with couple-stress[END_REF]). The procedure which is used in the aforementioned papers can be summarized as follows (see [START_REF] Auffray | Analytical continuum mechanics à la Hamilton-Piola least action principle for second gradient continua and capillary fluids[END_REF][START_REF] Dell'isola | At the origins and in the vanguard of peridynamics, non-local and higher-gradient continuum mechanics: An underestimated and still topical contribution of Gabrio Piola[END_REF]): i) one postulates the principle of virtual work, i.e. the equality between internal and external work expended on virtual displacements; ii) one determines a class of internal work functional involving second gradients of virtual displacement; iii) one determines, by means of an integration by parts, the class of external work functionals which are compatible with the determined class of internal work functionals.
A consequence (see e.g. [START_REF] Dell'isola | How contact interactions may depend on the shape of cauchy cuts in Nth gradient continua: approach "á la d'Alember[END_REF][START_REF] Germain | La méthode des puissances virtuelles en mécanique des milieux continus. première partie: théorie du second gradient[END_REF][START_REF] Mindlin | On first strain-gradient theories in linear elasticity[END_REF][START_REF] Toupin | Theories of elasticity with couple-stress[END_REF]) of the just described procedure is that Neumann problems for considered (second gradient deformation energies) must include, to be complete, double symmetric and skew-symmetric boundary forces together with forces concentrated on points. To be more precise: the class of so-called natural boundary conditions must include the dual (with respect to work functionals) quantities of normal gradients of virtual displacements: following Germain the dual of tangential part of normal gradient of virtual displacement is a "couple" ( i.e. skew-symmetric contact double forces) while the duals of normal part of normal gradient of virtual displacement is a "double force" (i.e. symmetric contact double forces). For some reasons (initially investigated in [START_REF] Dell'isola | Some cases of unrecognized transmission of scientific knowledge: From Antiquity to Gabrio Piolas peridynamics and generalized continuum theories[END_REF][START_REF] Dell'isola | Higher-gradient continua: The legacy of Piola, Mindlin, Sedov and Toupin and some future research perspectives[END_REF]dellIsola Hellinger, which surely need further investigations) this kind of boundary conditions has been considered, sometimes and by some schools of mechanicians, unphysical: the reader is referred to the beautiful paper by Sedov, Leonid Ivanovich, [START_REF] Sedov | Mathematical methods for constructing new models of continuous media[END_REF] for a discussion of this point.
After having identified the displacements which are in the null space of the deformation energy, a Conjecture about mixed boundary conditions which are likely to produce well-posed problems can consequently be formulated. Indeed, let us partition the boundary ∂ω of the body ω into two disjoint subsets, i.e. ∂ω_e and ∂ω_n. We assume that the displacements on ∂ω_e are assigned and that the displacements on ∂ω_n are free. We call AC the set of C^2 displacements verifying the assigned conditions on ∂ω_e. We say that AC is singular if there exist an element u in AC and a displacement field u_0 belonging to the null space of the deformation energy (i.e. a displacement having vanishing deformation energy) such that u + u_0 also belongs to AC. We conjecture here (and rigorously prove in the next section) that the considered mixed boundary problem is well-posed if and only if AC is NOT singular. Remark that the aforementioned statement reduces to the standard requirement that in well-posed problems the constrained body cannot undergo rigid displacements when the considered energy is a first gradient one and is positive definite when regarded as a function of the infinitesimal strain tensor. The concept of underconstrained system (see [START_REF Kuznetsov | Underconstrained Structural Systems[END_REF]) has to be modified in order to include the treated case of planar second gradient continua: see for instance Figs. 1 and 2 for examples of underconstrained pantographic sheets. This point will need further investigations to include the case of pantographic sheets moving in three-dimensional space and three-dimensional pantographic bodies.
In the present paper we limit ourselves to consider Dirichlet's and mixed boundary problems in which on a part of the body boundary only the displacement is assigned, while the remaining part of the body boundary is left free.
Existence and uniqueness of weak solutions
Let us now come back to the first variation of the energy functional. For the solution we have the principle of virtual work in the form
\delta U - \delta A = 0. \qquad (19)
For simplicity let us replace δu by a new function v. With v, Eq. (19) transforms to
B(u, v) \equiv \int_\omega \left(K^1_e u_{1,1} v_{1,1} + K^2_e u_{2,2} v_{2,2} + K^1_b u_{1,22} v_{1,22} + K^2_b u_{2,11} v_{2,11}\right) d\omega = \int_\omega \left(f_1 v_1 + f_2 v_2\right) d\omega + \int_{\partial\omega} \left(\varphi_1 v_1 + \varphi_2 v_2 + n_2 \mu_1 \frac{\partial v_1}{\partial n} + n_1 \mu_2 \frac{\partial v_2}{\partial n}\right) ds. \qquad (20)
Here we introduced the bilinear form B(u, v) to denote the quadratic terms in (20). Now we introduce the weak solution of the boundary-value problem (13)-(17) as the vector-function u such that the variational equation (20) is fulfilled for any test function v = v_1 i_1 + v_2 i_2. The properties of u and v are specified below.
Without loss of generality, in what follows we use the following dimensionless form of W:
2W = u_{1,1}^2 + u_{2,2}^2 + u_{1,22}^2 + u_{2,11}^2. \qquad (21)
Now the bilinear form is
B(u, v) = \int_\omega \left(u_{1,1} v_{1,1} + u_{2,2} v_{2,2} + u_{1,22} v_{1,22} + u_{2,11} v_{2,11}\right) d\omega.
Here we keep the same notations u_α and x_α for dimensionless displacements and dimensionless coordinates, respectively. So, W has the form of a seminorm in anisotropic Sobolev spaces, see the definition in [START_REF Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF Besov | Integral Representations of Functions and Imbedding Theorems[END_REF] and the Appendix. More precisely: let u_1 ∈ W^{(1,2)}_2(ω) and u_2 ∈ W^{(2,1)}_2(ω); then we have that u ∈ W^{(1,2)}_2(ω) ⊕ W^{(2,1)}_2(ω) and
2U(u) = \int_\omega \left(u_{1,1}^2 + u_{2,2}^2 + u_{1,22}^2 + u_{2,11}^2\right) d\omega = |u_1|_{W^{(1,2)}_2} + |u_2|_{W^{(2,1)}_2} \qquad (22)
is a seminorm in W^{(1,2)}_2(ω) ⊕ W^{(2,1)}_2(ω). Here we introduced the following notation for the auxiliary seminorms
|f|_{W^{(1,2)}_2} := \|f_{,1}\|_{L_2} + \|f_{,22}\|_{L_2}, \qquad |f|_{W^{(2,1)}_2} := \|f_{,11}\|_{L_2} + \|f_{,2}\|_{L_2},
whereas it is possible to transform them into norms for instance following the standard choice:
\|f\|_{W^{(1,2)}_2} = \|f\|_{L_2} + |f|_{W^{(1,2)}_2}, \qquad \|f\|_{W^{(2,1)}_2} = \|f\|_{L_2} + |f|_{W^{(2,1)}_2}. \qquad (23)
It is clear that the functional space W^{(1,2)}_2(ω) ⊕ W^{(2,1)}_2(ω) is constituted exactly by the set of all functions for which (22) is finite. We will call energy space E for the considered energy functional any subspace of W^{(1,2)}_2(ω) ⊕ W^{(2,1)}_2(ω) which is the completion of one of the previously introduced spaces AC, relative to NONSINGULAR boundary conditions, using the norms (23). Remark that when restricted to an energy space the seminorm given by (22) becomes a norm.
Now the definition of a weak solution for linear pantographic structures can be given as follows.
Definition 1. We call u ∈ E a weak solution of the equilibrium problem if (20) is fulfilled for any test function v from a dense set in E.
The bilinear form B(u, v) is continuous and the following inequalities are valid
B(u, v) \le |u_1|_{W^{(1,2)}_2}\, |v_1|_{W^{(1,2)}_2} + |u_2|_{W^{(2,1)}_2}\, |v_2|_{W^{(2,1)}_2} \le \|u\|_E\, \|v\|_E. \qquad (24)
For the analysis of existence and uniqueness of weak solutions we start by considering two cases. The simplest case is given by Dirichlet boundary conditions.
Dirichlet's boundary conditions
We start by proving the existence and uniqueness for the simplest case when the whole boundary is fixed. So we consider the set of equations for the strong formulation of equilibrium problem:
-u_{1,11} + u_{1,2222} = f_1, \qquad -u_{2,22} + u_{2,1111} = f_2, \quad (x_1, x_2) \in \omega; \qquad (25)
u_1 = 0, \quad u_2 = 0, \quad n_2\, \partial_n u_1 = 0, \quad n_1\, \partial_n u_2 = 0, \quad (x_1, x_2) \in \partial\omega. \qquad (26)
Here the weak solution is defined through the integral equation
B(u, v) = \int_\omega \left(f_1 v_1 + f_2 v_2\right) d\omega \quad \forall\, v_1, v_2 \in C^2_0(\omega), \qquad (27)
which, assuming that f belongs to L_2(\omega), can be written as
B(u, v) - (f, v)_{L_2} = 0 \quad \forall\, v \in C^2_0(\omega).
Using the Poincaré inequalities (Friedrich's inequality, see e.g. [START_REF] Adams | Sobolev Spaces[END_REF]) we get
\|u_1\|_{L_2} \le C_1 \|u_{1,1}\|_{L_2}, \qquad \|u_2\|_{L_2} \le C_2 \|u_{2,2}\|_{L_2} \qquad (28)
with some constants C 1 and C 2 and, as a consequence, we can establish that
|u_1|_{W^{(1,2)}_2} + |u_2|_{W^{(2,1)}_2} \ge C \left(\|u_1\|_{W^{(1,2)}_2} + \|u_2\|_{W^{(2,1)}_2}\right)
with another constant C. In other words, we proved that
|\cdot|_{W^{(1,2)}_2} and |\cdot|_{W^{(2,1)}_2} play the role of norms in \mathring{W}^{(1,2)}_2 and \mathring{W}^{(2,1)}_2, respectively, where with the upper ball we denote the completion of C^2_0(ω) (or C^∞_0(ω)) with respect to the corresponding norms. So here the energy space E is the anisotropic Sobolev space \mathring{W}^{(1,2)}_2(ω) ⊕ \mathring{W}^{(2,1)}_2(ω). This means that we have proven that B(u, v) is coercive:
B(u, u) \ge C \|u\|_E^2.
One can easily prove that (f, v)_{L_2} is a bounded linear functional on E. Thus, by using the Lax-Milgram theorem [START_REF Evans | Partial differential equations[END_REF], the following theorem can be easily proven.
Theorem 1. Let the Cartesian components f_1 and f_2 of f belong to the space L_2(ω). There exists a weak solution u* ∈ E ≡ \mathring{W}^{(1,2)}_2(ω) ⊕ \mathring{W}^{(2,1)}_2(ω) to the equilibrium problem (25) and (26), which for any v ∈ E satisfies the equation
(u^*, v)_E - \int_\omega f \cdot v\, d\omega = 0.
Furthermore, u* is unique and it is a minimizer of the energy:
F(u^*) = \inf_{u \in E} F(u), \qquad F(u) \equiv U(u) - \int_\omega f \cdot u\, d\omega.
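The minimizer characterization can be visualised with a small Ritz-Galerkin sketch (added here as an illustration, not part of the proof): on the square (0, π)^2 the sine functions vanish on the boundary and diagonalise B for the u_1 component, so the Galerkin solution is explicit, and any perturbation of its coefficients raises F. The stiffnesses, load and truncation are arbitrary illustrative choices.

```python
import numpy as np

K1e, K1b = 1.0, 1.0
M = N = 8                                       # basis truncation
f = lambda x1, x2: np.sin(2*x1) * np.sin(x2)    # body load f1

q = 200
x = np.linspace(0.0, np.pi, q)
dx = np.pi / (q - 1)
X1, X2 = np.meshgrid(x, x, indexing='ij')
F = f(X1, X2)

# Galerkin coefficients a_mn = f_mn / (K1e m^2 + K1b n^4) in the sine basis
a = np.zeros((M, N))
for m in range(1, M + 1):
    for n in range(1, N + 1):
        phi = np.sin(m*X1) * np.sin(n*X2)
        f_mn = (4/np.pi**2) * np.sum(F*phi) * dx*dx      # sine coefficient of f
        a[m-1, n-1] = f_mn / (K1e*m**2 + K1b*n**4)

def energy(coeff):
    """F(u) = 1/2 B(u,u) - (f,u) for u = sum coeff_mn sin(m x1) sin(n x2)."""
    m = np.arange(1, M+1)[:, None]; n = np.arange(1, N+1)[None, :]
    c = (np.pi**2/4) * (K1e*m**2 + K1b*n**4)
    return 0.5*np.sum(c*coeff**2) - np.sum(c*a*coeff)

print("F(u*)        =", energy(a))
print("F(u* + bump) =", energy(a + 0.05))   # strictly larger, as Theorem 1 predicts
```

The perturbed energy is always larger, reflecting uniqueness of the minimizer in the (here Dirichlet) energy space.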
Remark 1. Since for coercivity we need the inequalities (28), which only require that the functions vanish at the boundary (i.e. u_1 = u_2 = 0 at ∂ω), for uniqueness it is enough to fulfil only the boundary conditions concerning the displacements, without considering the conditions (26) on the normal derivatives.
Remark 2. We used here L 2 (ω) as a functional space for f . This condition can be weakened using imbedding theorems of E into anisotropic Lebesgue spaces [START_REF] Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF] Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF] Besov | Integral Representations of Functions and Imbedding Theorems[END_REF] and we omit this for simplicity.
For non-homogeneous boundary conditions of the form (9)-(12) we seek the solution in the form u = u* + u_0, where u_0 is a vector function which satisfies the non-homogeneous conditions, whereas for u* the homogeneous boundary conditions (26) are assumed. Substituting this representation into the equilibrium equations and the boundary conditions, we reduce the non-homogeneous boundary-value problem to the previous one, for which we have already proved the theorem on existence and uniqueness of weak solutions.
Mixed boundary conditions
Somewhat more difficult is the case of mixed boundary conditions. In linear elasticity it is known that for existence and uniqueness it is enough to require that a part of the boundary is fixed [START_REF Ciarlet | Mathematical Elasticity. Vol. I: Three-Dimensional Elasticity[END_REF][START_REF Eremeyev | Existence of weak solutions in elasticity[END_REF][START_REF Fichera | Existence theorems in elasticity[END_REF][START_REF Lebedev | Functional Analysis[END_REF]. In our problem this is not the case. Indeed, as an example we consider two rectangles with one fixed side while the others are free, see Fig. 5. More precisely, on side AB the displacements are zero: u = 0. The difference between the rectangles consists only in their orientation with respect to the coordinate axes, that is, to the fiber orientations. It is clear that for the first rectangle the solution is not unique, since the vector u = a x_2 i_1 satisfies the equilibrium and boundary conditions for any value of a: the set AC in this circumstance is indeed SINGULAR. For the second rectangle, instead, the set AC is actually NON SINGULAR and the aforementioned displacement is not a solution. In other words, for rectangle a) we have at least two solutions, u = 0 and u = a x_2 i_1. Obviously we should avoid such a situation, since even without loading we have infinitely many non-trivial (deformative) solutions. Thus, in what follows we always assume that the boundary conditions are nonsingular.
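This singularity test can be automated symbolically; the sketch below (added for illustration) checks whether the non-rigid floppy mode u = a x_2 i_1 is compatible with the clamping of side AB for the two orientations of Fig. 5. The chosen edge parametrisations are assumptions consistent with the discussion (the clamped side lying on the x_1-axis for case (a) and on the x_2-axis for case (b)).

```python
import sympy as sp

x1, x2, a, s = sp.symbols('x1 x2 a s', real=True)
mode = (a*x2, sp.S(0))            # the floppy mode u = a*x2*i1

def vanishes_on(mode, edge):
    """True if the mode is identically zero along the clamped edge (parametrised by s)."""
    u1, u2 = (comp.subs(edge) for comp in mode)
    return sp.simplify(u1) == 0 and sp.simplify(u2) == 0

edge_a = {x1: s, x2: 0}           # assumed case (a): clamped side along the x1-axis
edge_b = {x1: 0, x2: s}           # assumed case (b): clamped side along the x2-axis
print("case (a): mode survives clamping ->", vanishes_on(mode, edge_a))   # True  -> AC singular
print("case (b): mode survives clamping ->", vanishes_on(mode, edge_b))   # False -> AC nonsingular
```

A True output means the floppy mode satisfies the essential conditions, i.e. the boundary conditions are singular and uniqueness fails.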
Let us consider the mixed boundary-value problem formulated by (12)-(18). Here the energy space E is a subspace of W^{(1,2)}_2(ω) ⊕ W^{(2,1)}_2(ω), obtained as the completion of the set of C^2(ω) functions which verify the essential boundary conditions (9)-(12).
The weak solution is defined as an element u belonging to E satisfying the equation
B(u, v) = \int_\omega \left(f_1 v_1 + f_2 v_2\right) d\omega + \int_{\partial\omega} \left(\varphi_1 v_1 + \varphi_2 v_2 + n_2 \mu_1 \frac{\partial v_1}{\partial n} + n_1 \mu_2 \frac{\partial v_2}{\partial n}\right) ds \qquad (29)
for any admissible function v (i.e. a function belonging to a dense subset of E).
Using the same technique we formulate the theorem on existence and uniqueness of the weak solution in E.
Theorem 2. Let the Cartesian components f_1 and f_2 of f belong to the space L_2(ω), let φ_α ∈ L_2(∂_nω_α) and μ_α ∈ L_2(∂_nω^⊥_α), and assume that the boundary conditions are nonsingular. There exists a weak solution u* ∈ E to the equilibrium problem (12)-(18) which for any v ∈ E satisfies the equation (29).
Furthermore, u * is unique and it is a minimizer of the functional F (u):
F(u^*) = \inf_{u \in E} F(u), \qquad F(u) \equiv U(u) - \int_\omega f \cdot u\, d\omega - \int_{\partial\omega} \left(\varphi_1 u_1 + \varphi_2 u_2 + n_2 \mu_1 \frac{\partial u_1}{\partial n} + n_1 \mu_2 \frac{\partial u_2}{\partial n}\right) ds.
Conclusions
The results presented in this paper allow us to prove existence and uniqueness theorems for the elastic problem in the case of planar pantographic sheets and for a variety of boundary conditions. The main difficulties which we had to confront were: i) the existence of floppy modes, i.e. deformations corresponding to zero deformation energy and ii) the absence in the deformation energies of many higher order derivatives. Therefore the results by Healey and Chambon [START_REF] Chambon | Uniqueness studies in boundary value problems involving some second gradient models[END_REF][START_REF] Healey | Injective weak solutions in second-gradient nonlinear elasticity[END_REF][START_REF] Mareno | Global continuation in second-gradient nonlinear elasticity[END_REF] could not be applied directly and there was the appearance of a lack of coercivity of considered energy. Indeed the second gradient deformation energy for pantographic sheets is not coercive if one considers the standard Sobolev Space, whose norm involves all second order derivatives.
However, we prove that the standard Hilbertian abstract setting used for solving the elastic problem does not need to be changed. Instead, one has to change the definition of the Energy spaces which correspond to the various imposed boundary conditions: they must be regarded as subsets of the Anisotropic Sobolev space whose norm is defined by involving only the derivatives appearing in the considered deformation energy. The abstraction effort due to Nikol'skii (and then to Besov and others), which led him to introduce a wider class of Sobolev spaces, was initially motivated only by the need of developing a mathematical theory based on the minimum possible necessary assumptions: Anisotropic Sobolev Spaces include functions which do not possess all higher order weak derivatives.
The abstract tool which he developed allowed us to frame rather naturally the numerical and mathematical problems concerning the equilibrium of linear pantographic sheets.
We are also confident that the same tools will allow us to study non-linear deformations problems.
Appendix. Anisotropic Lebesgue and Sobolev functional spaces
In the paper we used the classic and anisotropic Lebesgue and Sobolev functional spaces. Here we present necessary information on this topic. In plane elasticity and other problems of mechanics the functional spaces such as Lebesgue space L 2 (ω) and Sobolev's spaces W 1 2 (ω) and W 2 2 (ω) are widely used [START_REF] Ciarlet | Mathematical Elasticity. Vol. I: Three-Dimensional Elasticity[END_REF][START_REF] Eremeyev | Existence of weak solutions in elasticity[END_REF][START_REF] Lebedev | Functional Analysis in Mechanics[END_REF][START_REF] Lebedev | Functional Analysis[END_REF]. The norms in these spaces are defined as follows
\|f\|_{L_2} = \left(\int_\omega f^2\, d\omega\right)^{1/2}, \qquad \|f\|_{W^1_2} = \left(\|f\|^2_{L_2} + \|f_{,1}\|^2_{L_2} + \|f_{,2}\|^2_{L_2}\right)^{1/2},
\|f\|_{W^2_2} = \left(\|f\|^2_{L_2} + \|f_{,1}\|^2_{L_2} + \|f_{,2}\|^2_{L_2} + \|f_{,11}\|^2_{L_2} + 2\|f_{,12}\|^2_{L_2} + \|f_{,22}\|^2_{L_2}\right)^{1/2},
where f = f(x_1, x_2) is a function defined on an open set in the two-dimensional Euclidean space, ω ⊂ R^2, whose boundary is assumed to be smooth enough.
Various useful requirements to the boundary of ω are discussed in [START_REF] Adams | Sobolev Spaces[END_REF][START_REF] Besov | Integral Representations of Functions and Imbedding Theorems[END_REF]. The Greek indices take values 1, 2. L 2 (ω), W 1 2 (ω) and W 2 2 (ω) are examples of a separable Hilbert space [START_REF] Adams | Sobolev Spaces[END_REF].
In what follows we use the various imbedding theorems for Sobolev's spaces. Let us recall the general definition of imbedding. We say the normed space E is imbedded in the normed space H, and we write E → H to denote this imbedding, if (i) E is a vector subspace of H, and (ii) there exists constant C such that u H ≤ C u E ∀u ∈ E. For imbedding theorems in Sobolev's spaces we refer to [START_REF] Adams | Sobolev Spaces[END_REF][START_REF] Lebedev | Functional Analysis in Mechanics[END_REF][START_REF] Lebedev | Functional Analysis[END_REF].
In addition to the classical Lebesgue and Sobolev spaces we introduce the anisotropic Lebesgue and Sobolev spaces. Here we restrict ourselves to functions defined on a set of R^2. Let p = (p_1, p_2) be a multiindex, where 1 < p_α < ∞. Then the norm in the anisotropic Lebesgue space L_p is defined as
\|f\|_{L_\mathbf{p}} = \left(\int \left(\int |f(x_1, x_2)|^{p_1}\, dx_1\right)^{p_2/p_1} dx_2\right)^{1/p_2}.
If p_1 = p_2 = p we use the standard notation L_p. The quantity ‖·‖_{L_\mathbf{p}} is called the mixed norm [START_REF Adams | Sobolev Spaces[END_REF]. An anisotropic Sobolev space consists of functions having different differential properties in different coordinate directions, so such functions have generalized derivatives of different order and, in general, different L_p exponents in the coordinate directions x_1 and x_2. The theory of the anisotropic Sobolev spaces, including imbedding theorems, relations with other Sobolev spaces and analysis of the coercivity of differential operators, is presented in [START_REF Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF Nikol'skii | On imbedding, continuation and approximation theorems for differentiable functions of several variables[END_REF], see also [START_REF Besov | Integral Representations of Functions and Imbedding Theorems[END_REF]. We introduce the multiindex l = (l_1, l_2), where l_α are natural numbers, and the norm
\|f\|_{W^{\mathbf{l}}_\mathbf{p}} = \|f\|_{L_\mathbf{p}} + \sum_{\alpha=1}^{2} \|\partial^{l_\alpha}_\alpha f\|_{L_\mathbf{p}}. \qquad (30)
So, the set of functions defined on ω and having generalized derivatives such that the introduced norm is finite is called the anisotropic Sobolev space W^l_p(ω). Obviously, when l_1 = l_2 = l and p_1 = p_2 = p we have the classical Sobolev space W^l_p. The anisotropic Sobolev space W^l_p is a separable Banach space, whereas W^l_2 is a Hilbert space.
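The anisotropic norm (30) is straightforward to evaluate for smooth functions; the sketch below (an illustration added here) computes it for a sample function on the unit square in the two cases l = (1, 2) and l = (2, 1), p = (2, 2), which are the spaces used in the paper. The choice of the sample function and of the domain is arbitrary.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.sin(x1) * x2**2            # a sample smooth function on omega = (0,1)x(0,1)

def L2(g):
    """L2 norm on the unit square."""
    return sp.sqrt(sp.integrate(sp.integrate(g**2, (x1, 0, 1)), (x2, 0, 1)))

# norm (30) with l = (1,2): ||f|| = ||f||_L2 + ||d1 f||_L2 + ||d2^2 f||_L2
norm_W12 = L2(f) + L2(sp.diff(f, x1)) + L2(sp.diff(f, x2, 2))
# norm (30) with l = (2,1): ||f|| = ||f||_L2 + ||d1^2 f||_L2 + ||d2 f||_L2
norm_W21 = L2(f) + L2(sp.diff(f, x1, 2)) + L2(sp.diff(f, x2))

print("||f||_{W^(1,2)_2} ≈", float(norm_W12))
print("||f||_{W^(2,1)_2} ≈", float(norm_W21))
```

The two values differ because the two spaces control different mixed combinations of derivatives, which is exactly the anisotropy exploited in the energy space E.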
We also introduce the anisotropic Sobolev space \mathring{W}^{\mathbf{l}}_\mathbf{p} as the closure of the C^2_0(ω) (or C^∞_0(ω)) functions in the norm (30). For our purposes we consider the two specific anisotropic Sobolev spaces W^{(1,2)}_2(ω) and W^{(2,1)}_2(ω).
With certain assumptions on the regularity of ω, for these spaces we have the following imbedding theorems [START_REF Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF Nikol'skii | On imbedding, continuation and approximation theorems for differentiable functions of several variables[END_REF]: W^{\mathbf{l}}_2(ω) → W^1_2(ω), W^{\mathbf{l}}_2(ω) → C(ω), \mathbf{l} = {(1, 2), (2, 1)}.
Evidently, any function f ∈ W_2^2(ω) belongs to W_2^{\mathbf{l}}(ω) with \mathbf{l} ∈ \{(1,2), (2,1)\}, but not all elements of W_2^{\mathbf{l}}(ω) belong to W_2^2(ω). For more details on imbeddings in anisotropic Sobolev spaces and their further generalizations, including results on traces of functions, we refer to [START_REF] Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF] Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF] Besov | Integral Representations of Functions and Imbedding Theorems[END_REF][START_REF] Nikol'skii | On imbedding, continuation and approximation theorems for differentiable functions of several variables[END_REF].
Fig. 1 Scheme of a pantographic sheet and beams connection through a pivot.
Fig. 2 3D printed specimen of a pantographic sheet.
Fig. 4 Pantographic rectangle.
Fig. 5 Two rectangles with clamped edge with different fiber orientation.
A.E. acknowledges the financial support by the Russian Science Foundation under grant "Methods of microstructural nonlinear analysis, wave dynamics and mechanics of composites for research and design of modern metamaterials and elements of structures made on its base" (No
First of all, I want to thank Professor Abhay Shukla. He gave me the opportunity to do the internship and the Ph.D. at the UPMC university. In the almost four years we worked together I learned many things from him and received all his support, especially in the most difficult times. He was for me a real example to follow. We discussed many things, but most importantly we had fun doing scientific research, which is something I'll never forget.
I thank all my team, starting with Johan Biscaras, for the things I learned from him and for the work done together for three years; I thank Zhesheng Chen for his help in the laboratory and the time we spent together; a special thanks goes to Edoardo Sterpetti, whom I don't hesitate to call a rediscovered friend, but also a valuable and trusted colleague who helped me with my work and with lifting weights. I deeply thank Mario Dagrada, a great officemate but most of all a great friend. I thank all my colleagues from the IMPMC institute, in particular Guilherme Ribeiro and all the other Ph.D. students.
I thank all the Paridini friends who contributed (and will continue to contribute) to making my life in Paris a little more Italian: Alessandro, Giulio and Serena, Luca, Flavia. I obviously thank Achille. I could write many pages of acknowledgements for him but I will only say "sei un zozzo, sei un delinguente". Thanks to Andrea, a friend always present but unfortunately far away.
I thank Francesca's family who welcomed me as a son and have always supported and encouraged me.
I thank my parents and my sisters with all my heart. Despite some misunderstandings in the past I have always received love and support from all of them and it is surely thanks to them that I am here today.
Finally, my biggest thanks goes to Francesca, not only for her sacrifice in moving to Paris, but also for the support, patience and love that she shows me every day.

Space Charge Doping is a new technique invented and developed during this thesis for the electrostatic doping of such materials deposited on a glass surface. A space charge is created at the surface by causing sodium ions contained in the glass to drift under the effect of heat and an external electric field. This space charge in turn induces a charge accumulation in the material deposited on the glass surface which can be higher than 10^14 cm^-2. Detailed characterization using transport, Hall effect, Raman and AFM measurements shows that the doping is reversible, ambipolar and does not induce chemical changes. It can be applied to large areas as shown with CVD graphene.
In a second phase the space charge doping method is applied to polycrystalline ultra-thin films (< 40 nm) of ZnO_{1-x}. A lowering of the sheet resistance by over 5 orders of magnitude is obtained. Low temperature magneto-transport measurements reveal that the doped electrons are confined in two dimensions. A remarkable transition between weak localization and anti-localization is observed as a function of doping and temperature, and conclusions are drawn concerning the scattering phenomena governing electronic transport in this material under different conditions.
Chapter 1
2D materials and doping

1.1 Introduction
The modulation of the electronic properties of a material, with the eventual generation of electronic phase transitions, is the driving force behind a variety of discoveries and inventions in condensed matter physics, chemistry and materials science. It is for example a key requirement for the use of a material in electronics. In semiconductors this is usually achieved by chemical means, by introducing a precise amount of impurities into a pure crystal and changing the charge carrier density. This process, known as chemical doping, is widely used to change material properties, for example to introduce a metallic or superconducting transition in a host material. If the introduced charge carriers are electrons, the doping is of type n, while we have p-doping if the doped charge carriers are holes.
Modulating the charge carrier density in a very thin or surface layer can be also achieved electrostatically through the application of an electric field. The usual way to perform this kind of doping is through a metal-insulatorsemiconductor sandwich known as a field effect transistor. If a potential difference is applied between the metal and the semiconductor through the intermediary insulator, the corresponding band bending produced at the interface between the insulator and the semiconductor will create an accumulation of charge carriers (the sign of the charges depends on the polarity of the potential difference). This metal-insulator-semiconductor field effect transistor is a ubiquitous component, with dynamic or static attributes, of all microelectronic devices.
In recent years there has been a growing interest in the study of two-dimensional materials. Very generally, a two-dimensional (2D) system in condensed matter physics can be considered as a system where one dimension is much smaller than some typical coherence length. Thus, in very pure semiconductor heterostructures (for example the two-dimensional electron gas (2DEG) found in a Si inversion layer using a metal-oxide-semiconductor (MOS) structure [START_REF] Ando | Electronic properties of twodimensional systems[END_REF]), a channel several tens of nanometers thick containing mobile carriers with a mean free path which is ten times larger is still 2D. More recently, 2D materials have come to mean few-atomic-layer materials, usually exfoliated from bulk layered precursors [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF]. This technique is based on the fact that some materials are formed in layers, with strong adhesion of the atoms within the layer and weak attraction (van der Waals forces) between adjacent layers, so that it is possible to separate and isolate a few layers or even a single atomic layer of material. Several other techniques are now used for fabricating 2D materials, like chemical vapour deposition (CVD) [START_REF] Liang | Toward clean and crackless transfer of graphene[END_REF] or anodic bonding [START_REF] Chen | Anodic bonded 2D semiconductors: from synthesis to device fabrication[END_REF]. They hold promise from an industrial point of view; for example a 2D material could provide ways of increasing device density in microelectronics. From the point of view of fundamental research, a 2D material can present unique features which are not observed in three-dimensional systems, as a direct consequence of the quantization of the energy levels or for topological reasons. For our purposes 2D materials are ideal for the application of electrostatic doping since, as we point out above, this method is applicable to ultra-thin layers. However, limited options exist to do this, especially at ultra-high carrier concentrations.
In this thesis, a new technique called Space Charge Doping for electrostatically doping materials deposited on glass has been developed. The technique exploits an intrinsic feature of glass in order to create a space charge at the interface between the glass and the material and was applied to two different materials: graphene and ultra-thin films of zinc oxide. The first tests on the space charge doping were done on graphene as it is a 2D material that can be produced relatively easily and can be doped n or p. It has also potential applications as a transparent conducting electrode (TCE). We then applied the space charge doping to ZnO in order to validate the technique for thin-films (thus not an intrinsically two-dimensional material) and to study the properties of this material when it is highly doped. ZnO can also be potentially used as a TCE material and also has been investigated for other applications which involve doping, for example spintronics.
In this chapter an overview of the electronic properties, synthesis methods, applications and doping techniques of graphene and zinc oxide will be given in order to prepare the ground for the results which will be presented in the rest of this thesis.
Electrostatic doping
A central result of this thesis is the development of a new and original electrostatic doping technique. Electrostatic doping is crucial for microelectronics since it is the basic working principle of the metal-oxide-semiconductor field effect transistor (MOSFET). In fundamental research it is progressively being used as a powerful and reversible tool for inducing electronic phase transitions in a number of materials [START_REF] Ahn | Electrostatic modification of novel materials[END_REF]. Here we give a brief description of the basic concepts of electrostatic doping.
A MOSFET is composed of a conducting channel, drain and source contacts placed at the channel edges, and a gate contact placed on top of the channel and separated from it by a dielectric layer. The gate electrode is used to modulate the conductivity of the channel (Figure 1.1). The channel and the gate electrode behave like the two conducting faces of a capacitor. Thus the application of a gate voltage V_G will cause an accumulation of charges at the semiconductor-oxide interface, which is the channel. The electric field generated by the gate electrode penetrates into the channel and causes a shift in the energy spectrum at the interface with the oxide. The relative shift of the energy bands of the semiconductor with respect to the Fermi energy of the whole device causes a change in the carrier concentration at the semiconductor/oxide interface. We can thus create an accumulation, depletion or inversion layer if, respectively, the density of charge carriers is augmented, reduced or reversed in sign [START_REF] Ahn | Electrostatic modification of novel materials[END_REF]. This is schematically shown in Figure 1.1. In this particular case, depending on the polarity and magnitude of V_G, in a p-doped semiconductor one can induce an enhancement or reduction of the hole concentration at the interface or, if the polarity of V_G is reversed, an electron concentration. Moreover, like in a capacitor, the higher the capacitance associated with the MOS structure, the higher the carrier accumulation at the interface. This can be achieved with oxides or dielectrics with a high dielectric constant or with thinner oxides. The typical surface carrier density attainable with this method is of the order of 10^12-10^13 cm^-2. This basic doping principle has been used not only to fabricate devices for electronics applications but also, for example, to induce insulating-superconducting transitions [START_REF] Hurand | Field-effect control of superconductivity and Rashba spin-orbit coupling in top-gated LaAlO3/SrTiO3 devices[END_REF] or, in a more exotic implementation, through the use of liquid electrolytes [START_REF] Ye | Liquid-gated interface superconductivity on an atomically flat film[END_REF], which significantly increase the associated capacitance of the gate electrode.
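As a rough order-of-magnitude check of this figure, the induced sheet carrier density can be estimated from the parallel-plate capacitor relation n = ε_0 ε_r V_G / (e d). The short numerical sketch below uses purely illustrative values (a 300 nm SiO2 gate oxide and V_G = 30 V), not parameters of a specific device:

eps0  = 8.854e-12   # vacuum permittivity (F/m)
eps_r = 3.9         # relative permittivity of SiO2 (assumed)
d     = 300e-9      # oxide thickness (m), illustrative value
V_G   = 30.0        # gate voltage (V), illustrative value
e     = 1.602e-19   # elementary charge (C)

C_area  = eps0 * eps_r / d     # gate capacitance per unit area (F/m^2)
n_sheet = C_area * V_G / e     # induced sheet carrier density (m^-2)
print(n_sheet / 1e4)           # ~2e12 cm^-2, within the 10^12-10^13 cm^-2 range quoted above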
This principle of modification of electronic properties at the interface has been exploited in this thesis to develop our original doping method capable of reaching considerably higher doping than the MOSFET structure in a reversible and reliable way.
Graphene

1.3.1 Electronic structure
Graphene is a single layer crystal of carbon atoms arranged in a hexagonal geometry. This hexagonal structure can also be seen as two interpenetrating triangular lattices A and B [START_REF] Das | Electronic transport in two-dimensional graphene[END_REF], as shown in Figure 1.2. The 2D lattice vectors of the unit cell are a_1 = (a/2)(3, √3) and a_2 = (a/2)(3, -√3), with a ≈ 0.142 nm the carbon-carbon distance. Each atom is bound to its three neighbours by strong σ bonds. The fourth bond is a π bond lying perpendicular to the graphene plane. The π bond can be seen as two lobes centred on the nucleus; each atom has one, and they hybridize together to form what are called the π and π* bands. Thanks to this particular atomic arrangement graphene has some unique properties in terms of electronic transport and mechanical behaviour.
The energy dispersion of single layer graphene was calculated in 1947 by Wallace [START_REF] Wallace | The band theory of graphite[END_REF] with the tight-binding approximation for a single layer of graphite, and the result of the calculation is shown in Figure 1.3. There exist two sets of three non-equivalent Dirac points, each set coming from one of the two triangular sub-lattices of graphene, giving a valley degeneracy g_v = 2. At K and K' the conduction and valence bands meet, making graphene a zero-gap semiconductor, which is why it is often referred to as a semi-metal. In neutral conditions, the Fermi energy lies at the meeting point of the two bands. The most important remark is that charges in the region within 1 eV of the Dirac energy have a linear dispersion relation (Figure 1.4), as described by the Dirac equation for massless fermions:
\[ E_\pm(\mathbf{k}) \approx \pm \hbar v_F \, |\mathbf{k} - \mathbf{K}| \tag{1.3} \]
with the Fermi velocity v_F ≈ 10^6 m/s, about 1/300th of the speed of light in vacuum, and ħ the reduced Planck constant.
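A useful consequence of Equation (1.3) is the relation between the Fermi energy and the sheet carrier density, E_F = ħ v_F √(πn). The short numerical sketch below (with generic, illustrative carrier densities) gives the corresponding energy scales:

import math

hbar = 1.0546e-34   # reduced Planck constant (J s)
v_F  = 1.0e6        # Fermi velocity of graphene (m/s)
eV   = 1.602e-19    # conversion factor J -> eV

for n_cm2 in (1e12, 1e13, 1e14):            # sheet carrier densities (cm^-2), illustrative
    n   = n_cm2 * 1e4                       # convert to m^-2
    E_F = hbar * v_F * math.sqrt(math.pi * n)
    print(n_cm2, E_F / eV)                  # ~0.12 eV, ~0.37 eV and ~1.2 eV respectively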
Synthesis
There exist nowadays several techniques for the fabrication of graphene.
The aim is to produce graphene with the least possible concentration of defects and the largest possible area. The synthesis techniques discussed in the following in general either do one or the other.
Mechanical exfoliation
The mechanical exfoliation technique led to the isolation of the first graphene sample [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF]. A piece of highly oriented pyrolytic graphite (HOPG) or natural graphite is peeled off repeatedly with adhesive tape until very thin flakes of graphite are isolated. They are then deposited on the Si/SiO2 surface. This technique generates samples of excellent quality, with typical lateral size of a few tens of microns, and it is thus well suited for fundamental research but not for large scale production.
Chemical Vapour Deposition (CVD) Chemical vapour deposition has proven to be a promising technique for the production of large area graphene samples. The deposition of graphene is due to the decomposition of a carbon carrier gas at high temperature (∼ 1000 °C) onto the surface of a metal substrate (typically nickel or copper [START_REF] Mattevi | A review of chemical vapour deposition of graphene on copper[END_REF]). The atmosphere in which such a process is usually carried out is CH4 (the carbon carrier) and hydrogen. The work done in Reference [START_REF] Yu | Graphene segregated on Ni surfaces and transferred to insulators[END_REF] revealed that the number of layers of graphene grown on nickel depends on the cooling rate, giving a monolayer only at a certain cooling rate. On the other hand, the growth of graphene on copper is self limiting and the number of layers is independent of the cooling rate [START_REF] Mattevi | A review of chemical vapour deposition of graphene on copper[END_REF]. Copper substrates are generally preferred because of the low C solubility in Cu. The final product is a metal substrate (Cu or Ni) entirely covered with graphene on both sides. The quality of the obtained films is good, but still not comparable to exfoliated graphene flakes.
Despite the good quality and large area, as-grown CVD graphene is not ready to be used for applications in electronics because of the metal substrate on which it is deposited. Thus, a transfer step onto an insulating substrate is necessary. The most widely used technique is the polymer assisted transfer: the graphene on the metal substrate (copper for example) is covered with a polymer film (usually polymethyl-methacrylate, PMMA) which is used as mechanical support [START_REF] Mattevi | A review of chemical vapour deposition of graphene on copper[END_REF]. The graphene layer on the other side is etched away and the Cu/graphene/PMMA stack is left floating on the surface of an etchant solution for copper. After the copper is removed, the floating polymer/graphene film is rinsed several times in de-ionized water to remove the residues of the etching solution and finally it is "fished" with the final substrate. It is then left to dry in air and, after a baking in air used to soften the polymer and enhance the adhesion to the substrate, the substrate/graphene/polymer stack is immersed in acetone to dissolve the polymer. The copper etchants usually involved are iron nitrate [START_REF] Li | Transfer of large-area graphene films for high-performance transparent conductive electrodes[END_REF], iron chloride or ammonium persulfate [START_REF] Liang | Toward clean and crackless transfer of graphene[END_REF]. An interesting technique for industrial application of CVD graphene was proposed by Bae et al. [START_REF] Bae | Roll-to-roll production of 30-inch graphene films for transparent electrodes[END_REF], where a roll-to-roll technique is used to transfer very large areas of graphene onto a PET substrate.
Thermal decomposition of SiC Another graphene synthesis method suited to the production of large area graphene is the thermal decomposition of silicon carbide. Basically, this technique involves a substrate of SiC heated at very high temperatures (1000 °C ≤ T ≤ 1500 °C) in ultra-high vacuum (UHV). The preferred forms of SiC are the hexagonal 4H- and 6H-SiC, and the growth of graphene can be done on the silicon face or the carbon face of the substrate. The differences between growth on the two faces are in the growth rate and the crystalline orientation of the graphene film (better on the Si-face than on the C-face) [START_REF] Hass | The growth and morphology of epitaxial multilayer graphene[END_REF].
Anodic Bonding Anodic Bonding is a welding technique developed in 1969 by Wallis [START_REF] Wallis | Field assisted glass-metal sealing[END_REF] used to seal silicon to glass without the addition of intermediate layers (such as glue). Glasses are formed by an amorphous network of SiO2. The introduction of Na2O and CaO breaks some Si-O-Si bridges and the non-bridging oxygen atoms serve as anions for the Na and Ca cations. It is found that the sodium ion mobility is much higher than the calcium ion mobility [START_REF] Mehrer | Diffusion and ionic conduction in oxide glasses[END_REF] at high temperature, and an applied electric field causes the sodium ions to drift. The anodic bonding technique exploits this sodium ion drift to deplete the glass/material interface of Na+ ions, which leaves an uncompensated space charge of oxygen ions that forms a strong electrostatic bond with the silicon, sealing them together. The same principle has been applied to 2D materials [START_REF] Chen | Anodic bonded 2D semiconductors: from synthesis to device fabrication[END_REF][START_REF] Shukla | Graphene made easy: high quality, large-area samples[END_REF][START_REF] Balan | Anodic bonded graphene[END_REF][START_REF] Gacem | High quality 2D crystals made by anodic bonding: a general technique for layered materials[END_REF], and in particular to graphene. A thin piece of graphite is deposited on a glass substrate. It is then heated at temperatures up to 290 °C and a voltage V_G ≤ 2 kV is applied and left for 10 to 30 minutes. In this time the space charge at the interface is created and, when the sample is cooled down, the top layers are removed with the help of adhesive tape, leaving from one to a few layers of material on the glass surface. With graphene, high quality samples are obtained with typical lateral size ranging from 50 to 200 µm. The anodic bonding technique will be described in detail in Chapter 2.
Graphene characterization
Many techniques are used for characterizing graphene. For device fabrication, the most important things to know about a graphene sample are its effective number of layers and its quality. The most widely used characterization methods, which are the ones used in this thesis, are optical characterization, Raman spectroscopy and atomic force microscopy (AFM).
Optical microscopy
The optical microscope is the first characterization tool which is usually involved in the identification of a graphene sample. Graphene is almost transparent, with ∼ 97% transmittance in the visible range [START_REF] Li | Transfer of large-area graphene films for high-performance transparent conductive electrodes[END_REF]. However, when graphene is deposited on SiO2 it is possible to vary the contrast by changing the light source wavelength and its intensity. The contrast also increases with the number of layers, making a monolayer sample recognizable with respect to bi-layer or multilayer graphene. The same considerations are also valid for other substrates such as glass, which is the substrate used in this thesis for our studies on graphene. The optical contrast of graphene deposited on glass is lower with respect to a sample deposited on SiO2, but it is still sufficient to uniquely recognize a monolayer graphene sample.
Raman spectroscopy Raman spectroscopy has proven to be an essential tool in the characterization of graphene. From the Raman spectra of graphene it is possible to count the number of layers of graphene and check its quality [START_REF] Malard | Raman spectroscopy in graphene[END_REF][START_REF] Ferrari | Raman spectrum of graphene and graphene layers[END_REF]. A characteristic graphene Raman spectrum is shown in Figure 1.5, where the three main peaks of graphene are shown: the G band at ∼ 1580 cm^-1, the G' band (often called 2D band) at ∼ 2680 cm^-1 and the D band at ∼ 1350 cm^-1. The Raman spectrum of graphene contains precious information: the G band comes from the in-plane vibration of sp2 carbon atoms and is the most prominent resonant phenomenon in graphitic materials. The 2D band comes from a mechanism of double resonance between the excited electron and the phonons near K, as explained in Reference [START_REF] Ferrari | Raman spectrum of graphene and graphene layers[END_REF]. This peak is the fingerprint of graphene as in the monolayer it is a single, sharp and symmetric peak with a higher intensity with respect to the G peak. Moreover, the shape of the 2D peak evolves with the number of layers (becoming asymmetrical), as shown in Figure 1.6. Finally, the D peak arises from the breaking of the double resonance and, thus, from defects in graphene. From Raman spectroscopy it is also possible to get some information about the carrier concentration. In fact, as reported in Reference [START_REF] Sarma | Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor[END_REF], the positions of the 2D and G peaks, as well as the ratio of their intensities, change significantly with the concentration of charge carriers in the graphene sheet. Raman spectroscopy is thus a powerful tool for the characterization of a graphene sample because it gives unambiguous information in terms of the number of layers, quality and doping.
Atomic force microscopy (AFM) AFM topography is another essential tool for the study of graphene. First of all, with AFM it is possible to count (or confirm) the number of layers of the studied sample. AFM offers an extremely accurate measure of the thickness of the sample and of the quality of its surface, indicating whether the surface of the graphene sample is contaminated or not (which is useful information for the contact deposition).
Other techniques Some other techniques (not used in this thesis) are widely used for the characterization of graphene. We briefly cite transmission electron microscopy (TEM) and angle-resolved photoemission spectroscopy (ARPES). Electron microscopy has been used to see grain boundaries in CVD graphene transferred on a TEM grid [START_REF] Huang | Grains and grain boundaries in single-layer graphene atomic patchwork quilts[END_REF]. It has been shown that the different grains of graphene are connected together through distortions of the hexagonal lattice, i.e. with pentagon-heptagon pairs or with distorted hexagons. These considerations are important for the understanding of the scattering mechanisms which dominate the electronic transport in graphene. On the other hand, ARPES measurements provide a direct measure of the electronic band structure through the analysis of the electrons emitted from the graphene sheet as a function of the direction of emission and their energy. An example of the measured electronic band structure of a multi-layer graphene can be found in Reference [START_REF] Sprinkle | First direct observation of a nearly ideal graphene band structure[END_REF].
Electronic transport properties
The motion of charge carriers in graphene has some unique properties as a consequence of its electronic band structure, such as the ambipolar field effect and the minimum conductivity. These phenomena can be seen if graphene is put in a field effect transistor configuration. What is usually done is to deposit a monolayer graphene sample on an oxidized Si substrate (which serves also as gate electrode), which is then contacted for electrical measurements, usually with gold electrodes. The variation of the gate voltage induces a change in the carrier concentration in graphene (exactly like in a MOSFET) by moving the Fermi level from the equilibrium condition into the conduction or the valence band. Thus the gate voltage performs an electron (n) or hole (p) doping, depending on its polarity [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF]. Figure 1.7 shows the ambipolar field effect of graphene deposited on SiO2. The minimum conductivity is another interesting phenomenon: in fact, unlike conventional semiconductors, graphene conductivity σ does not vanish at zero field and it shows a minimum value. The value of σ_min is of the order of e^2/h, but it is unclear whether there is a universal prefactor to it [START_REF] Tan | Measurement of scattering rate and minimum conductivity in graphene[END_REF].
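To give a feeling for this scale, e^2/h corresponds to a conductance of roughly 39 µS, i.e. a resistance of about 26 kΩ, as the trivial evaluation below shows:

e = 1.602e-19   # elementary charge (C)
h = 6.626e-34   # Planck constant (J s)

g0 = e**2 / h          # conductance scale e^2/h (S)
print(g0, 1.0 / g0)    # ~3.9e-5 S and ~2.6e4 Ohm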
Scattering mechanisms and mobility
The electronic transport is determined by many factors such as defects in the crystal structure, impurities, interaction with the substrate and phonons. All these factors produce inhomogeneities in the charge carrier distribution in the device for low values of the carrier density, or reduce the mean free path l at high carrier density. From a theoretical point of view, two transport regimes can be distinguished, depending on the magnitude of l and the graphene device size L. When l > L the transport is said to be in the ballistic regime, since the charge carriers can travel all along the device at the Fermi velocity v_F without scattering. Transport is then described by the Landauer formalism as
\[ \sigma_{ball} = \frac{L}{W} \frac{4e^2}{h} \sum_{n=1}^{\infty} T_n \tag{1.4} \]
where the T n are transmission probabilities of all the possible transport modes [START_REF] Peres | The transport properties of graphene[END_REF]. At the Dirac point, the theory predicts that the minimum conductivity has the value
\[ \sigma_{min} = \frac{4e^2}{\pi h} = 4.92 \times 10^{-5}\ \Omega^{-1} \tag{1.5} \]
but experimental observations showed that this value can vary from device to device [START_REF] Tan | Measurement of scattering rate and minimum conductivity in graphene[END_REF]. For the case l < L, the charge carriers experience the elastic and inelastic scattering mechanisms and the system is in the diffusive regime. The semiclassical Boltzmann transport theory comes into play [START_REF] Das | Electronic transport in two-dimensional graphene[END_REF] and the transport at low temperature can be expressed as
\[ \sigma_{sc} = \frac{e^2 v_F \tau}{\hbar} \sqrt{\frac{n}{\pi}} \tag{1.6} \]
where τ depends on the scattering mechanism. The main scattering processes active in graphene are reviewed in Reference [START_REF] Cooper | Experimental review of graphene[END_REF] and briefly reported in the following (an order-of-magnitude sketch based on Equation (1.6) is given after the list):
-Phonon scattering: phonons are an intrinsic source of scattering at finite temperature even when all other scattering processes are suppressed. Longitudinal acoustic (LA) phonons are known to have the highest cross section with respect to the other vibrational modes and they also constitute an elastic source of scattering. The contribution of the phonons to the resistivity of graphene can be recognized by the temperature dependence of the resistivity, which is linear at high temperature and goes as ∼ T^4 at low temperature.
-Coulomb scattering: another important scattering phenomenon is represented by charged impurities (ions) trapped between the graphene sheet and the substrate. Those impurities act as long-range Coulomb scatterers which, for low charge carrier concentration (n ≲ n_i, with n_i the impurity concentration), lead to the formation of puddles of electrons and holes across the sample [START_REF] Adam | A selfconsistent theory for graphene transport[END_REF]. At high carrier density and for randomly distributed impurities the conductivity decreases as n_i increases (σ ∝ 1/n_i).
-Short-range scattering: short range scatterers are responsible for the formation of mid-gap states. Those scattering centres are represented by cracks, edges and vacancies but they all can be modelled as vacancies of effective radius R [START_REF] Peres | The transport properties of graphene[END_REF].
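As announced above, a rough numerical sketch based on Equation (1.6), with purely illustrative values of the scattering time and carrier density and taking l = v_F τ for the mean free path, gives the orders of magnitude involved:

import math

e, hbar = 1.602e-19, 1.0546e-34   # elementary charge (C), reduced Planck constant (J s)
v_F = 1.0e6                       # Fermi velocity (m/s)
tau = 1e-13                       # scattering time (s), illustrative value
n   = 1e12 * 1e4                  # carrier density: 10^12 cm^-2 expressed in m^-2

sigma = e**2 * v_F * tau / hbar * math.sqrt(n / math.pi)   # Eq. (1.6), in siemens
print(1.0 / sigma)   # sheet resistance ~7e2 Ohm per square
print(v_F * tau)     # mean free path l = v_F * tau (m), i.e. ~100 nm here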
Mobility
Mobility in graphene can reach the value of µ = 350000 cm^2 V^-1 s^-1 [START_REF] Banszerus | Ultrahigh mobility graphene devices from chemical vapor deposition on reusable copper[END_REF], which is quite surprising compared to other semiconductors. Having an electronic device with such a mobility would lead to much faster electronics compared to silicon based devices. But in practice reaching such high values of mobility is quite hard. Bolotin et al. measured µ ∼ 170000 cm^2 V^-1 s^-1 in a ballistic transport regime [START_REF] Bolotin | Temperature-dependent transport in suspended graphene[END_REF]. To do that, they fabricated suspended graphene devices by etching the substrate underlying the graphene sample and performing a current annealing to clean the device. Of course this is not of practical use for electronics applications, since graphene needs to be in contact with a substrate and possibly an insulator to ensure the field effect. For this purpose, another type of substrate is often used: hexagonal boron nitride (h-BN). Boron nitride is a wide bandgap semiconductor (5.97 eV) with the same crystal structure as graphene, with B and N atoms occupying the A and B sites of the graphene lattice. Moreover, there is a good lattice matching between graphene and h-BN, which makes it a very good substrate for graphene. h-BN has also been used for packaging graphene for high-frequency applications [START_REF] Wang | BN/Graphene/BN transistors for RF applications[END_REF], reaching a cut-off frequency of several GHz, or for obtaining high mobility graphene devices, reaching the stunning value of 350000 cm^2 V^-1 s^-1 [START_REF] Banszerus | Ultrahigh mobility graphene devices from chemical vapor deposition on reusable copper[END_REF].
Magnetotransport
Magnetotransport in graphene is a really important topic for many reasons: it is an essential tool for measuring the carrier density in graphene through the Hall effect, and it shows very important phenomena related to the 2D nature of graphene, such as the Quantum Hall Effect. The Hall effect is a phenomenon observed when charge carriers travelling inside a material are subjected to an external magnetic field. If a current is passed through a device and a magnetic field is applied perpendicular to the current flow, the charges will experience the Lorentz force in the direction perpendicular to the current flow and to the magnetic field. As a consequence, a certain number of charge carriers will accumulate at the edge of the device and a transverse voltage will appear. For small magnetic fields, this transverse voltage is V_xy = I B R_H, so that the Hall resistance is R_xy = V_xy/I = B R_H, with I the current flowing in the device, B the applied magnetic field and R_H = 1/(ne) the Hall coefficient (e is the electrical charge and n the sheet charge density). So, by plotting the value of R_xy as a function of B it is possible to evaluate the charge density n in the material. In high mobility graphene and at high magnetic fields the allowed energy levels for electrons split, forming the so-called Landau levels [START_REF] Zhang | Experimental observation of the quantum Hall effect and Berry's phase in graphene[END_REF] and resistivity plateaux.
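In practice the sheet carrier density is extracted from the low-field slope of R_xy versus B, and the Hall mobility from the measured sheet resistance. The small sketch below uses invented numbers (a 50 Ω/T Hall slope and a 500 Ω/sq sheet resistance) purely to illustrate the arithmetic:

e = 1.602e-19          # elementary charge (C)

hall_slope = 50.0      # dR_xy/dB (Ohm per tesla), illustrative measured value
R_sheet    = 500.0     # sheet resistance (Ohm per square), illustrative measured value

n_sheet = 1.0 / (e * hall_slope)          # sheet carrier density (m^-2)
mu_hall = 1.0 / (n_sheet * e * R_sheet)   # Hall mobility (m^2 V^-1 s^-1)
print(n_sheet / 1e4)                      # ~1.2e13 cm^-2
print(mu_hall * 1e4)                      # ~1000 cm^2 V^-1 s^-1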
Doping of graphene
We show in Chapter 3 that we used the space charge doping technique to dope graphene with much better doping performance than conventional methods. A review of the commonly used doping techniques for graphene is thus necessary.
Doping in graphene can be achieved in several ways. The first thing to mention is that, since graphene is only one atom thick, it is very sensitive to air contamination and it is very easy to obtain some unwanted doping from the air. Moreover, the transfer techniques used to deposit graphene onto different substrates introduce a certain density of charged impurities between the graphene layer and the substrate [START_REF] Pirkle | The effect of chemical residues on the physical and electrical properties of chemical vapor deposited graphene transferred to SiO2[END_REF] which, again, induce an unwanted doping. These aspects have to be taken into account in our experimental study of graphene.
The study of the properties of graphene requires that the carrier concentration be precisely controlled [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF][START_REF] Sarma | Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor[END_REF][START_REF] Chen | Controlling inelastic light scattering quantum pathways in graphene[END_REF]. In the following, an overview of the doping techniques usually employed to dope graphene is given.
Electrostatic doping
The first technique to be used for the modulation of the carrier concentration in graphene is the field effect [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF]. This technique is extensively used since it allows a fine control of the charge density in a reversible way, on both the electron and the hole side. The most straightforward way to build a transistor with graphene is by depositing it on an Si/SiO2 substrate and depositing metal contacts on it. The Si/SiO2 substrate serves as gate electrode and, depending on the polarity of the voltage applied to the silicon, an electron or hole concentration can be induced in graphene [START_REF] Schwierz | Graphene transistors[END_REF]. This method has the disadvantage of not inducing very high carrier concentrations because of the low capacitance associated with the Si/SiO2/graphene structure (the thickness of the SiO2 layer is ≈ 200 nm). Alternatively, the capacitance associated with the gate electrode can be increased by diminishing the thickness of the dielectric. In this case, top-gated or bottom-gated graphene transistors have been realised with the use of dielectrics like boron nitride [START_REF] Meric | Graphene field-effect transistors based on boron-nitride dielectrics[END_REF], oxides [START_REF] Meric | Current saturation in zero-bandgap, top-gated graphene field-effect transistors[END_REF][START_REF] Pallecchi | Graphene nanotransistors for RF charge detection[END_REF][START_REF] Wang | Modification of electronic properties of top-gated graphene devices by ultrathin yttrium-oxide dielectric layers[END_REF], ionic liquids [START_REF] Wu | Graphene trahertz modulators by ionic liquid gating[END_REF] or polymers [START_REF] Pachoud | Graphene transport at high carrier densities using a polymer electrolyte gate[END_REF].
Electrostatic doping is a reliable way of doping graphene. However, it presents some disadvantages, like the limited maximum reachable carrier density and the fact that graphene transport properties can be strongly affected by the presence of the substrate and possibly of the top dielectric.
Chemical and substitutional doping
Another way to dope graphene is through the addition of foreign atoms, either incorporated into the atomic structure of graphene or placed on its surface, which serve as donors or acceptors. One interesting method has been proposed by Bae et al. [START_REF] Bae | Roll-to-roll production of 30-inch graphene films for transparent electrodes[END_REF].
Here, a roll-to-roll technique is used to transfer a large area (∼ 80 cm in diagonal size) graphene sheet onto a PET substrate while at the same time it is doped with nitric acid (HNO 3 ). The result is a p-doped graphene sheet, but the doping has been found to be unstable in time.
Chemical doping of graphene can also be achieved, for example, with AuCl3 deposited on the graphene surface [START_REF] Park | Doped graphene electrodes for organic solar cells[END_REF][START_REF] Shi | Work function engineering of graphene electrode via chemical doping[END_REF], but the authors found that this technique is not 100% reproducible. Aromatic molecules deposited on the surface of graphene have also been used [START_REF] Shi | Work function engineering of graphene electrode via chemical doping[END_REF][START_REF] Dong | Doping singlelayer graphene with aromatic molecules[END_REF], but in this case as well problems regarding the stability of the doping have been encountered.
Substitutional doping is another interesting way of doping graphene. To perform n-doping, nitrogen can be used during the synthesis of CVD graphene [START_REF] Qu | Nitrogen-Doped Graphene as Efficient Metal-Free Electrocatalyst for Oxygen Reduction in Fuel Cells[END_REF][START_REF] Wei | Synthesis of N-doped graphene by chemical vapor deposition and its electrical properties[END_REF] or during the decomposition of SiC into graphene [START_REF] Velez-Fort | Epitaxial graphene on 4H-SiC(0001) grown under nitrogen flux: evidence of low nitrogen doping and high charge transfer[END_REF], while for p-doping the use of boron has been proposed [START_REF] Panchakarla | Synthesis, structure, and properties of boron-and nitrogen-doped graphene[END_REF]. However, this kind of doping can induce defects in the graphene structure, can be unstable and is not reversible.
Graphene as transparent conducting electrode
Most of the techniques described above have been used to try to transform the graphene sheet into a transparent conducting electrode (TCE). Graphene is very promising for this kind of application because of its transparency and its impressive transport properties. However, undoped (or slightly doped) graphene has a sheet resistance too high for TCE applications, and doping is thus necessary to increase its carrier density and lower R_S. Electrostatic doping performed with conventional techniques, as shown in References [START_REF] Meric | Graphene field-effect transistors based on boron-nitride dielectrics[END_REF][START_REF] Meric | Current saturation in zero-bandgap, top-gated graphene field-effect transistors[END_REF][START_REF] Pallecchi | Graphene nanotransistors for RF charge detection[END_REF][START_REF] Wang | Modification of electronic properties of top-gated graphene devices by ultrathin yttrium-oxide dielectric layers[END_REF][START_REF] Wu | Graphene trahertz modulators by ionic liquid gating[END_REF][START_REF] Pachoud | Graphene transport at high carrier densities using a polymer electrolyte gate[END_REF], is not suitable for TCE applications because it involves the use of other materials deposited on the graphene sheet, which lower its transparency, and may not reach very high doping. Moreover, depositing materials on graphene significantly alters its electronic transport properties.
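The impact of doping on the sheet resistance can be estimated from R_S = 1/(neµ); the carrier densities and mobilities used below are generic illustrative values, not measurements on a particular sample:

e = 1.602e-19   # elementary charge (C)

def sheet_resistance(n_cm2, mu_cm2):
    # Sheet resistance (Ohm/sq) from sheet density (cm^-2) and mobility (cm^2 V^-1 s^-1)
    n  = n_cm2 * 1e4      # to m^-2
    mu = mu_cm2 * 1e-4    # to m^2 V^-1 s^-1
    return 1.0 / (n * e * mu)

print(sheet_resistance(1e12, 2000))   # ~3.1 kOhm/sq: lightly doped graphene
print(sheet_resistance(1e14, 1000))   # ~60 Ohm/sq: heavily doped, in the range useful for TCEs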
Chemical doping has been successfully used to dope graphene for TCE applications. For example, in Reference [START_REF] Bae | Roll-to-roll production of 30-inch graphene films for transparent electrodes[END_REF] CVD graphene doped with nitric acid is used as TCE for display and touchscreen applications, while in References [START_REF] Park | Doped graphene electrodes for organic solar cells[END_REF][START_REF] Shi | Work function engineering of graphene electrode via chemical doping[END_REF] graphene doped with AuCl 3 is used for the fabrication of solar cells. As mentioned before, chemical doping has some important and not negligible issues in terms of time stability of the carrier concentration.
Another attempt to produce TCEs with graphene is by using multi-layer graphene. If more layers of graphene are stacked together the resulting conductivity can be significantly enhanced with respect to the monolayer, while still maintaining the transmittance above 90% in the visible range [START_REF] Park | Doped graphene electrodes for organic solar cells[END_REF]. In Reference [START_REF] Sun | Multilayered graphene used as anode of organic light emitting devices[END_REF] multi-layer graphene for the fabrication of LEDs has been demonstrated. However, this technique is very inconvenient to use because it involves many transfer processes in order to obtain the desired multi-layer graphene structure, with the risk of damaging the whole device at every transfer step.
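A quick estimate of how the transmittance scales with the number of layers, taking the commonly quoted ~2.3% optical absorption per graphene layer, shows why only a few layers can be stacked before the transmittance drops below 90%:

absorption_per_layer = 0.023   # ~2.3 % absorption per graphene layer in the visible range

for n_layers in (1, 2, 4, 8):
    T = (1.0 - absorption_per_layer) ** n_layers
    print(n_layers, round(T * 100, 1))   # ~97.7, 95.5, 91.1 and 83.0 % transmittance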
As we will show later in this thesis, space charge doping can overcome many of these problems and can have thus potential applications in the field of TCEs.
Zinc oxide
Zinc oxide (ZnO) is a wide band-gap semiconductor with a direct gap of ∼ 3.3 eV at 300 K [START_REF] Janotti | Fundamentals of zinc oxide as a semiconductor[END_REF]. It has been a widely studied material for many decades for its electronic, optical and piezoelectric properties. In optoelectronics it has a number of features which overlap with GaN, but ZnO also presents some advantages in terms of availability of high quality bulk crystals.
The space charge doping technique should be readily applicable to a well-defined single crystalline layer deposited onto a substrate, which is the case for graphene. But if our doping technique is versatile, we should be able to apply it to a variety of polycrystalline thin films deposited on glass by standard deposition techniques. This was the reason to choose a large gap semiconductor like ZnO, which is quite different from graphene as far as material and electronic properties are concerned. Moreover we had the possibility to produce high quality samples thanks to the magnetron sputtering facility at the Institut des NanoSciences de Paris. Finally, doped ZnO, like graphene, is a widely studied material in many fields, in particular in the field of Transparent Conducting Electrodes (TCEs) and spintronics, making it easy to compare results obtained by our method with other results in the literature.
Crystal and electronic structure
ZnO can crystallize in three different forms: wurtzite, zinc blende, and rocksalt [START_REF] Ozgur | A comprehensive review of ZnO materials and devices[END_REF], as shown schematically in Figure 1.8. The thermodynamically stable crystal structure in ambient conditions is the wurtzite phase and its corresponding band structure is presented in Figure 1.9. ZnO has been widely studied in the past because it possesses some very interesting properties and it can be a promising and cheap substitute for GaN. Its large and direct band-gap (as shown in Figure 1.9) makes ZnO suitable for optoelectronics applications in the blue/UV region like light-emitting diodes, lasers and photodetectors. However, nowadays its use is limited because of the inability to produce high quality p-type crystals. In fact, while it is relatively easy to grow high quality n-type ZnO, hole-doped crystals have not yet been demonstrated to be reproducible, requiring the use of heterostructures with other materials like Cu2O in order to realize a p-n junction [START_REF] Minami | High-efficiency oxide solar cells with ZnO/Cu 2 O heterojunction fabricated on thermally oxidized Cu 2 O sheets[END_REF]. However, ZnO is widely used for its piezoelectricity [START_REF] Yamamoto | Characterization of ZnO piezoelectric films prepared by rf planar-magnetron sputtering[END_REF] or as a transparent conductor when it is heavily n-doped with foreign atoms like aluminium [START_REF] Jiang | Aluminum-doped zinc oxide films as transparent conductive electrode for organic lightemitting devices[END_REF].
Crystal growth
The most used technique for the growth of high quality ZnO films is RF magnetron sputtering. However many techniques are used to produce high quality samples such as molecular beam epitaxy, pulsed laser deposition, chemical vapour deposition or sol-gel.
RF magnetron sputtering
Sputtering is one of the most used techniques for the production of high quality ZnO thanks to its low cost, low operating temperature and simplicity. The deposition is carried out in a chamber where a mixture of oxygen and argon is injected in a precise proportion. The RF power is then activated and the deposition of ZnO takes place thanks to the Ar plasma generated in the chamber. Ar is used as sputtering enhancing gas while O is used for the reaction. The RF power activates the Ar plasma causing Ar + ions to hit the Zn or ZnO target. The extracted atoms travel towards the substrate facing the target through the O atmosphere and deposit as ZnO. In this thesis, a Zn target was used.
The choice of the substrate is important for the final orientation of the crystal and usually substrates such as diamond, glass, GaAs and Si are used [START_REF] Ozgur | A comprehensive review of ZnO materials and devices[END_REF], while the temperature can be varied from room temperature up to 400 • C. The choice of the O/Ar ratio is very important. In fact, the oxygen content in the deposition chamber determines the stoichiometry of the deposited film [START_REF] Water | Physical and structural properties of ZnO sputtered films[END_REF] and depending on the value of the O/Ar ratio the ZnO film presents more or less oxygen vacancies which are important for its electronic transport properties.
Other deposition techniques
A review of the most used deposition methods for ZnO can be found in Reference [START_REF] Ozgur | A comprehensive review of ZnO materials and devices[END_REF]. Here we give a brief description of some of these.
Molecular beam epitaxy
The advantage of using molecular beam epitaxy (MBE) is in the extremely precise control in situ of the growth parameters. Pure Zn is evaporated towards the substrate while oxygen flows on its surface, thus maximizing the oxidation of Zn. The deposition is controlled with reflection high-energy electron diffraction (RHEED), which gives a feedback for the adjustment of the deposition parameters. While it permits the deposition of high quality films, MBE is an expensive method and not as straightforward as RF magnetron sputtering.
Pulsed laser deposition
The principle of the pulsed laser deposition method is to illuminate a target material with a high intensity pulsed laser in order to evaporate the material and create a jet of particles directed normal to the surface. The pulsed laser is chosen such that the stoichiometry of the evaporated material is preserved. A substrate is placed in front of the target so that the evaporated material can condense on its surface. In the case of ZnO, a zinc oxide target obtained from pressed ZnO powder is used and temperatures between 200 and 800 °C are involved.
Chemical vapour deposition Chemical vapour deposition (CVD) is an interesting deposition technique which allows the production of thin films on a large scale. Generally speaking, the deposition takes place as the consequence of a chemical reaction. The precursors (in this case Zn and O) are transported into the reaction chamber by carrier gases, while a temperature gradient drives the chemical reaction which leads to the deposition of ZnO on the substrate.
Sol-gel Another widely used technique for the production of ZnO thin films is spin-coating a solution containing ZnO onto a substrate. It is then dried with the help of heat and a thin film of ZnO is obtained [START_REF] Natsume | Zinc oxide films prepared by sol-gel spincoating[END_REF][START_REF] Znaidi | Sol-gel-deposited ZnO thin films: A review[END_REF]. A heat treatment then gives crystallinity to the film.
Characterization of ZnO
Zinc oxide thin-film characterization is done mainly with two techniques: X-ray diffraction and atomic force microscopy. The first gives information about the size of the grains forming the film and their orientation, while AFM is used to scan the surface in order to obtain a topography which also gives information about the grain size. In this thesis the two techniques are used to characterize our ZnO samples.
X-ray diffraction spectrum of ZnO
The X-ray diffraction pattern from ZnO powder is shown in Figure 1.10. We can see that there are several peaks corresponding to the different crystalline orientations. In the case of diffraction from thin films, the XRD spectrum is usually acquired in a grazing geometry in order to maximize the X-ray path within the film, and the detector is swept over a range of angles. A typical XRD spectrum from a ZnO thin film is shown in Figure 1.11. As is commonly observed, the film presents a preferred orientation of the grains along the (00l) direction. As described in Chapter 2, the position and width of the peaks reveal the average size of the grains forming the film.
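The grain-size estimate referred to here is typically obtained from the Scherrer formula, D = Kλ/(β cos θ). The sketch below assumes a Cu Kα source and an illustrative, instrument-corrected peak width, so the resulting number is only indicative:

import math

K    = 0.9                      # Scherrer shape factor (dimensionless, assumed)
lam  = 0.15406                  # X-ray wavelength (nm), Cu K-alpha assumed
fwhm = math.radians(0.5)        # peak FWHM in radians, illustrative value
two_theta = math.radians(34.4)  # position of the ZnO (002) reflection

D = K * lam / (fwhm * math.cos(two_theta / 2.0))   # average grain size (nm)
print(D)                                           # ~17 nm for these values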
AFM
An AFM scan of the ZnO surface can give useful information about the grain size, which can be compared to the results obtained from the XRD data. An example of this kind of topography is shown in Figure 1.12.
Doping of ZnO
For electronics applications doping of ZnO is an essential issue. It is relatively easy to dope ZnO with electrons, and several techniques are used both for device applications and to create very highly doped surface layers. On the other hand, p-type doping is difficult to obtain in a reproducible way.
In the following, a description of the most used methods for the n-doping of ZnO and a brief mention of the attempts at p-doping will be given.
n-doping
Intrinsic doping ZnO is a natural n-type semiconductor. The n-type conductivity originates from the non-stoichiometry of the material due to the presence of oxygen vacancies, which behave like donor-type impurities [START_REF] Chen | Structural, electrical, and optical properties of transparent conductive oxide ZnO:Al films prepared by dc magnetron reactive sputtering[END_REF][START_REF] Fortunato | Wide-bandgap high-mobility ZnO thin-film transistors produced at room temperature[END_REF][START_REF] Ziegler | Electrical properties and non-stoichiometry in ZnO single crystals[END_REF]. Reference [START_REF] Simpson | Defect clusters in zinc oxide[END_REF] shows that two types of defects associated with oxygen vacancies exist in ZnO, one identified with the oxygen vacancy itself and the other with clusters of oxygen vacancies. This work indicates that these defects act as donors in ZnO. However, some authors argue that, these being deep defects in the band-gap of ZnO, they would instead act as compensating agents for any p-doping [START_REF] Janotti | Oxygen vacancies in ZnO[END_REF]. Experimental evidence, however, overwhelmingly shows that oxygen vacancies in ZnO are effectively responsible for the n-type conductivity. The ZnO samples studied in this thesis are intentionally fabricated with a degree of non-stoichiometry, being deficient in oxygen, in order to induce an initial carrier density. As will be shown later in this manuscript, the evidence we found in our samples is in agreement with the fact that the oxygen vacancies are responsible for the n-doping in ZnO.
Chemical doping Aluminium is known to efficiently n-dope ZnO. The main use of aluminium-doped ZnO (Al:ZnO) is for transparent conducting electrodes, and for this reason it is often deposited on glass by sputtering a ZnO target with the addition of Al2O3. The resistivity of the sputtered film can be significantly lowered with respect to the non-doped crystal, down to values around 10^-4 Ω·cm [START_REF] Minami | Highly conductive and transparent aluminumdDoped zinc oxide thin films prepared by RF magnetron sputtering[END_REF], and applications of Al:ZnO have been demonstrated in devices like organic light-emitting diodes [START_REF] Jiang | Aluminum-doped zinc oxide films as transparent conductive electrode for organic lightemitting devices[END_REF] or liquid crystal displays [START_REF] Oh | Transparent conductive Al-doped ZnO films for liquid crystal displays[END_REF]. However, the main disadvantage of this technique is that the transparency of the thin film is strongly reduced (down to 80%), which is at the limit for the use of ZnO as a transparent conducting electrode, and of course the crystal structure of ZnO is also altered by the Al atoms.
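For reference, a resistivity of this order corresponds to a very low sheet resistance for film thicknesses comparable to those considered in this thesis, since R_S = ρ/t; the thickness used below is simply the upper bound quoted earlier:

rho = 1e-4 * 1e-2    # resistivity: 10^-4 Ohm*cm converted to Ohm*m
t   = 40e-9          # film thickness (m), illustrative value

R_sheet = rho / t    # sheet resistance (Ohm per square)
print(R_sheet)       # 25 Ohm/sq for these values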
Electrostatic doping ZnO surface layers have been investigated for different purposes going from transparent field effect transistors to the study of two-dimensional accumulation layers (quantum wells). The use of ZnO as the active layer of a transistor has been demonstrated by using different transparent layers of materials as electrodes and gate dielectric [START_REF] Fortunato | Fully transparent ZnO thin-film transistor produced at room temperature[END_REF][START_REF] Carcia | Transparent ZnO thin-film transistor fabricated by rf magnetron sputtering[END_REF][START_REF] Fortunato | Wide-bandgap high-mobility ZnO thin-film transistors produced at room temperature[END_REF][START_REF] Hoffman | ZnO-based transparent thin-film transistors[END_REF]. For many applications, the MOSFET configuration has the disadvantage of not achieving a very high carrier concentration in the accumulation layer and of significantly reducing the transparency of the film through the use of many layers which serve as dielectric and contacts. More exotic techniques have also been used to achieve extremely high doping electrostatically, like the use of ionic liquids [START_REF] Yuan | High-density carrier accumulation in ZnO field-effect transistors gated by electric double layers of ionic liquids[END_REF][START_REF] Yuan | Electrostatic and electrochemical nature of lquid-gated electric-double-layer transistors based on oxide semiconductors[END_REF]. These liquids are characterized by the presence of ions which are mobile in the presence of an electric field. If the ionic liquid is placed on top of ZnO, the ions can be accumulated at the surface of ZnO. The electric field associated with the ions is very high and electron concentrations up to 10^14 cm^-2 have been measured in the associated accumulation layer. However, this technique raises some questions about possible intercalation of the ions into the ZnO structure, and also poses a practical problem at lower temperatures because of the freezing of the liquid which occurs upon lowering the temperature of the system.
Other techniques As said before, ZnO accumulation layers are extremely interesting from a physical point of view because of the single-valley conduction band of ZnO [START_REF] Goldenblum | Weak localization effects in ZnO surface wells[END_REF]. We mention some efficient techniques for producing strong accumulation layers on the surface of ZnO single crystals, like exposure to atomic hydrogen [START_REF] Shapira | Anomalous effect of UHV-component-generated atomic hydrogen on the surface properties of ZnO[END_REF], low-energy hydrogen ion implantation [START_REF] Yaron | Quantized electron accumulation layers on ZnO surfaces produced by low-energy hydrogen-ion implantation[END_REF] and exposure to thermalized He+ ions [START_REF] Goldstein | Extreme accumulation layers on ZnO surfaces due to He+ ions[END_REF]. Surface electron densities up to 5 × 10^14 cm^-2 have been achieved. However, these techniques are not reversible, thus posing severe limitations on their use in the study of the properties of ZnO quantum wells.
p-doping
It has been mentioned that p-type doping of ZnO is a difficult task. This can be attributed to several factors, like compensation of the dopants by native defects or low solubility of the impurity in ZnO. It is known that group-I elements like Li, Na and K, Cu, Ag, Zn vacancies and group-V elements such as N, P, and As act as acceptors in ZnO. However, theoretical calculations indicate that these are deep acceptors which thus contribute only slightly to the p-type conductivity [START_REF] Ozgur | A comprehensive review of ZnO materials and devices[END_REF]. Nitrogen appears to be a good candidate for the hole doping of ZnO, but its practical application has not yet been demonstrated.
Magneto-transport properties
The transport properties of ZnO have been widely studied under the effect of a magnetic field. The single, parabolic conduction band of ZnO (see Figure 1.9) allows an easy interpretation of the Hall effect for the determination of the carrier density and mobility. Especially in strong accumulation layers, quantum interference phenomena have been found in ZnO and studied under an applied magnetic field, as in the works reported in References [START_REF] Goldenblum | Weak localization effects in ZnO surface wells[END_REF][START_REF] Lü | Spin relaxation in n -type ZnO quantum wells[END_REF][START_REF] Reuss | Magnetoresistance in epitaxially grown degenerate ZnO thin films[END_REF][START_REF] Andrearczyk | Spin-related magnetoresistance of n-type ZnO:Al and Zn1 -xMn x O:Al thin film[END_REF], to cite a few.
ZnO as TCE
One of the most promising applications of ZnO is as a transparent conducting electrode. The ability to easily produce high quality thin films on a transparent substrate (glass or quartz for example) makes ZnO a very attractive material in this field. While its transparency is high thanks to its large band gap, its conductivity, as in the case of graphene, needs to be increased by doping in order to lower the resistivity of the film. As mentioned before, Al doping is a commonly used technique for the fabrication of conductive ZnO for display applications [START_REF] Oh | Transparent conductive Al-doped ZnO films for liquid crystal displays[END_REF] or light emitting diodes [START_REF] Jiang | Aluminum-doped zinc oxide films as transparent conductive electrode for organic lightemitting devices[END_REF]. However, as said above, Al doping has some drawbacks, especially in terms of the transparency of the doped thin film, which is significantly reduced.
Chapter 2 Experimental
This chapter describes in detail the techniques used for the fabrication of the graphene and ZnO samples, their characterization and the final device realization. The fabrication of graphene samples has been carried out with two techniques: anodic bonding and the transfer of (commercially available) CVD graphene onto a glass substrate. The zinc oxide samples have been realized with the magnetron sputtering method. After production, the samples are characterized with an optical microscope, atomic force microscopy (AFM), Raman spectroscopy and X-ray diffraction (XRD). The samples are then contacted for electrical characterization by evaporation of metals and shaped with different techniques. Finally, a description of the low-temperature magneto-transport system is given.
Samples fabrication
In this section we introduce the different techniques used for the fabrication of the graphene and ZnO samples. For the production of graphene samples we used the anodic bonding method and the transfer of CVD graphene onto a glass substrate and for ZnO the radio-frequency magnetron sputtering.
Glass substrates involved
In this thesis two kinds of glasses have been used as substrates for all experiments concerning both the sample fabrication and space charge doping: the first is SCHOTT D263T, a borosilicate glass, and the second is soda-lime glass, also from SCHOTT. There are differences in composition between the two types of glasses which affect the properties of the samples deposited on them (as we will see in Chapter 3), the most important being the maximum carrier concentration reachable by space charge doping. The latter is affected by an important parameter of the glass, which is also the most relevant for our purpose, namely the sodium ion content. In the D263T the Na + concentration is ∼ 1.4 × 10 18 cm -3 while in the soda-lime it is ∼ 3.4 × 10 18 cm -3 . As will be confirmed by the experiments, this difference in concentration gives the soda-lime glass a higher doping capability with respect to the borosilicate glass.
Graphene
Anodic Bonding
Anodic bonding is known in microelectronics to be a reliable method for sticking two surfaces together without any intermediate layer like glue, provided that they are microscopically clean and polished. The original technique was used to stick silicon to a glass substrate [START_REF] Wallis | Field assisted glass-metal sealing[END_REF][START_REF] Anthony | Anodic bonding of imperfect surfaces[END_REF] or to stick two silicon surfaces with a glass layer in between. As mentioned in Chapter 1, anodic bonding can be used for the fabrication of two-dimensional materials, including graphene, in an original and reliable way [START_REF] Chen | Anodic bonded 2D semiconductors: from synthesis to device fabrication[END_REF][START_REF] Shukla | Graphene made easy: high quality, large-area samples[END_REF][START_REF] Balan | Anodic bonded graphene[END_REF][START_REF] Gacem | High quality 2D crystals made by anodic bonding: a general technique for layered materials[END_REF][START_REF] Moldt | High-Yield production and transfer of graphene flakes obtained by anodic bonding[END_REF].
Anodic bonding is based on the fact that glasses are ionic conductors at relatively high temperatures. They are formed by an amorphous network of SiO 2 and the introduction of Na 2 O and CaO into the system breaks some Si-O-Si bridges. Non-bridging oxygen atoms serve as anions for the Na + and Ca 2+ ions. At the bonding temperature (60 • C < T < 300 • C) Na ions become mobile under an applied electric field (the mobility of Na + being much higher than the mobility of Ca 2+ [START_REF] Mehrer | Diffusion and ionic conduction in oxide glasses[END_REF]) and it is possible to deplete the surface of sodium ions. A space charge of uncompensated oxygen ions (which are not mobile) is thus left at the surface of the glass, which creates an electrostatic field and an attraction between glass and sample. In the case of the original technique, the intimate contact generated by the strong electrostatic field between glass and silicon ultimately leads to chemical bonding between Si and O at the interface; in the case of 2D materials, as described below and for the parameters used, the sticking process is purely electrostatic.
The schematics of our anodic bonding machine are depicted in Figure 2.1. It is composed of a controller (for the regulation of the heat and the voltage), an apparatus formed by a hot plate, which also serves as anode for the system, and an anvil (the cathode) at the ground potential which can be substituted by a micro-manipulator tip. Between the anode and the cathode is placed the glass substrate with the precursor of the nanomaterial to be fabricated. The choice of the precursor is a crucial step in the sample fabrication. The precursor is usually a lamellar bulk material from which thin fresh flakes of 1 -2 mm in lateral size are cleaved with the help of adhesive tape. The precursors are placed on the surface of a glass substrate (borosilicate or soda-lime glass) which has been previously cut to ∼ 1 cm 2 and cleaned in an ultrasound bath at 35 • C, first in acetone and then in ethanol, 5 minutes each, and dried with nitrogen. The glass/precursor system is then placed on the hot plate, the anvil (or the micromanipulator tip) is put on top in contact with the precursor and the voltage is applied between the anvil (tip) and the hot plate. As highlighted in Figure 2.1 this causes Na + ions to drift towards the anode, leaving a space charge of O 2-at the interface between the glass and the precursor. After some time, the space charge reaches its optimal magnitude such that there is a sufficient image charge in the precursor. The voltage can then be removed with the precursor electrostatically stuck to the glass. With the help of adhesive tape, the top layers of the precursor are peeled off, leaving one to a few layers of material bonded to the surface of the glass. This technique has a high throughput compared to the purely adhesive tape technique [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF] and is capable of giving relatively large samples (of the order of ∼ 100 µm for graphene) of excellent quality.
Figure 2.1: Schematic of the anodic bonding setup, showing the hot plate (anode), the applied voltage V G and the Na + and O 2-charges in the glass beneath the precursor.
Two types of glasses are used in this thesis for the fabrication of anodic bonded graphene: borosilicate glass purchased from SCHOTT and standard soda-lime glass, both optically polished. The ionic conductivity of the soda-lime glass (Na + ion concentration 3.4 × 10 18 cm -3 ) is higher than that of borosilicate glass (Na + ion concentration 1.4 × 10 18 cm -3 ) and this is also reflected in the magnitude of the parameters used for anodic bonding.
The parameters for the fabrication of 2D samples with the anodic bonding method vary from material to material. The sample is heated to a temperature of 140 -250 • C while applying a voltage of 200 -1500 V for 1 -15 minutes. Several samples have been fabricated during this thesis and the optimum parameters for the fabrication of graphene have been found, both for borosilicate and soda-lime glass and using either the anvil or the micromanipulator tip. The details will be given in Chapter 3.
Transfer of CVD graphene
As mentioned in Chapter 1, chemical vapour deposition (CVD) is a widely used technique for the production of large surfaces of graphene. The graphene sheet is fabricated by heating a Cu foil at ∼ 1000 • C in the presence of carbon-containing gases [START_REF] Mattevi | A review of chemical vapour deposition of graphene on copper[END_REF]. The Cu foil acts as a catalyst for carbon, which is deposited on its surface and crystallizes into graphene. The principal advantage of this technique is that the area of the graphene sample is only limited by the size of the copper foil. Furthermore, the quality of the graphene film produced is compatible with electronic applications, though it is still not comparable with the quality of anodic bonded graphene or graphene produced with the adhesive tape method.
As-fabricated CVD graphene cannot be used for electronics because it is grown on a conducting substrate (the Cu foil). A transfer process of graphene onto an insulating substrate is thus necessary. The polymer assisted transfer is the most widely used technique in research [START_REF] Suk | Transfer of CVD-grown monolayer graphene onto arbitrary substrates[END_REF] and is a delicate step for the realization of electronic devices with CVD graphene. In the following, the transfer process of CVD graphene used in this thesis will be described in detail.
Figure 2.2 (continued): The floating graphene/PMMA stack is rinsed several times in DI water and then "fished" with the glass substrate. (e) The glass/graphene/PMMA system is left to dry in air and, after a baking to enhance adhesion, the PMMA layer is dissolved in acetone. (f) Finally, graphene has been transferred onto the glass substrate.
-A square of ∼ 1 cm 2 is cut from the copper foil which is covered with graphene on both sides.
-The Cu square is fixed to a kapton support with the help of adhesive tape. This step is necessary to ensure mechanical support to the sample in the next step of spin coating.
-Polymethyl methacrylate (PMMA) is spin-coated on the sample at a speed of 1000 rpm for 1 minute (Figure 2.2a) in order to create a thin film of polymer on the surface of the graphene layer. It will be used as mechanical support. The PMMA is left to dry in air at room temperature overnight to let the solvent evaporate.
-The kapton support is removed and an oxygen plasma etching is performed on the back side of the copper foil (the one not covered by the PMMA) to remove the graphene from that side of the copper, as shown schematically in Figure 2.2b. This step is very important to ensure the success of the copper etching step. The O 2 etching is done at 60 W for 150 seconds.
-A solution of ∼ 1 g of Ammonium Persulfate (NH 4 ) 2 S 2 O 8 (APS) in 85 mL of de-ionized water is prepared (concentration of ∼ 0.01 g/mL). APS is an oxidizing agent for the copper [START_REF] Jo | Etching solution for etching Cu and Cu/Ti metal layer of liquid crystal display device and method of fabricating the same[END_REF]. A higher concentration should not be used, in order to avoid bubbles in the solution which would result in holes in the deposited graphene. With this concentration, the etching process takes about four hours. The sample must be placed on the surface of the solution with the PMMA face of the stack out of the liquid. At the end of the process, the PMMA-graphene stack should float on the surface of the solution.
-After the copper is completely etched away, the sample must be rinsed in de-ionized water. The sample is then transferred into a watch glass (a circular, slightly convex piece of glass) where de-ionized (DI) water is added and then pumped away several times (always taking care to let the sample float on some water) in order to remove any trace of the etchant. The sample is then placed in a DI water bath. The process is then repeated.
-A glass substrate (borosilicate or soda-lime) is cut (with an area at least 50% bigger than the PMMA area) and cleaned in an ultrasound bath in acetone (4 minutes, 35 • C).
-The floating graphene/PMMA stack is caught on the glass substrate and left to dry in air for 30 minutes. A baking at 150 • C for 5 to 10 minutes is then performed to enhance the adhesion of graphene to the glass surface.
-An acetone bath is warmed to ∼ 50 • C and the sample is immersed in it for 5 minutes to dissolve the PMMA layer. Then it is put in ethanol for 3 minutes to remove the residues. This process is repeated a second time. Nitrogen gas is blown on the sample at the end of the process to dry it.
-A second baking is performed at 150 • C for 5 minutes to further enhance the adhesion of graphene to the substrate.
After all these steps the sample is ready for characterization and contact deposition with the techniques described later in this chapter.
Zinc oxide
In this thesis we fabricated thin film zinc oxide on glass with the radio-frequency (RF) magnetron sputtering technique. RF magnetron sputtering is a straightforward, low cost technique with which it is possible to obtain high quality ZnO films through a plasma assisted deposition method [START_REF] Ozgur | A comprehensive review of ZnO materials and devices[END_REF][START_REF] Fortunato | Fully transparent ZnO thin-film transistor produced at room temperature[END_REF][START_REF] Carcia | Transparent ZnO thin-film transistor fabricated by rf magnetron sputtering[END_REF][START_REF] Fortunato | Wide-bandgap high-mobility ZnO thin-film transistors produced at room temperature[END_REF].
The apparatus of the RF sputtering system is illustrated in Figure 2.3. It comprises a zinc or zinc oxide target in front of which, at a certain distance, is placed the substrate on which ZnO is to be deposited. The substrate can optionally be heated. The RF power is applied to activate the argon plasma (the pink shadow in Figure 2.3), causing Ar ions to hit the zinc target. Zn atoms are knocked off and travel towards the glass substrate. On their path they oxidize because of the oxygen atmosphere and deposit on the substrate as zinc oxide. The properties of the final ZnO film, such as the stoichiometry, the crystallinity and the thickness, are determined by the growth parameters: RF power, Ar and O partial pressure, substrate temperature, distance of the substrate from the target and deposition time [START_REF] Kim | Magnetron sputtering growth and characterization of high quality single crystal ZnO thin films on sapphire substrates[END_REF]. The O/Ar ratio in the chamber is an important parameter for the determination of the stoichiometry of the deposited zinc oxide film. In particular, oxygen vacancies create mid-gap states in ZnO and contribute to the intrinsic n-type conductivity [START_REF] Simpson | Defect clusters in zinc oxide[END_REF]. The substrate temperature can affect the deposition rate and the quality of the sample, in terms of grain size and crystallinity, though good results can be obtained also at room temperature.
Characterization methods
Several techniques have been used to characterize the samples fabricated during this thesis. They are: optical characterization with an optical microscope, atomic force microscopy (AFM), Raman spectroscopy and X-ray diffraction (XRD).
Optical microscope
The optical microscope is a useful tool for a first characterization of the graphene and ZnO samples. The images of the samples are recorded with a Leica DM2500 and a CCD camera with 5X, 10X, 50X and 100X objectives in the bright field imaging mode. For the case of anodic bonded graphene, the optical microscope characterization is useful to determine the number of layers of the sample through the optical contrast while for CVD graphene and zinc oxide it is useful to determine the quality of the surface of the sample.
Atomic Force Microscopy
Atomic force microscopy (AFM) was invented in 1986 [START_REF] Binnig | Atomic force microscope[END_REF] as an evolution of the Scanning Tunneling Microscope (STM). It is a tool which is able to scan the surface of a sample to obtain a topography at extremely high resolution.
Contrary to STM, which measures the tunneling current between the sample and the scanning tip, AFM measures the forces at the atomic scale and thus makes it possible to scan both conductive and insulating samples [START_REF] Meyer | Atomic force microscopy[END_REF]. The basic principle is described in the following. The AFM is based on the interaction forces between the surface of the sample and a probing tip attached to a cantilever-like spring. In response to those forces the cantilever is deflected. The image of the surface of the sample is taken by recording the deflection, which is shown as a z-displacement, as a function of the coordinates in the x -y plane. Displacements from 1 µm to 0.1 Å can be measured. The typical order of magnitude of the measured forces ranges from 10 -6 to 10 -11 N, which makes a non-destructive scan of the surface possible. Three operating modes are distinguished: the contact mode, the non-contact mode and the tapping mode, also called AC mode. In the contact mode, the sample-tip separation is of the order of the Å and the tip is sensitive to the ionic forces, which makes it possible to reach atomic resolution under certain conditions. In the non-contact mode the tip operates at a higher distance from the sample, 10 -100 nm. For these two modes, the force between the tip and the surface of the sample causes the cantilever to bend until static equilibrium is reached. The scanning can be done either by maintaining the sample-tip distance constant and regulating the sample height through a feedback loop, or by recording the displacement of the cantilever in response to the interaction with the surface. Finally, in the AC mode the tip-sample distance is in between the distances of the two previous modes and the tip vibrates at a frequency close to its resonance frequency. A distance-frequency relation governs the system: a repulsive force will increase the resonant frequency of the cantilever, while an attractive force will decrease it. The feedback loop can keep the oscillation amplitude constant or it can maintain the frequency constant during the scan [START_REF] Meyer | Atomic force microscopy[END_REF]; in both cases a measure of the height as a function of the position can be done.
The three operating modes have some advantages and disadvantages when applied to 2D samples and thin films. With the contact mode, high speed scans and atomic resolution are possible; also, it is easier to scan rough samples. However, strong lateral forces are involved due to the proximity of the tip to the sample. In the non-contact mode the lateral forces involved are much smaller, thus avoiding distortions of the sample surface. However, the relatively high sample-tip separation doesn't allow high resolution imaging. Moreover, the contact and non-contact modes are usually performed in ultrahigh vacuum to reach atomic resolution, a condition not achievable in our setup. The AC mode is thus the best choice for the measurements on graphene and ZnO thin films. It involves small lateral forces on the sample together with high resolution and the possibility to perform the scan in ambient conditions. The scan speed is however slower compared to the contact mode.
The setup used in this thesis for AFM measurements is a Scanning Probe Microscope SmartSPM-1000 instrument (AIST-NT) and the tapping mode was used in all the performed scans. The AFM images of graphene and ZnO samples will be shown in Chapters 3 and 4.
Raman Spectroscopy
Raman spectroscopy is a powerful and non invasive tool for the characterization of materials. It is based on the Raman effect, a phenomenon of light scattering discovered in 1928 by C. V. Raman and K. S. Krishnan, which led C. V. Raman to be awarded the Nobel prize in 1930 [START_REF] Raman | A new radiation[END_REF]. The origin of this phenomenon is the interaction of an incident light beam with the phonons in a solid or with molecular vibrations in molecules, producing inelastic scattering of light. When light hits a material, most of the photons experience elastic scattering, called Rayleigh scattering, and the intensity of this scattered portion of light is ∼ 10 -3 times the intensity of the incident light. The portion of light which undergoes Raman scattering is even smaller, ∼ 10 -6 times the intensity of the incident light. In the following, the principles of Raman scattering will be explained.
The principles of Raman scattering
The explanation of the Raman effect can be understood in terms of the polarizability of molecules or the susceptibility of crystals. Let us consider a two-atom system. When an electromagnetic (EM) wave interacts with such a system, the electronic cloud and the nuclei will react to the incident electric field, thus inducing a change in the dipole moment
P_D(ω) = α_0 E(ω) (2.1)
where E(ω) is the incident radiation, α_0 is the polarizability and P_D(ω) is the induced dipole moment, which acts as the source of the emitted EM wave. α_0 is a measure of the deformation induced in the system in the presence of an external electric field. In a solid crystal, the susceptibility tensor takes the place of α_0. Suppose now that the molecule is vibrating with frequency Ω and the distance between the two atoms changes periodically. Then the polarizability will be modulated and Eq. 2.1 takes the form
P_D(ω) = (α_0 + α_1 cos(Ωt)) E_0 cos(ωt) (2.2)
which, with the application of trigonometric rules, becomes
P_D(ω) = α_0 E_0 cos(ωt) + (α_1 E_0 /2) [cos((ω + Ω)t) + cos((ω -Ω)t)] (2.3)
From Eq. 2.3 we see that the scattered light oscillates at the frequency of the incident light ω and at the sideband frequencies ω ± Ω. The sideband frequencies are called Raman Stokes (ω -Ω) and Raman anti-Stokes (ω + Ω) frequencies.
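As a minimal numerical check of this classical picture, the short Python sketch below builds the modulated dipole of Eq. 2.2 and takes its Fourier transform: the spectrum shows exactly three lines, at ω (Rayleigh) and at ω ± Ω (Stokes and anti-Stokes side bands). All frequencies and amplitudes are arbitrary illustrative numbers, not physical values.

import numpy as np

# Minimal numerical check of Eqs. 2.2-2.3: a dipole whose polarizability is modulated
# at the vibration frequency Omega radiates at omega (Rayleigh) and at omega +/- Omega
# (Raman side bands). All numbers are arbitrary illustrative values.
omega = 2.0 * np.pi * 100.0       # "optical" frequency (arbitrary units)
Omega = 2.0 * np.pi * 10.0        # vibration frequency (arbitrary units)
alpha0, alpha1, E0 = 1.0, 0.2, 1.0

t = np.linspace(0.0, 20.0, 2**14, endpoint=False)
P = (alpha0 + alpha1 * np.cos(Omega * t)) * E0 * np.cos(omega * t)

spectrum = np.abs(np.fft.rfft(P))
ang_freq = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])

# the three largest spectral peaks sit at omega - Omega, omega and omega + Omega
peaks = np.sort(ang_freq[np.argsort(spectrum)[-3:]])
print(peaks)
print(omega - Omega, omega, omega + Omega)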
In a quantum-mechanical picture, the Raman scattering comes from an exchange of energy between an incident photon and a vibrating mode of the crystal, i.e. a phonon. The photon is absorbed by an electron which is in a ground energy state and is excited into a virtual state (Figure 2.5). When it recombines it can either return to the ground energy state and emit light at the same frequency as the incident light (Rayleigh scattering) or recombine at a different energy level and emit light at a different frequency (Raman scattering). Since energy and momentum must be conserved, the generation or absorption of an additional quasi-particle (a phonon) is required. If the vibration of a molecule (or a phonon mode in a crystal) is such that there is no change in polarizability, that vibration is said to be Raman inactive. Thus the condition for a vibration mode to be Raman active is
(dα/dQ)_0 ≠ 0 (2.4)
with dQ the displacement from the equilibrium position.
Raman microscopy in 2D materials
In the case of 2D materials and thin films Raman spectroscopy represents a fast and non destructive tool for the study of the properties of the materials.
From the frequencies and relative intensities of the Raman peaks it is possible to characterize the quality, doping level and strain of the sample. Micro-Raman spectroscopy is the term used to indicate that the analysed area is of the order of the µm 2 , which is achieved by focusing the laser with the help of a microscope objective. The Raman measurements done in this thesis were performed using our Xplora Raman spectrometer (Horiba Jobin-Yvon) in back scattering geometry in ambient conditions. Figure 2.7 shows the layout of the internal optical system for the detection of the Raman signal.
The main parts of the micro-Raman system are shown: the laser, the microscope, the confocal system and the spectrograph. The available laser wavelengths are 532 and 638 nm, but for our measurements we used only 532 nm. The power of the 532 nm laser on the sample is 12 mW, but the intensity of the radiation is often reduced to avoid heat damage on the samples. Usually, the measurements on graphene and ZnO are carried out with the 50% or 25% filters, meaning a laser power on the sample of 6 and 3 mW, respectively. The microscope part is a crucial component of the system since it is responsible for the spatial resolution of the measurement. Our system is equipped with three objectives: 10X, 50X and 100X. The 100X objective has been used for all the measurements on 2D materials and thin films in this thesis because, thanks to its numerical aperture (NA) of 0.9, it is the objective with the best axial resolution compared with the 10X and 50X. The laser spot of the 100X objective is 0.72 µm with the 532 nm laser. The confocal system is composed of a slit, a hole and various lenses. The slit is responsible for the spectral resolution and the hole for the axial resolution. Finally, the spectrograph is composed of a series of lenses and a grating, which separates the different wavelengths of the Raman signal. The grating resolution can be changed, determining the number of points in the Raman spectrum. The best resolution that we can get with our system is ∼ 1.2 cm -1 . However, higher resolution means longer acquisition time because fewer photons hit the CCD camera, resulting in a weaker signal.
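As a quick consistency check of the quoted spot size, one can assume that the spot diameter is given by the diffraction (Rayleigh) limit d ≈ 1.22 λ/NA; this definition is an assumption, since the exact value depends on the convention used and on how well the objective back aperture is filled.

# Quick check of the quoted laser spot size, assuming the diffraction-limited
# (Rayleigh) estimate d = 1.22 * lambda / NA for the spot diameter.
wavelength_nm = 532.0
NA = 0.9                              # numerical aperture of the 100X objective
spot_nm = 1.22 * wavelength_nm / NA
print(f"Diffraction-limited spot: {spot_nm / 1000.0:.2f} um")   # ~0.72 um, as quoted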
X-ray diffraction
X-ray diffraction (XRD) is one of the most used tools to analyze the crystal structure of a material. Diffraction phenomena appear when an electromagnetic (EM) wave hits a periodic structure with a characteristic length of the order of the EM wavelength. Since in a crystal the spacing between atoms is of the order of the Ångstrom and X-ray wavelengths vary from 0.1 to 100 Å, the structural properties of a crystal can be studied by observing the diffraction pattern generated by the crystal when it is hit by X-ray radiation. In the following, a quick description of the principle of XRD will be given, without claiming to be complete. We can treat a crystalline material as a set of parallel atomic planes placed at a distance d which interact with the X-ray beam. The X-ray radiation is treated as a plane wave. When the radiation hits the sample at an angle θ (Figure 2.8), a portion of it will penetrate into the material and another portion will be reflected at the same angle as the incident wave. The reflected waves from the different planes will sum up, producing constructive interference only if a certain condition is met, i.e. the well known Bragg's law 2d sin(θ) = nλ (2.5)
where θ is the incident angle of the X-ray, d is the spacing between the atomic planes, λ is the wavelength of the incident wave and n is the reflection order. The above relation can be exploited in XRD in order to determine the crystalline orientation of the sample by sweeping the angle of incidence of the X-ray beam over a certain range and measuring the scattered waves as a function of the angle, as illustrated schematically in Figure 2.9. With this setup, the X-ray detector will measure a peak in the scattered wave only at certain angles, i.e. when the condition of Equation 2.5 is met. Grain size estimation XRD also gives information about the mean grain size of the analyzed crystal. The samples we analyze with XRD are far from being monocrystals, mainly because of the growth conditions, their extremely small thickness and because the substrate (the glass) is amorphous. They are instead polycrystalline, which means that they are formed by many small single crystals one close to another. Besides the orientation of the single crystals of the sample, XRD also gives information about the average size of the grains composing the sample, which is estimated with the well known Scherrer equation [START_REF] Patterson | The Scherrer formula for X-ray particle size determination[END_REF]
τ = Kλ / (β cos θ) (2.6)
with τ the mean grain size, K a dimensionless factor close to unity, λ the wavelength of the X-ray radiation, β the broadening of the diffraction peak at half of its intensity, or full width at half maximum (FWHM), and θ the Bragg angle. The grain size estimated with this formula can be compared to the grain size measured with the AFM for further confirmation of the data.
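The sketch below illustrates Equations 2.5 and 2.6 numerically with Cu Kα radiation; the d-spacing (close to the ZnO (002) reflection) and the peak width are assumed, illustrative values rather than measured data from this work.

import numpy as np

# Numerical illustration of Bragg's law (Eq. 2.5) and the Scherrer equation (Eq. 2.6)
# with Cu K-alpha radiation. The d-spacing and peak width are assumed values.
lam = 1.5406                      # X-ray wavelength in Angstrom (Cu K-alpha 1)
d = 2.60                          # assumed interplanar spacing in Angstrom (~ZnO (002))
n = 1                             # reflection order

theta = np.degrees(np.arcsin(n * lam / (2.0 * d)))
print(f"Bragg angle theta = {theta:.2f} deg (2*theta = {2*theta:.2f} deg)")

K = 0.9                           # Scherrer shape factor, close to unity
beta = np.radians(0.5)            # assumed FWHM of the diffraction peak, in radians
tau = K * lam / (beta * np.cos(np.radians(theta)))   # grain size in Angstrom
print(f"Mean grain size tau = {tau / 10.0:.1f} nm")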
Grazing geometry For samples of extremely low thickness, as is the case for our ZnO samples, the classical theta-2theta configuration of the X-ray diffraction analysis may not be sufficient to obtain a perceptible signal on the detector. In these cases a grazing geometry is used. Instead of sweeping the X-ray source together with the detector over a range of angles, the source is kept at a constant, small angle in order to maximize the penetration of the X-rays inside the material, enhancing the coherent diffraction signal. However, the signal which reaches the detector is now the convolution of all the signals coming (in the worst case) from the whole surface of the sample, resulting in a broadening of the diffraction peaks [START_REF] Simeone | Grazing incidence X-ray diffraction for the study of polycrystalline layers[END_REF]. For a correct estimation of the grain size it is thus not sufficient to use the FWHM of the peaks in the diffraction pattern; instead a pseudo-Voigt fit of the peak should be done and only the FWHM of the Lorentzian part of the fit should be used in the Scherrer equation (Equation 2.6).
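A possible way to implement this peak analysis is sketched below: a synthetic diffraction peak is fitted with a pseudo-Voigt profile and the Lorentzian contribution to the width is then estimated. The data, the peak parameters and the simple η·FWHM estimate of the Lorentzian width are all illustrative assumptions; for quantitative work the published deconvolution formulas for the pseudo-Voigt profile should be used.

import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, x0, fwhm, eta, amp, bg):
    """Mixture of a Lorentzian and a Gaussian sharing the same FWHM (0 <= eta <= 1)."""
    gamma = fwhm / 2.0
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    lorentz = gamma**2 / ((x - x0) ** 2 + gamma**2)
    gauss = np.exp(-((x - x0) ** 2) / (2.0 * sigma**2))
    return bg + amp * (eta * lorentz + (1.0 - eta) * gauss)

# synthetic scan in degrees; with real data, replace two_theta and counts
two_theta = np.linspace(32.0, 37.0, 400)
counts = pseudo_voigt(two_theta, 34.4, 0.6, 0.7, 1000.0, 50.0)
counts += np.random.normal(0.0, 10.0, two_theta.size)

popt, _ = curve_fit(pseudo_voigt, two_theta, counts, p0=[34.4, 0.5, 0.5, 900.0, 40.0])
x0, fwhm, eta, amp, bg = popt

# crude estimate of the Lorentzian contribution to the width; more accurate
# deconvolution formulas exist and should be used for quantitative results
beta_lorentz = eta * fwhm
print(f"FWHM = {fwhm:.3f} deg, eta = {eta:.2f}, Lorentzian width ~ {beta_lorentz:.3f} deg")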
Device fabrication
Once the materials are deposited on glass and fully characterized with the methods described above, the devices are fabricated. This in general involves two steps: first, the sample must be shaped in order to make its shape compatible with the electrical measurements and second, the metal contacts are deposited with the help of a shadow mask. The shape of the sample is an important aspect because it affects the validity of the measured resistance. We chose the van der Pauw method for the electrical measurements which is a reliable method for the measure of the sheet resistance of a thin sample. The van der Pauw method is briefly explained in the following.
Van der Pauw method
The van der Pauw method for resistivity and Hall measurements was developed in 1958 by L. J. van der Pauw [START_REF] Van Der Pauw | A method of measuring specific resistivity and Hall effect of discs of arbitrary shape[END_REF] and provides a useful way to determine the resistivity of thin samples of arbitrary shape, provided that there are no holes in the sample. It also makes it possible to measure the Hall effect simply by selecting specific contacts for current injection and voltage measurement. In practice, square samples with metal contacts at the corners are often used. The contacts must also be small compared to the dimensions of the sample. The contact configuration for sheet resistance (R_S) measurements is shown in Figure 2.10a. The current is injected from one contact and collected from an adjacent contact while the voltage is measured across the opposite pair of contacts. This is repeated for all possible pairs of contacts and it is possible to define a set of resistances as the voltage measured across a pair of contacts divided by the current injected through the opposite pair of contacts. For example, R_AB,DC is calculated from the ratio V_DC /I_AB. Averaging over the various resistances we can finally define a vertical and a horizontal resistance, resulting from the average of the resistances calculated by flowing the current through the device vertically or horizontally, respectively, denoted R_V and R_H. From these two values, R_S is derived from:
R_S = (π / ln 2) · ((R_V + R_H) / 2) · f (2.7)
f is a correction factor which takes into account the inhomogeneity of the sample through the ratio r, which is defined as
R_V /R_H if R_V < R_H , or R_H /R_V if R_H < R_V . f thus satisfies
cosh( ((r -1)/(r + 1)) · (ln 2)/f ) = (1/2) exp( (ln 2)/f ) (2.8)
f is always equal to or smaller than 1 (it is 1 when r = 1) and it cannot be derived analytically. In practice, for R_S measurements a polynomial approximation (or a numerical solution of Equation 2.8) can be used.
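A minimal numerical implementation of Equations 2.7 and 2.8 is sketched below: the correction factor f is obtained by solving the transcendental equation with a root finder, and the sheet resistance then follows. The two input resistances are made-up example values, not measured data.

import numpy as np
from scipy.optimize import brentq

def vdp_correction(r):
    """Solve Eq. 2.8 numerically for f (r is the smaller-over-larger resistance ratio)."""
    if np.isclose(r, 1.0):
        return 1.0
    eq = lambda f: (np.cosh((r - 1.0) / (r + 1.0) * np.log(2.0) / f)
                    - 0.5 * np.exp(np.log(2.0) / f))
    # f lies in (0, 1]; the bracket below covers realistic sample asymmetries
    return brentq(eq, 0.05, 1.0)

R_V, R_H = 1250.0, 1600.0                 # example averaged resistances in Ohm
r = min(R_V, R_H) / max(R_V, R_H)         # ratio as defined in the text (r <= 1)
f = vdp_correction(r)
R_S = np.pi / np.log(2.0) * (R_V + R_H) / 2.0 * f     # Eq. 2.7
print(f"r = {r:.3f}, f = {f:.4f}, R_S = {R_S:.0f} Ohm/sq")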
Hall measurements
The van der Pauw geometry is suitable also for the measurement of the carrier density and charge mobility through the Hall effect. In this case, the contact configuration used is the one depicted in Figure 2.10b. The current is applied along the diagonal of the sample and the voltage is measured on the other diagonal while the magnetic field is applied perpendicular to the sample plane and swept. In general, when a sample is crossed by a current and a magnetic field is applied perpendicular to it, a voltage (called Hall voltage) appears in the direction perpendicular to the current and to the magnetic field, whose amplitude is given by
R_Hall = V_Hall / I = B / (q n_S) (2.9)
where n_S is the sheet carrier density and q is the charge of the carriers (± the electron charge e). The sign of V_Hall is determined by the type of charge carrier present in the material, either electrons or holes. From this measurement and the measurement of the sheet resistance at B = 0 it is possible to derive the Hall mobility as µ_H = 1/(e n_S R_S). In our measurement system for the study of electronic transport it is possible to measure simultaneously R_S as a function of the magnetic field (the magneto-resistance) and the Hall effect, as shown later in this chapter.
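The sketch below shows how n_S and µ_H would be extracted in practice from Eq. 2.9: a linear fit of R_Hall versus B gives the sheet carrier density, and combining it with the zero-field sheet resistance gives the Hall mobility. The Hall slope, noise level and sheet resistance used here are assumed, illustrative numbers, not measured data.

import numpy as np

e = 1.602176634e-19                          # elementary charge in C

B = np.linspace(-2.0, 2.0, 21)               # field sweep in T (our magnet reaches 2 T)
R_hall = 60.0 * B + np.random.normal(0.0, 0.5, B.size)   # synthetic Hall resistance in Ohm

slope, offset = np.polyfit(B, R_hall, 1)     # dR_Hall/dB in Ohm/T
n_S = 1.0 / (e * slope)                      # sheet density in m^-2 (taking q = +e)
R_S = 900.0                                  # assumed zero-field sheet resistance in Ohm/sq
mu_H = 1.0 / (e * n_S * R_S)                 # Hall mobility in m^2/(V s)

print(f"n_S = {n_S * 1e-4:.2e} cm^-2, mu_H = {mu_H * 1e4:.0f} cm^2/Vs")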
Contact deposition
The deposition of the metal contacts is done by thermal evaporation of gold through a stencil mask. This is a fast and clean method for the deposition of the contacts. The stencil mask is a steel foil on which the shape of the mask is engraved with a laser beam. The mask is aligned to the sample with the help of micromanipulators and is stuck onto it with double-sided adhesive tape (Figure 2.11). The stencil mask method can be considered a clean method for the deposition of metal contacts because, contrary to photolithography or electron beam lithography, it doesn't require the deposition of a polymer on the surface of the sample, thus avoiding contamination. On the other hand, the achievable resolution for the contact dimensions is much lower for the stencil mask compared with the other methods, but it is still acceptable for our samples. Figure 2.12 shows the two types of stencil mask we used, both in the van der Pauw geometry. One type of mask is for macroscopic samples and the contacts are placed 1.5 mm apart in a square geometry. This kind of mask has been used for the measurements on CVD graphene and ZnO. The other type of mask is for microscopic samples, with the contacts at a distance of 40 µm, again in a square geometry. This kind of mask has been used for contacting anodic bonded graphene of a minimum lateral size of about 50 µm.
Sample shaping
As said before, the sample shape is an important factor in the electronic transport measurements. In order to properly shape our samples we used different methods, depending on the precision we wanted to achieve and the sample type. We used three techniques: shaping of graphene (CVD or anodic bonded graphene) with a micromanipulator tip, oxygen plasma etching of CVD graphene and dissolution of ZnO in HCl. All the methods will be briefly discussed below.
Figure 2.12: The two types of masks we used for the contact deposition. On the left is shown the microscopic mask used for contacting microscopic samples of lateral size of 50 µm. On the right, the macroscopic mask used for macroscopic CVD graphene and ZnO samples is shown.
Shaping with micromanipulator tip
The most direct and straightforward method to shape a 2D sample is to scratch it with a micromanipulator tip in order to give it the desired shape. Of course, the achievable resolution of the scratch is limited by the precision of the movements in the xy plane of the micromanipulator and the radius of the tip. Most of the CVD graphene and all the anodic bonded graphene were shaped using this method because graphene can be scratched away relatively easily.
Plasma etching shaping Oxygen plasma etching is often used to shape graphene into Hall bars or nanoribbons [START_REF] Janssen | Quantum resistance metrology using graphene[END_REF][START_REF] Pang | Graphene as transparent electrode material for organic electronics[END_REF]. We performed the plasma etching in a reactive ion etching system in order to shape our CVD graphene samples for electronic transport measurements. Before the etching procedure, an electron beam lithography step is necessary. E-beam lithography is a procedure used to create patterns in an electron-sensitive polymer (like PMMA) through exposure to an electron beam. It is a widely used technique in research and nanotechnology because it makes it possible to create patterns of extremely high resolution (of the order of a few nanometers). The parts of the polymer exposed to the electron beam become soluble in a proper solvent, with the final result that all the sample is covered by the polymer except in the areas exposed to the e-beam. This technique was used to create a pattern compatible with the van der Pauw method on our CVD graphene samples. The sample was subjected to the oxygen plasma and only the parts uncovered by the PMMA were etched away. The results will be shown in Chapter 3.
ZnO etching ZnO shaping is quite hard to achieve with the micromanipulator tip because it is a tough material. Instead, a straightforward way to etch ZnO in order to give it the proper shape is to put it in an aqueous solution of hydrochloric acid (HCl). HCl etching of ZnO is usually performed in order to texture its surface in solar cell applications [START_REF] Kluth | Texture etched ZnO:Al coated glass substrates for silicon based thin film solar cells[END_REF]. For our purpose, where we want to completely remove part of the ZnO on the glass, we used an aqueous solution of HCl at 15%. A drop of PMMA with a diameter of about 2 -3 mm is put at the center of the glass substrate, whose whole surface is covered with ZnO. The PMMA is then left to dry in air at a temperature of ∼ 50 • C for 15 minutes. Then, the substrate is put in the HCl solution for 7 seconds, letting the acid completely remove the ZnO not protected by the PMMA; it is then rinsed in de-ionized water and dried with nitrogen gas. 7 seconds is sufficient to completely remove the few nanometers of ZnO from the surface. The PMMA is then dissolved in warm acetone (∼ 50 • C) for 30 minutes.
Electronic transport measurements setup
Space charge doping, electronic transport and Hall measurements are all carried out under high vacuum (< 10 -6 mbar) in a custom made continuous He flow cryostat. It allows the temperature of the sample to be controlled in the range 3 -420 K, thus allowing the doping of the samples and the low temperature transport measurements to be performed in situ. The cryostat is held inside an electromagnet capable of reaching a magnetic field of 2 T. The doping and resistivity measurements are controlled by a LabVIEW program which coordinates the measurement instruments and saves the data to a file.
Chapter 3
Space Charge Doping
This chapter is partially based on our publication: A. Paradisi, J. Biscaras and A. Shukla, Space charge induced electrostatic doping of two-dimensional materials: Graphene as a case study, Applied Physics Letters, 107, 143103 (2015) [START_REF] Paradisi | Space charge induced electrostatic doping of two-dimensional materials: Graphene as a case study[END_REF].
During this thesis we have developed and patented a technique called space charge doping for ultra-high, ambipolar, electrostatic doping of materials deposited on a glass substrate. In this chapter as well as those that follow the technique will be detailed and its application to graphene and zinc oxide will be analysed. Space charge doping was first tested on graphene for the following reasons: (i) graphene has been studied extensively and there is an extensive bibliography covering a great number of topics allowing an easier interpretation of the results; (ii) the graphene band structure, as we saw in Section 1.3.1, makes graphene a zero band-gap semiconductor and it is possible to perform electron or hole doping quite easily; (iii) it is easy to produce high quality graphene with our Anodic Bonding technique (Section 2.1.2) or to transfer large area of CVD graphene on a glass substrate; (iv) we wanted to investigate an alternative way to produce an efficient transparent conducting electrode (TCE) using graphene doped with space charge doping.
In the literature, many publications use graphene as a base material for a TCE [START_REF] Li | Transfer of large-area graphene films for high-performance transparent conductive electrodes[END_REF][START_REF] Bae | Roll-to-roll production of 30-inch graphene films for transparent electrodes[END_REF][START_REF] Park | Doped graphene electrodes for organic solar cells[END_REF], applying different techniques to lower the graphene sheet resistance and make it more conductive through doping. In the process these works highlight several problems related, among others, to damage to the graphene layer and to instability of the doping over time. As we will see later in this chapter, space charge doping can overcome some of these problems.
In the first part of this chapter we will describe the principle of space charge doping followed by the details of the production of the graphene samples. Finally the results of space charge doping applied to graphene will be detailed and discussed.
The principles of space charge doping
To understand the working principles of space charge doping it is worthwhile to take a closer look at the structure of glasses in general and of ionic transport inside glass.
Glass atomic structure
It is known that glasses are formed by an amorphous network of certain glass-forming oxides which respect some rules [START_REF] Zachariasen | The atomic arrangement in glass[END_REF]. Commercial glasses are formed by three-dimensional networks of SiO 2 , as in the case of soda-lime glass, with B 2 O 3 also contributing in the borosilicate glasses. The building block of the glass atomic network is the oxygen tetrahedron which surrounds silicon atoms, and each tetrahedron shares a corner with other tetrahedra. The angle between the different tetrahedra varies in an unpredictable way, and that is why the atomic structure of glass is considered amorphous. Network modifiers are added to the glass during production in order to enhance certain properties. Sodium oxide (Na 2 O) and calcium oxide (CaO) are the main compounds used as modifiers. The introduction of these species breaks some Si-O-Si bridges, creating non-bridging oxygen units which serve as anions for the Na + and Ca 2+ cations which are incorporated at these sites. Oxygen atoms are linked covalently to the network and Ca 2+ ions possess a very low mobility, making Na ions the most mobile species in glass.
Ionic drift
In Reference [START_REF] Mehrer | Diffusion and ionic conduction in oxide glasses[END_REF] the mobility of Na + and Ca 2+ ions in various glasses is studied, especially in soda-lime glass with a composition very close to that of our soda-lime glass. The conclusion is that Na + ions are the most mobile species in the glass. Figure 3.2 shows conductivity data of soda-lime glass extrapolated from Figure 2 in Reference [START_REF] Mehrer | Diffusion and ionic conduction in oxide glasses[END_REF] compared with our measurements on the soda-lime glass used in this work.
Figure 3.2: Measurements of the conductivity of our soda-lime glass as a function of temperature compared with the data in Reference [START_REF] Mehrer | Diffusion and ionic conduction in oxide glasses[END_REF]. The small offset of our data from the extrapolation of the published data is probably due to slight differences in the composition of the glasses.
We measured glass conductivity
by monitoring the DC current through the glass with a constant applied voltage as the temperature was increased. The data from [START_REF] Mehrer | Diffusion and ionic conduction in oxide glasses[END_REF] are extrapolated from the low frequency measurement of the conductivity as a function of temperature. The first thing to notice is that our measurements are in good agreement with those from the article. The small shift of our data from the extrapolation of the published data from [START_REF] Mehrer | Diffusion and ionic conduction in oxide glasses[END_REF] is probably due to the small difference in composition of the two soda-lime glasses.
Another important factor for the purpose of space charge doping is that the conductivity of the glass due to Na + increases exponentially with temperature, changing by several orders of magnitude from room temperature to the doping region highlighted in orange in Figure 3.2. As we can see, the conductivity of Na ions is negligible at room temperature, but becomes appreciable in the doping region. Figure 3.3 shows the exponential increase in the current through the glass when the temperature is varied from room temperature to ∼ 370 K. The gate voltage is kept constant at -10 V. Further details about the doping region will be given later in this chapter. Measurable ion mobility is activated above a certain temperature (∼ 330 K). If an electric field is applied across the glass, Na ions begin to drift and it is possible to create an accumulation of Na + at the interface between the glass and the cathode. At the same time, at the anode a depletion of Na + is induced, leaving the non-bridging oxygen ions uncompensated. Thus, two space charge layers are created at the opposite faces of the glass with opposite polarity, perturbing the local electrical neutrality but still keeping the whole glass substrate electrically neutral. For a typical Na 2 O concentration of 10 21 cm -3 , an accumulation layer of 1 nm can create a charge imbalance on the surface of the order of 10 14 cm -2 thanks to the image charge. This space charge can be generated with modest temperatures and voltages; the typical temperatures involved are between 330 K, where the ion mobility is activated, and 380 K, while the voltage can be varied between 1 and 300 V. On increasing temperature or voltage the creation of the space charge layer can be accelerated, but care must be taken to avoid dielectric breakdown in the glass. In our case the upper limit for the voltage is technically imposed by the electronic instrumentation, but higher voltages have not been needed to achieve better doping performance. Very rarely have we used higher temperatures (up to 420 K) to perform doping on our samples, but this is a risky procedure because at this temperature applying the maximum allowed voltage can cause dielectric breakdown.
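The order of magnitude quoted above follows directly from multiplying the sodium concentration by the thickness of the accumulated (or depleted) layer; the two-line Python estimate below just reproduces this back-of-the-envelope calculation with the numbers given in the text.

# Order-of-magnitude estimate of the sheet charge generated by the space charge,
# using the numbers quoted in the text: Na2O concentration ~1e21 cm^-3 and an
# accumulated/depleted layer thickness of ~1 nm.
N_Na = 1e21              # sodium concentration in cm^-3
t_layer_cm = 1e-7        # 1 nm expressed in cm
n_sheet = N_Na * t_layer_cm
print(f"Induced sheet charge ~ {n_sheet:.0e} cm^-2")   # ~1e14 cm^-2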
As long as the electric field is present at high temperature, the created space charge continues to increase until an equilibrium is reached, i.e. when the electric field of the space charge at the opposite faces of the glass compensates the external electric field. If the external voltage is turned off the Na ions tend to migrate towards the opposite face of the glass to restore the global electrical neutrality. However, if the system is cooled down fast with the voltage applied the ion mobility is rapidly suppressed and when the temperature is such that Na + mobility is negligible the space charge is preserved "permanently" even if the external electric field is removed.
The electric field generated by the space charge can be thought of as the field across the dielectric in a Field Effect Transistor (FET) configuration, with the difference that in the case of space charge doping the space charge, which corresponds to the charge on the gate electrode, is confined very close to the surface which carries the sample. The associated capacitance is high compared to that of a FET, thus inducing a higher image charge. A material deposited on the surface of the glass will feel the electric field of the space charge and will thus be doped n in the case of Na + accumulation at the interface or p in the case of depletion. This conceptually simple process turns out to be practically simple as well, and an efficient doping method for materials deposited on the glass surface.
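To make the comparison with a conventional FET concrete, one can ask what gate voltage a standard back gate would need to induce the ≈ 10 14 cm -2 reached by space charge doping, using n = CV/e with C = ε 0 ε r /d per unit area. The 300 nm SiO 2 oxide and its dielectric constant are textbook values used only as an assumed point of comparison, not a device studied in this work.

# Gate voltage a conventional back gate would need to induce n ~ 1e14 cm^-2, the sheet
# density reached by space charge doping, using n = C*V/e with C = eps0*eps_r/d per
# unit area. The 300 nm SiO2 oxide and eps_r = 3.9 are assumed textbook values.
eps0 = 8.8541878128e-12      # vacuum permittivity in F/m
e = 1.602176634e-19          # elementary charge in C

n_target = 1e14 * 1e4        # 1e14 cm^-2 expressed in m^-2
eps_r, d = 3.9, 300e-9       # assumed SiO2 back gate
C = eps0 * eps_r / d         # capacitance per unit area in F/m^2

V_needed = n_target * e / C
E_field = V_needed / d
print(f"Required gate voltage: ~{V_needed:.0f} V  (field ~{E_field:.1e} V/m)")
# ~1.4 kV, a field far beyond typical oxide breakdown, whereas the glass space charge
# places the equivalent charge directly at the interface with the sample.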
Space charge doping applied to graphene
There is considerable literature concerning the use of graphene as a transparent conducting electrode (TCE) [START_REF] Bae | Roll-to-roll production of 30-inch graphene films for transparent electrodes[END_REF][START_REF] Wang | Transparent, conductive graphene electrodes for dye-sensitized solar cells[END_REF][START_REF] Wu | Organic solar cells with solution-processed graphene transparent electrodes[END_REF]. In fact, graphene has been considered a very promising material for this application thanks to its exceptional electronic transport properties (Section 1.3.1) and its transparency, which is higher than 95% in the visible range (∼ 97% at a wavelength of 550 nm). However, its sheet resistance is of the order of a few kΩ/sq at low doping, too high for use as a TCE. Doping is thus necessary to lower the graphene sheet resistance. Several techniques have been used to dope graphene (see Section 1.3.5), but several drawbacks exist, including limits on the maximum reachable carrier concentration, longer-term instability or transparency issues [START_REF] Sarma | Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor[END_REF][START_REF] Park | Doped graphene electrodes for organic solar cells[END_REF][START_REF] Dong | Doping singlelayer graphene with aromatic molecules[END_REF].
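A rough feeling for how much doping is needed can be obtained from R_s = 1/(e n_s µ). In the sketch below the mobility is an assumed, illustrative value for CVD graphene on glass, and the carrier densities span the range relevant for space charge doping.

# Rough link between doping level and sheet resistance for graphene, R_s = 1/(e*n_s*mu).
# The mobility is an assumed illustrative value for CVD graphene on glass.
e = 1.602176634e-19          # elementary charge in C
mu = 2000e-4                 # assumed mobility: 2000 cm^2/Vs expressed in m^2/Vs

for n_cm2 in (1e12, 1e13, 1e14):         # sheet carrier densities in cm^-2
    n_m2 = n_cm2 * 1e4
    R_s = 1.0 / (e * n_m2 * mu)
    print(f"n = {n_cm2:.0e} cm^-2 -> R_s ~ {R_s:.0f} Ohm/sq")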
We applied the space charge doping for the first time to graphene in order to fully characterize our doping technique and in parallel investigate its applicability as a TCE. The study was performed on several graphene samples deposited on glass via polymer assisted transfer of CVD graphene or with the anodic bonding method. The experiments are carried out on microscopic samples (lateral size l ≈ 50 µm), for limiting extrinsic effects and monitoring the sample area with micro-Raman spectroscopy before and after the doping, and macroscopic samples (l ≈ 1 cm) to validate the technique for large areas. The techniques used for the sample fabrication are described in Chapter 2 and in the following the experimental details of the sample fabrication and characterization will be given.
Fabrication of the graphene samples
CVD graphene The CVD graphene used in this study was purchased online (Graphene Supermarket, https://graphene-supermarket.com/). We intentionally used medium quality CVD graphene to perform our first experiments on space charge doping. The manufacturer declares that the 20 µm thick copper foil is entirely covered with a graphene layer on both sides, with a small portion of bilayer islands (10 -30%). The samples were deposited on glass (soda-lime or borosilicate) following the PMMA assisted transfer procedure described in Section 2.1.2. At the end of the process the glass surface is almost entirely covered by the graphene sheet, which has an effective area of the order of 1 cm 2 .
The surface of the graphene presents some impurities and small holes which arise from the transfer process, as shown in Figure 3.4. The impurities are residues of the PMMA used as mechanical support for the graphene during the transfer, while the holes can be attributed to mechanical stress during the whole process. The wrinkles which are slightly visible on the surface come from the quality of the CVD graphene we purchased. The copper etching, the deposition of the graphene/PMMA stack with the successive drying step, and the first annealing have been carried out in a clean room environment.
The optical characterization of Figure 3.4 is always followed by a characterization with µ-Raman spectroscopy, which gives detailed information about the quality of the graphene through the D peak at ∼ 1350 cm -1 (see Section 1.3.3). The absence of this peak indicates a good quality sample. Figure 3.5 shows a representative Raman spectrum of CVD graphene measured just after the transfer process. The spectrum was acquired with a laser wavelength of 532 nm at a power of 3 mW, a grating of 1200 and an acquisition time of 8 seconds for 2 accumulations. The shape of the 2D peak clearly indicates that the sample is mono-layer graphene [START_REF] Ferrari | Raman spectrum of graphene and graphene layers[END_REF], as can be seen very well from the inset in Figure 3.5 where the 2D peak has been fit with a single Lorentzian curve. Raman mapping has also been used to characterize relatively large areas of the sample surface and the results will be presented later in this chapter.
Anodic Bonded graphene
Graphene was also produced with the anodic bonding technique on soda-lime or borosilicate glass, as in the case of the CVD graphene. The detailed explanation of this method is given in Section 2.1.2. As in space charge doping, an electrostatic field is created at the glass surface, but in this case the bulk precursor is electrostatically stuck to the surface of the glass by the O 2-space charge formed at the interface between the two. Adhesive tape is then used to remove the upper layers of the precursor, leaving mono- to multilayer flakes of the sample on the glass surface. The parameters (temperature, voltage and time) for good bonding which also ensure a good yield depend on a number of factors: (i) the material which is being bonded to the glass, (ii) the glass type and (iii) whether an anvil or a micromanipulator tip is used as the top electrode. Once the optimal range of parameters is found for a given material, anodic bonding is an extremely efficient method for the production of high quality 2D samples. During this thesis, more than 450 graphene samples have been fabricated with this method and the optimal parameters for each situation (borosilicate or soda-lime glass, anvil or tip) have been found. A typical sample presents several flakes of multi-layer graphene and it is possible to find a few flakes of mono-layer graphene having a lateral size of 10 -100 µm. The optimal bonding parameters are listed in Table 3.1. Some things have to be pointed out: the parameters for sticking graphene to soda-lime glass are always lower than for borosilicate glass, as a consequence of the higher Na + concentration in the soda-lime substrate (3.4 × 10 18 cm -3 against 1.4 × 10 18 cm -3 ); the same holds if the tip is used instead of the anvil, because the tip concentrates the applied electric field more efficiently on the sample area, making the bonding process more efficient; the bonding time varies between 1 and 20 minutes, but it has usually been found that 15 minutes is a good period of time for getting a good yield.
Table 3.1: Optimal anodic bonding parameters (gate voltage V_G in V, temperature T in • C and bonding time t) for the anvil and micromanipulator tip configurations on borosilicate and soda-lime glass.
The samples are characterized with the techniques described in Chapter 2. First, the identification of the monolayer samples is done with the optical microscope. The contrast of graphene on glass is low [START_REF] Blake | Making graphene visible[END_REF], but with some experience it is easy to identify monolayer samples. In Figure 3.6 a big monolayer graphene sample deposited on soda-lime glass with the anodic bonding method is shown (T = 225 • C, V = -1250 V, t = 15 minutes). The dimensions of the sample, with a lateral size in excess of 200 micrometers, are large compared to what can be obtained with the mechanical exfoliation method. This fact, combined with the good quality and the possibility to follow up with space charge doping, means that anodic bonding is the preferred method for fabricating 2D samples in our group. Raman spectroscopy is then used to check the quality of the sample and verify its thickness, as shown in Figure 3.7. It is possible to note that the D peak is totally absent from this spectrum, confirming that the sample is of excellent quality. However, it is known that in monolayer graphene the ratio of the intensities of the 2D and G peaks should be around 2 or 3 [START_REF] Ferrari | Raman spectrum of graphene and graphene layers[END_REF] and this fact can be misleading in the identification of a monolayer. In the case of anodic bonded graphene it is found that the 2D/G ratio is close to unity and this is due to the fact that the graphene sheet is doped [START_REF] Sarma | Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor[END_REF], as a consequence of the anodic bonding process. In undoped graphene, with the Fermi level lying at the meeting point of the valence band (VB) and conduction band (CB), an incident photon can excite an electron from the VB to the CB at any value of the k vector. The 2D peak in the Raman spectrum results from a double resonance process involving two near-K point phonons (see Section 1.3.3). When the Fermi level is shifted upwards (n-doping) or downwards (p-doping) the number of possible vertical transitions in momentum space is reduced with respect to the undoped case, thus the number of double resonance events is decreased, resulting in a lower 2D peak intensity [START_REF] Sarma | Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor[END_REF].
Finally, AFM is used to further confirm that the sample is a monolayer. A typical AFM topography is shown in Figure 3.8, where the edge of a graphene sample is scanned in order to measure its thickness. This further confirms that the sample is just one atom thick and that its surface is free from impurities (as opposed to the CVD graphene).
Contact deposition and shaping
As described in Section 2.3.2, the contact deposition is done through a stencil mask which is carefully aligned to the desired area of the sample with the help of micromanipulators and fixed with adhesive tape. Then, gold contacts 75 – 100 nm thick are deposited by thermal evaporation. The smaller the sample, the more precise the alignment needs to be. The next step is to mechanically remove short circuits and give the sample a proper shape, compatible with the van der Pauw method. The samples are shaped by scratching the surface with a micromanipulator tip in order to remove the unwanted parts, or with plasma etching as described in Section 2.3.2. A CVD graphene sample shaped with plasma etching is shown in Figure 3.10. The etched area leaves a square of graphene at the center of the glass. Even if the contacts are several hundreds of microns away from the square corners, the effective area measured with the van der Pauw method is only the one delimited by the square itself [START_REF] Van Der Pauw | A method of measuring specific resistivity and Hall effect of discs of arbitrary shape[END_REF].
Compared to scratching with the micromanipulator tip, shaping with plasma etching is more precise, but it requires several technological steps. Moreover, it leaves polymer residues on the sample surface which can alter its quality and electronic transport properties.
Results
Space charge doping is performed in high vacuum (p ≲ 10⁻⁶ mbar) in our transport measurement setup (Section 2.4). Vacuum is important in order not to contaminate the graphene surface during the doping: the surface is charged and would otherwise attract charged impurities from the air. We applied our doping technique to several graphene samples in order to characterize several aspects: the maximum achievable carrier concentration, the control we can have on the carrier density, the effect of different glasses on the maximum induced carrier concentration and transport properties, and the effect of space charge doping on the sample quality. All these aspects will be discussed in the following.
Ambipolar doping
The sample deposited on the glass surface can be thought of as an electrode which is put at ground potential. When the glass is hot, the voltage on the electrode on the opposite face of the glass determines the direction of the Na⁺ ion drift: there is accumulation of Na⁺ close to the sample if V_G > 0, and depletion of Na⁺ if V_G < 0. This is schematized in Figure 3.11. Before space charge doping, the sample usually has some intrinsic doping due to charged impurities on the surface caused by air exposure and/or by the CVD transfer process. For samples made by the anodic bonding process, there may be a residual space charge effect. The impurity induced doping can be as high as ≈ 10¹³ cm⁻² [START_REF] Pirkle | The effect of chemical residues on the physical and electrical properties of chemical vapor deposited graphene transferred to SiO2[END_REF] and it is usually found to be hole doping. With no applied gate voltage the equilibrium in the glass is preserved, but when the glass is hot (i.e. Na⁺ mobility is activated) the polarity of V_G determines accumulation (V_G > 0) or depletion (V_G < 0) of sodium ions at the interface.
Annealing performed in vacuum (p ≲ 10⁻⁶ mbar) in the cryostat at a temperature of 420 K, or in an external oven (also in high vacuum) at T = 525 K, can significantly reduce this doping and even reverse it (from hole to electron doping, for example).
We experimented with two slightly different doping procedures: the first consists in heating the sample to the doping temperature, then increasing the gate voltage to the desired value and waiting for the doping to be accomplished; in the second, the gate voltage is applied at room temperature and the sample is then heated to the desired temperature. We found the second method to be more efficient in terms of the maximum doping reached.
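The second procedure can be summarized as a simple control sequence. The sketch below illustrates it as pseudo-instrument logic; the heater, gate and meter objects and their methods are hypothetical placeholders for the actual laboratory instrumentation, not real software used in this work.

```python
import time

def space_charge_dope(heater, gate, meter, v_gate, t_dope_kelvin,
                      settle_s=60, tol_ohm_per_sq=1.0):
    """Sketch of the second doping procedure: apply V_G cold, heat the glass,
    wait until R_S saturates, then quench to freeze the ion space charge.
    heater/gate/meter are hypothetical instrument wrappers."""
    gate.set_voltage(v_gate)               # apply V_G at room temperature
    heater.set_temperature(t_dope_kelvin)  # activate Na+ mobility (> ~330 K)

    r_prev = meter.sheet_resistance()
    while True:
        time.sleep(settle_s)
        r_now = meter.sheet_resistance()
        # stop when R_S no longer changes appreciably (doping saturated)
        if abs(r_prev - r_now) < tol_ohm_per_sq:
            break
        r_prev = r_now

    heater.set_temperature(295)   # quench: ion mobility becomes negligible
    gate.set_voltage(0)           # the frozen space charge keeps the doping
    return meter.sheet_resistance()
```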
We will now show some representative results obtained on a graphene sample fabricated with the anodic bonding technique. The substrate is soda-lime glass and the bonding was done using the anvil as top electrode. The parameters for the deposition of the sample are T = 225 °C, V = -1250 V and t = 15 minutes. Figures 3.12 and 3.13 show two different doping procedures performed on this sample. Several parameters can be monitored and controlled during the doping: the sheet resistance R_S, the gate voltage, the gate current and the temperature. An annealing was performed overnight in the cryostat at a temperature of 420 K. The carrier density measured at room temperature after the annealing with the Hall effect was 4.02 × 10¹² cm⁻² with R_S = 1180 Ω/□. Vacuum annealing is expected to remove impurities from the surface of graphene and also to neutralize the space charge created in the glass substrate by the anodic bonding fabrication process.
As we can see from Figure 3.12, the sample is heated to 365 K with a constant applied voltage V_G = 10 V. We see a decrease in R_S with the temperature, indicating that the sample is being electrostatically doped due to the accumulation of sodium ions at the interface. Once 365 K is attained, the gate voltage is progressively increased to +210 V. R_S immediately decreases due to the increasing space charge at the interface. When R_S no longer changes, the process can be stopped. The sample is then cooled down rapidly ('quenched') to room temperature, where the ion mobility is negligible. The space charge is then 'frozen' and conserved even with V_G = 0 V (not shown in the figure). The electron concentration measured at room temperature is 10¹⁴ cm⁻², an extremely high value compared to the carrier concentrations usually attained in graphene with conventional electrostatic techniques, and also with respect to some chemical approaches. The corresponding sheet resistance, due to residual scattering from defects or impurities, is 270 Ω/□ at room temperature.
We now focus on Figure 3.13, where the doped sample of Figure 3.12 is first annealed overnight at 420 K with no V_G in order to neutralize the previous doping and "reset" the glass. The same effect could also have been achieved by inverting the polarity of the gate voltage at high temperature. The starting conditions are R_S = 850 Ω/□ and an electron concentration of 8.82 × 10¹² cm⁻², meaning that there is still a residual space charge at the glass/material interface (a longer annealing would have further reduced this space charge). This time we apply a constant gate voltage of -10 V, corresponding to hole doping. In the bottom part of Figure 3.13 it can be seen that the gate current increases exponentially as the temperature increases, as a consequence of the ion mobility and drift. As a result the graphene layer is progressively hole doped. The Na⁺ accumulation is progressively reduced, finally leaving a negative oxygen ion space charge (Figure 3.11) and resulting in the typical bell-shaped curve in the top part of Figure 3.13, as the Fermi level in graphene is progressively shifted from the conduction band, through the neutrality point, to the valence band. Figure 3.14 shows the Hall measurements performed before and after the doping process. The sign of the slope of the Hall resistance curve with respect to the applied magnetic field indicates that the charge carriers in the sample have changed from electrons to holes, the final carrier concentration being 10¹³ cm⁻².
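The carrier densities and carrier types quoted here follow from the slope of the Hall resistance versus magnetic field. A minimal sketch of this extraction is given below; the numerical slope is an illustrative value, not measured data, and the mapping between the sign of the slope and the carrier type depends on the contact and field geometry (here it is chosen arbitrarily).

```python
import numpy as np

e = 1.602176634e-19  # electron charge (C)

def hall_density(b_tesla, r_xy_ohm):
    """Sheet carrier density from a linear fit of R_xy(B).
    The sign of the slope distinguishes electrons from holes; the
    convention used below is illustrative only."""
    slope = np.polyfit(b_tesla, r_xy_ohm, 1)[0]   # dR_xy/dB in Ohm/T
    n_s = 1.0 / (e * abs(slope))                  # carriers per m^2
    carrier = "electrons" if slope < 0 else "holes"
    return n_s * 1e-4, carrier                    # convert to cm^-2

# illustrative data: a slope of ~ -62 Ohm/T corresponds to n_s ~ 1e13 cm^-2
B = np.linspace(0, 2, 21)
R_xy = -62.4 * B
print(hall_density(B, R_xy))   # (~1.0e13, 'electrons')
```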
Fine doping and doping limit
Following the procedure illustrated in Figure 3.12 or 3.13 it is possible to reproduce the well-known bell-shaped R_S vs n_S characteristic of graphene (see Chapter 1), with R_S (kΩ/□) measured as a function of n_S (10¹³ cm⁻²) on both the V_G < 0 and V_G > 0 sides; the corresponding Hall measurements are shown together in Figure 3.16 in the 0 – 2 T range. It is clear that the carrier density can be finely controlled by the space charge at the interface. We found experimentally that in the case of graphene the maximum reachable carrier density is higher for electrons than for holes. This could be related to the technique itself and to the way the space charge is created in the glass. Indeed, the n-doping process is due to Na⁺ accumulation at the interface, while p-doping is due to Na⁺ depletion and therefore to the uncompensated and non-mobile oxygen ions. The width of the space charge could thus be different, being smaller in the case of ion accumulation, and therefore the capacitance associated with the space charge could also be different.
Like in a FET device, the higher the capacitance the higher the charge accumulation at the interface. This would then be a direct consequence of the fact that Na⁺ ions are mobile while O²⁻ ions are covalently bound to the atomic structure of the glass. This observation needs direct confirmation, though, and may also need to be correlated with the material to be doped. The highest achievable carrier density is determined by another parameter: the Na⁺ content of the glass. A higher Na⁺ content implies a denser space charge at the interface and thus a higher electric field. In Table 3.2 we compare the maximum electronic carrier concentrations achieved on graphene with borosilicate and soda-lime glass substrates. A carrier density three times higher is achieved with soda-lime glass. At the highest carrier density attained, the shift of the Fermi energy from the Dirac point is calculated through the expression ΔE_F = ℏ|v_F|√(πn_S) [START_REF] Sprinkle | Scalable templated growth of graphene nanoribbons on SiC[END_REF], with v_F ≈ 10⁸ cm/s the Fermi velocity and ℏ the reduced Planck constant. With n_S = 10¹⁴ cm⁻² we obtain ΔE_F = 1.17 eV.
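The quoted Fermi level shift can be checked directly from the expression ΔE_F = ℏ|v_F|√(πn_S); the short numerical check below reproduces the 1.17 eV value using only the constants given in the text.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant (J*s)
eV = 1.602176634e-19     # J per eV
v_F = 1e6                # Fermi velocity of graphene: ~1e8 cm/s = 1e6 m/s

n_s = 1e14 * 1e4         # 1e14 cm^-2 expressed in m^-2
delta_E_F = hbar * v_F * math.sqrt(math.pi * n_s) / eV
print(f"Delta E_F = {delta_E_F:.2f} eV")   # ~1.17 eV
```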
Table 3.2: Na⁺ density (mm⁻³) and maximum carrier density (cm⁻²) for the borosilicate and soda-lime glass substrates.
The limiting value for the doping is not always set by the Na⁺ concentration in the glass, and it is sometimes necessary to stop the doping process in order to avoid damage to the sample. When the space charge appears and the sample begins to be doped, a strong attraction arises between the sample and the glass surface because they carry opposite charges. The electrostatic pressure on the sample can reach 1 MPa [START_REF] Wallis | Field assisted glass-metal sealing[END_REF][START_REF] Anthony | Anodic bonding of imperfect surfaces[END_REF], possibly inducing deformation. This would degrade the sample quality and the mobility of the charge carriers in the material, as shown schematically in Figure 3.17. Well polished substrates and finely prepared surfaces are thus key to avoiding this degradation. We have probed this aspect by pushing the doping process beyond the value for which R_S no longer decreases. In these conditions we observe that R_S, after attaining a minimum value, starts to increase, meaning that the mobility of the charge carriers decreases, possibly because of sample deformation. An example is shown in Figure 3.18, where a CVD graphene sample on a soda-lime glass substrate is doped at 375 K at a gate voltage of 130 V. As highlighted in orange in the figure, the resistance attains a minimum and then starts to increase. At this point, quenching the temperature is necessary in order to limit damage to the sample. The carrier concentration measured later at room temperature for this particular sample was 1.2 × 10¹⁴ cm⁻².
Reversibility
Outside of conditions leading to sample damage, space charge doping ensures perfect reversibility of the doping process. It is thus always possible to return to a carrier concentration previously induced in the sample, simply by performing space charge doping and quenching the system at the corresponding value of R_S. This is presented in Figure 3.19, where a 70 × 70 µm² CVD graphene sample is doped at different carrier concentration values going in the n to p direction (black circles) and then doped back from the p to n direction (red crosses) by stopping the doping process at the same R_S values found before. The same carrier concentration is found for all the points, meaning that the process is fully reversible. A detailed analysis of the graphene quality after the doping process is given later in this chapter.
Substrate surface quality
We showed above that the substrate surface quality can induce mechanical deformation in the sample at high doping. We now take a closer look at the influence of the substrate surface quality on the sample mobility. As reported in Table 3.2, soda-lime glass allows carrier concentration levels exceeding 10¹⁴ cm⁻² to be reached thanks to its higher sodium ion content. But for a transparent conducting electrode application the carrier concentration is not the only relevant factor. The sheet resistance of the TCE should be below a certain threshold value which depends on the application; for the most demanding applications a threshold of ∼ 100 Ω/□ [100] is often cited. The Drude resistivity of a material depends both on the carrier concentration and on the mobility of the charge carriers.
R_S = 1/(e µ n_S)    (3.1)
where e is the electron charge, n_S is the carrier density in the material and µ is the mobility. Mobility is affected in particular by defects or deformations. In a one atom thick material like graphene the interaction with the substrate can play a role in modifying the electronic properties. In order to quantify this aspect we compared the Hall mobilities measured on several samples and for different carrier densities (both on the n and the p side) deposited on borosilicate or soda-lime glass. They are reported in Figure 3.20. On both the electron and the hole side we observe a reduction of the mobility with increasing carrier concentration, which is expected because the charge carriers experience increased carrier-carrier scattering. However, the mobility of the samples deposited on borosilicate glass is systematically higher than the mobility of the samples on soda-lime glass at every carrier density. AFM scans of the surfaces of the borosilicate and soda-lime glasses revealed that the root-mean-square (RMS) roughness is significantly lower for the borosilicate glass than for the soda-lime glass (0.227 nm against 0.734 nm), as Figure 3.21 shows. The difference is due to the polishing procedure carried out by the manufacturer, since no polishing step was performed by us. This difference in RMS roughness (see also the schematic in Figure 3.17) explains the systematically lower mobility on soda-lime glass; the lowest sheet resistances attained were in the few hundred Ω/□ range, for CVD graphene on borosilicate glass as well as for anodic bonded graphene on soda-lime glass, the latter reaching 268 Ω/□.
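As an illustration of Equation 3.1, the sketch below computes R_S for the highest carrier density reached on soda-lime glass. The mobility value used (~230 cm²/Vs) is an assumed figure chosen to reproduce the ~270 Ω/□ quoted earlier in this chapter, not a measured number.

```python
e = 1.602176634e-19     # electron charge (C)

def sheet_resistance(n_s_cm2, mu_cm2_Vs):
    """Drude sheet resistance R_S = 1 / (e * mu * n_S), in Ohm per square.
    n_S in cm^-2 and mu in cm^2/(V s): the cm units cancel consistently."""
    return 1.0 / (e * mu_cm2_Vs * n_s_cm2)

# assumed mobility of ~230 cm^2/Vs at n_S = 1e14 cm^-2
print(sheet_resistance(1e14, 230))   # ~270 Ohm per square
```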
Transmittance
For transparent conducting electrodes (TCEs) the transparency of the material is a fundamental aspect. TCEs are used in many devices like solar cells, diodes or displays and are essential for their proper functioning: they provide electricity to the active part of the device and at the same time let the light go in or out of it. The difficulty in producing such materials is that the transparency of a material decreases with increasing conductivity. The transparency is quantified by the transmittance, defined as the fraction of light at a given wavelength which is not absorbed by the material and passes through it. The transmittance spectrum of doped graphene was measured by illuminating the graphene layer through the substrate with a white lamp at several points and analyzing the transmitted spectrum with the spectrometer of our Raman system. The same measurement is done on clean glass adjacent to the graphene sample. The ratio between the two intensities at every wavelength gives the transmittance spectrum of doped graphene:
T(λ) = I_graphene(λ) / I_glass(λ) × 100    (3.2)
where I_graphene(λ) is the intensity of light passing through graphene and the space charge above it, and I_glass(λ) is the intensity of light passing through the clean glass close to the graphene sheet. The wavelength range of the measurement is 535 – 800 nm (the lower limit is imposed by the filter in our Raman system, the upper limit is the limit of the visible range). Monolayer graphene has a transmittance of ∼ 97.6% at a wavelength of 550 nm [START_REF] Suk | Transfer of CVD-grown monolayer graphene onto arbitrary substrates[END_REF]. Our measurements on doped CVD graphene, performed immediately after high doping, are shown in Figure 3.22. As we can see, the transmittance is not altered by the doping process.
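The transmittance spectrum of Equation 3.2 is simply a point-by-point ratio of two measured spectra. A minimal sketch, assuming the two spectra are already interpolated onto a common wavelength grid:

```python
import numpy as np

def transmittance(i_graphene, i_glass):
    """T(lambda) in percent, from intensities measured through the
    graphene-covered glass and through adjacent bare glass (Eq. 3.2)."""
    return 100.0 * np.asarray(i_graphene, float) / np.asarray(i_glass, float)

# illustrative values near 550 nm: ~97.6 % expected for monolayer graphene
print(transmittance([0.976], [1.0]))   # [97.6]
```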
Quality of the doped samples
Graphene samples like the one shown in Figure 3.10 were used to monitor a given area with µ-Raman mapping before and after the doping process. As described in Section 1.3.3, the Raman signal of graphene gives useful information about its quality [START_REF] Ferrari | Raman spectrum of graphene and graphene layers[END_REF] through the D peak, and about the carrier concentration [START_REF] Sarma | Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor[END_REF] through the ratio of the intensity of the 2D peak to the G peak. Raman mapping allows both aspects to be monitored simultaneously over a large area. The step size used for the mapping was 1.3 µm over an area of 30 × 30 µm². The 2D/G ratio was mapped (Figure 3.23) before and after the doping over the same area. The sample was routinely doped both on the electron and on the hole side, the last carrier concentration before Raman mapping being 4.28 × 10¹³ cm⁻² on the electron side with R_S = 260 Ω/□. It is clear that the 2D/G ratio decreases remarkably after the doping, and in a uniform way over the whole area of the sample. This is consistent with the results found in [START_REF] Sarma | Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor[END_REF] and [START_REF] Chen | Controlling inelastic light scattering quantum pathways in graphene[END_REF], where the effect of doping was monitored with µ-Raman measurements in situ. The spectra collected during the mapping are also used to monitor the quality of the graphene layer before and after the doping. In Figure 3.24, 10 representative spectra extracted from the Raman mappings are shown. The curves are shifted for clarity. We note that there is no increase in the D peak, confirming that the graphene layer has not been damaged by the multiple doping steps. It can again be seen that the ratio of the intensity of the 2D over the G peak is significantly reduced after the doping.
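The 2D/G maps discussed above amount to extracting, at every map point, the intensity of the 2D and G peaks and taking their ratio. A minimal sketch is given below, assuming each spectrum is a pair of (wavenumber, counts) arrays; the peak windows are approximate literature positions, and no background subtraction or peak fitting is included.

```python
import numpy as np

# approximate Raman peak windows for graphene (cm^-1)
WINDOWS = {"D": (1300, 1400), "G": (1560, 1620), "2D": (2600, 2750)}

def peak_intensity(wavenumber, counts, band):
    """Crude peak intensity: maximum counts within the band window."""
    lo, hi = WINDOWS[band]
    w, c = np.asarray(wavenumber), np.asarray(counts)
    mask = (w >= lo) & (w <= hi)
    return c[mask].max()

def ratio_2d_g(wavenumber, counts):
    """2D/G intensity ratio: ~2-3 for undoped monolayer, lower when doped."""
    return (peak_intensity(wavenumber, counts, "2D") /
            peak_intensity(wavenumber, counts, "G"))
```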
Comparison with other doping methods
The published literature offers a wide variety of methods to dope graphene. The first to be used was electrostatic doping through a dielectric in a field effect transistor (FET) configuration, using an exfoliated graphene sample deposited on a SiO₂/Si substrate [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF] which is also used as the gate electrode. The advantage of this technique is that it is relatively straightforward to implement and allows the carrier concentration in the sample to be changed simply by changing the gate voltage. On the other hand, the maximum achievable carrier concentration is relatively low for certain applications, being of the order of 10¹² cm⁻². This is due to the low capacitance associated with the SiO₂ dielectric and to the thickness of the oxide layer routinely achievable. Moreover, a graphene sample in this configuration obviously cannot be used as a TCE because of the non-transparency of the SiO₂/Si substrate, and the gate voltage needs to be maintained to maintain the doping. Other techniques involving chemical doping of the graphene sheet have been proposed. One of these, which is also promising for the large scale production of graphene, was presented in Reference [START_REF] Bae | Roll-to-roll production of 30-inch graphene films for transparent electrodes[END_REF].
Here, a roll-to-roll method was used to transfer large area CVD graphene onto a PET substrate, and graphene sheets with diagonal dimensions up to almost 80 cm have been repetitively transferred. During the transfer process the graphene layer is doped with nitric acid (HNO₃), which the authors found to efficiently p-dope the graphene layer. Sheet resistances as low as 125 Ω/□ were obtained on monolayer graphene, with a slightly altered transmittance. For TCE applications the advantages offered by this method are many (the minimum value of R_S, the transmittance of doped graphene, the applicability to very large areas) but, on the other hand, this kind of doping is not stable in time, which imposes severe limitations on the technique [START_REF] Park | Doped graphene electrodes for organic solar cells[END_REF]. Another example of chemical doping can be found in Reference [START_REF] Park | Doped graphene electrodes for organic solar cells[END_REF], where the authors used a graphene layer p-doped with AuCl₃ as a TCE for solar cell applications, but they found that the process is not always reproducible. The two methods mentioned above also have the disadvantage of not being reversible.
Substitutional doping has also been used to dope graphene. In particular, in Reference [START_REF] Panchakarla | Synthesis, structure, and properties of boron-and nitrogen-doped graphene[END_REF] the electron and hole doping capability of substitutional boron (acceptor) or nitrogen (donor) in graphene is explored. The amount of B or N determines the amount of doping and it is thus possible to tune the carrier concentration in the sample. This interesting technique has the obvious disadvantage of not being reversible, and the graphene structure is damaged because the substitutional atoms break the perfect hexagonal network of graphene, as seen by Raman spectroscopy. In another work in our group [START_REF] Velez-Fort | Epitaxial graphene on 4H-SiC(0001) grown under nitrogen flux: evidence of low nitrogen doping and high charge transfer[END_REF], nitrogen doping of graphene was performed during the growth of graphene on a SiC substrate. It was found that the nitrogen induced carrier density can be as high as 2.6 × 10¹³ cm⁻², higher than the aforementioned electrostatic doping, but the process is still irreversible, introduces defects and, in the case of SiC graphene, remains incompatible with TCE applications.
The highest efficiency, in terms of maximum doping achievable, can be obtained by means of a polymer electrolyte gate, as presented in Reference [START_REF] Sarma | Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor[END_REF]. Here the principle is to use a polymer containing mobile ions in order to create an ion accumulation at the interface with graphene (similarly to space charge doping). The associated electric field dopes the graphene layer. With this method a charge carrier concentration of 5 × 10¹³ cm⁻² can be obtained, but the graphene layer is contaminated by the polymer, an aspect that can significantly alter the electronic transport properties of graphene. Moreover, its transparency is greatly diminished by the presence of the polymer. Another example can be found in Reference [START_REF] Kalbac | The influence of strong electron and hole doping on the Raman intensity of chemical vapor-deposition graphene[END_REF], where the same method is used to dope CVD graphene. A liquid electrolyte has also been used to highly dope graphene and control the Raman scattering properties of the sample through doping, as shown in Reference [START_REF] Chen | Controlling inelastic light scattering quantum pathways in graphene[END_REF]. Again, there is no practical use of this technique due to the presence of the liquid electrolyte, which can significantly alter the transport properties of graphene as well as its transparency.
Space charge doping can overcome many of the disadvantages which characterize the doping techniques mentioned above. The substrate used is commercial glass, which is perfectly compatible with all applications, especially those needing a transparent substrate like transparent conducting electrodes. It allows a considerably higher carrier concentration (> 10¹⁴ cm⁻²) to be reached than all the known techniques for doping graphene. Space charge doping does not involve the use of any extra layer or other technological steps. This ensures no contamination or unwanted doping of the sample and the preservation of its optical properties. Moreover, no defects are introduced during the standard doping process, which is important in order to keep the electronic properties of the sample unaltered. Especially for large scale applications, the uniformity of the doping is important to ensure the correct functioning of the device, and during our experiments we dedicated particular attention to this aspect. We uniformly doped samples of up to 1 cm² in size, but conceptually the technique applies to samples of arbitrary size, provided that the applied electric field necessary for the ion drift is uniform over the whole sample area. Finally, the carrier concentration in the sample can be finely controlled, and at room temperature or below the doping is permanent without the need to maintain a gate voltage, which is a very important criterion for energy saving. On the other hand, some drawbacks can be identified. First of all, tuning the carrier concentration in the sample requires heating the sample, achieving the required doping through ion drift and cooling down the system. These steps could be inconvenient in some applications. Also, if the sample is heated above the temperature where ion mobility is activated while the gate voltage is not applied, the space charge will gradually disappear and the doping will be lost. This too can be inconvenient for some applications, like solar cells for example, but choosing or designing glasses with a higher activation temperature can overcome this problem.
If we consider space charge doped graphene as a TCE, many of the problems mentioned above for other doping techniques are overcome, but others remain. Doped graphene is sensitive to ambient contamination because, being a charged material, it attracts charged impurities from the air which neutralize the doping induced by the space charge. For potential TCE use the graphene layer needs to be protected, which is also a mechanical requirement. In this thesis, the standard TCE value of 100 Ω/□ could not be achieved. As mentioned before, this problem can be overcome by using better polished glasses and higher quality graphene. In the field of TCEs, space charge doping is thus an innovative method to increase the conductivity of transparent materials. One can for example investigate materials mechanically more robust than graphene. The answer could be found in transparent oxides. They are usually very resistive and, as in the case of graphene, many techniques have been used to lower their resistivity. As described in Chapter 4, we applied space charge doping to zinc oxide and we will show it to be a valid alternative to what is commonly done to dope this material.
Control measurement on quartz
We made a cross-check to verify the microscopic mechanism that we invoke to explain the doping process. For this we used a substrate with dielectric properties and size similar to those of the glass substrates used, except that it contains no alkali ions. We performed a test measurement on a CVD graphene sample deposited on a quartz substrate. Quartz is formed by a crystalline network of SiO₂ and can be thought of as a glass without Na⁺ ions or other foreign species. For our control measurement we used a quartz substrate of the same thickness (0.5 mm) as our soda-lime and borosilicate glass substrates, and of similar RMS roughness. In Figure 3.25 we compare the doping performed on a CVD graphene sample deposited on borosilicate glass with a measurement on a CVD graphene sample on quartz. The two samples were subjected to a procedure which consists of (a) raising the temperature to 350 K and then applying a gate voltage of +210 V (red line), (b) leaving the system at that temperature for a fixed time (black line), (c) then decreasing to room temperature followed by switching off of the gate voltage (blue line).
The initial R_S of the samples depends on the impurities due to the transfer process of the CVD graphene from the copper substrate to the glass or quartz substrate. The bottom part of the graph corresponds to the doping procedure applied to the sample on glass. We can see that as soon as the gate voltage is applied a drop in the resistivity is observed, as a consequence of the electrostatic doping induced by ion drift. After a certain time and a lowering of R_S of nearly 1 kΩ/□, the system is quenched to room temperature in order to freeze the ion space charge.
The top part of Figure 3.25 shows the same procedure applied to the graphene sample deposited on quartz, i.e. the doping attempted on CVD graphene on quartz in the same conditions. After the application of the gate voltage, only a small shift of the sheet resistance of ∼ 20 Ω/□ is observed, due to the standard field effect across the quartz substrate. The R_S of the graphene layer remains unaltered after quenching the system to room temperature, giving a direct confirmation that the doping of the sample deposited on glass is due to the space charge formed by Na⁺ ion accumulation or depletion. Since no ions are present in the quartz substrate, no space charge can be formed and the sample cannot be doped with the space charge doping method.
Conclusions on space charge doping
In this chapter we have introduced space charge doping, a technique for doping ultra-thin layers of materials deposited on a glass substrate. The first material to be doped with this technique is graphene. The choice of graphene is justified by many factors, like the possibility to dope it n or p, the availability of large area, good quality samples and the ample bibliography on this material in the literature, useful for analysing our results. The process of sodium ion drift (the key element of space charge doping) is exploited in order to form a space charge at the interface between the glass and the graphene deposited on its surface. Ion mobility is activated above 330 K and an electric field across the glass causes the Na⁺ ions to migrate towards the cathode. The space charge layer can be formed either by Na⁺ ion accumulation or depletion, which creates a positive or negative electric field, respectively. The graphene sheet is accordingly doped n or p.
Space charge doping was characterized in terms of the maximum achievable carrier concentration, how finely the doping can be controlled, the effect of the doping process on the sample quality and the dependence of the doping efficiency on the glass substrate. It turns out that, as expected, the maximum reachable carrier concentration is obtained with glass substrates which have a higher ion concentration, but the minimum value of sheet resistance is strongly affected by the glass surface quality (an important aspect for transparent conducting electrode applications). Moreover, the carrier concentration in the sample can be finely controlled by quenching the system at the proper value of sheet resistance, and the doping process is found to be fully reversible and not to alter the graphene transparency.
Compared to the most used techniques for doping graphene, space charge doping offers a valid and, in many respects, much better alternative. Requiring only a glass substrate and no other technological steps, space charge doping is able to induce carrier concentrations in graphene well beyond the techniques mentioned before (exceeding 10¹⁴ cm⁻²) without damaging the sample. Space charge doped graphene could be used in applications like TCEs, but graphene as a TCE has not fulfilled the expected promise in recent years, probably because of a bottleneck related to the production, quality and processing of large area graphene. Other alternatives like transparent oxides could provide new transparent electrodes based on space charge doping, as we will see in Chapter 4.
In conclusion, we have invented space charge doping as an efficient and reliable method for doping graphene and potentially other ultra-thin materials deposited on glass, allowing fine control of the carrier density up to values exceeding 10¹⁴ cm⁻², much higher than most other doping techniques.
Chapter 4
Ultra-high doping of ZnO₁₋ₓ thin films
Chapter 4 and Chapter 5 are partially based on our submitted paper: A. Paradisi, J. Biscaras and A. Shukla, Magneto-conductance in nearly degenerate ZnO thin films: space charge doping and localization
In the second part of this thesis I study ultra-thin zinc oxide films deposited on glass. Their transport and magneto-conductance properties are investigated as a function of electrostatic doping using the space charge doping technique. Why zinc oxide? As we have seen, the space charge doping technique is readily applicable to a well-defined single crystalline material deposited on a substrate, which is the case of graphene. However, as discussed in the earlier chapter, graphene also has shortcomings as a TCE material. Furthermore, if our doping technique is versatile, we should be able to apply it to a variety of polycrystalline or even amorphous thin films deposited on glass by standard deposition techniques. This was the reason to choose a large gap semiconductor like ZnO, which is quite different from graphene as far as material and electronic properties are concerned, as we shall see below. We had the possibility to produce high quality samples thanks to the magnetron sputtering facility at the Institut des NanoSciences de Paris. Moreover, doped ZnO, like graphene, is a widely studied material in many fields, in particular in the fields of Transparent Conducting Electrodes (TCEs) and spintronics, making it easy to compare results obtained with our method with other results in the literature.
Zinc oxide is a wide band-gap semiconductor with a direct band-gap of E_g ∼ 3.3 eV at 300 K [START_REF] Janotti | Fundamentals of zinc oxide as a semiconductor[END_REF][START_REF] Ozgur | A comprehensive review of ZnO materials and devices[END_REF], which makes it practically transparent in the visible range. Accordingly, single crystal ZnO samples show a very high resistivity of the order of 10⁵ Ω·cm [START_REF] Carcia | Transparent ZnO thin-film transistor fabricated by rf magnetron sputtering[END_REF][START_REF] Fortunato | Wide-bandgap high-mobility ZnO thin-film transistors produced at room temperature[END_REF]. The most widespread technique for the doping of ZnO, especially for TCE applications, is substitutional doping with Al atoms [START_REF] Chen | Structural, electrical, and optical properties of transparent conductive oxide ZnO:Al films prepared by dc magnetron reactive sputtering[END_REF][START_REF] Minami | Highly conductive and transparent aluminumdDoped zinc oxide thin films prepared by RF magnetron sputtering[END_REF][START_REF] Jiménez-González | Optical and electrical characteristics of aluminum-doped ZnO thin films prepared by solgel technique[END_REF], though easily formed oxygen defects also provide a route to doping and increased conductivity. This technique can significantly lower the resistivity of ZnO to values of the order of 10⁻⁴ Ω·cm, but the atomic structure and the transparency of aluminium-doped ZnO are altered by the process. Moreover, chemical doping is not reversible.
Regarding spintronics applications, ZnO (and II-VI semiconductors in general) has attracted the attention of the scientific community thanks to the possibility of manipulating spin-dependent magnetic phenomena in thin film systems.
Our study of zinc oxide concerns ultra-thin (< 40 nm) films of ZnO₁₋ₓ deposited on soda-lime glass. As in most oxide films, these thin films were non-stoichiometric, containing oxygen defects. Oxygen vacancies are responsible for the n-type conductivity in ZnO [START_REF] Chen | Structural, electrical, and optical properties of transparent conductive oxide ZnO:Al films prepared by dc magnetron reactive sputtering[END_REF][START_REF] Fortunato | Wide-bandgap high-mobility ZnO thin-film transistors produced at room temperature[END_REF][START_REF] Ziegler | Electrical properties and non-stoichiometry in ZnO single crystals[END_REF] and their number can be increased by vacuum annealing, thus increasing the conductivity of the material. The transport properties of the thin films are studied as a function of the carrier concentration, temperature and magnetic field. We find that space charge doping significantly alters the electronic properties of the thin film and that electrostatic doping can induce an electron concentration as high as 2 × 10¹⁴ cm⁻², bringing the ZnO₁₋ₓ thin film close to the insulator-metal transition. As in the case of graphene (Chapter 3), the carrier concentration can be finely tuned over a wide range, allowing us to study the evolution of the transport properties with respect to the carrier concentration.
ZnO₁₋ₓ device fabrication
We fabricated ZnO ultra-thin films on soda-lime glass with the RF magnetron sputtering method at room temperature, using a Zn target in an atmosphere composed of Ar and O and following the procedure described in Section 2.1.3. The total pressure in the chamber during the deposition was kept at 10⁻² mbar, while the partial pressures of Ar and O were varied to change the number of oxygen defects and the stoichiometry of the deposited ZnO thin film. In our experiments the gas composition was varied in the range from Ar = 50% – O = 50% to Ar = 80% – O = 20%. We did not perform a chemical analysis on our deposited thin films to determine the exact amount of Zn and O contained in the samples, but as an indication of the initial stoichiometry obtained we cite data from Reference [START_REF] Min | Properties and sensor performance of zinc oxide thin films[END_REF]. In this work it was found that annealing ZnO thin films produced with equal partial pressures of Ar and O (i.e. Ar = 50% – O = 50%) results in quasi-stoichiometric ZnO. As the O partial pressure is reduced with respect to the Ar partial pressure, the O/Zn ratio decreases; for example Ar = 70% – O = 30% results in a stoichiometry of ZnO0.73, with x = 0.27. The determination of the stoichiometry with wet chemical analysis on such ultra-thin films is not realistic and for our purposes the initial value of x is not crucial, since we are concerned with the maximum carrier concentration and the minimum value of R_S we can attain with space charge doping. Moreover, we found that our results are independent of the initial stoichiometry of the samples, as we shall see later. The sputtering time determines the thickness of the sample. The samples are characterized with X-ray diffraction and AFM in order to determine their crystalline properties and exact thickness. The characterization is followed by shaping of the sample and contact deposition to fabricate the device used in transport measurements.
Zinc oxide deposition on glass
Our samples were deposited using a gas mixture of Ar = 76% – O = 24% or Ar = 80% – O = 20% which, following Reference [START_REF] Min | Properties and sensor performance of zinc oxide thin films[END_REF], should result in ZnO₁₋ₓ with x ≈ 0.3 or higher. These samples should thus have an initial defect-induced carrier concentration, which was our aim since stoichiometric samples have resistances beyond our measuring range. The deposition time was varied between 25 s and 120 s, so as to obtain samples with different thicknesses. After the deposition, an annealing process in high vacuum at a temperature of ∼ 350 °C is performed in some cases. The annealing further increases the oxygen vacancies in the sample through oxygen desorption from the thin film, and at the same time increases the grain size through a sintering process.
The zinc oxide thin films are deposited on ∼ 1 cm² clean soda-lime glass substrates. For a given set of deposition parameters two to three substrates are used, since samples are needed for measurements as well as for characterization. Once the gases are injected into the chamber, the RF power is turned on and an Ar plasma is created, causing the deposition of ZnO on the glass surface (see Section 2.3.2). A total of 15 sets of samples were made, each one characterized by a different gas mixture and deposition time.
To study the effects of space charge doping on thin film ZnO samples, the deposition parameters were varied in order to modify the stoichiometry and the thickness of the samples. For a detailed study we chose to focus on four samples, labelled A, B, C and D, whose deposition parameters and thickness are listed in Table 4.1. The samples were intentionally deposited with a certain degree of non-stoichiometry in order to decrease the initial resistivity through some chemical doping. The Ar/O ratio and the deposition time were then varied to see the effect of space charge doping on samples with different characteristics (stoichiometry and thickness). From the samples of Table 4.1 we obtained the most representative results of our study, which are analyzed in this chapter. The characterization of the samples is described in the following.
X-ray diffraction, AFM and transmittance
Obtaining an X-ray diffraction signal on our ZnO samples was difficult because of the ultra-thin nature of the samples. The spectra are collected using a grazing geometry with an incident angle of 4°, from a Cu source with an incident beam wavelength of λ = 0.15405980 nm. In the diffraction patterns, the peaks located at 2θ ≈ 34° correspond to the (002) crystalline orientation [START_REF] Znaidi | Sol-gel-deposited ZnO thin films: A review[END_REF]. We also see a small peak located at 2θ ≈ 63°, corresponding to the (103) crystalline orientation. Our thin films are thus textured with a preferential (00l) orientation.
As explained earlier, an initial carrier concentration is induced by creating oxygen defects. This facilitates the practical application of space charge doping since the sample resistivity, which can be very high initially, can then be measured over the whole process. We performed the annealing step on samples B, C and D for 15 hours at a temperature of ∼ 350 °C. XRD reveals that the annealing step has an effect on the grain size, as shown in Figure 4.3. We see clearly that there is a sharpening of the peaks associated with the (002) and (103) crystalline orientations, indicating a bigger grain size. We notice also that the peaks show a slight shift due to a change in the stress in the film. The average grain size of the samples has been estimated from the Scherrer formula [START_REF] Patterson | The Scherrer formula for X-ray particle size determination[END_REF] (see also Section 2.2.4); the average grain sizes calculated after the annealing step are 12.9 nm for sample A and 12.8 nm, 11.2 nm and 10.3 nm for samples B, C and D respectively. As we saw in Section 2.2.4, performing the XRD analysis in a grazing geometry (as is the case for our samples), while increasing the count-rate through an enhanced footprint on the sample surface, can degrade the angular resolution due to the augmented source size as seen from the detector. Thus, the X-rays coming from coherent scattering from the sample which hit the detector can be emitted from an effective surface which is, in the worst case, as big as the sample surface [START_REF] Simeone | Grazing incidence X-ray diffraction for the study of polycrystalline layers[END_REF] (∼ 1 cm² in our case). The result is a broadening of the peaks in the diffraction pattern. However, we verified that this error does not contribute significantly to our measurements and can thus be neglected.
The annealing increases the grain size of the sample. For example, the full width at half maximum (FWHM) of the peak associated with the (002) direction was 0.723° before the annealing and 0.583° after the annealing for sample B, while for sample C the FWHM changed from 0.755° to 0.591° after the annealing step. The peak positions also changed, indicating a change in the strain in the film. In particular, in sample B the position of the peak changed from 34.29° to 34.81° and in sample C from 34.33° to 34.54°. This shift of the peak position to higher angles is a consequence of compressive stress caused by the annealing [START_REF] Chen | Structural, electrical, and optical properties of transparent conductive oxide ZnO:Al films prepared by dc magnetron reactive sputtering[END_REF]. AFM analysis is used to scan the surface of the samples in order to obtain topographic data and to compare the results with those obtained from XRD. An example of the grain size analysis performed with AFM on sample A is presented in Figure 4.4. The height mapping of the scanned area (upper part of Figure 4.4) reveals the grains of the sample, but more accurate information about the grain size can be obtained from the mapping of the phase, presented in the lower part of the figure. However, the estimation of the average grain size obtained from the AFM scans is higher than that obtained from XRD (≈ 28 nm instead of the 12.9 nm calculated from XRD). Since XRD gives information on the coherent diffraction domains related to the (00l) reflections, which also represent the dominant texture of the sample, we can assume that the grains have an average dimension perpendicular to the sample surface of about 12 nm. The AFM measurement, on the other hand, gives an average in-plane grain dimension of 28 nm. The grains are thus, on average, flat discs of thickness about 12 nm and diameter about 28 nm, roughly oriented with the flat surface on the substrate surface. The deduced shape of the grains of our ZnO samples is shown in a schematized lateral view in the corresponding figure.
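The grain sizes quoted above follow from the Scherrer formula D = Kλ/(β cos θ). The sketch below applies it to the (002) peak parameters given for sample B after annealing; with a shape factor K = 0.9 and no correction for instrumental broadening it gives a value (~14 nm) of the same order as the ~13 nm reported, the exact number depending on K and on the broadening correction used.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15405980, K=0.9):
    """Coherent domain size D = K*lambda/(beta*cos(theta)), in nm.
    beta is the peak FWHM in radians, theta half the diffraction angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# sample B after annealing: (002) peak at 2*theta = 34.81 deg, FWHM = 0.583 deg
print(scherrer_size(0.583, 34.81))   # ~14 nm, no instrumental correction
```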
The thickness of the ZnO samples is also estimated with AFM. A part of the sample must be etched away in order to obtain a sharp step. The edge of the sample is thus scanned with the AFM and the height of the step between the glass and the top of the sample can be measured. An example is shown in Figure 4.6 where the thickness of sample C is measured.
The RF magnetron sputtering method is a standard way to produce ZnO thin films, which are mainly used to fabricate transparent devices or to study their electronic properties. Our results on the fabrication and characterization of the samples are in excellent agreement with published data. The crystalline orientation of the film usually depends on the substrate used and on the deposition technique. With the RF sputtering method it is often found that the preferred crystalline orientation of the film is the (00l) direction [START_REF] Fortunato | Fully transparent ZnO thin-film transistor produced at room temperature[END_REF][START_REF] Fortunato | Wide-bandgap high-mobility ZnO thin-film transistors produced at room temperature[END_REF][START_REF] Hoffman | ZnO-based transparent thin-film transistors[END_REF][START_REF] Kim | Magnetron sputtering growth and characterization of high quality single crystal ZnO thin films on sapphire substrates[END_REF][START_REF] Wang | Temperature-dependent magnetoresistance of ZnO thin film[END_REF]. The grain size of our films is generally lower than what is found in the literature; the grain size estimated in Reference [START_REF] Carcia | Transparent ZnO thin-film transistor fabricated by rf magnetron sputtering[END_REF], for example, is 32 – 38 nm. However, our films are ultra-thin and grown on an amorphous substrate, whereas the samples of the aforementioned references are thicker, ranging from ∼ 70 nm to 1 µm, and in general grown on crystalline substrates.
Transmittance measurements were also used to characterize the ZnO thin films. ZnO absorbs only a very small fraction of visible light thanks to its wide band-gap: the transmittance of ZnO in the visible range can exceed 90% [START_REF] Mosquera | Exciton and core-level electron confinement effects in transparent ZnO thin films[END_REF] and varies with the thickness. Our ZnO samples show a transmittance > 94% in the visible range of wavelengths, as shown in Figure 4.7. We briefly mention the characterization of our ZnO thin films with µ-Raman spectroscopy. ZnO presents some characteristic peaks, dominated by a peak at 570 cm⁻¹ with other characteristic peaks around it [START_REF] Tzolov | Vibrational properties and structure of undoped and Al-doped ZnO films deposited by RF magnetron sputtering[END_REF]. We attempted the characterization of our ultra-thin films by means of µ-Raman spectroscopy, performing very long acquisitions with an excitation laser wavelength of 532 nm and doing multiple acquisitions in order to filter the noise. The results were not encouraging since our samples are very thin and transparent, and the obtained spectra are dominated by the substrate background. Raman spectroscopy is thus not adapted to the characterization of our samples.
Sample shaping and contact deposition
As in the case of graphene, the ZnO samples must be shaped and contacted for the transport measurements. As described in Section 2.3, to give the zinc oxide the proper shape it is necessary to cover the sample with a drop of PMMA, leaving uncovered the parts of the sample which have to be etched away.
Space Charge Doping applied to the ZnO thin film
This section is dedicated to the space charge doping of the ZnO samples. The samples are doped with the procedures described in Section 3.2.2, i.e. either by heating the sample and applying the gate voltage once T is stable, or by applying V G at room temperature and then heating the sample to the target doping temperature, with this latter technique allowing for the highest values of carrier concentration to be reached, as in the case of graphene.
Fine control of doping in the thin film
The initial resistivity of ZnO can vary considerably from sample to sample due to the different initial carrier concentration determined by the oxygen vacancies. The initial sheet resistances of the four samples under examination are listed in Table 4.2. Sample A shows a considerably high initial sheet resistance, since no post-deposition annealing was performed on this sample and the oxygen vacancy induced carrier concentration is relatively low. Samples B and C present an initial R_S of the order of a few tens of kΩ/□, as a result of a higher oxygen vacancy induced carrier concentration due to post-deposition annealing. We notice also that the initial R_S of sample B is slightly higher than that of sample C, as a consequence of the initial oxygen stoichiometry of the samples as determined by the partial pressures of argon and oxygen during deposition: from Table 4.1, in sample B P_Ar/P_O = 76/24 while in sample C P_Ar/P_O = 80/20. This difference persists after the annealing, which is reasonable since the annealing time and temperature for the two samples are the same. Sample D, which was subjected to the same annealing process as samples B and C, shows a relatively high initial R_S of the order of 10⁶ Ω/□. This is because it is the thinnest film.
The effect of space charge doping on ultra-thin ZnO films is impressive and clear: it lowers R_S by 4 orders of magnitude or more. As an example, in Figure 4.9 we show sample A, which had the highest initial R_S of our four samples, 10⁸ Ω/□. This is at the upper limit of our measurement apparatus and falls in a spectacular way during the doping procedure. The sample is heated to the doping temperature of ∼ 370 K and at that point a gate voltage of +285 V is applied. An abrupt drop in the resistivity is observed as a consequence of the space charge forming at the interface. As the doping increases, the rate of the resistivity drop slows to zero after about 10 minutes. At this point the temperature is quenched to room temperature, where R_S = 14 kΩ/□. Thus a drop in resistivity by a factor of 10⁴ is produced with space charge doping. The carrier concentration measured with the Hall effect at room temperature after the doping is 5.4 × 10¹³ cm⁻².
The carrier density can be finely controlled, exactly as was done for graphene, by quenching the system at the desired sheet resistance value. Figure 4.10 shows several R_S vs T characteristics in the range 4 K < T < 300 K at several values of carrier density. The first thing to be noticed in these graphs is that the maximum reached carrier concentration is very similar for all the samples. The maximum carrier concentration attained always exceeds 10¹⁴ electrons cm⁻², ranging from 1.51 to 2.19 × 10¹⁴ cm⁻², independent of the initial carrier concentration. Space charge doping thus induces charges in a manner similar to chemical doping, as confirmed by the similarity between the curves obtained at different doping for the four samples. The sheet resistance shown by the four samples at the highest doping is also very similar, being 2 – 3 kΩ/□ at room temperature in all cases. The variation is attributed mainly to the difference in the carrier mobility of the samples. We also notice that, as in the case of graphene, space charge doping is able to induce an extremely high doping in ZnO in a reversible way. In fact, the doping procedure is usually carried out by first inducing the highest possible carrier density in the thin film and then progressively reducing the doping. Moreover, as in graphene, the transmittance of the sample measured after the doping process is not altered and shows a behaviour similar to that in Figure 4.7.
Chemical doping with Al is the most widely used technique for doping ZnO, especially for practical applications like transparent conducting electrodes (TCEs). This technique is very effective in reducing the resistivity of ZnO. In Reference [START_REF] Chen | Structural, electrical, and optical properties of transparent conductive oxide ZnO:Al films prepared by dc magnetron reactive sputtering[END_REF] aluminium doped ZnO is produced with RF sputtering using a Zn target mixed with Al. The maximum carrier concentration induced in the sample is 9.2 × 10²⁰ cm⁻³, which is equivalent to a two-dimensional carrier concentration (in a 1 nm thick sample) of almost 10¹⁴ cm⁻², and the corresponding resistivity of the film is ∼ 4 × 10⁻⁴ Ω·cm. To compare the resistivity of our 2D samples with the published data we have to calculate the resistivity starting from the sheet resistance of the samples. In Reference [START_REF] Goldenblum | Weak localization effects in ZnO surface wells[END_REF] the thickness of a quantum well induced on the surface of ZnO is calculated to be of the order of a few nanometers. We can thus assume that in our ZnO samples the thickness which is effectively doped by the space charge is of the order of a nanometer. The resistivity can then be calculated from ρ = R_S · t, with t the thickness of the doped ZnO. The lowest R_S in all the analyzed ZnO samples is of the order of a kΩ/□, thus the resistivity of our doped samples is of the order of ρ ∼ 10⁻⁴ Ω·cm. This value is comparable with the one found for Al doped ZnO in Reference [START_REF] Chen | Structural, electrical, and optical properties of transparent conductive oxide ZnO:Al films prepared by dc magnetron reactive sputtering[END_REF]. In Reference [START_REF] Oh | Transparent conductive Al-doped ZnO films for liquid crystal displays[END_REF] slightly different results are found, the resistivity of the Al doped ZnO being ∼ 4 × 10⁻³ Ω·cm at a doping of 2 × 10²⁰ cm⁻³. Space charge doping is thus as effective as aluminium doping of ZnO, but with the immense advantage of being reversible and of not altering the transparency (which can be as low as 80% for Al doped ZnO) or the quality of the thin film. The resistance of our thin films still remains high for TCE applications, mostly as a consequence of the low electronic mobility, which could be improved by improving the crystalline quality. In the aforementioned papers the thickness of the ZnO varies from 100 nm to 0.8 mm, with bulk conduction.
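The comparison with bulk Al-doped ZnO rests on two simple conversions: integrating a 3D carrier density over the assumed ~1 nm conducting thickness gives a 2D density, and ρ = R_S·t converts sheet resistance into resistivity. A short numerical check follows, the 1 nm thickness being the assumption discussed in the text.

```python
t_cm = 1e-7          # assumed effectively doped thickness: 1 nm in cm

# 3D -> 2D carrier density for the Al-doped ZnO of the cited reference
n_3d = 9.2e20        # cm^-3
print(n_3d * t_cm)   # ~9.2e13 cm^-2, i.e. almost 1e14 cm^-2

# sheet resistance -> resistivity for our doped films
R_s = 2e3            # Ohm per square, order of the lowest values measured
print(R_s * t_cm)    # ~2e-4 Ohm*cm, i.e. of the order of 1e-4 Ohm*cm
```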
Polymer electrolyte doping has been described in References [START_REF] Yuan | High-density carrier accumulation in ZnO field-effect transistors gated by electric double layers of ionic liquids[END_REF][START_REF] Yuan | Electrostatic and electrochemical nature of lquid-gated electric-double-layer transistors based on oxide semiconductors[END_REF]. Similarly to space charge doping, polymer electrolyte doping exploits the mobile ions present in the polymer to create a space charge at the interface with the material, which dopes it. In those papers, carrier densities up to 4 × 10^14 cm^-2 or more have been measured on the surface of ZnO. The authors claim that this kind of doping is reversible, but the doping procedure raises many questions about intercalation of foreign species into the ZnO structure and about the reliability of the measured data, since the ions of the polymer electrolyte responsible for the doping are considerably mobile over a wide range of temperatures and can thus alter the measurements. Indeed, in Figure 5 of Reference [START_REF] Yuan | High-density carrier accumulation in ZnO field-effect transistors gated by electric double layers of ionic liquids[END_REF] the carrier density induced by the polymer electrolyte varies considerably with temperature. Moreover, polymer electrolyte doping is not suitable for practical applications because it alters the transparency of the film and contaminates it.
We also mention the results obtained in References [START_REF] Fortunato | Fully transparent ZnO thin-film transistor produced at room temperature[END_REF][START_REF] Fortunato | Wide-bandgap high-mobility ZnO thin-film transistors produced at room temperature[END_REF], where transparent transistors using ZnO as the active material have been realized. The whole transistor structure (gate electrode, source and drain electrodes and gate dielectric) has been made with transparent materials, but this approach does not allow very high carrier concentration levels to be reached.
Despite the very high doping (always exceeding 10^14 cm^-2) that we are able to induce, we do not observe an insulator-metal transition in our samples. This behaviour is consistent with results from the literature [START_REF] Yuan | High-density carrier accumulation in ZnO field-effect transistors gated by electric double layers of ionic liquids[END_REF][START_REF] Vai | The transition to the metallic state in polycrystalline n-type doped ZnO thin films[END_REF]. In Reference [START_REF] Vai | The transition to the metallic state in polycrystalline n-type doped ZnO thin films[END_REF] impurity-doped ZnO thin films ∼ 400 nm thick are deposited on glass by the spray pyrolysis technique, with carrier concentrations of 10^18 – 10^20 cm^-3 (corresponding to 10^12 – 10^13 cm^-2 in a 1 nm thick material). An insulating material is characterized by a temperature dependence of the conductivity such that ∂σ/∂T > 0, and the transition to the metallic state is observed when ∂σ/∂T < 0 (i.e. ∂ρ/∂T > 0). In the aforementioned paper this transition is not observed: only a flattening of the R_S vs T characteristic is seen, not a clear transition to the metallic state. This is similar to what we observe in our samples at the highest carrier concentration and, as we will see in Chapter 5, the increase of the sheet resistance at low temperature is due to the weak localization phenomenon. In Reference [START_REF] Yuan | High-density carrier accumulation in ZnO field-effect transistors gated by electric double layers of ionic liquids[END_REF] (Figure 5a), at the highest carrier concentration a metallic-like behaviour (∂ρ/∂T > 0) appears at high temperature (150 K < T < 250 K) for polymer electrolyte doped ZnO. However, in this range of temperature the authors state that the mobility of the ions in the polymer is still significant and contributes to charge transport; in fact, below 150 K, where ion mobility is negligible, the sample shows insulating behaviour. Thus we conclude that the insulator-to-metal transition in ZnO is not easy to induce and to date has not been clearly observed. Later in this chapter we examine this problem with the concept of the mobility edge.
Space charge doping can thus induce very high electron densities in ZnO thin films, comparable with those obtained by the most widely used techniques such as chemical doping. Furthermore the process is perfectly reversible and the transparency of the thin film is not altered. In a mechanically stable material like ZnO, even extremely high doping does not damage the material and allows perfect reversibility of the doping process.
Carrier scattering mechanism
Understanding the main scattering processes in ZnO thin films, doped or not, is essential for understanding the transport properties exhibited by these samples. Generally speaking, in a polycrystalline thin film there are three main scattering mechanisms: scattering from grain boundaries, from thermal lattice vibrations and from ionized and neutral impurities [START_REF] Zhang | Scattering mechanisms of charge carriers in transparent conducting oxide films[END_REF]. However, the scattering cross section of neutral impurity scattering is small compared to that of ionized impurities and can be neglected. We can thus express the total mobility of the carriers as:
\frac{1}{\mu} = \frac{1}{\mu_{gb}} + \frac{1}{\mu_{ph}} + \frac{1}{\mu_{i}} \qquad (4.1)
where µ gb is the mobility limited by the grain boundaries, µ ph is the mobility limited by phonons and µ i is the mobility limited by the ionized impurities.
Lattice phonon scatterimg
Scattering from lattice vibrations can be the dominant scattering mechanism in polycrystalline thin films at high temperature. The mobility at high temperature [START_REF] Zhang | Scattering mechanisms of charge carriers in transparent conducting oxide films[END_REF] should then behave as

\mu_{ph} \propto \frac{1}{T} \qquad (4.2)

so µ should decrease with increasing temperature in the high-temperature regime. However, this behaviour is not relevant to our data since we shall analyze data collected at low temperature (< 50 K).
Grain boundary scattering
Our ZnO samples are, as we saw, formed by aggregated crystalline grains. The junction between two crystallites is characterized by lattice-defect-induced trapping states which compensate the doping in the material by trapping electrons from the conduction band. A depletion region is thus formed around the grain boundaries, with an associated potential barrier which can strongly affect the motion of conduction electrons. According to the Petritz equation [START_REF] Petritz | Theory of photoconductivity in semiconductor films[END_REF], the expression for the mobility in the presence of grain boundary scattering is

\mu_{gb} = \mu_0 \, T^{-1/2} \, e^{-\phi_b / k_B T} \qquad (4.3)
where µ 0 can be considered as the mobility inside the grain, φ b is the potential barrier height at the grain boundary and k B is the Boltzmann constant.
If the main scattering mechanism is grain-boundary related, a plot of ln(µT^{1/2}) vs 1/T should give a straight line. Figure 4.11 shows such a plot for our samples. The mobility is the Hall mobility extracted from the Hall measurements performed at 50, 25, 10 and 4 K. Two sets of data are displayed for each sample, one corresponding to high doping and one to low doping (relative to the range of doping we studied). In our thin film samples grain boundaries do not play a dominant role in the carrier scattering, since all the curves in the plot of Figure 4.11 deviate from a linear behaviour.
Figure 4.11: ln(µT^{1/2}) vs 1/T plot for the four analysed samples. For each sample we display two curves: one corresponding to high doping and one to low doping. The non-linearity of the curves suggests that grain boundaries are not the main scatterers in the carrier transport.
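The linearity test behind Figure 4.11 can be sketched as follows (Python; the mobility values here are placeholders, while the real inputs are the Hall mobilities measured at 50, 25, 10 and 4 K):

import numpy as np

# Petritz model test (Equation 4.3): if grain-boundary scattering dominates,
# ln(mu * T^1/2) should be linear in 1/T.
T  = np.array([50.0, 25.0, 10.0, 4.0])     # measurement temperatures (K)
mu = np.array([4.0, 3.5, 3.0, 2.5])        # Hall mobilities (cm^2 V^-1 s^-1), placeholder values

x = 1.0 / T
y = np.log(mu * np.sqrt(T))
slope, intercept = np.polyfit(x, y, 1)     # the slope would give -phi_b / k_B
residuals = y - (slope * x + intercept)
print("max deviation from linearity:", np.abs(residuals).max())
# A clear deviation from linearity, as in Figure 4.11, indicates that grain
# boundaries are not the dominant scatterers.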
However, even if not dominant, grain boundaries do play a role in the transport properties of our ZnO thin films. According to the discussion in Reference [START_REF] Ellmer | Carrier transport in polycrystalline transparent conductive oxides: a comparative study of zinc oxide and indium oxide[END_REF], the expression of φ b in Equation 4.3 has the form:
\phi_b =
\begin{cases}
\dfrac{e^2 Q_t^2}{8 \varepsilon \varepsilon_0 n_s}, & L n_s > Q_t \\[2mm]
\dfrac{e^2 L^2 n_s}{8 \varepsilon \varepsilon_0}, & L n_s < Q_t
\end{cases}
\qquad (4.4)
where L is the grain size, Q_t is the charge carrier trap density at the boundary and n_S is the carrier density. We thus have two limiting cases: (i) at high carrier density, at a fixed temperature the barrier height decreases with increasing n_S; (ii) at low carrier density the barrier height increases with increasing carrier density. Thus in case (i) the mobility increases with n_S, while in case (ii) it decreases with n_S. Figure 4.12 shows the dependence of the mobility on the carrier density in our samples at 25 K. We see that the mobility increases with carrier concentration for all the samples. This effect is however secondary in our thin films, as shown in Figure 4.11 and as explained in the following.
Ionized impurity scattering
In (conducting) oxides in general, ionized impurities play an essential role in the scattering of charge carriers. Generally speaking, ionized impurities can be foreign atoms introduced into the crystalline network as dopants (in doped films) or oxygen vacancies (as in our ZnO samples). This scattering mechanism can again be recognized from the temperature dependence of the mobility, which should vary as [START_REF] Agnihotry | Studies on e-beam deposited transparent conductive films of In 2 O 3 :Sn at moderate substrate temperatures[END_REF][START_REF] Bansal | Metal-semiconductor transition and negative magneto-resistance in degenerate ultrathin tin oxide films[END_REF]

\mu_{ii} \propto T^{3/2} \qquad (4.5)

Figure 4.13 shows the Hall mobility µ vs T^{3/2} for the four analysed samples, measured at 50, 25, 10 and 4 K. The carrier density indicated in the plot for each curve is measured at room temperature.
The linear behaviour of the Hall mobility with respect to T 3/2 clearly suggests that the ionized impurities govern the scattering process in the ZnO thin film. Since no foreign atoms are added as dopants, the ionized impurities can only be the oxygen vacancies in the ZnO thin film.
Variable Range Hopping and mobility edge
The transport properties of disordered systems (like our ZnO thin films) can be studied in terms of localized and extended states [START_REF] Mott | Conductivity, localization, and the mobility edge[END_REF]. In a very simplified one-dimensional picture, the potential profile experienced by the charges in an ordered system is a periodic array of identical potential wells, while in a disordered system the depth of the wells, instead of being uniform, is spread over a range of energies V_0. It was shown by Anderson [114] that if V_0/B ≫ 1 (with B the bandwidth) all the states in the band are localized and take the form
\Psi = \left[ \sum_n C_n \, e^{i\phi_n} \, \psi_n \right] e^{-\alpha r} \qquad (4.6)
where the part of Equation 4.6 in brackets is the wavefunction of the electrons in the absence of disorder and the exponential factor outside the brackets shows how the wavefunction decays with the distance from the well. Thus when states are localized, conduction can be obtained only by thermally activated hopping between the wells. In terms of energy, the localized states are separated from the extended states by a particular energy value E_c which is called the mobility edge. We can distinguish two cases: (i) the Fermi energy lies above E_c, so the mobility depends only slightly on phonons and in general the conductivity has a metallic-like behaviour (∂σ(T)/∂T < 0); (ii) if E_F < E_c the electrons have a thermally activated mobility and the conductivity as a function of temperature takes the form [START_REF] Mott | The Anderson transition[END_REF], at high temperature,
σ(T ) = σ 0 exp(-(E c -E F )/kT ) (4.7)
with k the Boltzmann constant. At low temperature the conductivity takes the form

\sigma(T) = \sigma_0 \exp\!\left(-A / T^{1/(1+D)}\right) \qquad (4.8)
where D is the dimensionality of the system. Thus in three dimensions the dependence of ln(σ(T )) with respect to the temperature would be linear with respect to T -1/4 , while in the 2D case it would be linear with respect to T -1/3 .
Variable range hopping in ZnO thin films
Our ZnO thin films can be considered disordered systems since they are characterized by strong polycrystallinity. It is thus reasonable to analyze the scattering phenomena in our samples in terms of variable range hopping (VRH). As said before, VRH is a consequence of disorder in materials. In this situation the potential seen by an electron travelling through the material is characterized by potential wells of unpredictable depth where the electrons are trapped, and only thermal energy can help the electrons to get out of the wells. VRH thus manifests itself as an important increase in the resistivity at low temperature, as described by Equation 4.8. In principle it is possible to discriminate whether the electronic transport is confined in two dimensions or is three-dimensional by plotting ln(R_S) vs T^{-1/(1+D)} (with D the dimensionality of the system), as shown in Figure 4.15. The upper part of the figure shows the 2D case, the lower part the 3D case. It is however quite hard to discriminate between 2D and 3D transport, since all the curves (in both cases) have a linear behaviour in the low-temperature part, so the dimensionality of the transport cannot be determined from these considerations. Nevertheless, since we do observe the linear dependence of ln(R_S) on T^{-1/(1+D)} in both graphs, the transport can be assumed to be of the VRH type.
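The linearity comparison used for Figure 4.15 can be summarized by the following minimal sketch (Python; the R_S(T) values are placeholders standing in for the measured low-temperature data):

import numpy as np

# VRH test (Equation 4.8): ln(R_S) should be linear in T^(-1/(1+D)),
# with D = 2 (exponent -1/3) or D = 3 (exponent -1/4).
T   = np.array([4.0, 6.0, 8.0, 10.0, 15.0, 20.0])            # K
R_S = np.array([9.0e3, 7.5e3, 6.8e3, 6.3e3, 5.7e3, 5.4e3])   # ohm/sq, placeholder values

for D, label in [(2, "2D (T^-1/3)"), (3, "3D (T^-1/4)")]:
    x = T ** (-1.0 / (1 + D))
    y = np.log(R_S)
    slope, intercept = np.polyfit(x, y, 1)
    rms = np.sqrt(np.mean((y - (slope * x + intercept)) ** 2))
    print(f"{label}: rms residual = {rms:.2e}")
# Both exponents give nearly linear plots, so VRH is present but the
# dimensionality cannot be discriminated in this way.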
Mobility edge

It is possible to identify a certain energy E_c which marks the transition from localized to extended states, called the mobility edge. If the Fermi energy is smaller than E_c the mobility of the charge carriers is thermally activated, while it becomes independent of temperature if E_F > E_c. In the words of Mott [START_REF] Mott | The mobility edge since 1967[END_REF]:
A mobility edge is defined as the energy separating localised and non-localised states in the conduction or valence bands of a non-crystalline material, or the impurity band of a doped semiconductor. It thus marks the transition from the insulating to the metallic state. In Figure 4.10 we see that all samples show insulating behaviour, because the resistance increases as the temperature decreases. Clearly, however, with increasing doping a metal-insulator transition is approached without being reached at the highest doping values attained; in other words, the Fermi energy remains below the mobility edge. From Equation 4.7 it is in principle possible to evaluate the energy ∆E separating the Fermi level from the mobility edge E_c by plotting ln(σ(T)) as a function of 1/T and taking the slope of the straight line obtained in the high-temperature part of the curve. This was done for samples B, C and D and the result is shown in the corresponding figure. We see that ∆E is very small, of the order of a few meV for the highest doping. This is not realistic, since such a small gap would be overcome by thermal excitation and the electrons would be delocalized, so the absolute value of ∆E is probably not significant. ∆E is positive, indicating that E_F is below the mobility edge, and it becomes smaller when the doping increases, confirming that E_F is approaching the region of delocalized states. This is an aspect which should be investigated in more detail because in all probability the transition to the metallic state is not sharp but gradual (as mentioned in [START_REF] Mott | Conductivity, localization, and the mobility edge[END_REF]).
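A minimal sketch of this ∆E extraction is given below (Python, using Equation 4.7 with placeholder high-temperature conductivity values; the real inputs are the measured σ(T) curves):

import numpy as np

# Extract Delta_E = E_c - E_F from the high-temperature slope of ln(sigma) vs 1/T
# (Equation 4.7): sigma = sigma_0 * exp(-Delta_E / kT).
k_B_eV = 8.617e-5                                   # Boltzmann constant, eV/K
T      = np.array([200.0, 230.0, 260.0, 300.0])     # high-temperature points (K)
sigma  = np.array([3.0e-4, 3.2e-4, 3.4e-4, 3.6e-4]) # sheet conductance, S/sq (placeholders)

slope, _ = np.polyfit(1.0 / T, np.log(sigma), 1)    # slope = -Delta_E / k_B
delta_E = -slope * k_B_eV
print(f"Delta_E ~ {delta_E * 1e3:.1f} meV")         # a few meV, as found for the real data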
The study of the electronic transport properties of transparent oxides is important for practical applications. In fact, the understanding of the scattering mechanisms involved in the electron motion is crucial for the design of efficient devices. Our conclusions on the scattering mechanisms in ZnO are consistent with previously published work. For instance, in Reference [START_REF] Huang | Variablerange-hopping conduction processes in oxygen deficient polycrystalline ZnO films[END_REF] the authors conducted a study on oxygen-deficient ZnO thin films and concluded that the electronic transport is dominated at low temperature by VRH. The same conclusions are drawn for chemically doped samples, as in References [START_REF] Lin | Variable range hopping and thermal activation conduction of Y-doped ZnO nanocrystalline films[END_REF][START_REF] Han | Hopping conduction in Mn-doped ZnO[END_REF] where the resistivity of ZnO doped respectively with Y and Mn is studied as a function of temperature and VRH was found to be active in all cases. However, as shown before, the transition to the metallic state (i.e. the transition from localized to extended states) is not easy to accomplish in ZnO and no clear evidence for it has yet been reported in the literature.
We can thus conclude that in (electrostatically or chemically) doped ZnO thin films VRH is a key element of the electronic transport. Eventually, extremely high doping can bring the system to a condition where the mobility is independent of temperature, i.e. where the states occupied by the electrons are delocalized, but this situation has not yet been experimentally observed. In our study on space charge doped ultra-thin ZnO films we were able to induce carrier concentrations such that the transition to the metallic state was very close.
2D nature of the doped film
Even if the phenomenon of VRH discussed in Section 4.3.1 does not allow us to uniquely confirm that the carrier transport occurs in two dimensions, it is nevertheless reasonable to think that it does. First of all, as discussed before, calculations on the width of ZnO quantum wells done in Reference [START_REF] Goldenblum | Weak localization effects in ZnO surface wells[END_REF] suggest that the electron accumulation at the surface of a ZnO crystal extends over a few nanometers. We can thus imagine that the Na+ space charge formed at the glass/ZnO interface induces a strong accumulation layer at the interface itself. Moreover, and most importantly, the results we obtained from the four representative samples are independent of the sample thickness and stoichiometry, being very similar in terms of maximum n_S, minimum R_S, values of mobility and, as will be shown in Chapter 5, behaviour of the resistivity under the effect of a magnetic field.
Conclusions on space charge doping of ZnO
Zinc oxide thin film samples were deposited on glass substrates with the RF magnetron sputtering technique at room temperature, fully characterized using XRD and AFM measurements and prepared as devices ready for space charge doping and transport measurements.
The results presented in this chapter show that space charge doping can be used to dope ultra-thin films of ZnO to extremely high carrier concentrations exceeding 2 × 10^14 cm^-2. This fact is of particular interest because it opens a completely new perspective for the use of this doping technique, firstly for a wide variety of materials deposited by diverse techniques on glass substrates and secondly for a variety of applications, for example in the field of Transparent Conducting Electrodes where transparent oxides are normally doped exclusively by chemical means in order to increase their conductivity. We showed that space charge doping can obtain results superior to other techniques in terms of maximum doping of the ZnO thin film without altering the sample quality and transparency, in particular with respect to chemical or polymer electrolyte doping. The resistivity of the space charge doped samples is also comparable with the lowest reported resistivity, but the ultimate resistance of our samples is still high for TCE applications; this problem could eventually be overcome with the use of higher quality thin films.
Hall measurements were performed at different temperatures in the range 4–50 K for all the carrier densities in each sample in order to determine the variation of the Hall mobility (µ_H) with temperature. The study of µ_H vs T revealed that the main scattering mechanism for electrons is charged impurity scattering, which we identified with the oxygen vacancies present in the ZnO. Grain boundaries also have an effect on the transport properties, but they do not provide the dominant scattering mechanism in the material.
The carriers in our ZnO thin films obey the variable range hopping mechanism at low temperature. In fact, the random potential induced by the disordered lattice of ZnO is such that the electron wavefunctions are localized for all doping levels that we can attain. An insulator-metal transition should be observed if E_F crosses a certain energy value E_C, called the mobility edge, separating localized from delocalized states. Our measurements show that with space charge doping we attained doping values which place our samples at the threshold of this transition. Moreover, the close similarity of the results on samples of different thickness and stoichiometry suggests that the electronic transport is confined in two dimensions at the interface with the glass substrate.
Chapter 5
Magneto-transport and spin orbit coupling
In Chapter 4 we have detailed the procedure for doping ZnO thin films to very high carrier concentrations and discussed scattering mechanisms which can intervene to explain the transport properties that we observe. These properties are principally determined, we found, by the impurity scattering mechanism. At low temperatures, and possibly in the presence of a magnetic field, electronic transport in a disordered medium can exhibit specific quantum interference effects related to coherent transport and to the electronic spin. These effects can significantly change the conductivity. The first aspect gives rise to negative magnetoresistance through the phenomenon of weak localization, and the second aspect can suppress this behaviour through the phenomenon of weak anti-localization. The measurement of both these phenomena gives access to information on the dimensionality of the observed phenomenon and to quantities like the spin relaxation time. Spin-orbit coupling in ZnO is thought to be small, because the valence band splitting is of the order of a few meV, almost two orders of magnitude less than in GaAs for example [START_REF] Prestgard | Temperature dependence of the spin relaxation in highly degenerate ZnO thin films[END_REF]. Nevertheless experiments [START_REF] Lü | Spin relaxation in n -type ZnO quantum wells[END_REF][START_REF] Andrearczyk | Spin-related magnetoresistance of n-type ZnO:Al and Zn1 -xMn x O:Al thin film[END_REF][START_REF] Prestgard | Temperature dependence of the spin relaxation in highly degenerate ZnO thin films[END_REF][START_REF] Fu | Spin-orbit coupling in bulk ZnO and GaN[END_REF] have found significant effects of spin-orbit coupling in ZnO. This could be justified by the wurtzite structure of ZnO, as argued in [START_REF] Harmon | Theory of electron spin relaxation in ZnO[END_REF]. ZnO is also widely studied because of predictions of possible applications in spintronic devices with appropriate doping. It is known that quantum interference phenomena like the aforementioned weak localization exist in ZnO, and more rarely weak anti-localization has also been observed. In the work of Reference [START_REF] Goldenblum | Weak localization effects in ZnO surface wells[END_REF] for instance, the magneto-transport properties of high carrier concentration (up to 5 × 10^14 cm^-2) ZnO surface layers produced with a variety of methods are studied, and the authors found that the samples show the weak localization phenomenon up to room temperature. Spin-related phenomena have also been studied [START_REF] Prestgard | Temperature dependence of the spin relaxation in highly degenerate ZnO thin films[END_REF][START_REF] Liu | Room-temperature electron spin dynamics in free-standing ZnO quantum dots[END_REF][START_REF] Ghosh | Room-temperature spin coherence in ZnO[END_REF] and it was found that in ZnO systems the spin coherence time can be of the order of 10 ns and can be measured up to room temperature.
This chapter is dedicated to the study of the magneto-transport properties of space charge doped zinc oxide thin-films as a function of the temperature and doping. We analyse conductivity as a function of the magnetic field and temperature and derive information about the mean free path, inelastic scattering times and spin-coherence time.
Weak localization
Weak localization (WL) is a quantum interference phenomenon experienced by charges moving in a disordered medium at low temperature. WL gives rise to a negative correction to the conductivity due to quantum interference effects, causing a change of sign in the slope of the conductivity with respect to temperature in a metallic sample at low temperatures. The characteristic signature of this phenomenon is the suppression of these quantum effects on the application of a magnetic field. Figure 5.1 shows an example from one of our macroscopic CVD graphene samples on borosilicate glass, with a carrier concentration of 1.7 × 10^12 cm^-2. The sample is metallic and its resistance decreases as the temperature is lowered (∂R_S/∂T > 0) down to ∼ 80 K. Below 50 K the sample appears to be insulating, as the resistance now increases as the temperature decreases. The behaviour in this specific case was shown to be weak localization in graphene.
Backscattering
In the classical treatment of the motion of a particle, an electron starting at time t = 0 and position r = 0 has a certain probability to be found at a point r after a time t ≫ τ, with τ the transport relaxation time. A quantum mechanical treatment, however, results under certain conditions in an enhanced probability for the wavefunction at the starting point r = 0 after a time t. Following the classical treatment, the probability w for the electron to travel from one point to another is given by the sum of the squares of the probability amplitudes A_i of all the possible paths connecting the two points, while in a quantum mechanical treatment, where the wave function has a phase as well as an amplitude, it is the square of the sum of the probability amplitudes of all the possible paths [START_REF] Germanenko | Spin effects and quantum corrections to the conductivity of two-dimensional systems[END_REF]. In other words we have:
w =
\begin{cases}
\sum_i |A_i|^2, & \text{classical case} \\
\left| \sum_i A_i \right|^2, & \text{quantum mechanical case}
\end{cases}
\qquad (5.1)
Since the possible paths an electron can travel are random, the phase acquired during a given path is also random. Most terms in the second row of Equation 5.1 thus cancel each other out. Paths travelled along opposite directions of a closed loop however will sum up. Thus in our simple model the probability of finding an electron at the starting point is increased by a factor of 2 with respect to the classical case:
w =
\begin{cases}
|A_1|^2 + |A_2|^2 = 2A^2, & \text{classical case} \\
|A_1|^2 + |A_2|^2 + 2|A_1 A_2| = 4A^2, & \text{quantum mechanical case}
\end{cases}
\qquad (5.2)
where A_1 and A_2 are the probability amplitudes for the electron travelling along the same path in opposite directions and A is the modulus of A_1 and A_2. This is the weak localization effect. Figure 5.2 illustrates the WL phenomenon schematically. An electron, subjected to various elastic scattering events, can follow the path illustrated in Figure 5.2 by solid lines. As we see from Equation 5.2, when we consider the phase of the electrons the time-reversed path (dashed lines) gives rise to a positive interference, thus making the closed loop the preferred path for the electrons and increasing the probability of finding the electron at the starting point.
The total effect is that the electron is backscattered, decreasing the mobility of the charge carriers and thus producing a negative correction to the conductivity.
Temperature dependence of the conductivity
At low temperature, the correction to the conductivity due to the weak localization effect is dependent on the dimensionality of the system. The temperature dependence of the inelastic scattering time is τ φ ∝ T -p , where p depends on the inelastic scattering mechanism. Thus from theoretical considerations the conductivity takes the form [START_REF] Lee | Disordered electronic systems[END_REF]:
\sigma(T)_{3D} = \sigma_0 + \frac{e^2}{a\pi^2\hbar}\, T^{p/2}
\sigma(T)_{2D} = \sigma_0 + \frac{p}{2}\,\frac{e^2}{\pi^2\hbar}\, \ln\!\left(\frac{T}{T_0}\right)
\sigma(T)_{1D} = \sigma_0 - \frac{a e^2}{\pi\hbar}\, T^{-p/2}
\qquad (5.3)
In our doped ZnO samples the temperature dependence of the conductivity is clearly representative of 2D transport; from the 2D case in Equation 5.3, the correction to the conductivity is expected to vary logarithmically with temperature.
Magneto-conductivity
Theoretical considerations
Effect of the weak localization

Without quantum interference phenomena, i.e. for example at high temperature, the conductivity has a quadratic dependence on the magnetic field [START_REF] Sommerfeld | The statistical theory of thermoelectric, galvano-and thermomagnetic phenomena in metals[END_REF]. However, in two dimensions, when quantum interference due to weak localization takes place, the conductivity of the sample deviates from this behaviour by a positive logarithmic correction [START_REF] Lee | Disordered electronic systems[END_REF]. As pointed out before, localization is due to the constructive interference of the wavefunctions along the same closed path in opposite directions. These will be in phase at the starting point of the closed loop and interfere constructively, increasing the probability for a carrier to 'stay in the same place', or localize. The application of a magnetic field B perpendicular to the current flux introduces a dephasing term δφ = 2πBS/Φ_0, where S is the surface of the closed loop and Φ_0 is the flux quantum. At low B the dephasing is small and the quantum interference is still strong, but as B increases the constructive interference becomes weaker due to the increasing dephasing. This results in a reduction of the resistivity as a function of the magnetic field, or negative magnetoresistance, which is a signature of weak localization. The rigorous expression for the correction to the conductivity coming from the theoretical treatment can be found in Reference [START_REF] Hikami | Spin-orbit interaction and magnetoresistance in the two dimensional random system[END_REF]:

\frac{\Delta\sigma(B)}{G_0} = -\Psi\!\left(\frac{1}{2} + \frac{B_{tr}}{B}\right) + \Psi\!\left(\frac{1}{2} + \frac{B_\phi}{B}\right) \qquad (5.4)
where Ψ is the digamma function, G_0 = e^2/(2π^2ℏ) is the conductance quantum, B_tr = ℏ/(4eDτ) (τ is the scattering time associated with the mean free path L_tr and D is the diffusion coefficient) and B_φ = ℏ/(4eDτ_φ), with τ_φ the inelastic scattering time. ∆σ(B) is the correction to the conductivity due to the interference phenomenon and is related to the magneto-conductivity by the expression ∆σ(B) − ∆σ(0) = σ(B) − σ(0), where ∆σ(0) and σ(0) are respectively the correction to the conductivity and the conductivity at zero applied magnetic field. The scattering time τ associated with the mean free path can be considered as the elastic scattering time only at very low temperature, and this will be taken into account in the following discussion.
Influence of spin-orbit coupling on the magneto-conductivity
In a material with spin-orbit coupling, that is coupling between the spin and the momentum of the carrier, the above picture undergoes further change.
The rotation of the spin along the two opposite directions of the closed loop is also opposite, introducing a dephasing which leads to destructive interference. Spin-orbit coupling can thus have a significant effect on the low-temperature magneto-conductivity. Spin-flip scattering (or spin relaxation and spin-lattice relaxation) can be due to different mechanisms, the most important of which are the Elliott-Yafet mechanism, the D'Yakonov-Perel' mechanism and the Rashba effect. They will all be discussed in detail later in this chapter. As discussed above, in the presence of spin-orbit coupling the temperature dependence of the conductivity is still subject to a quantum correction arising from the alteration of the constructive interference at the starting point of the closed path. In this case the correction is still logarithmic but is now positive (rather than negative as in the case of weak localization) [START_REF] Germanenko | Spin effects and quantum corrections to the conductivity of two-dimensional systems[END_REF] and is a direct consequence of the fact that the spin-flip time is shorter than the inelastic scattering time, τ_SO ≪ τ_φ. The expression for the magneto-conductivity is radically altered by spin-orbit coupling. The correction to the magneto-conductivity taking this additional contribution into account has been examined by several authors. In the case of a perpendicular magnetic field, Zeeman coupling can be neglected and a general expression is given for example in Reference [START_REF] Maekawa | Magnetoresistance in two-dimensional disordered systems: effects of Zeeman splitting and spin-orbit scattering[END_REF]:
\frac{\Delta\sigma(B)}{G_0} = -\Psi\!\left(\frac{1}{2} + \frac{B_{tr}}{B}\right) + \frac{3}{2}\Psi\!\left(\frac{1}{2} + \frac{B_\phi + B_{SO}}{B}\right) - \frac{1}{2}\Psi\!\left(\frac{1}{2} + \frac{B_\phi}{B}\right) \qquad (5.5)
where B_SO = ℏ/(4eDτ_SO) is the characteristic field associated with the spin-flip time. Equation 5.5 then describes the behaviour of the magneto-conductivity in the presence of WAL. However, though very general because all scattering processes are explicitly taken into account, the practical application of this expression to analyze experimental data can be hazardous because it depends on as many as three parameters (B_tr, B_φ and B_SO). The fitting problem can thus be ill-conditioned and the extraction of the three parameters unstable. Another problem that must be confronted is the need to estimate a diffusion coefficient in order to extract characteristic scattering times and lengths.
As a first step we used Equation 5.5 to fit our data. We could thus extract three characteristic fields but in some cases the characteristic field B tr was found to be of the order of the applied magnetic field or B φ or B SO . This translates to a time τ associated to the mean free path of the order of τ φ and τ SO , which is questionable.
Another expression for the magneto-conductivity can be found in References [START_REF] Iordanskii | Weak localization in quantum wells with spin-orbit interaction[END_REF][START_REF] Caviglia | Tunable Rashba spin-orbit interaction at oxide interfaces[END_REF]:
\frac{\Delta\sigma(B)}{G_0} = \Psi\!\left(\frac{1}{2} + \frac{B_\phi + B_{SO}}{B}\right) + \frac{1}{2}\Psi\!\left(\frac{1}{2} + \frac{B_\phi + 2B_{SO}}{B}\right) - \frac{1}{2}\Psi\!\left(\frac{1}{2} + \frac{B_\phi}{B}\right) - \ln\!\left(\frac{B_\phi + B_{SO}}{B}\right) - \frac{1}{2}\ln\!\left(\frac{B_\phi + 2B_{SO}}{B}\right) + \frac{1}{2}\ln\!\left(\frac{B_\phi}{B}\right) \qquad (5.6)
In this expression only two parameters, B_φ and B_SO, appear, making their extraction from experimental data more stable. This reduction of the number of parameters is based on the hypothesis that the characteristic field B_tr is much higher than the applied magnetic field in the measurement. This translates to a time τ associated with the mean free path which is much smaller than τ_φ and τ_SO. τ can be estimated from the expression of the mobility (or conductivity) in terms of τ and the effective mass of the charge carriers in ZnO, m_eff = 0.25 m_e, obtained from Reference [START_REF] Morkoç | Zinc oxide: fundamentals, materials and device technology[END_REF]:

\tau = \frac{\mu m}{e} \qquad (5.7)
As we shall see later, the condition τ ≪ τ_φ, τ_SO is indeed satisfied. This brings us to the problem of extracting these characteristic times from experimental data, for which an expression for the diffusion coefficient is needed. One could be tempted to use the Einstein relation (D = µk_B T/e, with µ the mobility of the charge carriers, k_B the Boltzmann constant, T the temperature and e the electron charge), but in our case a more appropriate expression is D = v_F^2 τ/2 [START_REF] Germanenko | Spin effects and quantum corrections to the conductivity of two-dimensional systems[END_REF], with v_F the Fermi velocity, estimated in the approximation of a perfect two-dimensional electron gas with a single parabolic conduction band. In this case v_F = ℏk_F/m, where ℏ is the reduced Planck constant, k_F = √(8πn_S) is the Fermi wavevector and n_S the carrier concentration.
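For orientation, the short Python sketch below evaluates τ, v_F and D with the formulas above, using representative values taken from this chapter (µ ≈ 5 cm^2 V^-1 s^-1, n_S = 10^14 cm^-2, m_eff = 0.25 m_e); the actual numbers depend on the sample and on the doping:

import numpy as np

e    = 1.602e-19            # electron charge, C
m_e  = 9.109e-31            # electron mass, kg
hbar = 1.055e-34            # reduced Planck constant, J*s

mu  = 5e-4                  # mobility: 5 cm^2/Vs in SI units
n_S = 1e14 * 1e4            # carrier density: 1e14 cm^-2 in m^-2
m   = 0.25 * m_e            # effective mass of electrons in ZnO

tau = mu * m / e                    # Equation 5.7
k_F = np.sqrt(8 * np.pi * n_S)      # Fermi wavevector as defined in the text
v_F = hbar * k_F / m
D   = v_F**2 * tau / 2              # diffusion coefficient of the 2D gas

print(f"tau ~ {tau:.1e} s")         # ~7e-16 s
print(f"v_F ~ {v_F:.1e} m/s")
print(f"D   ~ {D * 1e4:.1f} cm^2/s")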
Magneto-conductivity of ZnO
We measure the magneto-conductivity of our ZnO thin films as a function of temperature and doping. The range of doping is the one shown in Figure 4.10 i.e. from ∼ 10 13 to over 2 × 10 14 cm -2 while the measurements under magnetic field are usually carried out at 50, 25, 10 and 4 K. Our electromagnet facility allows us to apply a magnetic field of up to 2 T. The 2 T field is sufficient to observe significant changes in magnetoconductivity.
Given the small spin-orbit coupling in ZnO one could ask if weak antilocalization would be visible in our samples. We show that in a certain domain of doping, temperature and magnetic field we do observe weak antilocalization. More interestingly in all our samples we observe a transition from weak localization to weak anti-localization. This transition can be observed at fixed carrier concentration as temperature decreases, or at fixed temperature by lowering the carrier concentration. This transition is a textbook case of a change in the dominant scattering mechanism as a function of an external parameter.
WL to WAL transition as a function of temperature
Let us examine the correction to the magneto-conductivity ∆σ(B)/G 0 calculated in Equation 5.6 for our data. For clarity we restrict ourselves to data from sample C but as we have mentioned before, all our samples show similar behaviour. In Figure 5.4 is shown the relative variation of the magneto-resistance and its evolution in temperature in the range 4 K to 50 K while in Figure 5.5 we show the calculated ∆σ(B)/G 0 . The carrier concentration of the sample which stays fixed throughout this measurement is measured at room temperature to be 7.2 × 10 13 cm -2 . In the figure, the grey circles correspond to the correction to the conductivity calculated from ∆σ(B)/G 0 = (σ(B) -σ(0)) /G 0 and the coloured lines are the parametrized fits obtained from Equation 5.6. The parameters are the characteristic fields B φ and B SO .
A deviation from zero in this figure indicates a modification of the conductivity in the presence of the magnetic field due to the WL or WAL phenomena. Even at the relatively high temperature of 50 K we detect weak localization: the conductivity increases with the magnetic field. Since the inelastic scattering time increases with decreasing temperature, weak localization increases between 50 and 10 K. However, at 4 K the correction ∆σ(B)/G_0 to the conductivity switches sign, becoming negative. This is the signature of weak anti-localization and implies that the spin relaxation time τ_SO is now lower than τ_φ. This is the crossover between dominating scattering mechanisms that we mentioned above, which is controlled by temperature. It can be seen from the figure that the parametrized fits are of good quality and permit us to extract characteristic parameters of the scattering experienced by the carriers in each case, notably the characteristic fields which are related to the characteristic transport times τ_φ and τ_SO, while τ is estimated from Equation 5.7. We are confident that the positive magnetoresistance observed in our measurements is effectively due to the WAL effect and not to the classical magnetoresistance due to the Kohler effect, which manifests itself as a quadratic dependence of the resistivity on the magnetic field. In the first place, the Kohler effect is relevant when the charge carriers in the material have relatively high mobility, which is not the case for our ZnO samples. Secondly, the Kohler effect is visible at very high magnetic field, which again is not our case since we are able to apply a magnetic field of at most 2 T. Classical magnetoresistance is proportional to 1 + (µ × B)^2 [133], so for µ = 5 cm^2 V^-1 s^-1 and B = 2 T we get a relative change of about 10^-6, four orders of magnitude smaller than what we see.
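The estimate amounts to a one-line check (Python):

mu, B = 5e-4, 2.0                  # 5 cm^2/Vs in SI units, 2 T
print((mu * B) ** 2)               # ~1e-6: classical (Kohler) magnetoresistance is negligible here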
We use Equation 5.6 to fit our data as it allows for a comprehensive description of the magneto-conductivity and can be used to extract the information required for understanding the scattering mechanisms in the samples. Other alternatives are proposed in the literature to describe the magneto-conductivity in the presence of quantum interference phenomena, as in References [START_REF] Goldenblum | Weak localization effects in ZnO surface wells[END_REF][START_REF] Wang | Temperature-dependent magnetoresistance of ZnO thin film[END_REF][START_REF] Choo | Crossover between weak anti-localization and weak localization by Co doping and annealing in gapless PbPdO2 and spin gapless Co-doped PbPdO2[END_REF] where an approximation of Equation 5.5 is used. In the equation for the correction ∆σ(B)/G_0 used in those works the only fitting parameter is the inelastic scattering time τ_φ (thus neglecting the spin relaxation time τ_SO), and the phenomenon of weak anti-localization in Reference [START_REF] Choo | Crossover between weak anti-localization and weak localization by Co doping and annealing in gapless PbPdO2 and spin gapless Co-doped PbPdO2[END_REF] is not treated in terms of τ_SO: a negative prefactor is simply added in order to change the sign of the curve. In Reference [START_REF] Reuss | Magnetoresistance in epitaxially grown degenerate ZnO thin films[END_REF], instead, a semi-empirical expression is used to fit the negative magneto-resistance data.
Figure 5.6 shows B_φ and B_SO obtained from the fits of Figure 5.5. As expected, B_φ decreases as the temperature is lowered (B_φ ∝ 1/τ_φ ∝ T^p). B_SO does not vary with T, except at the lowest measured temperature of 4 K where B_SO becomes larger than B_φ. This transition manifests itself as the weak anti-localization effect.
The process of fitting the data with Equation 5.6 must be done with care since the values of the two parameters are dependent on each other. Several pairs of valid values can be found, which gives a domain of validity for each parameter. Several alternative sets of parameters were therefore tried to establish the domain of existence of the fits, which determines the error bars shown in Figure 5.6. In the software analysis, after the initial parameters are fitted with the least squares method and unphysical values are discarded, the parameters are varied in order to find the range in which the fit is still valid. We are thus confident of the values extracted by this procedure and of the interpretation that we deduce from them.
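A minimal sketch of this two-parameter fit is given below (Python/SciPy; the magnetic-field grid and the ∆σ data are synthetic placeholders, and in practice the least-squares result is cross-checked as just described):

import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

def delta_sigma_over_G0(B, B_phi, B_so):
    """Correction to the conductivity, Equation 5.6 (all fields in tesla)."""
    return (digamma(0.5 + (B_phi + B_so) / B)
            + 0.5 * digamma(0.5 + (B_phi + 2 * B_so) / B)
            - 0.5 * digamma(0.5 + B_phi / B)
            - np.log((B_phi + B_so) / B)
            - 0.5 * np.log((B_phi + 2 * B_so) / B)
            + 0.5 * np.log(B_phi / B))

# Placeholder "measured" correction: synthetic data generated from the model itself
B_data = np.linspace(0.05, 2.0, 40)                 # applied field up to 2 T
y_data = delta_sigma_over_G0(B_data, 0.3, 0.1)      # stands in for (sigma(B) - sigma(0)) / G_0

popt, pcov = curve_fit(delta_sigma_over_G0, B_data, y_data, p0=[0.1, 0.05])
print("B_phi = %.3f T, B_SO = %.3f T" % tuple(popt))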
The characteristic fields extracted from the fits can be translated into more physically intuitive quantities by using the diffusion coefficient D discussed above: the fields are related to the characteristic relaxation times through the definitions given after Equation 5.4. Corresponding to the behaviour already seen for the characteristic fields, while at higher temperature τ_φ is always smaller than τ_SO, there is a crossover as T approaches 4 K where the spin relaxation time becomes smaller than the inelastic scattering time.
The values found for the different samples at 4 K and at high doping (i.e. where the spin relaxation is supposed to be slower) are listed in Table 5.1. We find values of τ_SO ≈ 0.5–1.6 ps at 4 K in our samples. These are only indicative but consistently smaller than those found in earlier works. Ghosh et al. [START_REF] Ghosh | Room-temperature spin coherence in ZnO[END_REF] measure electron spin dynamics in 100 nm thick epilayer samples with time resolved Faraday rotation. Prestgard et al. [START_REF] Prestgard | Temperature dependence of the spin relaxation in highly degenerate ZnO thin films[END_REF] measure spin relaxation times with spin-polarized Hanle transport measurements in 200 nm thick films. The carrier concentrations in their highest doped samples, when translated to our 2D case, are well below those that we achieve. Ghosh et al. measure a spin relaxation time of 2 ns at 10 K, rapidly decreasing with temperature. Prestgard et al. measure a spin relaxation time of 0.15 ns at 20 K. Finally, Liu et al. [START_REF] Liu | Room-temperature electron spin dynamics in free-standing ZnO quantum dots[END_REF] report a theoretical estimation of about 6 ns for ZnO. The values we found for τ_SO at low temperature and high doping in our samples are smaller than those found in the literature, which could be attributed to the relatively low mobility of our polycrystalline samples compared to the other published works.
WL to WAL transition as a function of carrier concentration
The transition from weak localization to weak anti-localization can also be observed at a fixed temperature (4 K) by varying the carrier concentration, as shown in Figure 5.8 for samples B and C. They are representative of all our samples, in which we systematically measure this transition from WL to WAL. Figure 5.9 shows the magneto-resistance corresponding to the curves presented in Figure 5.8, normalized to the value of R_S at B = 0. We note the following points:
-the phenomenon we observe of increasing magneto-resistance (decreasing magneto-conductivity) is unambiguously associated with weak anti-localization and not with the normal parabolic increase of the resistivity with the magnetic field, because the latter is observable only at higher values of B;
-weak anti-localization is present at relatively low carrier density, meaning that in this range of doping and at 4 K the spin relaxation time is shorter than the inelastic scattering time;
-τ_φ becomes smaller than τ_SO as n_S approaches 10^14 cm^-2 and the transition to weak localization occurs (Figure 5.10); in other words B_SO decreases with increasing doping;
-the discussion above is valid only if B_tr, the characteristic field associated with the mean free path, is much larger than B_φ and B_SO, which is the assumption implicitly made in Equation 5.6. In fact, the parameter B_tr would appear in the equation if it were comparable to the other characteristic fields (as in Equation 5.5 for example), but one obtains the expression of Equation 5.6 by considering B_tr much higher than the maximum applied magnetic field. Intuitively this reflects the fact that the mean free path, which is related to the totality of scattering events, is smaller than the characteristic lengths associated with specific scattering events: inelastic or spin-flip scattering. Typical values obtained for τ, corresponding to the mean free path, are of the order of 10^-15 s, indicating once more that defect scattering is important.
Considerations on the characteristic transport lengths
From the characteristic fields obtained from the fit of the correction to the magneto-conductivity with Equation 5.6 we can extract the corresponding characteristic transport lengths. The mean free path at 4 K calculated for the four analyzed samples ranges from ∼ 1 nm at the lowest carrier concentration to ∼ 6 nm at the highest carrier concentration. The increase of L_tr with doping is due to the increasing overlap of the electron wavefunctions in the material as the carrier concentration increases. As we saw in Section 4.3, the charge carriers are subject to VRH especially at low carrier density, well in the insulating regime. We notice also that at high carrier density, where the maximum value of L_tr is attained, it is of the order of the smallest dimension of the grains, that is the thickness of the disk-shaped grains (Section 4.1.2). This is to be expected because the mean free path cannot be longer than this minimum dimension of the crystalline grain.
The calculated phase-coherence lengths vary from 16 nm to 49 nm at 4 K in going from low to high carrier concentrations. These values are higher than those of L_tr and, most importantly, at high carrier concentration they are bigger than the minimum dimension of the crystalline grain. This fact is consistent with the observation of WL and WAL. In fact, the charge carriers need to remain inside a single crystal grain of the polycrystalline film to maintain phase coherence. The only way to maintain phase coherence over such a long path (∼ 50 nm) is to travel along a loop inside the same grain, which is possible given the flat disk-like shape of the grains (Section 4.1.2) and their average diameter of about 30 nm. We can thus imagine that the situation depicted in Figure 5.2 to explain the WL effect actually happens inside one of the grains of the sample. The values obtained for the spin-coherence length L_SO at 4 K vary from 10 nm to 128 nm. In the WL regime we find, as expected, that L_SO is bigger than L_φ, meaning that the electrons travel over a distance L_φ with the same phase while the spin is preserved up to the larger distance L_SO; in other words, in this regime inelastic scattering dominates. On the other hand, L_SO is found to be smaller than L_φ when we observe WAL, i.e. in this regime spin-flip scattering dominates.
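Assuming the standard diffusion-length relation L_i = √(Dτ_i) = √(ℏ/4eB_i), which is consistent with the definitions of the characteristic fields given above, the lengths quoted here follow directly from the fitted fields; a short sketch (Python, with placeholder field values):

import numpy as np

hbar, e = 1.055e-34, 1.602e-19

def char_length_nm(B_char):
    """Characteristic length from a characteristic field: L = sqrt(hbar / (4 e B))."""
    return np.sqrt(hbar / (4.0 * e * B_char)) * 1e9

# Placeholder fields: smaller characteristic fields correspond to longer lengths
for B in (0.01, 0.1, 0.5):
    print(f"B = {B:4.2f} T  ->  L = {char_length_nm(B):6.1f} nm")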
Origin of the spin-orbit coupling
As mentioned above, spin-orbit coupling in semiconductors can originate from various mechanisms. Below we discuss the various possibilities briefly and from our data we isolate the dominant mechanism in our samples. The possibilities considered are the D'Yakonov-Perel' mechanism, the Rashba effect and the Elliott-Yafet mechanism.
D'Yakonov-Perel' mechanism
The D'Yakonov-Perel' mechanism is an efficient spin-relaxation mechanism in systems lacking inversion symmetry, which results in an effective magnetic field. Indeed, without inversion symmetry the spin-up and spin-down states are not degenerate: E_k↑ ≠ E_k↓ [START_REF] Žutić | Spintronics: Fundamentals and applications[END_REF]. The conduction band states experience ordinary impurity and phonon scattering. The wavevector k changes at every scattering event, and so does the effective magnetic field acting on the spin that comes from the spin-orbit coupling. The fluctuating magnetic field is responsible for the spin relaxation and its magnitude is proportional to the conduction-band spin splitting. The D'Yakonov-Perel' mechanism can be recognized as active in the material if there is a direct correlation between the inverse of the spin relaxation time, 1/τ_SO, and the time τ associated with the mean free path. In our samples the quantity 1/τ_SO is inversely, and not directly, correlated to τ, and we can conclude that the D'Yakonov-Perel' mechanism is not the dominant mechanism for spin relaxation in our samples. Earlier published works used spin-polarized Hanle transport experiments or optical orientation experiments [START_REF] Prestgard | Temperature dependence of the spin relaxation in highly degenerate ZnO thin films[END_REF][START_REF] Harmon | Theory of electron spin relaxation in ZnO[END_REF] to measure spin relaxation in ZnO. They identified the D'Yakonov-Perel' mechanism as the dominant mechanism, arguing that the conditions in ZnO (large band gap and small spin-orbit coupling) were compatible with it. Some differences exist between these experiments and ours, in that they used samples with a very high degree of crystallinity and a thickness above 100 nm. However, the main difference resides in the fact that our carrier concentrations are clearly higher than the ones in their samples. Secondly, as shown by [START_REF] Harmon | Theory of electron spin relaxation in ZnO[END_REF], the D'Yakonov-Perel' mechanism becomes dominant above 50 K while all our measurements pertain to temperatures below 50 K.
Rashba effect
The Rashba effect can be classified as a particular case of the D'Yakonov-Perel' mechanism. In a two-dimensional electron gas (2DEG) formed by interfacial band bending in a semiconductor, when the spin-orbit coupling is governed by the Rashba effect a supplementary term is added to the Hamiltonian:
H SO = α(σ × k) • n z (5.10)
where σ are the Pauli matrices, k is the quasi-wavevector of the electrons, n z is a unit vector perpendicular to the surface and α is the spin-orbit coupling constant [START_REF] Bychkov | Oscillatory effects and the magnetic susceptibility of carriers in inversion layers[END_REF] which is proportional to the electric field perpendicular to the 2DEG. The spin relaxation time is related to the spin-orbit coupling constant by the relation 1/τ SO ∝ α 2 . It is reasonable to think that the Rashba effect can be present in the doped ZnO thin-films since the doping is electrostatically induced by the ions accumulated at the interface between the glass and the zinc oxide. In this particular case, the carrier density at the interface is proportional to the perpendicular electric field seen by the electrons, and hence, if the Rashba effect is active, we should see an increase in spin orbit scattering with increasing carrier density. However, we can exclude that this is the dominant effect since we can see from Figure 5.12 that B SO (which is proportional to 1/τ SO ) decreases with increasing carrier concentration which is the opposite of the behaviour expected from the Rashba effect.
Elliott-Yafet mechanism
In the Elliott-Yafet mechanism spin relaxation is caused by scattering with the ions of the lattice or impurities at low temperature or phonons at high temperature. At every scattering event there is a finite probability that the electron changes its spin orientation. It is thus reasonable to think that this mechanism is active in our ZnO thin films since oxygen vacancies (present in our samples) represent an important source of scattering.
From the rigorous treatment of the problem of electrons travelling in a crystal in the presence of spin-orbit coupling [START_REF] Žutić | Spintronics: Fundamentals and applications[END_REF][START_REF] Elliott | Theory of the effect of spin-orbit coupling on magnetic eesonance in some semiconductors[END_REF], one finds that the Hamiltonian describing the system has a component resulting from the spin-orbit coupling, and the corresponding eigenfunctions are linear combinations of spin-up |↑⟩ and spin-down |↓⟩ states:

\Psi_{k,n,\uparrow}(\mathbf{r}) = \left[ a_{k,n}(\mathbf{r})\,|\!\uparrow\rangle + b_{k,n}(\mathbf{r})\,|\!\downarrow\rangle \right] e^{i\mathbf{k}\cdot\mathbf{r}}, \qquad
\Psi_{k,n,\downarrow}(\mathbf{r}) = \left[ a^*_{-k,n}(\mathbf{r})\,|\!\downarrow\rangle - b^*_{-k,n}(\mathbf{r})\,|\!\uparrow\rangle \right] e^{i\mathbf{k}\cdot\mathbf{r}} \qquad (5.11)
The two functions are degenerate (same k and energy) and are coupled by time reversal and spatial inversion. In other words, Equations 5.11 describe a mixing of spin-up and spin-down states. Since spin mixing is usually small, we can assume that in Equations 5.11 |a| ≈ 1 and |b| ≪ 1. In the presence of such mixing of the eigenfunctions, spin relaxation can be caused by any spin-independent scattering event, while (in principle) in the absence of scattering the spin of the electrons remains unaltered. The Elliott-Yafet process is shown schematically in Figure 5.13: an electron undergoes a series of scattering events until one of them causes a spin flip (blue circles in the figure), and this is repeated after some further collisions.
Since spin relaxation can occur only during a momentum scattering event, the spin relaxation time is related to the mean free path time by a relation of proportionality [START_REF] Elliott | Theory of the effect of spin-orbit coupling on magnetic eesonance in some semiconductors[END_REF]

\tau_{SO} \propto \tau \qquad (5.12)
Theoretically, the ratio τ/τ_SO (according to the Elliott relation [START_REF] Žutić | Spintronics: Fundamentals and applications[END_REF]) should scale with the square of the shift ∆g of the electronic g factor from the free electron value g_0 = 2.0023 [START_REF] Elliott | Theory of the effect of spin-orbit coupling on magnetic eesonance in some semiconductors[END_REF]. However, experimentally the ratio τ/τ_SO also depends on the scattering source (impurities, grain boundaries or phonons).
To examine whether the Elliott-Yafet mechanism is dominant under the experimental conditions of our samples, we plot τ_SO as a function of τ. The plots for samples B, C and D show a clear linear relation between the two times, consistent with the dominance of the Elliott-Yafet mechanism. As there are no foreign atoms in our ZnO samples, the only scatterers which can be responsible for the spin relaxation are the oxygen vacancies. This is also in agreement with the discussion in Chapter 4.
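The test amounts to a simple proportionality check (Python; the τ and τ_SO values below are placeholders of the right order of magnitude, while the real values come from the fits described above):

import numpy as np

# Elliott-Yafet test (Equation 5.12): tau_SO should scale linearly with tau.
tau    = np.array([0.5, 0.7, 0.9, 1.2]) * 1e-15    # s, placeholder values
tau_SO = np.array([0.6, 0.9, 1.1, 1.5]) * 1e-12    # s, placeholder values

slope, intercept = np.polyfit(tau, tau_SO, 1)
r = np.corrcoef(tau, tau_SO)[0, 1]
print(f"tau_SO / tau ~ {slope:.0f}, linear correlation r = {r:.3f}")
# A linear relation (r close to 1) supports Elliott-Yafet spin relaxation.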
Considerations on the metal-insulator transition
One could determine the transition from the insulating to the metallic state through what is called the Ioffe-Regel criterion [START_REF] Sarma | Two-dimensional metal-insulator transition as a strong localization induced crossover phenomenon[END_REF]. According to this criterion the metal-insulator transition is marked by a critical carrier density n c for which the product k F l tr = 1, with k F being the Fermi wavevector and l tr the mean free path. If k F l tr > 1, the system is in the metallic state, while the system is insulating when k F l tr < 1.
We calculated the product k_F l_tr for all our samples in order to determine whether a sample could be considered a "metal" at high doping and an insulator at low doping according to the Ioffe-Regel criterion. For high doping, i.e. n_S > 10^14 cm^-2, we always obtain k_F l_tr ≫ 1, with values varying between 21 and 42. At lower doping (2.4–6.3 × 10^13 cm^-2) k_F l_tr is of the order of unity, ranging between 0.5 and 5.4. This criterion indeed corresponds to such a transition in certain cases [START_REF] Biscaras | Onset of twodimensional superconductivity in space charge doped few-layer molybdenum disulfide[END_REF]. In our case, at low doping the R_S vs T curves have a clear negative slope indicating that the sample is insulating (see Figure 4.10) even though the product k_F l_tr is of the order of unity. In the high carrier density regime, where k_F l_tr ≫ 1, we still observe non-metallic behaviour in our samples in the sense that ∂R_S/∂T < 0.
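These numbers can be reproduced directly from k_F = √(8πn_S) and the mean free paths obtained above (Python):

import numpy as np

def kF_ltr(n_S_cm2, l_tr_nm):
    """Ioffe-Regel product k_F * l_tr for a 2D carrier density and a mean free path."""
    k_F = np.sqrt(8 * np.pi * n_S_cm2 * 1e4)   # m^-1 (n_S converted from cm^-2 to m^-2)
    return k_F * l_tr_nm * 1e-9

print(kF_ltr(1.5e14, 6.0))   # high doping, l_tr ~ 6 nm  -> ~37, i.e. k_F l_tr >> 1
print(kF_ltr(2.4e13, 1.0))   # low doping,  l_tr ~ 1 nm  -> ~2.5, of order unity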
The Ioffe-Regel criterion has been used to argue that the metal-insulator transition was achieved in polycrystalline ZnO with chemical doping [START_REF] Vai | The transition to the metallic state in polycrystalline n-type doped ZnO thin films[END_REF]. Though the product k_F l_tr becomes larger than unity as doping is increased, similar to our observations, the commonplace metallic criterion of ∂ρ/∂T > 0 is not observed in the ρ vs T curves. To be precise about terminology, our highly doped samples are thus metallic from the point of view of the Ioffe-Regel criterion but not from the point of view of the ∂ρ/∂T > 0 criterion.
Conclusions of the magneto-transport in ZnO
This chapter was dedicated to the study of the magneto-transport properties of our doped non-stoichiometric ultra-thin ZnO samples deposited on soda-lime glass. Magneto-transport measurements revealed the scattering mechanisms active in our doped thin films and helped determine the origin of the spin-flip observed in the samples. Many groups have studied spin relaxation in ZnO, motivated by prospective spintronics applications of this material.
We observe the phenomenon of weak localization, associated with quantum interference and disorder in the sample. We also systematically observe, at low temperature and at a certain carrier concentration, a transition to weak anti-localization, meaning that the spin relaxation time τ_SO has become shorter than τ_φ.
The characteristic transport times are determined by a parametric fit of the correction to the magneto-conductivity curves using the equation found in [START_REF] Iordanskii | Weak localization in quantum wells with spin-orbit interaction[END_REF][START_REF] Caviglia | Tunable Rashba spin-orbit interaction at oxide interfaces[END_REF] (Equation 5.6), which is the result of the rigorous treatment of the problem of magneto-transport in the presence of spin-orbit coupling. We have thus given a more complete description of the magneto-conductivity of polycrystalline ZnO than other reports in the literature.
The analysis of the evolution of the characteristic scattering times allowed us to identify the Elliott-Yafet mechanism as responsible for the spin relaxation. In the presence of this mechanism there is a finite probability that the spin of an electron is reversed after a scattering event, and the evidence that the Elliott-Yafet mechanism is active in our samples is the linear relation between the scattering time τ, which is essentially elastic at low temperatures, and the spin relaxation time τ_SO. Another work cites the D'Yakonov-Perel' mechanism as the most important spin relaxation mechanism in ZnO, but as we have pointed out before, their experimental conditions (carrier concentration and measurement temperature) are different.
The transition from WL to WAL can also be obtained at a given temperature by decreasing the carrier concentration in our samples. At 4 K we observe a crossover from WAL to WL as n_S approaches 10^14 cm^-2, meaning that below 10^14 cm^-2 the spin relaxation time is shorter than the inelastic scattering time (τ_SO < τ_φ). We can thus tune the dominant scattering mechanism in our samples by changing the carrier density.
In conclusion, we showed the relevance and efficiency of the space charge doping technique for polycrystalline thin films of a wide band gap semiconductor, ZnO. We could observe quantum interference phenomena in the low temperature magneto-conductivity of these films and extract characteristic values related to the scattering mechanisms. These are compatible with the crystalline structure and electronic properties of ZnO and were compared to recent literature. Finally, we also showed that the dominant scattering mechanism can be tuned by external parameters such as temperature but also, in our case, by an electrostatic field which changes the carrier concentration.
Chapter 6
Conclusions and perspectives
In this thesis we have developed a new technique, called Space Charge Doping, for ultra-high electrostatic doping of thin films of materials deposited on a glass substrate. We have applied this technique to the prototype 2D material graphene, which is a semi-metal and can easily be prepared as a monolayer. We have also applied this technique to polycrystalline films of the wide band gap semiconductor zinc oxide. We have successfully obtained carrier concentration levels beyond 10^14 cm^-2 in both materials in a reversible way. Reaching a high carrier concentration in these materials is of interest for a wide variety of reasons, a common one being the eventual application as a transparent conducting electrode. We applied our method to obtain very significant modifications of the carrier density and investigated the resulting transport properties as a function of temperature and magnetic field. The techniques and the results obtained were discussed in the light of material properties and the relevant theory.
Space charge doping has been shown to be a powerful and reliable method for doping materials deposited on a glass substrate, capable of inducing extremely high doping levels in a reversible way. Moreover, the doping level can be finely controlled, allowing us to study the properties of the materials in a wide range of doping without introducing defects. The space charge created at the glass surface by the accumulation or depletion of sodium ions is able to dope not only two-dimensional materials like graphene, but also polycrystalline materials like ZnO. This fact opens a completely new perspective in many fields, ranging from fundamental research to industrial applications.
The first experiments on space charge doping were performed on microscopic and macroscopic graphene samples. The fact that it is possible to dope graphene n or p makes it the ideal material for the characterization of our doping technique, both from a fundamental and from an applied point of view. We showed that with the space charge at the glass/graphene interface it is possible to reproduce the well known bell-shaped R_S vs n_S characteristic of graphene, demonstrating that space charge doping can finely control the doping level in graphene. Moreover, the doping is reversible and does not introduce defects, as shown by Raman spectroscopy. The maximum reached doping level depends on the material and on the glass substrate and is higher for glasses containing a higher concentration of Na+ ions. The maximum doping we reached is 1.2 × 10^14 cm^-2 on the electron side. The technique was also validated for large area CVD graphene samples (∼ 1 cm²), indicating that the doping process is uniform all over the glass surface.
For practical applications like TCE, other important parameters are the minimum value of the sheet resistance and the transparency of the material. Both the quality of the graphene sample and the glass surface play a role. In fact, the smoother the surface of the glass, the higher the mobility of charge carriers in graphene and the lower the sheet resistance, with a minimum measured value of R_S of 224 Ω/□ on borosilicate glass. The transparency of the space charge doped graphene is also unaltered, a further confirmation that the quality of the sample is preserved.
Space charge doping has thus been demonstrated to be a very efficient doping method for graphene, easily extensible to other two-dimensional materials. In the field of transparent conductors, the combination of graphene and space charge doping can be a valid alternative to the materials and doping techniques already used in this field. However, monolayer graphene is sensitive to air pollution, especially when it is highly doped; it therefore requires packaging or needs to be kept in vacuum. Other alternatives, like transparent oxides combined with space charge doping, may be the answer for finding new transparent conductors.
The measurements performed on ultra-thin (< 40 nm) films of ZnO_(1-x) were essential to expand the applicability of space charge doping to polycrystalline materials. Also, electrostatically doped ultra-thin ZnO could have potential applications as a TCE. We showed that space charge doping is able to induce a very high carrier density in ZnO, with the sheet resistance dropping from ∼ 10^8 Ω/□ to ∼ 10^3 Ω/□, a difference of 5 orders of magnitude. The maximum doping level attained was 2.2 × 10^14 cm^-2. As in the case of graphene, the doping level can be finely controlled in the accumulation layer at the interface, allowing us to study the electronic properties of the thin film in a wide range of carrier concentrations. The analysis of the Hall mobility in the 4 - 50 K range at various doping levels showed that the main scattering mechanism for electrons is scattering by charged impurities, i.e. the oxygen vacancies in the film. Magneto-transport measurements revealed that the same impurities are responsible for the spin relaxation in the system, with the observation of a remarkable transition from weak localization to weak anti-localization under certain conditions.
A comparative table summarizing the characteristics (in terms of R_S and carrier density) of the graphene and ZnO samples before and after space charge doping is shown in the following. The original work done in this thesis opens new perspectives in the field of fundamental research for the study of materials in high doping conditions as well as for practical applications like TCE. At the moment, in our laboratory, space charge doping has been successfully used to induce an insulating-to-superconducting transition in few-layer MoS2 samples [START_REF] Biscaras | Onset of twodimensional superconductivity in space charge doped few-layer molybdenum disulfide[END_REF] or in high-temperature superconductors (on-going work). Very interesting information could be obtained from the samples if Raman spectroscopy could be performed in situ, in order to monitor the evolution of the Raman spectrum as a function of doping. This particular equipment has been designed and will be available in our laboratory in the next months.
Regarding potential applications of space charge doping in the future, an interesting way of using our doping method could be local doping of the samples, i.e. creating space charges of different magnitude and polarity in different areas of the same sample, giving the possibility to fabricate, for example, diodes without chemical doping or chips with multiple devices on the same glass substrate. In the field of TCE, the work done on ZnO could be improved by working on the crystalline quality of the thin film in order to reach the standard resistivity value for a TCE of 100 Ω/□. Other oxides like SnO could eventually be investigated. Generally speaking, the performance of space charge doping (both for fundamental research and practical applications) could be improved by working on the quality of the glass substrate. Both ultra-smooth surfaces and a higher Na+ content would provide an enhanced doping capability with respect to the glasses studied here.
During this thesis two articles [START_REF] Paradisi | Space charge induced electrostatic doping of two-dimensional materials: Graphene as a case study[END_REF][START_REF] Biscaras | Onset of twodimensional superconductivity in space charge doped few-layer molybdenum disulfide[END_REF] have been published, another is in preparation, and an industrial patent on space charge doping has been obtained.
Figure 1.1: Schematization of the field effect in a metal-oxide-semiconductor structure in the particular case of a p-doped semiconductor.
At the edge of the first Brillouin zone there are what are called the Dirac points (K and K'), with coordinates K = (2π/(3a), 2π/(3√3 a)) (1.1) and K' = (2π/(3a), -2π/(3√3 a)) (1.2).
Figure 1.2: Graphene hexagonal atomic structure. The two triangular sub-lattices are highlighted in blue and yellow.
Figure 1.3: Graphene energy dispersion relation with the K and K' points highlighted. Re-adapted from [10].
Figure 1.4: Linear dispersion relation of graphene at the Dirac point.
Figure 1.5: Raman spectrum of a graphene sample with defects. Re-printed from [21].
Figure 1.6: 2D peak evolution with the number of layers for different laser excitations. Re-adapted from [22].
Figure 1.7: Graphene ambipolar conductivity as a function of the gate voltage. Re-printed from [26].
Figure 1.8: The three different crystal structures of ZnO. Re-printed from [52].
Figure 1.9: Electronic band structure of ZnO. Re-printed from [51].
Figure 1.10: XRD spectrum obtained from ZnO powder. Re-printed from [59].
Figure 1.11: XRD spectrum obtained from a ZnO thin film of ∼ 130 nm in thickness deposited on glass. Re-printed from [60].
Figure 1.12: AFM scan of a ZnO thin film deposited on a Si substrate. The film thickness is 108 nm. Re-printed from [61].
Figure 2.1: Schematics of anodic bonding. The motion of the Na+ ions under the electric field is highlighted with the red arrow. The top anvil can be substituted by a micromanipulator tip.
Figure 2.2: Steps of the transfer of CVD graphene onto a glass substrate. (a) PMMA is spin-coated on one side of the copper foil, covering the underlying graphene. (b) Oxygen plasma etching is performed on the back side to remove graphene not covered by the PMMA. (c) Copper is etched away in an aqueous solution of APS. (d) The floating graphene/PMMA is rinsed several times in DI water and then "fished" with the glass substrate. (e) The glass/graphene/PMMA system is left to dry in air and, after a baking step to enhance adhesion, the PMMA layer is dissolved in acetone. (f) Finally, graphene has been transferred onto the glass substrate.
The sputtering setup is schematized in Figure 2.3. It is composed of a vacuum chamber in which a high purity zinc target is present.
Figure 2.3: Schematization of the RF magnetron sputtering apparatus. In a high vacuum chamber, a glass substrate is placed in front of a zinc target. Argon and oxygen are introduced in the chamber and an argon plasma is activated with the RF power supply. Argon ions hit the zinc target, expelling Zn atoms from it which travel towards the glass substrate. During their path they enter the oxygen atmosphere, oxidize and deposit onto the glass substrate in the form of ZnO.
Figure 2.4: schematic labels: photo-diode, electronics and lock-in amplifier, phase, amplitude.
Figure 2.5: Schematics of the absorption process of a photon and re-emission of a photon with different energy due to energy exchange with a phonon (ħΩ).
Figure 2.6: Illustration of the Rayleigh, Stokes and anti-Stokes scattering. Re-printed from [START_REF] Larkin | Infrared and Raman Spectroscopy: Principles and Spectral Interpretation[END_REF].
Figure 2.7: Schematic of the micro-Raman system.
Figure 2.8: Illustration of the Bragg law.
Figure 2.9: Schematization of the X-ray diffraction setup.
Miller indices. A convenient way to determine the orientation of the crystalline planes is represented by the Miller indices. A crystal can be seen as a set of planes occupied by the atoms which are all parallel to each other and intersect the axes of the crystallographic unit cell. A set of indices hkl can be defined by considering the intersection points of the planes with the axes of the unit cell. The intersection points are thus a/h, b/k and c/l, with a, b and c being the axes of the unit cell. As an example, a plane identified by the Miller indices (121) will intercept the axes a, b and c of the unit cell at the coordinates 1, 1/2 and 1 respectively. In X-ray diffraction the Miller indices are thus used to determine the different crystalline orientations present in a crystal, each peak of the diffraction pattern corresponding to a different crystalline orientation.
Figure 2.10: Contact configuration in the van der Pauw method for (a) R_S measurements and (b) Hall measurements.
Figure 2.11: Schematization of the alignment of the stencil mask to the sample.
Figure 3.1: Schematization of the glass structure at the atomic level.
Figure 3.2: Measurements of the conductivity of our soda-lime glass with respect to temperature, compared with the data in Reference [START_REF] Mehrer | Diffusion and ionic conduction in oxide glasses[END_REF]. The small offset of our data from the extrapolation of the published data is probably due to slight differences in the composition of the glasses.
Figure 3.3: Variation of the current through soda-lime glass with a constant applied voltage of V_G = -10 V while temperature is swept from room temperature to ∼ 370 K.
Figure 3.4: The surface of a CVD graphene sample transferred on glass. The image was taken with a 50X objective.
Figure 3.5: Typical Raman spectrum of a CVD graphene sample after the transfer process. The almost absent D peak suggests that the sample has a good quality. Inset: fit of the 2D peak with a single Lorentzian curve as a demonstration that the sample is monolayer graphene.
Table 3.1 fragment: Borosilicate, 1700 - 1800, 250 - 230, 10 - (min.).
Figure 3.6: Optical image of a monolayer graphene sample on soda-lime glass (highlighted with the red line). The lateral size in one direction exceeds 200 µm.
Figure 3.8: AFM scan of the edge of an anodic bonded monolayer graphene sample. The measured height of ∼ 0.7 nm (in the inset) is compatible with the predicted thickness of graphene.
Figure 3.9: Contact deposition on an anodic bonded graphene sample with a lateral size of ∼ 50 µm. The sample is highlighted in red, the gold contacts with a black line.
Figure 3.10: Result of the plasma etching on a macroscopic CVD graphene sample after a step of electron beam lithography. The etched part appears slightly darker with respect to the graphene-covered glass. The effective area of the sample which is measured is the area enclosed in the square.
Figure 3.11: Illustration of the principle of space charge doping applied to a graphene sample. With no applied gate voltage the equilibrium in the glass is preserved, but when the glass is hot (i.e. Na+ mobility is activated) the polarity of V_G determines accumulation (V_G > 0) or depletion (V_G < 0) of sodium ions at the interface.
Figure 3.12: Doping of an anodic bonded graphene with a positive gate voltage. The sample was found to be slightly n-doped after a vacuum annealing in the cryostat. It was then heated to the doping temperature and the voltage increased up to 210 V.
Figure 3.14: Hall measurement performed before and after space charge doping, showing that the charge carrier type in the graphene sample is changed.
Figure 3.15: A large CVD graphene sample on a borosilicate glass substrate doped at different carrier concentrations. Each point of the curve is obtained with the process described in Figure 3.12 or 3.13. The red line is a guide for the eyes.
Figure 3.16: The Hall measurements done at each point of the curve of Figure 3.15 from 0 to 2 T, showing the wide range of achievable doping.
Figure 3.17: Illustration of the mechanical stress to which graphene is subjected in extremely high doping conditions.
Figure 3.18: Mechanical-strain-induced increase of the sheet resistance at very high doping values (plot annotation: "increase due to the mechanical strain").
Figure 3.19: Space charge doping is performed on a 70 × 70 µm² CVD graphene sample (delimited with a micromanipulator tip) in a cycle to demonstrate the reversibility of the doping process. The black circles represent the doping performed in the n to p direction while the red crosses are the doping in the p to n direction.
Figure 3.20: Hall mobility of several graphene samples deposited on the two glass substrates used in this thesis at different carrier concentration values.
Figure 3.21: AFM topography performed on the surface of clean borosilicate (left) and soda-lime (right) glass. The flatter nature of the borosilicate glass substrate is visible from the topography. The scan is performed on an area of 15 µm × 15 µm; RMS roughnesses of 0.227 nm for borosilicate and 0.734 nm for soda-lime are extracted by software analysis.
Figure 3.22: Transmittance of a doped CVD graphene sample in the wavelength range between 535 and 800 nm.
Figure 3.23: Map of the 2D/G ratio measured before and after the doping, showing a remarkable and uniform decrease as an indication of the homogeneity of the doping all over the sample area.
Figure 3.25: Space charge doping performed on CVD graphene on borosilicate glass compared to the doping attempted on CVD graphene on quartz in the same conditions. No effect is produced on the sample deposited on quartz due to the absence of sodium ions in the quartz.
Figure 4.1: XRD spectra of samples A, B, C and D. The background due to the glass has been removed and the spectra are normalized with respect to the peak at ≈ 34°.
Comparing with Figure 4.2 (the spectrum of ZnO powder), we can see that the visible peaks obtained from our samples, located at 34.2° < 2θ < 34.6°, correspond to the crystalline orientation in the (002) direction [START_REF] Znaidi | Sol-gel-deposited ZnO thin films: A review[END_REF]. We also see a small peak located at 2θ ≈ 63° corresponding to the (103) crystalline orientation. Our thin films are thus textured with a preferential (00l) orientation. As explained earlier, an initial carrier concentration is induced by creating oxygen defects. This facilitates the practical application of space charge doping.
Figure 4.2: The XRD spectrum of the ZnO powder, reproposed from Chapter 1. Re-printed from [59].
Figure 4.3: Effect of the annealing on the sample crystalline structure analysed by XRD. The characteristic peaks of the thin film are clearly sharper after the annealing, meaning that the grain size has increased. The spectra are normalized with respect to the peak at ∼ 34°.
Figure 4.4: Topography of the height measured from the AFM scan (upper part) and the corresponding phase topography (bottom part). The height scan of the surface enables one to see the grains very well and measure their height, but the topography of the phase displays the shape and the size of the grains in a clearer way.
Figure 4.5: Schematization of the shape of the grains of our ZnO thin films in a lateral view. The lateral size of the grains is bigger than their height, as suggested by the AFM and XRD measurements.
Figure 4.6: Thickness measurement of the sample. The edge of the sample is very sharp after the etching in the aqueous solution of HCl.
Figure 4.7: Transmittance of sample C measured after the annealing process.
Figure 4.8: A typical ZnO sample after the etching process and before the contact deposition.
Figure 4.9: Space charge doping applied to sample A. The initial sheet resistance of the sample was R_S ≈ 10^8 Ω/□. The sample is first heated at ∼ 370 K and then a gate voltage V_G = +285 V is applied. As observed, the space charge of Na+ ions forming at the interface between the glass and the ZnO thin film causes a massive drop in R_S of ∼ 4 orders of magnitude in just 10 minutes.
Figure 4.10: R_S vs T characteristics in a wide range of carrier concentrations for samples A, B, C and D. Even at the highest carrier density (which is always higher than 10^14 cm^-2) the samples show an insulating behaviour.
Figure 4.12: Dependence of the mobility on the carrier concentration for samples A, B, C and D measured at 25 K. The increase of µ with n_S indicates that the samples are in the condition of high doping as highlighted in Equation 4.4, and that the grain boundaries influence the transport properties of the ZnO thin films without dominating them.
Figure 4.13: Linear dependence of the mobility with respect to T^(3/2), indicating that the main scattering mechanism is due to ionized impurities.
Figure 4.14: One-dimensional simplification of the shape of the potential in a material in the ordered (upper part of the figure) and disordered (lower part) case. Re-printed from [113].
Figure 4.16: Value of ∆E extrapolated from Equation 4.7 for samples B, C and D at various carrier concentrations.
Figure 5.1: WL effect on the R_S vs T characteristic of a doped CVD graphene sample deposited on borosilicate glass.
Figure 5.2: Illustration of the backscattering phenomenon by several scattering centers along opposite directions.
Plotting the conductivity with respect to ln(T) should then give a straight line. As a representative case of what is measured in our samples, we show the behaviour of sample B at high doping (n_S = 2.2 × 10^14 cm^-2) and at low temperature in Figure 5.3. A linear dependence is observed, indicating the 2D nature of our samples. We now have clearer experimental verification of this 2D behaviour as opposed to the more ambiguous results shown in Figure 4.15.
Figure 5.3: Conductivity of sample B at high doping as a function of ln(T). The linear behaviour is indicative of 2D transport for the electrons in the sample.
Figure 5.4: Normalized magneto-resistance (R_S(B)/R_S(0)) as a function of the temperature, showing the relative variation of the resistance of the sample with the magnetic field.
Figure 5.6: Evolution of the characteristic fields associated with the curves of Figure 5.5. Down to 10 K, B_φ is higher than B_SO, as a consequence of the inelastic scattering time being shorter than the spin-flip time. Between 10 and 4 K we observe a crossover and the weak anti-localization takes place.
No error bars are consequently shown for these characteristic times. Corresponding to the behaviour already seen for the characteristic fields, at higher temperature τ_φ is always smaller than τ_SO.
Figure 5.8: Combined data from samples B and C showing the transition from weak anti-localization to weak localization as n_S increases. The data from the two samples show perfect coherence.
Figure 5.10: τ_φ and τ_SO as a function of n_S, extracted from the fit of Figure 5.8.
From the characteristic fields it is possible to obtain the values of the phase-coherence length L_φ and the spin-coherence length L_SO. Since L = √(Dτ), with D the diffusion coefficient and τ the characteristic transport time associated with L, we can estimate the characteristic transport lengths from L = √(ħ/(4eB)) (5.8), with L being L_φ or L_SO and B respectively B_φ or B_SO. The mean free path L_tr is derived from τ by the expression L_tr = v_F τ, with v_F being the Fermi velocity.
In Figure 5.11 we show the plot of 1/τ_SO vs τ for all samples analyzed and for different carrier concentrations. It can clearly be seen that 1/τ_SO does not follow the behaviour expected for the D'Yakonov-Perel' mechanism.
Figure 5.11: Plot of 1/τ_SO with respect to τ, clearly showing that the D'Yakonov-Perel' mechanism is not active in our samples.
Figure 5.12: Evolution of B_SO as a function of the carrier density.
Figure 5.13: Illustration of the Elliott-Yafet mechanism showing that the spin of the electron can change during a scattering event. The electron, possessing for example spin down (red circles), undergoes some scattering events until one of them causes a spin change (blue circles). This phenomenon is repeated after some collisions.
Figure 5.14: τ_SO vs τ for samples B, C and D at different electron concentrations. The proportional relationship seen confirms that the Elliott-Yafet mechanism is the dominant mechanism encountered.
Comparative table (graphene and ZnO samples before and after space charge doping): Undoped: R_S (Ω/□), n_S (cm^-2); Doped: R_S,min (Ω/□), n_S,max (cm^-2).
Table 3.1: Optimal anodic bonding parameters for graphene.
Table 3.2: Maximum reached carrier concentration in graphene.
Table 4.1: The deposition parameters of the different samples studied.
Sample   Ar/O    Deposition time (s)   Thickness (nm)
A        76/24   120                   40
B        76/24   66                    23
C        80/20   66                    27
D        80/20   25                    7.5

Table 4.2: R_S values of the four ZnO samples examined before space charge doping.
Sample   Initial R_S (Ω/□)
A        ∼ 10^8
B        6.2 × 10^4
C        4.0 × 10^4
D        3.2 × 10^6
Figure 4.15: Variable range hopping in ZnO for samples A, B, C and D. Upper part of the figure: VRH in the 2D case (ln(R_S) vs 1/T^(1/3)). Lower part of the figure: VRH in the 3D case (ln(R_S) vs 1/T^(1/4)). We can see that for both types of behaviour the data appear linear in the low temperature part, making it impossible to discriminate the dimensionality from this argument.
Table 5.1: Maximum spin relaxation time found in the measured samples in high doping conditions.
Sample   n_S (10^14 cm^-2)   τ_SO (ps)
A        1.37                0.76
B        2.06                0.52
C        1.39                0.53
D        1.56                1.62
Acknowledgements | 261,578 | [
"740872"
] | [
"247163"
] |
01380583 | en | [
"info"
] | 2024/03/04 23:41:44 | 2016 | https://theses.hal.science/tel-01380583v2/file/these_archivage_3274462o.pdf | Promethee Spathis
Marcelo Dias De Amorim
Lelia Blin
Mustapha Amr Abdelfattah
Abuteir Rabee
Salah Belouanas
Fadwa Boubekeur
Florent Coriat
Antonella Del Pozzo
Grassi Giulio
Alexandre Maurer
Narcisse Nya
Filippo Rebecchi
Matteo Sammarco
Badreddine Wafa
Benjamin Baron
Alexandre Ragaleux
Wireless sensor networks consist of sensor nodes capable of collecting data, analyzing it, and transmitting it. These networks have several applications depending on the area where they are deployed: military or rescue applications in areas that may be inaccessible to humans; health applications with sensors deployed on and inside the human body; monitoring applications with sensors on the cars of a city or the trees of a forest. The nodes are energy-autonomous and it is essential to ensure their longevity without delaying data collection. The main task performed by wireless sensor networks consists in taking measurements and sending these data to a coordinator node. This aggregation task is performed regularly, which makes it the most energy-consuming one. The in-depth study of the energy consumption of sensor nodes, which is at the center of my thesis, can take several forms.
First, we studied the complexity of the data aggregation problem using a simplified model to represent a wireless sensor network. In this model, we showed that finding an optimal solution to aggregate the data of an arbitrary wireless sensor network is not feasible in practice, even for a centralized algorithm that knows the future evolution of the network. In addition, we studied the resolution of this problem by a distributed algorithm operating in real time. We showed that, in general, the problem has no solution without additional knowledge.
Second, we focused on estimating the lifetime of such networks. Existing simulators often implement overly simplistic consumption and battery models. Moreover, the more realistic battery models implemented in general-purpose simulators are too complex to be used for sensor networks with many nodes, whose simulated duration can reach months or even years. WiSeBat is a battery and energy consumption model optimized for wireless sensor networks, implemented in the WSNET simulator. After validation, we used it to compare the performance of energy-efficient broadcast algorithms.
List of Figures
In the last decade, the number of personal devices connected to the Internet has grown to exceed the human population on earth. The decreasing cost of small devices has driven the development of the Internet of things, which consists in connecting devices, such as printers, fridges, or lamps, directly to the Internet. Their connectivity allows users to send requests from their smartphones, e.g. to turn the heat on before getting back home, to ensure the lights are off, or to check whether the door is locked.
From this abundance of devices, another paradigm has gained popularity: connecting devices together to form an independent network connected to the Internet through a unique device, called the gateway. In this context devices can communicate together with lightweight protocols (instead of the legacy Internet protocol) and it is still possible to communicate with them through the gateway. In exchange, there is no infrastructure that organizes their communication. This kind of network has many advantages: the low computational needs of the devices allow them to be battery powered; devices are small and relatively inexpensive; they can be used in areas where no Internet connection or other infrastructure networks exist.
Usually, devices that compose such networks consist of a microcontroller, a battery, a radio transceiver and a sensor. A device uses its sensor (e.g. an accelerometer, a gyroscope, or a temperature sensor), and communicates with the other devices through wireless transmissions. The networked connection of these devices forms a Wireless Sensor Network.
The health monitoring of a patient is a particularly challenging application of wireless sensor networks. Numerous examples of diseases would benefit from continuous or prolonged monitoring, such as cardiovascular disease, diabetes, hypertension, asthma, renal failure, etc. Monitoring is usually used post-operative, for systematic prevention (e.g. to prevent the sudden infant death syndrome), or to enhance the quality of life (e.g. a pump administering the correct dose of insulin to diabetics based on the glucose level measurements).
To perform the monitoring, intelligent physiological sensors can be integrated into a wearable network, called Body Area Network (see Figure 1.1). All the data from all the sensors are aggregated periodically to the gateway (e.g. a smartphone).
The gateway analyzes the aggregated data and logs it, or sends an alert to the hospital if there is a risk for the patient. In this context, the network must be reliable, sustainable and predictable. At the same time, devices should be small, battery powered (for some devices to be implanted inside the body), and have a lifetime of months or even years without intervention. Despite the short distance between sensor nodes, communications are multi-hop. With short-range and high-throughput wireless technologies (e.g. 60 GHz transmissions), signals travel solely by line-of-sight and are blocked by the body. Hence, the mobility of the sensor nodes on the body may disconnect and connect nodes unexpectedly, preventing the use of the direct path between the source of a transmission and the gateway.
A key challenge is to evaluate the lifetime of a wireless sensor application aiming to run a long period of time. Ensuring that the application runs at least for the time given by the specification is crucial and may have a direct impact on the life of the patient. In this thesis, we tackle this problem by providing accurate models to evaluate the lifetime of the network using simulations.
The fundamental task of data aggregation is performed regularly, making it the most energy-consuming task. In this thesis, we analyze this task from a theoretical point of view. We assume that the minimal number of transmissions is used (to optimize the energy consumption) and try to limit the duration of the data aggregation. Indeed, it is important to have a low delay between the detection of a problem by a device and its reception by the gateway.
Context of the Thesis
Wireless Sensor Networks The wireless sensor network paradigm was defined, as we know it now, between the years 1990 and 2000 [START_REF] Bambos | Toward power-sensitive network architectures in wireless communications: concepts, issues, and design aspects[END_REF][START_REF] Goldsmith | Design challenges for energy-constrained ad hoc wireless networks[END_REF], as a continuation of radio and wireless networks. It consists of devices that communicate together by means of wireless transmissions. The devices are autonomous and there is no centralized infrastructure, which makes the network self-organized. When a device wants to communicate, it can only transmit a message to nearby devices. When the destination is far from the source, intermediate devices are used to forward the message, creating multi-hop communications. We present a typical architecture of a wireless sensor network in Figure 1.2 (a wireless sensor network connected to a smartphone). It consists of devices that cooperate to send data to a gateway, which in turn may be part of a bigger network.
Performance Analysis A key characteristic of wireless sensor networks is the shared medium used to propagate transmission signals. Simple models for the signal propagation lead to a representation of a wireless sensor network as a graph, which represents the devices and their communication links. When devices are mobile, the changes in the topology over time are captured by a Time-Varying Graph, or simply by a sequence of graphs that represents the evolution of the network over time. For a more accurate estimation of the performance of a protocol, it is important to fully simulate the way devices work and how the signals propagate and interfere, using a realistic signal propagation model.
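To make the graph abstraction concrete, the sketch below stores a dynamic network as a sequence of edge sets, one per time step, and checks whether a time-respecting multi-hop path (often called a journey) exists from a node to the gateway, propagating reachability one hop per contact. The node names and contact times are made up for illustration.

```python
# A dynamic WSN as a sequence of undirected edge sets, one snapshot per time step.
snapshots = [
    {("a", "b"), ("c", "gw")},          # t = 0
    {("a", "c")},                       # t = 1
    {("b", "c"), ("c", "gw")},          # t = 2
]

def journey_exists(snapshots, source, destination):
    """True if a time-respecting path (journey) exists from source to destination.

    A message may cross at most one hop per snapshot, using only the edges
    present in that snapshot.
    """
    reached = {source}
    for edges in snapshots:
        newly = set()
        for u, v in edges:
            if u in reached:
                newly.add(v)
            if v in reached:
                newly.add(u)
        reached |= newly
        if destination in reached:
            return True
    return destination in reached

print(journey_exists(snapshots, "a", "gw"))   # True: a-c at t=1, then c-gw at t=2
print(journey_exists(snapshots, "d", "gw"))   # False: node d never has any contact
```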
Energy Consumption One fundamental challenge in a wireless sensor network is the limited amount of energy available to the devices. The energy is not only used by a device to perform its own actions, but is also used for unpredictable actions. This includes forwarding messages between two devices that cannot communicate directly, making sure that the device is still part of the network by sending control messages, informing incoming devices about the network, etc. Each decision made by the layers of the communication stack may impact the energy consumption of devices. That is why, optimizing and evaluating the energy consumption of devices in a wireless sensor network is the common thread of this thesis.
Thesis Organization
Part One: Context In this first part, we introduce wireless sensor networks, their characteristics, and their applications. We give an overview of their components and highlight several challenges that partly motivate this thesis. Then, we present the models we use to represent wireless sensor networks with different levels of abstraction.
Part Two: Data Aggregation Problem The second part of this thesis extends results from [START_REF] Bramas | The complexity of data aggregation in static and dynamic wireless sensor networks[END_REF]BMT16]. We consider the problem of data aggregation. First [START_REF] Bramas | The complexity of data aggregation in static and dynamic wireless sensor networks[END_REF], we investigate, from a centralized point of view, the complexity of finding the optimal solution in static and dynamic wireless sensor networks. We also present the first approximation algorithm for the problem in a dynamic network. Then [BMT16], we investigate the problem from a distributed and online point of view. We show several impossibility results and present optimal algorithms when the interactions that occur in the networks are random.
Part Three: Lifetime Estimation of a Wireless Sensor Network The last part of this thesis is related to results presented in [BDBF + 15, BT16]. We focus on benchmarking energy-centric protocols in wireless sensor networks. First [BDBF + 15], we present a model for the energy consumption and the battery of a sensor node, called WiSeBat. We evaluate its implementation in the WSNet simulator [START_REF] Fraboulet | Worldsens: development and prototyping tools for application specific wireless sensors networks[END_REF] against real devices and show that it outperforms the default energy model of the simulator. Then [START_REF] Bramas | Benchmarking Energy-Efficient Broadcast Protocols in Wireless Sensor Networks[END_REF], we use this model to benchmark several energy-centric broadcasting protocols and show that their performance in a realistic environment differs from the original results.
Reading map Figure 1.3 summarizes the dependencies between the chapters of this thesis.
The growing number of sensor nodes with sensing, computing and communication capabilities was made possible by recent technological advances. This growth was encouraged by a variety of applications and contributes to the widespread interest in practical and theoretical aspects of wireless sensor networks. Sensor nodes should be inexpensive, small and sustainable in order to be easily deployed in a dangerous area, inside a human body or in vehicles, generally for monitoring applications. Miniaturization and cost reduction permit, for a single application, the deployment of multiple sensor nodes in an area. One particularity of those sensor nodes is that they do not rely on an infrastructure. Each node is equipped with a radio transceiver that enables communication with nearby nodes. The network composed of those sensor nodes is called a Wireless Sensor Network (WSN). A WSN is created on the fly, without infrastructure. In that sense, it is an ad hoc network.
By nature, ad hoc networks such as WSNs are self-organized, tolerate unpredictable behavior and can be dynamic. This raises various challenges, such as energy (sensors are battery powered) and delay efficiency (information is relevant for a short period of time only). The delay to transmit a message from one node to another can easily grow to become a problem. This is due to the multi-hop transmissions, i.e., when a node wants to transmit a message to a node that is not in its communication range, it has to use intermediate nodes to forward the message between sensors that are close to one another toward the destination. The choice of the intermediate nodes and the way messages are forwarded may impact the energy consumption of nodes and thus the lifetime of the network. The two fundamental tasks of a WSN are monitoring and tracking [START_REF] Yick | Wireless sensor network survey[END_REF]. The data obtained by each sensor node is then transmitted to a base station that analyzes the data. The base station is either controlled by an end-user or used as a gateway between the WSN and another network such as the Internet. These tasks usually require two kinds of transmission schemes: a transmission from the sensor nodes to the base station, and a transmission from the base station to the sensor nodes. The former is used to bring the data obtained by the sensor nodes to the base station for further analysis, and the latter can be used to send application updates, queries, configuration, or control messages. We say we perform a broadcast when the base station sends a message to every node in the network (Figure 2.1). We say we perform a convergecast when every node in the network sends a message to the base station (Figure 2.2).
Another important feature of WSNs is that communications are performed through a shared medium and in all directions (we assume here that antennas are omnidirectional). This can be used to reduce the number of transmissions by broadcasting a message to all the neighbors at once. However, it can also lead to problems as simultaneous transmissions generate interference that can prevent the correct reception of messages. Indeed, in a real environment, any wireless signal is subject to several phenomena, such as attenuation over the distance, and the superposition with other wireless signals, before being received by a receiver. To be properly received, the signal corresponding to the message must be decoded considering the sum of all other incoming signals as noise.
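The sketch below illustrates this reasoning with a toy signal-to-interference-plus-noise (SINR) check: received powers follow a simple log-distance path-loss law, every concurrent transmitter other than the intended one is counted as interference, and the frame is decoded only if the SINR exceeds a threshold. Positions, powers, the path-loss exponent and the threshold are arbitrary illustrative values, not the parameters of a specific radio.

```python
import math

PATHLOSS_EXPONENT = 3.0      # illustrative indoor-like value
NOISE_W = 1e-12              # illustrative noise floor (W)
SINR_THRESHOLD = 10.0        # minimum SINR (linear) required to decode

def received_power(tx_power_w, distance_m):
    """Log-distance path loss: P_rx = P_tx / d^alpha (with d clamped to 1 m)."""
    return tx_power_w / max(distance_m, 1.0) ** PATHLOSS_EXPONENT

def can_decode(receiver, intended_tx, concurrent_txs):
    """True if the intended frame is decodable at the receiver.

    Each transmitter is a (position, tx_power_w) pair; every concurrent
    transmitter except the intended one contributes to the interference term.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    signal = received_power(intended_tx[1], dist(receiver, intended_tx[0]))
    interference = sum(
        received_power(p, dist(receiver, pos))
        for pos, p in concurrent_txs if (pos, p) != intended_tx
    )
    return signal / (NOISE_W + interference) >= SINR_THRESHOLD

receiver = (0.0, 0.0)
wanted = ((5.0, 0.0), 1e-3)              # intended transmitter, 5 m away, 1 mW
others = [wanted, ((30.0, 0.0), 1e-3)]   # a concurrent transmitter 30 m away
print(can_decode(receiver, wanted, others))   # True: the far transmitter interferes only weakly
```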
Types of Sensor Networks Sensor nodes can be deployed in various types of environment. The simplest deployment concerns terrestrial WSNs where a number of nodes are statically deployed in an area, randomly located, or with predefined positions such as on a grid. Terrestrial WSNs usually offer good conditions for the radio transmissions, and allow energy harvesting with solar panels for instance. Underground or underwater WSNs [START_REF] Pompili | Deployment analysis in underwater acoustic wireless sensor networks[END_REF] require appropriate equipment and are more expensive to deploy and maintain. Also, the topology may differ from that of simple terrestrial WSNs due to the possible placement in three dimensions. Mobile WSNs (or MANETs, for Mobile Ad hoc NETworks) consist of a collection of moving nodes, which creates a topology that changes over time. The mobility of the nodes can be active (nodes can change their position, e.g. with motorized wheels) or passive (nodes undergo mobility, e.g. when located on animals or on vehicles). Finally, wireless body area networks consist of nodes located on or in the body. This type of WSN is particularly challenging as nodes move in a three-dimensional space and the body is a heterogeneous environment that is extremely complex to model [iNWK + 15, PMS + 15].
Applications
As explained by J.Yick et al. [START_REF] Yick | Wireless sensor network survey[END_REF], WSN applications can be classified in two categories: monitoring and tracking.
Monitoring Applications Sensor nodes can be used to monitor one or several types of data in a specific environment, in order to alert the base station when something is wrong or just to study the evolution of a feature over the time.
Health monitoring applications [BAB + 07] gain a lot of attention and face all the challenges at once: nodes have to be reliable and sustainable and, with nodes located on or in the body, they have to be safe. Moreover, since the data may be sensitive and personal, they have to be secured. The objectives include post-operative or intensive care, and long-term surveillance of chronically ill patients or the elderly.
Smart cities consist of the deployment of heterogeneous sensor nodes with different monitoring capabilities. The objectives include traffic and parking management, garbage level, noise and atmospheric pollution, crossroads traffic light regulation, smart lights, and finally, structural monitoring of buildings, bridges and roads.
On Volcàn Reventador in northern Ecuador, sensor nodes equipped with seismometers and microphones monitor the volcano and alert the base station in case of a volcanic event [WALR + 06] (see Figure 2.3).
In the Macroscope of redwood case study [TPS + 05], sensor nodes were deployed on redwood trees in California, at different heights, to monitor the air temperature, the relative humidity and the photosynthetically-active solar radiation. The data are used by plant biologists to validate biological theories.
Tracking Applications Sensor nodes can be deployed on animals to track their movements over a period of time. For instance, the ZebraNet [START_REF] Zhang | Hardware design experiences in zebranet[END_REF] system deployed several sensor nodes (with a GPS unit) into zebras' collars in order to track their movements over several weeks. The position logs are sent multi-hop from zebra to zebra to the base station.
There exist a number of datasets available online containing GPS mobility data recorded by sensor nodes deployed on humans (e.g. contributed by I. Rhee et al. [RSH + 09]) or on vehicles (e.g. taxi cabs in San Francisco [START_REF] Piorkowski | CRAWDAD dataset epfl/mobility (v. 2009-02-24)[END_REF] or in Roma [BBL + 14]). While those data were obtained by sensor nodes that do not communicate, they can be used to simulate WSNs having the same mobility as the one recorded. For instance, what would be the delay to broadcast a message to all the taxi cabs in Roma, using only multi-hop communication across them?
Human interactions can also be recorded using WSNs. For instance, if sensor nodes sending beacons at regular intervals are deployed in a group of people, each node can record the received beacons (i.e., its neighbors) over time. Again, those data can be used to propose efficient routing protocols that may be used in future applications (e.g. the analysis by P. U. Tournoux et al. [TLB + 09] on the rollernet dataset [START_REF] Benbadis | CRAWDAD dataset upmc/rollernet[END_REF]).
Sensor nodes can also be deployed in an area to track specific objects. In the military PinPtr [SML + 04] system, sensor nodes are statically deployed in an area to detect and locate snipers.
Great Duck Island Example
The monitoring of Leach's Storm Petrels on Great Duck Island [MCP + 02] is representative of many applications in this domain. The deployment of sensors aims to understand (1) the usage pattern of nesting burrows over a particular 24-72 hour cycle, (2) the changes observed in the burrow and surface environmental parameters during the course of the approximately 7 month breeding season and (3) the differences in the micro-environments with and without large numbers of nesting petrels.
The study could have been done in the field by researchers. However, researchers are becoming increasingly concerned about the potential impacts of human presence when monitoring plants and animals in field conditions. Disturbance effects are of particular concern in small island situations. Research in Maine [START_REF] John | Pilot survey of mid-coast maine seabird colonies: an evaluation of techniques[END_REF] suggests that even a 15 minute visit to a cormorant colony can result in up to 20% mortality among eggs and chicks in a given breeding year. In this context, WSNs are a significant advance as they can be deployed prior to the onset of the breeding season or other sensitive periods.
Each goal of the study comes with unique data needs and suitable acquisition rates. The WSN is connected to a base station that has Internet connectivity to send data and receive management messages. The network consists of sensor nodes that have to run for 9 months from non-rechargeable power sources (except for the gateway, which is recharged by solar panels). Each node (a Mica mote, shown in Figure 2.4) runs on a pair of AA batteries. By simply dividing the available capacity of the battery by the number of days, the researchers decided to allow a "power budget" of 6.9 mAh per day. This budget is used to predict what a node is able to perform during a day, including message reception, transmission, sensor reading, etc. After 4 weeks of deployment, the researchers calculated that the motes had sufficient power to operate only for the next six months, which is below the specification. This study shows the importance of reducing the energy consumption of protocols in WSNs and of accurate lifetime estimation.
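The "power budget" reasoning is simple charge arithmetic: the battery capacity divided by the target lifetime gives a daily allowance, and each planned operation is charged against it as current multiplied by duration. The sketch below redoes that arithmetic; the 2200 mAh capacity and the per-operation currents and durations are assumptions chosen to be plausible for a Mica-class mote, not figures taken from the deployment.

```python
# Daily charge budget for a battery-powered mote (all figures are assumptions).
CAPACITY_MAH = 2200.0          # assumed usable capacity of a pair of AA cells
LIFETIME_DAYS = 9 * 30         # 9-month target

daily_allowance_mah = CAPACITY_MAH / LIFETIME_DAYS
print(f"raw allowance: {daily_allowance_mah:.1f} mAh/day "
      f"(the deployment kept a margin and budgeted 6.9 mAh/day)")

# Planned daily activity: (name, current in mA, active time in seconds per day).
activities = [
    ("radio RX/listen", 20.0, 600),     # 10 min of listening
    ("radio TX",        18.0, 60),      # 1 min of transmissions
    ("MCU active",       3.0, 900),     # 15 min of processing and sensing
    ("sleep",            0.005, 86400 - 600 - 60 - 900),
]

spent_mah = sum(i_ma * t_s / 3600.0 for _, i_ma, t_s in activities)
print(f"planned consumption: {spent_mah:.2f} mAh/day")
print("within budget" if spent_mah <= 6.9 else "over budget")
```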
Sensor Components
Sensor nodes are composed of two main parts: the hardware and the software.
Hardware
A sensor node consists of four units: processing, transceiver, sensing, and power units [START_REF] Healy | Wireless sensor node hardware: A review[END_REF][START_REF] Abo-Zahhad | A survey on protocols, platforms and simulation tools for wireless sensor networks[END_REF]. To reduce manufacturing cost, sensor nodes are also available as platforms that usually include the processing, transceiver, and power units. The sensing unit is either optional or included, depending on the application needs. Popular platforms, in research institutions at least, include the Mica2, MicaZ and Telosb/Tmote Sky [START_REF] Healy | Wireless sensor node hardware: A review[END_REF]. One particularity of sensor node units are the different operational modes. For instance the processing unit usually has several modes to work with or without the flash memory, at different clock speeds, and also has several sleep modes: a sleep mode with a fast wake up, and a low power sleep mode that has the lowest energy consumption but requires more time to wake up from. The energy consumption is usually represented by its instantaneous current, measured in ampere (A).
The sensing unit can contain several types of sensor depending on the application. The most used include a temperature sensor, a pressure sensor, an accelerometer/gyroscope, an image sensor, and a light sensor. Depending on the sensor, the instantaneous current can vary from tens of microamperes to tens of milliamperes.
The processing unit consists in a low power microprocessor. The most used ones, which include the Atmel Atmega 128L and the TI MSP430F series, have a clock speed between 4 and 16 MHz, with less than 10KB of RAM. The typical instantaneous current of a processing unit is a few milliampere.
The transceiver unit contains a low power radio. The majority of platforms use the Chipcon CC2420, compatible with the IEEE 802.15.4 standard. In addition to the four common modes, transmitting (TX), receiving (RX), idle, and sleep, the radio may offer modes for several transmission powers. Using a small transmission power implies a lower energy consumption but a smaller transmission range. The typical instantaneous current is between 10 and 20 mA for the TX and RX modes, and around 1 µA for the sleep mode.
The power unit consists of a voltage regulator and either a simple battery or a storage device such as a rechargeable battery or a super-capacitor. The storage device can be recharged by harvesting energy from the environment with photovoltaic, piezoelectric, or other devices. The capacity of a battery is measured in ampere-hours (Ah). Rechargeable batteries are usually lithium-ion batteries and can have a capacity from tens of mAh to a few Ah.
TMote Sky Platform As an example, we detail here the characteristics of the TMote Sky platform (see Figure 2.5). It is an ultra low power module designed at the University of California, Berkeley. It has integrated humidity, temperature, and light sensors. Programming and data collection are performed via USB. Tmote Sky is powered by two AA batteries. AA cells may be used in the operating range of 2.1 to 3.6V DC, however the voltage must be at least 2.7V when programming the microcontroller flash or external flash. The main electric characteristics are presented in Table 2.1.
Software
Operating System There exists a variety of operating systems supporting different system platforms and designed to optimize node longevity and reliability. The most used include TinyOS [LMP + 05] and ContikiOS [START_REF] Dunkels | Contiki -a lightweight and flexible operating system for tiny networked sensors[END_REF]. The amount of RAM required to run is particularly important. We further detail the architecture of ContikiOS below.
A running Contiki system consists of the kernel, libraries, the program loader, and a set of processes. A process is basically defined by an event handler function. Interprocess communication is done by posting events. The events are dispatched by the kernel (which is simply a lightweight event scheduler) to run processes. Additionally, the kernel periodically calls the processes' polling handlers. A key feature of ContikiOS is its protothread-based [START_REF] Dunkels | Protothreads: simplifying event-driven programming of memoryconstrained embedded systems[END_REF] processes, which run efficiently while keeping a good flow control. A protothread is a programming abstraction between multi-threading and event-driven programming. Contiki processes run cooperatively, each explicitly yielding control back to the kernel at regular intervals. When a process is called (triggered by a hardware event or by an event posted from another process), it is executed until it explicitly yields. When it yields, its state is saved (using only a few tens of bytes) and the scheduler can proceed with the next event.
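The flavour of this cooperative, yield-based execution can be illustrated with Python generators, where yield plays the role of the explicit yield points of a protothread. This is only a toy model of the idea, not Contiki's actual C API: the process names, the tick-driven kernel and the blinking behaviour are invented for the example.

```python
from collections import deque

def blinker(name, period_ticks):
    """A toy 'process': does a little work, then yields control back to the kernel."""
    elapsed = 0
    while True:
        elapsed += 1
        if elapsed % period_ticks == 0:
            print(f"{name}: toggle LED at tick {elapsed}")
        yield                      # explicit yield point, as in a protothread

def kernel(processes, ticks):
    """Minimal cooperative scheduler: processes advance only when resumed, round-robin."""
    ready = deque(processes)
    for _ in range(ticks):
        proc = ready.popleft()
        next(proc)                 # resume the process until its next yield
        ready.append(proc)

kernel([blinker("red", 2), blinker("green", 3)], ticks=12)
```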
Contiki works on a lot of platforms (including TMote Sky). Furthermore, the communication stack may be split into different processes as shown in Figure 2.6. This enables run-time replacement of individual parts of the communication stack. Communication processes use the interprocess communication mechanism to call each other and synchronous events to communicate with application programs [START_REF] Dunkels | Contiki -a lightweight and flexible operating system for tiny networked sensors[END_REF].
Contiki is developed by a world-wide team of developers, supports a wide range of platforms as well as several Radio Duty Cycling drivers for the MAC layer (ContikiMAC [Dun11], B-MAC [PHC04], X-MAC [BYAH06], BoX-MAC [ML08], etc.), and has its own simulator (Cooja).
The IEEE 802.15.6 standard [802b] specifies short-range wireless communications in the vicinity of, or inside, a human body (but not limited to humans), i.e., the physical layer and the MAC layer of wireless body area networks. This standard considers effects on portable antennas due to the presence of a person (varying with male, female, skinny, heavy, etc.), radiation pattern shaping to minimize the specific absorption rate into the body, and changes in characteristics resulting from the user's motion.
The routing protocol is particularly important as it can have a great impact on the energy consumption of the device. Also, there are a number of protocols designed for a specific application, such as broadcast or convergecast, and for a specific kind of node, such as nodes with variable transmission range.
One of the most fundamental tasks performed by sensor nodes in a WSN is to disseminate data and to retrieve data to the base station. The convergecast can use data compression or data aggregation to reduce the communication cost and increase the reliability of data transfer [START_REF] Yick | Wireless sensor network survey[END_REF]. The data-compression technique consists of reducing the size of the data before the transmission. There is no loss of information: all the data is retained, and the decompression occurs at the base station. With data aggregation, the data from multiple sensors is combined before the transmission to the base station. Part of the data may be lost, but the most important part is received by the base station. For instance, when querying the maximum temperature in the network, if a node receives the temperature sensed by a neighbor, it can compare the value with its own value and forward only the greatest value.
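As a toy illustration of data aggregation, the following Python sketch (with a hypothetical tree and readings) computes the maximum sensed value along a convergecast tree: each node forwards a single value, the maximum of its own reading and of the aggregates received from its children.

# Toy in-network aggregation: each node transmits one value (the max of its
# reading and its children's aggregates) toward the sink. Data are illustrative.
def aggregate_max(tree, readings, node):
    value = readings[node]
    for child in tree.get(node, []):
        # The child transmits a single aggregated value to its parent.
        value = max(value, aggregate_max(tree, readings, child))
    return value

tree = {"sink": ["a", "b"], "a": ["c"], "b": ["d"]}          # children of each node
readings = {"sink": 18.2, "a": 19.5, "b": 17.9, "c": 21.3, "d": 20.1}
print(aggregate_max(tree, readings, "sink"))                 # 21.3, with n - 1 = 4 transmissions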
Challenges
To conclude this chapter, we present the major challenges of WSNs. S. Sharma et al. [START_REF] Sharma | Issues and challenges in wireless sensor networks[END_REF] also highlighted many other challenges that are more application-specific, such as security, robustness, multimedia communication, etc.
Energy Sensor nodes are battery-powered and may be deployed in areas not accessible by humans (due to physical risks, as in military or post-disaster applications, or due to application requirements, as in the Great Duck Island monitoring explained before). This characteristic forces the sensor nodes to be energy self-sufficient. This can be done by accurately predicting the amount of energy needed by the application to run according to its specification during a given duration. Another way to ensure that the node remains powered over time is to equip it with an energy-harvesting component, such as a solar panel.
MAC Layer
The MAC layer has a direct impact on the energy consumption of wireless sensor nodes. Indeed, listening, sending and receiving control messages and acknowledgments are actions that originate from the MAC layer and consume a lot of energy. However, they also contribute to the reliability and low latency of application communications. Thus, the challenge is to find a good trade-off.
Routing Routing protocols impact the way communications are handled in the network. A bad routing protocol may cause a subset of nodes to consume much energy unnecessarily. Also, the routing protocol may be responsible for weak node reachability, for instance if the protocol has an incomplete view of the topology of the network.
Evolving Topology When the topology evolves over time, the majority of existing protocols may fail, as they assume a static (or at least stable) topology. However, there is an increasing number of applications that produce dynamic topologies, including drone or robot networks, or applications that monitor animals.
Limited Memory
The limited amount of memory can be a challenge when it comes to storing the local state of the network, or to buffering the data read from the sensor before transmission. However, it can also be turned into an advantage: for instance, oblivious protocols, which use no persistent memory, lead to fault-tolerant applications that perform well in dynamic topologies.
Chapter 3
Model
In open country, the most probable place to find a drunken man who is at all capable of keeping on his feet is somewhere near his starting point!

In this chapter we present the model used in the remainder of the thesis. As with any other network, it is convenient to represent a WSN as a simple graph where the nodes represent the devices and the edges the communication links between them. However, we show that in a wireless context, one may not simply model communication links independently. In fact, the existence of a communication link (which represents the ability of the node at one extremity to transmit a message to the node at the other extremity) is subject to other factors, such as the transmission of another node somewhere else in the network.
In Section 3.1, we present the model for the physical layer, which defines how simultaneous transmission signals are received by the nodes. WSN simulators use this model to evaluate accurately the performances of protocols. Then, in Sections 3.2 and 3.3, we show a simple way to model static and dynamic WSNs as graphs and evolving graphs, respectively. Despite their simplicity, these models offer a good abstraction to analyze protocols from a theoretical point of view. Finally, we present several models to generate random static and dynamic graphs.
Physical Layer Modeling
In a WSN, messages are sent with a radio transceiver on a shared medium. In this context, modeling the way the signal propagates is crucial to evaluate how the message is received by other nodes. Physical layer modeling includes the radio range modeling, the radio link modeling, and the interference modeling [START_REF] Elyes | Impact of the physical layer modeling on the accuracy and scalability of wireless network simulation[END_REF].
Radio Range Modeling
The range modeling is based on the Signal to Noise Ratio (SNR) of links. The SNR of a link (i, j), denoted snr(i, j), is defined as the path loss PL_{i,j} times the ratio of the transmission power P_i to the power N_j of the background noise at node j:

snr(i, j) = (PL_{i,j} · P_i) / N_j
Let θ be the SNR threshold. Assuming the system is interference-free, the state of the radio link is binary: if the signal to noise ratio of a link is above the threshold, the link is on; otherwise, the link is off.
Propagation models
The path loss may depend on the transceiver properties, the distance between the nodes, and other environmental aspects [START_REF] Phillips | A survey of wireless path loss prediction and coverage mapping methods[END_REF]. It defines the way a signal sent by a node is attenuated when received by another node. Analytic models that are commonly used in wireless network simulators include the free-space model. In a complex environment, other effects such as shadowing and fading appear. To take these effects into account efficiently, several models add a random variable to a path-loss model to account for additional fading and shadowing in the wireless channel. For instance, the Log-Normal Shadowing model and the Rayleigh Fading model are commonly used. Shadowing corresponds to slow variations of the signal due to obstacles. With multiple obstacles, the signal may be split into different paths, and the different copies of the signal create a fading effect, corresponding to fast variations in the signal amplitude.
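As an illustration, the following Python sketch combines a log-distance path-loss model with a log-normal shadowing term, which is a common way simulators add random signal variations; the parameter values are arbitrary and not taken from any specific platform.

# Log-distance path loss with log-normal shadowing (illustrative values).
import math
import random

def path_loss_db(distance, d0=1.0, pl0_db=40.0, exponent=3.0, sigma_db=4.0):
    # pl0_db   : path loss at the reference distance d0
    # exponent : path-loss exponent (2 in free space, larger in cluttered areas)
    # sigma_db : standard deviation of the log-normal shadowing term
    distance = max(distance, d0)
    shadowing = random.gauss(0.0, sigma_db)       # slow, obstacle-induced variation
    return pl0_db + 10.0 * exponent * math.log10(distance / d0) + shadowing

def received_power_dbm(tx_power_dbm, distance):
    return tx_power_dbm - path_loss_db(distance)

print(received_power_dbm(0.0, 25.0))              # around -82 dBm, varying per run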
Radio Link Modeling
To model the radio link, one can replace the fixed SNR threshold by a random variable representing the Packet-Error-Rate (PER). The PER is a function of the Bit-Error-Rate (BER), which is derived from the radio properties and the modulation. Given a modulation, there exist different techniques to compute the BER [START_REF] Wang | A simple and general parameterization quantifying performance in fading channels[END_REF].
Interference Modeling
When several nodes transmit simultaneously, the SNR is replaced by the Signal to Interference plus Noise Ratio (SINR), in which the power received from the other simultaneous transmitters adds to the background noise:

sinr(i, j) = (PL_{i,j} · P_i) / (N_j + Σ_{k ≠ i,j} PL_{k,j} · P_k)
To study by simulation the efficiency of an algorithm that is executed by devices communicating through wireless signals, it is necessary to consider an interference model. The survey by P. Cardieri [START_REF] Cardieri | Modeling interference in wireless ad hoc networks[END_REF] shows the impact of the interference model on various network layers. A. Iyer et al. [START_REF] Iyer | What is the right model for wireless channel interference? Wireless Communications[END_REF] show that the interference model has a huge impact on both scheduled transmission networks and random access networks. Their conclusion is that all models used for quantitative evaluation purposes should at least include SINR considerations. The vast amount of research on this topic highlights the fact that it is not only necessary to have an interference model when analyzing WSNs, but it is essential to have a good one.
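A minimal SINR-based reception test is sketched below in Python. Powers are in linear scale, the attenuation pl[k][j] plays the role of PL_{k,j}, and all values (including the threshold) are illustrative.

# SINR-based reception check (linear scale, illustrative values).
def sinr(i, j, transmitters, p, pl, noise):
    # p[k]     : transmission power of node k
    # pl[k][j] : attenuation of the signal of k at receiver j (0 < pl <= 1)
    # noise[j] : background noise power at j
    signal = pl[i][j] * p[i]
    interference = sum(pl[k][j] * p[k] for k in transmitters if k not in (i, j))
    return signal / (noise[j] + interference)

def receives(i, j, transmitters, p, pl, noise, theta=10.0):
    # j decodes the message of i iff the SINR is above the threshold theta.
    return sinr(i, j, transmitters, p, pl, noise) >= theta

p = {0: 1.0, 1: 1.0, 2: 1.0}
pl = {0: {2: 0.05}, 1: {2: 0.002}, 2: {}}
noise = {2: 1e-4}
print(receives(0, 2, {0, 1}, p, pl, noise))   # True: node 1's interference is low enough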
WSN as a Unit-Disk Graph
A simple and abstract way to model a WSN is to assume that nodes are deployed in a 2-dimensional Euclidean space and that each node u has a communication range R_u and an interference range I_u ≥ R_u. Then, a communication link exists from a node u to a node v if and only if the Euclidean distance between them is smaller than the communication range R_u of u. Then, we can model the network as its communication graph G(V, E), where V is the set of nodes and E the set of edges, defined by
(u, v) ∈ E ⇔ d(u, v) ≤ R_u
where d(u, v) denotes the Euclidean distance between nodes u and v. In the same way, we define the interference graph G_i(V, E_i), where the edges in E_i verify
(u, v) ∈ E_i ⇔ d(u, v) ≤ I_u
The interference graph informs us where collisions occur when a given subset of nodes transmit simultaneously. When a node u transmits a message, a neighbor v in the communication graph is able to receive the message, only if there is no node w transmitting at the same time and having v as neighbor in the interference graph (see Figure 3.1). Formally, a node v receives a message from a transmitting node u, if and only if (u, v) ∈ E and there is no transmitting node w ∈ V \ {u} such that (w, v) ∈ E i .
For the sake of simplicity, we can assume that all the nodes are identical, their communication ranges are normalized to 1, and their interference ranges equal their communication ranges. In this case, the communication graph and the interference graph coincide, and the condition for a node v to receive a message from a neighbor u is that no other neighbor transmits at the same time. Also, the communication graph is a special case of intersection graph called a Unit-Disk Graph (UDG) [START_REF] Clark | Unit disk graphs[END_REF] (see Figure 3.2), i.e., a node is represented by a disk of radius 1/2 in the plane, and an edge exists between two nodes if the intersection between their disks is not empty (disks are assumed to be closed, so that tangent disks intersect).
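The sketch below, in Python, builds the unit-disk communication graph from node positions (with range normalized to 1) and checks, under this simple model where the interference range equals the communication range, whether a receiver gets a message when a given set of nodes transmits; the positions are arbitrary examples.

# Unit-disk graph and the simple collision rule: v receives a message from u
# iff (u, v) is an edge and no other transmitter is a neighbor of v.
import math

def unit_disk_graph(positions, radius=1.0):
    edges = set()
    for u in positions:
        for v in positions:
            if u != v and math.dist(positions[u], positions[v]) <= radius:
                edges.add((u, v))
    return edges

def receives(v, sender, transmitters, edges):
    if (sender, v) not in edges:
        return False
    # Collision if any other transmitter is also a neighbor of v.
    return all((w, v) not in edges for w in transmitters if w != sender)

positions = {"u": (0.0, 0.0), "v": (0.9, 0.0), "w": (1.7, 0.0)}
edges = unit_disk_graph(positions)
print(receives("v", "u", {"u"}, edges))        # True
print(receives("v", "u", {"u", "w"}, edges))   # False: w also reaches v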
In this simple model, we usually assume discrete time and normalize the duration of the transmission of a message to 1 time unit. In this context, we are able to use the language of graph theory. For instance, we say that the transmission of a message sent by a node u to a node v is done along a path from u to v, which is a sequence of edges where the first edge contains u, the last edge contains v, and two consecutive edges have one node in common. Then, the length of the path is exactly the duration between the transmission of the message by u and its reception by v.
In the sequel, n denotes the cardinality of V and ∆ denotes the maximum node degree of G.
Shortest Path Tree A shortest path between two nodes of a graph G(V, E) is a path with minimal length. In a network, it usually represents the best route for the transmission of a message between these nodes. An interesting extension is to look at all the shortest paths from a given node v to all the other nodes (or equivalently, from all the other nodes to v). From this we can obtain a tree, where each path from v to another node is a shortest path. This tree, called the shortest path tree, is not unique, but gives an optimal strategy to perform a broadcast from v (or a convergecast to v), see Figure 3.3. This definition is given for a static graph, and we show in Section 3.3.3 one way (among others) to extend it to dynamic graphs.
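A shortest path tree rooted at a given node can be computed with a breadth-first search, as in the following Python sketch (the adjacency lists are an arbitrary example).

# Shortest path tree rooted at `root`, computed by breadth-first search.
from collections import deque

def shortest_path_tree(adj, root):
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:        # first visit of v: a shortest path reaches it here
                parent[v] = u
                queue.append(v)
    return parent                      # parent pointers encode the tree

adj = {"s": ["a", "b"], "a": ["s", "c"], "b": ["s", "c"], "c": ["a", "b"]}
print(shortest_path_tree(adj, "s"))    # {'s': None, 'a': 's', 'b': 's', 'c': 'a'}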
Orthogonal Planar Embedding An orthogonal planar embedding (or drawing) of a planar graph G(V, E) is a mapping of each vertex to a distinct point in a grid, and of each edge to a curve made of horizontal and vertical segments such that edges intersect only at a vertex (see Figure 3.4). The embedding of a planar graph can be used to create a UDG, with the help of the following lemma, used in Chapter 5.

Lemma 3.1
Let H be a planar graph with n > 6 nodes and maximum degree ∆ ≤ 4. Then there exists an orthogonal planar embedding of H such that each edge has the same length. This embedding can be computed in time polynomial in n.
Proof: From [BK98], we know that there exists an orthogonal embedding of H in a grid g of size n where each edge has at most 2 bends (so each edge has length smaller than 3n). Let g_i be the grid g where the unit has been divided by 4i, so that g_i has size 4in.
We consider the corresponding embedding of H in g_i (so that the coordinates of the vertices have been multiplied by 4i). In g_i, the maximum length of an edge is 3 × 4in.
We can increase the length of an edge by 2 by replacing a piece of the edge between two points with integer coordinates by a new piece of edge, as in Figure 3.5a. This is possible if there are no other edges around.
By choosing i ≥ 48n, there is enough space in a subgrid of size (2i − 2) × (2i − 1), called a safe box, to contain an edge of length 3 × 4in. Thus, if i ≥ 48n, since all the lengths are even, we can lengthen all the edges to make them have length 3 × 4in. Indeed, we can lengthen a given edge in the first (2i − 2) × (2i − 1) subgrid of an endpoint, in the anticlockwise direction (see Figure 3.5).
Dynamic Networks
A dynamic network is a network whose topology (the links' existence and properties) and characteristics (the nodes' properties) evolve over time. This domain of computer science has grown in the past decade. A big difference with static networks is that, in a dynamic network, such changes of properties, or link appearances and disappearances, are not seen as failures but are part of the normal evolution of the network.
There exist a number of models to represent such networks. Here we present the two most common ones: the general Time-Varying Graph model, which is complex but encompasses all the other models, and the simpler evolving graph model, which can be used to model the majority of networks.
Figure 3.6 -A simple TVG. The interval(s) on each edge e represents the periods of time when it is available, that is, {t ∈ T : ρ(e, t) = 1}. From [START_REF] Casteigts | Time-varying graphs and dynamic networks[END_REF].
The lifetime T of a dynamic network is the set of time instants used to represent the evolution of the graph. It is a subset of a temporal domain T, which is usually N for discrete-time systems or R+ for continuous-time systems.
Time-Varying Graphs
Time-varying graphs have been defined by A. Casteigts et al. [START_REF] Casteigts | Time-varying graphs and dynamic networks[END_REF] in the following way:
Definition 3.1
A Time-Varying Graph (TVG) is a tuple (V, E, T, ρ, ζ), where:
- (V, E) is a labeled graph;
- ρ : E × T → {0, 1} is the presence function that indicates whether a given edge is available at a given time;
- ζ : E × T → T is the latency function that indicates the time it takes to cross a given edge if starting at a given date.
An example of a TVG is presented in Figure 3.6.
Evolving Graphs
We consider the special case of discrete TVGs with temporal domain T = T = N and a constant latency function ζ that equals 1 for every edge at any time. Under those assumptions, the graph is called an evolving graph, i.e., a sequence of snapshots, where each snapshot represents the time-varying graph at a given time t ∈ N (see Figure 3.7). Since the latency is 1, messages can travel at most one hop at a given time. An evolving graph can be defined using a simpler formalism than a TVG.
Definition 3.2
An evolving graph is a couple (V, E) where V is a set of nodes and E = (E_t)_{t∈N} is a sequence that represents the edges between nodes over time. The snapshot at time t is the graph (V, E_t) and represents the topology of the network at time t. In this model, an edge is a couple (e, t) where e is an edge between two nodes in the snapshot (V, E_t), and t is called the time of occurrence of the edge. In the remainder of the thesis, we only consider evolving graphs or variants thereof.
Definitions and Preliminaries
If it is clear from the context, ∆ denotes the maximum node degree among all the snapshots of G.
Underlying graph
The underlying graph of an evolving graph G = (V, (E_t)_{t∈N}) is the graph (V, ∪_{t∈N} E_t). This graph is also called the footprint of G. It captures all the edges of the evolving graph, i.e., if an edge exists in the underlying graph then this edge exists at some time. However, this information is sometimes not helpful, as it may become obsolete after some time (e.g., if a link appears only a finite number of times). That is why we can also consider the eventual underlying graph. The eventual underlying graph of an evolving graph G = (V, (E_t)_{t∈N}) is the graph G_∞ = (V, ∩_{t'∈N} ∪_{t>t'} E_t), i.e., an edge exists in G_∞ between two nodes if those nodes are connected in G infinitely often.
Journey A journey from a node u to a node v is a sequence of edges ((e_1, t_1), . . . , (e_r, t_r)) such that (e_1, e_2, . . . , e_r) is a path from u to v in the underlying graph and

∀i ∈ [1..r − 1], t_i < t_{i+1}    and    ∀i ∈ [1..r], e_i ∈ E_{t_i}
For a journey J, we denote by departure(J) the starting time t_1 and by arrival(J) the arrival time t_r + 1 of the journey. The arrival time corresponds to the time of the existence of the last edge plus the latency to travel along the last edge. Then, duration(J) = arrival(J) − departure(J) denotes the duration of the journey. We denote by J_{(u,v)} the set of journeys from u to v and by J_{(u,v)}^{[t_s, t_e]} the subset of journeys that start and end between t_s and t_e.
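The arrival time of a foremost journey (one with minimum arrival time) from a source to every other node can be computed with a single forward sweep over the snapshots, as in the Python sketch below, where the evolving graph is given as a list of edge sets, one per time step, and crossing an edge takes one time unit.

# Earliest arrival times of foremost journeys from `source`, starting at t_start.
def foremost_arrivals(snapshots, nodes, source, t_start=0):
    INF = float("inf")
    arrival = {v: INF for v in nodes}
    arrival[source] = t_start                  # the source reaches itself at t_start
    for t in range(t_start, len(snapshots)):
        for (u, v) in snapshots[t]:
            for a, b in ((u, v), (v, u)):
                # The message can cross (a, b) at time t only if it reached a by t.
                if arrival[a] <= t and t + 1 < arrival[b]:
                    arrival[b] = t + 1
    return arrival

nodes = ["a", "b", "c", "d"]
snapshots = [{("a", "b")}, {("b", "c")}, {("a", "d")}, {("c", "d")}]
print(foremost_arrivals(snapshots, nodes, "a"))
# {'a': 0, 'b': 1, 'c': 2, 'd': 3}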
Foremost Convergecast Trees
We introduce the notion of convergecast trees and foremost convergecast trees.
Definition 3.3
Let G(V, E) be an evolving graph. A convergecast tree to node s is a pair (T, c), where T(V, E_T) is a tree rooted at s, and c is a function c : E_T → N that satisfies the following: if u is a descendant of v in T and (e_1, e_2, . . . , e_r) is the path from u to v in T, then ((e_1, c(e_1)), (e_2, c(e_2)), . . . , (e_r, c(e_r))) is a journey in G. In particular, the journey obtained from the path from a node u to the root is called the journey from u to s induced by T.
Foremost Convergecast Tree
The departure (respectively, the arrival) of the convergecast tree is the departure of the first journey in T (respectively, the arrival of the last journey in T):

departure(T, c) = min_J departure(J)    and    arrival(T, c) = max_J arrival(J),

where J ranges over the journeys induced by T. Let G(V, E) be an evolving graph. A Foremost Convergecast Tree (FCT) to node s starting at time t_s is a convergecast tree (T, c) to s such that departure(T, c) ≥ t_s, with minimum arrival time.
FCT(G, s, t_s) denotes the set of foremost convergecast trees of G to node s starting after time t_s. The common duration of the foremost convergecast trees starting after t_s is denoted FCTD(G, s, t_s).
In dynamic WSNs, a foremost convergecast tree plays the same role as a shortest path tree in static WSNs. Indeed it gives an optimal routing strategy to perform a convergecast to s. Figure 3.8 shows an example of the unique foremost convergecast tree to the sink node s starting at time 0 of a simple evolving graph.
Hierarchy of Dynamic Graphs
A hierarchy of classes of dynamic graphs has been identified in previous work [START_REF] Casteigts | Time-varying graphs and dynamic networks[END_REF]. Here we present only the few we are interested in.
- C (Connectivity over time): there exists a journey between any two nodes. ∀u, v ∈ V : J_{(u,v)} ≠ ∅.
- RC (Recurrent connectivity): there exists a journey between any two nodes, infinitely often. ∀u, v ∈ V, ∀t ∈ N : J_{(u,v)}^{[t,+∞)} ≠ ∅.
- BRC (Time-bounded recurrent connectivity): there exists a bound T such that there exists a journey between any two nodes in every interval of duration T. ∀u, v ∈ V, ∀t ∈ N : J_{(u,v)}^{[t,t+T]} ≠ ∅.
- RE (Recurrence of edges): the graph is connected over time and if an edge appears once, it appears infinitely often. ∀u, v ∈ V : J_{(u,v)} ≠ ∅ and
  (∃t, (u, v) ∈ E_t) ⇒ ∀t' ∈ N, (u, v) ∈ ∪_{t''>t'} E_{t''}
- BRE (Time-bounded recurrence of edges): the graph is connected over time and there exists a bound T such that, ∀u, v ∈ V :
  (∃t, (u, v) ∈ E_t) ⇒ ∀t' ∈ N, ∃t'' ∈ [t', t' + T] s.t. (u, v) ∈ E_{t''}
- P (Periodic): the graph is connected over time and there exists T ∈ N such that, ∀u, v ∈ V :
  (∃t, (u, v) ∈ E_t) ⇒ (∀k ∈ N, (u, v) ∈ E_{t+kT})
Random Graphs
In this section we present several models to generate random graphs.
Erdős-Rényi graphs An Erdős-Rényi graph [START_REF] Erdős | On random graphs i[END_REF][START_REF] Bollobás | Random graphs[END_REF] is a static graph constructed by connecting nodes randomly: each edge is included in the graph with probability p, independently of every other edge. Equivalently, every graph with n nodes and M edges has the same probability

p^M (1 − p)^{\binom{n}{2} − M}
The connectivity of Erdős-Rényi graphs is well known: if p < (1 − ε) ln(n)/n, then the graph almost surely contains isolated vertices; if p > (1 + ε) ln(n)/n, the graph is almost surely connected. Here, almost surely means that the probability tends to 1 as n tends to infinity. An evolving graph G(V, E) can be constructed with this model if we assume that each snapshot E_t is an Erdős-Rényi graph.
Edge-Markovian Evolving Graphs An edge-Markovian graph [CMM + 10] is an evolving graph constructed in the following way. The first snapshot can be any graph (either given, or randomly generated). Then, at every time step, every edge changes its state (existing or not) according to a two-state Markovian process with probabilities p and q. If an edge exists at time t, then at time t + 1 it dies with probability q (the death rate). If instead the edge does not exist at time t, then it comes into existence at time t + 1 with probability p (the birth rate). Observe that when q = 1 − p, the construction is time-independent and corresponds to an evolving graph where each snapshot is an Erdős-Rényi graph with parameter p.
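The construction is straightforward to implement, as in the Python sketch below (parameters are illustrative); with q = 1 − p each snapshot is an independent Erdős-Rényi graph, as noted above.

# Edge-Markovian evolving graph generator (illustrative parameters).
import itertools
import random

def edge_markovian(n, p, q, steps, seed=None):
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n), 2))
    current = {e for e in pairs if rng.random() < p}      # first snapshot: Erdős-Rényi(p)
    snapshots = [set(current)]
    for _ in range(steps - 1):
        nxt = set()
        for e in pairs:
            if e in current:
                if rng.random() >= q:                     # the edge survives with probability 1 - q
                    nxt.add(e)
            elif rng.random() < p:                        # the edge is born with probability p
                nxt.add(e)
        current = nxt
        snapshots.append(set(current))
    return snapshots

for t, snapshot in enumerate(edge_markovian(n=5, p=0.3, q=0.5, steps=3, seed=1)):
    print(t, sorted(snapshot))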
Random Unit-Disk Graph Here, and in the sequel, we assume that the positions are chosen in a simulation area A that is convex (usually we consider a square). A random unit-disk graph with n nodes in an area A is a unit-disk graph where each node has a random position in the area A.
Random Walk [Pea05]
To generate an evolving graph from a random walk with step s in an area A, we initially consider either a given or a random unit-disk graph. Then, the snapshot at time t is constructed by assigning each node a random position in A at distance at most s from its position at time t − 1.
Random Waypoint [START_REF] David | Dynamic source routing in ad hoc wireless networks[END_REF] To generate an evolving graph using the random waypoint model with speed s in an area A, we initially consider either a given or a random unit-disk graph. Then each node chooses a random destination in A and starts to move toward it with speed s. Once a node reaches its destination, it chooses a new destination randomly in A. The speed can be fixed, or different for each node and chosen uniformly at random in a given interval at the beginning of the construction, or varying over time.
A variant of the random waypoint, called the Manhattan random waypoint, assumes that movements can only be horizontal or vertical, to simulate movements in the streets of a city. There are other variants too, for instance where the movements are smoothed. One can also consider other kinds of simulation areas, for instance with the random waypoint on a torus or on a sphere [START_REF] Boudec | Perfect simulation and stationarity of a class of mobility models[END_REF].
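A minimal random waypoint generator is sketched below in Python: each node moves toward a random destination at a fixed speed and picks a new destination upon arrival; a unit-disk snapshot can then be taken at every step. The square area and all parameters are illustrative.

# Random waypoint mobility in a square area (illustrative parameters).
import math
import random

def random_waypoint(n, side, speed, steps, seed=None):
    rng = random.Random(seed)
    pos = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    dst = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    trace = [list(pos)]
    for _ in range(steps):
        for i in range(n):
            (x, y), (dx, dy) = pos[i], dst[i]
            dist = math.hypot(dx - x, dy - y)
            if dist <= speed:                  # destination reached: pick a new one
                pos[i] = dst[i]
                dst[i] = (rng.uniform(0, side), rng.uniform(0, side))
            else:                              # move `speed` units toward the destination
                pos[i] = (x + speed * (dx - x) / dist, y + speed * (dy - y) / dist)
        trace.append(list(pos))
    return trace                               # list of node positions, one entry per time step

trace = random_waypoint(n=4, side=10.0, speed=0.5, steps=3, seed=7)
print(trace[0][0], trace[-1][0])               # first node: initial and final positions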
Markovian Evolving Graph All the above-mentioned graph models are Markovian Evolving Graphs (MEG) [START_REF] Avin | How to explore a fastchanging world (cover time of a simple random walk on evolving graphs)[END_REF]. Formally, the process is a MEG if the sequence of snapshots, seen as random variables, (G(V, E_t))_{t∈N} = G(V, E_1), G(V, E_2), . . . is a Markov chain, i.e., the distribution of each snapshot depends only on the previous one.
Part II
Data Aggregation Problem
Chapter 4
Introduction of Part II
The scientist does not study nature because it is useful to do so. He studies it because he takes pleasure in it, and he takes pleasure in it because it is beautiful.

In the second part of this thesis, we deal with an important problem in WSNs: the data aggregation problem. We saw in Chapter 2 that WSNs are usually made of small, often battery-powered, sensor nodes. These nodes are deployed in an area to perform a task. The nodes may be controlled indirectly by an end-user located at a specific node called the gateway. The control is indirect because there may not exist a direct link between every node and the gateway, so that intermediate nodes are necessary to forward messages between two distant nodes. In many applications, each node is assigned a small task (e.g., collecting or generating data from its environment). Then one of the fundamental communication tasks is to collect the result of each node at the gateway. The gateway is then able to analyze the data and either inform the end-user of the final result or perform a new task in response. In this context, the gateway is called the sink.
There are many ways to retrieve the data from a set of nodes. A simple way is to order every node to transmit its data to the sink, without looking at what the other nodes are doing. Intermediate nodes can forward the other nodes' data according to their routing protocol. This is called a convergecast. However, from an energy-efficiency point of view, this may generate too many transmissions, especially for the nodes close to the sink, which are responsible for forwarding a large number of messages. In the worst case, Ω(n^2) transmissions are necessary to perform a convergecast from n nodes to a sink node.
In order to minimize the number of transmissions, we can assume that several data can be merged together by an aggregation function. Examples of such functions include min, max, etc. Then a node can wait to receive the data from its neighbors before forwarding the aggregated data toward the sink. This assumption implies that n transmissions are sufficient to aggregate the data from n nodes to the sink. This idea was made popular when the minimum duration of the data aggregation was analyzed by Chen et al. [START_REF] Chen | Minimum data aggregation time problem in wireless sensor networks[END_REF] and Kesselman et al. [START_REF] Kesselman | Fast distributed algorithm for convergecast in ad hoc geometric radio networks[END_REF], after the work of Annamalai et al. [START_REF] Annamalai | On tree-based convergecasting in wireless sensor networks[END_REF]. The problem was initially defined in WSNs with a static topology where interference due to simultaneous transmissions can cause collisions, i.e., if two nodes simultaneously transmit a message to a common neighbor, no data is received because of wireless signal interference. Avoiding collisions while ensuring a small overall delay is a challenge, and finding the optimal solution is NP-complete [START_REF] Bramas | The complexity of data aggregation in static and dynamic wireless sensor networks[END_REF]. When the collisions are handled by a MAC layer, the problem is more practical and the challenges are different.
After extending the problem of data aggregation to dynamic WSNs, we observe that the geometric nature of WSNs no longer plays an important role, even though the possible collisions between transmissions are still the root of the problem from a global point of view (without collisions, the problem is trivial for a centralized algorithm). Finally, our study of distributed solutions for this problem shows that in dynamic networks, collisions play a secondary role and the network can be fully abstracted and modeled by a simple sequence of interactions. In this context, the amount of information an algorithm assumes the nodes have is directly linked to its performance.
Problem
Let V be a set of n nodes that initially have a data. We assume that the time is discrete and the set of time instants is represented by the set of positive integers N. At a given time t ∈ N, a node that has a data can receive data from its neighbors and aggregate it with its own data. For the sake of simplicity, the duration for the reception and the aggregation of a data is normalized to one. The goal is to aggregate all the data from nodes in V to a sink node s ∈ V . In the sequel V always denotes the set of nodes, n ≥ 3 its size, and s the sink node.
One can observe that the way we define the aggregation implies that a node can transmit its data at most once. With this rule, the aggregation is performed with the minimal number of transmissions, i.e., n − 1. In the context of WSNs, this implies that an algorithm solving the data aggregation problem is energy optimal, considering only the energy consumed by radio transmissions, which is one of the most important parts of the consumed energy.
Definition 4.1 (Minimum Data Aggregation Time Problem)
The problem of aggregating the data from the nodes in V to the sink s with minimum duration is called the Minimum Data Aggregation Time (MDAT) problem. An instance of the MDAT problem is a couple (G, s) where G(V, E) models an evolving graph, and s ∈ V represents the sink node.
We defined the problem independently of the nature of the network, static or dynamic, with or without geometric constraints, and independently of the collision avoidance capabilities of the nodes. In the following subsections, we make the definition of the problem more precise, as it can take different forms and raise different challenges depending on these parameters.
Minimum Data Aggregation Time Problem in a WSN
Here, we suppose that the network is composed of sensor nodes that communicate through wireless transmissions, and that the collisions due to interference are not handled by a MAC layer. We suppose that the topology of the network can evolve over time. Since the problem is defined with discrete time instants, we choose to model the network as an evolving graph (see Definition 3.2).
Due to the wireless nature of the communication, transmissions are subject to collisions. The simplest model of interference, as defined in Chapter 2, constrains the communication with the following rules. Sensor nodes can send or receive data, but cannot do both at the same time. Moreover, if two nodes send their data simultaneously, all their common neighbors do not receive anything, due to interference (see Figure 4.1a), i.e., a node must have a unique transmitting neighbor in order to receive data from it. So, a solution of the MDAT problem in a WSN is a collision-free schedule, telling when each node has to transmit, so that all the data of the network is aggregated at the sink node with minimum duration. This problem is studied in Chapter 5.
One can observe that the term "collision-free" does not mean that no collision occurs, but that the data that have been aggregated are received without collision. Indeed, it is possible that two nodes u and v simultaneously transmit their data to nodes u' and v' (e.g., in Figure 4.1b, u is the only neighbor of u' and v is the only neighbor of v') and still a collision occurs at a node w (e.g., u and v are neighbors of w).
Let G = (V, (E_t)_{t∈N}) be an evolving graph where each snapshot (V, E_t) is a UDG.

Definition 4.2
Given two sets of nodes B ⊆ A ⊆ V, we say that data is aggregated from A to B at time t if every node u in A ∖ B has a neighbor v in B such that no other node of A ∖ B is a neighbor of v, i.e.,

∀u ∈ A ∖ B, ∃v ∈ B, ∀u' ∈ A ∖ B ∖ {u} : (u, v) ∈ E_t ∧ (u', v) ∉ E_t
Then, the definition of a dynamic data aggregation schedule follows.
Definition 4.3 (Dynamic Data Aggregation Schedule)
A dynamic data aggregation schedule to s of duration l is a decreasing sequence of sets
V = R_0 ⊇ R_1 ⊇ . . . ⊇ R_l = {s} such that, for all t = 0, 1, . . . , l − 1, data is aggregated from R_t to R_{t+1} at time t.
For an instance of the dynamic MDAT problem (G, s), a solution is a dynamic data aggregation schedule to s with minimum duration. The minimum duration is denoted by MDAT_Opt(G, s).
Remark 1
The MDAT problem may have no solution, even in an evolving graph G ∈ C that is connected over time. Indeed, consider a set of edges defined as follows:
E_0 = V × V and ∀i > 0, E_i = ∅.
Then, the graph is connected, but only one node can send its data to the sink node at time 0, and the other nodes are never able to send their data. A simple sufficient (but not necessary) assumption that ensures the existence of a solution is that the graph is in the class RC of recurrent connected graphs (see our algorithm GDAS in the sequel).
Tolerating Multiple Simultaneous Transmissions An interesting extension of the MDAT problem in WSNs is to consider wireless communications with better collision-handling capabilities. For instance, we can assume that up to K simultaneous transmissions can be received by a node, and that a collision occurs at a node if K + 1 neighboring nodes transmit simultaneously. Let K ∈ N*; we define the MDAT_K problem as the MDAT problem with the additional assumption that nodes can simultaneously receive up to K messages from K different neighbors. We show in Chapter 5 that the results under this model are natural extensions of the case K = 1.

Here, we consider the MDAT problem in an arbitrary dynamic network, such as sensors deployed on a human body, or cars evolving in a city that communicate with each other in an ad hoc manner. We focus our study on the distributed aspect of the problem. To do so, we consider that nodes execute the same algorithm and have to make decisions in an online manner, i.e., each node processes its input in the order the input is fed to the algorithm, without having the entire input available from the start.
The essence of such a data aggregation algorithm is to decide whether or not to send a node's data when encountering a given communication neighbor. Also, a node may base its decision on its past experience (past interactions with other nodes) and initial knowledge only. Then, an algorithm accommodating those constraints is called a Distributed Online Data Aggregation (DODA) algorithm. The existence of such an algorithm is conditioned by the (dynamic) topology, initial knowledge of the nodes (e.g. about their future communication neighbors, or some partial information about the evolving graph), etc.
For the sake of simplicity, we assume that interactions between the nodes are carried out through pairwise operations (so that no collision occurs). Anytime two nodes a and b are communication neighbors (or, for short, are interacting), either no data transfer happens, or one of them sends its data to the other. The receiver executes the aggregation function on both its previously stored data and the received data and the output replaces its stored data. In the sequel, we use the term interaction to refer to a pairwise interaction.
We assume that an adversary controls the dynamics of the network, that is to say, the adversary decides which interactions occur. As we consider atomic interactions, the adversary decides what sequence of interactions occurs in a given execution. Then, the sequence of static graphs forming the evolving graph can be seen as a sequence of single-edge graphs, where the edge denotes the interaction that is chosen by the scheduler at this particular moment. Therefore, the time when an interaction occurs is exactly its index in the sequence.
This leads us to model the dynamic network as a couple (V, I), where I = (I_t)_{t∈N} is a sequence of pairwise interactions (or simply interactions). In the sequence (I_t)_{t∈N}, the index t of an interaction also refers to its time of occurrence. One can observe that this model is a restriction of the evolving graph model, where each snapshot consists of a graph with a single edge.
In general, we consider that nodes in V have unique identifiers, unlimited memory and unlimited computational power. However, we sometimes consider nodes with no persistent memory between interactions; those nodes are called oblivious.
During an interaction I_t = {u, v}, if both nodes still own data, then one of the nodes has the possibility to transmit its data to the other node. If a node decides to transmit its data, then it no longer owns any data and is not able to receive others' data anymore.
Distributed Online Data Aggregation Algorithms The distributed online data aggregation problem consists of choosing, at each interaction, whether a node transmits its data (and which one does) or not, so that after a finite number of interactions, the sink is the only node that owns a data. An algorithm solving this problem is called a Distributed Online Data Aggregation (DODA) algorithm.
A DODA algorithm takes as input an interaction I_t = {u, v} and its time of occurrence t ∈ N, and outputs either u, v or ⊥. If a DODA algorithm outputs a node, this node is the receiver of the other node's data. For instance, if u is the output, this means that before the interaction both u and v own data, and the algorithm orders v to transmit its data to u. The algorithm is able to change the memory of the interacting nodes, for instance to store information that can be used in future interactions. In the sequel, D_ODA denotes the set of all DODA algorithms, and D_ODA^∅ denotes the set of DODA algorithms that only require oblivious nodes.
A DODA algorithm can require some knowledge to work. A knowledge is a function (or just an attribute) given to every node that provides some information about the future, the topology, or anything else. By default, a node u ∈ V has two pieces of information: its identifier u.ID and a boolean u.isSink that is true if u is the sink, and false otherwise. A DODA algorithm may use additional functions associated with different knowledge. D_ODA(i_1, i_2, . . .) denotes the set of DODA algorithms that use the functions i_1, i_2, . . .. For instance, we define for a node u ∈ V the function u.meetTime that maps a time t ∈ N to the smallest time t' > t such that I_{t'} = {u, s}, i.e., the time of the next interaction with the sink (for u = s, we define s.meetTime as the identity, t → t). Then D_ODA(meetTime) refers to the set of DODA algorithms that use the information meetTime.
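To fix ideas, the Python sketch below shows the shape of a DODA algorithm as described above: a function that, given an interaction {u, v} and its time, returns the receiver or None (standing for ⊥). The simple rule implemented here, transmit only when meeting the sink, is merely an illustration and not one of the algorithms studied later; the Node class and the runner are hypothetical scaffolding.

# Shape of a DODA algorithm: given an interaction {u, v} at time t, return the
# receiving node, or None (⊥) if no transmission happens.
class Node:
    def __init__(self, ident, is_sink=False):
        self.ID = ident
        self.isSink = is_sink
        self.has_data = True

def doda_send_to_sink(u, v, t):
    # Illustrative rule: a node transmits only when it meets the sink.
    if not (u.has_data and v.has_data):
        return None                    # a node without data cannot take part anymore
    if u.isSink:
        return u                       # v transmits its data to the sink u
    if v.isSink:
        return v
    return None                        # otherwise, keep the data and wait

def run(algorithm, nodes, interactions):
    for t, (u, v) in enumerate(interactions):
        receiver = algorithm(u, v, t)
        if receiver is not None:
            sender = v if receiver is u else u
            sender.has_data = False    # the sender no longer owns a data
    return sum(node.has_data for node in nodes)

s, a, b = Node("s", is_sink=True), Node("a"), Node("b")
print(run(doda_send_to_sink, [s, a, b], [(a, b), (a, s), (b, s)]))  # 1: only the sink owns a data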
Adversary Models
We consider three models of adversaries:
-The oblivious adversary. This adversary knows the algorithm's code, and must construct the sequence of interactions before the execution starts.
-The adaptive online adversary. This adversary knows the algorithm's code and can use the past execution of the algorithm to construct the next interaction. However, it must make its own decision as it does not know in advance the decision of the algorithm. In the case of deterministic algorithms, this adversary is equivalent to the oblivious adversary.
-The randomized adversary. This adversary constructs the sequence of interactions by picking pairwise interactions uniformly at random.
Definition of Cost
To study and compare different DODA algorithms, we use a tool slightly different from the competitive analysis that is generally used to study online algorithms. The competitive ratio of an algorithm is the ratio between its performance and the optimal offline algorithm's performance. However, in our case, one can hardly define objectively the performance of an algorithm. For instance, if we just consider the number of interactions before termination, then an oblivious adversary can construct a sequence of interactions starting with the same interaction repeated an arbitrary number of times; in this case, even the optimal algorithm can be made arbitrarily slow. Moreover, the adversary can choose the same interaction repeatedly after the optimal offline algorithm terminates. This can prevent any non-optimal algorithm from terminating and makes it have an infinite competitive ratio.
To prevent this, we define the cost of an algorithm. Our cost is a way to define the performance of an algorithm, depending on the performance of the optimal offline algorithm. We believe our definition of cost is well-suited for lots of problems where the adversary has a strong power, especially in dynamic networks. One of its main advantages is that it is invariant by trivial transformation of the sequence of interactions, like inserting or deleting duplicate interactions.
For the sake of simplicity, a data aggregation schedule with minimum duration (performed by an offline optimal algorithm) is called a convergecast. Consider a sequence of interactions I. Let opt(t) be the ending time of a convergecast on I, starting at time t ∈ N. If the ending time is infinite (if the optimal offline algorithm does not terminate) we write opt(t) = ∞. Let T : N ≥1 → N ∪ {∞} be the function defined as follows:
T(1) = opt(0)    and    ∀i ≥ 1, T(i + 1) = opt(T(i) + 1)
T (i) is the duration of i successive convergecasts (two convergecasts are consecutive if the second one starts just after the first one completes).
Let duration(A, I) be the termination time of algorithm A executed on the sequence of interactions I. Now, we define the cost cost A (I) of an algorithm A on the sequence I, as the smallest integer i such that duration(A, I) ≤ T (i):
cost_A(I) = min{ i | duration(A, I) ≤ T(i) }
This means that cost_A(I) is a tight upper bound on the number of successive convergecasts we can perform during the execution of A on the sequence I. It follows from the definition that an algorithm performs an optimal data aggregation if and only if cost_A(I) = 1. Also, if duration(A, I) = ∞, then it is possible that cost_A(I) < ∞. Indeed, if i_max = min{ i | T(i) = ∞ } is well-defined, then cost_A(I) = i_max, otherwise cost_A(I) = ∞.
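The cost can be computed directly from this definition once duration(A, I) and opt(·) are known, as in the Python sketch below, where opt is an arbitrary example function.

# Cost of an algorithm on a sequence of interactions: the smallest i such that
# duration(A, I) <= T(i), where T(i) is the ending time of i successive optimal
# convergecasts.
def cost(duration, opt, max_iter=10**6):
    t = opt(0)                     # T(1) = opt(0)
    i = 1
    while t < duration:            # T(i) < duration: account for one more convergecast
        t = opt(t + 1)             # T(i + 1) = opt(T(i) + 1)
        i += 1
        if i > max_iter:
            return float("inf")    # no finite i satisfies the definition
    return i

# Example: every optimal convergecast started at time t ends at time t + 5.
opt = lambda t: t + 5
print(cost(duration=12, opt=opt))  # T(1)=5, T(2)=11, T(3)=17, so the cost is 3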
Related Work
There are two fundamental tasks in WSNs: data dissemination and data aggregation. After a quick review of the existing results on data dissemination, we present the related work on data aggregation. The small amount of work related to the data aggregation problem in dynamic networks under our model motivates our study.
Data Dissemination Problem
The time to disseminate a message in the network is also called the flooding time. In Markovian evolving graphs, known upper bounds on the flooding time hold if the MEG converges to a unique stationary distribution, which is a probability distribution over n-node graphs and is called the stationary (random) graph G. The upper bound is a function of the worst-case probability of appearance of an edge, the degree of independence among the edges in G, and the mixing time of the MEG (the worst-case time required to reach a distribution which is "close" to G).
In a more practical setting, W. Badreddine et al. [START_REF] Badreddine | Broadcast strategies in wireless body area networks[END_REF] reviewed and compared different strategies to disseminate information in body area networks, depending on the delay and the number of messages used.
Data Aggregation Problem
The problem of data aggregation has been widely studied in the context of wireless sensor networks. There also exists some work on dynamic networks. The literature on this problem can be divided into two groups, depending on whether collisions are assumed to be handled by an underlying MAC layer.
When Collisions are not Handled by the MAC Layer This case corresponds to the minimum data aggregation time problem defined in Subsection 4.1.1. This problem was initially studied by Annamalai et al. [START_REF] Annamalai | On tree-based convergecasting in wireless sensor networks[END_REF]. The authors assume that a fixed number of channels is available for a transmission, and a collision occurs at a receiver whenever two of its neighbors transmit on the same channel at the same time. The authors propose an algorithm that constructs a collision-free convergecast tree that can also be used for broadcasting.
Then, Kesselman et al. [START_REF] Kesselman | Fast distributed algorithm for convergecast in ad hoc geometric radio networks[END_REF] and Chen et al. [START_REF] Chen | Minimum data aggregation time problem in wireless sensor networks[END_REF] formalized the problem, equivalent to the convergecast problem defined by Annamalai et al. with a single channel, and gave the first analytic bounds. However, the problem defined by Kesselman et al. differs slightly, as no particular node plays the role of the sink and the transmission ranges of the nodes are variable (so that the communication graph is directed). Chen et al. [START_REF] Chen | Minimum data aggregation time problem in wireless sensor networks[END_REF] presented a well-defined model for the study of the MDAT problem (as we defined it in Subsection 4.1.1) and proved that the problem is NP-complete, even in graphs of degree at most four (more precisely, they restricted the problem to networks whose topology is a sub-graph of the grid, which cannot be considered directly as a UDG). They stated that both the height of the shortest path tree (which gives an optimal solution if there is no collision) and log(|V|) (since, in the best case, the number of nodes with data can be halved at each time step) are lower bounds for the MDAT problem. They also gave a (∆ − 1)-approximation algorithm.
When Collisions are Handled by the MAC Layer Various problems related to data aggregation have been investigated. The general term in-network aggregation includes several problems such as gathering and routing information in WSNs, mostly in a practical way. For instance, a survey [START_REF] Fasolo | Innetwork aggregation techniques for wireless sensor networks: a survey[END_REF] relates aggregation functions, routing protocols, and MAC layers with the objective of reducing resource consumption. Continuous aggregation [START_REF] Abshoff | Structural information and communication complexity: 21st international colloquium[END_REF] assumes that data have to be aggregated, and that the result of the aggregation is then disseminated to all participating nodes. The main metric is then the delay before the aggregated data is delivered to all nodes, as no particular node plays the role of a sink.
Most related to our concern is the work by Cornejo et al. [START_REF] Cornejo | Aggregation in dynamic networks[END_REF]. In their work, they consider a finite-time evolving graph. Pairwise interactions occur between nodes, i.e., in every snapshot, each node is included in at most one edge. Each node starts with a token, and no particular node plays the role of a sink node. Then, when two nodes interact, one of them can decide whether or not to send its token. Tokens must not be lost nor duplicated, i.e., each token must be owned by a unique node at any time. However, a node that transmits its token can receive it again later.
The goal is to minimize the number of nodes that own at least one token (such nodes are called uploaders) at the end of the execution. An algorithm has to decide for each interaction whether a node transmits its token or not. The number of uploaders at the end of the execution of an algorithm is then compared with the optimal offline algorithm to compute the competitive ratio. Cornejo et al. [START_REF] Cornejo | Aggregation in dynamic networks[END_REF] prove that there are dynamic graphs where the optimal offline algorithm can aggregate to a single node, but with high probability any randomized algorithm will aggregate to Ω(n) nodes, where n is the network size i.e., the competitive ratio of any randomized algorithm is Ω(n) with high probability against an oblivious adversary.
This impossibility result leads the authors to consider the problem in p-cluster graphs (where p ∈ (0, 1]). A dynamic graph is a p-cluster if, by the end of the execution, every node has interacted with at least a p-fraction of all nodes. Cornejo et al. described a randomized algorithm ClusterAggregate_p that, with high probability, aggregates all tokens to O(log n) nodes when executed on a p-cluster.
This problem differs from the MDAT problem, mainly because there is no sink node. This implies that transmitting is always a good choice. In the MDAT problem, reducing the number of nodes that own data is not always useful, since a node without data does not participate in the process anymore.
Contribution of Part II
The Complexity of Data Aggregation in WSNs
The results on the complexity of data aggregation in WSNs are presented in Chapter 5. On this topic, the contribution is fourfold. First, in order to compare the complexity of data aggregation in static and dynamic WSNs, we give a tight bound for the complexity of the MDAT problem in static WSNs. In more detail, we show that, in a static WSN, the problem remains NP-complete when the graph is a grid graph of degree at most three (a particular case of WSN topology). As it is trivial to solve the problem in static graphs of degree at most two, our result implies that the problem is intrinsically difficult for any practical setting. This result closes the complexity gap in the static case.
Second, we introduce an extension of the MDAT problem to dynamic WSNs, and we prove that the MDAT problem is NP-complete in a dynamic WSN of degree at most two (while it is trivial to solve the problem if the graph is of degree at most one). Compared to the static case, this result does not use the geometric aspect of the graph, as it remains true for arbitrary graphs and not only for unit disk graphs. We also show that allowing simultaneous transmissions to the same node is not intrinsically helpful, as it only delays the complexity wall: we show that the problem remains NP-complete if a node can correctly receive up to K > 1 simultaneous packets from different neighbors, when the maximum node degree of the graph is K + 2 in the static case (and K + 1 in the dynamic case).
Third, we give the first lower and upper bounds for the dynamic MDAT problem. More precisely, the minimum time to aggregate all the data in a dynamic network is at least the duration of a foremost convergecast tree (this result is valid in any graph, and for any degree ∆ there exists a dynamic graph such that the bound is attained) and is at most the duration of n − 1 independent foremost convergecast trees (this latter bound is valid for any graph, but is actually attained for dynamic graphs of degree n − 1). If we restrict the class of dynamic graphs to those of degree smaller than n − 1, we prove that the upper bound is greater than or equal to the duration of l independent foremost convergecast trees (with l = (∆ − 1) log_∆(n(∆ − 1) + 1) − ∆ + 2), which prevents previous approximation algorithms for the static case from being extended to the dynamic case.
Fourth, we observe that, even in periodic graphs, optimal solutions cannot be computed by an algorithm that is unaware of the future of the graph, or by a distributed algorithm (even if each node knows its own future). This motivates the fact that our simple approximation algorithm presented in Section 5.4 is centralized with full knowledge (yet, it does not assume that the graph is a dynamic WSN, in the sense that it can run on arbitrary graphs). The approximation factor is T(n − 1) if there exists a bound T such that there is a journey between every two nodes in every time interval [t, t + T], i.e., in the class of T-time-bounded recurrently connected graphs.
Distributed Online Data Aggregation
The results on the distributed online data aggregation problem are presented in Chapter 6. It turns out that the problem difficulty strongly depends on the power of the adversary (that chooses which interactions occur in a given execution).
For the oblivious and the adaptive online adversaries, we give several impossibility results when nodes have no knowledge about the future evolution of the dynamic graph, nor about the topology. In more detail, we prove (i) that there exists an adaptive online adversary that generates, for every DODA algorithm, a sequence of interactions such that the cost of the algorithm is infinite, and (ii) that there exists an oblivious adversary that generates, for every oblivious (randomized) DODA algorithm, a sequence of interactions such that the cost of the algorithm is infinite.
Also, when nodes are aware of the underlying graph, the data aggregation is impossible in general. To examine the possibility cases, we use our cost function to compare the performance of a DODA algorithm to the optimal offline algorithm on the same sequence of interactions. Our results show that if all interactions in the sequence occur infinitely often (i.e., in the RE class), there exists a distributed online data aggregation algorithm whose cost is finite. Moreover, if the underlying graph is a tree, we present an optimal algorithm.
For the randomized adversary, we first present tight bounds when nodes have full knowledge about the future interactions in the whole graph. In this case, the best possible algorithm terminates in Θ(n log(n)) interactions with high probability. Then, we consider nodes with restricted knowledge, and we present two optimal distributed online data aggregation algorithms that differ in the knowledge that is available to the nodes. The first algorithm, called Gathering, assumes nodes have no knowledge whatsoever, and terminates in O(n^2) interactions in expectation, which we prove is optimal without knowledge. The second one, called Waiting Greedy, terminates in O(n^{3/2} log(n)) interactions with high probability, which we show is optimal when each node only knows the time of its next interaction with the sink (the knowledge assumed by Waiting Greedy).
We believe our research paves the way for stimulating future researches, as our proof arguments present techniques and analysis that can be of independent interest for studying dynamic networks.
Chapter 5
The Complexity of Data Aggregation in Wireless Sensor Networks

In this chapter we study the complexity of the Minimum Data Aggregation Time (MDAT) problem in static and dynamic wireless sensor networks. We said in the previous chapter that, in a static WSN, this problem is NP-complete [START_REF] Chen | Minimum data aggregation time problem in wireless sensor networks[END_REF], but the question is: what is the smallest set of graphs, with regard to the maximum node degree, for which the problem is NP-complete? After answering this question in static and in dynamic WSNs, we give the first upper and lower bounds for the problem that are valid in a dynamic WSN. Finally, we give the first approximation algorithm for the MDAT problem in dynamic networks.
NP-Completeness
In this section we show for which settings the MDAT problem is NP-complete, in static and in dynamic WSNs.
Static grid graphs of degree at most three
A grid graph is a unit disk graph where all disks have centers with integer coordinates and radius 1/2 i.e., an induced sub-graph of the grid. However, a subgraph of the grid (not necessarily induced, called partial grid) is not necessarily a grid graph.
Chen et al. prove in [START_REF] Chen | Minimum data aggregation time problem in wireless sensor networks[END_REF] that finding the minimum data aggregation time is NP-hard, even when the network is restricted to a partial grid (with maximum degree ∆ = 4). On the other hand, if the maximum degree of a static graph is ∆ = 2, the graph is either a line or a cycle, and the minimum data aggregation time is easy to compute. Let ε be the eccentricity of the sink node and n the number of nodes. If n is odd and ε = (n − 1)/2 (the graph is either a cycle, or a line with the sink node in the middle), then the MDAT is ε + 1. Otherwise the MDAT is ε.
In this section we close the complexity gap of the MDAT problem in static networks by proving that the MDAT problem is NP-hard, even when restricted to grid graphs with maximum degree ∆ = 3.
We use a construction that is similar to that of Chen et al. [START_REF] Chen | Minimum data aggregation time problem in wireless sensor networks[END_REF] with some improvements about the topology (grid graph instead of partial grid) and about the maximum node degree (3 instead of 4).
Before stating our first theorem, we recall the definition of the restricted planar 3-SAT [Lic82]. Let ϕ be a 3-SAT formula composed of a set C of m clauses c_1, . . . , c_m over a set V of n variables x_1, . . . , x_n. We define the corresponding formula graph G_ϕ = (V ∪ C, E_1 ∪ E_2), where E_1 = {(x_i, c_j) : x_i ∈ c_j or x̄_i ∈ c_j} and E_2 = {(x_i, x_{i+1}) : 1 ≤ i ≤ n − 1} ∪ {(x_n, x_1)}.
ϕ is said to be planar if the formula graph G_ϕ is planar. ϕ is said to be restricted if (i) each variable appears in at most three clauses, (ii) the negated and unnegated forms of each variable each appear at least once, and (iii) clauses drawn on the same side of the cycle E_2 must share the same literal if they share the same variable (i.e., at a variable vertex in G_ϕ, incident edges from one side correspond to the same literal). It is known that restricted planar 3-SAT is NP-complete [Lic82].
Theorem 5.1
The MDAT problem restricted to grid graphs of degree at most three is NP-complete.
Proof : The proof is by reduction from restricted planar 3-SAT. Let ϕ be an instance of the restricted planar 3 -SAT on n variables and m clauses. From the planar formula graph G ϕ , we construct a planar graph G with maximum degree ∆ = 3. The idea behind the construction is that, in order to have a fast data aggregation, the schedule must "choose" between two sides (i.e., the data are aggregated along one of two possible paths) representing the true or false instantiation of a variable. The aggregation is fast if all clauses are connected to the correct side of at least one variable. First we construct the sub-graph X i that represents the variable
x_i (see Fig. 5.1.1). X_i is composed of a cycle of nodes e_i, l_i, r_i, s_i, o_i, s̄_i, r̄_i, l̄_i.
Then, we connect to l_i (resp. l̄_i) a line L_i (resp. L̄_i) of length 5i − 3. Thus l_i cannot send its data before aggregating data from L_i, i.e., before 5i − 3 timeslots. For 1 ≤ i < n we connect o_i to e_{i+1}, and we connect to e_1 a new node e_0.
Each clause c_j is represented by a single node and, for each variable x_i (resp. negation x̄_i) in clause c_j, we connect c_j to r_i or s_i (resp. r̄_i or s̄_i) by a line Ψ_{i,j}, of length (5i − 2) when connecting to r_i (resp. r̄_i) or (5i − 1) when connecting to s_i (resp. s̄_i). Let G = ⋃_{1≤i≤n, 1≤j≤m} (X_i ∪ L_i ∪ L̄_i ∪ Ψ_{i,j}).
[Fig. 5.1.1: the sub-graphs X_i, L_i and L̄_i.]
[Fig. 5.1.2: example for ϕ = (x1 ∨ x2 ∨ x5) ∧ (x3 ∨ x4 ∨ x5) ∧ (x1 ∨ x2 ∨ x3) ∧ (x3 ∨ x4 ∨ x5).]
In order to use the previous lemma we need to be able to change the distance between nodes. So we define G T, obtained from G by replacing every edge by a line of length T, i.e., by adding T − 1 nodes between two connected nodes, and by adding a pending node e connected to e_0.
In the next three Claims we show that there exists a T such that G T is a grid graph (of degree at most 3) and that the minimum time to aggregate data from G T to o n is 5nT + 1 if and only if ϕ is satisfiable. Then, the theorem follows from the NP-completeness of restricted planar 3-SAT [Lic82].
Claim 1. There exists a T such that G T is a grid graph.
From Lemma 3.1 we deduce that G has an orthogonal embedding such that every edge has the same length l ≥ 1 in a grid of size s. We divide the unit by 4 so that the embedding is in a grid of size 4s and every edge has length 4l (every vertex has its coordinates multiplied by 4). Then, we replace, in this embedding, each node by a disk of radius 1/2, and every edge by a chain of 4l − 1 disks of radius 1/2, centered at integer coordinates along the edge. Finally, we add a disk of radius 1/2, centered at integer coordinates, at distance 1 from e_0 and at distance greater than 1 from all other disks. The corresponding unit disk graph (which is also a grid graph) is exactly
G 4l . Claim 2. For all T ≥ 1, if ϕ is satisfiable, then the minimum time to aggregate data from G T to o n is 5nT + 1.
First of all, we remark that the distance between e and o_n is 5nT + 1, so that 5nT + 1 is a lower bound for the minimum data aggregation time. Now suppose that ϕ is satisfiable and let I be an interpretation of the truth-functional propositional calculus satisfying ϕ. Let i ∈ [1..n] and suppose I(x_i) = true. Suppose that e_i has aggregated at time (5i − 4)T + 1 the data from the previous nodes X_k ∪ (Ψ_{k,j} − {c_j}), k < i, and from the clauses c_j containing, for some k < i, x_k if I(x_k) = true, and x̄_k otherwise. As we said before, l_i (resp. l̄_i) can aggregate all data from L_i (resp. L̄_i) at time (5i − 3)T.
On the negative side of X_i, l̄_i can also aggregate all data of the T − 1 nodes between e_i and l̄_i before time (5i − 3)T. If r̄_i is connected to a clause c_j, it can receive all data from the nodes between r̄_i and c_j (c_j excluded) at time (5i − 2)T − 1.
Then r̄_i can receive all data from l̄_i at time (5i − 2)T. If s̄_i is connected to c_j, it can receive all data from the nodes between s̄_i and c_j (c_j excluded) at time (5i − 1)T − 1.
Then s̄_i can aggregate all data from r̄_i at time (5i − 1)T. Thus o_i can aggregate all data from s̄_i at time 5iT.
On the other side of the cycle, l i has to wait time (5i -3)T + 1 to receive data from e i . Again, if r i is connected to c j , it can receive all data from nodes between r i and c j (c j included) at time (5i -2)T . Then r i can receive all data from l i at time (5i -2)T + 1. If s i is connected to c j , it can receive all data from nodes between s i and c j (c j included) at time (5i -1)T . Then s i can aggregate all data from r i at time (5i -1)T + 1. Finally, o i can aggregate all data from s i at time 5iT + 1.
If i = n, we are done; otherwise o_i can start transmitting to the next block X_{i+1}, and e_{i+1} can aggregate the data from o_i at time (5(i + 1) − 4)T + 1; this data includes the data from the clauses containing x_i. If I(x_i) = false, the schedules on the positive and negative sides of the cycle are exchanged in order to aggregate the data from the clauses containing x̄_i instead of x_i. Recursively, since every clause is connected to a variable x_i with I(x_i) = true or to x̄_i with I(x_i) = false, o_n aggregates all data at time 5nT + 1.
Claim 3. For all T ≥ 1, if the minimum time to aggregate data from G T to o n is 5nT + 1, then ϕ is satisfiable.
Suppose that all data from G T are aggregated to o_n at time 5nT + 1. Since e is at distance 5nT + 1, its data is sent directly through a shortest path, i.e., there is a shortest path P from e to o_n such that, when a node in P receives at time t a data from e (an aggregation that contains e's data), it sends the aggregation to the next node in the path at time t + 1. There are 2^n shortest paths from e to o_n: indeed, at each block X_i, the path can use the positive or the negative side, and implicitly chooses to interpret x_i as true or false. Let i ∈ [1..n], and suppose P uses the positive side of X_i. As we saw before, the data from L_i and from Ψ_{i,j}, for clauses c_j that contain x_i, can be aggregated to l_i, r_i and s_i before the data from e. The first observation is that o_i receives the data from e, L_i and Ψ_{i,j} for clauses c_j containing x_i at time 5iT + 1, and has to send the aggregation just after (it cannot receive data after time 5iT + 1).
On the other side, l̄_i can start sending at time (5i − 3)T (because it must receive the data from L̄_i first), and its data must be aggregated through a path to o_n of length smaller than or equal to 5nT + 1 − (5i − 3)T. Because of its length, this path must pass through o_i, by the negative side of X_i. Thus the data from l̄_i must be aggregated through a path to o_i of length smaller than or equal to 3T + 1 (the maximum length of the path minus the minimum distance from o_i to o_n). Adding this to the first observation, we know that the data from l̄_i must be received by o_i at time 5iT (and cannot be received before, because l̄_i and o_i are at distance 3T).
Thus, the data from l̄_i cannot be delayed (except by o_i), i.e., if a node receives the data from l̄_i at time t, it must send the aggregation to the next node at time t + 1. Thus r̄_i and s̄_i can receive data from other nodes only before times (5i − 2)T and (5i − 1)T respectively, and so this data cannot include the data from a connected clause c_j (only from Ψ_{i,j} − {c_j}). Since all paths from a clause c_j to o_n passing through X_i contain a node r_i, r̄_i, s_i or s̄_i, a connected clause can send its data through X_i only if it contains x_i.
The same thing happens if P uses the negative side of X_i: a connected clause can send its data through X_i only if it contains x̄_i.
For a shortest path P from e to o_n used to aggregate e's data, we say a variable x_i is true if P uses the positive side of X_i, and x̄_i is true otherwise. We have shown that if the data of a clause c_j is received by o_n on time (before time 5nT + 1), then c_j contains a true literal.
Finally, if the data from all clauses is received on time, then every clause contains at least one true literal, and the formula is satisfiable.
Now, we extend the previous results to the MDAT_K problem, which is the MDAT problem with the additional assumption that nodes can simultaneously receive up to K messages from K different neighbors. Note that simultaneously receiving K + 1 or more messages still results in a collision. We show that allowing one more simultaneous transmission results in increasing by one the maximum node degree of the graph for the problem to be NP-complete. This may seem obvious at first sight, but the fact that the problem concerns unit-disk graphs makes the proof slightly technical. Indeed, every time we want to create a collision, to force the algorithm to make a choice, we have to create a node of degree K + 2, and in a unit-disk graph, if K is large enough, such a node must have two neighbors that are connected. Hence we cannot connect an arbitrary number of lines to a node while keeping the whole network a unit-disk graph. The idea of the proof is as follows. For each node where we need a collision (see the proof of Theorem 5.1), we add a single special line of nodes. The resulting graph now has a maximum node degree of 4 and still has an embedding in the grid. Then, after replacing each edge in the embedding by nodes to create a unit-disk graph, we replace each special line by a unit-disk graph that creates the desired collision.
Theorem 5.2
The MDAT_K problem restricted to graphs of degree at most K + 2 is NP-complete.
Proof : Let ϕ be an instance of the restricted planar 3-SAT with n variables and m clauses. From the planar formula graph G_ϕ, we construct a graph G as in the proof of Theorem 5.1, with 5 additional lines C_1, C_2, C_3, C_4, C_5 connected to the nodes r_i, l_i, r̄_i, l̄_i, and o_i. We connect C_1 (resp. C_2), of length 5i − 2, to r_i (resp. r̄_i), C_3 (resp. C_4), of length 5i − 1, to l_i (resp. l̄_i), and C_5, of length 5i + 1, to o_i. As in the proof of Theorem 5.1 we define G T, obtained from G by replacing every edge by a line of length T, i.e., by adding T − 1 nodes between two connected nodes, and by adding a pending node e connected to e_0. G has a maximum node degree of 4, so it has an orthogonal embedding such that every edge has the same length l ≥ 1 in a grid of size s (see Lemma 3.1). We divide the unit distance by 4 so that two edges are at distance at least 4 from each other. Then, we divide the unit distance by 2(K − 1).
Let ε > 0 and K′ = K − 1. For each line C ∈ {C_1, C_2, C_3, C_4, C_5} of length d in the embedding (d is a multiple of 8K′), we split the line into small parts of length 2K′ at the two extremities, and of length 4K′ otherwise (see Figure 5.1). Each small part is either a straight line or two orthogonal straight lines. The goal is to replace each part by a unit-disk graph Ĉ such that, when aggregating all the data from Ĉ, K′ nodes will want to transmit simultaneously at time d their data to the node that connects Ĉ to the remaining of the graph.
Each straight line of length 4K′ is replaced by K′ × 4K′ disks of radius 1/2 spread across a grid of size K′ × 4K′, constrained in a rectangular area of width 4K′ and height ε, and centered at the middle of the initial line. This implies that the intersection graph of those disks (see Figure 5.1) is composed of 4K′ cliques of size K′ that are connected by K′ lines of length 4K′. This graph has the property that the distance between two nodes located at distinct extremities is either 4K′, if the nodes are in the same line, or 4K′ + 1 otherwise. We repeat the same process for the parts at the extremities, using only K′ × (2K′ − 1) disks for the part connected to the remaining of the graph, and K′ × 2K′ disks for the part at the other extremity.
Similarly, each part composed of two lines at a right angle is replaced by K′ × 4K′ disks of radius 1/2, in such a way that the intersection graph contains K′ lines of length 4K′ (connected with some additional edges) and has the same property as the previous intersection graph (see Figure 5.1). This implies that the intersection graph produced by all the disks of all the small parts contains K′ lines of length d such that the distance between two nodes at distinct extremities is d if they are in the same line, or d + 1 otherwise.
All the disks are slightly translated toward the node u ∈ {r_i, r̄_i, l_i, l̄_i, o_i} that connects C to the remaining of the graph, so that the K′ disks at the extremity of the line are at distance at most 1 from u. Also, ε is chosen sufficiently small so that the disks that replace the line C do not intersect with other disks, located in the remaining of the graph.
Finally, as in the previous proof, we replace in the remaining of the embedding each node by a disk of radius 1/2, and every edge by a chain of 8K′l − 1 disks of radius 1/2, centered at integer coordinates along the edge. Finally, we add a disk of radius 1/2, centered at integer coordinates, at distance 1 from e_0. The obtained graph is G 8K′l, where each line C_i is replaced by a subgraph Ĉ_i such that, to aggregate all the data from Ĉ_i without delaying the data aggregation in the whole graph, the node u that connects Ĉ_i to the remaining of the graph has to aggregate its K − 1 neighbors in Ĉ_i simultaneously at time d, which equals the length of C_i. Then, only one other neighbor of u can transmit its data to u at time d. This simulates the constraint that no two neighbors of u could transmit at time d without interference in the previous setting. So, we can apply the same proof as in the case K = 1 on the transformed graph.
One can observe that the problem is easy to solve in static graphs of degree at most K + 1. Indeed, when aggregating data along the shortest path tree, no collision occurs, except maybe at the sink node. Any schedule for the transmissions of the neighbors of the sink that avoids collisions is optimal.
Dynamic graphs of degree at most two
In a dynamic network we prove that, even when the maximum degree is two, the MDAT problem is NP-hard. This result is optimal since the problem is easy to solve in a graph of degree at most one (where no collision occurs).
[Figure: (a) configuration during T_1; (b) configuration during T_2; (c) configuration during T_3.]
Theorem 5.3
The MDAT problem is NP-complete even in a dynamic wireless sensor network of degree at most two.
Proof : The proof is by reduction from 3-SAT. Given any 3-SAT instance ϕ of n variables v_1, . . . , v_n and m clauses c_1, . . . , c_m, we construct the dynamic graph G_ϕ(V, E) as follows:
Nodes are composed of one sink node, literals, clauses, and copies of clauses:
V = {s} ∪ ⋃_{1≤i≤n} {v_i, v̄_i} ∪ ⋃_{1≤i≤m} {c_i, c′_i}. Let t_f = 3n + 2m.
We decompose the time interval [1, t f ] in three periods T 1 , T 2 and T 3 (see figure A.3):
• During T_1 = [1, 2n], we have, for all i ∈ [1, n]: E_{2i−1} = {(v_i, s)} and E_{2i} = {(v̄_i, s)}.
• During T_2 = [2n + 1, 2n + 2m], we have, for all j ∈ [1, m], with {a, b, c} = c_j: E_{2n+2j−1} = {(c_j, c′_j)} and E_{2n+2j} = {(c_j, a), (c_j, b), (c_j, c)}.
• During T_3 = [2n + 2m + 1, 2n + 2m + n], we have, for all i ∈ [1, n]: E_{2n+2m+i} = {(v_i, s), (v̄_i, s)}.
The remainder of the dynamic graph (after time t_f) can, for instance, be composed only of empty snapshots, or be such that the graph is periodic. This does not change the proof, but can be used to analyze the best approximation ratio for this problem (see Remark 2).
During T 3 , either a variable or its negation can send its data to the sink node s, but not both, so that the set of literals that send data can be seen as an interpretation of a truth-functional propositional calculus.
During T_1, the variables that do not send their data during T_3 can send their data directly to the sink node.
During T_2, there is a link between a clause c_i and its copy c′_i, so that either c_i or c′_i can send both data. Since each clause is connected only once to the literals it contains and can send its data only once, the data is successfully sent to the sink node if and only if at least one literal it contains sends its data during T_3, i.e., is true. Thus, if the interpretation chosen in T_3 satisfies the formula ϕ, then each clause contains a literal that transmits during T_3, and the minimum data aggregation time is exactly t_f. Otherwise, the data of some clause cannot reach the sink by time t_f, and the minimum data aggregation time is greater than t_f. Therefore, the 3-SAT instance ϕ is satisfiable if and only if the minimum data aggregation time is t_f.
Remark 2
Theorem 5.3 raises the question of the best approximation ratio achievable by an approximation algorithm. We observe that using time as a complexity measure is not relevant in the dynamic case. Contrary to the static case, where good approximation ratios have been found, the problem is not approximable in the dynamic case using the duration of the solution as a measure of complexity. Indeed, the dynamic graph constructed in the proof of Theorem 5.3, with empty snapshots when time is greater than t_f, contains only the optimal solution. So, if an approximation algorithm finds an approximate solution, it actually finds the optimal solution, which is not possible in polynomial time (unless P = NP). Even when restricted to smaller classes of graphs, such as periodic dynamic graphs, the approximation ratio (with respect to duration) can be made arbitrarily large by increasing the period. The approximation ratio can be bounded in periodic graphs, but only when the period is itself bounded (or in time-bounded recurrent connected graphs with fixed bound, as defined in Section 5.3). This remark justifies (i) our use of Foremost Convergecast Trees, defined in Section 5.2, as a complexity measure to establish upper and lower bounds, and (ii) the approximation ratio of our approximation algorithm, presented in Section 5.4, when restricted to time-bounded recurrent connected graphs.
As in the static case, we show that when K simultaneous transmissions are allowed without collisions, the problem is similar. Here, the geometric constraints do not hold, giving a more straightforward proof.
Theorem 5.4
The MDAT K problem restricted to evolving graphs of degree at most (K + 1) is NP-complete.
Proof : In the dynamic case, there is a simpler way to apply the same trick as in the static case. From a 3-SAT instance, we create the same evolving graph as in Theorem 5.3, but with (K − 1) × n additional nodes (r^1_1, r^1_2, . . . , r^n_{K−1}), and where the edges in the period T_3 are defined as follows: for all i ∈ [1, n],
E_{2n+2m+i} = {(v_i, s), (v̄_i, s)} ∪ ⋃_{j=1}^{K−1} {(r^i_j, s)}
So, in order for the data aggregation to terminate at time t_f = 3n + 2m, all the nodes r^i_j have to transmit during T_3. This implies, as in the proof of Theorem 5.3, that either a variable or its negation can transmit during T_3, but not both.
Upper and Lower Bounds
In this section we propose the first upper and lower bounds for the MDAT problem in dynamic networks, given in terms of foremost convergecast tree duration.
Lemma 5.1
Let G be a dynamic graph, we have:
M DAT Opt (G, s) ≥ F CT D(G, s, 0)
Proof : Let G be a dynamic graph, s be a sink node, and S = {S t } t be a data aggregation schedule to s of duration l = M DAT Opt (G, s).
Let x_0 be a node different from the sink. We know that there exists i such that x_0 ∈ S_i and x_0 ∉ S_{i+1}. Since (G, S_i, i) → (G, S_{i+1}, i + 1), there exists a node x_1 ∈ S_{i+1} such that (x_0, x_1) ∈ E_i. We can apply the same argument to x_1 if it is different from the sink.
Recursively, we obtain x_1, x_2, . . . , x_p = s and t_1 < t_2 < . . . < t_p < l such that, for all 1 ≤ i ≤ p, (x_{i−1}, x_i) ∈ E_{t_i}. Thus J = {((x_{i−1}, x_i), t_i) | 1 ≤ i ≤ p} is a journey from x_0 to s ending before l. We can do this with every node in V − {s}, which proves FCTD(G, s, 0) ≤ l.
In a static WSN, the same shortest path tree can be used to avoid collisions. But in a dynamic WSN, a FCT T 1 that exists at a given time may not exist thereafter. If we delay the transmission of a node, to avoid a collision, another FCT T 2 will be used to retry a transmission. In order to be sure that T 2 can be used by all delayed nodes, it has to start after the end of T 1 . In this case we say that (T 1 , T 2 ) is a 2-time-independent FCT.
Definition 5.1
A l-time-independent FCT of G to s starting at time t s is a sequence of l foremost convergecast trees of G to s ((T 1 , c 1 ), . . . , (T l , c l )) such that:
- (T_1, c_1) is an FCT starting at t_s.
- for all 1 < i ≤ l, (T_i, c_i) is an FCT starting at arrival(T_{i−1}, c_{i−1}).
Its duration is the sum of the durations of all FCTs in the sequence, which also equals arrival(T_l, c_l) − t_s. The set of l-time-independent FCTs of G to s starting at t_s is denoted FCT_l(G, s, t_s). The common duration of all l-time-independent FCTs in FCT_l(G, s, t_s) is denoted FCTD_l(G, s, t_s). This definition is used to give lower and upper bounds for the minimum data aggregation time in a dynamic graph G with n nodes as follows:
F CT D n-1 (G, s, 0) ≥ M DAT Opt (G, s) ≥ F CT D(G, s, 0)
Where the right-hand inequality comes from lemma 5.1 and the left-hand inequality comes from the fact that a node can send its data during a FCT, so that n -1 foremost convergecast trees are sufficient to aggregate the data of every node.
The lower bound and the upper bound are tight in the sense that there exists a graph that reaches the lower bound (any graph of degree at most one) and a graph that reaches the upper bound (for instance a graph whose sink node is of degree n -1 at each time).
If we consider only graphs with a given maximum node degree ∆, the lower bound is still tight, but the upper bound is no longer tight. The following lemma gives, for an arbitrary maximum node degree ∆, a graph whose minimum data aggregation time lies below the previous upper bound. We conjecture that it also gives the worst data aggregation duration, i.e., that it yields an upper bound that remains tight for an arbitrary maximum node degree.
Lemma 5.2
Let n ∈ N * and ∆ ≤ n, there exists a dynamic graph G with n nodes of degree at most ∆ such that:
F CT D m (G, s, 0) = M DAT Opt (G, s) < +∞ with m = (∆ -1) log ∆ (n (∆ -1) + 1) -∆ + 2
Proof : Let ∆ ≥ 2. We consider the dynamic graph G(V, E). For the sake of simplicity, we suppose that there exists h ∈ N such that |V| = n = (∆^{h+1} − 1)/(∆ − 1). One can construct G such that there is a perfect ∆-ary tree T (of height h) such that, for all t ≡ 0 mod h, FCT(G, s, t) = {(T, c_t)} and (T, c_t) is of duration h (and thus with collisions appearing between nodes having the same parent). See for instance Figure 5.3 with ∆ = 2 and h = 3. The path associated with a journey from a node u to the sink is unique, so a node has to wait to receive the data from all its children before transmitting.
Since no two children of s can transmit at the same time, s needs ∆ FCTs to receive the data from all its children. Let s′ be the first direct child of s that transmits, and T(s′) the sub-tree of T rooted at s′. T(s′) is a perfect ∆-ary tree of height h − 1. When s′ transmits, its data contains the data of its children. Again, ∆ FCTs are needed to aggregate all the data from all its children. One of these FCTs can be used to aggregate the data from s′ to s. For each layer of the tree, ∆ − 1 consecutive FCTs are needed.
Recursively, we need at least (∆ − 1)h + 1 time-independent FCTs to aggregate all the data from G. One can show that this is also sufficient (see Figure 5.4). Since h = log_∆(n(∆ − 1) + 1) − 1, the lemma follows.
Conjecture 1
Let G be a dynamic graph with n nodes of degree at most ∆. Let m = (∆ -1) log ∆ (n (∆ -1) + 1) -∆ + 2, we have:
F CT D m (G, s, 0) ≥ M DAT Opt (G, s)
We observe that the conjecture is proven for ∆ = n -1 and is trivial if ∆ = 1.
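For intuition on how the conjectured number of foremost convergecast trees grows, the following short Python sketch (ours, with illustrative values) evaluates m = (∆ − 1) log_∆(n(∆ − 1) + 1) − ∆ + 2 on perfect ∆-ary trees, i.e., for n = (∆^{h+1} − 1)/(∆ − 1), where it reduces to (∆ − 1)h + 1 as in the proof of Lemma 5.2.

    import math

    def conjectured_fct_count(n, delta):
        # m = (delta - 1) * log_delta(n*(delta - 1) + 1) - delta + 2,
        # as stated in Lemma 5.2 and Conjecture 1.
        return (delta - 1) * math.log(n * (delta - 1) + 1, delta) - delta + 2

    for delta, h in [(2, 3), (2, 10), (3, 5)]:
        n = (delta ** (h + 1) - 1) // (delta - 1)   # perfect delta-ary tree of height h
        m = conjectured_fct_count(n, delta)
        print(delta, h, n, round(m), (delta - 1) * h + 1)   # last two values coincide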
Impossibility Results
In this section we present several classes of dynamic graphs where only a centralized algorithm that knows the future can compute optimal and "good" approximate solutions.
The two following impossibility results are for the class of periodic graphs, and naturally extend to larger classes such as recurrent connected graphs. The first impossibility result concerns distributed algorithms. A distributed algorithm is a set of local algorithms that are each executed independently by each node. It is also assumed that nodes do not have knowledge about the global topology (or future global topology) of the graph; they may only have knowledge about adjacent edges and nodes (or their future). In general, when two nodes interact, we assume that they are allowed and able to exchange their local knowledge (for instance, about their respective local future if they are aware of it) and use this information for future interactions. For our problem, a distributed algorithm has to decide, whenever an interaction occurs, whether the node sends its data or not.
Proposition 5.1
In P, the MDAT problem does not have a distributed optimal algorithm, even if each node knows its own future.
[Figure 5.5: the dynamic graphs G and G′ at times t ≡ 0, 1, 2 mod 3.]
Proof : We define G = (V, {E_i}) and G′ = (V′, {E′_i}) as follows (see Figure 5.5):
V = V′ = {s, 1, 2, 3}, and for all i ∈ N:
E_i = {(1, s), (2, 3)} and E′_i = {(1, s)} if i ≡ 0 mod 3,
E_i = E′_i = {(1, 2), (3, s)} if i ≡ 1 mod 3,
E_i = E′_i = {(1, s), (2, 3)} if i ≡ 2 mod 3.
At time 0, a distributed algorithm A does the same thing for node 1 and s on G and G′, because they have the same neighbors now and in the future. If A decides that node 1 transmits its data to s at time 0, then A is not optimal on G′ (since it is faster to wait for the data from node 2 at time 1 and then transmit at time 2). Otherwise, A is not optimal on G (since all data can be aggregated at time 1).
If we consider time as a measure of complexity, the following proposition shows that the best competitive ratio a distributed algorithm with knowledge of (local) future can achieve is unbounded.
Proposition 5.2
In P, the competitive ratio of a distributed algorithm is unbounded for the MDAT problem, when considering time as a measure of complexity, even if each node knows its own future.
Proof : We construct two graphs, G_K and G′_K, so that there is an arbitrary delay between the optimal solution and any other solution, resulting in an unbounded competitive ratio. In more detail, for all K > 2, we define G_K = (V, {E_i}) and G′_K = (V′, {E′_i}), periodic with period T = 2K^2, as follows:
V = V′ = {s, 1, 2, 3}, and for all i ∈ N:
E_i = {(1, s), (2, 3)} and E′_i = {(1, s)} if i ≡ 0 mod T,
E_i = E′_i = {(1, 2), (3, s)} if i ≡ 1 mod T,
E_i = E′_i = {(1, s), (2, 3)} if i ≡ 2K mod T,
E_i = E′_i = ∅ otherwise.
Thus, for an algorithm A, either (i) A chooses that node 1 transmits at time 0, and the time to aggregate all the data is greater than 2K^2, compared to 2K for an optimal centralized algorithm (i.e., a ratio of K), or (ii) A chooses that node 1 does not transmit at time 0, and the time to aggregate all the data is greater than 2K, compared to 2 for an optimal centralized algorithm (i.e., a ratio of K). In both cases, the ratio K between the time to aggregate the data with A and with an optimal centralized algorithm can be made arbitrarily large.
Proposition 5.3
In P, without knowledge of the future, the MDAT problem does not allow a centralized optimal algorithm.
Proof : Let k ∈ {1, 2}. We define G_k as follows (see Figure 5.6):
V = {s, 1, 2}, and for all i ≥ 0: E_i = {(1, 2)} if i ≡ 0 mod 2, and E_i = {(k, s)} if i ≡ 1 mod 2.
Let A be an algorithm that does not know the future. At time 0, A is executed the same way on G 1 and G 2 . If A decides that node 1 should transmit its data to node 2 at time 0, A cannot solve the problem on G 1 . If A decides that 2 should transmit to 1 at time 0, then A cannot solve the problem on G 2 . Otherwise, if A chooses to do nothing at time 0, the solution given by A is not optimal.
Again, if we consider time as a measure of complexity, the following proposition shows that the best competitive ratio a centralized algorithm without knowledge of the future (called an online algorithm) can achieve is unbounded.
Proposition 5.4
In P, the competitive ratio of an online algorithm (that is, a centralized algorithm without knowledge of the future) is unbounded for the dynamic MDAT problem, considering time as a measure of complexity.
Proof : Let G ∞ (V, E ∞ ) be defined as follows:
V = {s, 1, 2}, and for all i ≥ 0: E^∞_i = {(1, 2)} if i ≡ 0 mod 2, and E^∞_i = {(1, s)} if i ≡ 1 mod 2.
Let A be an algorithm that does not know the future. When executing A on G ∞ , either (i) A decides never to transmit, and the corresponding competitive ratio of A is infinite, (ii) the first node to transmit is node 1, and the competitive ratio of A is infinite, or (iii) at some time t, A decides that node 2 transmits to node 1. Let G t,T (V, E t,T ), with T > t, be defined as follows:
For all i ≥ 0: E^{t,T}_i = {(1, 2)} if i ≡ 0 mod 2; otherwise, E^{t,T}_i = {(1, s)} if i ≡ j mod 3T with 0 < j < t, and E^{t,T}_i = {(2, s)} otherwise.
Then, when executing A on G t,T , node 2 transmits to node 1 at time t, and node 1 has to wait until time 3T + 1 to transmit its data to s. Since the optimal offline algorithm executed on G t,T terminates in 2 steps if t > 0, and 3 steps otherwise, the competitive ratio of A on G t,T is greater than T .
Approximation Algorithm
In this section, we present the Greedy Data Aggregation Schedule (GDAS) approximation algorithm for the MDAT problem. The maximum duration of a solution output by this algorithm reaches the theoretical upper bound given in Section 5.2. The complexity of our algorithm is two times the number of edges of all snapshots between time 0 and time t f (where t f is the duration of the found solution).
During the first inner while loop, the algorithm finds the arrival time of a foremost convergecast tree of the nodes in remainingN odes, starting at time t s . The arrival time, denoted t f , becomes the starting point to find a collision-free schedule between time t f and time t s in a backward manner for the nodes in remainingN odes. If some nodes in remainingN odes are not able to transmit between time t s and time t f , we start a new iteration and compute the duration of the foremost convergecast tree consecutive to the one found in the previous iteration (its departure time is the arrival time of the previous foremost convergecast tree).
The last for loop converts the sequence S that contains the senders over the time to a dynamic data aggregation schedule. With this method, if a node is in S t 1 ∩ S t 2 , then only the last transmission is taken into account. The algorithm loops over all the edges, twice, for each snapshot of the dynamic graph between time 0 and the duration of the found solution.
Our algorithm uses Procedure canT ransmit(u, Senders, Receivers), that returns true if and only if Node u can transmit its data to a node in Receivers without interfering with other nodes in Senders.
Algorithm GDAS: Greedy Data Aggregation Schedule
Input: MDAT Instance (G, s)
S ← empty sequence
remainingNodes ← V \ {s}
t ← 0
do
    t_s ← t
    for u ∈ V do data_t[u] ← {u}
    while remainingNodes ⊄ data_t[s] do
        data_{t+1} ← data_t
        for (u, v) ∈ E_t do
            data_{t+1}[v] ← data_t[u] ∪ data_t[v]
            data_{t+1}[u] ← data_t[u] ∪ data_t[v]
        t ← t + 1
    t_f ← t
    marked ← {s}
    for t = t_f − 1, . . . ,
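As a concrete rendering of the forward phase of GDAS (the inner while loop above), the following Python sketch (ours; the dynamic graph is given as a function edges(t) returning the edge set of snapshot E_t, which is an assumption of our encoding) computes the arrival time t_f of a foremost convergecast of the nodes in remainingNodes starting at t_s.

    def fct_arrival(nodes, sink, edges, remaining, t_s):
        # Forward phase of GDAS: simulate a multicast from every node starting at t_s
        # and return the first time t_f at which data[sink] contains remainingNodes.
        t = t_s
        data = {u: {u} for u in nodes}
        while not set(remaining) <= data[sink]:
            new_data = {u: set(d) for u, d in data.items()}
            for (u, v) in edges(t):                 # snapshot E_t of the dynamic graph
                new_data[v] |= data[u]
                new_data[u] |= data[v]
            data = new_data
            t += 1
        return t

    # Example on a 3-node dynamic graph alternating the snapshots {(1,2)} and {(2,s)}:
    snapshots = [{(1, 2)}, {(2, 's')}]
    edges = lambda t: snapshots[t % 2]
    print(fct_arrival([1, 2, 's'], 's', edges, remaining={1, 2}, t_s=0))  # -> 2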
Theorem 5.5
duration(GDAS(G, s)) ≤ FCTD_{n−1}(G, s, 0)
Proof : First, we observe that the inner while loop simulates a multicast from every node in the network and stops when the sink node has received the data from the nodes in remainingNodes. The duration t_f − t_s is at most the duration of a foremost convergecast tree starting at t_s (and it terminates in finite time since the graph is recurrent connected). We prove that after each iteration of the main do ... while loop, the cardinality of the set remainingNodes strictly decreases. Let t^i_f be the value of t_f at the end of the i-th iteration. Suppose we have already executed the first i − 1 iterations of the main loop. At the beginning of the i-th iteration, we compute in the inner while loop the minimum time to aggregate (maybe with collisions) the data of the nodes in remainingNodes. Indeed, for a node u, v ∈ data_t[u] implies that there exists a journey from v to u ending before time t. Since data_{t^i_f}[s] contains remainingNodes, for each node u ∈ remainingNodes there exists a journey from u to s with departure greater than t^{i−1}_f = t^i_s and arrival smaller than t^i_f. In the for loop starting Line 16, we create a collision-free schedule that aggregates the data of at least one node in remainingNodes. Indeed, for each time t ∈ [t^i_s .. t^i_f] (in decreasing order), a node is chosen to transmit if the transmission does not create a collision (with the help of the function canTransmit) and if the node is in a journey from a node in remainingNodes to s (since data_{t−1}[v] ∩ remainingNodes ≠ ∅). Since a node can be marked at most once, there exists a time t when the only nodes (not marked) on a journey from a node in remainingNodes to s are themselves in remainingNodes. One of these nodes is chosen to transmit at time t and is removed from the set remainingNodes.
This implies that there can be at most n − 1 = #(V \ {s}) iterations of the main loop. After each iteration, the duration t_f − t_s is at most the duration of a foremost convergecast tree starting at t_s. Since, for each iteration, the computation restarts from the end of the previous one, after i iterations of the main loop, t^i_f is at most the duration of i consecutive foremost convergecast trees. Since there can be at most n − 1 iterations of the main loop, the duration of the dynamic data aggregation schedule is at most FCTD_{n−1}(G, s, 0).
If the graph is T -time-bounded recurrent connected, then a F CT duration is smaller than T . Thus, we can derive an absolute bound and the following approximation factor for the dynamic MDAT problem.
Corollary 5.1
Algorithm GDAS, in BRC with bound T , satisfies:
duration(GDAS(G; s)) ≤ T (n -1)
Thus, it is an approximation of factor T (n-1), for the dynamic MDAT problem.
Conclusion
We studied the complexity of the minimum data aggregation time problem in wireless sensor networks. We proved that the problem is NP-complete in a static WSN of degree at most three, and NP-complete in a dynamic WSN of degree at most two. The degree constraint is crucial, as a smaller one induces a trivial solution in both cases. Then we gave tight lower and upper bounds for the minimum data aggregation time problem in dynamic networks and the first approximation algorithm for the problem. Also, in a dynamic graph with n nodes of degree at most ∆, we conjecture a more accurate upper bound (that is tight even in the class of graphs with a given maximum node degree ∆) of l time-independent foremost convergecast trees (with l = (∆ − 1) log_∆(n(∆ − 1) + 1) − ∆ + 2).
Finally, we observed that only a centralized algorithm with full knowledge can compute the optimal solution of the problem. Thus, we gave a simple approximation algorithm whose solution duration matches the theoretical upper bound.
One can observe that allowing nodes to transmit their data several times, instead of only once, does not change the results concerning centralized algorithms that are aware of the future. Indeed, a schedule that contains multiple transmissions per node can be converted into a schedule where each node transmits only once, by keeping only the last transmission of each node.
The results given in this chapter imply that the search for an optimal solution to aggregate data in a wireless sensor network is limited by the available computation power and knowledge. In order to perform data aggregation with a good energy-consumption/delay trade-off, one must allow a greater delay or relax the optimal energy consumption assumption. Indeed, with the latter assumption, the problem may not even be solvable when the nodes have no knowledge of the future evolution of the network, as we will see in the next chapter.
Chapter 6
In this chapter we study the online and distributed solutions of the data aggregation problem in dynamic networks. As we said in chapter 4, we consider here that no collision occurs and that an adversary chooses the sequence I = (I_t)_{t∈N} that models the pairwise interactions (or simply interactions) between nodes over time.
Oblivious and Online Adaptive Adversaries
In this section we give several impossibility results when nodes have no knowledge, and then show several results depending on the amount of knowledge. We choose to limit our study to some specific knowledge, but one can be interested in studying the possible solutions for different kinds of knowledge.
Impossibility Results When Nodes Have no Knowledge
Theorem 6.1
For every algorithm A ∈ D ODA , there exists an adaptive online adversary generating a sequence of interactions I such that cost A (I) = ∞ (the cost function is defined in chapter 4).
Proof : Let I be the sequence of interactions between 3 nodes a, b, and the sink s, defined as follows. I_0 = {a, b}. If a transmits, then for every i ∈ N, I_{2i+1} = {a, s} and I_{2i+2} = {a, b}, so that b will never be able to transmit. Symmetrically, if b transmits, the same thing happens. If no node transmits, then I_1 = {b, s}. If b transmits, then I_{2i+2} = {a, b} and I_{2i+3} = {b, s}, so that a will never be able to transmit.
Otherwise, I_2 = {a, b} and we continue as at the beginning. A never terminates, and a convergecast is always possible, so that cost_A(I) = ∞.
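To make this adversary concrete, the following Python sketch (ours; the policy interface is an illustrative assumption) plays a strategy in the spirit of the proof against any decision procedure: policy(u, v) returns the name of the interacting node that transmits, or None. As soon as some node transmits, the adversary never again connects the node that now holds the data to the sink 's'.

    def adaptive_adversary(policy, steps=20):
        history = []
        holder = None            # node holding data that will be starved from the sink
        t = 0
        while t < steps and holder is None:
            for u, v in (('a', 'b'), ('b', 's')):
                choice = policy(u, v)
                history.append((u, v, choice))
                t += 1
                if u == 'a' and choice in ('a', 'b'):
                    holder = 'b' if choice == 'a' else 'a'   # receiver now holds both data
                    break
                if u == 'b' and choice == 'b':
                    holder = 'a'                             # a is left alone with its data
                    break
                if t >= steps:
                    break
        while t < steps:                                     # starve `holder` from the sink
            other = 'a' if holder == 'b' else 'b'
            history.append((holder, other, policy(holder, other)))
            t += 1
        return history

    # A policy that always transmits gets starved immediately:
    print(adaptive_adversary(lambda u, v: u)[:3])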
In the case of a deterministic algorithm, the previous theorem holds even with an oblivious adversary. However, for a randomized algorithm, the problem is more complex. The following theorem states the impossibility result for oblivious randomized algorithms, leaving the case of non-oblivious randomized algorithms against an oblivious adversary as an open question. Theorem 6.2
For every randomized algorithm A ∈ D^∅_ODA, there exists an oblivious adversary generating a sequence of interactions I such that cost_A(I) = ∞ with high probability (an event A occurs with high probability when, as n tends to infinity, P(A) > 1 − o(1/log(n))).
Proof : Let V = {s, u_0, . . . , u_{n−2}}. In the sequel, indexes are taken modulo n − 1, i.e., ∀i, j ≥ 0, u_i = u_j if i ≡ j mod n − 1. Let I^∞ be defined by I^∞_i = {u_i, s} for all i ∈ N. Let I^l be the finite sequence, prefix of length l > 0, of I^∞. For every l > 0, the adversary can compute the probability P_l that no node transmits its data when executing A on I^l. (P_l)_{l>0} is a non-increasing sequence, so it converges to a limit P ≥ 0. For a given l, if P_l ≥ 1/n, there are at least two nodes whose probability not to transmit when executing A on I^l is at least n^{−1/(n−2)} = 1 − O(1/√n).
To prove this, we can see the probability P_l as the product of n − 1 probabilities p_0, p_1, . . . , p_{n−2}, where p_i is the probability that node u_i does not transmit during I^l. Those events are independent since the algorithm is oblivious. Let p_d ≥ p_{d′} be the two greatest probabilities in {p_i}_{0≤i≤n−2}; we have:
∏_{i=0}^{n−2} p_i ≥ 1/n ⇒ ∑_{i=0}^{n−2} log(p_i) ≥ log(1/n) ⇒ (n − 2) log(p_{d′}) ≥ log(1/n) ⇒ p_{d′} ≥ n^{−1/(n−2)}
This implies that, if P ≥ 1/n, then A does not terminate on the sequence I ∞ with high probability.
Otherwise, let l_0 be the smallest index such that P_{l_0} < 1/n, so that with high probability at least one node transmits when executing A on I^{l_0}. Also, P_{l_0−1} ≥ 1/n, so the previous argument implies that there are at least two nodes u_d and u_{d′} whose probability to still have their data (after executing A on I^{l_0−1}) is at least n^{−1/(n−2)}. If l_0 = 0 we can choose {u_d, u_{d′}} = {u_1, u_2}. We have u_d ≠ u_{l_0} or u_{d′} ≠ u_{l_0}. Without loss of generality, we can suppose u_d ≠ u_{l_0}, so that the probability that u_d transmits is the same in I^{l_0−1} and in I^{l_0}. Now, u_d is a node whose probability not to transmit when executing A on I^{l_0} is at least n^{−1/(n−2)} = 1 − O(1/√n).
Let I′ be the sequence of interactions defined as follows:
∀i ∈ [0, n − 2] \ {d − 1}, I′_i = {u_i, u_{i+1}}, and I′_{d−1} = {u_{d−1}, s}.
I′ is constructed such that u_d (the node that still has its data with high probability) must send its data along a path that contains all the other nodes in order to reach the sink. But this path contains a node that no longer has data.
Let I be the sequence of interactions starting with I^{l_0} and followed by I′ infinitely often. We have shown that, with high probability, after l_0 interactions at least one node has transmitted its data and the node u_d still has its data. The node that no longer has data prevents the data owned by u_d from reaching s, so A does not terminate; and since a convergecast is always possible, cost_A(I) = ∞.
When Nodes Know The Underlying Graph
Let G be the underlying graph i.e., G = (V, E) with
E = {(u, v) | ∃t ∈ N, I t = {u, v}} .
The following results assume that the underlying graph is given initially to every node.
Theorem 6.3
If n ≥ 4, then there exists an online adaptive adversary that generates, for every algorithm A ∈ D ODA (G), a sequence of interactions I such that cost A (I) = ∞.
Proof : V = {s, u 1 , u 2 , u 3 }. We create a sequence of interactions with the underlying graph G = (V, {(s, u 1 ), (u 1 , u 2 ), (u 2 , u 3 ), (u 3 , s)}). We start with the following interactions:
({u 1 , s}, {u 3 , s}, {u 2 , u 1 }, {u 2 , u 3 }) . (6.1)
If u 2 transmits to u 1 in I 2 , then we repeat infinitely often the three following interactions:
({u 1 , u 2 }, {u 2 , u 3 }, {u 3 , s}, ...) .
Else, if u 2 transmits to u 3 in I 3 , then we repeat infinitely often the three following interactions:
({u 3 , u 2 }, {u 2 , u 1 }, {u 1 , s}, ...) .
Otherwise, we repeat the four interactions (6.1), and apply the previous reasoning. Then, A never terminates, and a convergecast is always possible, so that cost A (I) = ∞.
Theorem 6.4
If the interactions occurring at least once occur infinitely often, then there exists A ∈ D_ODA(G) such that cost_A(I) < ∞ for every sequence of interactions I. However, cost_A(I) is unbounded.
Proof : Nodes can compute a spanning tree T rooted at s (they compute the same tree, using node identifiers). Then, each node waits to receive the data from its children and then transmits to its parent as soon as possible. All transmissions are done in finite time because each edge of the spanning tree appears infinitely often. However, when G is not a tree, there exists another spanning tree T′. Let e be an edge of T that is not in T′. By repeating interactions along the edges of T′, an arbitrary number of convergecasts can be performed while a node is waiting to send its data to its parent through e in the execution of A.
Theorem 6.5
If G is a tree, there exists A ∈ D ODA (G) that is optimal.
Proof : Each node waits to receive the data from its children, then transmits to its parent as soon as possible.
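A minimal sketch of this tree-based strategy in Python (ours; the event-driven interface and names are illustrative): each node forwards exactly once, at its first interaction with its parent that occurs after all of its children have transmitted.

    class TreeAggregation:
        # Optimal DODA strategy when the underlying graph is a tree rooted at the sink
        # (Theorem 6.5): wait for all children, then forward to the parent.
        def __init__(self, parent, children):
            self.parent = parent                    # parent[u] = parent of u (None for the sink)
            self.pending = {u: set(ch) for u, ch in children.items()}  # children not yet heard
            self.has_data = {u: True for u in parent}

        def on_interaction(self, u, v):
            # Called for each interaction {u, v}; returns the transmitting node or None.
            for x, y in ((u, v), (v, u)):
                if (self.parent.get(x) == y and self.has_data[x]
                        and not self.pending[x]):
                    self.has_data[x] = False
                    self.pending[y].discard(x)
                    return x
            return None

    # Example: a path 3 - 2 - 1 - s; node 2 waits for 3 before sending to 1, and so on.
    parent = {'s': None, 1: 's', 2: 1, 3: 2}
    children = {'s': [1], 1: [2], 2: [3], 3: []}
    algo = TreeAggregation(parent, children)
    for a, b in [(1, 's'), (2, 3), (3, 2), (1, 2), (2, 1), (1, 's')]:
        print((a, b), '->', algo.on_interaction(a, b))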
If Nodes Know Their Own Future
For a node u ∈ V, u.future denotes the future of u, i.e., the sequence of interactions involving u, with their times of occurrence. In this case, according to the model, two interacting nodes exchange their futures, and non-oblivious nodes can store them. This may seem to contradict the motivation of the problem, which aims to reduce the number of transmissions. However, it is possible that the data must be sent only once for reasons not related to energy (such as data that cannot be duplicated, tokens, etc.). That is why we consider this case, for the sake of completeness, even if oblivious algorithms should be favored.
Theorem 6.6
There exists A ∈ D ODA (f uture) such that cost A (I) ≤ n for every sequence of interactions I.
Proof : One can show that the duration of n − 1 successive convergecasts is sufficient to perform a broadcast from any source. So every node broadcasts its future to the other nodes. After that, all the nodes are aware of the future of every node and can compute the optimal data aggregation schedule, so only one more convergecast is needed to aggregate the data of the whole network. In total, n successive convergecasts are sufficient.
Randomized Adversary
The randomized adversary constructs the sequence of interactions by picking a couple of nodes among all possible couples, uniformly at random. Thus, the underlying graph is a complete graph of n nodes (including the sink) and every interaction occurs with the same probability p = 2/(n(n−1)). In this section, the complexity is computed in expectation (because the adversary is randomized) and no longer in the worst case as previously. In this case, considering the number of interactions is sufficient to represent the complexity of an algorithm.
We see in Theorem 6.8 that an offline algorithm terminates in Θ(n log(n)) interactions w.h.p. This bound gives a way to convert a complexity in terms of number of interactions into a cost. Indeed, if an algorithm A terminates in O(n^2) interactions, then its performance is O(n/log(n)) times worse than the offline algorithm, and cost_A(I) = O(n/log(n)) for a randomly generated sequence of interactions I. For the sake of simplicity, in the remainder of the section, we give the complexity in terms of number of interactions.
Since an interaction does not depend on previous interactions, the algorithms we propose here are oblivious i.e., they do not modify the memory of the nodes. In more details, the output of our algorithms depends only on the current interaction and on the information available in the node.
First, we introduce three oblivious DODA algorithms. For the sake of simplicity, we assume that the output is ignored if the interacting nodes do not both have data. Also, to break symmetry, we suppose the nodes that interact are given as input ordered by their identifiers. The last algorithm (Waiting Greedy) uses the meetT ime information defined in chapter 4.
- Waiting (W ∈ D^∅_ODA): a node transmits only when it is connected to the sink s:
W(u_1, u_2, t) = u_i if u_i.isSink, and ⊥ otherwise.
- Gathering (GA ∈ D^∅_ODA): a node transmits its data when it is connected to the sink s or to a node having data:
GA(u_1, u_2, t) = u_i if u_i.isSink, and u_1 otherwise.
- Waiting Greedy with parameter τ ∈ N (WG_τ ∈ D^∅_ODA(meetTime)): the node with the greatest meet time transmits, if its meet time is greater than τ. Let m_1 = u_1.meetTime(t) and m_2 = u_2.meetTime(t); then
WG_τ(u_1, u_2, t) = u_1 if m_1 < m_2 ∧ τ < m_2, u_2 if m_1 > m_2 ∧ τ < m_1, and ⊥ otherwise.
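For concreteness, a small Python sketch of these three decision rules (ours; is_sink and meet_time are assumed accessors, the return value is the node that keeps the aggregate and None stands for ⊥, and an explicit sink case is added to Waiting Greedy, consistent with how the rule is used in the proofs below, where meeting the sink always results in a transmission):

    def waiting(u1, u2, t, is_sink, meet_time=None):
        # W: transmit only on contact with the sink; the sink keeps the data.
        if is_sink(u1): return u1
        if is_sink(u2): return u2
        return None

    def gathering(u1, u2, t, is_sink, meet_time=None):
        # GA: as W, and otherwise the two data-carrying nodes merge (u1 keeps the data).
        if is_sink(u1): return u1
        if is_sink(u2): return u2
        return u1

    def waiting_greedy(tau):
        # WG_tau: the node with the larger meetTime transmits, provided it exceeds tau.
        def rule(u1, u2, t, is_sink, meet_time):
            if is_sink(u1): return u1
            if is_sink(u2): return u2
            m1, m2 = meet_time(u1, t), meet_time(u2, t)
            if m1 < m2 and tau < m2: return u1   # u2 transmits, u1 keeps the aggregate
            if m1 > m2 and tau < m1: return u2
            return None
        return rule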
Lower Bounds
We show a lower bound Ω(n 2 ) on the number of interactions required for DODA against the randomized adversary. The lower bound holds for all algorithms (including randomized ones) that do not have knowledge about future of the evolving network. The lower bound matches the upper bound of the Gathering algorithm given in the next subsection. This implies that this bound is tight. Theorem 6.7
The expected number of interactions required for DODA is Ω(n 2 ).
Proof : We show that the last data transmission requires Ω(n 2 ) interactions in expectation.
We consider any (randomized) algorithm A and its execution for DODA. Before the last transmission (from some node, say v, to the sink s), only v has data except for s.
The probability that v and s interact in the next interaction is 2/(n(n−1)). Thus, the expected number EI of interactions required for v to transmit to s is EI = n(n−1)/2, so the whole aggregation requires Ω(n^2) interactions in expectation.
We also give a tight bound for algorithms that know the full sequence of interactions.
Theorem 6.8
The best algorithm in D ∅ ODA (full knowledge) terminates in Θ(n log n) interactions, in expectation and with high probability.
Proof : First, we show that the expected number of interactions of a broadcast algorithm is Θ(n log n). The first data transmission occurs when the source node (say v_0) interacts with another node. The probability of occurrence of the first data transmission is 2(n−1)/(n(n−1)). After the (i − 1)-th data transmission, i nodes (say V_{i−1} = {v_0, v_1, . . . , v_{i−1}}) have the data, and the i-th data transmission occurs when a node in V_{i−1} interacts with a node not in V_{i−1}. This happens with probability 2i(n−i)/(n(n−1)).
Thus, if X is the number of interactions required to perform a broadcast, then we have:
E(X) = ∑_{i=1}^{n−1} n(n−1)/(2i(n−i)) = (n(n−1)/2) ∑_{i=1}^{n−1} 1/(i(n−i)) = (n(n−1)/(2n)) ∑_{i=1}^{n−1} (1/i + 1/(n−i)) = (n−1) ∑_{i=1}^{n−1} 1/i ∈ Θ(n log n).
And the variance is
Var(X) = ∑_{i=1}^{n−1} (1 − 2i(n−i)/(n(n−1))) / (2i(n−i)/(n(n−1)))^2 = n(n−1) ∑_{i=1}^{n−1} (n(n−1) − 2i(n−i))/(2i(n−i))^2 = O(n^4 ∑_{i=1}^{⌈n/2⌉−1} 1/(i(n−i))^2)
The last sum is obtained from the previous one by observing that it is symmetric with respect to the index i = n/2, and that the removed elements (i = ⌊n/2⌋ and possibly i = ⌈n/2⌉) are negligible. We define f : x ↦ 1/(x^2(n−x)^2).
Since f is decreasing between 1 and n/2, we have
∑_{i=1}^{⌈n/2⌉−1} f(i) ≤ f(1) + ∫_1^{n/2} f(x) dx = 1/(n−1)^2 + ((n−2)n/(n−1) + 2 log(n−1))/n^3 = O(1/n^2)
So the variance is in O(n^2). Using Chebyshev's inequality, we have
P(|X − E(X)| > n log(n)) = O(1/log^2(n))
Therefore, a sequence of Θ(n log(n)) interactions is sufficient to perform a broadcast with high probability. By reversing the order of the interactions in the sequence of interactions, this implies that a sequence of Θ(n log(n)) interactions is also sufficient to perform a convergecast with the same probability. Aggregating data along the convergecast tree gives a valid data aggregation schedule.
Corollary 6.1
The best algorithm in D ODA (f uture) terminates in Θ(n log n) interactions, in expectation and with high probability.
Proof : If each node starts with its own future, O(n log(n)) interactions are sufficient to retrieve with high probability the future of the whole network. Then O(n log(n)) interactions are sufficient to aggregate all the data with the full knowledge.
Algorithm Performance Without Knowledge
Theorem 6.9
The expected number of interactions the Waiting algorithm requires to terminate is O(n^2 log n).
The expected number of interactions the Gathering algorithm requires to terminate is O(n^2).
Proof : In the Waiting algorithm, data is sent to the sink when a node with data is connected to the sink. We denote by X_W the random variable equal to the number of interactions for the algorithm Waiting to terminate. The probability of occurrence of the first data transmission is 2(n−1)/(n(n−1)). The probability of occurrence of the i-th data transmission after the (i − 1)-th data transmission is 2(n−i)/(n(n−1)). Thus, the expected number of interactions required for DODA is
E(X_W) = ∑_{i=1}^{n−1} n(n−1)/(2(n−i)) = (n(n−1)/2) ∑_{i=1}^{n−1} 1/i ∈ O(n^2 log n)
Since those events are independent, we also have that the variance of the number of interactions required for DODA is
Var(X_W) = ∑_{i=1}^{n−1} ((n(n−1) − 2i)/(n(n−1))) × ((n(n−1))^2/(4i^2)) = ∑_{i=1}^{n−1} (n^2(n−1)^2 − 2in(n−1))/(4i^2) ∼ ∑_{i=1}^{n−1} n^4/(4i^2) ∼ n^4 π^2/24
Using Chebyshev's inequality, we have
P(|X_W − E(X_W)| > n^2 log(n)) = O(n^4 π^2/(24 n^4 log^2(n))) = O(1/log^2(n))
Therefore, algorithm Waiting terminates after O(n 2 log(n)) interactions with probability greater than 1 -1/log 2 (n).
In the Gathering algorithm, a data is sent when a node with data is connected to the sink or to another node with data. We denote by X_G the random variable equal to the number of interactions for the algorithm Gathering to terminate. Notice that the total number of data transmissions required to terminate is exactly n − 1. The probability of occurrence of the first data transmission is n(n−1)/(n(n−1)) = 1. The probability of occurrence of the i-th data transmission after the (i − 1)-th data transmission is (n − i + 1)(n − i)/(n(n−1)). Thus, the expected number of interactions required to terminate is
E(X_G) = ∑_{i=1}^{n−1} n(n−1)/((n − i + 1)(n − i)) = n(n−1) ∑_{i=1}^{n−1} 1/(i(i+1)) ∈ O(n^2)
Corollary 6.2 Algorithm Gathering is optimal in D ODA .
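To illustrate the gap between the two strategies under the randomized adversary, the following Python sketch (ours, purely illustrative) simulates both until termination and reports the average number of interactions; the O(n^2 log n) versus O(n^2) behaviour already shows up for small n.

    import random

    def simulate(n, rule, trials=200):
        # Average number of uniformly random pairwise interactions until all data
        # reaches the sink (node 0), when transmissions follow `rule`.
        total = 0
        for _ in range(trials):
            has_data = [False] + [True] * (n - 1)   # node 0 is the sink
            remaining, steps = n - 1, 0
            while remaining > 0:
                u, v = random.sample(range(n), 2)
                steps += 1
                sender = rule(u, v, has_data)
                if sender is not None:
                    has_data[sender] = False
                    remaining -= 1
            total += steps
        return total / trials

    def waiting_rule(u, v, has_data):
        # transmit only on contact with the sink
        if u == 0 and has_data[v]: return v
        if v == 0 and has_data[u]: return u
        return None

    def gathering_rule(u, v, has_data):
        # additionally, two data-carrying nodes merge their data
        s = waiting_rule(u, v, has_data)
        if s is not None: return s
        if has_data[u] and has_data[v]: return u
        return None

    for n in (20, 40, 80):
        print(n, round(simulate(n, waiting_rule)), round(simulate(n, gathering_rule)))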
Algorithm Performance With meetTime
In this subsection we study the performance of our algorithm Waiting Greedy, find the optimal value of the parameter τ, and prove that it is the best possible algorithm with only the meetTime information (even if nodes have unbounded memory). We begin with a lemma that determines how many interactions are needed for a given number of nodes to interact with the sink.
Lemma 6.1
If f is a function such that f(n) = ω(log(n)) and f(n) = o(n), then in nf(n) interactions, Θ(f(n)) nodes interact with the sink with high probability.
Proof : The probability of the i-th interaction between the sink and a node that has data, after i − 1 such interactions, is 2(n−i)/(n(n−1)). Let X be the number of interactions needed for the sink to meet f(n) different nodes. We have:
E(X) = ∑_{i=1}^{f(n)} n(n−1)/(2(n−i)) = (n(n−1)/2)(H(n−1) − H(n−f(n))) ∼ (n^2/2)(−log(1 − f(n)/n) + o(1)) ∼ (n^2/2) · f(n)/n ∼ f(n)·n/2
and the variance is
Var(X) = ∑_{i=1}^{f(n)} (1 − 2(n−i)/(n(n−1))) / (2(n−i)/(n(n−1)))^2 ∼ ∑_{i=1}^{f(n)} n^4/(4n^2) ∼ n^2 f(n)/4
Using Chebyshev's inequality, we have
P(|X − E(X)| > nf(n)) = O(n^2 f(n)/(4n^2 f(n)^2)) = O(1/f(n)),
so that X = Θ(nf(n)) w.h.p. if 1/f(n) = o(1/log(n)), or equivalently, if f(n) = ω(log(n)).
Now we can state our theorem about the performance of Waiting Greedy depending on the parameter τ . Theorem 6.10
Let f be a function such that f(n) = o(n) and f(n) = ω(log(n)). The algorithm Waiting Greedy with τ = Θ(max(nf(n), n^2 log(n)/f(n))) terminates in τ interactions with high probability.
Proof : To have an upper bound on the number of interactions needed by Waiting
Greedy to terminate, we decompose the execution in two phases, one between time 0 and a time t 1 and the other between time t 1 and a time t 2 = τ . In the last phase, a set of nodes L ⊂ V interacts at least once directly with the sink. Nodes in L do not transmit to anyone in the first phase by definition of the algorithm (they have a meetTime smaller than τ ). Nodes in L help the other nodes (in L c = V \L) to transmit their data in the first phase. Maybe nodes in L c can transmit to L in the second phase, but we do not take this into account, that is why it is an upper bound.
If a node u in L c interacts with a node in L in the first phase, either it transmits its data, otherwise (by definition of the algorithm) it has a meetTime smaller than τ (and smaller than t 1 because it is not in L). In every case, a node in L c that meets a node in L in the first phase, transmits its data. To prove the theorem i.e., in order for the algorithm to terminate before τ with high probability, we prove two claims: (a) the number of nodes in L is f (n) with high probability if t 2 -t 1 = nf (n) and (b) all nodes in L c interact with a node in L with high probability if t 1 = Θ(n 2 log(n)/f (n)). The first claim is implied by lemma 6.1. Now we prove the second claim.
Let X be the number of interactions required for the nodes in L^c to meet a node in L. The probability of the i-th interaction between a node in L^c (with data) and a node in L, after i − 1 such interactions already occurred, is 2f(n)(n − f(n) − i)/(n(n−1)). It follows that the expected number of interactions to aggregate all the data of L^c is
E(X) = ∑_{i=1}^{n−f(n)−1} n(n−1)/(2f(n)(n − f(n) − i)) = (n(n−1)/(2f(n))) ∑_{i=1}^{n−f(n)−1} 1/(n − f(n) − i) ∼ (n^2/(2f(n))) log(n − f(n)) = (n^2/(2f(n))) log(n(1 − f(n)/n)) ∼ n^2 log(n)/(2f(n))
And the variance is
Var(X) = ∑_{i=1}^{n−f(n)−1} (1 − 2f(n)(n−f(n)−i)/(n(n−1))) / (2f(n)(n−f(n)−i)/(n(n−1)))^2 ∼ ∑_{i=1}^{n−f(n)−1} n^4/(4f(n)^2 n^2) ∼ n^3/(4f(n)^2)
Using Chebyshev's inequality, we have
P(|X − E(X)| > n^2 log(n)/(2f(n))) = O(1/(n log^2(n))),
so X = O(n^2 log(n)/f(n)) with high probability.
Corollary 6.3
The algorithm Waiting Greedy, with τ = Θ(n^{3/2} √(log n)), terminates in τ interactions with high probability.
Proof : In the last theorem, the bound O(max(nf(n), n^2 log(n)/f(n))) is minimized by the function f : n ↦ √(n log(n)).
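A quick numeric check of this balancing step (ours, purely illustrative): with f(n) = √(n log n), the two terms nf(n) and n^2 log(n)/f(n) coincide and both equal n^{3/2} √(log n).

    import math

    def terms(n, f):
        return n * f, n * n * math.log(n) / f

    n = 10**6
    f = math.sqrt(n * math.log(n))
    print(terms(n, f))                          # both terms are equal
    print(n ** 1.5 * math.sqrt(math.log(n)))    # ... and equal n^(3/2) * sqrt(log n)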
Theorem 6.11
Waiting Greedy with τ = Θ(n^{3/2} √(log n)) is optimal in D_ODA(meetTime).
Proof : For the sake of contradiction, we suppose the existence of an algorithm A ∈ D_ODA(meetTime) that terminates in T(n) interactions with high probability, with T(n) = o(n^{3/2} √(log n)).
Without loss of generality we can suppose that A does nothing after T(n) interactions. Indeed, the algorithm A′ that executes A up to T(n) and does nothing afterward has the same upper bound (since the bound holds with high probability). Let L be the set of nodes that interact directly with the sink during the first T(n) interactions, and let L^c be its complement in V \ {s}. We know from Lemma 6.1 that #L = O(T(n)/n) = o(√(n log(n))) with high probability.
We can show that T(n) interactions are not sufficient for all the nodes in L^c to interact with nodes in L. If the nodes in L^c want to send their data to the sink, some data must be aggregated among the nodes in L^c, and then the remaining nodes in L^c that still own data must interact with a node in L before T(n) interactions (this is not even sufficient to perform the DODA, but it is enough to reach a contradiction).
When two nodes in L^c interact, their meetTimes (which are greater than T(n)) and the previous interactions are independent of the future interactions occurring before T(n). This implies that when two nodes in L^c interact, using this information to decide which node transmits is the same as choosing the sender randomly. From corollary 6.2, this implies that the optimal algorithm to aggregate data in L^c is the Gathering algorithm. Now, we show that, even after the nodes in L^c use the Gathering algorithm, there is with high probability at least one node in L^c that still owns data and that does not interact with any node in L. This node prevents the termination of the algorithm before T(n) interactions with high probability, which is a contradiction.
Formally, we have the following lemmas.
Lemma 6.2
Let g(n) be the number of nodes in L^c. After using the Gathering algorithm during T(n) interactions, the number of nodes in L^c that still own data is in ω(√(n/log(n))) with high probability.
Proof : Let X be the number of interactions needed for R(n) nodes in L^c to transmit their data. For the sake of contradiction, we suppose that
\[
g(n) - R(n) = O\big(\sqrt{n/\log(n)}\big) = o(g(n)) \qquad (6.2)
\]
and show that X is greater than T(n) w.h.p. The probability of the i-th interaction between two nodes in L^c that own data, after the (i-1)-th interaction already occurred, is (g(n)-i)(g(n)-i-1)/(n(n-1)). Thus we have:
\[
E(X) = \sum_{i=0}^{R(n)-1} \frac{n(n-1)}{(g(n)-i)(g(n)-i-1)}
     = n(n-1) \sum_{i=g(n)-R(n)+1}^{g(n)} \frac{1}{i(i-1)}
     = n(n-1)\left(\frac{1}{g(n)-R(n)} - \frac{1}{g(n)}\right)
     = n(n-1)\,\frac{R(n)}{g(n)(g(n)-R(n))}.
\]
From equation (6.2) we deduce that g ∼ R and we have:
\[
E(X) \sim \frac{n^{2}}{g(n)-R(n)},
\quad\text{which implies}\quad
E(X) = \Omega\!\left(n^{2}\sqrt{\frac{\log(n)}{n}}\right) = \Omega\!\left(n^{3/2}\sqrt{\log(n)}\right).
\]
As in the previous proofs, the expectation is reached with high probability. This contradicts the fact that T(n) = o(n^(3/2) √log(n)).
Lemma 6.3
Let H ⊂ L c be the nodes in L c that still own data after the gathering. With high probability, T (n) interactions are not sufficient for all the nodes in H to interact with nodes in L.
Proof : We know from the previous lemma that the number of nodes in H is h(n) = ω(√(n/log(n))).
Let X be the random variable that equals the number of interactions needed for the nodes in H to interact with the nodes in L. We show that X is in ω(n^(3/2) √log(n)) with high probability. Indeed, the probability of the i-th interaction between a node in H that owns data and a node in L, after the (i-1)-th interaction already occurred, is 2f(n)(h(n)-i)/(n(n-1)), where f(n) = #L. Thus we have:
\[
E(X) = \sum_{i=0}^{h(n)-1} \frac{n(n-1)}{2f(n)(h(n)-i)}
     = \frac{n(n-1)}{2f(n)} \sum_{i=1}^{h(n)} \frac{1}{i}
     \sim \frac{n^{2}}{2f(n)} \log(h(n)).
\]
But since f(n) = o(√(n log(n))), we have
\[
E(X) = \omega\!\left(\frac{n^{3/2}}{\sqrt{\log(n)}} \log(h(n))\right)
     = \omega\!\left(\frac{n^{3/2}}{\sqrt{\log(n)}} \log\big(\sqrt{n/\log(n)}\big)\right)
     = \omega\!\left(n^{3/2}\sqrt{\log(n)}\right).
\]
Again the bound holds with high probability. This implies that, with high probability, T(n) = o(n^(3/2) √log(n)) interactions are not sufficient for all the nodes in H to interact with nodes in L.
End of the proof of theorem 6.11. We have shown that T(n) interactions are not sufficient for the nodes in L^c to transmit their data (directly or indirectly) to the nodes in L. Indeed, we have shown that the nodes in L^c can apply the Gathering algorithm so that ω(√(n/log(n))) nodes in L^c still own data with high probability. But, with high probability, one of these remaining nodes does not interact with any node in L within T(n) interactions. This implies that, with high probability, at least one node cannot send its data to the sink in T(n) interactions, so an algorithm A with such a bound T does not exist.
Concluding Remarks
We defined and investigated the complexity of the distributed online data aggregation problem in dynamic graphs where interactions are controlled by an adversary. We obtained various tight complexity results for different adversaries and node knowledge. First we show that, when nodes have no knowledge, an oblivious adversary can generate, for a deterministic DODA algorithm or a randomized oblivious DODA algorithm, a sequence of interactions for which this algorithm has an infinite cost (it does not terminate, while the optimal offline algorithm terminates on every suffix of the sequence of interactions). By giving some knowledge to the nodes, such

In this third part of this thesis, we focus on the evaluation of the lifetime of a WSN. Estimating the battery lifetime of a device is important to reduce the duration of the manufacturing and development phases and to ensure sensor sustainability. Indeed, sensor device applications may require a long lifetime (from a day to a year), especially in the medical sector, which makes the test phase complex. Also, this long lifetime requirement impacts the efficiency of existing simulators that aim to accurately simulate all behaviors of a battery: heavy computations are expected when using low-level simulation modes (such as the electrochemical model), as those do not consider the fact that wireless sensor devices spend most of their lifetime in sleep mode.
Problem
The problem we are considering here is: how to evaluate accurately and efficiently the energy consumption and the lifetime of a battery-powered sensor node. Ultimately, the goal is to evaluate the lifetime of a WSN. The definition of the lifetime of a network may depend on the application (e.g., it can be the duration before the first node exits the network, or the duration before a given percentage of nodes exit the network), but it usually depends on the lifetime of the sensor nodes in the network.
The goal is to create two models (an energy consumption model and a battery model) and to implement these models in a module for the WSNet simulator [START_REF] Fraboulet | Worldsens: development and prototyping tools for application specific wireless sensors networks[END_REF]. The energy consumption model defines how a sensor node consumes energy. For instance, one can assume that only the energy consumed by the transmissions is considered, or only the transmissions and receptions, or that all the components of the node consume energy. The battery model represents the state of the battery depending on external factors such as the instantaneous current drawn by the sensor node over time, the characteristics of the battery, or the temperature. In Chapter 8, we present our module called WiSeBat.
In the literature, when it comes to comparing the energy efficiency of protocols, it is common to simply count the number of transmitted messages, or to use a simple energy consumption model with a linear (ideal) battery model. With the help of our module in the WSNet simulator, we can benchmark different protocols to study their performance in a realistic environment. There exists a variety of problems where our model can be used. In this thesis, we propose a simulation campaign to benchmark broadcasting protocols that use varying transmission power to reduce the energy consumption of sensor nodes.
When each node can change its transmission power (and so its transmission range), the goal of an energy efficient broadcast algorithm is, for a given source s, to assign a transmission range to each sensor node so that the node s can broadcast a message to all the other nodes (there is a path from s to all the other nodes) and the overall energy consumed by all the nodes is minimized. This problem, called the minimum energy broadcast problem, has been defined by Cagalj et al. [START_REF] Čagalj | Minimumenergy broadcast in all-wireless networks: Np-completeness and distribution issues[END_REF] as follows:
Definition 7.1 (The minimum energy broadcast problem) Given a graph G = (V, E), where each node i ∈ V is assigned a variable node power p_{v_i}, we assign a link cost c_{ij} : E → R^+ that is equal to the minimum transmission power necessary to maintain link (i, j).
Considering a source node v that wants to broadcast a message, the minimum energy broadcast problem consists of finding the power assignment vector P = [p_{v_1}, p_{v_2}, ..., p_{v_n}] such that it induces the directed graph G' = (V, E'), where E' = {(i, j) ∈ E : c_{ij} ≤ p_{v_i}}, in which there is a path from v to any node of V (all nodes being covered), and such that Σ_{i∈V} p_{v_i} is minimal.
In the geometric version of the problem, i.e., in a WSN, the cost of a link (i, j), which is the minimum transmission power needed by node i to reach node j, equals d_{i,j}^α, where d_{i,j} is the distance between i and j and α is a real number between 2 and 5. As demonstrated by Cagalj et al. [START_REF] Čagalj | Minimumenergy broadcast in all-wireless networks: Np-completeness and distribution issues[END_REF], the minimum energy broadcast problem is NP-hard. Different approaches exist to approximate this problem, all considering that the cost of a link is the square of the distance between the nodes (i.e., α = 2). In Chapter 9 we benchmark several approximation algorithms in a realistic environment using our energy consumption and battery models.
Related Work
Simulating a WSN
Modeling a WSN with a UDG is simple but can be far from realistic, especially when it comes to evaluating the performance of a protocol in a real environment. If a protocol is designed to avoid collisions, then a UDG is well-suited, but when a protocol assumes that collisions are handled by a MAC layer, things are different. In the latter case, to evaluate the performance of a protocol, it is common to assume that the MAC layer is ideal, which means that if two nodes transmit simultaneously to a common neighbor, both messages are received, ignoring the impact of interference. However, in this case, many factors have to be considered, and it is simply not accurate to model the network with a simple UDG. Collisions are the result of interference, for which several models exist, and which depend in turn on the propagation model. The MAC layer has a leading effect on the results as well, especially on the energy consumption of nodes. That is why a simulator with a complete protocol stack for each node is best suited when it comes to evaluating protocols in a WSN.
We start by presenting several wireless sensor network simulators. Then, we survey existing models to evaluate the energy consumption in WSN simulators. Finally, we describe the related work about broadcasting protocols in WSNs.
Generic Network Simulators
NS2/3 NS2 and NS3 [ns297, HRFR06] are open-source discrete event network simulators. They are object-oriented, written in C++, and easily extensible. Modules in NS2 are written in Tcl. Their popularity and the large community using and improving them are an advantage. However, they are not optimized for wireless sensor networks and lack customization options for the energy model, the sensing hardware models, and the packet format. Moreover, only a few low-energy MAC layers are already implemented.
OPNET OPNET [Mod03] is a commercial discrete event, object-oriented network simulator. It is customizable (from the modeling of the hardware to the packet format). However, like NS2/3, it is not very scalable, and the number of available protocols is limited.
OMNeT++ OMNeT++ [V + 01] is a discrete event, component-based simulator developed in C++. It is modular and comes with simulation software that helps with the traceability and debuggability of simulation models. It is scalable and offers many different protocols. A useful feature of OMNeT++ is that it can be embedded in another application. This feature resulted in some simulators based on it (e.g., Castalia and MiXiM).
Emulators
TOSSIM TOSSIM [START_REF] Levis | Tossim: Accurate and scalable simulation of entire tinyos applications[END_REF] is an emulator as it runs actual application code. TOSSIM is designed specifically for TinyOS applications to be run on MICA Motes. However TOSSIM does not have an accurate physical model and is not easily extensible.
Cooja Cooja [ODE + 06] is also an emulator, but designed to run ContikiOS applications. It has the same drawbacks as TOSSIM.
Wireless Sensor Oriented Simulators
GloMoSim GloMoSim [ZBG98] is a discrete event mobile wireless network simulator written in Parsec (an extension of C for parallel programming). Each layer has its own API to communicate with the surrounding layers and can be implemented as a module. However, GloMoSim is limited to IP networks and is not effective for low-power wireless sensor networks.
SENSE SENSE [CBP + 05b] is a discrete event simulator developed in C++. It is component-based (see Figure 7.1), which makes it extensible and scalable. The main limitation is its small community and the small number of available protocols, especially for low-energy wireless sensor nodes.
Castalia Castalia [B + 11] is a WSN and body area network (BAN) simulator built on top of OMNeT++ that proposes realistic radio and physical models, with an 802.15.4 MAC layer. The goal is to be able to test protocols in a realistic environment. However, the energy model is not easily extensible, and the existing physical and MAC models are extensible but limited by the implementation of the simulator. As with the SENSE simulator, we can see that the architecture of Castalia (see Figure 7.2) is close to the architecture of a real sensor.
MiXiM MiXiM [START_REF] Mixim | Simulator for wireless and mobile networks using omnet++[END_REF] is built on top of OMNeT++ and is a combination of several OMNeT++ frameworks that model the lower layers of the protocol stack and offer detailed models of radio wave propagation and interference estimation. The main limitations are the small number of available MAC layers for low-power wireless sensor nodes and the lack of documentation for implementing one's own modules.
WSNet Simulator In this thesis we use WSNet simulator [START_REF] Fraboulet | Worldsens: development and prototyping tools for application specific wireless sensors networks[END_REF] to perform all our simulations. WSNet is an event-driven simulator for wireless networks. It simulates nodes with a full protocol stack (application, routing protocols, mac protocols, radio interface, antenna) and their mobility inside a simulated environment and radio medium (propagation models, interference models, modulation functions, etc.). WSNet is fully extensible as node, environment and radio medium blocks are developed in independent modules that can be changed. The WSNet block architecture is presented in Figure 7.3.
WSNet is particularly attractive for studying wireless sensor networks as there exist modules to provide extreme-scale simulation (up to 20 million nodes [START_REF] Ali | XS-WSNet : Extreme-scale wireless sensor simulation[END_REF]) and to study the impact of faults and attacks [AT10a]. All modules are configured in a single XML file that represents a simulation. WSNet allows users to write their own modules for the environment as well as for any node functionality. The default WSNet installation comes with several models of each type, for instance free space or log-normal shadowing (among others) for the radio propagation, and greedy geometric or static for the routing layer. Of particular interest to us, the default energy module models a linear battery that takes only the transceiver into account. In more detail, the module receives an event at the end of a transmitted or received packet with the duration of the transmission. Then, it multiplies this duration by a number that represents the consumption of the transceiver in the TX or RX mode, and subtracts it from the capacity of the battery. A node is killed, i.e., leaves the simulation, when the capacity of the battery is equal to or smaller than zero.
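For illustration, the bookkeeping performed by this default module essentially reduces to the following logic (a simplified sketch in C, not the actual WSNet source; all names are ours):

    /* Sketch of a linear, radio-only energy model: the remaining capacity is
     * decreased by (current x duration) at the end of every transmitted or
     * received packet, and the node dies when the capacity reaches zero. */
    #include <stdbool.h>

    typedef struct {
        double capacity_mAh;   /* remaining capacity */
        double tx_current_mA;  /* current drawn in TX mode */
        double rx_current_mA;  /* current drawn in RX mode */
        bool   alive;
    } linear_battery_t;

    static void consume(linear_battery_t *b, double current_mA, double duration_h) {
        b->capacity_mAh -= current_mA * duration_h;
        if (b->capacity_mAh <= 0.0)
            b->alive = false;  /* the node is killed, i.e., leaves the simulation */
    }

    void on_packet_tx_end(linear_battery_t *b, double duration_h) {
        consume(b, b->tx_current_mA, duration_h);
    }

    void on_packet_rx_end(linear_battery_t *b, double duration_h) {
        consume(b, b->rx_current_mA, duration_h);
    }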
Each model can communicate with other models through the core of the simulator. A default connection exists between two consecutive modules in the protocol stack. For instance, when the application module wants to transmit a packet, it calls the function TX, which calls the function tx in the lower layer (usually the module handling the routing). Each module also has a generic entry point called ioctl that can be used by any other module, usually to perform cross-layer communications. For instance, if the application module wants to change the options of the radio module, it has to use this function. This feature is used by our module WiSeBat, presented in Chapter 8, to simulate the energy consumption and the battery accurately.
For this thesis, simulations are performed in the WSNet simulator [START_REF] Fraboulet | Worldsens: development and prototyping tools for application specific wireless sensors networks[END_REF]. By default, the simulator uses the SINR model [START_REF] Elyes | Scalable versus Accurate Physical Layer Modeling in Wireless Network Simulations[END_REF]. With this, we use the log-distance pathloss propagation model.
Simulating Energy Consumption in WSNs
There exists a number of available simulators, designed for specific or generic networks (see Section 7.2.1). Even if the definition of network lifetime is very application-dependent [START_REF] Dietrich | On the lifetime of wireless sensor networks[END_REF], it is determined using each single node's lifetime. Estimating a node's lifetime requires both the node's power consumption and its battery to be modeled accurately [DDV + 14]. Therefore, a variety of energy consumption and battery models have been studied [RVR03, RFG13] and implemented in existing simulators [WGMM08, FW10, PSS00] (often in the most popular ones). Despite a large choice of solutions to estimate the network lifetime, the majority of studies about wireless sensor networks do not use an accurate power consumption and/or battery model. This is mostly because network lifetime is not always the first concern of these studies, and sometimes because they use a specific simulator that does not implement an accurate energy model (like the WSNet simulator [START_REF] Chelius | Worldsens: A fast and accurate development framework for sensor network applications[END_REF]).
Usually, in the basic approach, the sensor node's consumption model is limited to an estimate of the energy consumed by each bit transmitted by the radio. Together with this modeling, an ideal battery model is often added (as is the case for NS2/3 [ns297, HRFR06] or WSNet [START_REF] Chelius | Worldsens: A fast and accurate development framework for sensor network applications[END_REF], to cite a few). On the one hand, this model is very limited but, on the other hand, more accurate battery models can themselves be too complex and too time-consuming to be part of the simulation [RVR03]. As we demonstrate in the remainder of this part, this approach is very limited and does not, in any real case, match the energy behavior of individual nodes.
Other solutions like SENSE [CBP + 05b] or PowerTOSSIMz [PCC + 08] model the power consumption of every component of the node together with a non-linear battery. Even if this approach seems to be more realistic, it does not take into account the battery's supply voltage variations (which are non-linear). This leads to another kind of inaccuracy. Indeed, we know that a node's life ends when its battery supply voltage falls under a given threshold (called the cut-off voltage), even if the battery is not fully depleted. Depending on this cut-off voltage, the lifetime can be drastically reduced. That is why ignoring the voltage variation leads to an overestimation of the lifetime.
Benchmarking Energy-Centric Broadcast Protocols
We can distinguish two families of protocols that aim to be energy efficient by adjusting transmitting powers: topology control oriented protocols, and broadcast oriented protocols.
Topology control oriented protocols. A topology control oriented protocol assigns the transmission power of each node independently of the source of the broadcast. The goal is to obtain a connected network with minimum total transmission power according to an energy consumption model. Once the radii are assigned, they are used for every broadcast from an arbitrary source. The problem of minimizing the total transmission power that keeps the network connected is known as the min-assignment problem and was considered by Kirousis et al.
Broadcast oriented protocols. A broadcast oriented protocol has the same overall goal, but considers that the broadcast starts at a given node. Hence, the induced broadcast network does not have to be strongly connected, leading to possibly more efficient solutions. For instance, the last nodes that receive the message do not need to retransmit it, i.e., the algorithm may assign them a null transmission power. However, the source must be able to reach every node of the network. The problem remains difficult, as it has been proven [START_REF] Čagalj | Minimumenergy broadcast in all-wireless networks: Np-completeness and distribution issues[END_REF] that the minimum energy broadcast problem is NP-complete.
In Chapter 9 we study six broadcast oriented protocols (FLOOD, BIP, LBIP, DLBIP, RBOP and LBOP) defined in the literature [WNE00, ISR + 08, CBB09, CISRS05].
Flooding Flooding is the simplest distributed protocol: when a node has a message to transmit, it transmits it with maximum power. Flooding is typically used for comparison with more elaborate energy-centric protocols, as it tries to maximize reachability without considering energy consumption.
BIP (Broadcast Incremental Power) [WNE00] BIP is the only centralized algorithm we consider in this chapter. Although greedy-based, it is one of the most efficient algorithms, and it is commonly used as a reference in the literature. It constructs a tree as follows: initially, the broadcast tree contains only the source node; then, while there exists a node that is not in the tree, it computes the minimum incremental power needed to add a node to the tree, either by increasing the transmission power of a transmitting node in the tree, or by choosing a non-transmitting node in the tree to transmit.
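To make the greedy step concrete, the following sketch builds a BIP tree on a complete geometric graph with link cost d_{ij}^2 (α = 2, c = 0). This is our own illustrative implementation, not the code of [WNE00]; the incremental cost of covering node j by a tree node i is max(0, d_{ij}^2 - p_i).

    #include <float.h>
    #include <stdio.h>

    #define N 6                      /* number of nodes, node 0 is the source */

    static double cost(const double x[], const double y[], int i, int j) {
        double dx = x[i] - x[j], dy = y[i] - y[j];
        return dx * dx + dy * dy;    /* link cost = d^2 (alpha = 2, c = 0) */
    }

    /* Builds a BIP broadcast tree rooted at node 0.
     * power[i] is the transmission power finally assigned to node i. */
    void bip(const double x[], const double y[], double power[], int parent[]) {
        int in_tree[N] = {0};
        in_tree[0] = 1;
        for (int i = 0; i < N; i++) { power[i] = 0.0; parent[i] = -1; }

        for (int added = 1; added < N; added++) {
            double best_inc = DBL_MAX;
            int best_i = -1, best_j = -1;
            /* incremental power needed to cover j by raising the power of i */
            for (int i = 0; i < N; i++) {
                if (!in_tree[i]) continue;
                for (int j = 0; j < N; j++) {
                    if (in_tree[j]) continue;
                    double inc = cost(x, y, i, j) - power[i];
                    if (inc < 0.0) inc = 0.0;
                    if (inc < best_inc) { best_inc = inc; best_i = i; best_j = j; }
                }
            }
            power[best_i] += best_inc;   /* grow (or keep) the range of best_i */
            in_tree[best_j] = 1;
            parent[best_j] = best_i;
        }
    }

    int main(void) {
        double x[N] = {0, 1, 2, 2, 3, 4}, y[N] = {0, 1, 0, 2, 1, 0};
        double power[N]; int parent[N];
        bip(x, y, power, parent);
        double total = 0.0;
        for (int i = 0; i < N; i++) {
            total += power[i];
            printf("node %d: power %.2f, parent %d\n", i, power[i], parent[i]);
        }
        printf("total power: %.2f\n", total);
        return 0;
    }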
LBIP (Localized BIP) [ISR + 08] LBIP is the distributed version of BIP. Since acquiring the knowledge of the full network topology to apply BIP at every node would be too expensive (at least, energy-wise), the goal of the protocol is to discover only the 2-hop neighborhood and to use the BIP algorithm on it. Each time a node receives a packet, the packet can contain, in addition to the original message, a list of forwarder nodes. If the node is in this list, it computes the BIP tree on its 2-hop neighbors to choose the right transmission power and transmits the message with a list of nodes that must forward it, according to the BIP tree.
DLBIP (Dynamic Localized BIP) [CBB09] DLBIP is the energy-aware version of LBIP. It is dynamic in the sense that, for the same source, two broadcast trees may be different. The broadcast is done in the same way as in LBIP, but the BIP tree constructed from the 2-hop neighbors takes into account the remaining energy of the nodes, so as to favor nodes with higher residual energy.
RBOP (RNG Broadcast Oriented Protocol) [CISRS05] RBOP is based on the RNG topology control protocol. A node that has a message sends it with a transmission power such that all its neighbors in the RNG graph receive it. The protocol also contains some optimizations to avoid transmissions when a node knows that some of its neighbors in the RNG graph have already received the message.
LBOP (LMST Broadcast Oriented Protocol) [CISRS05] LBOP applies the same scheme as RBOP but the RNG graph is replaced by the LMST one.
There exist other broadcast oriented algorithms that we do not consider here. For instance, INOP (INside-Out Power adaptive approach), defined in [START_REF] Chiganmi | Variable power broadcast using local information in ad hoc networks[END_REF], is very close to the other algorithms. It takes into account the 2-hop neighborhood, sorts the neighbors by the power needed to cover them, and then computes the optimal energy strategy, starting from the closest neighbor, to cover the next neighbor directly or indirectly. In the same paper, the authors evaluate their algorithm with a realistic network stack, but they used the 802.11 DCF MAC layer (which does not correspond to sensor network MAC layers) and an ideal per-packet energy consumption model.
All aforementioned works (besides INOP [START_REF] Chiganmi | Variable power broadcast using local information in ad hoc networks[END_REF]) consider an ideal MAC layer where no interference occurs when two neighboring nodes transmit data within the same timeframe. Such an assumption is unrealistic, as further discussed in Section 7.2.1. INOP [START_REF] Chiganmi | Variable power broadcast using local information in ad hoc networks[END_REF] does consider a realistic network stack, but the stack is related to full-fledged computer networks, and it is not energy-aware. Sensible network stacks for sensor nodes include ContikiMAC and 802.15.4, which we consider in the sequel.
Another oversimplification made by all previous work is even more problematic: the energy consumed by a node is supposed to be equal to the energy consumed by the radio during transmissions. In more detail, the energy E(u) consumed by a node u for one transmission is given as a function of the radius r(u) of the transmission (7.1) (which depends on the transmission power of the radio). The energy consumed by the protocol is then the sum E of the energies consumed by all nodes (7.3). In order to compare several algorithms, we can consider the ratio EER between the energy consumed by a protocol and the energy consumed by the flooding protocol (7.2-7.4), which always chooses the maximum transmission power.
\[
E(u) = \begin{cases} r(u)^{\alpha} + c & \text{if } r(u) \neq 0 \\ 0 & \text{otherwise} \end{cases} \qquad (7.1)
\]
\[
E_{flooding} = n \times (R^{\alpha} + c) \qquad (7.2)
\]
\[
E = \sum_{u \in V} E(u) \qquad (7.3)
\]
\[
EER = \frac{E}{E_{flooding}} \times 100 \qquad (7.4)
\]
where α, c ∈ R^+. Of course, real sensor nodes also consume energy when doing other tasks (reading sensor values, computing, receiving data, re-transmissions). Also, low-power batteries typically have a non-linear behavior (their capacity may vary depending on the intensity of the drained current at a given time).
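For reference, the metric defined by equations (7.1)-(7.4) can be computed as follows (a small helper written for illustration; the radii, α and c are inputs):

    #include <math.h>
    #include <stddef.h>

    /* Energy of one node for one transmission, equation (7.1). */
    static double node_energy(double radius, double alpha, double c) {
        return (radius != 0.0) ? pow(radius, alpha) + c : 0.0;
    }

    /* EER in percent, equations (7.2)-(7.4): total energy of the protocol
     * divided by the energy of flooding at maximum range R. */
    double eer_percent(const double radius[], size_t n,
                       double R, double alpha, double c) {
        double e = 0.0;
        for (size_t u = 0; u < n; u++)
            e += node_energy(radius[u], alpha, c);
        double e_flooding = (double)n * (pow(R, alpha) + c);
        return e / e_flooding * 100.0;
    }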
Contribution of Part III
Energy Consumption and Battery Models for the WSNet Simulator We use an accurate existing battery model (presented in previous work [DDV + 14]) that we optimized for wireless sensor network simulation. We then link it with a power consumption model that takes all individual components' power consumption into account. The WiSeBat model refers to both the battery model and the power consumption model. We give a formal definition of these models. Then, we explain how to use them with user applications in the WSNet simulator. We validate our approach against a real sensor device. The collected measurements show that our model performs very well for duty-cycled scenarios and gives results that are more realistic than those of the default energy model of WSNet by several orders of magnitude.
Once validated, we use the WiSeBat approach to compare different protocol stacks in various scenarios. First, we compare X-MAC [BYAH06], ContikiMAC [START_REF] Dunkels | The contikimac radio duty cycling protocol[END_REF], and the beacon-enabled 802.15.4 MAC [80203] (simply denoted 802.15.4 in the sequel) in a single-node scenario. As expected, ContikiMAC performs better than X-MAC, and 802.15.4 becomes useful if the node is in sleep mode for a long period of time. Secondly, we show that, as soon as an intermediate node is needed to forward messages from the leaf node to the coordinator, ContikiMAC outperforms the 802.15.4 MAC layer. Indeed, if a node using 802.15.4 needs to be able to forward a message, it has to regularly send beacons, which drains the battery faster than the short channel assessments needed by ContikiMAC. This is confirmed in the last scenario, where we compare the performance of ContikiMAC and the 802.15.4 MAC in a dense network. The WiSeBat energy model and the scenarios used in this thesis are available online [wis14] and can be adapted to compare sensor lifetimes with user applications performing under user scenarios.
Benchmarking Energy-Centric Broadcast Protocols We consider previously proposed energy-centric broadcasting protocols for WSNs and evaluate them in realistic scenarios. We benchmark them using a simulator that includes a complete communication protocol stack, a realistic physical communication layer, and an accurate energy consumption model. We choose to perform our simulations with ContikiMAC and 802.15.4 MAC, which were both designed for small and energy-constrained devices. In particular, in our settings, the MAC layer has to deal with possible collisions, and the energy consumption takes into account all components of the sensor device (including a non-linear battery behavior).
The results of our evaluation are as follows. First, we demonstrate that wireless interference significantly impacts the performance of broadcasting protocols in various ways. Indeed, the hierarchy of the broadcasting protocols (based on their performance in an ideal setting) is not preserved in the more realistic setting. Also, we show that the MAC layer does not impact all broadcast protocols in the same way (some protocols perform better with ContikiMAC than with 802.15.4 MAC, while others do not). Quite surprisingly, it turns out that the very simple flooding protocol (used as a theoretical lower bound in previous work) is actually one of the best distributed broadcasting protocols in our realistic environment. Our results show that considering only idealized settings in theoretical work does not give accurate performance hierarchies (including relative ones) in practical settings.

In this chapter we present our model called WiSeBat, which includes an energy consumption model and a battery model, and which is implemented in the WSNet simulator. First we describe the models used by WiSeBat and their usage in the WSNet simulator. In Section 8.2 we evaluate its accuracy by comparing its results with measurements obtained using a real sensor node. Finally, we present in Section 8.3 several simulations done with the WSNet simulator and WiSeBat to evaluate the lifetime of a WSN under different scenarios.
The WiSeBat Energy Model
In this section, we further describe our proposal for modeling energy-related issues in wireless sensor networks, named WiSeBat. The WiSeBat (Wireless Sensor Battery) model aims to realistically simulate the battery and the device power consumption with only a limited computation overhead. It offers the possibility to define custom components in addition to the wireless transceiver (typically sensors, LEDs, and micro-controllers) and their use by the application layer. The overview of the use of WiSeBat in a network simulator is as follows. First, the application code that is to be run by individual nodes registers all of its components during the initialization phase and gives them a name and a consumption function. Then, the application may change the running mode of a registered component during the execution of the simulation. The WiSeBat model automatically computes the total current consumed by all registered components and updates the supply voltage and the residual capacity of the battery. A node is killed when the voltage of the battery falls below the cut-off voltage of the device, that is, the maximum cut-off voltage among all its components.
We begin with a formal definition of the model that uses existing techniques to simulate the behaviors of a battery (subsection 8.1.1), and then explain how it is implemented in the WSNet simulator and how to use it in a particular application (subsection 8.1.2).
Battery Modeling
With respect to the energy consumption model, the WiSeBat module "simply" takes all components into account. In this section we therefore focus on the battery model. We progressively present the behaviors of a battery we choose to simulate, starting from the simple linear model, then the rate capacity effect, and finally the voltage variations. At the end of the section we detail an important issue of our model, which is to determine when the battery needs to be updated.
Linear Battery Modeling
We start by considering an ideal battery model, but taking all the components in the device into account. The capacity of the battery is stored in the model and each time a component consumes a current i during a time t, it subtracts i × t from the battery capacity. If a particular component x (from a set of components X ) has spent T x (s) hours in a state s (from the set of possible states S x of component x), and consumes i x (s) mA in this state, then the battery capacity consumed C (in mAh) by all supplied components is computed using the following equation:
\[
C = \sum_{x \in X} \sum_{s \in S_x} i_x(s)\, T_x(s)
\]
In this simple model, the voltage is assumed to remain constant in all cases, and there is no difference between a component that consumes 1mA during 1s and a component that consumes 10mA during 0.1s.
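As a quick numeric illustration (the figures are chosen arbitrarily and are not taken from Table 8.4), a node whose radio spends 0.01 h transmitting at 17 mA and 0.99 h sleeping at 0.02 mA consumes, over that hour,
\[
C = 17 \times 0.01 + 0.02 \times 0.99 \approx 0.19 \text{ mAh},
\]
regardless of how the current is distributed over time.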
The Rate Capacity Effect
The first behavior that is captured by our model is the rate-capacity effect. It refers to the fact that a lower discharge rate (i.e., current) is more efficient than a higher one: more charge can be extracted from the battery before reaching a given cut-off voltage. In fact, the battery behaves as if its capacity decreased when the discharge rate increases.
When the current is constant, the total amount of energy that can be drawn from the battery is given by a function C_eq, which is the equivalent capacity of the battery. The maximum capacity of the battery, called the nominal capacity and denoted by C_nominal, is measured with a low current draw called the nominal current draw. Battery data-sheets often contain C_nominal and a few other values.
At time t, if a battery is subject to a current i(t), the equivalent current is defined by [DDV + 14]:
\[
i_{eq}(i(t)) = \frac{C_{nominal}}{C_{eq}(i(t))}\, i(t) \qquad (8.1)
\]
so that, if the current is constant and equal to i, the battery is empty (or is not able to deliver a current) after a time T such that i × T = C_eq(i). Using the equivalent current is a simple way to approximate the rate capacity effect, but it requires the battery model to know the current draw of each component at any given time. The energy consumption cannot be distributed as in the linear model.
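A minimal sketch of this computation, assuming C_eq is given as a piecewise-linear function of the current built from (current, capacity) data-sheet points (the sample points and the nominal capacity below are invented for illustration):

    #include <stdio.h>

    #define C_NOMINAL 40.0          /* nominal capacity in mAh (illustrative) */

    /* (current in mA, equivalent capacity in mAh), sorted by current. */
    static const double ceq_points[][2] = {
        { 0.1, 40.0 }, { 10.0, 36.0 }, { 40.0, 30.0 }
    };
    #define NPOINTS 3

    /* Piecewise-linear interpolation of C_eq(i). */
    static double c_eq(double i) {
        if (i <= ceq_points[0][0]) return ceq_points[0][1];
        for (int k = 1; k < NPOINTS; k++) {
            if (i <= ceq_points[k][0]) {
                double t = (i - ceq_points[k-1][0]) /
                           (ceq_points[k][0] - ceq_points[k-1][0]);
                return ceq_points[k-1][1] +
                       t * (ceq_points[k][1] - ceq_points[k-1][1]);
            }
        }
        return ceq_points[NPOINTS-1][1];
    }

    /* Equivalent current of equation (8.1). */
    double i_eq(double i) {
        return C_NOMINAL / c_eq(i) * i;
    }

    int main(void) {
        /* at 40 mA the battery behaves as if it only had 30 mAh: the model
         * charges 40 * 40/30 = 53.3 mA against the nominal capacity */
        printf("i_eq(40 mA) = %.1f mA\n", i_eq(40.0));
        return 0;
    }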
Voltage Variation
In the case of a real battery, the supply voltage is not constant and depends on many factors. Here, our battery model takes into account two main factors: (i) the variation of the voltage depending on the residual capacity (when the battery is subject to the nominal current draw), and (ii) the internal resistance of the battery, which linearly impacts the voltage depending on the current draw. Actually, the internal resistance of the battery also depends on the residual capacity. We denote by Emf(C) and R(C) the voltage provided by the battery and the internal resistance when the residual capacity equals C, respectively. Then, the supplied voltage V of the battery with a residual capacity C(t) and a current draw i(t) is given by:
\[
V = Emf(C(t)) - i(t)\, R(C(t))
\]
This voltage directly affects the components in two ways. First, the current drawn by a particular component may depend on the voltage. Second, a component stops operating when the voltage is lower than its cut-off voltage. This aspect implies a cyclic relation of dependence between voltage and current draw.
To take the voltage into account, we denote by i_x(V, s) the current draw of component x in state s with a supplied voltage V.
WiSeBat Model
We now combine the battery model and the components' consumption model into a single module: WiSeBat. Suppose we have a set of components X and a function S that returns the state of each component at any given time: for each component x ∈ X, S_x(t) denotes the state of x at time t. We denote by C(t) and i(t) respectively the residual capacity of the battery in mAh and the current draw in mA at time t in hours. Let i_eq be the equivalent current draw in mA defined by equation (8.1). Let t_1, t_2, ... be the times when the battery is updated. Our module relies on the following relations:
\[
C(t_i) = C(t_{i-1}) - i_{eq}(i(t_{i-1}))\, (t_i - t_{i-1}) \qquad (8.2)
\]
\[
V(t_i) = Emf(C(t_i)) - i(t_{i-1})\, R(C(t_i)) \qquad (8.3)
\]
\[
i(t_i) = \sum_{x \in X} i_x(V(t_i), S_x(t_i)) \qquad (8.4)
\]
We see in Equation (8.2) that the capacity decreases linearly with the time elapsed between two updates. Thus, if no update occurs for a long time, the voltage and the current draw can become obsolete. So the closer the updates are, the more accurate the approximation is.
In Equation (8.3), we update the voltage depending on the residual capacity and the previous current draw. So, if the current draw changes during its update in Equation (8.4), after a change of state of a component for instance, the voltage must be updated again.
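The update step can be summarized by the following sketch (our own simplified rendition of equations (8.2)-(8.4), not the WiSeBat source; emf(), r_int(), i_eq() and total_draw() stand for the piecewise-linear functions Emf and R, the equivalent current of equation (8.1), and the sum of the registered components' consumption functions):

    /* One battery update at time t (hours), following (8.2)-(8.4). */
    typedef struct {
        double capacity;     /* residual capacity C(t_i) in mAh */
        double voltage;      /* supplied voltage V(t_i) in V    */
        double current;      /* total current draw i(t_i) in mA */
        double last_update;  /* t_{i-1} in hours                */
    } battery_t;

    extern double emf(double capacity);        /* Emf(C) */
    extern double r_int(double capacity);      /* R(C)   */
    extern double i_eq(double current);        /* equation (8.1) */
    extern double total_draw(double voltage);  /* sum of i_x(V, S_x(t)) */

    void battery_update(battery_t *b, double t) {
        /* (8.2): residual capacity decreases linearly since the last update,
         * using the equivalent current of the previous draw */
        b->capacity -= i_eq(b->current) * (t - b->last_update);
        /* (8.3): voltage from the new residual and the previous current draw */
        b->voltage = emf(b->capacity) - b->current * r_int(b->capacity);
        /* (8.4): new total current draw of all registered components.
         * In WiSeBat the last two steps are repeated until the current draw
         * stabilizes (see the update-triggering discussion below). */
        b->current = total_draw(b->voltage);
        b->last_update = t;
    }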
Updates Triggering
As we already mentioned, the more often the battery is updated, the more accurate its approximation is. An intuitive approach to obtain accurate results could be to trigger an update every T time units. Even if this method has shown good results [START_REF] Dron | A fixed frequency sampling method for wireless sensors power consumption estimation[END_REF], it is computationally expensive, especially for duty-cycled sensor networks where nodes are in sleep mode the vast majority of the time. Indeed, when the current draw is very low (which typically occurs when the device is in sleep mode), the residual of the battery is nearly constant and so is its voltage. In this case, close consecutive updates should be factored out.
In contrast, when the current draw is high, the residual may decrease in a short period of time and updates have to be triggered more often to keep the voltage up to date. Here, let ε_V be a positive constant and t_1 be a time when an update occurs. Assume we want the error on the voltage to be smaller than ε_V. If a decrease of δC in the residual implies a change of ε_V in the voltage, then another update must occur at a time t_2 such that:
\[
t_2 < t_1 + \frac{\delta C}{i_{eq}(i(t_1))}
\]
Indeed, the difference between residuals at time t 2 and t 1 is smaller than δC so that the voltage error is smaller than ε V .
In our implementation, the two functions Emf and R are piecewise linear (obtained by linear interpolation from the data-sheet), so that it is easy to compute the change in voltage resulting from a change in the residual, for a given current draw.
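The resulting scheduling rule is then a one-liner (a sketch; delta_c denotes the residual decrease that changes the voltage by ε_V, obtained from the piecewise-linear Emf around the current residual):

    /* Time (in hours) of the next battery update needed to keep the voltage
     * error below epsilon_V, given the last update time t1 (equation above). */
    extern double i_eq(double current);

    double next_update_time(double t1, double delta_c, double current_draw) {
        return t1 + delta_c / i_eq(current_draw);
    }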
Another important instant when an update must occur is when the running mode of a component changes, for instance when the CPU wakes up. Indeed, such updates imply a change in the current draw. As a matter of fact, the voltage is computed with the previous current draw, and must be updated as soon as the current changes. However, as we saw, there exists a cyclic relation between voltage and current draw. In our implementation, the voltage and the current draw are updated until the variation in the current draw is smaller than a given ε_i > 0. Since it is in theory possible that this process never converges, we delay the successive updates by a time l > 0 (for example the smallest possible duration available in the simulator, one nanosecond in WSNet), so that the simulation keeps going and the residual capacity eventually decreases with every update. In practice, the convergence is really fast because, on the one hand, when the voltage changes, the change in current is small, and on the other hand, the voltage variation is linear with respect to the current draw variation.
Figure 8.1 summarizes the WiSeBat update scheme, triggered either from an internal callback or when a component's mode changes.
WiSeBat Usage
WiSeBat is used in two different places. The characteristics of the battery have to be set in the WSNet configuration file. Then, on the application code side, the consumption functions of used components must be provided, and the mode of individual components must be updated using WiSeBat control functions.
WiSeBat Options
One entity of the model represents one battery. The characteristic of the battery is given to the model using the following options:
energy: the nominal capacity C_nominal of the battery (in milliampere-hours).
internal-resistance: the internal resistance of the battery (in Ohm) depending on the residual capacity. This parameter permits the construction of the function R.
cut-off-voltage: the cut-off-voltage of your architecture (in Volt).
voltage-characteristic: the characteristic of the voltage depending on the residual. This parameter permits the construction of the function Emf .
capacity-characteristic: the capacity depending on the current draw. This parameter permits the construction of the function C eq .
For the parameters that define a function (internal-resistance, voltage-characteristic and capacity-characteristic), the user can either give a number, to define a constant function equal to this number, or give a list of pairs, to define a piecewise linear function obtained by linear interpolation.
Other options can be given to nodes to define the level of logging. Also, the constants ε_V and ε_i defined in the previous section are set at compilation time, with default value 10^-4.
WiSeBat Consumption and Control Functions
For each component of a particular device, a consumption function has to be defined. The consumption function of a component x takes as arguments the context of the battery (which contains the voltage V) and the mode s of the component, and returns the current (i.e., discharge rate) used by the component, which corresponds to i_x(V, s). An example of a consumption function is given in Figure 8.2. The returned values are taken from Table 8.4 and correspond to the current drawn by the transceiver.
At WSNet initialization, each component has to be registered using Function battery_register_component that takes as arguments a component (i.e., a name and a consumption function) and its initial running mode. Then, during the execution of the simulation, a component's mode can be changed using Function battery_set_component_mode that takes as arguments a registered component and its new mode. Each time this function is called, the battery is updated as explained in the previous section. Additional updates may be triggered, typically just after a change in the current draw, to take into account the change in voltage.
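As an illustration, a consumption function and its registration could look as follows. The exact signatures may differ from the real WiSeBat headers, battery_context_t and battery_get_voltage are hypothetical names, and the numeric values are illustrative rather than taken from Table 8.4.

    /* Hypothetical declarations: the real WiSeBat headers may differ. */
    typedef struct battery_context battery_context_t;
    extern double battery_get_voltage(battery_context_t *battery);
    extern void battery_register_component(battery_context_t *battery,
                                           const char *name,
                                           double (*consumption)(battery_context_t *, int),
                                           int initial_mode);
    extern void battery_set_component_mode(battery_context_t *battery,
                                           const char *name, int new_mode);

    enum { RADIO_SLEEP, RADIO_RX, RADIO_TX };   /* modes of the radio component */

    /* Consumption function i_radio(V, s): current (mA) drawn by the
     * transceiver in mode s; the figures below are illustrative only. */
    static double radio_consumption(battery_context_t *battery, int mode) {
        double v = battery_get_voltage(battery);
        switch (mode) {
        case RADIO_TX:    return (v > 3.0) ? 17.4 : 16.9;
        case RADIO_RX:    return 18.8;
        default:          return 0.001;        /* sleep */
        }
    }

    /* At initialization, register the component with its initial mode;
     * later, notify WiSeBat of every mode change. */
    void app_init(battery_context_t *battery) {
        battery_register_component(battery, "radio", radio_consumption, RADIO_SLEEP);
    }

    void app_before_tx(battery_context_t *battery) {
        battery_set_component_mode(battery, "radio", RADIO_TX);
    }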
Evaluation
To evaluate the accuracy of our model, we used a real sensing device running actual application code. The same application has been implemented in the WSNet simulator so that we can simulate the same Sense/Compute/Transmit cycles (see Figure 8.3) and compare measured and simulated lifetimes (Subsection 8.2.1). We also measured the simulation overhead induced by our model, using a software profiler (Subsection 8.2.2). The same application is used for the simulations in Section 8.3.
Comparison with Real life measurements
We start evaluating our WiSeBat model with a comparison between real and simulated sensor lifetime running the same application scenario. We used a prototype wireless sensor device that contains multiple sensors and has a rechargeable battery. The device runs under ContikiOS with the standard 802.15.4 MAC slotted beacon with duty cycle. The full device architecture is confidential but the main components with their electrical consumption obtained from the datasheets are listed in Table 8.4.
The cut-off voltage is the minimum voltage required for a component to operate properly. The maximum cut-off voltage among all the components becomes the cutoff voltage of the whole application. Here, the device stops working if the voltage is below 2.4 Volt, corresponding to the power manager cut-off.
The scenario consists of sensing the pressure and sending the measured value to a coordinator every second or every ten seconds, with the device in sleep mode between two transmissions. Figure 8.3 details a transmission cycle. The durations are obtained from measurements. We measured the lifetime of the real device and simulated the same scenarios with WiSeBat and with the default models of WSNet. The node lifetime simulated with each model is given in Figure 8.5 with a log-scaled y-axis. We observe that, unlike the other approaches, which overestimate the lifetime of the node, WiSeBat underestimates it. This is important if the simulation guarantees are then used for actual device production: an overestimated lifetime leads to weaker batteries being embedded on actual devices, and the network fails operating before expectations. On the contrary, an underestimated lifetime permits the real device to last at least as long as the simulation.

Figure 8.5 - Lifetime of a node simulated with the WiSeBat models, the WiSeBat battery model with only the radio transceiver consumption, and the WSNet default models, relative to a real sensor lifetime, in y-log-scale. WiSeBat underestimates the lifetime by 3-14%, while the default WSNet models overestimate it by more than 2600%.

The accuracy of the WiSeBat model outperforms the default models of WSNet by several orders of magnitude: WiSeBat underestimates the lifetime of the node by at most 14% (respectively 4%), while the default models of WSNet overestimate the lifetime by more than 2600% (respectively 2890%) in the 10-second scenario (respectively the 1-second scenario). This means that if we were using the simulation outputs to dimension batteries, WiSeBat-driven batteries would last at least as expected and would be oversized by at most 14%, while "classical" WSNet-driven batteries would last only 1/27 of their expected lifetime, rendering the deployed sensor network practically useless.
We also simulate the scenario with the WiSeBat battery model taking only the transceiver into account, and the default energy consumption model that only takes the transceiver into account (wisebat_radio_only). We see that it improves the default battery model by a multiplicative factor of at least 5, but it remains quite far from reality, as it overestimates the lifetime by more than 500%. This demonstrates that both aspects of WiSeBat (battery and energy consumption models) play an important role in the global lifetime estimation.
Figure 8.6 - Profile of the simulation time with ContikiMAC: WiSeBat 33.6% vs. other models 66.4%; within WiSeBat, set_component_mode accounts for 63% and internal_update for 32%, most of which is spent in the update function.
WiSeBat simulation overhead
The simulation time overhead of WiSeBat has been measured with the Instruments tool of Xcode. Of course, the percentage of simulation time spent in the WiSeBat models depends on the overall configuration. Indeed, with the heavyweight 802.15.4 MAC layer, WiSeBat-specific functions represent only 0.1% of the simulation time. In contrast, with the lightweight ContikiMAC layer, the percentage of simulation time spent in the WiSeBat models is around 34% (see Figure 8.6). Inside the WiSeBat models, the two functions set_component_mode and internal_update use 95% of the time. Both functions call some internal functions, in particular the main update function, which corresponds to 73% of the time spent by WiSeBat and 24.5% of the whole WSNet simulation (assuming the ContikiMAC scenario).
Those results demonstrate that the second main goal of WiSeBat is attained: despite being accurate (with respect to real sensors), the computation time required to obtain accurate results remains reasonable (typically a 1/3 overhead, sometimes much less). We expect WiSeBat to prove useful in a number of simulation scenarios, including those involving a respectable number of sensors.
Simulations
In this section we use WiSeBat to compare the lifetime of nodes using different protocol stacks in simple and complex scenarios. We use the same application as in the previous section (see Figure 8.3). We first focus on the MAC and routing layers. We compare the X-MAC, ContikiMAC, and 802.15.4 MAC layers in a single-node scenario (Subsection 8.3.1), then we examine ContikiMAC and 802.15.4 with a static or an RPL routing layer in a two-node scenario (Subsection 8.3.2).
Single Node Scenario
In this scenario, we consider a sensor node that repeatedly sends its data to a coordinator with varying time intervals (every ten seconds and every minute). The results are given in Figure 8.7. Here, there is no multi-hop communication, i.e., sensor nodes do not need to forward, allowing them to return to sleep mode between two transmissions. The 802.15.4 standard allows for arbitrarily long duty-cycles (for reduced-functionality leaf nodes), which explains why the 802.15.4 MAC layer performs better when the duty-cycle is longer. However, when nodes need to transmit more frequently, the near absence of control messages in ContikiMAC permits a longer lifetime than the 802.15.4 MAC layer. In both scenarios, X-MAC is outperformed by the two other MAC layers.
Two-Node Scenario
In this scenario, two sensor nodes (one router node and one leaf node) and a coordinator are placed in a line. The leaf node needs to send its data every minute to the intermediate router, which forwards it to the coordinator. We compared the lifetime of the router and of the leaf node with the ContikiMAC or 802.15.4 MAC layer and a static or RPL routing layer. Figure 8.8 presents the results of this simulation. We first observe that using the RPL routing layer does not impact the lifetime of the devices much, compared to static routing. Also, we observe that ContikiMAC performs better than the 802.15.4 MAC layer. Indeed, with the 802.15.4 MAC, the fact that the router has to send beacons prevents it from lasting more than a day. Moreover, after the router shutdown, the leaf node uses all its remaining energy on control messages, trying to recover from the loss of its neighbor. With ContikiMAC, both the router and leaf lifetimes (19-20 days) almost equal the lifetime in the single-node scenario (21 days), which means that a node under ContikiMAC that forwards a message does not consume much more energy than a node that only sends a message.
Conclusion
We presented the WiSeBat model for accurate energy benchmarking of wireless sensor networks, and a reference implementation of our model in the WSNet simulator. WiSeBat includes an energy consumption model that takes all the components of the sensor node into account and a battery model with non-linear effects. The battery model takes into account the rate capacity effect, the internal resistance, and the voltage variations of the battery, which depend on the residual capacity of the battery. Those characteristics are computed from the data-sheet of the battery cell only.
We demonstrated that the lifetime estimation by WiSeBat is a huge improvement over the default models of WSNet, as it provides 85-95% accuracy in the examined scenarios, with a contained computational overhead (typically, 0.1-34% of the simulation time). Our simulation results advocate that energy should be a first-grade metric when evaluating the efficiency of wireless sensor network protocols.

In this chapter we benchmark six existing energy-centric broadcast protocols that use varying transmission ranges: FLOOD, BIP, LBIP, DLBIP, RBOP and LBOP (see Subsection 7.2.2 for details). These protocols were all evaluated using a specific simplified energy consumption model. Their model defines a function that maps a communication range to the amount of energy needed to transmit a message within this range. Given this energy consumption model, they were evaluated using simulations to compare their relative efficiency. Those simulations all use an ideal MAC layer, where no collision ever occurs. This leaves open the question of how those algorithms perform in a more realistic setting.
First we present the simulation setup. Then, in Section 9.2 we show and discuss the results of our simulation campaign.
Simulation Setup
We use the WSNet simulator [START_REF] Fraboulet | Worldsens: development and prototyping tools for application specific wireless sensors networks[END_REF] to perform our simulation campaign. We deploy 50 sensor nodes that are located uniformly at random in a square-shaped area. The size of the area varies from 300 × 300 m to 800 × 800 m to create various network densities. The protocol stack consists of a broadcast application, a broadcast protocol (whose performance is to be evaluated), a MAC layer (we consider both the 2400 MHz OQPSK 802.15.4 CSMA/CA unslotted [802a] and ContikiMAC [START_REF] Dunkels | The contikimac radio duty cycling protocol[END_REF]), a radio transceiver, and an omnidirectional antenna. For the environment, we use OQPSK modulation and the log-distance pathloss propagation model. The log-distance pathloss propagation model is more realistic than the range propagation model and is simple enough to be easily predictable. This allows the nodes to choose the transmission power according to the desired range they wish to attain.
A simulation setting consists of selecting a broadcasting protocol, a MAC layer, and the size of the area. For each setting, we run 50 simulations with various topologies. All measurements (remaining energy, number of receiving nodes per broadcast, and delay) are averaged over those simulations. Each topology is obtained by randomly deploying the nodes in the square area, and is used for all the simulation settings, so that different protocols are evaluated on the same topology. For the energy model we used the WiSeBat [BDBF + 15] module with a real TMote Sky configuration (see Table 9.2). The current drawn by the CPU under a voltage V_CC is given by the formula I[V_CC] = I[3V] + 210(V_CC - 3). We chose to power the device with a rechargeable Lithium-Ion battery. Here, we used the data-sheet of the GP Battery 1015L08 model, which is designed for small devices.
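The voltage dependence of the CPU draw is thus a simple affine correction of the data-sheet figure measured at 3 V; a minimal helper (the unit of the 210 slope is assumed to match that of the data-sheet value):

    /* CPU current draw under supply voltage vcc (V), from the figure i_3v
     * measured at 3 V: I[VCC] = I[3V] + 210 * (VCC - 3). */
    double cpu_current(double i_3v, double vcc) {
        return i_3v + 210.0 * (vcc - 3.0);
    }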
Two execution scenarios are considered. In the first scenario (single source broadcast), Node 0 broadcasts a message to the other nodes every ten seconds, until the voltage is not sufficient for the node to work correctly (its cut-off voltage is 2.7V, see Table 9.2). In the second scenario (multiple source broadcast), a randomly selected node tries to broadcast its message, until there is no node working correctly and no node can initiate a broadcast.
Simulation Results and Discussion
We first present simulation results related to single source broadcast in Section 9.2.1, then to multiple source broadcast in Section 9.2.2. Our findings are further discussed in Section 9.2.3.
Single Source Broadcast
When the source of the broadcast does not change, it becomes the first node that stops working. This observation holds for every broadcasting protocol and every MAC layer. The reason is that the CPU of the source consumes some energy to initiate the broadcast, and our simulations show that no broadcasting protocol takes this fact into account in its strategy. However, broadcast protocols exhibit various differences depending on the considered performance metric.
Number of Broadcasts
The overall number of broadcasts, which directly depends on the lifetime of the source node, varies depending on the network protocol stack used. Figure 9.1 presents for each MAC layer the number of broadcasts that could be achieved depending on the size of the area, for each considered broadcasting protocol. With 802.15.4 MAC, the number of broadcasts is roughly equivalent for all the protocols, and on average, the number of achieved broadcasts lies between 6300 and 6600, and does not depend on the size of the area. BIP has a small advantage, then FLOOD lasts a little longer than the other distributed broadcasting protocols.
With ContikiMAC, the number of broadcasts varies significantly with the considered broadcasting protocol and, to a lesser extent, with the size of the area. For the BIP and LBIP protocols, the number of broadcasts is around 110,000 for dense networks. This number decreases to around 100,000 for sparse networks. On the contrary, for the FLOOD and DLBIP protocols, the number of broadcasts increases with the size of the area, from around 80,000 to 95,000. The RBOP and LBOP protocols have lower performance, with less than 70,000 broadcasts.
We see that LBIP appears to be the best distributed protocol. Surprisingly (considering its energy unawareness) FLOOD exhibits very good performance, similar to DLBIP, and outperforms both RBOP and LBOP.
Number of Receiving Nodes
Contrary to what we expected, not all the nodes necessarily receive every message. For some broadcasting protocols, the number of receiving nodes can vary a lot. This is due to the fact that when broadcasting a packet, the MAC layer does not request an acknowledgment, so the packet (due to interference) may not be received by its intended destination, and this loss is never notified to the broadcasting protocol. Again, the MAC layer significantly impacts the results (see Figure 9.2). In the sequel, the reachability metric denotes the percentage of nodes that receive the message.

Figure 9.1 - Average number of broadcasts, depending on the size of the area, for each broadcasting protocol.
With ContikiMAC, BIP, DLBIP, and FLOOD protocols offer very good performance regardless of the size of the area. LBIP and RBOP are below, but LBIP performs better as the density of the graph decreases.
With 802.15.4, FLOOD and DLBIP exhibit results that are similar to the previous case. However, BIP has one of the worst performances, together with RBOP and LBOP. Their performance increases with the size of the area, but the performance of BIP with 802.15.4 is far below its performance with ContikiMAC. Again, LBIP reachability increases up to 80% as the density of the network decreases.
In both cases, the improvement observed when the density decreases can be explained by the smaller number of message collisions that go unnoticed.
Amount of Remaining Energy
In previous work, the amount of energy remaining in the network upon simulation termination was analyzed. For our purpose, the energy that remains in the rest of the network after the battery of the source node is depleted is not as relevant as the other metrics we considered. Indeed, for ContikiMAC, the amount of remaining energy is correlated with the two other metrics: the greater the number of broadcasts and the number of receiving nodes, the less energy remains. So, less remaining energy actually implies better performance of the protocol.
With 802.15.4, the amount of remaining energy is similar for all broadcasting protocols, and cannot be used to differentiate their performance.
Multiple Source Broadcast (Gossip)
In this scenario, each broadcast is initiated by a randomly chosen source. Each simulation uses the same random order to make sure the differences between two simulations do not depend on this order. The simulation terminates when no nodes are alive (hence, we are not interested either in the amount of energy remaining in the network at the end of the simulation). In this setting, it is interesting to investigate the number of receiving nodes, and the delay over time.
Number of Receiving Nodes Figure 9.3 (respectively, Figure 9.4) shows the number of nodes that receive the message using ContikiMAC as a MAC layer (respectively, using 802.15.4 MAC), for each considered broadcasting protocol and for various sizes. The x-axis represents the number of broadcasts and is proportional to the time (because one broadcast occurs every 10 seconds). A point of the graph with x-coordinate i is the average number of receiving nodes from the i-th broadcast to the (i + 100)-th broadcast, over 50 simulations considering different topologies. Due to the sliding window used to compute the average, the graph is smoother than if we simply took the average of the i-th broadcast over all the simulations. We observe that the number of receiving nodes at the beginning is consistent with the case of a unique source (see Section 9.2.1). Also, the number of broadcasts until a decrease starts is slightly larger than in the case of a unique source, as the load is more evenly shared among sources.
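The sliding-window averaging described above can be reproduced with the following sketch; the window length of 100 broadcasts matches the i-th to (i+100)-th description in the text, up to the exact handling of the window bounds.

```python
# Sketch: smooth a per-broadcast series (already averaged over the 50 topologies)
# with a sliding window, as done for the curves of Figures 9.3 and 9.4.
def sliding_average(series, window: int = 100):
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Example: a sharp step from 50 receivers down to 10 becomes a gradual slope.
smoothed = sliding_average([50] * 150 + [10] * 150, window=100)
print(smoothed[50], smoothed[100], smoothed[150])  # 50.0 30.0 10.0
```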
With ContikiMAC, BIP is the protocol that keeps 100% reachability for the longest period of time. FLOOD and DLBIP are close runners-up. In dense networks, DLBIP is better because the decrease in the number of receiving nodes is slower. However, in sparse networks, FLOOD keeps 100% reachability for around 10% more broadcasts. It is interesting to see that LBIP has around 90% reachability, but performs 50% more broadcasts compared to BIP and has a really slow decrease. Finally, the RBOP and LBOP protocols are outperformed by the other protocols. We can notice that for sparse networks, the performance of BIP, LBIP, DLBIP, and FLOOD appears to converge to about 130,000 broadcasts.
With 802.15.4 MAC we observe that, for every protocol, the decrease in the number of receiving nodes is faster than with ContikiMAC. Also, all protocols perform almost the same number of broadcasts. This is mainly because 802.15.4 MAC consumes the majority of the available energy, so that the other sources of consumption become less significant. The FLOOD protocol is the only protocol with almost 100% reachability until the end. DLBIP performs really well, with more than 90% reachability. The other protocols perform poorly. In particular, BIP is below LBIP and DLBIP in terms of number of receiving nodes, which was already observed in the case of a single source.
End-to-End Delay The end-to-end delay we consider is the duration between the start of the broadcast and the time of the reception of the last message for this broadcast (if not all nodes receive the broadcast message, the time of reception of the last node that receives the message is used).
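A direct transcription of this delay definition, assuming per-node reception timestamps are available (an illustrative data layout, not the simulator's actual output):

```python
# Sketch: end-to-end delay of one broadcast = time of the last reception minus
# the start time; nodes that never received the message are simply absent from
# `receptions`, so the maximum is taken over actual receivers only.
def end_to_end_delay(start_time: float, receptions: dict) -> float:
    return max(receptions.values()) - start_time if receptions else float("nan")

print(end_to_end_delay(10.0, {1: 10.004, 2: 10.011, 7: 10.009}))  # about 0.011 s
```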
With ContikiMAC (see Figure 9.5), we note that FLOOD, LBIP, and DLBIP have really good performance, with a delay from 500ms for dense networks to 1s for sparse networks. Also, even if BIP has good performance in terms of number of broadcasts and reachability, it has a high delay of 2.5s, regardless of the density of the network. LBOP and RBOP have an even greater delay.
With 802.15.4 (see Figure 9.6), the overall delay is better than with ContikiMAC by two orders of magnitude. BIP, LBIP, DLBIP, and FLOOD have similar performance, with a delay around 10ms. RBOP and LBOP exhibit a delay that is five times greater. The results for the other sizes of area are not presented because they are similar.
In both cases, we can see that the delay decreases when the number of receiving nodes decreases. However, we observe that for FLOOD and DLBIP protocols, the beginning of the decrease is preceded by a small peak, probably because after some nodes have stopped working, the network has a lower density, resulting in a greater delay.
Discussion
Our results show that the impact of the transmission collisions due to wireless interference is not uniform across broadcasting protocols. For instance, we saw that the number of receiving nodes with RBOP is low for dense networks and increases as the density of the network decreases. This implies that interference has a huge impact on RBOP. This impact could be: (i) direct, i.e., the protocol sends only a few messages, so that each node often receives the message from only one neighbor, and if this message is lost, a subset of the network does not receive the message; or (ii) indirect, i.e., the nodes selected for broadcasting the message are chosen in such a way that their transmissions always collide at the receivers, particularly due to the hidden terminal problem (two nodes that are out of range of each other but that have a common destination neighbor). The first point can explain why the FLOOD protocol exhibits such good performance. Indeed, each node transmits the message, which causes more interference, but this also increases the probability that a node receives the message and is later able to retransmit it to the rest of the network.
In more detail, there is a limit to the amount of interference in the network, because if the MAC layer detects that another node is transmitting, it waits until the channel is clear. Beyond some point, increasing the number of nodes that want to transmit does not increase the number of lost packets, so the probability that at least one node receives the message for the first time increases.
The good overall performance of FLOOD confirms the practical relevance of redundancy when broadcasting a message, and shows that this redundancy does not necessarily imply a higher overall energy cost. This is even more pronounced when the source of the broadcast remains the same. However, when the source of the broadcast is randomly chosen, there are some cases where LBIP or DLBIP may be more appropriate. For instance, with ContikiMAC, we see in Figure 9.3 that LBIP performs between two and three times more broadcasts in dense networks compared to FLOOD, albeit with lower reachability. Also, DLBIP performs better than FLOOD in dense networks. So, in general, the overall best candidate is FLOOD, but some specific settings call for the use of LBIP or DLBIP.
Conclusion
We used our model WiSeBat, presented in the previous chapter, to evaluate several energy-centric protocols. We focused on the problem of broadcasting a message in an energy-efficient manner in a wireless sensor network where nodes are able to change their transmission power. We studied six broadcasting protocols that are representative of the current state of the art. We answered the question left open by previous work: how do broadcasting protocols perform with realistic (simulated) devices in a realistic (simulated) environment? We found that the energy consumption does not depend on the protocols as one could expect from previous studies. Indeed, it is not realistic to consider only the energy consumed by the radio during the transmission. In particular, we show that the hierarchy of the broadcasting protocols (based on their performance in an ideal setting) is not preserved in the more realistic setting. Also, we show that the MAC layer does not impact all broadcast protocols in the same way (some protocols perform better with ContikiMAC than with 802.15.4 MAC, while others do not). Quite surprisingly, it turns out that the very simple flooding protocol (used as a theoretical lower bound in previous work) is actually one of the best distributed broadcasting protocols in our realistic environment.
Our conclusion is that focusing on transmission power to improve the energy-efficiency of broadcast protocols for sensor networks is not the right choice. Using an ideal MAC layer is not well-suited to evaluate the efficiency of a protocol, as collisions prevent many protocols from achieving acceptable coverage, especially when the density of the network is high.

This concluding chapter surveys the thesis contributions (see Section 10.1) and discusses questions raised by our work (see Section 10.2).
Overview of Thesis Contributions
Data Aggregation
The second part of the thesis focuses on the problem of aggregating data from all the nodes in the network to a sink node in minimum time. We consider two variants of the problem.
First, in Chapter 5, we suppose that the network is a WSN (possibly dynamic) modeled by a UDG (or a sequence of UDGs). In this case, interference occurs at a node when two of its neighbors transmit a message simultaneously. We investigated the complexity of finding a solution to the minimum data aggregation time problem by a centralized algorithm. We characterized the NP-completeness depending on the maximum node degree ∆ of the graph. In a static WSN, the problem is trivial if ∆ is 2 and is NP-complete otherwise. In a dynamic WSN, the problem is trivial if ∆ is 1 and NP-complete otherwise. We presented a centralized approximation algorithm that constructs a data aggregation schedule. We showed that the duration of the schedule is smaller than the duration of n - 1 independent foremost convergecast trees. In the class of T-bounded recurrent connected graphs, this implies a duration smaller than T(n - 1) time steps. We also conjecture that in the worst case, the data aggregation has a duration of (∆ - 1) log_∆(n(∆ - 1) + 1) - ∆ + 2 independent foremost convergecast trees.
Second, in Chapter 6, we suppose that the network is dynamic and that nodes interact in a pairwise manner. The goal is to find an algorithm that receives the interactions one by one and has to decide whether a node transmits its data to the other or not. We study this problem depending on how the interactions occur and on the initial information given to the nodes. If an adversary chooses the interactions depending on the previous choices of the algorithm (an online adaptive adversary), the problem is not solvable without giving additional knowledge to the nodes. With additional knowledge, the problem may become solvable; for instance, if nodes know the eventual underlying graph, there is an algorithm that terminates.
When the interactions are chosen randomly among all possible interactions (each one with probability 2/(n(n-1))), we give two optimal algorithms: (a) when nodes have no knowledge, our algorithm Gathering terminates in O(n²) interactions in expectation, and (b) when each node knows the time of its next interaction with the sink, our algorithm Waiting Greedy with parameter τ = Θ(n√(n log(n))) terminates in τ interactions w.h.p. Our two algorithms are oblivious and optimal in the class of non-oblivious algorithms (that use the same initial knowledge).
These results were published in the international conferences SSS 2015 [BT15] and IEEE ICDCS 2016 [BMT16], and in the international journal Information and Computation (to appear).
Lifetime Estimation of a Wireless Sensor Network
The third part of the thesis focuses on the lifetime evaluation of WSNs. To do so, we reviewed the existing models for energy consumption and for the battery. We proposed a new model called WiSeBat (Chapter 8) that aims to be accurate while remaining simple. We evaluated its implementation in the WSNet simulator against a real sensor device. The results showed that WiSeBat outperforms the default battery model of WSNet and provides 85-95% accuracy in the examined scenarios, with a contained computational overhead (typically 0.1-34% of the simulation time).
A lot of previous work uses over-simplified energy models to benchmark energy-centric protocols. A typical example is the series of works on energy-efficient broadcast protocols in WSNs. To determine whether these results can be considered reliable for real applications, we used WiSeBat to evaluate six such protocols (see Chapter 9). We showed that the hierarchy (based on their performance in an ideal setting) was not preserved in the more realistic setting. Quite surprisingly, it turned out that the very simple flooding protocol (used as a theoretical lower bound in previous work) was one of the best distributed broadcasting protocols in our realistic environment. Our conclusion is that using a realistic interference model is critical to evaluate the performance of algorithms in WSNs.
These results were published in the international conferences IEEE FDL 2015 [BDBF + 15] and NETYS 2016 [Bramas].
Perspectives
The contributions of this thesis concern two significant problems faced in WSNs: aggregating data from every node to the sink in an efficient way, and evaluating the lifetime of the network. We presented several models that fit each problem, along with theoretical analyses, simulations, and measurements, leading to optimality results, impossibility results, open source tools, and engaging open questions.
On the Practical Side Our energy model WiSeBat has shown good performance on a specific scenario, and our short-term objective is to perform large-scale evaluations, using different platforms and different standards (e.g., the 802.15.6 standard). A robust evaluation will motivate its systematic use for energy benchmarking of new protocol proposals for wireless sensor networks and body area networks. A lot of work proposing new protocols or comparing existing ones would benefit from an accurate energy model. For instance, broadcast protocols for body area networks [Badreddine] and network coding [Rout] currently rely on simple linear battery models. In the longer term, including recent advances related to energy harvesting in our methodology is a challenging open question.
After using WiSeBat to benchmark broadcasting protocols, we concluded that future protocols are bound to integrate more realistic interference and energy consumption models to be relevant in practice. A cross-layer approach with the MAC layer and the broadcast layer helping each other is a possible path for long-term research.
Another interesting open question is the impact of the mobility of sensor nodes on the energy efficiency of broadcast protocols. The six broadcast protocols we considered assume a static topology. Several evaluations of broadcast protocols in WSNs have been carried out with mobile nodes [Williams, MBGW13]. However, the protocol schemes and their evaluation are entirely different. In the work of S. Medetov et al. [MBGW13], the remaining energy is also considered, but with an ideal battery model and a full-sized computer network stack (an 802.11 MAC layer is assumed). Energy benchmarking of those mobility-aware broadcast protocols in a realistic setting such as ours is a short-term research objective.
Finally, as an immediate continuation of our work on data aggregation, we plan to investigate how our distributed online data aggregation algorithms perform in real interaction graphs, obtained from human interactions or interactions between nodes on a moving person that changes postures.
On the Theoretical Side In the minimum data aggregation time problem with a global point of view, one can consider a similar problem that searches for a data aggregation schedule that ends at a given time and starts as late as possible (instead of one that starts at time 0). One can show that this problem is equivalent to the one we considered. In this case, one may use latest convergecast trees. Contrary to a foremost convergecast tree, a latest convergecast tree can be defined such that every journey from a leaf to the sink is a latest journey, i.e., a journey that ends at a given time and starts as late as possible. Also, this problem must assume that the time domain is infinite in the negative direction (the time instants live in Z instead of N). For long-term research, we plan to focus on this problem, which leads to interesting new definitions.
Monitoring a patient's vital signs is a particularly interesting application of wireless sensor networks. Many conditions can be detected or treated through close monitoring of the patient, such as cardiovascular diseases, diabetes, hypertension, or asthma. Monitoring may be performed after a major operation, systematically (e.g., to prevent sudden infant death), or to improve the quality of life of patients (e.g., a pump delivering a dose of insulin adapted to the glucose level measured by a sensor).

To monitor the patient's condition, physiological sensors can be placed on or inside the human body, forming a Body Area Network. The sensor data are periodically aggregated towards the coordinator, which analyzes them and can send an alert to the hospital in case of risk. In this context, the network must be reliable, durable, and predictable. At the same time, the sensors that compose it must be small, battery-powered, and have a long lifetime.

Communications in wireless sensor networks are multi-hop. The sensors farthest from the coordinator must send their measurements to intermediate nodes, which relay the messages to the coordinator. The strategy used to collect data towards the coordinator has a significant impact on the energy consumption of the sensors.
A.1 Context of the Thesis
Wireless Sensor Networks (WSN) consist of sensor nodes that communicate via radio waves and are able to collect data, analyze them, and transmit them. These networks have varied applications depending on the area where they are deployed: military or rescue operations in areas that may be inaccessible to humans; healthcare, with sensors deployed on and inside the human body; monitoring, with sensors on the cars of a city or the trees of a forest, etc. The nodes are energy-autonomous, and it is essential to ensure their longevity without delaying data collection.

Performance Evaluation The main characteristic of wireless sensor networks is that the medium propagating the transmission signals is shared. The simplest models represent such networks by a communication graph. Changes in the network topology (due to sensor movement, for example) are then captured by an evolving graph, that is, a sequence of static graphs representing the state of the network at each instant. However, to study the performance of a protocol more precisely, it is important to use signal propagation models that take interference into account.
Energy Consumption Reducing the energy consumption of the protocols used in wireless sensor networks is a major challenge. A sensor consumes energy when it performs actions (a) that it initiates itself, such as sensing the environment or sending a message, (b) that originate from other sensors, such as forwarding a message to the next node, or (c) that are due to an external factor, such as a retransmission after a failed transmission. Each layer of the sensor's communication stack can have a significant influence on each of these aspects and therefore on the sensor's consumption. Since the protocol that aggregates the data of the network towards the coordinator is the one triggered most often, it is essential to minimize its energy impact.
A.2 Model

A.2.1 Physical Layer Modeling
Radio Link Modeling In the absence of interference, the radio range can be defined simply with a threshold. When a node i transmits a message, another node j receives it if the ratio between the received signal power (the product of the transmission power and the attenuation) and the measurement noise is above a given threshold.

To improve the accuracy of the model, the threshold can be replaced by a packet error probability. A packet is then correctly received by the receiver probabilistically, depending on the signal-to-noise ratio and the modulation used.

Interference Modeling When several signals from concurrent transmissions overlap, interference occurs and may prevent the correct reception of messages. To take this interference into account, the signal-to-noise ratio can be replaced by the signal-to-interference-plus-noise ratio (SINR). Thus, when receiving a signal, the sum of all the other received signals is treated as noise when comparing against the reception threshold.

The work of P. Cardieri on modeling interference in wireless ad hoc networks provides a detailed treatment of these models.
A.2.2 The Network as a Unit-Disk Graph

When all sensors are assumed to be identical, the radio range can be normalized to 1 and the network can be represented by a unit-disk graph. Each sensor is a disk of diameter 1, and two sensors can communicate if their disks intersect (see Figure A.1).
A.2.3 Model for Dynamic Networks

The communication links of a dynamic network may appear or disappear over time without this being considered a fault. We choose to model a dynamic sensor network by an evolving graph, that is, a sequence of static graphs. Each graph of the sequence represents the communication graph at a given instant. Formally, an evolving graph G is a pair (V, {E_i}_{i∈N}) where E_i is the set of communication links at time i. In the case of a wireless sensor network, each graph (V, E_i) is a disk-intersection graph. The maximum degree of an evolving graph is the maximum degree over all graphs (V, E_i). In an evolving graph, an edge is a pair (a, t) where t is a time and a an edge of E_t. A path that evolves over time, called a journey, is then a sequence of edges of the form ((a_1, t_1), (a_2, t_2), ..., (a_l, t_l)), which starts at time departure(J) = t_1 and ends at time arrival(J) = t_l + 1.
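The evolving-graph vocabulary above can be made concrete with the following sketch; the type names and the validity check are illustrative, and the chaining of consecutive edges through a common endpoint is not verified here.

```python
# Sketch: an evolving graph as a list of edge sets (one per time step), and a
# journey as a list of (edge, time) pairs.
from typing import FrozenSet, List, Sequence, Set, Tuple

Edge = FrozenSet[int]                 # undirected edge {u, v}
Snapshots = Sequence[Set[Edge]]       # snapshots[t] = E_t
Journey = List[Tuple[Edge, int]]      # ((a_1, t_1), ..., (a_l, t_l))

def departure(j: Journey) -> int:
    return j[0][1]

def arrival(j: Journey) -> int:
    return j[-1][1] + 1

def edges_exist(j: Journey, snapshots: Snapshots) -> bool:
    """Each edge of the journey must be present in the snapshot of its time step."""
    return all(a in snapshots[t] for a, t in j)

# Example: a journey 0 -> 1 -> 2 using edge {0,1} at time 0 and edge {1,2} at time 2.
E = [{frozenset({0, 1})}, set(), {frozenset({1, 2})}]
J = [(frozenset({0, 1}), 0), (frozenset({1, 2}), 2)]
print(departure(J), arrival(J), edges_exist(J, E))  # 0 3 True
```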
A.3 The Data Aggregation Problem

In the second part of this thesis, we address the data aggregation problem. In a network where each node initially holds a data item, data aggregation consists in gathering all these data at the coordinator node. To save energy, we assume that data can be aggregated, i.e., that there exists a function taking two data items as input and returning a single data item that is assumed to contain the relevant information of both inputs (think, for example, of the minimum or maximum functions).
The result of aggregating two data items is then treated as a single data item. Thus, a node may wait to receive the data of several neighbors before deciding, in turn, to send the aggregate of all the received data towards the coordinator in a single transmission.

In this way, n transmissions are sufficient to collect the data of n sensors at a coordinator node (against Ω(n²) without aggregation, in the worst case). The data aggregation problem consists in finding a schedule of the communications in which each node transmits exactly once, allowing the data of all the sensors of the network to be aggregated at a coordinator node in minimum time.

We study this problem in two settings. The first is wireless sensor networks (static or dynamic) where collisions due to interference are not handled by the MAC layer; the corresponding results are detailed in Section A.3.1. The second setting concerns distributed, online solutions of the problem in arbitrary dynamic networks; the corresponding results are detailed in Section A.3.2.

In Wireless Sensor Networks To save energy in a wireless sensor network, it is also important to prevent collisions due to interference. To this end, we assume that when two sensors transmit simultaneously, their common neighbors receive no data because of interference. In this context, the schedule performing the aggregation must ensure that the data reach the coordinator without interference.

Online Aggregation in an Arbitrary Dynamic Network In the case of an arbitrary dynamic network, we look for an online solution to the data aggregation problem. That is, we want to build an algorithm that receives the sequence of graphs (representing the evolution of the network) as a stream and must make its decisions on the fly. For simplicity, we assume that interactions always occur between two nodes. At each instant, during an interaction between two nodes, the algorithm must decide whether or not one node transmits its data to the other. The goal is still to minimize the time needed to aggregate all the data at a coordinator node. Note that in this model we assume that collisions are handled by the MAC layer.
A.3.1 In Wireless Sensor Networks

At each instant, a node can transmit or receive, but not both at the same time. When two sensors transmit simultaneously, their common neighbors receive no data. We say that the data are aggregated at time t from a set of nodes A to a set of nodes B ⊂ A if the data of the nodes of A\B, transmitted simultaneously at time t, are correctly received by nodes of B (i.e., without interference, see Figure A.2c); that is, each node of A\B transmits its data to a node of B that receives it without collision.

An instance of the Minimum Data Aggregation Time (MDAT) problem is a triple (G, s, t) where G = (V, {E_i}_{i∈N}) is an evolving graph, s ∈ V a node called the sink, and t a time. A solution of the instance MDAT(G, s, t) is a data aggregation from V to s starting at t and of minimum duration. The minimum aggregation duration is denoted MDAT_Opt(G, s, t). In the static case, t may be omitted.

Related work. In the static case, the minimum data aggregation time (MDAT) problem has been widely studied. A similar version was presented by Annamalai et al., in which nodes may choose among a finite number of channels to avoid collisions. Then, Chen et al. gave a formal model for the study of the MDAT problem and showed that the problem is NP-complete even if the graph has degree at most four. In the same article, they gave a lower bound and an upper bound on the minimum aggregation duration, and proposed the first approximation algorithm. Finally, numerous papers have presented increasingly efficient centralized or distributed approximation algorithms exploiting the geometric nature of disk-intersection graphs, improving at the same time the upper bound on the duration sufficient to aggregate the data of the network. However, none of these studies generalizes its approach to evolving graphs. To the best of our knowledge, no study of the data aggregation problem taking collisions into account had been carried out in evolving graphs.
Summary of Contributions
When the maximum degree of a static graph G is at most 2, the problem can be solved in polynomial time: depending on the structure of G, either MDAT_Opt(G, s) = ε + 1 or MDAT_Opt(G, s) = ε, where ε denotes the eccentricity of the sink s.

Thus, the following result is optimal with respect to the maximum degree of the graph.
Theorem. The minimum data aggregation time problem is NP-complete in dynamic networks, even when the maximum degree of the evolving graph is 2.

Proof sketch (reduction). Given a formula φ with n variables and m clauses, we construct an evolving graph with node set V = {s} ∪ ⋃_{1≤i≤n}{v_i, v̄_i} ∪ ⋃_{1≤j≤m}{c_j, c̄_j}. Let t_f = 3n + m. The time interval [1, t_f] is split into three periods T_1, T_2, and T_3 (Figure A.3 shows the configurations during T_1, T_2, and T_3). During T_1 = [1, 2n], for every i ∈ [1, n]: E_{2i-1} = {(v_i, s), (c_i, c̄_i)} and E_{2i} = {(v̄_i, s)}. During T_2 = [2n+1, 2n+m], the clause nodes are connected to the nodes of the literals they contain (Figure A.3b). During T_3 = [2n+m+1, t_f], for every i ∈ [1, n-1]: E_{2n+m+i} = {(v_i, v_{i+1}), (v_i, v̄_{i+1}), (v̄_i, v_{i+1}), (v̄_i, v̄_{i+1})}, and E_{3n+m} = {(v_n, s), (v̄_n, s)}.

During T_3, a variable or its negation (exclusively) can forward its data to the next literal, and so on up to the sink node, so that the set of literals that transmit during this period, when assigned the value true, is seen as an interpretation of the formula φ. One can show that this interpretation satisfies φ if and only if the duration of the data aggregation of G towards s starting at time 1 is t_f.
A.3.1.2 Lower and Upper Bounds
Among all journeys starting after time t, a journey J is said to be foremost if it minimizes arrival(J) - t. In the static case, a foremost journey coincides with the notion of shortest path.

The eccentricity of the sink node s can be defined as the common height of the shortest-path trees rooted at s. It is a lower bound in the static case [Chen], because the shortest-path trees define as many possible ways of aggregating the data when collisions are not taken into account. We therefore define a foremost convergecast tree (FCT) rooted at s with start time t as a tree rooted at s in which every branch is a foremost journey from the leaf to s starting after t. Just like a shortest-path tree in the static case, an FCT can serve as a basis for defining a data aggregation schedule in an evolving graph. Thus, the common duration of the FCTs rooted at s with start time t, denoted FCTD(G, s, t), is a lower bound on MDAT_Opt(G, s, t).

In the static case, a shortest-path tree remains valid at all times. This makes it possible, in order to resolve a collision, to delay a transmission by one time unit. At each node of a shortest-path tree there are at most ∆ - 1 collisions, where ∆ is the maximum degree of the graph. Hence (∆ - 1)ε is an upper bound on MDAT_Opt(G, s) [Chen]. In the dynamic case, an FCT is only valid for a given time, and nothing guarantees that a transmission can be delayed by one time unit, since the communication link may disappear. In fact, the existence of an FCT rooted at s does not even guarantee the existence of a solution to the MDAT problem.

To find a sufficient condition for the existence of a solution, observe that an FCT at least allows one given node to transmit its data all the way to the sink. A sufficient condition for the existence of a solution to the MDAT problem is therefore the existence of (n - 1) consecutive FCTs, i.e., a sequence of FCTs such that the end of one FCT precedes the start of the next. This observation also ensures that the duration of (n - 1) consecutive FCTs with start time t, denoted FCTD^{n-1}(G, s, t), is an upper bound on MDAT_Opt(G, s, t). This upper bound is reached by the graph G(V, {V × V}_{i∈N}) of degree n - 1 (and it is not reached by any graph of maximum degree n - 2 or less).
Theorem A.3
Let G be an evolving graph, s a node of G, and t a time. Then:

FCTD^{n-1}(G, s, t) ≥ MDAT_Opt(G, s, t) ≥ FCTD(G, s, t)
A.3.1.3 Conclusion
We studied the data aggregation problem in evolving graphs, which model sensor networks whose topology evolves over time. We gave optimal conditions on the maximum degree of the graph for the problem to be NP-complete. Moreover, we gave the first bounds on the data aggregation duration. The lower bound holds regardless of the maximum degree of the graph. However, the upper bound is not the smallest upper bound for graphs of maximum degree n - 2 or less. Future work includes finding an upper bound that remains optimal regardless of the maximum degree of the graph.
A.3.2 Online Aggregation in an Arbitrary Dynamic Network
In Chapter 6, we study the distributed data aggregation problem when the network is dynamic, i.e., when the set of links allowing data transmission between nodes varies over time. Our model relies on evolving graphs to represent the evolution of the network. More precisely, the evolution of a network topology can be modeled by a sequence of static graphs, where each graph represents the state of the network at a given instant. In our model, where nodes interact in a pairwise manner, an adversary chooses, in addition to the sequence of static graphs, the order in which the interactions of each static graph occur. We therefore directly assume that each graph of the sequence contains a single link, i.e., the dynamic network is modeled by a sequence of pairwise interactions representing the interactions between nodes over time.

Related work. Existing results on data aggregation can be classified into two groups, depending on whether or not they assume that transmission collisions are handled by the MAC layer. When collisions are not handled by the MAC layer, the problem is NP-complete [Bramas], even in static (resp. dynamic) sensor networks of maximum degree 3 (resp. 2). In static sensor networks, a large body of work has led to efficient distributed approximation algorithms. However, the only existing approximation algorithm for dynamic networks is centralized and assumes complete knowledge of the future evolution of the network.

When collisions are handled by the MAC layer, the most relevant work is that of Cornejo et al., in which a set of nodes must aggregate tokens. No node plays the role of a sink, and the goal is to minimize the number of nodes holding at least one token at the end of the allotted time. A solution in which one particular node collects all the tokens would solve the problem we consider here.

Model. A dynamic network is modeled by a pair (V, I) where V is the set of nodes and I = (I_t)_{t∈N} is the sequence of interactions, each involving a pair of nodes. In the following, n denotes the number of nodes and s ∈ V the sink node.

We assume that nodes have unique identifiers and unlimited memory and computational power. However, we also consider nodes without persistent memory. Initially, each node of V holds a data item. During an interaction I_t = {u, v}, if both u and v hold data, one of the nodes may transmit its data to the other, which aggregates it with its own data. After transmitting its data, a node can no longer receive nor transmit again.

A distributed online data aggregation algorithm (DODA) is an algorithm that, given an interaction I_t = {u, v} and its occurrence time t, returns u, v, or ⊥. If both nodes u and v hold data before the interaction and the algorithm returns a node (u or v), then that node aggregates the data of the other node. If the algorithm returns ⊥, no data is transmitted. During its execution, a DODA may use and modify its internal memory, and may use information attached to the nodes. By default, a node u knows its identifier u.ID and whether it is the sink, u.isSink. Some algorithms require additional information (such as the sequence of future interactions involving the node, u.future, or the time of the next interaction between u and the sink, u.meetTime). The set of DODAs using the additional information i_1, i_2, ... is denoted D_ODA(i_1, i_2, ...). The subset of D_ODA(i_1, i_2, ...) consisting of memoryless algorithms (i.e., using no persistent memory) is denoted D∅_ODA(i_1, i_2, ...). We assume that an adversary builds the sequence of interactions. We distinguish several types of adversaries: oblivious adversaries (which build the whole sequence before the execution starts) and randomized adversaries (which choose the interactions at random).

For lack of space, we consider here only interaction sequences I = (I_t)_{t∈N} such that data aggregation remains possible on every suffix (I_t)_{t>k} (at least by a centralized algorithm that knows the future). Such sequences are called D_ODA-recurrent. In this context, we are interested in the conditions under which there exists a DODA that terminates on every D_ODA-recurrent sequence. Some proofs are available in the associated technical report.
A.3.2.1 Against an Oblivious Adversary
An oblivious adversary knows the code of the algorithm but must build the sequence of interactions without knowing the (possibly probabilistic) choices made by the algorithm. This amounts to generating the entire sequence of interactions before the execution even starts. Against a deterministic algorithm, this adversary is equivalent to an adversary that adapts to the choices of the algorithm (those choices can be predicted before the execution starts). Against a randomized algorithm, the sequence of random numbers is not known to the adversary, so the algorithm's choices become unpredictable. It is easy to find an adversary that prevents any deterministic algorithm from aggregating the data of the network in finite time. Our first theorem generalizes this impossibility to every randomized algorithm A.

Proof sketch. Consider an enumeration u_1, u_2, ... of the nodes of V \ {s} such that u_i = u_j whenever i ≡ j [n - 1], and let I^∞ be defined by I^∞_i = {u_i, s} for every i ∈ N. Let I^l be the prefix of length l ∈ N of the sequence I^∞. For every l ∈ N, the adversary can compute the probability P_l that no node transmits its data during the execution of A on I^l. The sequence (P_l)_{l>0} is non-increasing and bounded from below, hence it converges to a limit P ≥ 0. For a given l, if P_l ≥ 1/n, then one can show that at least two nodes have a probability of not transmitting (during the execution of A on I^l) greater than n^{-1/(n-2)} = 1 - O(1/n).

Therefore, if P ≥ 1/n, then there exists a node that never transmits, and the execution of A on I^∞ does not terminate, with high probability. Otherwise, let l_0 ∈ N be the smallest index such that P_{l_0} < 1/n. With high probability, at least one node transmits during the execution of A on I^{l_0}. Moreover, P_{l_0-1} ≥ 1/n, so two nodes u_d and u_{d'} have not transmitted during the execution of A on I^{l_0-1}, with probability greater than n^{-1/(n-2)} (if l_0 = 0, one can choose {u_d, u_{d'}} = {u_1, u_2}). We have u_d ≠ u_{l_0} or u_{d'} ≠ u_{l_0}; without loss of generality, assume u_d ≠ u_{l_0}. Then the probability that u_d transmits during the execution of A on I^{l_0-1} is the same as on I^{l_0}.

Let I' be the finite sequence of interactions ({u_d, u_{d+1}}, {u_{d+1}, u_{d+2}}, ..., {u_{d-2}, u_{d-1}}, {u_{d-1}, s}). This sequence is built so that u_d must send its data to s along a path that contains all the other nodes. We then build the sequence of interactions I as the sequence I^{l_0} followed by I' repeated infinitely often. After the execution of A on I^{l_0}, with high probability, u_d still holds its data and at least one other node has already transmitted its data. Hence, during the execution of A on I, the data of u_d can no longer reach s (the path its data must follow contains all the other nodes, including at least one that has already transmitted and can neither receive nor transmit again). The execution of A on I does not terminate. Moreover, since the sequence I' suffices to aggregate all the data of V, I is D_ODA-recurrent.
A.3.2.2 Against a Randomized Adversary
We now consider an adversary that chooses the interactions at random, uniformly among all possible interactions. Each interaction I = {u, v}, u, v ∈ V, has probability 2/(n(n-1)) of being chosen by the adversary. Since the occurrence of an interaction does not depend on the previous interactions, the two algorithms we propose in this section are memoryless, i.e., their output depends only on the interaction and on the information carried by the two nodes involved. Moreover, we assume that the nodes given to the algorithm are ordered by identifier and that both hold data (otherwise the algorithm returns ⊥).

- Gathering (GA ∈ D∅_ODA): a node transmits its data when it interacts with s or with any other node holding data. GA maps (u_1, u_2, t) to the node u_i if u_i.isSink, and to u_1 otherwise.
- Waiting Greedy with parameter τ ∈ N (WG_τ ∈ D∅_ODA(meetTime)): the node with the larger meetTime transmits if its meetTime is larger than τ:

WG_τ(u_1, u_2, t) = u_1 if m_1 < m_2 and τ < m_2; u_2 if m_1 > m_2 and τ < m_1; ⊥ otherwise, where m_1 = u_1.meetTime(t) and m_2 = u_2.meetTime(t).
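A sketch of the two decision rules above. Following the model, the returned node is the one that aggregates (receives) the other's data, and None stands for ⊥; the attribute names (is_sink, meet_time) are illustrative.

```python
# Sketch of the GA and WG_tau decision rules. u1 and u2 are the two interacting
# nodes (ordered by identifier), both assumed to still hold data.

def gathering(u1, u2, t):
    """GA: the sink aggregates if present; otherwise u1 aggregates u2's data."""
    if u1.is_sink:
        return u1
    if u2.is_sink:
        return u2
    return u1

def waiting_greedy(u1, u2, t, tau):
    """WG_tau: the node meeting the sink sooner aggregates, provided the other
    node's next meeting with the sink happens after time tau; otherwise nobody
    transmits (bottom)."""
    m1, m2 = u1.meet_time(t), u2.meet_time(t)
    if m1 < m2 and tau < m2:
        return u1
    if m1 > m2 and tau < m1:
        return u2
    return None
```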
Lower Bounds. We study the minimum data aggregation time that can be achieved. Without additional knowledge, an algorithm is limited by the time needed to aggregate the data of the last node (other than s): when a single node u ≠ s still holds data, the probability that u can transmit its data equals the probability of occurrence of the interaction {u, s}, i.e., 2/(n(n-1)), hence Ω(n²) interactions are needed in expectation. For algorithms using additional information, we lower-bound the number of interactions by the number of interactions required by a centralized algorithm that knows the whole interaction sequence. To do so, observe that data aggregation corresponds to a broadcast with time reversed. One can show that a broadcast completes with high probability in Θ(n log(n)) interactions, hence the following theorem.
Theorem A.6
Every algorithm A ∈ D_ODA(interaction sequence) terminates in Θ(n log(n)) interactions with high probability.
Performance of the Algorithms. For the algorithm GA, let X_i be the random variable counting the number of interactions between the (i-1)-th and the i-th transmission. X_i follows a geometric law with parameter (n-i+1)(n-i) / (n(n-1)). The expected execution time of GA is therefore the sum

Σ_{i=1}^{n-1} E(X_i) = n(n-1) Σ_{i=1}^{n-1} 1/(i(i+1)) ∈ O(n²).

GA is thus optimal in D_ODA().

For the performance of WG_τ, let f be a function such that f(n) = ω(1) and f(n) = o(n).

Theorem A.7. The algorithm WG_τ with τ = nf(n) + n² log(n)/f(n) terminates in τ interactions with high probability.

The parameter τ = 2n^{3/2}√(log(n)), obtained by taking the function f: n ↦ √(n log(n)) that minimizes the value of nf(n) + n² log(n)/f(n), therefore appears to be optimal for the algorithm WG_τ. One can show that this is indeed the case, and even that the corresponding algorithm WG_τ, with τ = 2n^{3/2}√(log(n)), is asymptotically optimal among all algorithms in D_ODA(meetTime) (even those with memory).
A.3.2.3 Conclusion
We presented a new model for the study of distributed data aggregation in dynamic networks. After presenting the impossibility result in the general case, against an oblivious adversary, we gave two optimal algorithms against a randomized adversary: the first works without any additional knowledge, the second uses the meetTime information (the time of the next meeting with the sink).
A.4 Estimating Battery Lifetime
Many applications of sensor networks, in particular in the medical domain, require devices able to operate over long periods, typically from several days to several years. However, since the size of the sensors is limited, it is crucial to faithfully simulate the battery and the consumption of the components in order to choose the parameters (in particular, the ratio between awake time and sleep time) that meet the application's requirements on the network lifetime.

In the third part of this thesis, we present a new model for evaluating the lifetime of sensor nodes in wireless sensor networks (Section A.4.1). We then use this model to carry out a comparative study of several energy-efficient broadcast protocols (Section A.4.2).
A.4.1 WiSeBat: A Model for Realistic Battery Lifetime Evaluation
Related Work Even though the definition of the lifetime of a network depends on the application [Dietrich], it is determined from the lifetime of each individual sensor. Estimating the lifetime of a device relies both on the battery model and on the consumption model [DDV + 14]. In a simplified way, popular network simulators such as NS2/3 or WSNET only take into account the consumption of the radio, coupled with a linear battery model. This amounts to multiplying the number of transmitted and received bits by the consumption of the radio in TX and RX modes, respectively. This very simple model, used by default, is however not realistic at all, as we show below. Other solutions, such as SENSE or PowerTOSSIMz, take every component of the device into account, coupled with a non-linear battery model. Even though this approach may seem more realistic, it does not account for the variations of the battery voltage, which are not linear either. This leads to inaccuracies, in particular when determining the moment at which a node stops working.

Summary of Contributions We adapt an existing battery model to wireless sensor networks. We couple it with a consumption model that takes every component of the sensor into account, forming the complete module we call WiSeBat, whose initial implementation is integrated into the WSNET simulator. Finally, we validate it through measurements on real sensors, which show that the WiSeBat module (referring to both the battery and the consumption model) outperforms the default model of WSNET.
A.4.1.1 The WiSeBat Module
Consumption Model In wireless sensor networks, the radio, which is not constantly on, is not necessarily the component that drains the battery the most. This is why WiSeBat offers the possibility, at simulation initialization, to register a set of components X (for example: the sensors, the micro-controller, the LEDs, etc.). Each component x ∈ X is associated with a function giving its consumption i_x(m) in mA as a function of its operating mode m. During the simulation, the application layer can update the operating mode of each registered component (the radio mode being updated directly by the MAC layer). When coupled with a battery with linear behavior, the amount of energy consumed by the components, in mAh, is given by the following formula (where M_x is the set of possible modes of component x and T_x(m) is the time, in hours, spent in mode m):
C = Σ_{x∈X} Σ_{m∈M_x} i_x(m) T_x(m)
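A sketch of this linear accounting, with illustrative per-mode currents and durations; the real values come from the component data-sheets registered with WiSeBat, not from the numbers below.

```python
# Sketch: total charge (mAh) drawn by a set of components, each spending
# T_x(m) hours in mode m at a current draw of i_x(m) mA.
def consumed_charge_mAh(draw_mA: dict, hours: dict) -> float:
    return sum(draw_mA[x][m] * t for x, modes in hours.items() for m, t in modes.items())

# Illustrative numbers only (not taken from any data-sheet).
draw = {"radio": {"tx": 17.4, "sleep": 0.02}, "mcu": {"active": 1.8, "lpm": 0.0545}}
hours = {"radio": {"tx": 0.02, "sleep": 0.98}, "mcu": {"active": 0.05, "lpm": 0.95}}
print(round(consumed_charge_mAh(draw, hours), 3))  # charge drawn over one hour
```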
Battery Model The first non-linear battery behavior taken into account by WiSeBat is that when the instantaneous current increases, the battery behaves as if its total capacity decreased. Indeed, from the manufacturer's data one can extract a decreasing function C_eq: i ↦ C_eq(i) such that if the current is constant and equal to i, then the measured capacity is C_eq(i). The nominal capacity C_nominal of the battery is in fact the limit of this function at 0 (it is usually measured for a small current called the nominal current). This behavior is captured using the definition, given by Dron et al. [DDV + 14], of the equivalent current i_eq(t) associated with the total current i(t) supplied by the battery at time t:

i_eq(i(t)) = (C_nominal / C_eq(i(t))) i(t)    (A.1)

Indeed, if the current drawn by the components is constant and equal to i, the battery is depleted (i.e., unable to supply current) after a time T such that i × T = C_eq(i).
The second non-linear battery behavior modeled by WiSeBat is the variation of the voltage during discharge, as a function of the supplied current. From the manufacturer's data one can derive the functions E_mf and R, which give the electromotive force in Volts and the internal resistance in Ohms of the battery as a function of its residual capacity. If at time t the battery has residual capacity C(t) and supplies a current i(t), then the voltage V(t) at its terminals is given by:

V(t) = E_mf(C(t)) - i(t) R(C(t))

This voltage affects the components of a sensor in two ways. First, the consumption of a component depends on the voltage at its terminals (i.e., for x ∈ X, i_x actually depends on both the mode of x and the voltage). Second, each component requires a minimum voltage to operate, called the cut-off voltage; in other words, a sensor stops working before the battery is fully discharged.
Using the preceding equations, the WiSeBat module updates the values of i(t), C(t), and V(t) in a discrete fashion at instants t_1, t_2, ..., relying on the following relations (where, for a component x, m_x(t) is its mode at time t and i_x is its consumption as a function of the voltage and the mode):

C(t_i) = C(t_{i-1}) - i_eq(i(t_{i-1})) (t_i - t_{i-1})    (A.2)

V(t_i) = E_mf(C(t_i)) - i(t_i) R(C(t_i))
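A sketch of one step of this discrete update. C_eq, E_mf, and R are placeholder callables standing for the curves extracted from the battery data-sheet, the current is assumed to be given in mA with R in ohms (hence the /1000 conversion), and the feedback of the voltage on the component currents i_x is left out for brevity.

```python
# Sketch: one WiSeBat-style update step, following (A.1), (A.2) and the voltage
# relation above. Capacities in mAh, currents in mA, durations in hours.
def update_step(c_prev, i_prev, i_now, dt_h, c_nominal, c_eq, e_mf, r_int):
    i_eq = (c_nominal / c_eq(i_prev)) * i_prev              # equivalent current (A.1)
    c_now = c_prev - i_eq * dt_h                            # residual capacity (A.2)
    v_now = e_mf(c_now) - (i_now / 1000.0) * r_int(c_now)   # terminal voltage
    return c_now, v_now

# Illustrative placeholder curves (NOT the GP 1015L08 data-sheet).
c_now, v_now = update_step(
    c_prev=8.0, i_prev=5.0, i_now=5.0, dt_h=0.01, c_nominal=10.0,
    c_eq=lambda i: 10.0 - 0.1 * i,            # capacity shrinks as the drawn current grows
    e_mf=lambda c: 2.8 + 0.05 * c,            # electromotive force vs residual capacity
    r_int=lambda c: 0.5 + 0.05 * (10.0 - c),  # internal resistance grows as it empties
)
print(round(c_now, 4), round(v_now, 3))
```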
Experimental validation. The components of the sensor and their consumption are listed in Table A.4. The results shown in Figure A.5 compare the simulated lifetime of the sensor (as given by WiSeBat and by the default WSNET model, respectively) to the real sensor, in two scenarios (one transmission every second and one transmission every ten seconds). The default WSNET model, which uses a battery with linear behavior and only accounts for the radio, overestimates the lifetime of the sensor by more than 2600%. Yet this model is widely used in the literature to evaluate the energy performance of network protocols. Using a battery model with non-linear behavior significantly improves the estimates, as the simulated lifetime is then only 7 times higher than the lifetime measured on the real sensor. Finally, taking all the components into account (which corresponds to the complete WiSeBat module) leads to an underestimation of the lifetime with an error between 4% and 14%. This is an important point for the actual deployment of sensors: an overestimation leads to using less efficient batteries in the sensors, causing a largely premature failure of the network (with respect to the time estimated by simulation). On the contrary, underestimating the lifetime guarantees that the real sensor lasts at least as long as expected. In these simulations, WiSeBat accounted for between 0.1% and 30% of the simulation time, which encourages its systematic use.
A.4.1.3 Conclusion
We propose a module for the WSNET simulator that efficiently estimates the lifetime of the nodes of a wireless sensor network at a low computational cost. We show that the results obtained reach an accuracy of up to 94%, whereas the default module overestimates the lifetime by more than 2600%. Finally, we were also able to reproduce known results.
A.4.2 Comparative Analysis of Energy-Efficient Broadcast Protocols
In Chapter 9 of this thesis, we evaluate the performance of six broadcast protocols. These protocols assume that the radio is able to vary its transmission power (and hence its transmission radius) in order to save energy when the destination is close. Among the existing protocols, we selected the following:

- FLOOD: the algorithm always chooses the maximum power.

- BIP: a centralized algorithm with very good theoretical performance.

- LBIP: a distributed version of BIP, less efficient in theory.

- DLBIP: a dynamic version of LBIP, intended to increase the overall lifetime of the network.

- RBOP and LBOP: two distributed algorithms with similar behavior, based on local properties of the network, using the RNG (relative neighborhood graph) and the LMST (local minimum spanning tree), respectively.

In previous work, the performance of these protocols was evaluated using an ideal energy model and in an ideal, interference-free environment. However, we have seen that such assumptions are unrealistic and do not allow an effective evaluation of the protocols.
A.4.2.1 Simulation Setup
We ran simulations using the WSNet simulator [Fraboulet]. Each simulation contains 50 nodes deployed at random in a square whose side ranges from 300 to 500 meters. The network stack contains one of the protocols under test at the routing layer, on top of one of the two MAC layers under study (802.15.4 MAC or ContikiMAC) and a 2400 MHz OQPSK radio.
A.4.2.2 Results and Discussion
Two scenarios are considered: a broadcast is performed every 10 seconds by (a) the node with identifier 0 (which does not change during the simulation), or (b) a randomly chosen node.

Single-Source Broadcast Among the observed results, we focus on the average number of nodes that actually receive the messages sent by node 0, as a function of the size of the deployment area, shown in Figure A.6. Contrary to what we expected, few protocols offer good performance. Moreover, the performance depends on the MAC layer used.

With ContikiMAC, BIP, DLBIP, and FLOOD offer good performance regardless of the size of the area. LBIP and RBOP are below, but LBIP improves as the density of the graph decreases. In both cases, the improvement in performance as the network density decreases is explained by a decrease in the number of collisions.

With ContikiMAC, BIP is the protocol that delivers the message to all the nodes for the longest period of time. FLOOD and DLBIP then offer slightly lower but still acceptable performance (DLBIP is better in denser graphs). It is interesting to see that LBIP manages to perform many more broadcasts in dense graphs, at the price of about 10% of the nodes not receiving the messages. Finally, RBOP and LBOP are well below the other protocols. One can observe that in sparse graphs, the performance of BIP, LBIP, DLBIP, and FLOOD seems to converge to about 130,000 broadcasts.
With the 802.15.4 MAC we observe that, for every protocol, the death of the network occurs abruptly. Moreover, the number of broadcasts achieved at that point is similar for all protocols (around 6500 broadcasts). This is due to the fact that the 802.15.4 MAC consumes most of the energy, which makes the savings achieved by the protocols less visible. FLOOD is the only protocol whose broadcasts are received by all nodes, whatever the density of the graph. DLBIP also performs well, and all the other protocols are far behind.
- A direct cause, i.e., the protocol sends few messages and each node has only one neighbour that sends it the message, which increases the risk of loss in case of a collision. Moreover, each collision may isolate an entire subset of nodes from the source.
- An indirect cause, i.e., the way the protocol selects the nodes that retransmit the message can increase the number of collisions, in particular because of the hidden terminal problem.
The first cause may explain the good performance of FLOOD. Indeed, FLOOD does not limit the number of nodes that retransmit the message, which increases the probability that at least one neighbour of each node transmits without collision. Moreover, the increase in the number of transmissions, and hence of interference, does not seem to affect FLOOD. Note that there is in any case a limit to the amount of interference in the network, because the MAC layer delays transmissions when the channel is busy. Thus, increasing the number of transmissions affects the transmission delay but not necessarily the proportion of messages received successfully. If this proportion stabilizes, increasing the number of transmissions increases the probability that at least one message is received without collision.
The remarkable performance of FLOOD confirms the practical value of redundancy, which does not necessarily imply a higher energy consumption.
A.4.2.3 Conclusion
We used our WiSeBat energy model to compare the performance of six representative protocols from the literature. We showed that the performance obtained in a realistic environment is not the expected one. Surprisingly, the hierarchy between the protocols is not respected and varies with the MAC layer used. A notable observation concerns the simplest protocol, FLOOD, which was used as a lower bound in theoretical studies. Indeed, despite its simplicity, it performs very well in all respects.
A.5 Perspectives
The perspectives of this thesis are manifold and concern both the theoretical and the practical part. We have seen that data aggregation is a difficult problem to solve, even when the future evolution of the network is known. However, the networks used in practice are quite different from worst-case networks. In particular, periodicity, as well as the appearance of recurring patterns in the evolution of the network over time, may allow specialized algorithms with good practical performance. The same observation holds for online data aggregation. For instance, it would be interesting to study the performance of our Waiting Greedy algorithm on random geometric graphs. We also believe that the results would be similar even if a constant number of transmissions were allowed. From a practical point of view, our energy model has shown that it can be used to evaluate the performance of protocols under realistic conditions. Its limited cost (it takes very little simulation time) should encourage its systematic use. The short-term perspective is to carry out a more precise measurement campaign using a larger number of sensors.
Using our model together with the MAC layers implemented in body area networks (in particular 802.15.6) also seems to us a logical next step.
This thesis deepened our knowledge of wireless sensor networks, of the difficulty of reducing energy consumption in simple operations such as data aggregation, and of the need to evaluate protocols realistically.
Energy-Efficient Wireless Sensor Networks
Abstract: Wireless sensor networks consist of sensor nodes capable of collecting data, analysing them and transmitting them. These networks have several applications, depending on the area where they are deployed. First, we studied the complexity of the data aggregation problem using a simplified model of a wireless sensor network. In this model, we showed that finding an optimal solution to aggregate the data of an arbitrary wireless sensor network is not feasible in practice, even for a centralized algorithm that knows the future evolution of the network. Furthermore, we studied the resolution of this problem by a distributed algorithm operating in real time. We showed that, in general, the problem has no solution without additional knowledge.
Second, we focused on estimating the network lifetime. Existing simulators often implement overly simplistic consumption and battery models. On the other hand, the more realistic battery models implemented in general-purpose simulators are too complex to be used for sensor networks with many nodes and with simulated durations that can reach months or even years. WiSeBat is a battery and energy-consumption model optimized for sensor networks, implemented in the WSNET simulator. After validating it, we used it to compare the performance of energy-efficient broadcast algorithms.
Keywords: sensor network, data aggregation, lifetime evaluation, dynamic network, broadcast algorithm
Energy-Centric Wireless Sensor Networks
Abstract: A wireless sensor network is an ad-hoc network connecting small devices equipped with sensors. Such networks are self-organized and independent of any infrastructure. The deployment of a WSN is possible in areas inaccessible to humans, or for applications with a long lifetime requirement. Indeed, devices in a wireless sensor network are usually battery-powered, tolerate failure, and may use their own communication protocols, allowing them to optimize the energy consumption. The main application of WSNs is to sense the environment at different locations and aggregate all the data to a specific node that logs it and can send alerts if necessary. This task of data aggregation is performed regularly, making it the most energy-consuming one. As reducing the energy consumed by sensors is the leading challenge to ensure sustainable applications, we tackle in this thesis the problem of aggregating the data of the network efficiently. Then, we study lifetime evaluation techniques and apply them to benchmark existing energy-centric protocols.
Keywords: wireless sensor network, data aggregation, lifetime evaluation, dynamic network, broadcast protocol
[List of figures: captions for Figures 1.1-5.5 (body area network; WSN connected to a smart-phone; chapter dependencies; broadcast and convergecast to the base station; volcano monitoring; Mica mote enclosure; Tmote Sky platform; ContikiOS communication stack; interference between simultaneous transmitters; unit-disk graph; shortest path tree; planar grid and orthogonal embeddings; time-varying graph and its evolving graph; foremost convergecast tree; class inclusions; communication constraints; NP-completeness gadgets; perfect binary FCT; optimal data aggregation schedules) appeared here, duplicating the captions shown with the figures.]
Definition 4.2 (Aggregating Data From a Set to Another) Let A ⊆ V and B ⊆ A. We say that data is aggregated from A to B at time t (see Figure A.2c) if nodes in A \ B transmit their data simultaneously and all the data is received by at least one node in B.
Distributed Online Data Aggregation in an Arbitrary Dynamic Network
In Markovian evolving graphs, Clementi et al. [CMM + 10, CMPS09, CST15] show tight bounds on the time to disseminate a data item. First, in edge-Markovian evolving graphs an upper bound of O(log n / log(1 + np)) time steps holds with high probability (w.h.p.) and is proven to be tight for a wide range of settings. The flooding time in stationary MEGs is studied in [CMPS09]. In the case of geometric-MEG, with R denoting the transmission radius and r denoting the speed, Clementi et al. [CMPS09] showed an upper bound of O(√n/R + log log R) and a lower bound of Ω(√n/(R + r)) time steps. After similar results for specific mobility models [CMS13], Clementi et al. [CST15] provide an upper bound on the flooding time of any MEG.
Following [START_REF] Chen | Minimum data aggregation time problem in wireless sensor networks[END_REF], a variety of papers proposed centralized and distributed approximation algorithms in static WSNs using geometric aspects of the MDAT problem to improve the data aggregation delay. Yu et al. [YLL09] give a distributed algorithm with an upper bound of 24D + 6∆ + 16 (where D is the diameter, and ∆ the maximum degree of the graph). Xu et al. [XLM + 11] and Ren et al. [RGL10] proposed centralized algorithms with upper bounds of 16R + ∆ - 14 and 16R + ∆ - 11, respectively (where R is the radius of the graph). The best known bound is due to Nguyen et al. [NZC11], as they give a centralized algorithm that takes at most 12R + ∆ - 11 time slots to aggregate all data.
Figure 5.5 - Optimal data aggregation in graph G and G'. In G, node 1 transmits at time 0 and in G', node 1 does not transmit at time 0, even though node 1 has the same future in both G and G'.
Figure 5.6 - Optimal data aggregation in graphs G_1 and G_2. In G_1, node 2 transmits at time 0 whereas in G_2, node 1 does.
For a complete survey see for instance previous work by C.Singh et al. [SVT08], B.Musznicki et al. [MZ12], A.K. Dwivedi et al. [DV11] and H.Sundani et al. [SLD
Figure 7.1 - The internal structure of a typical sensor node in the SENSE simulator. From [CBP + 05a]
Figure 7.2 - The internal structure of a typical sensor node in the Castalia simulator. From [START_REF] Boulis | Castalia user manual[END_REF]
Figure 7.3 - Block architecture of WSNet. From [FCF07].
[START_REF] Lefteris M Kirousis | Power consumption in packet radio networks[END_REF].Clementi et al. [CPS00] demonstrated that this problem is NP-hard. Most of these protocols require global knowledge of the entire graph to compute a Minimum Spanning Tree (MST). Recently, localized protocols based on the Relative Neighborhood Graph (RNG)[START_REF] Godfried | The relative neighbourhood graph of a finite planar set[END_REF], and a Local Minimum Spanning Tree (LMST) construction [LHS05] have been proposed (see [GY07] for a survey).
Figure 8.1 - The WiSeBat battery update scheme
Figure 8.6 - Percentage of time spent in the WiSeBat model and on the most time-consuming functions of WiSeBat, in a simulation using the ContikiMAC layer.
Figure 8.7 - Lifetime of a single leaf node with X-MAC, ContikiMAC and 802.15.4 MAC under two different duty-cycled scenarios. 802.15.4 MAC performs better when the duty-cycle is longer.
Figure 8.8 - Lifetime of a router node and a leaf node with ContikiMAC or 802.15.4 MAC and RPL or static routing.
Figure 9.2 - Average number of receiving nodes, depending on the size of the area, for each broadcasting protocol.
Figure 9.3 - Number of receiving nodes over time with ContikiMAC.
Figure 9.4 - Number of receiving nodes over time with 802.15.4 MAC.
Figure 9.5 - End-to-end delay (in ms) over time with ContikiMAC.
Figure 9.6 - End-to-end delay (in ms) over time with 802.15.4 MAC.
Figure A.1 - An example of a unit-disk graph
Figure A.2 - Communication constraints
Figure A.3 - Node configurations (clauses are blue and literals are red).
Theorem A.4 For every randomized algorithm A ∈ D_ODA, there exists a memoryless adversary generating a D_ODA-recurrent sequence of interactions I such that the execution of A on I does not terminate, with high probability (with probability greater than 1 - o(1)).
Proof: Let V = {s, u_0, . . . , u_{n-2}}. In what follows, node indices are taken modulo n - 1, i.e., for every i ∈ N … The expectation of such an event is n(n-1)/2.
Every algorithm A ∈ D_ODA terminates, in expectation, after Ω(n²) interactions.
Figure A.4 - Consumption characteristics of the components
Figure A.5 - Accuracy of the models (WSNET default model, WiSeBat taking only the radio into account, and WiSeBat taking all components into account) in two scenarios.
Multiple-Source Broadcast (Gossip). Among the observed results, we focus on the number of nodes that actually receive the message as a function of time, shown in Figure A.7 and Figure A.8. In these figures, the x-axis represents the number of broadcasts performed and the y-axis the number of nodes that received the message.
Figure A.6 - Average number of nodes receiving the broadcast, as a function of the area size
Figure A.7 - Number of nodes receiving the message over time, with ContikiMAC.
Figure A.8 - Number of nodes receiving the message over time, with 802.15.4 MAC.
Figure 1.3 - Dependencies between chapters of this thesis.
Table 2.1 - Voltage specification of the TMote Sky hardware.
Communication Standards The IEEE 802.15.4 standard[80203] specifies the physical layer and the Media Access Control (MAC) layer for low-rate wireless personal area network. The standard can be extended by developing the upper layer, such as ZigBee [Z + 06] ISA100.11a [ISA], WirelessHART [HAR], etc. The MAC layer defined by the IEEE 802.15.4 standard can be replaced by compatible layers that consume less energy, such as duty cycling MAC protocols
Lemma 5.3 If Algorithm GDAS terminates, the sequence {S_i}_{i ≤ t_f} is a valid DAS.
Proof: If GDAS terminates, then the sequence S satisfies S_0 = V, because each node that has been removed from remainingNodes (line 21) has been added to S_t (line 23). Moreover, for each time t, nodes in S_t \ S_{t+1} (if not empty) have been added to S_t (line 23) after Function canTransmit returned true, thus their data are successfully received by nodes in S_{t+1}. This implies (G, S_t, t) → (G, S_{t+1}, t + 1).
Theorem 5.5 If a graph G is in RC, algorithm GDAS finds a valid dynamic data aggregation schedule such that …
Algorithm GDAS (fragment):
    … t_s do
        S_t ← ∅
        previouslyMarked ← marked
        for v ∈ {N_t(u) | u ∈ previouslyMarked} do
            if canTransmit(v, S_t, marked) ∧ data_{t-1}[v] ∩ remainingNodes = ∅ then
                remainingNodes ← remainingNodes \ {v}
                marked ← marked ∪ {v}
                S_t ← S_t ∪ {v}
        t ← t_f
    while remainingNodes = ∅;
    for t = t_f - 1, . . . , 0 do
        S_t ← S_t ∪ S_{t+1}
    return S
Figure 8.4 - Consumption characteristics of the main components of the actual device prototype:
Component (cut-off voltage)    Mode        Current
Micro controller (1.65 V)      Run         10 mA
                               Sleep       2.4 uA
Radio transceiver (1.8 V)      Tx          7.7 mA
                               Rx          5.7 mA
                               Idle        1.7 mA
                               Sleep       1.1 uA
Pressure sensor (1.71 V)       Init        42 mA
                               Read        5 mA
                               Sleep       0.5 uA
Led                            On          7.6 mA
                               Off         0 mA
Power manager (2.4 V)          Efficiency  95-99%
Table 9.1 summarizes the properties of the topologies we use (averaged over the 50 topologies we constructed for each size).
Size   Density   Diameter   Vertex-Connectivity
300    0.60      2.9        13.3
400    0.40      3.8        6.8
500    0.27      4.7        3.9
600    0.20      5.8        2.2
700    0.15      7.7        1.4
800    0.12      9.5        1.1
Table 9.1 - Average density, diameter, and connectivity of the topologies, depending on the area size.
Table 9.2 - Voltage specification of the TMote Sky hardware
Radio (Chipcon CC2420)            CPU (Texas Instruments MSP430 F1611)
Tx 0 dB        17.4 mA            Run 8 MHz, 3 V (with flash read)   4 mA
Tx -1 dB       16.5 mA            Sleep                              2.6 µA
Tx -3 dB       15.2 mA            Voltage cut-off                    2.7 V
Tx -5 dB       13.9 mA
Tx -7 dB       12.5 mA
Tx -10 dB      11.2 mA
Tx -15 dB      9.9 mA
Tx -25 dB      8.5 mA
Rx             19.7 mA
Idle           365 µA
Sleep          1 µA
Volt. regulator 20 µA
… and of A. Iyer et al. [START_REF] Iyer | What is the right model for wireless channel interference? Wireless Communications[END_REF] show the impact of the interference model on the accuracy of simulations. They conclude that using the SINR model (or a more precise one) is essential for a realistic evaluation of protocols in wireless sensor networks.
Our first contribution is to characterize exactly what makes data aggregation an NP-complete problem. In static graphs, the problem is trivial for graphs of degree at most two, and we show that it remains NP-complete in graphs of degree at most three. Similarly, the problem is trivial in dynamic graphs of degree at most one, and we show that it is NP-complete in graphs of degree at most two. Our second contribution is to give lower and upper bounds on the duration of data aggregation in a dynamic graph.
A.3.1.1 Complexity of Minimum-Time Data Aggregation
In Static Sensor Networks. We improve the result of Chen et al. [START_REF] Chen | Minimum data aggregation time problem in wireless sensor networks[END_REF] to make it more precise and optimal. Our result applies to intersection graphs of disks of radius 1/2 with integer coordinates and of degree at most three (the result of Chen et al. holds in partial grids, a strict superset of the disk intersection graphs we consider, of degree at most four). Moreover, the problem is easy to solve for static graphs of degree at most two. Indeed, writing ε for the eccentricity of the sink node and n for the total number of nodes, if n is odd and ε = (n - 1)/2 (the graph is either a cycle or a line centred at s
Theorem A.1 The MDAT problem is NP-complete in static sensor networks of degree at most three.
In Dynamic Sensor Networks. We use a different approach to prove NP-completeness in evolving graphs (which model dynamic sensor networks) of degree at most two. Again, the case of graphs of degree at most one is trivial, since there are no collisions. The proof is by reduction from the 3-SAT problem. From an instance ϕ of 3-SAT, with n variables v_1, . . . , v_n and m clauses c_1, . . . , c_m, we build the evolving graph G(V, {E_i}_{i∈N}) as follows. The set of nodes consists of a sink node s, the literals of ϕ, and two copies of the clauses of ϕ:
Theorem A.2 The MDAT problem is NP-complete in dynamic sensor networks of degree at most two.
Proof:
… in τ interactions, with high probability.
Proof: Let L be the set of nodes that interact with s between times τ - n² log(n)/f(n) and τ, and let L^c be its complement (we assume s ∈ L). During an interaction before time τ, (i) if both nodes are in L, nothing happens (since their meetTime is smaller than τ); (ii) if both nodes are in L^c, one of the nodes transmits to the other; (iii) otherwise, the node in L^c transmits its data to the node in L. One possible way of aggregating all the data is that every node in L^c interacts at least once with a node in L before time τ; then the nodes of L transmit directly to the sink between τ - n² log(n)/f(n) and τ. One can show that, with high probability, n·f(n) interactions are enough for f(n) nodes to interact with the sink, so L has cardinality f(n) with high probability. Moreover, once L contains at least f(n) nodes, at most n² log(n)/f(n) interactions are needed for every node of L^c to interact at least once with a node of L. Hence, choosing τ = n·f(n) + n² log(n)/f(n), the algorithm terminates before time τ with high probability.
The relations are circularly linked, since the voltage depends on the current and vice versa. This is why several updates must be performed after each mode change, until the values stabilize. In this way, the error between the model and the computed values of C(t), V(t), and i(t) can be kept below an arbitrary value ε > 0.
A.4.1.2 Experimental Validation
To evaluate WiSeBat, we measured the lifetime of a real sensor running a realistic application and architecture, and compared it to the lifetime simulated by WiSeBat in the WSNET simulator. To do so, we implemented the application and the architecture present in the sensor in WSNET.
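To make this fixed-point update concrete, here is a minimal sketch of the idea. It is not WiSeBat code: the function names, the linear discharge and internal-resistance curves, and the example component currents are placeholders chosen purely for illustration.

```python
def stabilize(components, battery_capacity, eps=1e-6, max_iter=100):
    """Iterate the circular voltage/current update until the change drops below eps.

    `components` maps a component name to a function returning the current it
    draws (in A) at a given supply voltage, in its current mode.
    The battery curves below are illustrative placeholders, not WiSeBat's equations.
    """
    def open_circuit_voltage(capacity):
        return 3.0 + 0.6 * min(capacity, 1.0)           # placeholder discharge curve

    def internal_resistance(capacity):
        return 0.5 + 0.2 * (1.0 - min(capacity, 1.0))   # placeholder resistance curve

    v = open_circuit_voltage(battery_capacity)          # initial guess: no load
    i = 0.0
    for _ in range(max_iter):
        i_new = sum(draw(v) for draw in components.values())   # total current at voltage v
        v_new = open_circuit_voltage(battery_capacity) - i_new * internal_resistance(battery_capacity)
        if abs(v_new - v) < eps and abs(i_new - i) < eps:
            break                                        # values have stabilized
        v, i = v_new, i_new
    return v, i

# Example: a radio in Rx mode and a sleeping MCU, modeled as constant current draws.
voltage, current = stabilize(
    {"radio_rx": lambda v: 5.7e-3, "mcu_sleep": lambda v: 2.4e-6},
    battery_capacity=0.8,
)
print(voltage, current)
```

With constant per-component currents the loop converges after a couple of iterations; with voltage-dependent draws it keeps iterating until the mutual dependency between V and i settles below ε, as described above.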
Component (voltage cut-off)    Mode        Current
Micro-controller (1.65 V)      Run         10 mA
                               Sleep       2.4 uA
Radio transceiver (1.8 V)      Tx          7.7 mA
                               Rx          5.7 mA
                               Idle        1.7 mA
                               Sleep       1.1 uA
Pressure sensor (1.71 V)       Init        42 mA
                               Read        5 mA
                               Sleep       0.5 uA
Led                            On          7.6 mA
Power manager (2.4 V)          Efficiency  95-99%
… i(t_{i-1}) R(C(t_i))    (A.3)
i(t_i) = Σ_{x ∈ X} i_x(V(t_i), m_x(t_i))    (A.4)
Table A.1 summarizes the properties of the topologies generated for the simulation (averaged over the 50 topologies generated for each size). The energy model used is WiSeBat [BDBF + 15] with the data of a TMote Sky sensor, whose characteristics are given in Table A.2.
Table A.1 - Average density, diameter, and connectivity, depending on the area size.
Size   Density   Diameter   Connectivity
300 0.60 2.9 13.3
400 0.40 3.8 6.8
500 0.27 4.7 3.9
600 0.20 5.8 2.2
700 0.15 7.7 1.4
800 0.12 9.5 1.1
Table A.2 - Electrical characteristics of the TMote Sky.
With 802.15.4, FLOOD and DLBIP show similar results. However, BIP is one of the worst protocols, together with RBOP and LBOP. Their performance improves at lower densities but remains far below what is observed with ContikiMAC.
CPU
Chipcon CC2420 Texas Instruments
Tx 0dB 17.4 mA MSP430 F1611
Tx -1dB 16.5 mA Run 8MHz 3V
Tx -3dB 15.2 mA (with flash read) 4mA
Tx -5dB 13.9 mA Sleep 2.6µA
Tx -7dB 12.5 mA voltage cut-off 2.7V
Tx -10dB 11.2 mA
Tx -15dB 9.9 mA
Tx -25dB 8.5 mA
Rx 19.7mA
Idle 365µA
Sleep 1µA
Volt. Regulator 20µA
Military or rescue applications in areas that may be inaccessible to humans; health applications with sensors deployed on and inside the human body; monitoring applications with sensors on the cars of a city or the trees of a forest. The nodes are energy-autonomous, and it is essential to ensure their longevity without delaying data collection. The main task performed by wireless sensor networks is to take measurements and send these data to a coordinator node. This aggregation task is performed regularly, which makes it the most energy-consuming one. The in-depth study of the energy consumption of sensors, which is at the heart of this thesis, can take several forms.
One can also consider that the latency is null, so that a message can travel along a full path at a given time
An event A occurs with high probability, when n tends to infinity, if P(A) > 1 - O(1/log(n)).
This model of interactions is close to the one used for population protocols, but our node and adversary models are very different from those used in the population-protocol setting.
Acknowledgments
This PhD thesis wraps up my schooling, so there are many people I want to thank for all their support along this long path. More simply, though, it represents the work I have done in the last three years. That is why I first want to thank the referees, Nicola Santoro and Nathalie Mitton, for accepting to review my thesis and giving detailed feedback about it. I also want to thank the other members of the committee, Franck Petit and Arnaud Casteigts, for their participation, not only in the defense but throughout the thesis. I thank my supervisor Sébastien Tixeuil for his continuous support, his motivation and all the precious advice he gave me. I also thank Maria Potop-Butucaru, who welcomed me when I arrived in the laboratory and was always present during the thesis. The SMART-BAN project was a great opportunity to work with people in different domains and resulted in interesting collaborations, especially with Julien Sarrazin, Solofo Razafimahatratra and Wilfried Dron.
I thank all the NPA team for everything. The permanent researchers such as
With knowledge such as the eventual underlying graph, we can create an algorithm that terminates (but with unbounded cost), or an optimal algorithm if the underlying graph is a tree. When nodes know their entire future, there exists an algorithm with a cost of n.
Against a randomized adversary, we present two optimal algorithms. When nodes have no knowledge, the Gathering algorithm terminates in O(n 2 ) interactions in expectation, which we show is optimal. When each node knows the time of occurrence of their next interaction with the sink, we show that the Waiting Greedy algorithm with parameter τ = Θ(n 3/2 log(n)) terminates in τ interactions with high probability, which we show is optimal.
The results given in this chapter imply that the search for an optimal solution that works in the worst case is not practical. Instead, we saw that data aggregation algorithms have good performance against a randomized adversary. In more detail, this means that when the network evolves randomly (or pseudo-randomly), efficient algorithms should exist.
For the distributed online data aggregation problem, our analysis opens several scientific challenges (i) in the short run:
1. Does allowing nodes to send their data a constant number of times instead of once impact our results?
2. Does there exist a randomized algorithm that terminates against any oblivious adversary?
and (ii) in the long run:
1. What knowledge has a real impact on the lower bounds or algorithm efficiency?
2. Can similar optimal algorithms be obtained with fixed memory or limited computational power? | 298,078 | [
"799328"
] | [
"389034"
] |
01218431 | en | [
"math",
"info"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01218431v2/file/planarhypotrFinal.pdf | Susan A Van Aardt
Alewyn P Burger
Marietjie Frick
The Existence of Planar Hypotraceable Oriented Graphs
Keywords: Hypotraceable, hypohamiltonian, planar, oriented graph
A digraph is traceable if it has a path that visits every vertex. A digraph D is hypotraceable if D is not traceable but D -v is traceable for every vertex v ∈ V (D). It is known that there exists a planar hypotraceable digraph of order n for every n ≥ 7, but no examples of planar hypotraceable oriented graphs (digraphs without 2-cycles) have yet appeared in the literature. We show that there exists a planar hypotraceable oriented graph of order n for every even n ≥ 10, with the possible exception of n = 14.
Introduction and background
We denote the vertex set, the arc set and the order of a digraph D by V (D), A(D) and n(D), respectively. Any (undirected) graph may be viewed as a symmetric digraph (by regarding an edge as being equivalent to two oppositely directed arcs). A vertex v of a digraph is called a sink (source) if it does not have outneighbours (in-neighbours). A digraph that does not contain any pair of oppositely directed arcs is called an oriented graph.
A digraph is hamiltonian if it has a Hamilton cycle, i.e., a cycle that visits every vertex. A digraph D is hypohamiltonian if D is nonhamiltonian and D -v is hamiltonian for every v ∈ V (D).
A digraph is traceable if it has a Hamilton path, i.e., a path that visits every vertex. A digraph D is hypotraceable if D is nontraceable but D -v is traceable for every v ∈ V (D). For undefined concepts we refer the reader to [START_REF] Bang-Jensen | Digraphs: Theory, Algorithms and Applications[END_REF].
Hypotraceability in graph theory has an intriguing history. [START_REF] Gallai | Problem 4[END_REF] asked whether all longest paths in a graph share a common vertex. That was before hypotraceable graphs were discovered. (In a hypotraceable graph of order n the longest paths have n -1 vertices each and they have an empty intersection.) [START_REF] Kapoor | On detours in graphs[END_REF] asked whether hypotraceable graphs exist. Also, [START_REF] Kronk | Does there exist a hypotraceable graph?[END_REF] posed a problem in the American Mathematical Monthly entitled "Does there exist a hypotraceable graph?"
In the discussion of the problem, Kronk states that he "feels strongly" that hypotraceable graphs do not exist. Four years later, [START_REF] Horton | A hypotraceable graph[END_REF] constructed a hypotraceable graph on 40 vertices. [START_REF] Thomassen | Hypohamiltonian and hypotraceable graphs[END_REF] presented a procedure by which any four hypohamiltonian graphs with minimum degree 3 may be combined to produce a hypotraceable graph. This resulted in the construction of a hypotraceable graph of order n for every n ∈ {34, 37, 39, 40} and for all n ≥ 42. A few years later, Thomassen (1976) also provided a hypotraceable graph of order 41. [START_REF]Flip-flops in hypo-hamiltonian graphs[END_REF] raised the problem of the existence of planar hypohamiltonian graphs and [START_REF] Grünbaum | Vertices missed by longest paths or circuits[END_REF] conjectured that such graphs do not exist. However, [START_REF] Thomassen | Hypohamiltonian and hypotraceable graphs[END_REF] constructed a planar hypohamiltonian graph of order 105 and presented a recursive procedure for constructing infinitely many planar hypohamiltonian graphs. Later, planar hypohamiltonian graphs of smaller order were found by [START_REF] Hatzel | Ein planarer hypohamiltonscher graph mit 57 knoten[END_REF] (order 57), [START_REF] Zamfirescu | A planar hypohamiltonian graph with 48 vertices[END_REF] (order 48), [START_REF] Araya | On planar hypohamiltonian graphs[END_REF]Wiener (2011) (order 42), andJooyandeh et al. (2016) (order 40). It was also shown in the last mentioned paper that the construction procedures of Thomassen (1976) yield planar hypohamiltonian graphs of all orders greater than 42, and planar hypotraceable graphs of order 154 and all orders greater than or equal to 156.
The importance of hypotraceable graphs was recognised when [START_REF] Grötschel | On the monotone symmetric travelling salesman problem: Hypohamiltonian/hypotraceable graphs and facets[END_REF] showed that certain classes of hypotraceable graphs induce facets of the monotone symmetric travelling salesman polytope. Since no good (or even nearly good) characterisation of hypotraceable graphs has yet been found, it is unlikely that an explicit characterisation of these polytopes can ever be given. Grötschel and Wakabayashi (1981) also showed that hypotraceable digraphs contribute considerably to the difficulty of the asymmetric traveling salesman problem. [START_REF] Thomassen | Hypohamiltonian graphs and digraphs[END_REF] showed that there exists a planar hypohamiltonian digraph of order n if and only if n ≥ 6. Hypotraceable digraphs are easily obtained from hypohamiltonian digraphs by the following construction of [START_REF] Grötschel | Hypotraceable digraphs[END_REF].
Construction 1 ([START_REF] Grötschel | Hypotraceable digraphs[END_REF]) Let D be a hypohamiltonian digraph of order n and let y ∈ V (D). Now split y into two vertices x and z such that all the out-neighbours of y become out-neighbours of x and all the in-neighbours of y become in-neighbours of z. The result is a hypotraceable digraph of order n + 1. We say that it is obtained from D by splitting the vertex y into a source and a sink. The vertex splitting procedure, applied to the planar hypohamiltonian digraphs constructed by [START_REF] Thomassen | Hypohamiltonian graphs and digraphs[END_REF], yields planar hypotraceable digraphs of every order from 7 upwards. Figures 1 and 2 depict the smallest planar hypohamiltonian digraph (see [START_REF] Thomassen | Hypohamiltonian graphs and digraphs[END_REF]) and the smallest planar hypotraceable digraph (see [START_REF] Grötschel | Hypotraceable digraphs[END_REF]), respectively.
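As an illustration only (this is our own sketch, not code from the paper; the representation of a digraph as a set of arcs and all names are arbitrary), the vertex-splitting step can be expressed as follows.

```python
def split_vertex(arcs, y, source="x", sink="z"):
    """Split vertex y of a digraph (given as a set of arcs (u, v)) into a
    source inheriting y's out-arcs and a sink inheriting y's in-arcs."""
    new_arcs = set()
    for u, v in arcs:
        if u == y and v == y:
            continue                      # a loop at y simply disappears
        if u == y:
            new_arcs.add((source, v))     # out-neighbours of y now leave the source
        elif v == y:
            new_arcs.add((u, sink))       # in-neighbours of y now enter the sink
        else:
            new_arcs.add((u, v))
    return new_arcs

# Tiny usage example on the 3-cycle a -> b -> c -> a, splitting c:
# the result contains ('a', 'b'), ('b', 'z') and ('x', 'a').
print(split_vertex({("a", "b"), ("b", "c"), ("c", "a")}, "c"))
```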
The existence of hypohamiltonian oriented graphs was established by [START_REF] Thomassen | Hypohamiltonian graphs and digraphs[END_REF]. He showed that the Cartesian product C_k × C_{mk-1} of two directed cycles is hypohamiltonian (for suitable k and m). van Aardt et al. (2015) showed, by means of various other constructions, that there exists a hypohamiltonian oriented graph of order n for every n ≥ 9. They also showed with an exhaustive computer search that there are no hypohamiltonian oriented graphs of order less than 9.
The vertex splitting procedure applied to hypohamiltonian oriented graphs yields hypotraceable oriented graphs of every order greater than 9. van Aardt et al. ( 2011) also found a hypotraceable oriented graph of order 8. It is obtained from a hypohamiltonian digraph that is not an oriented graph but has a vertex incident with all its 2-cycles, so splitting that vertex into a source and a sink destroys all the 2-cycles. [START_REF] Frick | Progress on the traceability conjecture[END_REF] proved that there are no hypotraceable oriented graphs of order less than 8, and [START_REF] Burger | Computational results on the traceability of oriented gaphs of small order[END_REF] showed by means of an exhaustive computer search that there does not exist a hypotraceable oriented graph of order 9. Thus there exists a hypotraceable oriented graph of order n if and only if n = 8 or n ≥ 10.
Thomassen (1978) asked whether there exist planar hypohamiltonian oriented graphs. Recently, van Aardt et al. ( 2013) answered this question in the affirmative by constructing a planar hypohamiltonian oriented graph of order 9 + 12k for every k ≥ 0. By adapting this construction, van Aardt et al. (2015) showed that, in fact, there exists a planar hypohamiltonian oriented graph of order 9 + 6k for every k ≥ 0. The next question to ask is whether there exist planar hypotraceable oriented graphs. Note that if any vertex of the hypohamiltonian oriented graph depicted in Figure 3 is split into a source and a sink, the result is nonplanar. In fact, no planar hypotraceable oriented graph is obtained by applying the vertex splitting procedure to any of the known planar hypohamiltonian oriented graphs. However, in the next section we construct, for each k ≥ 1, a planar hypotraceable oriented graph of order 6k+4 having a source and a sink. The smallest one (of order 10) is depicted in Figure 4. We also present a planar hypotraceable oriented graph of order 12 that has a source and a sink. Then, using a method devised by [START_REF] Grötschel | Hypotraceable digraphs[END_REF], we combine pairs of the constructed planar hypotraceable oriented graphs to produce strong (strongly connected) planar hypotraceable oriented graphs of order 6k and 6k + 2 for every k ≥ 3. We conclude that there exists a planar hypotraceable oriented graph of order n for every even n ≥ 10, with the possible exception of n = 14.
Constructions of planar hypotraceable oriented graphs
As in the case of planar hypohamiltonian oriented graphs (see [START_REF] Van Aardt | An infinite family of planar hypohamiltonian oriented graphs[END_REF]), the circulant digraphs with jump set {1, -2} form the basis of our constructions. In general, for an integer n ≥ 3 and a jump set S of nonzero integers, the circulant digraph -→ C n (S) is defined as follows:
V(C_n(S)) = {v_0, v_1, . . . , v_{n-1}},   A(C_n(S)) = {(v_i, v_{i+j}) : 0 ≤ i ≤ n - 1 and j ∈ S},
where indices are taken modulo n. For example, the circulant digraph C_14(1, -2) is depicted in Figure 5. We note that C_n(1, -2) is planar if and only if n = 3 or n is even.
Fig. 5: The circulant digraph C_14(1, -2)
Construction 2 For each integer k ≥ 1, let H 6k+4 be the oriented graph obtained from the circulant digraph -→ C 6k+2 (1, -2) by deleting the arc v 1 v 6k+1 and adding the arc v 6k v 2 , and then adding two new vertices x and z together with the arcs xv 1 , xv 6k+1 , v 1 z, v 3 z, v 6k-1 z.
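The construction is easy to check mechanically for small k. The sketch below is our own illustration (not code from the paper; the arc-set representation and all names are ours): it builds the arc set of H_{6k+4} exactly as described in Construction 2 and verifies by exhaustive search that H_10 is hypotraceable.

```python
def build_H(k):
    """Arc set of H_{6k+4}, following Construction 2 as stated above."""
    n = 6 * k + 2                                   # order of C_{6k+2}(1, -2)
    arcs = set()
    for i in range(n):                              # circulant jumps +1 and -2
        arcs.add((f"v{i}", f"v{(i + 1) % n}"))
        arcs.add((f"v{i}", f"v{(i - 2) % n}"))
    arcs.discard(("v1", f"v{n - 1}"))               # delete the arc v_1 v_{6k+1}
    arcs.add((f"v{n - 2}", "v2"))                   # add the arc v_{6k} v_2
    arcs.update({("x", "v1"), ("x", f"v{n - 1}"),   # x v_1 and x v_{6k+1}
                 ("v1", "z"), ("v3", "z"), (f"v{n - 3}", "z")})  # v_1 z, v_3 z, v_{6k-1} z
    vertices = [f"v{i}" for i in range(n)] + ["x", "z"]
    return vertices, arcs

def traceable(vertices, arcs):
    """True iff the digraph has a Hamilton path (depth-first search)."""
    succ = {v: [w for (u, w) in arcs if u == v] for v in vertices}
    def extend(v, seen):
        if len(seen) == len(vertices):
            return True
        return any(w not in seen and extend(w, seen | {w}) for w in succ[v])
    return any(extend(v, {v}) for v in vertices)

def hypotraceable(vertices, arcs):
    if traceable(vertices, arcs):
        return False
    for v in vertices:                              # every vertex-deleted subdigraph must be traceable
        rest = [u for u in vertices if u != v]
        sub = {(a, b) for (a, b) in arcs if v not in (a, b)}
        if not traceable(rest, sub):
            return False
    return True

V, A = build_H(1)                                   # H_10, the smallest member of the family
print(hypotraceable(V, A))                          # expected: True, by Theorem 1 below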
The oriented graphs H 10 and H 16 are depicted in Figure 4 and Figure 6, respectively. We shall show that H 6k+4 is a planar hypotraceable oriented graph for every k ≥ 1. First, we present some notation and general observations concerning paths in
C_n(1, -2). Consider any pair of distinct vertices v_i, v_j in C_n(1, -2). We denote the v_i-v_j path v_i v_{i+1} . . . v_j by v_i →C v_j. We note that v_3 v_1 v_2 v_0 is a v_3-v_0 path of length three that uses jumps -2, 1, -2, with the consecutive vertex set {v_0, v_1, v_2, v_3}.
We can create a longer path with a consecutive vertex set by repeating this jumping pattern. In general, for any positive integer t < n/3, there is a v_{i+3t}-v_i path in C_n(1, -2) with vertex set {v_i, v_{i+1}, . . . , v_{i+3t}}, namely the path v_{i+3t} v_{i+3t-2} v_{i+3t-1} v_{i+3(t-1)} . . . v_{i+3} v_{i+1} v_{i+2} v_i.
We denote this path by v_{i+3t} ←C v_i.
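As a concrete illustration (this particular instance is ours, not taken from the paper): in C_14(1, -2), taking i = 0 and t = 3 gives the path v_9 v_7 v_8 v_6 v_4 v_5 v_3 v_1 v_2 v_0, which repeats the jump pattern (-2, +1, -2) three times and has vertex set {v_0, v_1, . . . , v_9}.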
Observation 1 Let v i , v j be two distinct vertices in -→ C n (1, -2). Then the following hold.
(a) v_i →C v_j is the only v_i-v_j path in C_n(1, -2) with vertex set {v_i, v_{i+1}, . . . , v_j}.
(b) If j - i (modulo n) is a multiple of 3, then v_j ←C v_i is the only v_j-v_i path in C_n(1, -2) with vertex set {v_i, v_{i+1}, . . . , v_j}.
(c) If j - i (modulo n) is not a multiple of 3, then there is no v_j-v_i path in C_n(1, -2) with vertex set {v_i, v_{i+1}, . . . , v_j}.
We define the parity of a vertex in C_n(1, -2) as the parity of its index. We shall use the following result concerning Hamilton paths in C_n(1, -2).
Lemma 1 Suppose n is even and let P be a Hamilton path in -→ C n (1, -2) such that its initial and terminal vertex have the same parity. Then any subpath of P containing only vertices of the same parity has length at most two.
Proof: Let Q be a longest subpath of P that contains only vertices of the same parity. Then Q has less than n/2 vertices. Suppose Q is the path v_i v_{i-2} . . . v_{i-2j}, with j ≥ 3. Then v_i v_{i+1}, v_{i-2j-1} v_{i-2j} ∉ A(P) and v_{i-r} v_{i-r+1} ∉ A(P), for r = 2, 3, . . . , 2j - 1. Moreover, by the maximality of Q, v_{i-2j} v_{i-2(j+1)}, v_{i+2} v_i ∉ A(P). Suppose v_i is the initial vertex of P. Then v_{i-1} v_i ∉ A(P) and v_{i-2j} is not the terminal vertex of P, since Q is not P. Hence v_{i-2j} v_{i-2j+1} ∈ A(P), so v_{i-2j+3} v_{i-2j+1} ∉ A(P) and therefore v_{i-2j+3} is the terminal vertex of P, contradicting our assumption that the initial and terminal vertices of P have the same parity.
Hence v_i is not the initial vertex of P and similarly we can show that v_{i-2j} is not the terminal vertex of P. Hence v_{i-1} v_i, v_{i-2j} v_{i-2j+1} ∈ A(P) and therefore v_{i-1} v_{i-3}, v_{i-2j+3} v_{i-2j+1} ∉ A(P). Then v_{i-3} is the initial vertex of P and v_{i-2j+3} is the terminal vertex of P. Thus P is the path v_{i-3} v_{i-5} . . . v_{i-2j+3}, contradicting our assumption that P is a Hamilton path of C_n(1, -2). □
For the particular case n = 6k + 2 we have the following useful result.
Lemma 2 For any integer k ≥ 0 the initial and terminal vertices of any Hamilton path of -→ C 6k+2 (1, -2) have different parities.
Proof: Let P be a Hamilton path in C_{6k+2}(1, -2) with initial vertex v_1 and terminal vertex v_ℓ, and suppose ℓ is odd.
We now consider the following four cases.
Case 1: P contains the subpath v_1 v_2 v_3 : Then v_3 v_1 ∉ A(P) and hence v_3 v_4 ∈ A(P). An inductive argument then shows that P is the path v_1 v_2 v_3 v_4 . . . v_{6k+1} v_0, so in this case ℓ = 0, contradicting our assumption that ℓ is odd.
Case 2: P contains the subpath v_1 v_2 v_0 : Then v_0 v_1, v_1 v_{6k+1}, v_{6k+1} v_0 ∉ A(P) and so v_0 v_{6k}, v_{6k} v_{6k+1} ∈ A(P). Now v_{6k-1} v_{6k}, v_{6k} v_{6k-2} ∉ A(P). Hence v_{6k+1} v_{6k-1}, v_{6k-1} v_{6k-3} ∈ A(P). Repeated application of this argument together with Observation 1 shows that P is the path v_1 v_2 v_0 ←C v_5 v_3 v_4, since 0 - 5 ≡ 6k - 3 (mod 6k + 2). This again contradicts our assumption that ℓ is odd.
Case 3: P contains the subpath v_1 v_{6k+1} v_0 : A similar argument as above shows that P contains the subpath v_1 ←C v_3. But then v_2 ∉ V(P), contradicting our assumption that P is a Hamilton path of C_{6k+2}(1, -2).
Case 4: P contains the subpath v_1 v_{6k+1} v_{6k-1} : Then by Lemma 1, v_{6k-1} v_{6k-3} ∉ A(P). Also v_{6k} v_{6k+1}, v_{6k+1} v_{6k+2} ∉ A(P). Since P is not the path v_1 v_{6k+1} v_{6k-1}, it follows that v_{6k-1} v_{6k}, v_{6k} v_{6k-2} ∈ A(P). A similar argument as above shows that P contains the subpath v_1 v_{6k+1} v_{6k-1} v_{6k} v_{6k-2} ←C v_4 v_2.
Hence P cannot contain both v_3 and v_{6k+2}, contradicting our assumption that P is a Hamilton path in C_{6k+2}(1, -2). □
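Lemma 2 can also be confirmed computationally for a small case. The sketch below is our own illustration (not from the paper): it enumerates all Hamilton paths of C_8(1, -2), i.e. the case k = 1, and checks that their endpoints always have different parities.

```python
def circulant_arcs(n, jumps=(1, -2)):
    """Arc set of the circulant digraph C_n(jumps) on vertices 0..n-1."""
    return {(i, (i + j) % n) for i in range(n) for j in jumps}

def hamilton_paths(n, arcs):
    """All Hamilton paths, found by exhaustive depth-first search."""
    succ = {i: [w for (u, w) in arcs if u == i] for i in range(n)}
    paths = []
    def extend(path, seen):
        if len(path) == n:
            paths.append(tuple(path))
            return
        for w in succ[path[-1]]:
            if w not in seen:
                extend(path + [w], seen | {w})
    for start in range(n):
        extend([start], {start})
    return paths

n = 8                                                  # C_{6k+2}(1, -2) with k = 1
paths = hamilton_paths(n, circulant_arcs(n))
print(all(p[0] % 2 != p[-1] % 2 for p in paths))       # expected: True (Lemma 2)
```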
Fig. 6: H16
Theorem 1 H 6k+4 is a planar hypotraceable oriented graph of order 6k + 4, for every integer k ≥ 1.
Proof: Let k be any positive integer. Then H 6k+4 is obviously a planar oriented graph -see the planar depiction of H 16 in Figure 6. We now prove that it is hypotraceable. Since all the out-neighbours of x as well as all the in-neighbours of z are vertices with odd index, it follows from Lemma 2 that H 6k+4 -v 6k v 2 is nontraceable.
Thus, if P is a Hamilton path of H 6k+4 , then P contains the arc v 6k v 2 . Hence P does not contain the arcs v 1 v 2 , v 4 v 2 , v 6k v 6k-2 and v 6k v 6k+1 . This implies that xv 6k+1 and v 1 z are, respectively, the initial and terminal arcs of P . Observe that P contains at most one of the arcs v 2 v 0 and v 0 v 6k and at most one of the arcs v 6k+1 v 0 and v 0 v 1 . Hence P contains either the subpath v 2 v 0 v 1 z or the subpath xv 6k+1 v 0 v 6k . Suppose the former. Then P does not contain the arcs v 6k+1 v 0 and v 0 v 6k . But then v 6k+1 v 6k-1 and v 6k-1 v 6k are in P . Then P is the path xv 6k+1 v 6k-1 v 6k v 2 v 0 v 1 z, contradicting that H 6k+4 has at least 10 vertices. By a symmetric argument we obtain a contradiction if xv 6k+1 v 0 v 6k-2 is a subpath of P . This proves that H 6k+4 is nontraceable.
Next we show that H 6k+4 -v is traceable for any vertex v ∈ H 6k+4 . Since H 6k+4 -{x, z} is hamiltonian, H 6k+4 -x and H 6k+4 -z are both traceable. Using Observation 1 we now present a Hamilton path of the graph H 6k+4 -v j for j = 0, 1, . . . , 6k + 1.
Subgraph | Hamilton path | values of i
H 6k+4 -v 0 | xv 6k+1 ← - C v 1 z |
H 6k+4 -v 1 | xv 6k+1 v 0 v 6k v 2 - → C v 6k-1 z |
H 6k+4 -v 6i+1 | xv 1 v 2 v 0 ← - C v 6i+2 v 6i ← - C v 3 z | i = 1, . . . , k
H 6k+4 -v 6i+2 | xv 6k+1 v 0 v 6k ← - C v 6i+3 v 6i+1 ← - C v 1 z | i = 0, . . . , k
H 6k+4 -v 6i+3 | xv 6k+1 ← - C v 6i+4 v 6i+2 ← - C v 2 v 0 v 1 z | i = 0, . . . , k
H 6k+4 -v 6i+4 | xv 1 v 2 v 0 ← - C v 6i+5 v 6i+3 ← - C v 3 z | i = 0, . . . , k
H 6k+4 -v 6i+5 | xv 6k+1 v 0 v 6k ← - C v 6i+6 v 6i+4 ← - C v 1 z | i = 0, . . . , k
H 6k+4 -v 6i | xv 6k+1 ← - C v 6i+1 v 6i-1 ← - C v 2 v 0 v 1 z | i = 1, . . . , k
□
A computer search showed that every planar hypotraceable oriented graph of order 10 contains H 10 as a spanning subdigraph. From the characterisation of hypotraceable oriented graphs of order 8 presented by [START_REF] Van Aardt | The order of hypotraceable oriented graphs[END_REF], we note that no hypotraceable oriented graph of order 8 is planar. [START_REF] Burger | Computational results on the traceability of oriented gaphs of small order[END_REF] showed by means of an exhaustive computer search that there does not exist a hypotraceable oriented graph of order 9. Hence H 10 is the planar hypotraceable oriented graph of smallest order and size.
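Searches of this kind can be reproduced for small orders by brute force. The sketch below is a minimal Python illustration (not the authors' code): it tests whether a digraph is hypotraceable, i.e. nontraceable while every vertex-deleted subdigraph is traceable, and includes a helper for the arcs of the circulant digraph - → C n (1, -2); the remaining arcs of H 6k+4 (those involving x, z and the arc v 6k v 2 ) have to be supplied as described in the construction.

```python
def is_traceable(vertices, arcs):
    """True if the digraph has a Hamilton path (plain depth-first search)."""
    out = {v: [] for v in vertices}
    for a, b in arcs:
        out[a].append(b)

    def extend(last, used):
        if len(used) == len(vertices):
            return True
        return any(w not in used and extend(w, used | {w}) for w in out[last])

    return any(extend(v, {v}) for v in vertices)


def is_hypotraceable(vertices, arcs):
    """Nontraceable, but deleting any single vertex leaves a traceable digraph."""
    if is_traceable(vertices, arcs):
        return False
    return all(
        is_traceable([u for u in vertices if u != v],
                     [(a, b) for (a, b) in arcs if v not in (a, b)])
        for v in vertices)


def circulant_arcs(n):
    """Arcs of the circulant digraph C_n(1, -2): i -> i+1 and i -> i-2 (mod n)."""
    return [(i, (i + 1) % n) for i in range(n)] + [(i, (i - 2) % n) for i in range(n)]
```

For order 10 this exhaustive check runs in a fraction of a second; for larger orders a pruned search or an ILP formulation would be preferable.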
For each k ≥ 1, the graph H 6k+4 is an arc-minimal hypotraceable oriented graph, i.e., removing any arc destroys the hypotraceability. This follows from the following observations and the fact that a hypotraceable oriented graph does not contain a vertex with in- or out-degree 1: any Hamilton path in H 6k+4 -v 1 contains both the arcs v 6k v 2 and v 6k-1 z; any Hamilton path in H 6k+4 -v 2 contains the arc v 3 v 1 ; any Hamilton path in H 6k+4 -v 4 contains the arc v 3 z.
A computer search (for small k) showed that the digraph obtained from H 6k+4 by adding any of the arcs {v 2i+1 z : i = 2, . . . , 3k + 1} is also a planar hypotraceable oriented graph. We can prove this analytically in general, but the proof is tedious and therefore omitted.
Figure 7 depicts an arc-minimal planar hypotraceable oriented graph of order 12, which was found by computer.
We now use the following construction of [START_REF] Grötschel | Constructions of hypotraceable digraphs[END_REF] to construct strong planar hypotraceable oriented graphs.
Construction 3 [START_REF] Grötschel | Constructions of hypotraceable digraphs[END_REF]) For i = 1, 2 let T i be a hypotraceable digraph of order n i , with a source x i and a sink z i . Form the disjoint union of T 1 and T 2 . Then identify x 1 and z 2 to a single vertex and identify z 1 and x 2 to a single vertex. The result, which we denote by T 1 * T 2 , is a strong hypotraceable digraph of order n 1 + n 2 -2.
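As an illustration only, Construction 3 can be phrased concretely on digraphs stored as vertex and arc lists; the representation below (tuples (vertices, arcs, source, sink) with disjoint vertex sets) is an assumption of this sketch, not notation from the paper.

```python
def construction_3(T1, T2):
    """Identify x1 with z2 and z1 with x2 in the disjoint union of T1 and T2."""
    (V1, A1, x1, z1), (V2, A2, x2, z2) = T1, T2
    rename = {x2: z1, z2: x1}          # vertices of T2 that get identified

    def f(v):
        return rename.get(v, v)

    vertices = list(V1) + [v for v in V2 if v not in (x2, z2)]
    arcs = list(A1) + [(f(a), f(b)) for (a, b) in A2]
    return vertices, arcs
```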
Note that if, in Construction 3, each of T 1 and T 2 is a planar oriented graph that can be depicted with the source and sink in the same face, then T 1 * T 2 is also planar. Thus, if k 1 and k 2 are any two nonnegative integers and T i = H 6ki+4 for i = 1, 2, then T 1 * T 2 is a strong planar hypotraceable oriented graph of order 6(k 1 + k 2 ) + 6. If T 1 is the planar hypotraceable graph of order 12 depicted in Figure 7 and T 2 = H 6k+4 , k ≥ 1, then T 1 * T 2 is a strong planar hypotraceable oriented graph of order 6k + 14. Thus we have proved the following.
Theorem 2 There exists a strong planar hypotraceable oriented graph of order 6k and of order 6k + 2 for every integer k ≥ 3.
Theorem 3 There exists a planar hypotraceable oriented graph of order n for all even n ≥ 10 with the possible exception of n = 14.
Figure 8 depicts the strong planar hypotraceable oriented graph of order 18 that is obtained by using two copies of H 10 in Construction 3.
It is still an open question whether there exists a planar hypotraceable oriented graph of order 14 or one of odd order. We also do not know whether there is a strong planar hypotraceable oriented graph of order less than 18.
Fig. 4: A planar hypotraceable oriented graph of order 10
Fig. 7: A planar hypotraceable oriented graph of order 12
Fig. 8: A strong planar hypotraceable oriented graph of order 18
* This material is based upon work supported by the National Research Foundation of South Africa under Grant number 77248 † This material is based upon work supported by the National Research Foundation of South Africa under Grant number 103832 ‡ This material is based upon work supported by the National Research Foundation of South Africa under Grant number 81075 | 21,290 | [
"971861"
] | [
"249878",
"252568",
"249878"
] |
01467226 | en | [
"phys"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01467226/file/Zr%20doping%20on%20Lithium%20Niobate%20crystals-%20Raman%20spectroscopy%20and%20Chemometrics.pdf | Ninel Kokanyan
David Chapron
Edvard Kokanyan
Marc D Fontana
Zr doping on Lithium Niobate crystals: Raman spectroscopy and Chemometrics
Raman measurements were investigated on Zr -doped lithium niobate LiNbO 3 crystals with different concentrations. Spectra were treated by fitting procedure and principal component analysis which both provide results consistent with each other. The concentration dependence of the frequency of the main low-frequency optical phonons gives an insight of site incorporation of Zr ions in the host lattice. The threshold concentration of about 2% is evidenced, confirming the interest of Zr doping as an alternative to Mg doping for the reduction of the optical damage in lithium niobate.
Introduction
A strong limitation to the applications of congruent commercial LiNbO 3 (LN) crystals in optical parametric oscillators and electro-optic devices comes from the fact that, under illumination with visible or near-infrared light, there are semi-permanent changes in the index of refraction of the crystal, due to the photorefractive (PR) effect [START_REF] Volk | Lithium Niobate. Defects, Photorefraction and Ferroelectric Switching[END_REF]. This so-called "optical damage", causing beam distortion, dramatically diminishes the use of LN in devices. This drawback and limitation exist unless some strategy to reduce the photorefractive effect is implemented. The optical damage resistance (ODR) can be increased considerably by changing the LN crystal composition from congruent to stoichiometric [2][3][4] and/or by adding into LN lattice an appropriate non-photorefractive dopant [START_REF] Volk | Lithium Niobate. Defects, Photorefraction and Ferroelectric Switching[END_REF][2][3][4][START_REF] Petrosyan | Proc. SPIE[END_REF]. Among the ODR dopants that have been tested, the most utilized is, nowadays, MgO that is known to be efficient in molar concentrations above 5 mol% [START_REF] Zhong | [END_REF]. A problem remains with this dopant owing to difficulties to grow large high optical quality MgO doped LN crystals. Recently, a set of tetravalent impurity ions (Hf 4+ , Sn 4+ , Zr 4+ ) as new ODR ions were checked [7][8][9][10][11]. It was shown that for these ions, the concentration threshold for ODR can be strongly reduced. In particular, a sample with 2mol% ZrO 2 can withstand a light intensity of 2x 10 7 W/ cm 2 of 514 nm laser.
The ODR is therefore 40 times larger than that of 6.5% MgO [11]. In addition, it was shown that the non-linear optical and electro-optical coefficients are preserved [12,13] in Zr-doped LN, so that this crystal is promising for applications of frequency conversion and laser modulation. As a consequence, Zr-doped LN with a threshold concentration of around 2-3 mol% [14] can be a good alternative to stoichiometric and/or Mg-doped crystals for the reduction of the photorefractive effect. According to Kong et al. [11] the main question remains about the reason of the small threshold achieved with Zr doping, compared with other ODR dopant ions. In other terms, the microscopic mechanism of incorporation of Zr in the LN lattice is unknown, and the control and improvement of this material requires the understanding of the substitution process. The threshold refers to the concentration above which the OD or PR largely decreases and is generally associated in congruent LN crystals with the complete removal of Nb antisites (i.e. the Nb ions going to the Li-site) [START_REF] Volk | Lithium Niobate. Defects, Photorefraction and Ferroelectric Switching[END_REF]. When doping, the impurity ion can go to the site A of native Li ions, or the site B of native Nb ions, so that in doped LN materials the PR effect can increase or decrease, according to the site occupation of the dopant.
In previous studies we have shown that Raman spectroscopy (RS) can be a useful probe of dopant ion [15,16]. A shift of line position (optical phonon frequency) and/or linewidth (phonon damping) of some Raman lines can be thus related to incorporation of defects in the host lattice.
In the present work the Zr -doped LN crystals with different concentrations are investigated by means of Raman micro-probe, in order to have an insight of the sites occupied by Zr ions in the structure of LN and thus an understanding of the threshold concentration required for the reduction of the photorefractive effect as well.
Among all Raman lines we paid attention to those which can be used as the discriminating markers of sites A or B [17]. Thus, the lowest-frequency E[TO 1 ] and A 1 [TO 1 ] Raman lines, which are associated with the motion of Nb against the oxygen octahedron, can suitably probe the site B occupied by native Nb ions. The line A 1 [TO 2 ] corresponds to the out-of-phase vibration of Li and O and can therefore be used to study the environment of the site A and changes due to Li ions, Li vacancies and Nb antisites [17,18].
The concentration of Zr in the samples under study is small (below 2.5 %), i.e. much lower than the content of dopant ions used in previous works [15,16]. Furthermore, the difference of concentrations between two "consecutive" samples in the Zr series under study is very small, so that only a correspondingly small change in the Raman spectrum is expected. Chemometrics are techniques able to evidence very small changes in spectra [START_REF] Mark | Chemometrics in Spectroscopy[END_REF] and are therefore used here to support the exploitation of the Raman spectra, and the dependences of the vibrational modes on Zr content. Finally, the behavior of the main modes versus Zr content is used to derive the incorporation mechanism of Zr in the LN lattice.
Experimental results and analysis
The crystals were grown by the Czochralski method from congruent melts to which appropriate amounts of zirconium oxide (ZrO 2 ) were added. A set of samples with concentrations of ZrO 2 equal to 0.625, 1.00, 1.25, 1.50, 2.00 and 2.50 mol% were prepared and denoted respectively as LNZr0.625, LNZr1, … The given concentrations correspond to the content of ZrO 2 in the melt and are very close to those in the crystals. The relative amount of Zr in the crystals, and the presence of impurities other than Zr, were checked with X-ray fluorescence. Raman measurements were carried out by means of a Horiba-Aramis spectrometer with an absolute spectral resolution of 1 cm -1 and a diffraction grating of 1200 g.mm -1 . The 633 nm line of a He-Ne laser with an intensity of 1.27*10 7 W/m 2 was used as the exciting line and the scattered radiation was detected by a CCD camera (200 pixels). E[TO 1 ] and A 1 [TO 1 +TO 2 ] spectra were recorded in the Y(XZ)Y and Y(ZZ)Y backscattering configurations respectively [17,[START_REF] Ridah | [END_REF]. Raman spectra were recorded both at room temperature and at low temperature (-180°C) in order to obtain better resolved lines. Indeed, lines at room or higher temperature are broader and asymmetric, making any exploitation of the spectra more difficult [21]. One can notice that the damping for each phonon under study continuously increases with doping, reflecting a growing disorder in the LN lattice, whereas the frequency exhibits a minimum value for 1.5 or 2% Zr, and then increases for larger concentrations.
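The frequencies and dampings quoted here are obtained by fitting each Raman line to a damped-harmonic-oscillator profile. The following sketch shows one common way of doing such a fit with scipy; the particular line-shape parametrization, the 152 cm-1 test frequency and the synthetic data are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy.optimize import curve_fit

def dho(w, amp, w0, gamma, offset):
    """One common damped-harmonic-oscillator Raman line shape."""
    return amp * gamma * w0**2 * w / ((w**2 - w0**2)**2 + (gamma * w)**2) + offset

# wavenumber axis (cm^-1) and a synthetic E[TO1]-like line standing in for data
w = np.linspace(120, 180, 300)
intensity = dho(w, 1.0e4, 152.0, 9.0, 50.0) + np.random.normal(0.0, 5.0, w.size)

popt, _ = curve_fit(dho, w, intensity, p0=[1.0e4, 150.0, 10.0, 0.0])
amp, w0, gamma, offset = popt
print(f"fitted frequency = {w0:.1f} cm^-1, damping = {gamma:.1f} cm^-1")
```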
This anomaly in the dependence on Zr content needed to be confirmed since only small changes of frequency were observed. Therefore, Raman spectra were additionally treated by principal component analysis (PCA), in order to evidence their relative variation with Zr concentration. Indeed, PCA is a statistical method commonly used for data classification and can be applied to spectra analysis, allowing their relative variability to be expressed by projection on orthogonal components [START_REF] Je | Hoboken, A User's Guide to Principal Components Analysis[END_REF][START_REF] Jolliffe | Principal Component Analysis[END_REF][24][25].
Performing a PCA on the spectra, we obtain the projection values of each sample on each PC. The score corresponds to the relative weight of each spectrum on one component and thus gives an idea of the relative change between different samples. Loadings represent the new basis and are representative of the variation in each spectrum:
Spectrum = <S> + s 1 PC 1 + s 2 PC 2 + ⋯ + ε (1)
where <S> is the average spectrum, s i are scores, PC i are loadings and ε is the residual. In our investigations PCA was performed using the Unscrambler 10.3 software [26]. The analysis was carried out after applying a standard normal variate (SNV) pretreatment to the spectra, utilizing the same software. The transformation is applied to each spectrum individually by subtracting its mean value and scaling with its standard deviation. This means that it normalizes the intensity and corrects the baseline of all spectra.
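The SNV pretreatment and the decomposition of Eq. (1) can equally be reproduced with a few lines of generic numerical code; the sketch below (plain numpy, rows = spectra, columns = wavenumbers) is an illustration of the procedure, not the Unscrambler implementation used by the authors.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    X = np.asarray(spectra, dtype=float)
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def pca(spectra, n_components=2):
    """Scores s_i and loadings PC_i of Eq. (1), via SVD of the centred data."""
    X = snv(spectra)
    mean_spectrum = X.mean(axis=0)               # <S> in Eq. (1)
    U, S, Vt = np.linalg.svd(X - mean_spectrum, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    loadings = Vt[:n_components]
    explained = (S**2 / np.sum(S**2))[:n_components]
    return scores, loadings, explained
```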
In figure 3 are plotted different results derived from PCA applied to the spectra reported in figure 1. The components PC1 and PC2 provide 96 and 3% of the entire signal, so that they nearly reflect the whole variance. Whereas PC1 gives the change in intensity, PC2 reveals the change in the maximum peak position, since it behaves as the first derivative. The component related to line position shift (here PC2) is only used hereafter to exploit and interpret Raman data. This consistency between results derived from two completely different analysis processes from the same Raman spectra, corroborates the dependences of phonon frequency given above and renders more reliable the interpretation of these changes with Zr content.
Interpretation and Conclusion
We are now able to derive the mechanism of incorporation of Zr ions in the LN lattice from the dependences of the frequency of E[TO 1 ], A 1 [TO 1 ] and A 1 [TO 2 ] plotted in figure 2. We remind that the modes E and A 1 [TO 1 ] mainly involve the B site, while the phonon A 1 [TO 2 ] probes the site A. Furthermore, A 1 phonons correspond to ionic motions along the ferroelectric c axis, while E phonons are polarized in the plane normal to c. When they are incorporated in the LN lattice, Zr ions can at first replace Nb antisites on sites A, pushing the Nb ions back to their intrinsic B sites. As a consequence, the A 1 [TO 2 ] mode frequency increases due to a strengthening of the A-O bond since the ionic radius of Zr 4+ is larger than that of Nb 5+ , whereas the frequencies of A 1 [TO 1 ] and E[TO 1 ] rise as well, involving a stronger B-O bond. A further increase of the Zr concentration leads to an enhancement of the occupation of Li sites. When Zr replaces Li, as the mass of the Zr ion is larger than that of the Li ion, the frequency of the A 1 [TO 2 ] mode decreases for concentrations varying between 1 and 2 mol%. In this range the frequency of the phonons A 1 [TO 1 ] and E[TO 1 ] is nearly constant. Finally, for the 2.5 mol% concentration the frequencies of the A 1 [TO 1 ], E[TO 1 ] and A 1 [TO 2 ] modes increase again. This can be attributed to the occupation by Zr ions of both sites A and B, rendering the LN structure more disordered, as reflected by the phonon damping.
It can be mentioned that Zr introduction is accompanied by a large increase of the A 1 phonon damping. This means that the Zr ion is slightly displaced along the c axis with respect to the native Li and Nb sites.
Our results show that 2% is a critical content above which the mechanism of Zr ion incorporation in the LN lattice strongly changes. As shown by the behavior of the phonons A 1 [TO 1 ] and E[TO 1 ], which are both related to the Nb site, the concentration of 2% corresponds to the total removal of Nb antisites and therefore to the threshold of optical damage resistance. These studies confirm the interest of Zr doping for reducing the optical damage of LN.
It was shown that PCA can be useful to highlight small variations between spectra. More generally, it is a fast method compared to the classical fitting method, and can be especially useful in the case of a large number of samples and/or spectra, in order to sort the samples before starting a fitting procedure.
FIG 1. Raman spectra at -180°C of the lowest frequency E[TO 1 ] mode for different concentrations of Zr in Zr-doped Lithium Niobate crystals. Vertical lines correspond to the frequency of the undoped LN and LNZr2.5 samples. E[TO 1 ] Raman spectra on undoped and various Zr-doped LN crystals are reported in figure 1. A continuous broadening is noted with increasing Zr content while a small shift of the peak maximum is observable for the LNZr2 and LNZr2.5 samples as compared with all others. Spectra were fitted to damped harmonic oscillators in order to derive the frequency and the damping of the phonons E[TO 1 ], A 1 [TO 1 ] and A 1 [TO 2 ]. Results are reported in figure 2.
FIG 2. Concentration dependence of the frequency and damping of the E[TO 1 ] (a) and A 1 [TO 1 ] and A 1 [TO 2 ] (b) phonons.
FIG 3. Loadings of PC1 and PC2 of the PCA performed on the spectra (range of 100 cm -1 -200 cm -1 ) obtained at low temperature (-180°C), for the Y(ZX)Y configuration. The spectrum of 2 mol% Zr (at the bottom) and its derivative (at the top) are shown for comparison.
FIG 4. Frequency of the E[TO 1 ] mode and score of PC2 as a function of the Zr concentration in Lithium Niobate.
FIG 5.
"10185",
"4590",
"852845"
] | [
"202503",
"202503",
"403897",
"202503"
] |
01467230 | en | [
"phys"
] | 2024/03/04 23:41:44 | 2014 | https://hal.science/hal-01467230/file/abarkan2014.pdf | Mustapha Abarkan
Michel Aillerie
email: [email protected]
Ninel Kokanyan
Clément Teyssandier
Edvard Kokanyan
Electro-optic and dielectric properties of Zirconium-doped congruent lithium-niobate crystals
Measurements of electro-optic and dielectric properties in Zirconium (Zr)-doped lithium niobate (LN:Zr) crystals are performed as functions of the dopant concentration in the range 0.0-2.5 mol%. The clamped and unclamped electro-optic coefficients r 222 of Zr-doped LN and the corresponding dielectric permittivity as well, have been experimentally determined and compared with the results obtained in undoped congruent LN crystals. We show that the electro-optic and dielectric properties present a kink around 2 mol% of zirconium which seems to be the "threshold" concentration required to strongly reduce the photorefractive effect. All reported results confirm that the LN:Zr is a very promising candidate for several non linear devices.
Introduction
Lithium niobate, LiNbO 3 (LN) crystals are studied extensively and used in many applications thanks to their large piezoelectric, electro-optic and nonlinear coefficients. LN offers good transmission and high extinction ratio with a modest driving voltage in the transverse configuration, i.e. when an electric field is applied perpendicular to the direction of the optical beam. Thus, LN has a wide range of applications in electro-optic (EO) modulation and laser Q-switching [START_REF] Mccannt | A versatile electronic light shutter composed of a high speed switching circuit coupled with a lithium niobate Pockels cell[END_REF][START_REF] Zhang | Diode-laser pumped passively Q-switched green laser by intracavity frequency-doubling with periodically poled LiNbO 3[END_REF][START_REF] Krätzig | Photorefractive centers in electro-optic crystals[END_REF][START_REF] Banfi | Wavelength shifting and amplification of optical pulses through cascaded second-order processes in periodically poled lithium niobate[END_REF]. Nevertheless, the performances of Q-switched devices require EO materials with a relatively low driving voltage and especially a high resistance to optical damage. The EO properties of LN, and particularly the coefficient r 222 , are acceptable, but the optical damage threshold in crystals with the congruent composition is very low (0.3 GW/cm 2 ) compared with other optical materials [START_REF] Volk | Lithium Niobate: Defects, Photorefraction and Ferroelectric Switching[END_REF][START_REF] Eimerl | Progress in nonlinear optical materials for high power lasers[END_REF]. This is mainly the reason why the use of pure LN congruent crystals on applications requiring high power laser pulses is limited.
The main optical damage process in LN crystals is the photorefractive damage originates from the photo-generation by the optical beam of mobile charge carriers in the bright region that migrate toward the dark zones [START_REF] Krätzig | Photorefractive centers in electro-optic crystals[END_REF]. This carrier displacement induces the presence of a space charge field that creates modifications of refractive indices via the electro-optic effect. For general considerations, another origin of the optical damage could be due to thermal effects, such as the thermo-optic effect that changes the refractive indices in the crystal due to a local heating induced by the power density of the laser beam. Nevertheless, in LN crystal, for relatively high power densities of the beam, these effects are negligible compare to the photorefractive effect [START_REF] Polgár | Spectroscopic and electrical conductivity investigation of Mg doped LiNbO3 single crystals[END_REF]. It was proved that the optical damage threshold depends on the amount of intrinsic defects, and is considerably increased in stoichiometric LN and congruent LN doped with specific metal ions, such as Mg, Zn and in [START_REF] Zhong | Proceedings of the 11th International Quantum Electronics Conference IQEC '80[END_REF][START_REF] Volk | Optical-damage-resistant impurities in lithium niobate[END_REF]. Thus, to increase the optical damage threshold, two solutions exist. The first one consists of decreasing the number of intrinsic defects by growing LN crystals with the ratio R = [Li]/ [Nb] closer to the one corresponding to the stoichiometric composition (or close to it) but the growth of high quality crystals is difficult to obtain. The second approach consists in doping congruent lithium niobate crystals by appropriate doping ions because congruent crystals present a large amount of non-stoichiometric defects and, due to its complex defect structure, can accept a wide variety of dopants with various concentrations. Among possible dopant, we can site divalent ions such as Mg 2+ [START_REF] Furukawa | Optical damage resistance and crystal quality of LiNbO 3 single crystals with various [Li]/[Nb] ratios[END_REF][START_REF] Bryan | Increased optical damage resistance in lithium niobate[END_REF] and Zn 2+ [START_REF] Volk | Optical-damage-resistant LiNbO 3 :Zn crystal[END_REF] or trivalent ions such as Sc 3+ and In 3+ , which are known, for specific concentrations, to improve the optical damage resistance of LN crystals. Recently, hafnium (Hf) was found to be a new optical damage-resistant element leading to a significant increase of the photorefractive resistance at doping threshold concentration around 2 mol% in the melt [START_REF] Kokanyan | Reduced photorefraction in hafniumdoped single-domain and periodically poled lithium niobate crystals[END_REF][START_REF] Razzari | Photrefractivity of Hafnium-doped congruent lithium-niobate crystals[END_REF][START_REF] Li | The optical damage resistance and absorption spectra of LiNbO 3 :Hf crystals[END_REF][START_REF] Abarkan | Electro-optic and dielectric properties of Hafnium-doped congruent lithium niobate crystals[END_REF][START_REF] Minzioni | Strongly sublinear growth of the photorefractive effect for increasing pump intensities in doped lithium-niobate crystals[END_REF][START_REF] Minzioni | Linear and nonlinear optical properties of Hafnium-doped lithium-niobate crystals[END_REF]. 
It as been found that the light-induced birefringence changes of LiNbO 3 crystal doped with 4 mol% of HfO 2 were comparable to that of 6 mol% MgO doped crystals. This concentration of hafnium correspond to the threshold concentration predicted by the charge compensation approach equal to the half of the Nb excess concentration between 3 and 4 mol% in a congruent crystal, i.e. [Hf] = ([Nb]- [Li])/2, with the notation [] corresponding to the mol% concentration of species. By else, the advantage of Hafnium is also a distribution coefficient near one at the threshold concentration so that high quality LN:Hf crystals may be easier to grow than the usual LN:Mg crystal with MgO 6% having a distribution coefficient closer than 1.2 [START_REF] Kokanyan | Hafnium-doped periodically poled lithium niobate crystals: Growth and photorefractive propertie[END_REF]. Starting with these crystal physical properties and growth considerations, some crystal grower groups as the one of Kokanyan et al who proposed for the first time tetravalent impurity ions including Zirconium as new nonphotorefractive ions [START_REF] Kokanyan | Reduced photorefraction in hafniumdoped single-domain and periodically poled lithium niobate crystals[END_REF][START_REF] Petrosyan | Growth and evaluation of lithium niobate crystals containing nonphotorefractive dopants[END_REF]. It was shown that zirconium presents a distribution coefficient closer to one for a zirconium threshold concentration in doped LN crystals latter found around 2.0 mol%. Even if all authors working in this research field suggest that Zr might represent an excellent alternative for obtaining higher optical damage resistance crystals with high optical quality, contradictory data concerning this threshold concentration are reported in literature [START_REF] Kong | Highly optical damage resistant crystal: Zirconiumoxide-doped lithium niobate[END_REF][START_REF] Liu | An excellent crystal for high resistance against optical damage in visible-UV range: near-stoichiometric zirconium-doped lithium niobate[END_REF][START_REF] Argiolas | Structural and optical properties of zirconium doped lithium niobate crystals[END_REF]. It is therefore of interest for EO modulation or Q-switching applications, to know the EO coefficients values of zirconium-doped lithium niobate crystals as function of concentration over a wide frequency range from DC up to high frequencies.
This knowledge is all the more important since, in addition to photoconductivity, the photorefractive behavior is linked to the EO properties. We recall that the high-frequency or constant-strain (clamped) coefficient r S is the true EO coefficient due to the direct modulation of the refractive indices by the applied electric field, in contrast to the low-frequency or constant-stress (unclamped) coefficient r T , which includes in addition the contribution of the lattice deformation by the electric field. As a consequence we have carried out EO measurements over a wide frequency range and we report for the first time both clamped and unclamped EO coefficient values versus Zr concentration.
In the present contribution, we report, as function of zirconium concentration, experimental results and analysis obtained in the characterization of the EO coefficients r 222 T and r 222 S mainly involved in Q-switch laser applications. The coefficient r 222 is obtained when a light-beam is propagating along the optical axis (c-axis) of the crystal and, therefore with an optical polarization of the transmitted beam in the isotropic plane, which renders it unaffected by the temperature dependence of the birefringence within this configuration. The EO results would be then discussed based on the absorption spectrum of the grown crystals. To complete the study, we have measured the frequency dependence of the corresponding dielectric permittivity ε 22 and finally established the figure of merit allowing the comparison between materials, used as modulator, dedicated for Q-switching applications.
Experimental
Sample preparation
Based on the Czochralski technique, a growth set-up using a single platinum crucible with an rf heating element in air atmosphere was used to grow a set of Zr-doped lithium niobate crystals. In order to obtain single-domain crystals directly during the growth process, a dc electric current with a density of about 12 A/m 2 was passed through the crystal-melt system. The starting materials used for sintering the lithium niobate charges of congruent composition were high purity Nb 2 O 5 and Li 2 CO 3 compounds from Johnson-Mattey and Merck. ZrO 2 was introduced into the melt with concentrations equal to 0.625, 0.75, 0.875, 1.00, 1.25, 1.50, 2.00 and 2.50 mol%, respectively. Finally, samples were shaped into parallelepipeds with dimensions of about x, y, z = 5, 10, 5 mm and were optically polished on all surfaces.
The recording of the UV-Visible-NIR optical transmission spectra allows the optical qualification of the samples. Transmission spectra were recorded at room temperature with un-polarized light using a Perkin Elmer Lambda 900 spectrometer. Samples were placed with their polished faces at normal incidence, with the c (z) axis parallel to the k vector of the incident light. The transmission spectra recorded in all crystals present the same behavior, with a flat transparency response in the whole visible range and an absorption edge appearing in the UV. The fundamental absorption edges of the crystals were measured at an absorption coefficient of 20 cm -1 . In addition to the good transparency, without parasitic absorption peaks, observed in all samples, these spectra bring interesting information on the role and influence of the dopant in lithium niobate crystals [START_REF] Kovacs | Composition dependence of the ultraviolet absorption edge in lithium niobate[END_REF]. This point will be discussed further. However, we have plotted the part of the absorption coefficient spectra of interest for the present work and the absorption edges for the various samples, presented in Fig. 1. We observe a clear displacement of the absorption edge towards the UV when the concentration in the crystals increases up to 2 mol% of zirconium, whereas for crystals doped with higher concentrations, the absorption edge shifts towards visible wavelengths. The zirconium concentration of 2 mol% in congruent LN crystal, which corresponds to the minimum observed in the absorption edge position, can be considered as a threshold concentration.
Electro-optic measurements
For EO measurements, we used an experimental setup based on the Sénarmont arrangement with the transfer function presented in Fig. 2.
The point M 1 , located at the 50% transmission level (I max -I min )/2, corresponds to the so-called linear working point. It is associated with the "Modulation Depth Method" (MDM) [START_REF] Aillerie | Measurement of the electro-optic coefficients: description and comparison of the experimental techniques[END_REF] and can be used to determine the EO coefficient as a function of frequency from DC to 1 MHz, the limits being defined by the specifications of the power supply and of the signal acquisition electronics used for these experiments.
Measuring the peak-to-peak amplitude i pp of the modulated signal at the point M 1 , one can obtain the EO coefficient directly from the following equation [START_REF] Aillerie | Measurement of the electro-optic coefficients: description and comparison of the experimental techniques[END_REF]:
r_eff(ν) = 2 λ d i_pp(ν) / (π n_eff^3 L I_0 V_pp(ν)) (1)
Here, I 0 = I max -I min represents the total intensity shift of the transfer function, L is the length of the crystal along the beam-propagation direction, d is the sample thickness along the applied electric field direction, n eff is the effective refractive index, λ is the laser wavelength and V pp is the peak-to-peak value of the applied ac field at the frequency ν.
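For orientation, Eq. (1) can be evaluated numerically as below; the sample dimensions follow the 5 x 10 x 5 mm geometry quoted above, while the refractive index and the modulation depth i_pp/I_0 are assumed, illustrative values rather than measured ones.

```python
import numpy as np

lam   = 633e-9    # He-Ne wavelength (m)
d     = 5e-3      # thickness along the applied field (m)
L     = 5e-3      # length along the beam, i.e. along the c-axis (m)
n_eff = 2.29      # approximate ordinary refractive index of LN at 633 nm
I0    = 1.0       # Imax - Imin, same arbitrary units as i_pp

def r_eff(i_pp, V_pp):
    """EO coefficient of Eq. (1), returned in m/V."""
    return 2.0 * lam * d * i_pp / (np.pi * n_eff**3 * L * I0 * V_pp)

# an assumed 4% modulation depth under 220 V peak-to-peak drive
print(f"r = {r_eff(i_pp=0.04, V_pp=220.0) * 1e12:.1f} pm/V")
```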
The M 1 point can be also associated to a method called "Time Response Method" (TRM) [START_REF] Abarkan | Frequency and wavelength dependences of electro-optic coefficients in inorganic crystals[END_REF]. At this working point, the instantaneous variation of the transmitted beam intensity Δi(t) induced by the applied voltage ΔV(t) is given by:
Δi(t) = [π n_eff^3 L I_0 / (2 λ D)] r_eff(t) ⊗ ΔV(t) (2)
where ⊗ is the convolution operator and r_eff(t) is the instantaneous value of the EO coefficient. As we can display and measure on an oscilloscope the time signals Δi(t) and ΔV(t), the frequency dispersion of the EO coefficients can be derived from the ratio of Δi(ν) and ΔV(ν) as obtained by the Z-transformation of the signals Δi(t) and ΔV(t) respectively:
r_eff(ν) = 2 λ D Δi(ν) / (π n_eff^3 L I_0 ΔV(ν)) (3)
The optical response at short time leads to the clamped (high frequency) coefficient r S , while the optical response at longer time provides the unclamped (low frequency) coefficient r T .
We have shown that this technique allows obtaining the frequency dispersion of the EO coefficient from DC up to at least 500 MHz, mainly limited by the rising time of the voltage pulse [START_REF] Abarkan | Frequency and wavelength dependences of electro-optic coefficients in inorganic crystals[END_REF]. It is to be mentioned that the values of the coefficients obtained by these techniques are absolute values.
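The ratio-of-transforms step behind Eq. (3) can be sketched numerically as follows; here a discrete Fourier transform stands in for the Z-transformation, and the recorded signals delta_i and delta_v are placeholders for the oscilloscope traces.

```python
import numpy as np

def r_eff_dispersion(delta_i, delta_v, dt, lam, D, L, n_eff, I0):
    """Frequency dispersion of the EO coefficient from the time responses (Eq. (3))."""
    di = np.fft.rfft(delta_i)                 # Δi(ν)
    dv = np.fft.rfft(delta_v)                 # ΔV(ν)
    nu = np.fft.rfftfreq(len(delta_i), dt)
    r = 2.0 * lam * D * di / (np.pi * n_eff**3 * L * I0 * dv)
    # in practice the spectra are averaged/windowed and bins where ΔV(ν)
    # is negligible are discarded before taking the ratio
    return nu, np.abs(r)
```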
For the specific experiments that are concerned by this work, electro-optic measurements were carried out by both MDM and TRM methods presented above. Within experiments devoted to the determination of the EO coefficient r 222 the beam propagates along the c-axis of the sample and an external electric field was applied along the a-axis (or equivalently the b-axis).
The measurements were carried out at room temperature using a He-Ne laser (λ = 633 nm). In the MDM method, we used an ac voltage of 220 V peak-to-peak at 1 kHz, whereas in the TRM method, a pulse of amplitude equal to 700 V was applied to the sample. With these conditions, the sample dimensions and the performance of the optical and electric arrangements used for the experiment, the final uncertainty on the EO coefficients is of the order of 6%. Besides, within the accuracy of the methods, no additional physical effects that could disturb the measurements, such as distortion of the laser beam due to the photorefractive effect or the presence of the quadratic electro-optic effect, were detected. Figure 3 shows the recording of both the applied voltage and the optical signal measured in the 0.8 mol%-Zr doped LN sample at different time scales within the TRM method. In the long-time range, the optical signal oscillates with periods corresponding to the main piezo-electric frequency resonances. At short times, the two insets of Fig. 3 illustrate the EO response for different time scales. For times shorter than 350 ns the oscillations do not exist since the acoustic waves need more time to propagate across the crystal. The value of the intensity corresponding to the plateau appearing for times below 350 ns is much smaller than that obtained for times larger than 200 µs, which indicates a large acoustic contribution (see Table 1).
We can see in Fig. 5 that both clamped and unclamped EO coefficients r 222 present a kink around 2.0 mol% Zr. This kink in the EO properties of LN:Zr was already suggested in literature by Argiolas et al [START_REF] Argiolas | Structural and optical properties of zirconium doped lithium niobate crystals[END_REF]. The values of the low-and high-frequency r 222 coefficients in Zr-doped LN in the case of 2.0 mol% are equal to r 222 T = 5 ± 0.4 pm/V and r 222 S = 2.9 ± 0.3 pm/V, respectively corresponding to a decrease of about 25% of the values obtained in the other samples in the zirconium doped series. It is important to note that the observed kink has a value much greater than the uncertainty obtained in the measurements (equal to 6%). We note that contrary to Hf-doped LN crystals [START_REF] Abarkan | Electro-optic and dielectric properties of Hafnium-doped congruent lithium niobate crystals[END_REF][START_REF] Nava | Zirconium-doped lithium niobate: photorefractive and electro-optical properties as a function of dopant concentration[END_REF], the value of r 222 coefficient versus Zr concentration in LN:Zr crystals has to be emphasized since this coefficient presents a nonmonotonous dependence as for crystals doped with other ions such as Zn or Mg [START_REF] Abdi | Influence of Zn doping on electrooptical properties and structure parameters of lithium niobate crystals[END_REF][START_REF] Grabmaier | Growth and investigation in MgO-doped LiNbO 3[END_REF].
Dielectric measurements
In ferroelectric inorganic materials, it was established that the EO properties are linked to the linear dielectric properties. In particular, it was shown that the frequency dependence [START_REF] Salvestrini | Comparative measurements of the frequency dependence of the electrooptical and dielectric coefficient in inorganic crystals[END_REF] of an EO coefficient reproduces the behavior of the corresponding dielectric permittivity ε. Such a link between ε and r still exists in their dependence on the dopant composition.
We have undertaken the measurements of the dielectric permittivity ε 22 as a function of frequency for all crystal samples. Using a low voltage equal to 1 V, inducing an electric field in the sample under test equal to 1 kV/m, the dielectric measurements were done by means of two impedance analyzers HP4151 and HP4191A in frequency ranges from 1 Hz to 13 MHz and from 1 MHz to 1 GHz, respectively. The frequency dispersion of the dielectric permittivity ε 22 of the 0.8 mol%-Zr doped LN crystal measured along the c-axis is shown in Fig. 4. All samples present the same general response with frequency, and the Zr-concentration dependence of the clamped and unclamped permittivities ε 22 T and ε 22 S presents the same kink behavior as the EO coefficient r 222 , as shown in Fig. 6.
All data are reported in Table 1. As expected, the behavior of ε 22 with frequency is very similar to that of r 222 . In Fig. 4, we can see that ε 22 displays a jump between both sides of the piezoelectric resonances, which corresponds to the electromechanical contribution Δε to the static permittivity ε T . This will be commented on and discussed below.
Discussion
We discuss the frequency and Zr concentration dependences of the EO coefficients. It is well known that the frequency dependence of the EO coefficients in crystals reflects the various physical processes contributing to the EO effect according to the working frequency [START_REF] Kaminow | An Introduction to Electro-Optic Devices[END_REF]. In inorganic crystals the contribution arising from optical phonons is responsible for the large value of the high-frequency coefficient r S . In piezo-electric crystals, the EO coefficient measured under a low-frequency or DC electric field arises from an additional contribution related to the crystal deformation via the piezo-electric and the elasto-optic effects. This additional contribution is the so-called acoustic or piezo-optic contribution, denoted r a , and is therefore given by the difference between the unclamped r T and clamped r S coefficients. It can thus be derived from the experimental values of the EO coefficients measured below and above the piezo-electric resonances. We can observe a slight dependence of the acoustic contribution on Zr concentration, which is found to be r 222 a = r 222 T -r 222 S = 2.1 ± 0.4 pm/V in the case of the 2.0% Zr-doped LN crystals. Besides, even if the dependence of all physical properties on zirconium concentration in lithium niobate is still not available, we are able to interpret our experimental results rather reliably. The acousto-optic contribution to the electro-optic effect, r a , can be estimated from the elasto-optic (Pockels) tensor p^E_ijkl at constant electric field, and from the piezoelectric tensor d_kij [START_REF] Nye | Physical Properties of Crystals[END_REF]:
r^a_ij,k = p^E_ij,lm d_k,lm (4)
Likewise, the difference between the low- and the high-frequency values of the dielectric permittivity recorded on both sides of the acoustic resonances, Δε ij , is expressed as:
Δε_ij = ε^T_ij -ε^S_ij = d_i,kl e_j,kl = d_i,kl C^E_kl,mn d_j,mn (5)
where e is the piezoelectric stress tensor and C^E is the tensor of the elastic constants at constant electric field, or elastic stiffness. Therefore the difference Δε corresponds to the electromechanical contribution to the static permittivity ε T . It is to be noted that the coefficients p^E_ij,lm can also be expressed with the coefficients C^E_ijkl and e_kl using the thermodynamic relations. According to the point group 3m of the LN crystal [START_REF] Nye | Physical Properties of Crystals[END_REF][START_REF] Jazbinsek | Material tensor parameters of LiNbO 3 relevant for electro-and elasto-optics[END_REF], and by application of Neumann's principle to the p, d and C^E tensors followed by the use of the reduced-subscript notation, we obtain the piezo-optic contribution to r 222 from Eq. (4) as
r^a_222 = (p^E_11 -p^E_12) d_22 -p^E_14 d_15 (6)
and the piezo-optic contribution to ε 22 from Eq. (5) as
Δε_22 = 2 d_22^2 (C^E_11 -C^E_12) -4 d_22 d_15 C^E_14 + d_15^2 C^E_44 (7)
Using the values of the piezo-electric and elasto-optic coefficients, available in the literature [START_REF] Warner | Determination of elastic and piezoelectric constants for crystals in class (3m)[END_REF][START_REF] Dixon | A new technique for measuring magnitudes of photoelastic tensors and its application to lithium niobate[END_REF] for the congruent composition only, we found (Eq. (6)) r 222 a = 2.7 pmV -1 , in good agreement with the experimental value, within the experimental error (10%) (see Table 1). This piezo-optic contribution is relatively large in LN since it constitutes nearly 40% of the total value r 222 T ~6.6 pmV -1 . With Eq. (7), also evaluated for pure congruent lithium niobate, we found Δε 22 = 35, which is close to the step directly detected in the experiments between both sides of the piezoresonances.
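The same estimates can be reproduced with representative room-temperature constants of congruent LiNbO3; the values below are approximate literature figures inserted for illustration (not necessarily the exact sets used in the references cited above), and the result of Eq. (7) is converted to a relative permittivity step by dividing by the vacuum permittivity.

```python
# approximate elasto-optic, piezoelectric and elastic constants of congruent LiNbO3
p11, p12, p14 = -0.026, 0.09, -0.075                       # dimensionless
d22, d15 = 20.8e-12, 68.0e-12                              # m/V
C11, C12, C14, C44 = 2.03e11, 0.53e11, 0.75e10, 0.60e11    # N/m^2
eps0 = 8.854e-12                                           # F/m

r_a_222 = (p11 - p12) * d22 - p14 * d15                    # Eq. (6)
d_eps_22 = (2 * d22**2 * (C11 - C12)
            - 4 * d22 * d15 * C14
            + d15**2 * C44) / eps0                         # Eq. (7), made relative

print(f"r^a_222 ~ {r_a_222 * 1e12:.1f} pm/V")   # about 2.7 pm/V
print(f"Delta eps_22 ~ {d_eps_22:.0f}")         # a few tens, of the order of the measured step
```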
We have demonstrated in our experiments that both the unclamped and clamped EO coefficients r 222 and the dielectric permittivity ε 22 present a kink at a threshold zirconium concentration of around 2 mol% and that the steps Δε and r a are only slightly changed when Zr is introduced in the LN lattice. From the above consideration, this large step between low- and high-frequencies in both the EO coefficient r 222 and the dielectric permittivity ε 22 in LN:Zr is mainly due to the large electromechanical process which constitutes the main contribution in the congruent material. We also note that the term including C 44 is the largest in the congruent crystal even if it is too small to explain by itself the piezo-optic contribution to ε 22 .
Thus, the slight changes in the electro-optic coefficient and dielectric constant can be attributed to slight changes in the piezo-electric and elasto-optic coefficients in doped crystals and generally to the strain effects along the a (or b) axis. Now, considering the change in the electro-optic properties appearing in the sample doped with 2 mol% of zirconium, we can note that this concentration corresponds to the threshold concentration determined in absorption measurements. However, Kovacs et al. have shown that the absorption edge is affected by the Li/Nb ratio [START_REF] Kovacs | Composition dependence of the ultraviolet absorption edge in lithium niobate[END_REF]. Therefore, below the threshold concentration, the reduction in the absorption edge can be attributed to the increase of the Li/Nb ratio due to the substitution by Zr ions of Nb ions on the lithium site, i.e. on the antisite Nb Li . The increase of the absorption edge for higher concentrations, above the threshold concentration, is linked to the disorder induced by zirconium incorporation on both Li and Nb sites and to the necessary charge compensation process. Thus, the threshold concentration can be explained by the existence of a doping-induced lattice reorganization. The threshold observed in the EO and optical properties coincides with the photorefractive concentration threshold observed by straightforward photorefractive measurements in Ref. [START_REF] Nava | Zirconium-doped lithium niobate: photorefractive and electro-optical properties as a function of dopant concentration[END_REF].
In addition, concerning the applications of lithium niobate crystals, we have calculated the figure of merit linked to the driving voltage and the power (switching speed) of an EO modulator [START_REF] Yariv | Optical Waves in Crystals[END_REF] or of a Pockels cell [START_REF] Salvestrini | Comparative study of nonlinear optical crystals for electrooptic Q-switching of laser resonators[END_REF]. This figure of merit qualifies EO devices used as modulators or Q-switches and is defined as F = n 7 (r S 222 ) 2 /ε 22 . The values of F obtained in the present study for the LN:Zr samples are listed in Table 1. We remark that F is quasi constant for all samples regardless of the zirconium concentration, even for the crystal at the threshold position. Since the figure of merit F, qualifying electro-optical devices, is preserved by Zr doping, and owing to its high optical damage resistance, the crystal with 2.0 mol% concentration can be a promising material for EO modulation and Q-switching applications.
Conclusion
We have determined the unclamped and clamped values of the electro-optic coefficients r 222 , and the corresponding dielectric permittivity ε 22 as well, in Zr-doped LiNbO 3 crystals with varying Zr concentration. The frequency dependence of the EO coefficient reflects that of the associated permittivity. The piezo-optic contribution of the EO coefficient is much larger in r 222 , which is mainly related to the stronger electric-field-induced deformation along the a (b) axis, as reflected by the difference between the low- and high-frequency values of ε 22 .
Both the EO and dielectric coefficients reveal a small dependence on the Zr content introduced in the LN lattice and present a kink at 2.0 mol%, which is attributed to the strain contribution related to the introduction of Zr ions. Moreover, compared to the undoped congruent crystal, the zirconium-doped lithium niobate crystals present the advantage of a smaller photorefractive damage, especially for a concentration equal to 2.0 mol%, and therefore should be more suitable for EO and NLO applications.
Fig. 1. Absorption edge versus Zr concentration in congruent LN doped crystals evaluated from absorption spectra. Insert: Zoom of the absorption spectra in the range of the absorption edge.
Fig. 2. Optical transmission function of the Sénarmont setup versus the angle of the analyzer β and applied voltage. The point M 0 is the minimum transmission point for which the output optical signal has a frequency twice the frequency of the applied electric field. M 1 is the 50% transmission point yielding the linear replica of the ac voltage.
Fig. 3. Responses Δi(t) to a step voltage ΔV(t) at different time scales in the case of the EO coefficient r 222 in the 0.8%-Zr doped LN single crystal. Measurements were performed at the wavelength of 633 nm.
Fig. 4. Comparison of the frequency dispersions of the dielectric permittivity ε 22 and of the EO coefficient r 222 in the 0.8 mol% Zr-doped LN crystal.
Fig. 5. EO coefficient r T 222 and r S 222 versus Zr concentration in congruent LN.
Fig. 6. Dielectric constant ε 22 T and ε 22 S versus Zr concentration in congruent LN.
Table 1. Absolute values of the r 222 EO coefficient and related parameters of LN:Zr crystals as a function of zirconium concentration: the EO coefficients at constant stress (r T ) were obtained by both the MDM and TRM methods and at constant strain (r S ) by the TRM method, at 633 nm and at room temperature. The dielectric permittivities ε 22 T and ε 22 S were measured at room temperature. The figure of merit F = n 7 (r 222 S ) 2 /ε 22 S was calculated from experimental values. (Column groups: MDM method, TRM method, acoustic contribution, dielectric constants, figure of merit; r 222 T in pm/V.)
"760155",
"4548",
"10185"
] | [
"265400",
"202503",
"202503",
"202503",
"403897"
] |
01306433 | en | [
"spi"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01306433v2/file/MetaReconstructedBone.pdf | Ivan Giorgio
Ugo Andreaus
Francesco Dell'isola
Tomasz Lekszycki
Viscous second gradient porous materials for bones reconstructed with bio-resorbable grafts
Keywords: Metamaterials, second gradient materials, porous media, viscous dissipation, bone remodelling 2010 MSC: 00-01, 99-00
It is well known that size effects play an important role in the mechanical behavior of bone tissues at different scales. In this paper we propose a second gradient model for accounting these effects in a visco-poro-elastic material and present some sample applications where bone is coupled with bioresorbable artificial materials of the kind used in reconstructing surgery.
Introduction
Substantial size effects are known to occur in the elastic behavior of (i) single osteons [START_REF] Lakes | On the torsional properties of single osteons[END_REF], (ii) human compact bone [START_REF] Frasca | Strain and frequency dependence of shear storage modulus for human single osteons and cortical bone microsamples-size and hydration effects[END_REF][START_REF] Yang | Transient study of couple stress effects in compact bone: torsion[END_REF][START_REF] Yang | Experimental study of micropolar and couple stress elasticity in compact bone in bending[END_REF][START_REF] Park | Cosserat micromechanics of human bone: strain redistribution by a hydration sensitive constituent[END_REF][START_REF] Buechner | Size effects in the elasticity and viscoelasticity of bone[END_REF], (iii) human trabecular bone [START_REF] Harrigan | Limitations of the continuum assumption in cancellous bone[END_REF][START_REF] Ramézani | Size effect method application for modeling of human cancellous bone using geometrically exact cosserat elasticity[END_REF]. In the first case, the size effects are attributed to compliance of the interfaces between laminae. In the second case, there is experimental evidence that the cement lines as compliant interfaces account for most of the difference in stiffness between osteons and whole bone. In the third case, continuum properties vary by more than 20-30% over a distance spanning three to five trabeculae and hence a continuum model for the structure is suspect [START_REF] Harrigan | Limitations of the continuum assumption in cancellous bone[END_REF]. Therefore, Ramézani et al. [START_REF] Ramézani | Size effect method application for modeling of human cancellous bone using geometrically exact cosserat elasticity[END_REF] used the Cosserat theory to describe the hierarchical multi-scale behavior of trabecular human bone using micro-CT images, namely: i) macroscale, dealing with cancellous bone or spongy bone at real size; ii) meso-scale, representing non-homogeneous and stochastic network clusters; iii) micro-scale, indicating the micro-randomness and heterogeneous deformations; iv) sub-micro-and nano-scale, showing single lamellas including collagen fibers and apatite crystals. Generally speaking, the limitations of the continuum assumption appear in two areas: near biologic interfaces, and where there are large stress gradients. To incorporate the scale of the microstructure of a heterogeneous material within the continuum framework, a number of phenomenological 'remedies' have been proposed that involve the relaxation of the local action hypothesis of classical continuum mechanics. 
Such enriched (or enhanced) continuum mod-els aim at including information on the microstructure and can be classified into three main groups [START_REF] Fatemi | Generalized continuum theories: Application to stress analysis in bone[END_REF], namely: (i) non-local integral models [START_REF] Kröner | Elasticity theory of materials with long range cohesive forces[END_REF][START_REF] Eringen | On nonlocal elasticity[END_REF], (ii) higher-order gradient models [START_REF] Madeo | A second gradient continuum model accounting for some effects of micro-structure on reconstructed bone remodelling[END_REF][START_REF] Dell'isola | The postulations á la D'Alembert and á la Cauchy for higher gradient continuum theories are equivalent: a review of existing results[END_REF][START_REF] Alibert | Second-gradient continua as homogenized limit of pantographic microstructured plates: a rigorous proof[END_REF] and (iii) micropolar theories [START_REF] Cosserat | Théorie des corps déformables[END_REF][START_REF] Altenbach | On the linear theory of micropolar plates[END_REF][START_REF] Altenbach | On generalized Cosserat-type theories of plates and shells: a short review and bibliography[END_REF]. Bleustein [START_REF] Bleustein | A note on the boundary conditions of Toupin's strain-gradient theory[END_REF] showed how the boundary conditions of a linear theory of an elastic continuum with micro-structure [START_REF] Mindlin | Micro-structure in linear elasticity[END_REF] can be reduced to those of a corresponding linear form of a strain gradient theory [START_REF] Toupin | Elastic materials with couple-stresses[END_REF]. Following this way of thinking, second gradient materials can be interpreted as a particular limit case of micromorphic (or micropolar) media because they can be deduced from micromorphic ones by constraining the micromorphic kinematic descriptors to be equal to the classical strain ones by introducing internal constraints and Lagrange multipliers. We remark that this constrained approach which is rigorous in a finite-dimensional space, it is assumed reasonably acceptable in an infinite-dimensional space on the basis of an argument of analogy. This paper is inspired by the more general framework of a research oriented to design the mechanical characteristics of the biomaterial constituting the graft, namely mass density and resorption velocity, in order to optimize the mass density distribution of the growing bone tissue. The continuum model employed in this paper is accordingly richer than standard Cauchy continuum, including higher gradients of displacement in the deformation energy. The addition of terms in the energy involving second gradient of the displacement arises from the consideration of the geometry of the trabecular structure of the bone. Trabeculae are indeed organized (locally) as a lattice system oriented along the principal stress directions [START_REF] Antonio | Orientation of orthotropic material properties in a femur FE model: A method based on the principal stresses directions[END_REF]. Since a not negligible amount of deformation energy is stored in the form of bending of trabeculae [START_REF] Parr | Finite element micro-modelling of a human ankle bone reveals the importance of the trabecular network to mechanical performance: New methods for the generation and comparison of 3D models[END_REF], a classic Cauchy model is not sufficiently rich and instead terms in the energy describing the curvature of the microstructure have to be considered. 
This fact naturally leads to second gradient energy models. More generally, it has been proven [START_REF] Pideri | A second gradient material resulting from the homogenization of an heterogeneous linear elastic medium[END_REF] that high contrast at micro-level of mechanical properties can impose at macro-level the need of introducing deformation energies depending on higher displacement gradient. In general, generalized continuum theories such as couple stress and micropolar have degrees of freedom in addition to those of classical elasticity [START_REF] Steigmann | Isola, Mechanical response of fabric sheets to three-dimensional bending, twisting, and stretching[END_REF][START_REF] Andreaus | Numerical simulations of classical problems in two-dimensional (non) linear second gradient elasticity[END_REF]: however, in the case of the second gradient models this is not needed [START_REF] Bleustein | A note on the boundary conditions of Toupin's strain-gradient theory[END_REF]. All such theories are thought to be applicable to materials with fibrous or granular structure. Experimentally Yang and Lakes measured the effect of size on apparent stiffness of compact bone in quasi-static torsion [START_REF] Yang | Transient study of couple stress effects in compact bone: torsion[END_REF] and bending [START_REF] Yang | Experimental study of micropolar and couple stress elasticity in compact bone in bending[END_REF].
Material and methods
The considered specimen is constituted by the union of two bi-dimensional square portions, one constituted of bone tissue and the other of biomaterial; the square size is L/2 = W = 0.5 cm. The mass densities of the two materials are initially assigned in each zone and they will evolve in the subsequent remodeling process according to the mechanical and biological laws presented in the following (see Eq. ( 17)). The support conditions on one edge are shown in a self-explanatory way in Fig. 1.
B M
Figure 1: Sample under study at initial stage. The labels "B" and "M" stand for bone and graft material, respectively.
A traction distribution corresponding to a pure bending is applied to the opposite edge as shown in Fig. 1; the load is harmonically variable with a low frequency Ω in order to activate the component of the stimulus which is related to dissipation, because this phenomenon plays a key role in the bone functional adaptation, as discussed in [START_REF] Giorgio | A visco-poroelastic model of functional adaptation in bones reconstructed with bio-resorbable materials[END_REF]. In particular, we set
f b (x 2 , t) = 2x 2 W -1 [F 0 + F 1 sin(Ωt)] (1)
Some relevant results will be presented with reference to the probe point P m in the material zone (Fig. 1).
Governing Equations
Kinematics. In order to give a macroscopic description of the system under study constituted by an insert of bioresorbable grafting material and a piece of bone, i.e. a porous mixture, we introduce the placement field:
χ : (X, t) → x (2)
which takes each point of body X in the reference configuration B and time t ∈ R into a place x in the current configuration. Therefore, we consider the solid-matrix macroscopic displacement (u = x -X) as a basic kinematical descriptor and use the Saint-Venant strain tensor
E ij (X, t) = 1 2 (u i,j + u j,i + u i,k u k,j ) (3)
to take elastic deformations into account. Because of the porous nature of our system, we introduce another independent kinematical descriptor to describe the microdeformations of pores inside the solid matrix of the system. In particular, we introduce the change of the Lagrangian porosity, i.e. the change of the effective volume of the fluid content per unit volume of the body with respect to an equilibrium volume [START_REF] Biot | Mechanics of deformation and acoustic propagation in porous media[END_REF]. In detail,
ζ(X, t) = φ(χ(X, t), t) -φ * (X, t) (4)
where φ and φ * are the Lagrangian porosity related to the current and the reference configuration, respectively. By adopting the approach of the mixture theory, these porosities can be expressed as follows
φ = 1-(ρ b /ρ b +ρ m /ρ m ), φ * = 1-(ρ * b /ρ b +ρ * m /ρ m ) (5)
where ρ b and ρ m are the apparent mass densities of bone tissue and artificial material, respectively; the superimposed hat denotes the true densities, while the superscript * indicates all quantities in the reference configuration.
Variational equation of motion. As already mentioned, the bone is organized at micro-level as a three-dimensional porous network of interconnected trabeculae (cancellous bone). It can be also seen as a quasi-periodic system of cylindroid structures, i.e. osteons, (cortical bone) characterized by a high contrast of mechanical properties between bending and extension. Therefore, using the classical framework of Poromechanics (Biot [27], Cowin [START_REF] Cowin | Bone poroelasticity[END_REF]) and second gradient continua (Mindlin [19], Toupin [START_REF] Toupin | Elastic materials with couple-stresses[END_REF]), for the potential energy-density -potential energy per unit of macro-volume-we take a homogeneous, quadratic function of the variables E, ∇E, ζ and ∇ζ [START_REF] Placidi | Gedanken experiments for the determination of two-dimensional linear second gradient elasticity coefficients[END_REF][START_REF] Dell'isola | Generalized Hooke's law for isotropic second gradient materials[END_REF][START_REF] Dell'isola | The postulations á la D'Alembert and á la Cauchy for higher gradient continuum theories are equivalent: a review of existing results[END_REF]]
E = 1 2 λ(ρ * b , ρ * m ) E ii E jj + µ(ρ * b , ρ * m ) E ij E ij + 4 α 1 (ρ * b , ρ * m )E ii,j E jk,k + α 2 (ρ * b , ρ * m )E ii,j E kk,j + 4 α 3 (ρ * b , ρ * m )E ij,i E kj,k + 2 α 4 (ρ * b , ρ * m )E ij,k E ij,k + 4 α 5 (ρ * b , ρ * m )E ij,k E ik,j + 1 2 K 1 (ρ * b , ρ * m )ζ 2 + 1 2 K 2 ζ ,i ζ ,i -K 3 (ρ * b , ρ * m ) ζ E ii (6)
where λ and µ are the Lamé parameters
λ = ν Y (ρ * b , ρ * m ) (1 + ν)(1 -2ν) , µ = Y (ρ * b , ρ * m ) 2(1 + ν) , (7)
here expressed in terms of the Young modulus of the mixture are the maximal elastic moduli. The second gradient stiffness coefficients are assumed to be:
Y = Y Max b (ρ * b /ρ b ) 2 + Y Max m (ρ * m /ρ m ) 2 (8
α 1 = α 2 = α 4 = Y (ρ * b , ρ * m ) 2 , α 3 = 2Y (ρ * b , ρ * m ) 2 , α 5 = 1/2Y (ρ * b , ρ * m ) 2 (9)
being a suitable scale length of the microstructure, i.e., in this case, it is related to the diameter of trabeculae or osteons which are around 200 µm. The coefficient K 1 is a coefficient of compressibility related to the bone marrow inside pores and can be evaluated as
K 1 = φ * K f + (α B -φ * )(1 -α B ) K dr -1 (10)
depending on the stiffness of marrow K f and the drained bulk modulus of the porous matrix K dr = Y /(3(1 -2ν)), α B being the Biot-Willis coefficient. K 2 , assumed to be constant, is a parameter which takes into account the gradient of porosity, i.e., a non-local stiffness related to interaction phenomena among neighboring pores. The coupling between microstructure and solid bulk is assumed to be
K 3 = ĝ(φ * ) λ K 1 ( 11
)
where the weight
ĝ(φ * ) = 0.9 π atan 15 φ * - 1 2 + atan 15 2 (12)
is a monotonically increasing function which modulates the micro-macro coupling depending on the reference porosity.
To model dissipative phenomena, we employ a Rayleigh dissipation function
2D s = 2µ v Ėij Ėij - 1 3 Ėii Ėjj + κ v Ėii Ėjj ( 13
)
related to the solid-matrix macroscopic rate strain. The material parameters κ v and µ v are the bulk and shear viscosity coefficients, respectively. The Generalized Principle of Virtual Work, including dissipation effects and neglecting inertia terms, for independent variations δu i and δζ, therefore, runs as follows
- B δE d B + B δW ext d B = B ∂D s ∂ Ėij δE ij d B ( 14
)
where δW ext is the variation of the work done by external actions. The assumed potential energy-density ( 6) is the motivation for the adoption of the following form for the variation of work done by external action:
δW ext = ∂τ B τ i δu i dS + ∂τ B T α δu α,j n j dS + ∂B Ξ δζdS, (15)
where body and line forces are neglected, τ i is the surface traction on the boundary, and T α is an external double force per unit of area, i.e., a mechanical 'dipole action' [START_REF] Green | Multipolar continuum mechanics[END_REF] which consists in two opposite forces, with zero resultant, acting on two points which can be considered distinct only at the length scale of the micro-structure, i.e. trabeculae or osteons (see for more details on physical interpretation [START_REF] Mindlin | Micro-structure in linear elasticity[END_REF][START_REF] Polizzotto | A note on the higher order strain and stress tensors within deformation gradient elasticity theories: Physical interpretations and comparisons[END_REF]) and resulting in a macro-deformation exclusively in a higher gradient continuum (while such an external action is irrelevant in a simple Cauchy theory), Ξ is a micro-structural action which describes the local dilatant behavior of a porous material induced by pore opening, capillary interaction phenomena among neighboring pores.
It is worth noting that a second gradient continuum model can be deduced via an homogenization procedure based on micro-macro identification to obtain an equivalent model which reproduces at the macro scale the behavior of the material characterized by a complex microstructure at micro-scale (see e.g. [START_REF] Goda | A micropolar anisotropic constitutive model of cancellous bone from discrete homogenization[END_REF][START_REF] Alibert | Second-gradient continua as homogenized limit of pantographic microstructured plates: a rigorous proof[END_REF][START_REF] Cecchi | Heterogeneous elastic solids: A mixed homogenization-rigidification technique[END_REF]). In this homogenization process, some critical issues as for instance a damage occurring in the bone tissue or in the biomaterial can be taken into account (see e.g. [START_REF] Placidi | A variational approach for a nonlinear 1-dimensional second gradient continuum damage model[END_REF][START_REF] Placidi | A variational approach for a nonlinear onedimensional damage-elasto-plastic second-gradient continuum model[END_REF][START_REF] Misra | Micromechanical model for viscoelastic materials undergoing damage[END_REF]).
Interface modeling. Interface conditions can be formulated by adding to the energy density [START_REF] Buechner | Size effects in the elasticity and viscoelasticity of bone[END_REF] an internal boundary extra term:
E int = 1 2 K ζ [[ζ]] 2 + 1 2 K u [[u]]•[[u]]+ 1 2 K ∇u [[(∇u)n]]•[[(∇u)n]] (16
) This simple interface potential energy-density can be useful to better describe the real conditions of connection between bone tissue and artificial material with a proper choice of the stiffnesses characterizing the mechanical properties of the junction layer, K ζ , K u and K ∇u , which, respectively, define an elastic interaction due to the jump in the fields of ζ, u and ∇u for the two different regions jointed. The symbol [[•]] stands for the jump of any field f (X) through the interface, i.e., [
[f ]] = f + -f -).
Evolution rules. The evolution of the apparent mass densities is governed by the rules [START_REF] Lekszycki | A mixture model with evolving mass densities for describing synthesis and resorption phenomena in bones reconstructed with bio-resorbable materials[END_REF][START_REF] Andreaus | Modeling of the interaction between bone tissue and resorbable biomaterial as linear elastic materials with voids[END_REF]:
ρ * b = A b (S) H (φ) with 0 < ρ * b ρb ρ * m = A m (S) H (φ) with 0 < ρ * m ρ * m (X, 0) (17)
which are assumed to depend on the mechanical stimulus S resulting from an external applied load and the current porosity φ. The functions A b and A m are taken to be A {b,m} (S) = s {b,m} S for S ≥ 0 r {b,m} S for S < 0 [START_REF] Bleustein | A note on the boundary conditions of Toupin's strain-gradient theory[END_REF] with different constant rates for synthesis (s b and s m = 0) and for resorption (r b and r m ). Of course, we remark that the synthesis of bio-resorbable material is not allowed, i.e., s m = 0. The weight function H is characterized by a Ulike shape with a maximum in the neighborhood of φ = 0.5 to emphasize the most effective conditions in the remodeling process. In the definition of the stimulus, the presence of a lazy zone, bounded by two thresholds (P
The signal P related to the sensing biological activity is assumed to take the following form [START_REF] Giorgio | A visco-poroelastic model of functional adaptation in bones reconstructed with bio-resorbable materials[END_REF]:
P (X, t) = B (a E s + b D s ) (ρ * b ) e -X-X 0 2 2D 2 dX 0 B e -X-X 0 2 2D 2
dX 0 (20) characterized by a reference length D which delimits the range of action of the biological processes. E s is the density of strain energy of the solid matrix (including first and second gradient terms) and = 0.2ρ * b /ρ b is the density of active sensor cells assumed to be present only in the living bone tissue.
Numerical results
In order to show that the model presented is able to describe some possible mechanisms of bone remodeling among those induced by the presence of bio-material, the simple case outlined in Sec. 2 is numerically analyzed. For this purpose, an iterative procedure is implemented as sketch below. At first, the apparent mass density fields, for both bone tissue and bio-material, are evaluated starting from the given initial values and at any subsequent time step; subsequently, at each time step the mechanical equilibrium Eq. ( 14) is solved with a FE method (using COM-SOL Multiphysics) under the specific boundary conditions considered; then, using the results of the two previous steps, the stimulus distribution in the body is computed with Eq. ( 20); finally, by means of the remodeling evolution Eq. ( 17), the rate of the apparent mass densities is determined and so the procedure goes back to the first step. Specifically, to overcome the limitation of a finite element code (COMSOL Multiphysics) optimized for constitutive models of the first gradient continua, it was decided to find a micromorphic model of the first gradient that was equivalent to the second gradient model, using the technique of Lagrange multipliers [START_REF] Forest | Micromorphic approach for gradient elasticity, viscoplasticity, and damage[END_REF]. A further improvement can be achieved employing the recent developed tools of the isogeometric analysis particularly suitable for their inherent high continuity to treat the second gradient models (see for more details e.g. [START_REF] Cazzani | Isogeometric analysis of plane-curved beams[END_REF][START_REF] Cazzani | Con-stitutive models for strongly curved beams in the frame of isogeometric analysis[END_REF][START_REF] Greco | An isogeometric implicit G1 mixed finite element for Kirchhoff space rods[END_REF][START_REF] Greco | An implicit G1 multi patch B-spline interpolation for Kirchhoff-Love space rod[END_REF]). Starting from an initial apparent mass density uniform for the bone tissue (ρ * b = 0.5ρ b ) and the biomaterial (ρ * m = 0.5ρ m ), we perform a parametric analysis by varying the characteristic length of the second gradient governed by Eq. ( 9). In all the following figures we plot mass densities normalized with respect to the maximum values. In Fig. 2, we compare the model of the first gradient ( = 0L) with that of the second gradient, for two different values of the characteristic length ( = {0.05L, 0.1L}), as reported close to the respective curves in figure. Here and henceforth, the time is normalized for the reference value of 6.048 × 10 5 s which corresponds about one week, lengths are normalized by L. The three dashed curves refer to the bone, while the three continuous curves relate to the material. It is observed that as the characteristic length increases, the evolutionary process takes place in a longer time and therefore the material undergoes a greater resorption and this entails that the bone has available a greater space to grow. Therefore, in the stationary state, the bone attains a greater mass density (and the material a lesser one) as the characteristic length increases. We note that this behavior is similar to that found in the case of the first gradient (Fig. 3), when increasing values of the maximum Young modulus Y Max m of the bio-material are assumed. 
In this case, the increased stiffening, ascribable to the second gradient of displacement introduced in the constitutive relation of the mixture, is due to bending deformation at the level of microstructure. Therefore, neglecting this effect means disregarding an important contribution to the deformation energy, as seen from the simulations, which plays a significant role in the remodeling process. Figure 4 shows the situation at the end of the adaptation process in terms of apparent mass density of bone (Fig. 4a) and biomaterial (Fig. 4b), when the model of the first gradient is used. Here and henceforth, the labels "B" and "M" stand for bone and graft material, respectively. Figure 4 serves as a reference to compare the results found via the second gradient model, respectively with characteristic lengths 0.05L (Fig. 5) and 0.1L (Fig. 6). Figures 5 and6 show the distributions of the apparent mass density of (a) bone and (b) biomaterial at the end of the process with the second gradient model with = 0.05L (Fig. 5) and = 0.1L (Fig. 6). It can be observed that the main differences are located in the area in the vicinity of the application of load (see Fig. 1) and further from the bone area. The portions of resorbed biomaterial and deposited bone grow with increasing characteristic length. To facilitate the understanding of the distribution of mass density in Figs. 4,5, and 6, the level curves in plan have been reported too. In order to design the mass density to optimize the distribution of the material, it is thought to reduce the initial mass density in the neighborhood of the neutral axis (see Fig. 8a), given that the load condition is of bending. Thus, the initial (a) and final (b) states of the remodeling process, both as regards the bone (Fig. 7) and the biomaterial (Fig. 8), were compared in the case in which the initial mass density of the biomaterial is not uniform and the characteristic length is 0.07L. In the present case the neutral axis is aligned in the longitudinal direction of the sample. It is noted that the biomaterial area with greater porosity constitutes a fast track to the bone penetrating inside the area of the material, which in fact is more dense in the area where the initial density of the biomaterial is lower, having a greater available space.
Conclusions
This paper presents a constitutive model for the bonebiomaterial mixture, characterized by taking into account a second gradient model. The aim was to compare the results obtained via a simpler model of the first gradient, noting that the effect of the second gradient is to delay the process of evolution, in a manner similar to what happens when the biomaterial is stiffened under the assumption of first gradient model. This difference can be significant and therefore, it is not possible a priori ignore the effects of a second gradient without capturing this evolutionary aspect. The proposed model is characterized by several material parameters. The need to introduce these parameters is intrinsically linked to the complexity of the analyzed system. Depending on the specific cases analyzed some simplifications are possible, for instance considering the limit cases in which these parameters approach zero or infinity. However, the aim of this paper is to develop a general and flexible model which can be used in many cases significant for the study of bone reconstruction. It is clear that with a micro model very detailed, fewer parameters are required, but it is equally clear that in this case a study on a sample with size suitable for applications is very expensive from a computational point of view. Thus, the use of generalized continua can improve the computational cost without losing too much predictive power at the expense of the introduction of further constitutive parameters. On the other hand, the mechanical system the authors want to study is rather complex: it would be naive to believe that it can be described with a simple model. In any case, it is impossible to do so.
This study was done assuming uniform initial mass densities of bone and biomaterial in their respective areas. It is then analyzed the case in which the mass density of the biomaterial is not uniform at the beginning of the process, resulting in a greater bone growth where the rarefaction of the biomaterial allows it.
) and Poisson ratio. Y Max b and Y Max m
Figure 2 :Figure 3 :Figure 4 :
234 Figure 2: Time evolution of the mass densities of bone (dashed line) and material (solid line) in the probe point Pm when characteristic length is varying ( = {0, 0.05L, 0.1L}).
Figure 5 :Figure 6 :
56 Figure 5: Distributions of the mass densities of (a) bone and (b) material at the end of the process with = 0.05 L.
Figure 7 :Figure 8 :
78 Figure 7: Distributions of bone mass density at the (a) beginning and (b) end of the process.
s ref and P r
ref
for synthesis and for resorption, respectively), is hypoth-
esized where the osteoregulatory balance of the bone is
maintained. Mathematically:
S(X, t) =
P (X, t) -P s ref for P (X, t) > P s ref 0 for P r ref P (X, t) P s ref P (X, t) -P r ref for P (X, t) < P r ref
February 14, 2017 | 28,535 | [
"762932"
] | [
"159496",
"229915",
"229915",
"229915",
"159496",
"159496",
"235985"
] |
01467344 | en | [
"sdu"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01467344/file/2897.pdf | D Dubois
D A Patthoff
R T Pappalardo
Diurnal, Nonsynchronous Rotation and Obliquity Tidal Effects on Triton using a Viscoelastic Model: SatStressGUI. Implications for Ridge and Cycloid Formation
Introduction: Neptune's biggest moon Triton orbits at an almost constant distance of about 355,000 km from its parent body. The satellite has a very low eccentricity (e = 10 -5 ), and rotates synchronously about Neptune. It is thought to have been differentiated enough for the formation of interior solid and even liquid layers [START_REF] Nimmo | Icarus[END_REF]. Generally, diurnal tidal forcing is the main stressing mechanism a satellite with a sufficient eccentricity can experience. Other possibly combined sources participating in the tidal evolution of a satellite can be nonsynchronous rotation (NSR), axis tilt (obliquity), polar wander, and ice shell thickening. Given Triton's current very low eccentricity, the induced diurnal tidal forcing must be relatively non-existent. Triton's eccentricity has most likely changed since its capture [START_REF] Prockter | [END_REF] and this change in eccentricity may account for the formation of surface features and maintaining a subsurface liquid ocean [START_REF] Prockter | [END_REF][START_REF] Mckinnon | Neptune and Triton[END_REF]. Furthermore, obliquity-induced tides have been shown to play a role in Triton's recent geological activity [START_REF] Nimmo | Icarus[END_REF] with its high inclination. Thus, modeling Triton's tidal behavior is essential in order to constrain its interior structure, tidal stress magnitudes, and surface feature formation. The latter stems from the surface expression of compressive and extensive mechanisms in the icy shell [START_REF] Hoppa | [END_REF]. Triton has a very rich and complex surface geology, with wavy and cycloidal-like features; a testament to tidal deformation. Single troughs and double ridge formation are also observed on Triton associated with the cantaloupe terrain [START_REF] Prockter | [END_REF]5].
Model:
We use an enhanced viscoelastic model, SatStressGUI V4.0 [6,7,8], to simulate obliquitydriven tides on Triton, a body in hydrostatic equilibrium [START_REF] Schubert | Jupiter: The Planet, Satellites and Magnetosphere[END_REF]. The (retrograde) satellite rotates with a period of about -5.8 days, and an eccentricity of 10 -5 , whereby the diurnal effect is considered to be almost negligible. In this context, we propose a 4-layer interior model (Table 1) and offer a preliminary Love number calculation. The innermost layer is composed of a relatively thick silicate mantle and lies below a putative liquid water ocean. The outermost layers are divided into an inner low viscosity and prone to convection (mainly controlling the tidal dissipation throughout the satellite's evolution) layer, and an outer more viscous and rigid layer. Here we explore the contribution of obliquity-driven tides on cycloid formation and provide constraints on Triton's interior structure. Table 1. Input rheological parameters for our 4-layer model.
Obliquity-driven tidal estimates without nonsynchronous rotation:
The following figures show the tidal evolution at Triton over the course of 1 orbit around Neptune, with an obliquity of 0.1º and argument of periapsis of 0º. The maps are plotted East positive, with tension marked as positive and compression as negative. The red and blue vectors represent the s 1 and s 2 principal components respectively. A tentative cycloidal ridge is modeled with the starting point at 0º longitude and mid-southern hemisphere 30ºS latitude, a region known for its unique surface geology [5].
Table 2. Cycloid generation parameters. The yield strength is the threshold that initiates fracture in the ice. This fracture will propagate as long as the strength is below this threshold and greater than the propagation strength. The propagation speed is usually taken to be <10 m/s. (A) and 280º past perijove (B). The latter shows a full cyloid modeled over one orbit, which has already started propagating by the end of (A). Note that the stress magnitudes are relatively low (2-3 kPa), and generating wavy features remains difficult. The parameters used here are given in Table 2. Positive values represent tension while negative represent compression.
Obliquity-driven tidal estimates with nonsynchronous rotation:
In this case, we used the same conditions as in the previous case, while adding NSR effects with a 10 Myr NSR period. Table 3. Cycloid parameters used in the NSR case, where the ice is more likely to fail with the inclusion of NSR due to the larger stresses. We calculate the 0.1º obliquity-related Love numbers where the diurnal effect is minimized to be h 2 = 1.2 and k 2 = 0.2, which present the same order of magnitude as [8] and h 2 slightly greater than [START_REF] Nimmo | Icarus[END_REF]. With added NSR effects, h 2 = 1.99 and k 2 = 0.99.
Figure 1 .
1 Figure 1. Here we show stresses for obliquity-induced (0.1º obliquity) tidal stress maps at 40º past perijove
Figure 2 .
2 Figure 2. Obliquity-induced (0.1º) with added NSR (10 Myr period) effects at 40º (C) and 280º (D) past-perijove. The magnitudes are larger than with no NSR, and the induced arcuate feature different from the previous case.
Acknowledgements: We are grateful to JPL's Visiting Student Research Program, NASA's Postdoctoral Program, JPL's Student Undergraduate Research Fellowship and the French Ministry of National Education, Higher Education, and Research. | 5,526 | [
"9287"
] | [
"61129",
"391267",
"111268",
"61129",
"61129"
] |
01467373 | en | [
"phys"
] | 2024/03/04 23:41:44 | 2017 | https://hal.science/hal-01467373/file/epl-emile-corrected.pdf | O Emile
Janine Emile
H Tabuteau
2-D evanescent trapping of colloids in the vicinity of a micrometer waveguide
Keywords: 47, 57, J--Colloidal systems 42, 50, Wk -Mechanical effects of light on material media, microstructures and particles 64, 75, Xc -Phase separation and segregation in colloidal systems
come
laser beams, conventional use of optical tweezers leads to 5 a diffraction limited volume. New strategies based on 6 evanescent waves and/or nanostructures substrates have 7 been developed to reach the nanoscale trapping volume 8 range [5][6][7][8]. In the near field, for example on the sur-9 face of a prism or under focused evanescent wave illumi-10 nation, at total reflection [9,10], the intensity gradient 11 force of the evanescent wave could be steep, leading to a very efficient trapping. Curiously, in one or two dimen-13 sion evanescent trapping, most of the experiments dealt 14 with tapered fibers and submicron waveguides, in order to benefit from a deep penetrating evanescent wave around 16 the guide [11][12][13][14][15]. However, although the intensity at the waveguide surface may be lowered, larger waveguides ticles near a 3.5µm-diameter post, using low power laser 23 light. We report on the trapping mechanism and we show 24 that it is a two step process.
25
Experimental set-up. -The wave guide is a PDMS (PolyDiMethilSiloxane) cylinder or post (diameter d = 3.5 ± 0.5µm, height h = 15 ± 5µm) made on a glass slide by standard soft lithography [16]. We make array of posts (10 × 10 square) with a distance of 500µm between them. The precisions on the post dimensions are linked to the measurements performed with an optical microscope. Even if a single post is involved during an experiment, the data statistics results from the use of several posts. They are stuck to the upper side of the cell (see Fig. 1). Particles flow in between the posts but also underneath them.
The dilute colloidal suspension (10 -4 volume fraction) is made of 2µm-diameter polystyrene particles (Life Tech.) dispersed in heavy water in order to increase the particles number that flow through the posts. We apply a pressure difference across the cell by using a pressure controller system (Elveflow OB1 Mk2) with an accuracy of 0.3 mbar, leading to a permanent flow. Particles are mostly advected, i.e. with a negligible Brownian motion. Depending on the applied pressure, the velocity of the trapped particles is between 8µm/s and 20µm/s corresponding to a Péclet number of the order of 10 the particles are imaged with a ×10 objective followed by an image expanding system and a camera (Edmund Scientific). The frame rate is 12.5 images per second. Images are analyzed using the free available ImageJ software. A Notch filter (Edmund Scientific OD=7) is used to block most of the green light in order to be able to track the particles. In the green, at λ=532nm, the optical index of the PDMS is n 2 = 1.407 [START_REF]Handbook of optofluidics[END_REF], whereas the index of the dilute solution is close to the index of water n 1 = 1.327, and the index of the colloids is n = 1.596. Since n is higher than n 1 we get an efficient trapping of the particles by the optical gradient force [1,2].
Results. -We have been able to track more than 400 particles as they flow in the vicinity of the post and eventually get trapped by the evanescent wave. A typical example of such a trajectory appears on Fig. 2. The first picture (top left corner of Fig. 2a) is the raw color picture. Although most of the green light has been blocked by the notch filter, it still saturates the camera. We have subtracted the green and the blue color images, keeping only the red one, in order to track the particles with a reasonable contrast.
We have then plotted the particle position versus time (see Fig. 2b). The trajectory can be divided in three different regimes. The particle first follows a fluid streamline that goes towards the post from (t = 0s to t = 0.48s), at constant velocity (17µm/s). This first regime is well resolved with our acquisition system, as can be seen on Fig.
2b. Similarly to the particle capture by a collector within a filter [START_REF] Wright | [END_REF], one can define the distance between the particle and the post, b, called thereafter impact parameter.
In a second regime, the particle is captured by the evanescent wave near the post (t = 0.56s). Although the laser intensity is weak, this is a fast regime where the particle is suddenly attracted towards the post along a radial direction and then remains very close to it. We cannot temporally resolve this second regime with the acquisition rate we used. It is surely faster than 0.08 s. We introduce a capture angle θ c that corresponds to the angle for which Finally, in a third regime, the particle rotates slowly 95 around the post until it becomes still at a given position 96 in a direction corresponding to an angle θ p (from t = 0.64s 97 to t = 0.88s). This 0.2s long-regime can be also resolved 98 here as can be seen in Fig. 2b. Actually, the particle is not 99 really at rest since it slightly oscillates around the trap-100 ping position. This is probably due to its residual Brow-101 nian motion of the particle. However it oscillates only in 102 the angular direction, meaning that the restoring force is 103 steeper in the radial direction than in the angular direc-104 tion. It is worth noting that when the laser is switched 105 off, the particle departs from the post following the flow 106 direction. Therefore, the particle remains very close to 107 the post only when the laser is hold on. Because of these 108 three very different regimes, the trapping force must have 109 a large intensity range. It thus could be used to probe 110 forces within the suspension such as a Casimir force for 111 example [19], using different sizes of particles.
112
From the analysis of the trajectories, one can investigate 113 the statistics of the impact parameter for all the trapped 114 particles. This is shown in Fig. 2b. The maximum impact 115 Fig. 4a) [START_REF] Kogelnik | Theory of optical waveguides[END_REF]. This leads to the following equation
148 2kn 2 d cos θ = 2φ + 2πm ( 1
)
where k is the wave-vector modulus, θ the incidence angle, TM polarization [START_REF] Born | Principles of optics[END_REF] tan(φ
T E /2) = -(n 1 β)/(n 2 cos θ) tan(φ T M /2) = -(n 2 β)/(n 1 cos θ) (2)
β being equal to β = [(n 2 /n 1 sin θ) 2 -1] 1/2 . Here, since the two optical indexes are rather close to each other, the incidence angles are nearly the same. Nevertheless, Eq. 1 can be solved graphically for each polarization, as shown on Fig. 4b. Actually, the two curves of the left part of Eq. 1 superimpose. According to Fig. 4b, up to 6 modes can propagate. The fundamental mode corresponds to an incident angle which is far above the critical angle θ s = sin -1 (n 1 /n 2 ) whereas higher modes correspond to an incident angle closer to θ s . Besides, since the penetration depth z d equals z d = 1/kβ, the evanescent wave penetrates deeper for the higher modes than for the fundamental mode. The capture is performed by the highest mode, whereas all the modes participate to the trapping. However, since the m = 0 has the lowest penetration depth (of the order of 0.3µm), its intensity variation is the steepest, and it is thus the most efficient mode for trapping. For the m = 6 mode we find a penetration depth of the order of 2µm in agreement with the capture range of 2.25µm found experimentally (see Fig. 2c).
The height of the post is about h = 15µm. The particles are trapped in the upper zone of the post. The laser is injected from below. Then the modes propagating in the guide can be mixed so as the amplitude of the field is distributed among the various modes, although the fundamental mode has the highest amplitude.
We could have calculated the modes within the waveguide using a model of a cylindrical waveguide as it has been done in the literature (see for example [START_REF] Yariv | Photonics: Optical Electronics in Modern Communications 6th Ed[END_REF]). However, there is an assumption that is usually done concerning the polarization of the field that is: the propagation is isotropic and then the field is treated as a scalar quantity. Nevertheless, we observe here an accumulation of the particles at specific angles. As explained in the next paragraph, this accumulation is due to an anisotropic total internal reflection of the field on the post surface. This induces a difference of the evanescent wave intensity between TE and TM polarizations.
Trapping angle. Let us now focus on the trapping angle θ p . The evanescent wave amplitude in the post vicinity equals to t × E 0 , E 0 being the amplitude of the incident wave and t the modulus of the complex transmission coefficient given by [START_REF] Born | Principles of optics[END_REF] t
T E = 2n2 cos θ √ n 2 2 cos 2 θ+n 2 1 β 2 t T M = 2n2 cos θ √ n 2 1 cos 2 θ+n 2 2 β 2 (3)
Such coefficients are plotted on Fig. 4c. One can note that t T M > t T E as it is usually found with evanescent waves [START_REF] Almaas | [END_REF]24]. Precisely, in our case, the laser is linearly polarized with a polarization aligned with a direction corresponding to an angle of 150
F grad = 2n 2 πr 3 c [ m 2 -1 m 2 + 2 ] grad(I) (4)
where r = 1µm is the particle radius, c is the celerity of 3, given that around half of the power is injected in the 229 guide, and that around 0.1% of the initial power ends the 230 m = 6 mode [26], one finds a force in the radial direction that is of the order of F r ≈ ×10 -17 N at a distance of 1µm from the post. Closer to the post since the other modes are also playing a role, the force is much higher.
One can also calculate the intensity gradient in the tangential direction, considering the variation of the intensity of the evanescent TM an TE modes around the post.
gradI t = I T M -I T E π/2(d/2 + r) (6)
where I T M and I T E are the intensities of the TM and TE evanescent modes respectively. According to the expressions of the coefficients in Eq. 3, one finds that the force in the tangential direction is of the order of 1 × 10 -20 N, i.e. 3 orders of magnitude lower than in the radial direction. This is in good agreement with the experimentally estimated force in the tangential direction F t . Since a 10images-per-second-acquisition rate camera enables to resolve the dynamics of the trapping in the tangential direction, the resolution of the dynamics in the radial direction would imply using a camera with a 10 4 -images-per-secondacquisition rate at least.
We have also noticed a slight displacement of the particles in the upward direction. They are propelled along the guide by the radiation pressure, corresponding to the light propagation in the guide as already observed in other tapered waveguides [27][28][29]. Since the guide is only 15µm long, this displacement is hardly noticeable.
Multiple particle trapping.
So far, we have focused on single particle trapping but several particles could be trapped successively. We may then form particle aggregates with various shapes (see Fig. 5). Even in this case, particles accumulate around the θ p direction, partly coating the post in that direction. As their number increases they start forming chains or 2 dimensional planar structures as if the particles themselves form patterned surfaces [30,31], that could be used for new functionalities [32]. Dielectric particles could also form pattern structures [33][34][35][36]. However, here, the evanescent wave is modified by the presence of already trapped particles, and the incoming new colloids attach themselves to the trapped ones, whereas in the mentioned references, the patterning of particles comes from the evanescent field alone. Besides, the particle aggregation as well as the particle dynamics could be dramatically changed depending on the ellipticity and shape of the trapped particle [37,38]. This could indeed leads to new structures with multiple trapped states and particular dynamics.
Conclusion. -As a conclusion, we have experimentally trapped micro-meter size particles in the evanescent wave of a low power laser guided in a 3.5µm diameter multimode waveguide. We find a capture range of 2.25µm from the guide surface. We evidence a migration of the particles around the post towards a specific direction corresponding to the TM direction of the incident light. Such manipulation of micro-objects with low power laser may tants in dedicated microfluidic devices [42]. Our approach 296 can easily be "parallelised" in order to be more efficient, by 297 splitting a high power laser source on thousands of posts.
* * *
We would like to thank A. Hubert for early interest, and
18 should
18 in principle lead to reduced evanescent waves pen-19 etration and steeper intensity variations. It should then 20 allow an efficient trapping at the surface of the guide. In 21 this letter we address the trapping of 2µm-diameter par-22
4 .Fig. 1 :
41 Fig. 1: Experimental set up. Drawing not to scale.
figure Fig. 2b.
Fig. 2 :
2 Fig. 2: a) Dynamics of a typical particle trajectory during the trapping process. The first image is a raw one while the other are corrected and correspond to the various times during the process. Arrow: position of the particle. b) Trajectory of the trapping process. The colored particles correspond to the colored arrows on the images. θc is the capture angle and θp the trapping angle. The zero angle corresponds to the horizontal direction. c) Number of particles captured for different values of the impact parameter b.
Fig. 3 :139
3 Fig. 3: Number of particles trapped versus capture angle θc (red curve), and versus trapping angle θp (blue curve). The step of the angles is 5 • .
149φ
the phase shift experienced at total reflection, and m the 150 mode number. The phase shift is different for a TE or a 151
Fig. 4 :
4 Fig. 4: a) Ray optics interpretation of the optical guiding. b) Graphical resolution of Eq. 1 for d = 3.5µm. c) Transmission coefficients tT E and tT M of the evanescent wave for the two polarizations, and β coefficient versus the angle of incidence.
208
209
We have rotated the polarization of the laser with a po-210 larization aligned with a direction equal to 120 • and 180 • .211We have observed that the particles are being trapped for 212 a trapping angle θ p = 120 • and θ p = 180 • respectively. 213 The particles are thus trapped in a direction that corre-214 sponds to the TM polarization of the laser beam. 215 Let us estimate the order of magnitude of the gradi-216 ent forces. As can be seen on Fig. 2b, we can follow the 217 rotation of the particle around the post. Using the fun-218 damental principle of dynamics, the experimentally esti-219 mated force in the tangential direction is thus of the order 220 of F t ≈ ×10 -20 N. Concerning the force in the radial di-221 rection, according to [25], the gradient force writes 222
223 light, m = n 2
2 /n 1 and I is the evanescent light intensity.224In the radial direction the intensity gradient is equal to225 grad(I) z = I 0 z d exp(-z/z d )(5)where I 0 is the light intensity of the evanescent wave on the 226 surface of the post. Assuming a waist of the m = 6 mode 227 equals to 4µm, taking into account the expressions of Eq.
228
Fig. 5 :
5 Fig. 5: Pictures showing several particles trapped by the light on the post. The particles have been colored for the sake of clarity.
fects in the vicinity of optofluidics devices that modify or 286 are even detrimental to the objects [39], are here severely 287 reduced. | 15,666 | [
"14036"
] | [
"57111"
] |
01348422 | en | [
"info"
] | 2024/03/04 23:41:44 | 2016 | https://enac.hal.science/hal-01348422/file/InformatikSpektrumArticle-Final.pdf | Sheelagh Carpendale
Nicholas Diakopoulos
Nathalie Henry Riche
Christophe Hurter
Data Visualization for Communication and Storytelling
à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Data Visualization for Communication and Storytelling
Sheelagh Carpendale, University of Calgary, CA Nicholas Diakopoulos, University of Maryland, US Nathalie Henry Riche, Microsoft Research, US Christophe Hurter, ENAC -French Civil Aviation University, FR Close to forty researchers and practitioners descended on Schloss Dagstuhl to forge an interdisciplinary agenda on the topic of data-driven storytelling using visualization in early February, 2016. With burgeoning research interest in understanding what makes visualization effective for communication, and with practitioners pushing the envelope of the craft of visual communication, the meeting put different modes of thinking between computer science researchers and data visualization practitioners in close proximity for a week.
Central to our vision of the convening was that the vast majority of research on data visualization to date has focused on designing and implementing novel interfaces and interactive techniques to enable data exploration. Major advances in visual analytics and big data initiatives have concentrated on integrating machine learning and analysis methods with visual representations to enable powerful exploratory analysis and data mining. But just as interactive visualization plays an important role in data analysis scenarios it is also becoming increasingly important in structuring the communication and conveyance of insights and stories in a compelling format. Visual data-driven stories have proliferated in many different forms, from animated infographics and videos, to interactive online visualizations.
One domain where there has been extensive and practical progress on the question of data-driven storytelling is data journalism. News sites like FiveThirtyEight or the New York Times' The Upshot have seen a recent surge of attention and interest as a means of communicating data-driven news to the public. By carefully structuring the information and integrating explanation to guide the consumer, journalists help lead users toward a valid interpretation of the underlying data. Because of the rapid and practical progress of data-driven storytelling in the domain of journalism, our seminar sought to put some of the top practitioners from that field together with computer science researchers to discuss the challenges and opportunities of datadriven communication.
The Dagstuhl seminar was structured to leverage the interdisciplinarity of the attendees by first tapping into a divergent design thinking process meant to enumerate the range of issues that are relevant to data-driven stories. Hundreds of index cards and sticky notes were sacrificed as participants generated ideas (see Figure 1). Groups of participants formed around common interests and each of these major themes were then the focus of discussion. Each work group was geared towards developing an outline and plan to produce a written chapter for a forthcoming edited book on the topic of data-driven storytelling. Some groups met for a day or two and then reformed around other topics, whereas other groups spent the entire week going deep in exploring a single topic. And as if the daytime activities weren't enough, additional evening breakout groups formed around additional topics of interest like Education in Data Visualization, Urban Visualization, and the Technology Stack for data-driven stories.
In-between the intense, small group sessions the entire group came together daily for five-minute lightning talks on a wide array of relevant topics. These stimulating talks primed the group for approaching data-driven storytelling from different perspectives and were an entertaining and informative way to share creative ideas or results in small and easily digestible nuggets. Among the more than 25 lightning talks, topics ranged from storytelling with timelines, to mobile visualization, the use of data comics, visual literacy, affect and color, data-story design workflows, and even the visualization of data through cuisine.
Outcomes
Our initial goal of the seminar was to have groups work intensively on their chosen topic(s) so that an outline and workplan could be developed to write a contributing chapter to a book on data-driven storytelling. The book is underway and will have contributions on each of the main themes outlined above, as well as an introductory chapter by the editors / organizers of the Dagstuhl seminar. Moreover, our creative contributors at the seminar produced other outputs as well: curated lists of example data driven stories, as well as of storytelling techniques were created and will be published online, and a blog has pulled together some of the formative impressions of participants (https://medium.com/data-driven-storytelling).
Below we briefly summarize the expected contents of each of the chapters that will form the book.
Techniques and Design Choices for Storytelling
This chapter will discuss techniques and design choices for visual storytelling grounded in a survey of over 60 examples collected from various online news sources and from award-winning visualization and infographic design work. These design choices represent a middle ground between low-level visualization and interaction techniques and high-level narrative devices or structures. The chapter will define several classes of design choices: embellishment, explanation, exploration, navigation, story presentation, emphasis, focus, and annotation. Examples from the survey for each class of design choices will be provided. Finally, several case studies of examples from the survey that make use of multiple design choices will be developed.
Exploration and Explanation in Data-Driven Stories
This chapter will explore the differences between and integration of exploration and explanation in visual data-driven storytelling. Exploratory visualizations allow for a lot of freedom which can include changing the visual representation, the focus of what is being shown and the sequence in which the data is viewed. They allow readers to find their own stories in the data. Explanatory stories include a focused message which is usually more narrow and guides the reader often in a linear way. Advantages and disadvantages of exploration and explanation as well as dimensions that help to describe and classify data-driven stories will be developed. The space is described by identifying freedom, guidance regarding representation, focus and sequence as well as interpretation as important dimensions of data-driven storytelling and existing systems are characterized along these dimensions. Recommendations will be developed for how to integrate both aspects of exploration and explanation in data-driven stories.
From Analysis to Communication: Supporting the Lifecycle of a Story
This chapter will explore how tools can better support the authoring of rich and custom data stories with natural / seamless workflows. The aim is to understand the roles and limitations of analysis / authoring tools within current workflow practices and use these insights to suggest opportunities for future research and design. First, the chapter will report a summary of interviews with practitioners at the Dagstuhl seminar; these interviews aim to understand current workflow practices for analysis and authoring, the tools used to support those practices, and pain points in those processes. Then the chapter will reflect on design implications that may improve tool support for the authoring process as well as research opportunities related to such tool support. A strong theme is the interplay between analytical and communicative phases during both creation and consumption of data-driven stories.
The Audience for Data-Driven Stories
Creators of data-driven visual stories want to be as effective as possible in communicating their message. By carefully considering the needs of their audience, content creators can help their readers better understand their content. This chapter will describe four separate characteristics of audience that creators should consider: expertise and familiarity with the topic, the medium, data, and data visualization; expectations about how and what the story will deliver; how the reader uses the interface such as reading, scrolling, or other interactivity; and demographic characteristics of the audience such as age, gender, education, and location. This chapter will discuss how these audience goals match the goals of the creator, be it to inform, persuade, educate, or entertain. Then it will discuss certain risks creators should recognize, such as confusing or offending the reader, or using unfamiliar jargon or technological interfaces. Case studies from a variety of fields including research, media, and government organizations will be presented.
Evaluating Data-Driven Storytelling
The study of data-driven storytelling requires specific guidelines, metrics, and methodologies reflecting their different complex aspects. Evaluation is not only essential for researchers to learn about the quality of data-driven storytelling but also for editorial rooms in media and enterprises to justify the required resources the gathering, analyzing and presentation of data. A framework will be presented that takes the different perspectives of author, audience and publisher and their correspondent criteria into account. Furthermore it connects them with the methods and metrics to provide a roadmap for what and how to measure if these resulting data-driven stories met the goals. In addition, the chapter will explore and define the constraints which might limit the metrics and methods available making it difficult to reach the goals.
Devices and Gadgets for Data Storytelling
This chapter will discuss the role of different hardware devices and media in visual data driven storytelling. The different form factors offer different affordances for data storytelling affecting their suitability to the different data storytelling settings. For example, wall displays are well suited to synchronous co-located presentation, while watches and virtual reality headsets work better for personal consumption of pre-authored data stories.
Ethics in Data-Driven Visual Storytelling
Is the sample representative, have we thought of the bias of whoever collected or aggregated the data, can we extract a certain conclusion from the dataset, is it implying something the data doesn't cover, does the visual device, or the interaction, or the animation affect the interpretation that the audience can have of the story? Those are questions that anyone that has produced or edited a data-driven visual story has, or at least should have, been confronted with. After introducing the space, and the reasons and implications of ethics in this space, this chapter will look at the risks, caveats, and considerations at every step of the process, from the collection/acquisition of the data, to the analysis, presentation, and publication. Each point will be supported by an example of a successful or flawed ethical consideration.
Conclusion
The main objective of this Dagstuhl seminar was to develop an interdisciplinary research agenda around data-driven storytelling as we seek to develop generalizable findings and tools to support the use of visualization in communicating information. Productive group work converged to delineate several research opportunities moving forward:
• The need for interfaces that enable the fluid movement between exploratory and communicative visualization so that storytelling workflow is seamless and powerful. • The need to develop typologies of visual storytelling techniques and structures used in practice so that opportunities for supporting these techniques can be sought through computing approaches.
• The need to develop evaluation frameworks that can assess storytelling techniques and tools both scientifically and critically. • The need for design frameworks that can guide the structure of visual information for experiences across different output devices, both existing and future. • The need to understand the audience and their role in co-constructing meaning with the author of a data-driven story. • The need for ethical frameworks that should guide tool development for visual data-driven communication.
These opportunities were productively enumerated at the Dagstuhl seminar and are in the process of being written up as chapters in our book on data-driven storytelling that will appear in early 2017.
More information on the Dagstuhl seminar can be found at http://www.dagstuhl.de/16061.
Figure 1 .
1 Figure 1. Converging on topical groups from hundreds of individual ideas. | 13,038 | [
"924125",
"6085"
] | [
"82231",
"75866",
"46954",
"380071"
] |
01467567 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467567/file/978-3-642-38928-3_10_Chapter.pdf | Jonathan Lewis
email: [email protected]
The Role of Microblogging in OSS Knowledge Management
Keywords: microblogging, twitter, knowledge management
Given that microblogging has been shown to play a valuable role in knowledge management within companies, it is useful to understand how it is being used in relation to OSS. This project studies tweets related to 12 open source projects and keywords, ranging from web content management systems (CMSes) to general office applications. It found considerable differences in the content and exchange of tweets, especially between specialist products such as CMSes and office suites such as OpenOffice. Tweets concerning the more specialist projects tended to provide information rather than updates on the user's current status. We found a high proportion of event-driven traffic for some CMS projects, and a lower proportion for the office products and groups of projects.
Introduction
In any knowledge-intensive project or organization, informal communication is vital to the timely spread of information and ideas. Considerable research has been undertaken on the role of microblogging services such as Twitter in knowledge management within enterprises [START_REF] Boehringer | Adopting enterprise 2.0: A case study on microblogging[END_REF][START_REF] Reinhardt | Communication is the key-support durable knowledge sharing in software engineering by microblogging[END_REF][START_REF] Riemer | Enterprise microblogging[END_REF][START_REF] Guenther | Modeling microblogging adoption in the enterprise[END_REF][START_REF] Riemer | Tweet inside: Microblogging in a corporate context[END_REF]. Many OSS developers and users also use Twitter, but little attention has been paid to the role played by microblogging in exchanging and diffusing knowledge in OSS projects. This study of OSS-related microblogging explores what kind of information is being exchanged on Twitter regarding open source software, and the different ways in which Twitter is being used. In order to answer these questions, a study was made of statuses (Tweets) related to 12 OSS projects, using a taxonomy adapted from previous research on intra-enterprise microblogging. The projects selected had an emphasis on, but were not restricted to, web content management. Table 1 gives an overview of the projects and the numbers of statuses collected, sampled and analyzed.
This paper is organized as follows: Section 2 discusses the selection of projects, presents statistics about the data used, and discusses the challenges in collecting and cleaning Twitter data. Section 3 contains the findings and analysis. Section 4 discusses threats to validity and topics for further research.
2 Analyzing OSS microblogging
Selection of keywords/projects
Six web content management systems (CMSes) were selected, five of them written in PHP (Joomla, TYPO3, Silverstripe, Drupal, and Xoops) and one written in Python (Plone). 1 CMSes were selected because they are tightly focused on a particular product used by specialists (mostly website developers and system administrators) but nevertheless have large enough user and developer populations to generate sufficient traffic. Furthermore they can be compared to each other.
The server-side web scripting language PHP and the web application framework Ruby on Rails (RoR) were added in order to compare communications regarding web development using PHP and Ruby.
OpenOffice and libreoffice were included because, compared to the CMSes, they were likely to have more end-users who were not IT professionals. Their inclusion would thus help to highlight the characteristics of microblogging in the more specialist projects.
Apache and Mozilla, two umbrella organizations for a number of open source projects, were included because, while being similarly Web-centered to the CMSes, their wider focus promised to show us differences between communication in individual projects and larger open source organizations.
Selection of microblogging service
While Twitter is the most well-known microblogging service, there is an alternative service, identi.ca, which uses open source software and, unlike Twitter, allows users to import and export their data using the FOAF standard. It might be the case that open source users and developers would make greater use of identi.ca for ideological reasons. In order to check whether this is the case, the search APIs of both services were queried for the selected keywords/projects between 15 February and 6 March 2013. The results are shown in Table 2. The results clearly show more than two orders of magnitude more activity regarding open source keywords on Twitter than on identi.ca, which justifies the selection of Twitter as the data source for this study.
Data Collection
The Twitter Search API was queried approximately every two hours from 7 May to 26 December 2012 for keywords related to the 12 OSS projects. The keywords were not mutually exclusive, so for example a status containing both "PHP" and "Drupal" could be included in both samples. 2 The statuses were saved to a PostgreSQL database. A total of 1,245,282 statuses containing the 12 keywords were collected, as shown in Table 1.
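The collection script itself is not published with the paper; the following is a minimal sketch, assuming a simple PostgreSQL table and using the pre-v1.1 Twitter Search API endpoint and field names that were still available in 2012 (that endpoint has long since been retired, and the keyword list and table layout here are illustrative only).

```python
import time
import requests
import psycopg2

KEYWORDS = ["joomla", "typo3", "silverstripe", "drupal", "xoops", "plone",
            "ror", "php", "apache", "mozilla", "openoffice", "libreoffice"]

# Hypothetical schema: CREATE TABLE statuses (id BIGINT, keyword TEXT, user_name TEXT,
#   lang TEXT, text TEXT, created_at TEXT, PRIMARY KEY (id, keyword));
conn = psycopg2.connect("dbname=osstweets")

def search(keyword):
    """Fetch up to 15 pages x 100 results, i.e. the 1,500-result ceiling noted later."""
    statuses = []
    for page in range(1, 16):
        resp = requests.get("http://search.twitter.com/search.json",
                            params={"q": keyword, "rpp": 100, "page": page})
        results = resp.json().get("results", [])
        if not results:
            break
        statuses.extend(results)
    return statuses

while True:  # one collection round roughly every two hours
    with conn, conn.cursor() as cur:
        for kw in KEYWORDS:
            for s in search(kw):
                cur.execute(
                    "INSERT INTO statuses VALUES (%s, %s, %s, %s, %s, %s) "
                    "ON CONFLICT (id, keyword) DO NOTHING",
                    (s["id"], kw, s["from_user"], s.get("iso_language_code"),
                     s["text"], s["created_at"]))
    time.sleep(2 * 60 * 60)
```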
After collecting the statuses, a random sample of English-language statuses for each keyword was exported from the database for manual coding. 3 A sample size of 1,000 statuses for each project was coded where available, giving a total of 11,676. This compares with the total of 3,152 Twitter posts examined by Ehrlich and Shami, although their sample was not divided into 12 projects.
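Drawing the coding sample is then a single query per keyword; the sketch below reuses the hypothetical statuses table from the previous example and exports up to 1,000 randomly ordered English-tagged statuses per keyword for manual coding.

```python
import csv
import psycopg2

KEYWORDS = ["joomla", "typo3", "silverstripe", "drupal", "xoops", "plone",
            "ror", "php", "apache", "mozilla", "openoffice", "libreoffice"]

conn = psycopg2.connect("dbname=osstweets")

with conn, conn.cursor() as cur, open("coding_sample.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["keyword", "id", "user_name", "text"])
    for kw in KEYWORDS:
        # 'en' corresponds to the language tag stored at collection time.
        cur.execute("SELECT id, user_name, text FROM statuses "
                    "WHERE keyword = %s AND lang = 'en' "
                    "ORDER BY random() LIMIT 1000", (kw,))
        writer.writerows([kw, *row] for row in cur.fetchall())
```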
Data Cleaning
The sampled data was cleaned by excluding the following kinds of status:
Non-English statuses The number of non-English statuses in the sample was very low because the sample included only statuses tagged as English language.
Irrelevant statuses The number of irrelevant statuses was also generally very low, with the exception of Apache, Mozilla and RoR. "Apache" obviously has many other uses, while "RoR" refers not only to Ruby on Rails but also to such things as the Retraining of Racehorses. There were a large number of irrelevant statuses containing the word "Mozilla" because when Firefox users tweeted titles of web pages on any topic the text would include "Mozilla Firefox."
Robot statuses Many studies of microblogging have focussed on user behavior and accordingly have selected users who were recognizably individual people. Considerable numbers of Tweets, however, are generated automatically. This is also the case where open source software is concerned, and it is a thorny question whether to include them or not. It was decided to exclude these "robot" statuses, above all because they were much more numerous for some keywords/projects than others, making it difficult to compare results. Defining and identifying automatically generated statuses is not easy, but the following categories of statuses were excluded:
1. Machine status Tweets e.g. Fedora [f18-arm] :: [97.44%] Completed - [0] Built - [3] Failed :: Task Error [perl-OpenOffice-UNO-0.07-3.fc17]-[1142021]
2. Repository-generated statuses e.g. cesag committed revision 1694 to the Xoops France Network SVN repository, changing 1 files: cesag committed revi... http://t.co/nXREVkxj
3. CMS change statuses generated by crawlers detecting changes in web page code e.g. http://t.co/YhTLHvCe: Change from NetObjects Fusion to TYPO3 http://t.co/YhTLHvCe #cms
4. Statuses sent automatically when someone posts to a forum e.g. XOOPS: Re: Linux Xoops white page issue? [by kidx] http://t.co/MxW4r2Ba
5. Statuses generated by job sites. Based on examination of the samples, these were defined as posts with 'job' or 'elance' (from 'freelance') in the sender's name.
Duplicate statuses Duplicate statuses were defined as those containing identical text (with any URLs excluded) posted by the same user. Statuses containing identical text but sent by different users were labelled as retweets.
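The job-site and duplicate rules above are mechanical enough to be applied automatically; the sketch below does exactly that (field names follow the hypothetical table used earlier, and the other robot categories, which were identified by inspection, are not encoded here).

```python
import re

URL_RE = re.compile(r"https?://\S+")

def is_job_robot(status):
    # Rule from the text: 'job' or 'elance' (from 'freelance') in the sender's name.
    name = status["user_name"].lower()
    return "job" in name or "elance" in name

def clean(statuses):
    seen = set()   # (user, text with URLs stripped) pairs already kept
    kept = []
    for s in statuses:
        if s.get("lang") != "en":
            continue                              # non-English statuses
        if is_job_robot(s):
            continue                              # job-site robot statuses
        key = (s["user_name"], URL_RE.sub("", s["text"]).strip())
        if key in seen:
            continue                              # duplicate: same user, same text minus URLs
        seen.add(key)
        kept.append(s)
    return kept
```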
Table 3 shows details of data cleaning. Where it was not easy to decide which category a post should fall into, we put the Tweet into the current status category, which we broadened to include notices and reports of events happening on the day of posting e.g. "Great RoR Meetup tonight at @manilla office with @userD and lots of swearing. Good times."
The coding was carried out by the author alone; clearly it would have been better to have three or more coders in order to reduce error and bias. Table 4 shows the numbers of statuses in each category, and Figure 1 displays the same data as proportions.
We also broke down the large "provide information" category into domain-specific subcategories. Table 5 and Figure 2 show the numbers and proportions of subcategories respectively.
Results and Analysis
Status Updates versus Information Provision
Do OSS-related statuses follow general Twitter usage in concentrating on the user's current activities, or do they tend to provide more general information?
Motivation In their study of general Twitter users [START_REF] Naaman | Is it really about me?: message content in social awareness streams[END_REF], Naaman et al. used cluster analysis to suggest that 80% of the users sampled were "Meformers", who mostly tweeted about what they were doing, and only 20% were "Informers" forwarding information. In contrast, Ehrlich and Shami [START_REF] Ehrlich | Microblogging inside and outside the workplace[END_REF] found that about 10% of tweets sent by their regular users in IBM were about current status, compared with just under 30% that were providing information. We can therefore hypothesize that, given a more professional context, OSS-related tweets will have a higher proportion of information provision than "Me now" content. However, this is not to deny the potential value of "Me now" posts as a lubricant maintaining easy communication between participants in OSS projects.
Results Percentages of current status updates were between 0.9% and 8.5% (see Table 4 and Figure 1), while those providing information accounted for between 36% and 73% of each sample.
The two office applications both show a higher proportion of current status posts than the other projects. Reading the statuses did indeed suggest a lot of general users posting tweets along the lines of "man im tryna type this paper but for whatever reason my OpenOffice is gone!" This supports the hypothesis that statuses written by professionals related to their work tend to be more information providing than "Me now"-ish.
A number of factors help to explain the higher proportions of information providing Tweets in our samples than those found by Ehrlich and Shami. First, despite excluding many automatically generated job-related posts at the data cleaning stage, 1035 job-related posts remain in the "provide information" category; it is unlikely that Ehrlich and Shami's subjects would be sending many such posts. Second, the "provide information" category includes advertisements for online resources such as articles, books or software, which are also less likely to be sent by Ehrlich and Shami's subjects. Third, Naaman et al. found that tweets sent from mobile terminals are more likely than those sent from computers to be "Me now" rather than "provide information", and we can expect that most of the users and developers of the projects studied will be working on computers. Fourth, Naaman et al. also found that women rather than men are more likely to post "Me now" statuses, and given the high proportion of male participation in most open source software projects this may be having some effect.
Event-driven traffic
How much activity is prompted by events, both off-and online?
Motivation Research on microblogging has shown that much activity occurs around events, particularly but not confined to offline events such as conferences [START_REF] Vega | Communities of Tweeple: How Communities Engage with Microblogging When Co-located[END_REF][START_REF] Ebner | Getting granular on twitter: Tweets from a conference and their limited usefulness for non-participants[END_REF]. Vega et al. [START_REF] Vega | Where are my tweeps?: Twitter usage at conferences[END_REF] found that conference participants tweeted more than usual in the week of the event. Conferences, and smaller events such as sprints and meetups, are the site of intense discussions between participants on all aspects of their project, so we can expect greater Twitter activity to exchange information with other participants and also to communicate what is happening.
Results Table 5 and Figure 2 show the numbers and proportions of eventrelated tweets for the 12 project keywords.
Proportions of event-related tweets are low for all keywords except TYPO3 and Plone. In the case of TYPO3, there was something of a spike in activity around the conference held in Stuttgart in early October 2012, while for Plone there was a more pronounced concentration of tweets around the conference held in Arnhem in the same month. While it is not easy to distinguish current status and provide information categories concerning events, for neither project did tweets sharing current status e.g. "home and awake after a cool (exhausting) #T3CON12DE -now let's rock #TYPO3, #NEOS and everything like we rocked this conference!" predominate, suggesting that communication is more focused on providing information about the conference to participants and non-participants e.g. "Thanks for attending our talk on #TYPO3 and #TYPO3Neos at #drupalhagen. Our slides are available here https://t.co/j6RJolxx."
It is interesting to note that for both TYPO3 and Plone, when we exclude job-related statuses, we also find that a higher proportion of statuses for these two projects mention "community" than is the case with other projects. 4 Only 4 out of the 35 and 7 out of the 56 statuses for Plone and TYPO3 respectively that mentioned community were event-related, e.g. "The #Plone community is just fucking crazy (in a good way). Prost!!!! #beersprint" Therefore this does not seem to be merely a case of people tweeting about community when surrounded by their fellow developers and users. Further investigation is required to clarify whether this high proportion of event-related statuses and mentions of community is a coincidence, and if not then what is the relationship between the two. While it goes without saying that talking about community does not create one, the spontaneous nature of Twitter makes it a promising medium to explore how open source users and developers think and feel about their projects.
Rival products
To what extent do statuses mention non-code related factors such as rival products?
Motivation In their study of GNOME mailing lists, Shibab et al. [START_REF] Shibab | On the central role of mailing lists in open source projects: An exploratory study[END_REF] found that external factors, and particularly the emergence of rival products, played a significant role in shaping discussions among developers. They used these external developments to explain a decline in the market share of the Evolution mail client as rival products emerged. We can expect that a similar analysis of Twitter activity will show which products are perceived as rivals by those within and without open source projects. These findings, particularly if they could be tracked over years rather than months, would help to explain design decisions and shifts in market share.
Results We excluded job-related statuses, then counted the occurrences of words in the text of statuses for each keyword/project. Table 6 shows a selection of the technology-related words appearing in the top 50 most commonly used words for each project, along with the number of statuses analyzed. Note that the number of occurrences can be greater than the number of statuses because project names are often used more than once in a single tweet. Some of the words are components or closely related to the projects and some are rival products.
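A rough sketch of this kind of term count is given below; it assumes the statuses have already been coded (so that job-related ones can be dropped) and uses an illustrative, deliberately short stopword list.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "for", "and", "of", "in", "on", "is",
             "rt", "with", "http", "https"}

def top_words(statuses, n=50):
    """Most common words in the non-job-related statuses for one keyword."""
    counts = Counter()
    for s in statuses:
        if s.get("category") == "jobs":   # job-related statuses are excluded
            continue
        words = re.findall(r"[a-z0-9+#.]+", s["text"].lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)
```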
Of the CMSes, statuses regarding all except TYPO3 and Xoops mention rival products, predominantly Wordpress. It would be worthwhile to follow this up with a longitudinal study to establish whether Wordpress is gaining in its position as the chief rival to most open source CMSes. Tweets about the office products, as expected, mention each other and Microsoft Office. It is also interesting to see that statuses on PHP and Ruby on Rails both make significant mention of each other.
Threats to Validity and Topics for Further Research
One limitation of this study is that it compares proportions of different kinds of Twitter use across projects/keywords based on samples of similar size, while the absolute numbers of statuses for the different keywords differ greatly. Therefore, while we can conclude that e.g. a higher proportion of Silverstripe-related tweets than Apache-related tweets are concerned with events, that does not mean that Apache users and developers do not make equally active use of Twitter with regard to events, while also using Twitter more actively to e.g. provide links to documentation.
It would be desirable to increase the sample size for each project in order to increase the reliability of the data. It would also be better if the sample came from a full twelve-month period because many projects have large conferences once a year, and the current seven-month period risks excluding some events. Not all statuses containing those keywords during the period were collected, for two reasons. First, the Twitter Search API does not guarantee to return all statuses for a given period. Second, it was not possible to ensure that the data collection script ran uninterrupted for the entire period. When such interruptions occurred and the volume of posts was high, the limit on the number of results returned by Twitter's Search API (1,500 in this case) meant that it was not possible to collect all the statuses posted since the previous collection. These gaps might introduce distortions into the findings, for example if they coincide with a major conference or code release related to a particular project, thus missing a spike in microblogging activity. However, the long period of data collection can be expected to reduce the impact and perhaps to even out such distortions. The incompleteness of the data would also be a problem if the study were aiming to do a network analysis of Twitter-based communication in OSS projects, as there would be many missing directed messages and replies. However, as our purpose here is merely to analyze the content of individual statuses, this is not an issue for the current research. 5 In retrospect, it would have been desirable to include Wordpress, one of the most popular open source CMSes. Unfortunately Twitter's search API does not allow statuses more than a few days old to be collected. 6 The study could also be improved by obtaining information about numbers of followers and giving greater weight to posts that are more read.
Table 1. Overview of Twitter Statuses Retrieved and Sampled, 7 May-26 Dec 2012
Project/Keyword Project type (language) Number of statuses collected Number tagged English Number sampled
Joomla CMS (PHP) 278,821 172,989 1,000
TYPO3 CMS (PHP) 22,800 7,553 1,000
SilverStripe CMS (PHP) 1,985 1,678 1,000
Drupal CMS (PHP) 209,901 158,917 1,000
Xoops CMS (PHP) 1,317 676 676
Plone CMS (Python) 9,507 6,509 1,000
RoR Web app framework (Ruby) 24,603 13,781 1,000
PHP Web scripting language 54,121 32,114 1,000
Apache Group of software projects 283,646 165,988 1,000
Mozilla Group of software projects 309,958 157,094 1,000
OpenOffice Office software 35,718 11,185 1,000
libreoffice Office software 12,905 1,969 1,000
Sum 1,245,282 730,453 11,676
Table 2. Numbers of Statuses Retrieved from Twitter and identi.ca Search APIs, Feb 15-Mar 6, 2013
Project/keyword Twitter identi.ca
Joomla 34,716 10
TYPO3 2,422 0
SilverStripe 457 1
Drupal 24,591 65
Xoops 1,447 0
Plone 1,324 3
RoR 39,916 5
PHP 77,114 875
Apache 43,280 103
Mozilla 41,077 303
OpenOffice 4,160 21
libreoffice 6,567 357
Total 277,071 1,743
Table 3. Details of Data Cleaning
Keyword/ Project Number sampled Irrelevant Not English Robots Duplicates Number after cleaning
Joomla 1,000 1 1 323 27 650
TYPO3 1,000 0 3 45 1 951
SilverStripe 1,000 0 3 121 23 854
Drupal 1,000 1 4 147 9 839
Xoops 676 9 39 164 55 417
Plone 1,000 20 7 196 5 772
RoR 1,000 218 7 78 24 673
PHP 1,000 6 8 297 20 674
Apache 1,000 565 5 148 6 277
Mozilla 1,000 236 34 5 0 725
OpenOffice 1,000 0 18 31 49 902
libreoffice 1,000 0 33 266 18 696
Sum 11,676 1,056 162 1,821 237 8,430
2.5 Data Coding
Ehrlich and Shami [START_REF] Ehrlich | Microblogging inside and outside the workplace[END_REF], building on work by Java et al. [START_REF] Java | Why we twitter: understanding microblogging usage and communities[END_REF] and by Zhao and Rosson [START_REF] Zhao | How and why people twitter: the role that micro-blogging plays in informal communication at work[END_REF], proposed that microblog posts can be sorted into six categories. This study employed Ehrlich and Shami's scheme, which is introduced below along with examples from our data. User names have been changed.
1. Status (giving details of the poster's current activities): "Doing some module troubleshooting on the @Xoops forums #XOOPS #imAwesome :P"
2. Provide information (sharing information/URLs, reporting news): "Get an overview and demonstration of #Acquia Search: http://t.co/9I352KgY #Drupal"
3. Directed posts (addressed to one or more other users): "@userA Ah, well on TYPO3 Sonar you can't -that's decided by the profile. If you want to change it you'd need your own Sonar install imo."
4. Retweets
The remaining two categories of the scheme, as shown in Table 4, are ask question and directed posts containing a question, the latter illustrated by "@userC im just getting into Web Development, but i dont if it would be more begginer firiendly to learn PHP or RoR, help!?" It was not always easy to decide which category a post should fall into; making a distinction between status and provide information proved particularly difficult, e.g. "In thinking caps on Mozilla writeable society session. A few mins, great minds and awesome ideas. See 'em here: http://t.co/dHk1zVSk #pdf12"
Table 4. Numbers of Statuses in Each Category
Keyword/ project Ask question Directed Directed with question Provide info Retweet Current status Sum
Joomla 3 28 7 455 146 11 650
TYPO3 57 75 23 321 419 56 951
SilverStripe 9 52 9 381 371 32 854
Drupal 12 71 12 435 272 37 839
Xoops 2 5 2 240 163 5 417
Plone 11 42 8 277 400 34 772
RoR 13 92 13 417 109 29 673
PHP 6 28 3 455 176 6 674
Apache 8 12 4 151 93 9 277
Mozilla 3 30 5 439 224 24 725
OpenOffice 27 132 21 363 285 74 902
libreoffice 34 119 27 259 198 59 696
Sum 185 686 134 4,193 2,856 376 8,430
Table 5. Subcategories of "Provide Information" Statuses
Keyword/ project Book Code release Event Documen-tation General Jobs News-letter Security Sum
Joomla 8 42 9 10 208 172 1 5 455
TYPO3 3 40 61 10 182 10 13 2 321
SilverStripe 3 86 39 18 175 18 0 42 381
Drupal 14 34 56 52 160 117 0 2 435
Xoops 1 24 0 101 85 15 4 10 240
Plone 18 37 78 14 106 9 0 15 277
RoR 0 3 9 10 67 328 0 0 417
PHP 11 10 3 21 87 312 0 11 455
Apache 6 22 5 22 72 12 0 12 151
Mozilla 0 28 9 6 384 0 0 12 439
OpenOffice 8 36 4 8 288 4 0 15 363
libreoffice 0 32 5 24 183 2 0 13 259
Sum 72 394 278 296 1,997 999 18 139 4,193
Fig. 1. Categories of Twitter Status
Table 6. Frequently occurring words in non-job related statuses
Project/ keyword number of statuses word number of occurrences
Joomla 561 wordpress 106
drupal 31
TYPO3 979 (none)
Silverstripe 921 wordpress 38
Drupal 765 wordpress 75
joomla 44
Xoops 525 (none)
Plone 950 wordpress 36
RoR 368 php 23
PHP 374 apache 110
mysql 83
ruby 30
rails 30
magento 23
Apache 406 mysql 137
openoffice 30
Mozilla 729 google 75
chrome 45
windows 41
ipad 32
microsoft 28
OpenOffice 923 libreoffice 94
excel 93
word 68
microsoft 42
libreoffice 946 openoffice 68
office
Statuses were also gathered for the keywords 'Geeklog', 'Mambo', 'mojoPortal' and 'WebGUI' in order to analyze the PHP-based content management systems of those names, but the keywords were abandoned due to the small proportion of relevant statuses (in the case of Mambo) and the small number of statuses retrieved (in the other cases).
In fact, 10 statuses occurred in samples for two keywords/projects.
59% of the statuses collected were tagged as being in English.
We choose to ignore the large number of mentions of "community" for PHP because 24 of the 27 mentions are due to retweeting of a status praising one company's community engagement. The lack of comments on the retweets suggested an advertising campaign.
In addition, gathering only statuses that contain particular keywords would not capture all the directed Tweets sent between any given set of users.
Some commercial services offer to retrieve historical Tweets. | 25,028 | [
"1001481"
] | [
"485038"
] |
01467568 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467568/file/978-3-642-38928-3_11_Chapter.pdf | Laura Arjona
email: [email protected]
Gregorio Robles
Jesús M González-Barahona
A preliminary analysis of localization in free software: how translations are performed
Keywords: free software, open source, libre software, translations, internationalization, localization, collaborative development, crowdsourcing, open innovation
Software is more than just source code. There is a myriad of elements that compose a software project, among others documentation, translations, multimedia, artwork, marketing. In this paper, we focus on the translation efforts that free, libre, open source software (FLOSS) projects undergo to provide their software in multiple languages. We have therefore analyzed a large number of projects for their support and procedures regarding translations, if they exist. Our results show that many, but not all, projects offer some type of support and specify ways for those wanting to contribute. Usually, projects from a more traditional libre software domain are more prone to ease such tasks. However, there is no general way to contribute, as formats and procedures are often project-specific. We have also identified a high number of translation-supporting tools, with many projects having their own. All in all, information about how to contribute is the main factor for having a very internationalized application. Projects accepting and giving credit to contributing translators have high levels of internationalization, even if the process is rudimentary.
Introduction
It is common that free, libre, open source software (FLOSS) projects follow an open development model, accepting contributions from external developers and easing (at least in theory) the entrance of new members. The availability of source code, the use of version control systems, software forges and public mailing lists to manage the project are technical means to put in practice the freedoms that are guaranteed by the license of the software. However, many of these tools, documentation and support are focused on (source code) developers, who have the technical skills to use and understand them. Beyond source code, there is a myriad of elements that compose a software project, such as documentation, translations, multimedia, artwork, marketing, among others.
In this paper we put the focus on translations. We analyze how they are managed in a variety of FLOSS projects, whether they have a defined process, and the tools and support that are provided. It should be noted that the use of libre software licenses creates a scenario in which any modification to the software is allowed, along with redistribution of those modified versions. This means, for example, that end-users or any third party may translate the software into their desired language or dialect, without the need for permission. This legal viability is usually viewed as an advantage of libre software versus proprietary alternatives, both by users and by developers. For users, it is a way to guarantee the availability of the software in their language, regardless of the interests of the developer group or company. For developers, it is a way to increase the dissemination of the program and to attract new users and new contributors to improve the project, especially when the localization process happens 'inside' the project. In order to ease this, libre software developers have created several means to materialize and take advantage of the possibility to internationalize and localize a libre software project, in a similar way to what they do with coding tasks: developing helpful tools and collaborative platforms, giving credit to each contribution and creating a translator community around the project.
The main contributions of this paper are the following:
- It provides a complete perspective of translations in FLOSS. To the knowledge of the authors, there are few in-depth analyses of the field of translations in libre software.
- It looks at whether FLOSS projects are open to external contributions in the case of translations, and whether they ease this task by providing tools, support and guidance.
- It studies whether FLOSS projects have standard procedures and common tools that allow the knowledge acquired from translating in one project to be re-used in other projects.
The structure of this document is as follows: next, we will introduce some definitions and concepts related to translation tasks. In Section 3 we introduce the research questions that we address in this paper. The next section contains the related research on localization in libre software projects. Then, we describe the sources of information and methodology used to answer the research questions, and show our results in Section 6. Finally, conclusions are drawn and further research possibilities are discussed.
Definitions
Software internationalization (often represented by the numeronym i18n) refers to the process of preparing a program to be adapted to different languages, regional or cultural differences without engineering changes [START_REF]Questions and answers about internationalization[END_REF]. It involves tasks such as separating the text strings (those shown to the user in the different interfaces) to be translated, supporting several character sets, and using certain libraries to manage dates, currencies or weights.
Software localization (often represented by the numeronym l10n) refers to the process of adapting the software to a particular target market (a locale) [START_REF]Questions and answers about internationalization[END_REF]. It involves tasks such as translating the text messages, documentation and published material to the desired language or dialect, adapting graphics, colors and multimedia to the local language or culture if needed, and using proper forms for dates, currency, measures and other details. Crowdsourcing represents the act of a company or institution taking a function once performed by members and outsourcing it to an undefined (and generally large) network of people in the form of an open call [START_REF] Howe | Crowdsourcing: A definition[END_REF]. The concept of free software was born to recover the spirit of collaboration, so it is very common, especially in recent years, to find libre software projects that build a collaborative web infrastructure to carry out localization tasks (what is sometimes called crowdtranslation).
Research questions
In this work, we target following research questions:
How do FLOSS projects enable localization?
With this research question we aim to know whether FLOSS projects consider localization a task separate from the rest of source code management. While it is possible to handle localization as any other modification of the source code of a software project, in a well-internationalized software project the task of translation can be carried out by ordinary users or other people who may not have the programming skills needed to contribute using software development tools.
To answer this question we will look at the websites of several libre software projects, and especially at the pages related to how to contribute, and see if there are any special guidelines on how to contribute translations, comparing the process to follow with the one for contributing source code. In addition, we will inspect whether the objects to be localized are enumerated, and look for the existence of specialized internationalization or localization teams, guidelines and support platforms.
What tools and collaborative platforms are used by FLOSS projects to approach l10n tasks?
The goal of this research question is to gather information about the tools that libre software developers have created to manage the localization process. These tools may be pieces of software such as standalone editors specialized in handling localization files, web platforms, scripts to integrate changes in translations into the source code repository, among others.
To answer this question, in addition to reviewing the related literature, we will investigate libre software catalogs in order to find localization tools and platforms, and will inspect the websites of libre software projects to assess whether they recommend any particular tool.
What are the results and consequences of these approaches to software localization?
We plan to count the number of languages to which each software project is translated, and to find out whether there are significant differences depending on the translation tool or platform used, or on other aspects of the localization process. We will review the literature in order to find other consequences (economic, social, etc.) of the efforts in localization made by the libre software communities.
Related research
Compared with matters related to source code and even documentation, the number of research articles that address the issue of translations in software is very scarce.
There are some papers that provide some insight on the presence of free software for translations (not necessarily for translating libre software) [START_REF] Mata | Formatos libres en traducción y localización[END_REF]. For instance, Cánovas [START_REF] Cánovas | Herramientas libres para la traducción en entornos MS Windows[END_REF] and Díaz-Fouces [START_REF] Fouces | Ferramentas livres para traduzir com GNU/Linux e Mac OS X[END_REF] show that there are mature, libre software tools aimed at the translation of software. Directories of tools have also been published, such as [START_REF] Flórez | Free translation software catalog[END_REF][START_REF] Cordeiro | Open source software in translator's workbench[END_REF], including a study of applications for crowdsourcing in translations by the European Commission [START_REF]Crowdsourcing translation[END_REF].
From the software engineering perspective, Robles et al. analyzed the KDE desktop environment looking for patterns in different kind of contributions by the type of file (localization files, multimedia, documentation, source code, and others) [START_REF] Robles | Beyond source code: The importance of other artifacts in software development (a case study)[END_REF]. From the field of economics, Giuri et al. analyze the division of labor in free software projects and how it affects project survival and performance [START_REF] Giuri | Skills, division of labor and performance in collective inventions: Evidence from open source software[END_REF].
In the libre software communities, Gil Forcada, with the feedback from other community members, conducted a GNOME I18N Survey in August 2010, by sending a questionnaire to every GTP (GNOME Translation Project) language coordinator, and collecting answers [START_REF] Forcada | Gnome localization update for q1 2012[END_REF].
Souphavanh and Karoonboonyanan [START_REF] Souphavanh | Free/open source software : localization[END_REF] provide a broad perspective on the localization of Free/Open Source Software (FOSS) for the benefit of policy-and decision-makers in developing countries, highlighting the benefits and strategies of FLOSS localization.
Methodology
We have gathered the information presented in this work from several publicly available sources: websites of the projects, documentation, mailing lists for translators, and related literature showed in the bibliography.
For each libre software project analyzed, we have recorded in a research script3 the following aspects: whether there is information about how to localize the project, the recommended tools and website for localization, whether internationalization and localization teams exist, and other remarkable aspects.
For each libre software tool or web platform analyzed, we have annotated our research script with the following aspects: the type of localization tool (standalone software, plugin, web platform...), the year of first release, the original project for which it was developed (if any), the number of projects that use the tool (if it is stated on the main website of the tool), well-known projects that use the tool, and other remarkable aspects.
Results
As a preliminary analysis, we have analyzed 41 libre software projects, from small tools such as Mustard (the microblogging client for Android) to GNU/Linux distributions such as Debian or Mandriva. We have analyzed groups of "similar" projects, such as the office suites OpenOffice.org and LibreOffice and the CMSes Drupal and Wordpress.
The projects analyzed are listed and grouped as follows4 :
- Operating Systems (10 projects): Debian, Ubuntu, Fedora, Mandriva, OpenSUSE, Android Open Source, CyanogenMod, Replicant OS, FreeBSD, OpenBSD
- GNU/Linux, cross-platform applications (16 projects): GNOME, KDE, Tux Paint, GNU MediaGoblin, Limesurvey, OpenStack, Wordpress, Drupal, Pootle, Scratch, BOINC, LibreOffice, OpenOffice.org, Mozilla apps, Calibre, VLC
- Windows projects (6 projects): Filezilla, Notepad++, 7zip, eMule, Azureus-Vuze, Ares
- Android projects (9 projects): F-Droid, Mustard, K9 Mail, AdAway, OpenStreetMap Android (OSMAnd), Barcode Scanner, aCal, Frozen Bubble, Cool Reader
We have analyzed 22 libre software tools or platforms for supporting the localization process. Among them we can find plugins for text editors, standalone programs for localization, and complete online frameworks. They are introduced in the following subsections. We also have compiled a list of 15 more translation tools for further research, which can be found in our research script, along with all the details about the selected projects and localization platforms.
How do FLOSS projects enable localization?
Information about how to contribute translations Most of the projects have a dedicated page about how to contribute translations, and many include several objects to localize (in addition to the messages or strings presented to the user, for example the documentation and website). Some projects maintain translations for different releases (the case of Ubuntu) but most of them focus on the development release. There are some projects with no information about how to contribute, neither code nor translations. This situation is more frequent in the case of libre software created for the Android or Windows platforms than in the case of cross-platform or GNU/Linux tools; we assume this is because of cultural reasons. In Figure 1 we summarize our findings about this topic.
Fig. 1. Information about how to contribute translations
A remarkable exception is the Android Open Source Project, as it does not offer information about the localization of the software. There is a Localization document 5 explaining the process that Android developers should follow in order to make their applications localizable; the project has also published the Hello, L10N tutorial 6 , providing an example of how to build a simple localized application that uses locale-specific resources [START_REF] Arjona | Mining for localization in android[END_REF]. However, there is no information about how to contribute translations to Android itself. In some forum threads Jean-Baptiste Queru (Technical Leader of Android OSP at Google) explains that Google translates Android internally, not accepting community contributions 7 . Surprisingly, the libre software forks CyanogenMod and Replicant OS do not offer public information about localization either. In CyanogenMod the topic has been discussed in several forum threads. However, at this time it still does not offer a specific protocol to contribute translations, managing them as any other modification to the source code. It does not have language teams or internationalization delegates either.
Observation #1: Many, but not all, projects offer information and support to contribute translations. Usually, projects from a more traditional libre software domain are more prone to ease such tasks for external contributors.
Existence of internationalization and localization guides and infrastructure
It is the task of the internationalization team to study and decide how the localization of the project is going to be driven, set up the corresponding tools and write the proper documentation for translators and developers.
Most of the projects create language teams and use mailing lists to communicate (the use of the native language in those lists is common), and many software projects define the complete process of localization in a wiki page or website. For example, OpenStack discussed which platform to use: Pootle, Launchpad or Transifex, showing some comparative charts and assessment in their translations wiki page (old version 8 ). The current version of the wiki9 is a comprehensive guide of the localization project, both for translators and for developers. Another example of a complete guide for localization can be found in the Apache OpenOffice.org wiki 10 .
Regarding the localization infrastructure, there are different approaches. Some projects use an external, web-based collaborative environment to coordinate the localization teams (for example, OpenStack uses Transifex, and Wordpress uses the official Pootle server). Others deploy an instance of such web-based software on a community server: the Fedora and Mandriva distributions have set up their own Transifex servers, and OpenOffice.org and LibreOffice offer their own, but independent, Pootle servers. Finally, other projects are developing their own localization tools: standalone programs, such as Lokalize for the KDE project or the i18nAZ plugin for Azureus/Vuze, and web-based localization platforms, such as Damned Lies11 (the translation platform developed and used by the GNOME project) or the Drupal localization website 12 , which is used in the Drupal CMS project. The Mozilla project uses (a) Verbatim (their own Pootle server) for the localization of some aspects, (b) an in-house developed wiki for the translation of their help documentation, and (c) Narro, an external web-based tool, developed by the Romanian translation coordinator.
There are also efforts to provide tools for particular tasks of the translation process. One example is the KDE internationalization build system 13 , which basically consists of a script running through the KDE source code repository server, extracting the localizable strings in the KDE source code and generating the localization templates periodically. This way developers only need to care about writing code with localization in mind, and localizers will always find an updated template with the latest strings to be translated.
For 32 out of the 41 projects that have been analyzed, we have obtained the way of submitting translations. Figure 2 shows the results. Some of the Operating Systems projects use a localization web platform, while others follow a traditional way, similar to code contributions (basically, sending translations via BTS or mail systems). For GNU/Linux or cross-platform tools we find that most of them clearly define a platform to manage l10n contributions, but there are some successful projects (in terms of number of translations) such as Filezilla or Notepad++ that simply explain how to translate the corresponding files and ask contributors to send the translations to the project leader directly. The Windows-specific projects analyzed do not use a localization platform for managing translations; however, some of them offer templates in a forum thread and, if required, open another thread for translators to attach their contributions. Finally, those Android projects that offer information about how to contribute translations all use a dedicated platform, although not the same one (Transifex, Pootle, Weblate or Android-PHP-translator).
Observation #2:
There is no standard way to contribute translations. The tools and procedures, if they exist, are heterogeneous and sometimes even project-specific.
Used and accepted file formats for localization
There are a number of standards to be applied to the task of translation and localization that determine the use of certain file formats to ensure interoperability. Hence, several tools can be used to perform the translation work.
According to the OAXAL framework [START_REF]Organization for the Advancement of Structured Information Standards (OASIS). Open architecture for xml authoring and localization reference model (oaxal) tc wiki[END_REF], if we consider the complete process of localization (not only software localization), there are certain standard file formats to be used, such as TMX, TBX, and XLIFF. XLIFF (XML Localization Interchange File Format) consists of an XML specification to hold translatable strings along with metadata [START_REF]Xliff version 1.2. oasis standard[END_REF]. This format is frequently used in professional translation, not only in software internationalization. Version 1.2 of XLIFF was approved by OASIS in February 2008.
However, these standards are recent. The libre software community has been concerned with localization since before these efforts, long before the OAXAL framework was designed. Therefore, other kinds of file formats became de facto standards, such as the "PO" (Portable Object) file format for software translations.
The GNU Project [START_REF]Gnu project[END_REF] chose the Gettext [START_REF]GNU Gettext[END_REF] tool, implemented originally by Sun 14 , to enable the internationalization of GNU software. Gettext explores the source code and extracts all the "printed" strings into a POT file, a template that contains a list of all the translatable strings extracted from the sources.
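A minimal illustration of this workflow, using Python's standard gettext module (the domain name, directory layout and strings are invented for the example): strings wrapped in _() are what extraction tools collect into the POT template, and each language team then fills in its own PO catalog.

```python
import gettext

# Look for a compiled catalog under locale/<lang>/LC_MESSAGES/myapp.mo;
# fall back to the original (untranslated) strings if none is installed.
t = gettext.translation("myapp", localedir="locale",
                        languages=["es"], fallback=True)
_ = t.gettext

print(_("Welcome to the application"))
print(_("File saved successfully"))

# A corresponding (hypothetical) entry in locale/es/LC_MESSAGES/myapp.po:
#   msgid "File saved successfully"
#   msgstr "Fichero guardado correctamente"
```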
With time the use of PO files has become a de facto standard for libre software internationalization. However we can still find multiple file formats for internationalization of software, documentation and other elements in any libre software development project. For example, the Android Operating System uses "Android Resources" to make the localization of Android applications easy 15 .
Observation #3: PO (Portable Object) files are still the de facto standard in use in libre software projects, although many projects use their own format. Newer, standardized formats such as XLIFF are seldom used.
What tools and collaborative platforms are used by FLOSS projects to approach l10n tasks?
In general, each language has an independent localization team that agrees on using locale-specific tools, formats, guidelines. Each team is, as well, responsible for the translation, review, and submission. The tools used by these teams vary from traditional tools such as Poedit and plain text editors (with plugins for handling translation files), to sophisticated web platforms that integrate translation and revision processes, sometimes even automatically committing the changes to the versioning systems.
In our research script we provide a list with all the tools in use, which we have augmented with the most relevant tools referenced in the literature.
Text editor add-ons Emacs 16 and Vim have some tools for working with Gettext:
- PoMode for editing .po catalogs, and po-mode+.el with extra features for PoMode
- MoMode for viewing .mo compiled catalogs
- po.vim is the Vim plugin for working with PO files (http://www.vim.org/scripts/script.php?script_id=695).
Standalone tools
- Poedit 17 is a cross-platform editor for .po files (gettext catalogs). It allows the tool to be configured with information about the translator (name and email) and the environment, and every time a file is translated and saved, that information is included in the file.
- Virtaal 18 is a more modern translation tool, which allows working with XLIFF files [START_REF] Morado | Bringing industry standards to open source localisers: a case study of virtaal[END_REF].
- The Translate Toolkit 19 is a collection of useful programs for localization, and a powerful API for programmers of localization tools. The Toolkit includes programs to help check, validate, merge and extract messages from localizations, such as poconflicts, pofilter or pogrep.
- Pology20 is a Python library and a collection of command-line tools for in-depth processing of PO files, including examination and in-place modification of collections of PO files (the posieve command), and custom translation validation with user-written rules. The Pology distribution contains internal validation rules for some languages and projects.
Web platforms for supporting translation
In the last years, a number of web based translation platforms have been developed in order to benefit from the advantages of Internet-based collaboration, automate certain tasks and also lower barriers to external contributions. This new method has been called crowdtranslation [START_REF] Canovas | Open source software in translator training[END_REF].
In many cases, these platforms begin as a web-based infrastructure created for a particular project, and later they are used for other projects too.
- Probably the oldest example of a web-based translation platform is Pootle 21 , which is built on top of the Translate Toolkit. Pootle allows online translation and work assignment, provides statistics and includes some revision checks.
- Launchpad 22 is a complete software forge including a powerful web-based translation system. It was developed by Canonical to handle Ubuntu development, initially under a proprietary license, but was later released under the GNU Affero General Public License, version 3.
- Transifex23 is a newer platform which enhances the translation memory support.
- Weblate24 is a recent tool, developed by Michal Čihař (the phpMyAdmin project leader), which offers better integration with the git versioning system.
Many different and popular projects decide to host their translations in an external service using one of the above platforms. The already mentioned Damned Lies25 used in GNOME or the Drupal localization website26 belong to this group. Wordpress uses the official Pootle translation server27 (although some languages are handled in GlotPress, an online localization tool developed inside the project), and the FreedomBox Foundation and the Fedora community use Transifex28 for the localization of their websites.
Observation #4:
There is a large amount of translation-supporting tools.
What are the results and consequences of these approaches to software localization?
Results on the libre software projects In our research script we have annotated the number of languages to which each project is translated. For some cases, the number of languages with more than 50% of the objects translated and the number of languages with more than 90% completion is provided. However, it is difficult to measure the success of the methods and tools used in these terms, since other causes may have to be taken into account.
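Counting languages and completion levels can be automated from the PO catalogs themselves; the sketch below uses the third-party polib library and assumes a conventional po/<lang>.po layout, which in practice varies from project to project.

```python
import glob
import os
import polib

# Assumed layout: one catalog per language under po/.
for path in sorted(glob.glob("po/*.po")):
    po = polib.pofile(path)
    lang = os.path.splitext(os.path.basename(path))[0]
    total = len([e for e in po if not e.obsolete])
    print(f"{lang}: {po.percent_translated()}% translated "
          f"({len(po.translated_entries())}/{total} messages)")
```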
We have found that the number of language teams or the amount of translations in a libre software project does not depend on the translation tool(s) used. The most important factor is to have clear specifications on how to contribute with localization tasks. For example, the Notepad++ project recommends downloading an XML file, translating it, testing it and sending it by email to the project leader. With this simple approach, the program has been translated into 66 languages (all the translators are credited on a dedicated page; this may also help to attract new translators).
However, we have identified some projects with users wanting to contribute translations, but without much success. These projects do not offer information on how to translate the software, or if this information is provided it is very sparse. In addition to Android (already mentioned in section 6.1), we have the case of the Ares/Galaxy project: its issue tracking system contains 18 tickets (out of a total of 33) with user-submitted translations to the program; many of them remain unconsidered.
Observation #5: Information about how to translate a libre software project is the main factor for having a very internationalized application. Projects accepting and giving credit to contributing translators have high levels of internationalization, even if the process is rudimentary.
Other results or consequences
Business opportunities
Experience in the development of, and participation in, localization tasks in libre software projects opens new opportunities for language-management professionals. Marcos Cánovas and Richard Samson have studied the mutual benefits of using libre software in translator training [START_REF] Canovas | Open source software in translator training[END_REF], both for trainees and for the libre software community.
Companies specialized in the translation and localization of software may cover cases that the community or company developing the software cannot reach, such as the maintenance of several versions of the localized system, or localization of the software for a specific customer [START_REF] Gomis | Free software localization within translation companies[END_REF].
One example of this is Acoveo 29 (http://www.acoveo.com), a company offering a solution for software localization consisting of a Maven plugin and a Jenkins plugin that are offered for free (under the AGPL license) to their customers. The Maven plugin takes care of extracting all the translatable strings of the client's software and sends the strings to www.translate-software.com, where professional translators will localize them at a certain price per word. The Jenkins plugin allows the customer to control the localization workflow and progress.
Another example is ICanLocalize30 , which offers website translations, software localization, text translations and general translations. They have developed a Drupal module called "translation management"31 , released under the GPL license, and the Wordpress MultiLingual Plugin32 (which is licensed as GPL and offered at a certain cost, including the fees for the website content). Finally, there is Transifex, a libre software project that has become a company offering hosting and support on its platform.
Localization results as free culture works
In addition to the localization files of the software, translation memories (databases that store paired segments of source and human-translated target segments), glossaries (definitions for words and terms found in the program user interface), and corpora (large and structured sets of texts) may be released as free cultural works too, lowering the barrier for new translations of other projects into the target language. An example of this is the Autshumato Project33 . The general aim of this project is the development of open source machine-aided translation tools and resources for South African languages [6]. Among other resources, Autshumato releases translation memories in the Translation Memory eXchange format (TMX) under a Creative Commons Attribution Non-Commercial ShareAlike 2.5 license34 for the following language pairs: ENG-AFR (English to Afrikaans), AFR-ENG (Afrikaans to English), ENG-NSO (English to Sepedi), NSO-ENG (Sepedi to English), ENG-ZUL (English to IsiZulu), ZUL-ENG (IsiZulu to English).
Observation #6: Translation of libre software projects offers new business and cultural opportunities.
Conclusions and further work
In this paper, we have investigated how FLOSS projects manage translations by studying a large number of them for the information and support they provide, and the tools that are being used.
Internationalization and localization in libre software projects are two of their most interesting advantages for dissemination, competition with proprietary alternatives, and penetration in new markets. Accordingly, we have seen that most of the libre software projects offering information about how to contribute or join the community also offer information about how to contribute translations. From the projects under study, we have found that the GNU/Linux or cross-platform projects handle translations in a more systematic way (much more standardized than Windows or Android projects), probably due to the tradition of the collaborative spirit of the GNU project and the dissemination of Gettext, and the cultural influence of those platforms where contributions have always been welcome.
However, there is great heterogeneity in how projects proceed with translations. Procedures vary from project to project, and no generalized way exists to contribute, as is normally the case for source code. The heterogeneity can be observed from the technological point of view as well. There are many tools that support translations, resulting in a productivity increase for the project, but not allowing translators to reuse their knowledge on other projects. If we compare the tools that exist to contribute source code with the ones that are used for translations, we have not found a concentration of localization tasks in a few platforms in the same way that some software forges have concentrated a high number of projects [START_REF] Mockus | Amassing and indexing a large sample of version control systems: Towards the census of public source code history[END_REF]. However, web platforms for translations such as Launchpad (more than 1800 projects) or Transifex (more than 1700 projects) are gaining popularity in recent times and may change this in the near future.
From our analysis of the projects, we have seen that publishing information about how to translate a libre software project is the key element for having a project with a large number of translations. This has been found to be true even for smaller projects that use simple ways to accept contributions, even by e-mail or in e-mail forums. If this practice is augmented with giving credit to the contributing translators, the results have been found to be even better.
Regarding future lines of research, a more comprehensive analysis (involving a higher number of projects) should be performed in order to measure the effect of using different tools and procedures in the localization tasks and management.
We think that, due to the availability of the source code of libre software projects and the fact that many of them use a versioning system that keeps track of the history of changes to all the files, it is possible to inspect the source tree or mine the versioning system logs to get information about the software. This could provide factual details about how localization is driven (for example, by looking at changes in the localization files or at the information about the translators present in their headers), and give hints about vulnerabilities in the process (for example, translation files without any change for a long time, which may mean that there are parts of the project that are outdated or not maintained).
Having a distributed team of collaborators, taking benefit of crowdtranslation, increases the number of languages and strings translated but may introduce inconsistencies or quality problems, adding extra effort for reviewers. It would be interesting to make a comparative analysis of software projects that use crowdsourcing models versus those that maintain a stable translation team, in order to see whether they evolve similarly or differently in number of languages, size of the translator community, and medium- to long-term involvement of the contributors.
Fig. 2. Means to send translations in libre software projects
The research script is publicly available at http://gsyc.urjc.es/~grex/repro/oss2013-translations for reproducibility purposes and further research.
More details about this sample (selection criteria and other) can be found in our research script.
http://developer.android.com/guide/topics/resources/localization.html
http://developer.android.com/resources/tutorials/localization/index.html
See for example: Proposal to add Basque translation to Android Source Code http://tinyurl.com/b2fkpbn; Hungarian localization -questions about legal issues http://tinyurl.com/b9xqrek
http://wiki.openstack.org/Translations?action=recall&rev=9
http://wiki.openstack.org/Translations
http://wiki.openoffice.org/wiki/Localization_AOO
http://l10n.gnome.org/
http://localize.drupal.org
http://techbase.kde.org/Development/Tutorials/Localization/i18n_Build_Systems
http://compgroups.net/comp.unix.solaris/History-of-gettext-et-al
http://developer.android.com/guide/topics/resources/localization.html
http://www.emacswiki.org/emacs/Gettext
http://www.poedit.net/
http://translate.sourceforge.net/wiki/virtaal
translate.sourceforge.net
http://pology.nedohodnik.net/
http://translate.sourceforge.net/wiki/pootle/
http://launchpad.net
http://transifex.com
http://weblate.org
http://l10n.gnome.org/
http://localize.drupal.org
http://pootle.locamotion.org/
http://transifex.net
http://www.icanlocalize.com
http://drupal.org/project/translation_management
http://wpml.org
http://autshumato.sourceforge.net/, initiated by the South African Department of Arts and Culture and developed by the Centre for Text Technology (CTexT) at the North-West University (Potchefstroom Campus), in collaboration with the University of Pretoria.
34 The Non-Commercial clause makes them not completely "free culture works", but it allows the receiver considerably more freedoms than the traditional application of copyright.
"989158",
"982555",
"989159"
] | [
"302798",
"227133",
"227133"
] |
01467569 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467569/file/978-3-642-38928-3_13_Chapter.pdf | Nabil El Ioini
Alessandro Garibbo
email: [email protected]
Alberto Sillitti
Giancarlo Succi
email: [email protected]
Nabil El
An Open Source Monitoring Framework for Enterprise SOA
Introduction
To address the business needs of today's organizations, Web Services (WS) have emerged as an enabling technology that allows building flexible systems that integrate multiple pieces of software into one system [START_REF] Predonzani | Components and data-flow applied to the integration of web services[END_REF]. Contrary to traditional software development paradigms, where software vendors have to agree on the communication protocols and the interfaces to use, web services improve the collaboration of different software vendors in building the final product by using standardized protocols and interfaces. Nowadays, the effort to push the adoption of web services is twofold. On one side, research communities and companies support WS by providing tools that help develop and manipulate web services, and on the other side, standardization bodies supply standards to regulate the use of WS [START_REF]Web Services Description Language (WSDL 1.1). W3C Note[END_REF] [START_REF]Simple Object Access Protocol (SOAP 1.2), W3C Recommendation[END_REF][8] [START_REF] Petrinja | Introducing the OpenSource Maturity Model[END_REF].
The concept of trust in the domain of services is an important issue, since many services are bound at run time, and the correct behavior of the services is not known a priori [START_REF] Damiani | WS-Certificate[END_REF]. Being able to give high confidence about the behavior and the quality of a service at run-time can have a great influence on service compositions.
A big challenge with WS is how to trust the claims of service providers about the quality of their services [START_REF] Ioini | Open Web Services Testing[END_REF]. With web services, we delegate part of our business logic to an external provider that executes it for us. Thus, we have no control over what could happen during the execution of that part of the business logic. One solution to increase customers' trust in the provided services is a monitoring framework in which all the stakeholders that take part in a SOA infrastructure are involved in the monitoring process. This way all the stakeholders can collect and use the monitoring data to make decisions about which services to use.
The contribution of this work is twofold. First, we provide a monitoring framework in which the stakeholders can specify their monitoring requirements and have access to the collected data at runtime; this, however, requires them to adhere to certain practices and collaborate to monitor the WS they use. Second, we combine monitoring from the service requestor and service provider points of view. This is done by collecting data from both sides and allowing each party to make use of the data collected by the other.
The rest of the paper is organized as follows. Section 2 presents related work. Section 3 presents an overview of the system architecture. Section 4 discusses the implementation details. Sections 5 and 6 show a demonstration scenario and experiments, and Section 7 concludes and presents future work.
Related Work
Existing monitoring techniques differ in many aspects. The main properties that distinguish them from each other are the stakeholders involved, the data collection techniques, the degree of invasiveness, and the way monitoring requirements are specified. The spectrum of approaches proposed in the literature spreads along all these dimensions; however, not all the approaches take all the dimensions into consideration.
In the literature, several approaches resemble ours, such as SCALEA-G [START_REF] Truong | SCALEA-G: a Unified Monitoring and Performance Analysis System for the Grid[END_REF], which is a unified monitoring and performance analysis tool for the grid. SCALEA-G uses a different architecture but has similar functionality. IBM Tivoli [START_REF]IBM Tivoli Composite Application Manager for SOA[END_REF] uses a special integration bus to collect monitoring data so that all the messages passing through the bus are captured. In [START_REF] Mahbub | A Framework for Requirements Monitoring of Service Based Systems[END_REF][START_REF] Mahbub | Run-Time Monitoring of Requirements for Systems Composed of Web-Services: Initial Implementation and Evaluation Experience[END_REF] an approach for monitoring BPEL workflows is presented. A central execution engine intercepts all the passing events and stores them in an event dataset. The probes to be monitored are defined using event calculus.
The main difference between these approaches and ours is that we provide an approach conformant to a bigger infrastructure, the NEXOF-RA. Although the techniques we use might not be new, our goal is to have a framework compatible with the goal of NEXOF-RA, which is establishing strategies and policies to improve the delivery of applications and enabling the creation of service-based ecosystems where service providers, service requestors and third parties easily collaborate.
System Overview
Traditionally system verification and validation (V&V) is a pre-deployment exercise designed to assess the capabilities of systems before putting them into production. In the paradigm of WS and SOA based systems [START_REF] Damiani | WS-Certificate[END_REF], the dynamic nature of these systems requires the extension of the V&V setting to include run-time quality assessment that cannot be applied before deployment.
To ensure the quality of Web Services, we propose a monitoring framework that enables runtime data collection to automatically identify anomalous events defined by any of the stakeholders, aggregates related events and presents data to the interested parties with different levels of granularities.
The framework adopts a proxy-based strategy to collect monitoring data (Figure 1), where each participant in the interaction needs to use a proxy that allows the interception of the messages coming in and out. The framework relies on different receptors implemented inside the proxies to filter out the events of interest and report them to the monitoring server.
The framework provides an environment where registered services can be monitored automatically and according to users requirements. The monitoring model is described in the NEXOF-RA Monitoring SOA Enterprise pattern [START_REF]Monitoring in Enterprise SOA pattern[END_REF]. The main idea of the model is allowing all authorized users to perform two types of monitoring options: push and pull. The push monitoring allows users to register the events of interest to be monitored; if those events occur, all the registered users are notified. The pull monitoring instead, allows authorized users to request log data concerning a specific service.
Design choices
The design choices have been shaped to satisfy the requirements defined by the ESOA pattern [START_REF]Monitoring in Enterprise SOA pattern[END_REF]. Four attributes were given priority.
1. Maintainability: in an infrastructure that is meant to be used in a SOA environment, maintainability is an important factor due to the size and complexity of the interacting systems. By decomposing the framework into sub-components, maintainability becomes easier [START_REF] Granatella | Selecting components in large COTS repositories[END_REF]. Each of the sub-components is responsible for a precise set of operations, which are exposed as interfaces to communicate with the rest of the framework. The framework itself looks like a component (a service) from outside, and it communicates with the other components (e.g. proxies collecting data) through a set of pre-defined interfaces (Web Services calls). Therefore, any part of the framework can be updated or modified without affecting the rest of the sub-components.
2. Availability: in a SOA environment, it is critical to understand the availability of the services, mainly because everyday operations may depend on services which, for some reason, could be unavailable at certain times. To this end, the framework needs to be able to monitor all the services that take part in the ESOA environment, including the framework itself. As mentioned before, the framework is built as a set of services, and this allows the framework to monitor its own availability.
3. Performance: a drawback of the framework in terms of performance is the overhead generated by the proxies. We have decided to use a proxy-based architecture for collecting data. However, before doing that we considered other options, mainly:
a. Monitoring-aware middleware [START_REF] Zheng | A qos-aware middleware for fault tolerant web services[END_REF], relying on collecting data from interceptors implemented as part of the middleware. This technique has the advantage of being able to monitor detailed information about the services, e.g. resource usage, since the interceptors are part of the infrastructure in which services are deployed. However, it imposes a limitation in that it requires all the participants to use the same middleware, in case their existing middleware does not have the monitoring capability, which could have a high cost.
b. Central proxy server, relying on a central proxy that can be used by all the participants. This technique presents several challenges. The first one is that it cannot capture precise information about the quality of service. For example, the response time captured by the proxy represents the response time from the proxy to the service and not from the client to the service. The second challenge is that internal quality attributes of the services cannot be captured, such as the time needed by the service to process the request internally. For these reasons we decided to choose a proxy-based approach, which does not require the participants to alter their environments and has access to more accurate information, since the proxy is considered to be part of each participant's infrastructure. Of course, there is a cost to pay in terms of the overhead generated by the proxy; however, compared to the advantages we gain, we consider it acceptable. Furthermore, the framework offers two monitoring strategies to deal with how much overhead is generated.
The push approach generates much more overhead, since each component is required to actively report any event of interest to a specific client. For example, a client requirement might be to be notified if the response time of the monitored service exceeds 5 ms. In this case the service actively filters all the incoming requests and, in case of an event that exceeds 5 ms, it needs to report it to the monitoring server. The second approach is called the pull approach, and it is more relaxed: the service collects the data locally and it is the monitoring server that asks for it when needed.
4. Adaptability: by having well-defined interfaces and separation of concerns, the framework can be adapted and reused in different contexts. Our framework is used to monitor Web Services; however, other types of systems could be monitored as well. For example, the proxies can be adapted to monitor different types of components instead of Web Services and communicate the monitored data to the monitoring server.
Implementation
The framework architecture features two main components, namely the monitoring server and the monitoring proxy (Figure 2). The monitoring server provides facilities to store the collected data and serve it to the users when they need it. It also handles notifying the users in case of anomalous events. The current implementation of the monitoring server is a web application, and all the interactions with outside components are done through web services. The monitoring proxy is a Java application that is used by all the participants of the monitored environment. We have extended an existing open source project called membrane router2 as the monitoring proxy.
Monitoring Server
It is the central component of the monitoring framework. It is implemented as a J2EE application running on top of an Apache Tomcat container. The monitoring server offers four operations and requires four operations from the components that need to be monitored; a minimal sketch of the offered operations as a plain Java interface is given after the list below.
o NotifyEvent: notifies the users who have subscribed to the different events.
• Offered operations:
o ConfigureMonitoringPolicy: enables the users of the monitoring server to set their policy preferences;
o GetMonitoringData: provides the user with the possibility to pull the monitoring data from the server;
o SubscribeEvent: allows to specify the events they are interested in;
o notifyEvent: allows every user (customers and providers) to send a notification to the monitoring server using this interface.
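To make these operations more concrete, the following is a minimal sketch of the offered operations as a plain Java interface. The method names, parameters, and types are illustrative assumptions only; in the actual framework these operations are exposed as web service calls on the Tomcat container rather than as a local Java API.

```java
import java.util.List;

// Illustrative sketch only: names, parameters, and types are assumptions,
// not the framework's actual WSDL. Each method corresponds to one of the
// offered operations listed above.
public interface MonitoringServerOperations {

    // ConfigureMonitoringPolicy: a user stores a <property, condition, value>
    // policy preference for a given monitored service.
    void configureMonitoringPolicy(String userId, String serviceId,
                                   String property, String condition, String value);

    // GetMonitoringData: pull strategy; returns the events collected so far
    // for a service (serialized here simply as strings).
    List<String> getMonitoringData(String userId, String serviceId);

    // SubscribeEvent: push strategy; registers interest in a specific event.
    void subscribeEvent(String userId, String serviceId,
                        String property, String condition, String value);

    // notifyEvent: called by proxies (requestor or provider side) to report
    // an anomalous event back to the monitoring server.
    void notifyEvent(String senderId, String serviceId, String eventDescription);
}
```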
Monitoring Proxy
The role of the proxy is to collect data from the different participants, including service providers, service users and the monitoring server. The data is sent to the monitoring server for storage and analysis. In the current implementation we are using membrane router as the monitoring proxy.
Fig. 2. Architecture overview
Membrane router is composed of three main parts: 1) the EndpointListener, which waits for the incoming messages 2) the EndpointSender, which sends the messages to their destination and 3) a set of interceptors in between the two end points. The part that we are mostly interested in is the interceptors. In the current implementation we have two interceptors to capture response time and throughput (Figure 3).
The interceptors implement two functionalities:
1. Intercept input messages: this function captures the input messages and adds new headers to them, such as the sender id and a timestamp.
2. Intercept output messages: when a message passes through the router and is processed by the service provider, the provider sends back the response. The response message is captured again, and the headers that were set on the input message are checked. A practical example that we have used is to calculate the response time by checking the time elapsed between receiving the input message and receiving the output message.
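The sketch below illustrates this request/response logic in plain Java. It deliberately does not use the membrane router API; the class name, header names, and method signatures are assumptions chosen only to show how a timestamp header added on the way in is used to compute the elapsed time on the way out.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative, framework-independent sketch of the two interceptor
// functions described above; not the membrane router API.
public class ResponseTimeInterceptor {

    // Called when a request passes through the proxy towards the service:
    // add the sender id and a timestamp header to the message.
    public void interceptInput(Map<String, String> headers, String senderId) {
        headers.put("X-Monitor-Sender", senderId);
        headers.put("X-Monitor-Timestamp", Long.toString(System.currentTimeMillis()));
    }

    // Called when the corresponding response comes back: read the timestamp
    // header set on the input message and compute the elapsed time.
    public long interceptOutput(Map<String, String> headers) {
        long start = Long.parseLong(headers.get("X-Monitor-Timestamp"));
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        ResponseTimeInterceptor interceptor = new ResponseTimeInterceptor();
        Map<String, String> headers = new HashMap<>();
        interceptor.interceptInput(headers, "requestor-42");
        Thread.sleep(5); // stand-in for the provider processing the request
        System.out.println("elapsed ms: " + interceptor.interceptOutput(headers));
    }
}
```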
Data Collection
As we mentioned before the monitoring framework offers two monitoring strategies with a different level of invasiveness and time to react to the events recorded.
Fig. 3. The proxy architecture
The push approach is a timely monitoring strategy, which relies on an external data source describing the data to collect. The external data source is the set of configurations submitted by the stakeholders interested in monitoring the service. These configurations take the form of a simple triple <Property, condition, value>, where the property is the property to monitor (such as response time or throughput), the condition is a Boolean operator (e.g. >, <, =), and the value is the threshold the monitored values are compared against in the current implementation. However, to avoid imposing any specific format for defining the monitoring requirements, it is up to the interceptors' developers to decide how their interceptors receive the monitoring requirements.
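As an illustration, a push-mode interceptor could evaluate such a triple roughly as sketched below. The class and the numeric-only comparison are simplifying assumptions, since the framework intentionally leaves the concrete representation of monitoring requirements to each interceptor's developer.

```java
// Simplified sketch of evaluating a <property, condition, value> triple
// against a measured value; only numeric properties and the three Boolean
// operators mentioned above are handled.
public class MonitoringCondition {
    private final String property;   // e.g. "responseTime"
    private final String condition;  // ">", "<" or "="
    private final double threshold;  // e.g. 5 (milliseconds)

    public MonitoringCondition(String property, String condition, double threshold) {
        this.property = property;
        this.condition = condition;
        this.threshold = threshold;
    }

    // Returns true if the measured value matches the registered condition
    // and should therefore be reported to the monitoring server.
    public boolean matches(String measuredProperty, double measuredValue) {
        if (!property.equals(measuredProperty)) {
            return false;
        }
        switch (condition) {
            case ">": return measuredValue > threshold;
            case "<": return measuredValue < threshold;
            case "=": return measuredValue == threshold; // exact comparison, for illustration only
            default:  return false;
        }
    }

    public static void main(String[] args) {
        MonitoringCondition c = new MonitoringCondition("responseTime", ">", 5.0);
        System.out.println(c.matches("responseTime", 7.3)); // true -> notify subscribers
    }
}
```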
The degree of invasiveness has a great impact on the quality of the collected data: the more invasive the technique is, the more data can be collected. Our current implementation is limited in this respect, because everything that we collect starts when the service requestor or provider submits the request or response to the proxy. Everything that happens before that, such as the internal state of the services, is not captured. This limitation could be addressed by increasing the level of invasiveness, as in [START_REF] Petrinja | Introducing the OpenSource Maturity Model[END_REF], but a tradeoff needs to be made between the data collected and the implementation cost.
Demonstration Scenario
Several scenarios have been considered and tested. In this demonstration, we focus on the two monitoring strategies, namely, the push and the pull. To use the framework some assumptions are needed. We assume that every participant has built the interfaces required to interact with the monitoring server. We also assume that all the participants have deployed the monitoring proxy in their environments.
In the pull strategy, the process is initiated by a user or a group of users setting their configuration policy for some specific service. Once this is done, the monitored service starts collecting the required events. At any point of time, users who have registered to that specific service can make an implicit call asking the monitoring server to pull the data that has been collected so far. This approach has the advantage of instantly showing how the service is performing. Figure 4 presents a sequence diagram of this approach. The push approach on the other hand is initiated by the user registering to a specific event. The monitoring server forwards the registration to the service of interest. The service keeps track of all the users and their events of interest, and once the event occurs, the monitoring server is asked by the service to notify the respective users. The monitoring server is then responsible for notifying the users. The sequence diagram in Figure 5 shows the sequence of activities of the push approach.
Experiments
Starting from the scenarios above, we have performed different tests to show how the framework could be used in practice. We have set up a testing environment as shown in Figure 6.
The testing environment is composed of the monitoring server, a service requestor and a service provider. As shown in Figure 6, each of these components has a proxy deployed as part of its infrastructure to collect the monitoring data. The service provider is a web application serving simple web services such as arithmetic operations. The service requestor is a web service testing tool called SoapUI 3 . SoapUI allows generating SOAP requests to make use of the services exposed by the service provider. The monitoring server also uses a proxy to capture all of its own requests and responses, which allows the activities of the server itself to be monitored. In our testing we have primarily focused on the two monitoring strategies, push and pull; however, other types of tests were explored as well. The tests and their results are presented in Table I.
Table I. Test / Description / Result

Test: Push
Description: The service requestor registers a new event "response time > 5 ms", meaning that if the service response time is greater than 5 ms, the service requestor wants to be notified. The monitoring server forwards this information to the service provider proxy, which adds it to the list of properties to monitor. For testing purposes we implemented a simple service which takes an integer value as parameter and uses it to delay the service response time; for instance, if we pass the value 5, the service will have a 5 ms response time. Using SoapUI we generated 10 SOAP calls with different response times.
Result: The service requestor has been notified every time a request violates the specified condition. The notification was sent by email.

Test: Pull S1
Description: The service requestor can request at any time the monitoring data collected so far by the proxies. Using SoapUI we generate 10 random requests to the service. On the monitoring server side the database is still empty. The service requestor calls the getMonitoringData service of the monitoring server.
Result: The monitoring server checks its local database, but since no data is available, it requests the data from the proxy. The proxy returns 10 events, which are forwarded to the service requestor and also stored in the database.
Fig. 1. Proxies for data collection
• Required operations:
o ConfigureMonitoringPolicy: sets the monitored component's configuration policy, such as the time slot before sending data to the server;
o GetLoggedData: pulls all the data from the monitored component;
o IsAuthorized: asks the monitored component whether the user who is trying to access monitoring data is authorized to do so;
o SubscribeEvent: subscribes a new event to be monitored;
Fig. 4. The pull approach
Fig. 5. The push approach
Fig. 6. Testing environment
NEXOF-RA project http://www.nexof-ra.eu/
Membrane Router, http://www.membrane-soa.org/soap-router-doc
Test: Pull S2
Description: We performed the same scenario as S1, but this time there is already some existing data in the monitoring server database.
Result: The monitoring server sends back the existing data in the database. Additionally, it also sends the data newly collected by the proxy.

Test: Availability
Description: Once the configuration is set, we shut down the service provider's service.
Result: The provider proxy could not receive any response from the service, so it notifies the monitoring server, which notifies the service requestor.

Test: Overhead
Description: Compare the response time of services with and without proxies. 1000 requests have been sent by the service requestor with and without the proxies.
Result: Without the proxies the average response time was 0.12 ms, while with the proxies it was 0.16 ms.
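In relative terms, these two averages correspond to roughly 0.04 ms of added latency per request, i.e. (0.16 - 0.12) / 0.12 ≈ 33% overhead in this particular setup; this percentage is derived here from the two measured averages above.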
Conclusions
In this paper, we have presented a comprehensive framework for simplifying service monitoring. Our framework adopts the model proposed in the NEXOF-RA Monitoring SOA Enterprise Pattern, which defines the architecture of the monitoring components. The framework has been implemented by integrating different open source components and techniques to increase the level of functionality offered to the final user. The main advantage of the framework is that it has been implemented as a set of services, which gives it the ability to monitor itself. We have tried to be as little invasive as possible to avoid adding extra costs for existing infrastructures; however, this limits the number of events we can monitor. In the future we plan to add more interceptors to the proxies to collect more information. The current implementation is a prototype that we have used to test the different components. The next step we are considering is to perform a larger case study in which real service infrastructures will be used to study the effect of the monitoring on the relations between service requestors and service providers.
"1001482",
"1001483",
"1001484",
"989715"
] | [
"463159",
"485042",
"463159",
"463159"
] |
01467572 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467572/file/978-3-642-38928-3_16_Chapter.pdf | Tetsuo Noda
email: [email protected]
Terutaka Tansho
email: [email protected]
Shane Coughlan
Effect on Business Growth by Utilization and Contribution of Open Source Software in Japanese IT Companies
The expanded use of Open Source Software (OSS), and the expansion of the market caused by this adoption has led to a corresponding increase in the number of businesses acting as stakeholders in the field. Some of these are pure users of OSS technology but a great number are developers of such technology, and can be understood to have a substantial investment in this paradigm. It is reasonable to assume that such businesses are rational actors, and that their investment or contribution to the field implies a positive economic benefit either currently obtained or assumed as a return in the future. This paper analyzes how OSS affects Japanese IT companies' business growth both through simple use and by deeper engagement as a stakeholder in OSS community. This is the first time that such a link between the utilization of OSS and economic growth has been explored in the context of Japan, and it can hopefully lay a foundation for further study regarding the real economic value of this approach to software.
Introduction
The expanded use of Open Source Software (OSS), and the expansion of the market caused by this adoption has led to a corresponding increase in the number of businesses acting as stakeholders in the field. Some of these are pure users of OSS technology but a great number are developers of such technology, and can be understood to have a substantial investment in this paradigm. The question is why this is so. From the viewpoint of the enterprises (demand side) intending to introduce OSS, the most frequently cited reason for investment is described as cost-reduction. However, the pure cost-reduction on the part of these enterprises (or governmental organizations) may lead to the shrinking of the market of IT companies (supply side).
In such an environment it is necessary for supply side IT companies to cultivate new markets to maintain or expand their business. Somewhat ironically, these IT companies themselves face cost pressure from OSS in some cases, and they then need to incorporate outside resources (such as external OSS developers and their contributions) into their own product portfolios to maximise their R&D returns.
In short, supply side companies have to reduce costs as required by competitive pressure, a pressure partly amplified by OSS, and participate in the development processes to ensure their continued competitiveness in the market. This paper analyzes how OSS affects Japanese IT companies' business growth both through simple use and through deeper engagement as a stakeholder in the community.
2 Open Innovation and OSS Business Model
2.1 Open Innovation, and the Matter of Free Riding
[START_REF] Chesbrough | Open Innovation: The New Imperative for Creating And Profiting from Technology[END_REF] describes the traditional, separate-style business strategy as "Closed Innovation", in which enterprises develop ideas, marketing, support, and financing by themselves, and research and development is performed almost exclusively intra-enterprise. However, the superiority of "Closed Innovation" as an economic model for creativity is being eroded because of the liquidity of labour, improvements in the knowledge power of employees, and the existence of venture capital to drive new innovation elsewhere. In this context business enterprises have begun to use inflows and outflows of knowledge to fit their purposes, not only to accelerate their internal innovation, but also to encourage their innovation to be used externally. This process is "Open Innovation", which blurs boundaries between business enterprises; by joining internal resources and external resources together, extra economic value is generated for all parties concerned. This development style is essentially the same as the longer-established OSS development style. OSS is developed by a "Community" of stakeholders, which may be structured in a Bazaar style or a Cathedral style; it tends to be open to all developers, software engineers and business enterprises with an interest in participation, and they can participate or withdraw at any stage in the overarching process (though naturally continued participation is incentivised in terms of increased ownership of the technology produced or increased customisation to fit individual use-cases). From the perspective of businesses engaging as stakeholders in this field, they join a community beyond the separated confines of their own organisation in order to absorb the fruit of innovation and developed software from third parties, who participate for similar reasons in turn. It is essentially a situation of enlightened self-interest.
One immediate consideration from this perspective is that the development of OSS technology inherently reduces costs for each stakeholder, with the complete burden of development being shared by all contributors. Conceptualising OSS technologies as platforms on which products or services can be delivered, it is easy to extrapolate that OSS contributions therefore can be directly tied into reductions in the cost of bringing new products and services to market, and therefore provides a market benefit through what can be called a leverage effect.
From a less positive perspective, if we assume rational individuals or business enterprises seeking to maximize their own convenience, the obvious next step would be to free ride on OSS development and seek to obtain the platform benefits without the burden of contribution. However, it is equally true that if every individual (or every business enterprise) behaves this way, the value provided by OSS will immediately drop and quickly run dry. Ghosh (1998) explains this by introducing a "Cooking-pot Market" model, whereby the assumption of inexhaustible supply via digital copying is offset by the understanding that the cost of development, the human labour involved, is both exhaustible and actually borne by technical elites. Therefore, rational business enterprises that want to absorb the outcomes of OSS must take part in the OSS development processes and contribute to the future of the platforms. [START_REF] Kunai | Three-stage approach of Linux Development[END_REF] categorizes the underlying OSS business model as a "Three-step Model" regarding the engagement of business enterprises. As they move up the ladder, though the cost of development increases, business enterprises can increase the economic effects, as shown in Fig. 1.
Three-step Business Model of OSS
Fig. 1. Three-step Business Model of OSS
In the first stage, business enterprises use OSS as End Users, and they only use OSS in the same way as proprietary software. Their primary purpose is cost reduction, but the economic effect is very low. In the second stage they use OSS in a more engaged manner, expanding the functional features they need, constructing application software, providing support for their customers, and integrating systems. In this stage, the economic effect is comparatively higher than that of the first stage, though costs rise because of the manpower and equipment needed to launch and sustain these derivative businesses.
In the third stage they participate in the "mainstream" development process of OSS, and bring forth the highest economic effect. They contribute to the "Community" by providing physical support and financial backing. The development style of this stage is different from that of stage two, primarily because they develop software in association with other companies, including their competitors. This is -as referenced before -enlightened self-interest. The "Community" has many resourceful software engineers, who contribute to the development process of OSS by fixing bugs or supplying patches. Those closest to each business sector can address its requirements most effectively, and -on a platform rather than product level -competitors can work together to enable the next generation of their different products without undertaking 100% of the engineering on their own. In this way business enterprises become able to reduce the cost of the manpower and equipment demanded. Moreover, by developing with OSS engineers and other companies, they are able to acquire the "Leverage Effect". Thus the underlying hypothesis is that the process of "Open Innovation" enables business enterprises to absorb the fruits of the "Community" of OSS. This paper now tries to establish this hypothesis through a questionnaire survey of IT companies in Japan.
Study Methodology
The methodology we employ in this study is to investigate the effect on the business growth by OSS utilization and contribution in Japanese IT companies (with our primary focus being the supply side of information solutions in business processes). As is described by Kunai, we assume, "The more IT companies contribute to OSS communities, the more they are able to acquire economic effect".
According to this methodology, we sent out a detailed questionnaire survey to IT companies in Japan during 2012. The survey slips were sent to 642 companies belonging to the Information Industry Association in Japan, and 191 companies gave us replies (collection rate: 29.8%). The survey was conducted in the form of a questionnaire containing the items shown in Box 1. In the survey we questioned the utilization and contribution of low-level OSS (such as Linux, database technologies, programming languages, etc.). Application-level software (such as ERP, CMS, CRM, etc.) is excluded, because case examples of development of such software are rare in Japanese IT companies. All questions are answered by selecting among alternatives, i.e. discrete data.
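The correlation analysis reported below (Tables 1-5) uses Spearman's rank correlation coefficient, which is suited to this kind of ordinal, discrete data. For reference, in the standard formulation without tied ranks (an assumption here; with the many ties that discrete answers produce, the coefficient is computed instead as the Pearson correlation of the midranks) it is:

\[ \rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^{2}}{n(n^{2}-1)} \]

where \(d_i\) is the difference between the ranks assigned to company \(i\) on the two variables and \(n\) is the number of responding companies (here \(n = 191\)).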
Utilization and Contribution of OSS
In the questionnaire survey, we ask for the utilization ratio of OSS -what percentage of software development makes use of OSS in total. "100%" for Linux means that the company uses Linux for all its server operating systems, while "50-74%" for Ruby indicates that Ruby is used in 50-74% of the company's software development. In such a company, they probably utilize "other languages" for the remaining 25-50%. The results are shown in Fig. 2. Most Japanese IT companies use OSS in their business field, especially the Linux operating system core components and various database technologies (MySQL, PostgreSQL, etc.). At the same time, the use rates of the Japanese-origin technology Ruby and its American-based development framework Ruby on Rails are unexpectedly low (Fig. 2). This is because the poll of IT companies in the questionnaire survey covers a wide range, including system integrators, software developers, and network service companies, while Ruby use is currently limited to the field of web application development, along with Ruby on Rails.
Fig.2. Utilization of OSS in Japanese IT Companies n=191
The survey also revealed that the percentage of companies which contribute to OSS communities is relatively low in Japan (Fig. 3). The currency terms in the questions were originally in Japanese Yen; however, the figures are converted into US Dollars (100 JPY = 1 USD) so that readers can grasp the volumes easily.
The result of our study, namely that most Japanese IT companies use OSS without contributing to the OSS development process, might show that they are positioned as "free riders" at the non-application level. However, the survey also confirmed that, on average, about 20% of IT companies contribute to the OSS development process. The question is therefore what the correlations between the utilization of OSS and the contribution towards OSS are.
Correlation between Utilization and Contribution of OSS
Correlation of utilization among OSS
As a whole, the survey shows that correlations of utilization among OSS are strong, and within this context the correlations of Linux with other OSS (especially Apache and databases) are quite strong. Most IT companies in Japan supply business solutions by using OSS components based on the Linux OS and its ecosystem.
Correlations between Ruby and Ruby on Rails are also strong, though correlations between Ruby and Databases are weak by comparison with other scripting languages.
Correlation between utilization and contribution of each OSS
As a whole, correlations between utilization and contribution for most OSS technology types are not significant. The exception is that the correlation between Ruby and Ruby on Rails in this context is significant at the 1% level, and the correlations of Other Languages with Apache and database technologies are rather weak but significant at the 5% level. We are led to the conclusion that in Japan most IT companies make use of Linux, Apache, and database technologies, and that these types of technology are essentially used in the same manner as proprietary software. Of course, these OSS technologies are being developed through worldwide communities by the contribution of many engineers and businesses, so many Japanese IT companies gain their value without much pain, as "free riders."
In contrast, Ruby has been developed mainly by the Japanese community (approximately half of its developers are Japanese). Justifiably, companies conducting business using Ruby get engaged in the Ruby community around Japan, and by extension they also get engaged in the development process of Ruby on Rails in America. To some extent this is pure self-interest in terms of building the shared platform, and to some extent it shows that Ruby and Ruby on Rails are still very much developing OSS technologies that have not yet gained a stable valuation in business use. It is hard to be a free rider at this point in their lifecycle, so adopters are inherently positioning themselves as investors and contributors.
In addition, it is interesting that the correlation between utilization and contribution of other languages (Perl, Python, PHP, etc.) is also shown, even though, unlike Ruby, the number of Japanese developers in these languages' communities is small. This suggests that these OSS scripting languages, including Ruby, have not gained a stable valuation in business use yet, either.
Effect on Business Growth by Utilization and Contribution of OSS
The larger question is how we can survey the effect on business growth of the utilization of and contribution to OSS, thereby explaining more clearly the actions of the companies in this market as rational actors. We understand that business growth is affected by many factors such as market conditions; however, in order to test our exploratory hypothesis, we investigated the correlations between indicators of business growth and the utilization of OSS and contribution to OSS communities. As a whole the data indicate that the subsequent-period prospect of the sales growth rate might be impacted by the utilization of OSS, and in this context Ruby compares favorably with other OSS in Japan. At the same time, there is little correlation between indicators of business growth and contribution towards OSS.
The results show that, in Japanese IT companies, the utilization of OSS has an insignificant effect on the present sales growth, but when they use OSS they tend to make allowance for the subsequent sales growth. However, the contribution to OSS communities has an insignificant effect both on the present sales growth and on the subsequent sales growth. As rational economic actors, their investment decisions are impacted by this understanding.
Conclusion and Challenges for the Future
It has become commonplace for business enterprises to use OSS in their business. The logic we understand as framing such engagement is that a competitive edge comes from the technical advantages delivered by using OSS, and -using the same logic -it is therefore indispensable for them to contribute to or participate in the development process of OSS, as Kunai proposes. However, our data show that major OSS, like Linux, Apache, MySQL, and PostgreSQL, are still objects of utilization for Japanese IT companies, or "Frontier" technologies. They have been able to get a competitive edge only by the utilization of OSS, and contribution to OSS projects or communities has not been linked to business growth for them.
At the same time, exceptionally, Ruby and Ruby on Rails are objects of both utilization and contribution for Japanese IT companies, who have to contribute to or participate in the development process of both technologies. This appears to be because Ruby and Ruby on Rails are still platforms very much under development that have yet to gain a stable valuation in business use. The contribution to Ruby and Ruby on Rails is not linked to business growth in the same way as other OSS, nor is it motivated by the same adoption criteria.
Moreover, we excluded application-level software (such as ERP, CMS, CRM, etc.) from the survey; in the future, case examples of development of such software are expected to increase in Japanese IT companies. In addition, to survey the effect on business growth we used the growth rate of sales and the growth of employee numbers as indicators of business growth, although there are also other indicators that could estimate business growth. These are our research challenges for the future.
Our data are not perfect, and the survey lumped together data collected from many types of supply side IT companies. The advantage is that this poll of IT companies covers a wide range, and their utilization of and contribution to OSS differ from each other, which provides a snapshot of the overall market. To analyze the effect on business growth by utilization and contribution of OSS in more detail, and to properly understand the free rider versus investor issue that we have uncovered, will require more assorted statistical analyses. One proposed step for further research is to expand future survey criteria to the demand side of IT businesses, whom we might term consequential OSS users, and who also may contribute to the development process of OSS due to its open nature. Even broader research into non-IT but significant software development areas such as banking or heavy industry could also prove fruitful.
Fig. 3. Contribution to OSS communities in Japanese IT Companies n=191
Box 1: OSS Utilization and Contribution Questionnaire Survey Slips towards Japanese IT Companies
Company profile:
Q1. Home City
Q2. Inauguration of Business
Q3. Main Business Service
Q4. Capital Stock
Q5. Number of Employee
Q6. Number of Developers (programmers, software engineers, etc.)
Q7. Sales Amount
Q8. Growth Rate of Sales (present period)
Q9. Prospect of Sales Growth Rate (subsequent period)
Q10. Growth of Employee Number (present period)
Q11. Prospect of Employee Number's Growth Rate (subsequent period)
Utilization of OSS: (rate of utilization)
Q12. Utilization of Linux
Q13. Utilization of Apache HTTP Server
Q14. Utilization of Database technologies (MySQL, PostgreSQL, etc.)
Q15. Utilization of Programming Language Ruby
Q16. Utilization of Other Programming Languages (Perl, Python, PHP, etc.)
Q17. Utilization of Ruby on Rails
Contribution to OSS Communities: (amount of direct investments
and manpower costs of OSS engineers inside company)
Q18. Contribution to Linux
Q19. Contribution to Apache HTTP Server
Q20. Contribution to Database technologies (MySQL, PostgreSQL, etc.)
Q21. Contribution to Programming Language Ruby
Q22. Contribution to Other Programming Languages (Perl, Python, PHP, etc.)
Q23. Contribution to Ruby on Rails
4 Results and Discussion
Table 1. Correlations of utilization among OSS
Correlations of contribution among stakeholders in OSS are also strong. In the same context, correlations of cross-contribution between Linux and other OSS (especially Apache and various database technologies) are comparatively strong. It also holds true that the correlations of Apache with other OSS (various database technologies and various scripting languages) are strong.
Linux Apache Databases Ruby O.L. RoR
Linux - .692 ** .625 ** .469 ** .507 ** .402 **
Apache - .554 ** .554 ** .494 ** .409 **
Databases - .473 ** .581 ** .459 **
Ruby - .232 ** .812 **
Other Languages - .255 **
Ruby on Rails -
Spearman's rank correlation coefficient ** 1% level of significance
Correlation of contribution among OSS
Table 2. Correlations of contribution among OSS
Linux Apache Databases Ruby O.L. RoR
Linux - .836 ** .773 ** .616 ** .696 ** .447 **
Apache - .765 ** .580 ** .702 ** .430 **
Databases - .550 ** .802 ** .526 **
Ruby - .575 ** .772 **
Other Languages - .622 **
Ruby on Rails -
Spearman's rank correlation coefficient ** 1% level of significance
Table 3. Correlations between utilization and contribution of OSS
contribution Linux Apache Databases Ruby O.L. RoR
utilization
Linux .136 -.002 .004 .128 .083 .110
Apache .151 .135 .054 .149 .125 .111
Databases .050 -.016 .052 .132 .098 .105
Ruby .031 -.013 .007 .324 ** .114 .351 **
Other Languages .144 .161 * .189 * .099 .272 ** .140
Ruby on Rails .087 .086 .065 .331 ** .159 .420 **
Spearman's rank correlation coefficient
** 1% level of significance, * 5% level of significance
Table 4. Correlations between business growth and utilization of OSS
Growth Rate of Sales (present period) / Prospect of Sales Growth Rate (subsequent period) / Growth of Employee Number (present period) / Prospect of Employee Number's Growth Rate (subsequent period)
Linux .191 ** .245 ** .207 ** .133
Apache .167 * .220 ** .079 .066
Databases .131 .222 ** .026 .067
Ruby .135 .214 ** .063 .113
Other Languages .098 .176 * .052 .092
Ruby on Rails .055 .178 * .061 .068
Spearman's rank correlation coefficient
** 1% level of significance, * 5% level of significance
Table 5. Correlations between business growth and contribution of OSS
Growth Rate of Sales (present period) / Prospect of Sales Growth Rate (subsequent period) / Growth of Employee Number (present period) / Prospect of Employee Number's Growth Rate (subsequent period)
Linux -.091 .007 -.032 -.089
Apache -.031 .021 -.092 -.127
Databases -.036 .092 -.083 .020
Ruby .052 .047 .072 .058
Other Languages .019 .057 -.029 .002
Ruby on Rails .034 .075 .018 .049
Spearman's rank correlation coefficient
** 1% level of significance, * 5% level of significance | 22,540 | [
"989698",
"989699"
] | [
"466675",
"466675",
"466675",
"485046"
] |
01467573 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467573/file/978-3-642-38928-3_17_Chapter.pdf | Victor Kuechler
email: [email protected]
Carlos Jensen
email: [email protected]
Deborah Bryant
email: [email protected]
Misconceptions and Barriers to Adoption of FOSS in the U.S. Energy Industry
Keywords: Adoption, barriers, energy sector, case studies
Introduction
The energy industry in the United States is changing in many ways. Its growing and diverse energy needs have created a need to collect new types of data and to network systems that have historically been "siloed". This new networked grid uses computer-based remote control and automation to intelligently manage energy as it moves between energy producers and consumers [START_REF]Smart Grid[END_REF]. The American Reinvestment and Recovery Act of 2009 provided $3.4 billion of grant money in the form of the Smart Grid Investment Grant to support the modernization of the power grid [START_REF]Smart Grid Investment Grant Program: Progress Report[END_REF]. As of July 2012, 99 projects have been funded, all of which deploy smart grid technology with the chief goals of reducing peak and overall electricity demand and operation costs; improving asset management, outage management, reliability, and system efficiency; and reducing environmental emissions [START_REF]Smart Grid Investment Grant Program: Progress Report[END_REF]. The drive to establish a "smart grid" has introduced new technology, but also created new security risks and challenges [START_REF] Hahn | Cyber Attack Exposure Evaluation Framework for the Smart Grid[END_REF].
As the modernization of the North American power grid continues, the security of energy delivery systems, control systems, and information sharing must be a high priority. Software, whether supporting operational systems or security technology, is a key element and must be central to any related security discussion. These complex problems require collaboration between regional energy generators, government agencies, standards bodies, and security professionals. Leveraging a community of like-minded individuals who share similar needs and goals can increase the adaptability and flexibility of cyber security initiatives and software within the energy sector.
The free and open source software (FOSS) movement is one of the only distributed software development models that brings together developers from many different domains of knowledge while supporting a common goal of sharing information, collaborating to improve systems, and leveraging diverse knowledge to create software that is among the most effective in both cost and implementation.
We conducted an exploratory study, funded by the Energy Sector Security Consortium (EnergySec), to map out the use of FOSS software in the energy sector, especially as it relates to cyber security. The motivation for this study was rooted in the common perception that FOSS is less secure due to the availability of its source code. Although this perception has existed since the inception of FOSS, a 2010 study from Boston College supports this claim, concluding that open source software is at greater risk of exploitation [START_REF] Ransbotham | An Empirical Analysis of Exploitation Attempts based on Vulnerabilities in Open Source Software[END_REF]. Despite this perception, many government agencies have adopted FOSS, some of which consider FOSS an equal to proprietary software, including the Department of Defense [START_REF] Wennergren | Clarifying Guidance Regarding Open Source Software (OSS)[END_REF].
Through a survey, we identified seven representative organizations and FOSS projects within the energy sector and identified the tools they use, the licenses used, legal concerns or solutions, and the challenges and hurdles they faced or have overcome in adopting FOSS. We conducted semi-structured interviews with key members of each organization or project with the goals to:
1. Understand the extent to which FOSS participates in the energy sector, and vice versa
2. Understand the barriers to the adoption of FOSS and the process of contributing to FOSS
3. Record the best practices derived from current initiatives in FOSS, the energy industry, and government.
Background
Introduction to FOSS
FOSS development is a collaborative process, thus understanding its culture is crucial.
The open and altruistic nature of FOSS is appealing to many users and developers. FOSS development is predominantly volunteer-driven, built on a model of computer-mediated, asynchronous communication and collaboration. Many people today characterize FOSS in this way. It may be surprising to learn that large projects like Linux, Firefox, and Apache are the exception rather than the rule; most FOSS projects are developed by a handful of people. In a 2002 study of the most active mature projects (i.e., projects not ramping up) on Sourceforge.net, the average project has four developers, and the majority has one [START_REF]Homeland Open Security Technology[END_REF]. In larger projects, more than half of developers only have regular contact with one to five other project members [START_REF] Ghosh | Free/Libre and Open Source Software: Survey and Study, Part 4: Survey of Developers[END_REF].
Because FOSS projects rely on volunteers, users are viewed as potential contributors. Ye and Kishida and Herraiz et al. studied "role transition" from users (using the software, motivating developers) to bug reporters (reporting and documenting problems, submitting feature requests) to active developers (contributing code) [START_REF] Ye | Toward an understanding of the motivation of open source software developers[END_REF], [START_REF] Herraiz | The Process of Joining in Global Distributed Software Projects[END_REF]. FOSS communities and projects operate as meritocracy, where a programmer's ability and contributions to a project shape how the community perceives them.
FOSS's inherent volunteer nature highlights its adaptability to meet the needs of specific groups and niche markets. Volunteers work, initiate, or develop projects that either meets personal needs or the purpose and goals of other like-minded individuals [START_REF] Lakhani | The Boston Consulting Group Hacker Survey[END_REF]. FOSS has continually adapted to meet the needs of almost every domain of the software ecosystem. In 2005, Walli et al. evaluated the use of FOSS in U.S. companies. They found that 87% of the 512 companies they surveyed use FOSS [START_REF] Wennergren | Clarifying Guidance Regarding Open Source Software (OSS)[END_REF]. They also discovered that companies and government institutions use FOSS because it reduces IT costs, delivers systems faster, and makes systems more secure. Large companies with annual revenue over $1 billion saved 3.3 million dollars, medium-sized companies saved an average $1 million, and smaller companies (<50 million) saved $520,000. They also discovered that after years of using FOSS software stacks (Linux, etc.) and web server software, companies were beginning to use other FOSS business software instead of proprietary software [START_REF] Wennergren | Clarifying Guidance Regarding Open Source Software (OSS)[END_REF].
In 2007, David Wheeler also found that FOSS has a significant market share and "is often the most reliable software and in many cases has the best performance" [START_REF] Wheeler | Why Open Source Software/Free Software (OSS/FS, FOSS, or FLOSS)? Look at the Numbers![END_REF]. Similarly, in 2011, Coverity compared open source and proprietary software quality and found that "open source quality is on par with proprietary code quality, particularly in cases where codebases are of similar size" [START_REF]Open Source Integrity Report[END_REF]. The open source ideal has been a part of government in spirit since the 1960s, but the real potential of FOSS has only been advocated since the late 90s. Several publications, "A Case for Government Promotion of Open Source Software" by Mitch Stoltz; "Open Source Code and the Security of Federal Systems" by a multiagency working group (National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism); "Opening the Military to Open Source" by Maj. Deiferth, USAF; and "The Simple Economics of Open Source" by the Bureau for Economic Research, were published in the late 90s and early 2000s. These publications started some of the first serious discussions of FOSS adoption in government and led to a variety of open source solutions being deployed by the U.S. government in the early 2000s. This included the NSA's release of SELinux-an operating system integrated with a suite of FOSS pen testing and security tools, and DHS's Homeland Open Security Technology (HOST) program in 2004.
In 2008, The Open Source Census reported that European government entities had the highest use of FOSS per computer scanned, with an average of 68 different open source packages installed per computer. The United States averaged 51 open source packages per computer [START_REF]Open Source Census Tracks Enterprise Use of Open Source Globally[END_REF].
Between 2009 and 2012, several advocacy groups were established to promote the opportunities of FOSS in U.S. government, including Open Source for America, a coalition of companies, academic institutions, communities, and individuals; and CivicCommons, a community-driven app store that brings people together to share their solutions, knowledge, and best practices to improve government.
In 2003, the Department of Defense approved the use of open source in their agency [START_REF] Stenbit | Open Source Software (OSS) in the Department of Defense (DoD)[END_REF], and later in 2012, they released a memorandum stating that open source and proprietary software are considered equal [START_REF] Wennergren | Clarifying Guidance Regarding Open Source Software (OSS)[END_REF]. NASA also launched code.nasa.gov, a repository of FOSS projects currently in development at NASA. So far, the energy sector has not been part of the efforts to adopt FOSS.
Software Security in the Energy Sector
Security, including cyber security, has long been an important concern for the energy sector, given the electric grid's importance as a key infrastructure. In the past, this was in large part achieved through the isolation of control systems from more publicly accessible and exposed systems, a practice the industry refers to as "siloing." This practice, in theory, means that for control systems, cyber security largely devolves into a problem of physical security. This model has proven difficult to implement. The temptation or need to network critical systems, be it for monitoring, maintenance, reporting, or to optimize operations, means that in practice these systems have been more exposed than the industry has at times been willing to admit.
Robert J. Turk's "Cyber Incidents Involving Control Systems", published in 2005 for the U.S. Department of Homeland Security, summarizes 120 known cyber security incidents [START_REF] Turk | Cyber Incidents Involving Control Systems[END_REF]. They found that forty-two percent of incidents derived from mobile malware, twenty-eight percent from hacks, twenty-six percent from misconfiguration, and four percent from penetration tests or audits. Thirty-eight percent of these incidents originated from within the organization, sixty-one percent from outside the organization, and one percent were unknown. Thirty-three percent of the perpetrators were insiders (contractors, former employees, and current employees), forty-three percent were malware authors, and four percent were foreign nations, competitors and unknowns [START_REF] Turk | Cyber Incidents Involving Control Systems[END_REF].
In January 2003, the "Slammer" worm disabled the computerized safety monitoring system at the Davis-Besse nuclear power plant in Ohio. Stuxnet is one of the most well-known cyber threats in history. Its primary goal was to reprogram Siemens industrial control systems, but as of September 29, 2010, Stuxnet had infected about 100,000 computers after it escaped on an employee's laptop from Iran. 60% of infected hosts were in Iran, but it had spread to more than 155 countries [START_REF] Falliere | W32[END_REF]. Since Stuxnet, other cyber attacks have occurred around the world with both criminal and commercial intent that conduct both espionage (collecting data) and sabotage (destroying data). This includes Duqu and the Flame virus, a trojan similar to Stuxnet that scans computer systems for key private information on very specific machines around the world.
Hahn and Govindarasu explain that "The coupling of the power infrastructure with complex computer networks substantially expand[s] current cyber attack surface area […]" [START_REF] Hahn | Cyber Attack Exposure Evaluation Framework for the Smart Grid[END_REF]. In 2011, the Energy Sector Control Systems Working Group released the "Roadmap to Achieve Energy Delivery Systems Cyber security." This roadmap addresses the growing vulnerabilities in energy delivery systems, including "control systems, smart grid technologies, and the interface of cyber and physical security, where physical access to system components can impact cyber security" [START_REF] Batz | Roadmap to Achieve Energy Delivery Systems Cyber Security[END_REF].
"This update recognizes that smart technologies (e.g., smart meters, phasor measurement units), new infrastructure components, the increased use of mobile devices, and new applications are changing the way that energy information is communicated and controlled while introducing new vulnerabilities and creating new needs for the protection of consumer and energy market information" [START_REF] Batz | Roadmap to Achieve Energy Delivery Systems Cyber Security[END_REF].
The focus on the smart grid and automated control systems like SCADA has opened up a variety of new threats, business practices, market trends, regulations, and technologies. Among these are a few important issues [START_REF] Batz | Roadmap to Achieve Energy Delivery Systems Cyber Security[END_REF]:
"Growing reliance on commercial off-the-shelf technologies" "Increasing reliance on external providers for business solutions and services, which introduces additional cyber and physical reliability challenges" "Increasing interconnection of business and control system networks" "Increasing reliance on the telecommunications industry and the Internet for communications"
Though the energy sector does work with unique systems, and has some unique security concerns and requirements, some of these do overlap with general IT security. It is therefore not surprising to find these groups using FOSS tools such as NMAP, OpenADR, OpenPDC, and Hadoop.
The industry has also developed some unique FOSS solutions. In April 2012, Pacific Northwest National Laboratory announced they would be open sourcing a homegrown, host-based security sensor to encourage community feedback and participation. The cyber tool is called the Hone Project and it "pinpoints which applications or processes infected machines and an external network they are using to communicate" [START_REF] Messmer | Research lab extends host-based cyber sensor project to open source[END_REF].
These efforts notwithstanding, there appears to be a marked dearth in energy sector FOSS projects, including cyber security related projects.
Study Methodology
Our goal was to catalogue experiences and attitudes with regard to the adoption and development of FOSS in and for the U.S. energy sector, especially related to cyber security. We used a mixed-method study design, implementing both online surveys, to reach a broad audience, and interviews, to gather rich data. The first step of our process was identifying key FOSS projects and organizations that operate within the energy sector. We started from a list of contacts from EnergySec. We scoured online sources, industry papers, and research articles for others who might provide insight.
Next we emailed each potential contact, explained the purpose of the study, and provided them with a link to our surveys. Two surveys were used, one for people working in the energy sector and another for contributors to FOSS projects used in the energy sector. Subjects who identified with both groups filled out both surveys.
Seven semi-structured interviews were conducted with selected survey respondents. Although a regimented series of questions was developed, the semi-structured approach allowed us to explore issues brought up by subjects and probe their thinking.
We used open coding as our method of data analysis. Open coding is a method of grounded theory that provides a way for researchers to procedurally organize and analyze qualitative data [START_REF] Berg | Qualitative research methods for the social sciences[END_REF]. We generated codes from our initial survey responses. These codes were discussed among the researchers and used to define categories of comments and concepts in the data. For example, when one participant commented that "business drivers force the energy industry to network their systems […]," this was tagged with the code "barrier/risk". Interview transcripts, notes, and survey answers were analyzed line-by-line and coded independently by two researchers. These codes were compared and refined. This process was repeated until there was sufficient agreement about the codes and their meaning. After all data elements were coded, common themes were identified.
The burden of proof is relatively low in exploratory studies; the goal of this study was not to prove anything statistically, but to identify broad themes and concepts. Thus our sample sizes were small, and we did not perform statistical analysis.
Sampling
We divided our subjects into three categories: energy producers and grid operators, the solution providers they rely on (IT contractors, software and hardware companies, etc.), and FOSS projects. All three groups were represented in our surveys, and we wanted to make sure all were represented in our interviews. We therefore chose to interview representatives from Portland General Electric, the Tennessee Valley Authority, Utilisec, Dell SecureWorks, GADS/OS, Green Energy Corp, and the Grid Protection Alliance. Table 1 shows how these organizations were classified in our study.
Table 1. List of organizations
Energy Producers: Portland General Electric, Tennessee Valley Authority
Solutions Providers: Utilisec, Dell SecureWorks
FOSS Projects: GADS/OS, Green Energy Corp, Grid Protection Alliance
Portland General Electric (PGE). PGE serves over 800,000 customers in 52 cities in Oregon and has deployed 825,000 smart meters [START_REF]Portland General Electric[END_REF]. PGE can offer insight into day-to-day work in both IT and operations from the perspective of a midsize energy provider.
Tennessee Valley Authority (TVA).
TVA is owned by the U.S. Government and provides electricity to 9 million people throughout the southeast United States, being the 5th largest provider in terms of revenue [START_REF]About TVA[END_REF]. TVA represents a larger energy provider with more diversified resources and needs.
Utilisec. UtiliSec offers cyber security services specifically tailored for electric utilities, with expertise in smart grid security, low-level analysis, and penetration testing. They offer training as well as guidance on real-world systems and security architecture review, penetration testing, and policy composition [START_REF]Utilisec: Electric Utility Cyber Security[END_REF].
Dell SecureWorks®. SecureWorks is a provider of information-security-as-a-service, processing more than 13 billion security events and 30,000 malware specimens each day [START_REF]Dell to Acquire Secureworks[END_REF]. Dell SecureWorks has extensive experience partnering with utility providers, helping them solve security challenges with industrial control systems and SCADA networks, smart grid technologies, advanced metering infrastructure and other critical IT assets. Currently, they are working on Snort 2.9 (Modbus and DNP3), a network intrusion detection system for Unix.
GADS Open Source (GADS/OS).
GADS/OS is a FOSS project that collects and analyzes performance and event data in a power grid and reports it to NERC. More than 200 companies and 3,800 generating units use GADS/OS in the United States and around the world [START_REF]GADS Open Source[END_REF].
Green Energy Corp.
Green Energy Corp provides software solutions and software engineering services to communications, utilities, and energy companies. Green Energy Corp developed a FOSS platform called Greenbus, which enables utilities to integrate legacy technologies into their smart grid [START_REF] Srivastava | Green Energy Corp Introduces Smart Grid Open Source Community[END_REF].
Grid Protection Alliance. The Grid Protection Alliance (GPA) is a non-profit aimed at supporting the development of security-related IT solutions for the energy sector [START_REF]Grid Protection Alliance[END_REF]. GPA's projects include the PMU Connection Tester, openPDC, openPG, and SIEGate.
Findings
FOSS is used more extensively in the energy sector than we initially thought, though most organizations are consumers rather than producers, and most use is behind the scenes. All participants noted, in some fashion, that FOSS was not seen as unreliable or undesirable, though several commented that the FOSS support model could cause problems.
Contributing to FOSS is much more rare than adoption, primarily due to real or perceived intellectual property and liability concerns, uncertainty about how to build community, lack of best practices, and other internal roadblocks. Some concerns were raised about the need to maintain good relations with solution providers and not wanting to appear in direct competition for fear of liability or regulatory oversight. These are common concerns of many industries where there is a lack of strong leadership and examples of FOSS adoption.
FOSS is currently used mostly for ad hoc security solutions. By this we mean that system administrators use one-off FOSS tools to complete small tasks, while proprietary enterprise software is used for critical system management and controls (e.g., SCADA). One participant claimed that "almost every tool used by testers is open source; Backtrack (Linux distribution) is a good example."
On the IT side, use of FOSS seems common, and decisions to adopt FOSS seem to be made based on an assessment of need and availability, as well as vendor support. On the operations side of these organizations, we found that security needs or concerns are downplayed and sometimes ignored due to the supposed separation between IT and production systems. This separation is not always real, and there have been many instances where control systems have been linked to IT systems, either intentionally or accidentally, and security has been breached. As the industry moves to implement a smart grid, the need to network the two sides, as well as consumers and producers, will further weaken the security through isolation model.
Themes
Eleven codes were generated from the interviews and surveys: perceptions, future needs or trends, drivers, barriers, risks, potential opportunities, use cases, best practices, ad hoc solutions and policy, reasons for using open source, and business models. These codes were pared down into two categories: drivers for adoption, and risks and barriers. Tables 2 and 3 list each theme that was shared between interviewees or unique to an organization or FOSS project.
4.1.3 Vendor dependence
It was emphasized that a small number of vendors supply the majority of software solutions to the energy sector. The current mentality is to buy whatever the vendor is supplying rather than advocating for new solutions from these vendors, which means that these vendors decide what software is adopted. Without the support of vendors, FOSS will not be a viable option for security. If a FOSS solution fails, the liability falls on the organization and not on the software developers or the vendor. If utilities pool their resources, they can get vendors to make the changes and add the features they need. As noted by one interviewee, "if you get some of the bigger vendors moving in that direction [of open source] then you will have change."
4.1.4 Legal concerns
The energy sector is a risk-averse community that follows more than it leads. As one energy producer commented, "the more you use a standard practice, the less questions auditors ask." Regulatory bodies have shaped the adoption and use of software in the energy sector, and there are penalties for noncompliance. Common practices and similar tools and methodologies will create simpler audits. This creates a variety of legal uncertainties around adopting FOSS, which is outside the norm. The energy sector needs one or two strong and influential organizations to prove that FOSS works. Currently, the regional division of the energy sector (e.g., Texas operates independently, the Pacific Northwest collaborates regionally, etc.) does not simplify the task, making it harder to look for role models across regions.
Discussion
Our findings show that FOSS is used extensively in the energy sector, though most of it is relatively informal, and few organizations are large-scale users, much less producers of FOSS. As Table 3 demonstrates, interviewees confirmed that FOSS offers significant benefits in cost and flexibility. However, FOSS is still used in most cases as a cursory solution to improve task efficiency or detect network intrusions. FOSS is not used as a primary solution for managing critical systems. We believe that one of the main reasons for this ad-hoc use of FOSS is a lack of discussion in the energy sector community about the use of FOSS and its potential benefits. Such a FOSS discussion could create a means to petition and influence the market (e.g., vendors, etc.) to deliver more FOSS solutions. The energy community can also leverage best practices, potential opportunities, and the community itself to subdue the risks and barriers that have inhibited FOSS adoption.
One subject noted that there is a lack of documentation that explains how organizations have successfully used FOSS, as well as their reasoning for choosing it. In an effort to broaden their participation in FOSS, the Government of Spain conducted a dossier project in 2010 to catalogue the best practices of FOSS communities and projects that are heavily influenced by a public administration [START_REF] Bryant | Public Administrations Code Release Communities: Dossier ONSFA[END_REF]. This helped lay the foundations for further adoption. Without sharing case studies, lists of best practices, lessons learned, or advice about legal issues, an organization will find it difficult to justify change. Similarly, the "keeping the lights on" mentality in the energy sector has demonstrated the strong connection between reliability and financial impact. Creating and sharing case studies and best practices will help spread evidence of FOSS's reliability.
FOSS also needs to be recommended by trusted sources. Procuring support of FOSS from larger vendors in the energy sector will drive FOSS acceptance and adoption. Similarly, open sourcing commercial software can open up new revenue streams in consulting and support, which can also improve vendor buy-in.
Opportunities have also surfaced around creating and maturing open standards. Collaboration will help create common practices, tools, and audit procedures that will help shape standards. An example is the Secure Information Exchange Gateway (SIEGate), a FOSS project that provides a secure channel for transporting real-time data between a "utility control center and other control centers, utilities, and regulatory and oversight entities" [START_REF]Siegate: Secure Information Exchange Gateway for Electric Grid Operations[END_REF]. This project is a collaborative effort between the Grid Protection Alliance, University of Illinois-Urbana Champaign, Alstom Grid, PJM Interconnection, and the Pacific Northwest National Laboratory. Since the FOSS development model hinges on the motto "release early, release often", it is designed to take in changes and deploy them quickly. This creates a responsive environment for implementing policy change and complying with new standards.
Ultimately, one of the most powerful and easiest ways to promote the development and adoption of FOSS could be through a top-down initiative and promotion from a regulatory or government agency like the U.S. Department of Energy, much like the U.S. Department of Defense and the U.S. Department of Homeland Security have done for FOSS through the HOST program [START_REF]Homeland Open Security Technology[END_REF].
Limitations
Although we believe the cases examined in this study characterize many of the entities operating in the U.S. energy sector, there are limitations to our work. Firstly, while interviews with individuals within the projects and organizations provided an expansive perspective of FOSS and cyber security, these perspectives are not necessarily representative. Many other organizations operate within the energy sector that did not participate in the study. This includes national labs, electric cooperatives, regulatory bodies and other affiliates. Due to scheduling, policy, or other reasons, representative members of these groups were unable to participate in the study.
That said, and while more study should be undertaken to confirm and expand on their findings, these case studies provide some interesting insight into the barriers and opportunities of FOSS adoption in the U.S. energy sector.
Table 2. Drivers for adoption
EP: Energy Producer; SP: Solutions Provider; FP: FOSS Project; #: Number of respondents who agreed
Acknowledgements. We would like to thank EnergySec for funding this project, as well as providing insight into the energy sector. We also appreciate the participation of Portland General Electric, Tennessee Valley Authority, Utilisec, Dell SecureWorks®, GADS Open Source, Green Energy Corp, and Grid Protection Alliance.
Lack of solutions (including FOSS) for control systems: X X X (4)
Reluctance to acknowledge vulnerability of control systems: X X (4)
Energy providers prefer to buy complete solutions: X (1)
Jurisdiction (e.g., federal vs. private, regional) issues limit collaboration: X (1)
From these codes, we identified themes, significant converging perceptions, drivers, barriers and risks associated with the use and adoption of FOSS in the energy sector. The themes we identified are the following:
4.1.1 FOSS as an "unknown" or "hippy movement"
Many in the energy sector perceive FOSS as a "hippy movement," an ad-hoc effort rather than an effective software model. When FOSS is suggested, most people don't know where the support comes from, which is crucial in energy operations. One FOSS project manager noted that "every client ultimately asks 'how do you stay in business? Will you be there when I need you?'" This same person finds himself doing presentations because people "don't understand open source." One energy producer prefers to pay for something because it guarantees a complete product. This might explain why FOSS is currently used in an ad hoc way with very little, if any, organized discussion of open source adoption aside from small in-house teams or regional collaborators. On the other hand, these perceptions do not apply to more well-known FOSS systems like Linux.
4.1.2 Separation between controls engineers and IT
The division of jurisdiction between operations engineers and IT has impeded the broader adoption of FOSS in the energy sector. Systems are being secured by home-grown security teams (often out of the control systems ranks). With the onset of the smart grid, utilities are forced to modernize and network their control systems. Several subjects noted that controls engineers perceive IT engineers as "cowboys". For this reason, many control engineers do not want to let the IT staff "mess around" with security on their systems. In essence, controls engineers try to shut IT folks out of the system. One interviewee noted that there are not enough security professionals in the energy sector. Increasing IT numbers could offset these tensions.
"1001485",
"989657",
"1001486"
] | [
"489548",
"489548",
"485049"
] |
01467574 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467574/file/978-3-642-38928-3_18_Chapter.pdf | Gottfried Hofmann
Dirk Riehle
Carsten Kolassa
Wolfgang Mauerer
email: [email protected]
A Dual Model of Open Source License Growth
Every open source project needs to decide on an open source license. This decision is of high economic relevance: Just which license is the best one to help the project grow and attract a community? The most common question is: Should the project choose a restrictive (reciprocal) license or a more permissive one? As an important step towards answering this question, this paper analyses actual license choice and correlated project growth from ten years of open source projects. It provides closed analytical models and finds that around 2001 a reversal in license choice occurred from restrictive towards permissive licenses.
Introduction
Research on open source software (OSS) and development processes has gained significant momentum over the last decade. Landmark work was published by Lerner and Tirole in 2003 [START_REF] Lerner | Some simple economics of open source[END_REF]. A meta-study was conducted by Aksulu and Wade in 2010 [START_REF] Aksulu | A comprehensive review and synthesis of open source research[END_REF] to give an overview of the state of the research in the field. Yet many basic questions remain to be answered. One of them is the question of licensing.
When a project has the ability to choose its license freely, license choice is frequently controversial. The same applies to the situation where a project decides to switch from one license to another. Besides philosophical reasons to favor one type of license over another, there is the concern of whether the chosen license has an impact on the project's success.
Our research question is to understand the relationship between OSS licenses and project growth. In this paper we answer the question of which type of license people prefer.
Roughly from the early 1960s to the early 1980s sharing of source code for computer programs was commonplace and conducted in an informal manner. This kind of collaboration happened in an academic setting. When commercial companies started to enforce intellectual property rights, the first open source licenses emerged as an effort to retain the collaborative environment by providing a legal framework.
Among the first of these initiatives was the Free Software Foundation (FSF) [3], which published a first version of the GNU General Public License in 1989 [START_REF][END_REF]. The GPL includes a clause that forces developers who make changes to the code to release their changes under the same conditions as the GPL. This property of the GPL led to the attribution of the GPL as a 'viral' [START_REF] Gonzaĺez | Viral contracts or unenforceable documents? Contractual validity of copyleft licences[END_REF] or 'reciprocal' license. Another term for this kind of licensing is 'copyleft'. For the remainder of this paper, licenses of this kind will be called 'restrictive'.
In 1988, two licenses were first published whose conditions were later coined 'copyfree' or 'permissive', namely the MIT license [6] and the BSD license [7] 1 . Neither requires derived work to be licensed under the same terms 2 , thus redistributing code in proprietary products is possible.
Later, licenses were created like the GNU Lesser General Public License (LGPL) that are less restrictive than the GPL-like licenses yet still not completely permissive. Projects that use those licenses are not subject of this analysis for the sake of simplicity.
Please note that both license types emerged roughly at the same time, so neither of the two types used for the analysis here had a "head-start" over the other, see Fig. 1. Besides closed analytical growth models for both license types, the paper provides:
• An estimation of changing-points that separates the growth into two periods.
The rest of the paper is structured as follows. Section 2 reviews related work. Section 3 presents the data source and research method. Section 4 provides the discovered models and statistical validation. We discuss potential limitations in section 5 and present our conclusions in section 6.
Related Work
Various studies have been conducted in the past to find out about the rationale behind a license choice. Sen, Subramian and Nelson [START_REF] Sen | Determinants of the choice of open source software license[END_REF] suggest that "OSS managers who want to attract a limited number of highly skilled programmers to their open source project should choose a restrictive OSS license. Similarly, managers of software projects for social programs could attract more developers by choosing a restrictive OSS license". Lerner and Tirole [START_REF] Lerner | The scope of open source licensing[END_REF] argue that "Projects with unrestricted licenses attract more contributors". In contrast, Colazo and Fang [START_REF] Colazo | Impact of License Choice on Open Source Software Development Activity[END_REF] analyzed 44 restrictively-and 18 permissively-licensed projects from the SourceForge database. The restrictively-licensed projects had a significantly higher developer membership and coding activity.
In a series of articles [START_REF] Aslett | The trend towards permissive licensing[END_REF] [12] [START_REF] Aslett | On the continuing decline of the GPL[END_REF], Aslett describes a recent trend in open source licensing that shows that the ratio of permissively-vs. restrictively-licensed projects is slowly shifting in favor of permissive licensing. Source for the data is both the Ohloh.net [START_REF] Ohloh | the open source network[END_REF] database and FLOSSmole [START_REF]FLOSSmole: Collaborative collection and analysis of free/libre/open source project data[END_REF]. The time-frame of that analysis is from 2008 to 2011. We are not verifying these findings as the author looks at trends from 2008 onwards. This paper looks at the developments from 1995 to the middle of 2007 filling the gap left by Aslett.
Deshpande and Riehle [START_REF] Deshpande | The Total Growth of Open Source[END_REF] use 5122 active and popular open source projects from the Ohloh database as a sample and find that open source in both added SLoC per month and new projects per month shows in total exponential growth.
Data Source and Research Method
The sample source of this paper is a snapshot of the Ohloh.net [START_REF] Ohloh | the open source network[END_REF] database dated March 2008. The Ohloh database has been collecting data of open source projects since 2005 from publicly visible revision control repositories. Since those repositories provide a history, the available data dates back as early as 1983 [START_REF] Luckey | The world's oldest source code repositories[END_REF]. Yet data before 1995 was omitted as it was too sparse to be useful. Data after June 2007 was also omitted as it was not fully collected yet. According to Koch [START_REF] Koch | Evolution of open source software systems -A large-scale investigation[END_REF], revision control systems (RCS) are a very good source to study open source projects.
Our analysis is data driven: we are discovering existing characteristics in our data rather than starting off with a hypothesis and attempting to invalidate or validate it. We analyze how the total growth of open source projects can be correlated to the chosen type of license and provide closed-form models. We provide details of our final findings and list the models we tried to fit.
Metrics Employed
To measure growth of the size of projects, we use the metric Source Lines of Code (SLoC) added per month. A SLoC is a line in a commit (code contribution) that is neither empty nor a comment. According to Herraiz et al. [START_REF] Herraiz | Towards a theoretical model for software growth[END_REF], SLoC is a good metric to measure project growth. To show this they compared SLoC to various other common metrics of size (number of functions etc.) and complexity (McCabe's cyclomatic complexity, Halstead's length, volume, level etc.) of software projects and found a high correlation between them.
SLoC are calculated using the Unix diff command between two consecutive versions and then removing blanks and comments.
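As an illustration of this computation, the following simplified sketch (our own, not the tooling used in the study) counts added SLoC in a unified diff; it only filters whole-line comments, whereas a real implementation needs a language-aware comment stripper that also handles block comments and strings.

import java.util.List;

// Simplified sketch: count added source lines of code in a unified diff.
static long addedSloc(List<String> unifiedDiffLines) {
    long sloc = 0;
    for (String line : unifiedDiffLines) {
        if (!line.startsWith("+") || line.startsWith("+++")) {
            continue; // keep only added lines, skip the "+++ file" header
        }
        String code = line.substring(1).trim();
        if (code.isEmpty()) {
            continue; // blank line
        }
        if (code.startsWith("//") || code.startsWith("/*") || code.startsWith("*")) {
            continue; // whole-line comment (approximation)
        }
        sloc++;
    }
    return sloc;
}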
Growth
To determine the total growth of one license-type, all SLoC of all projects in a license-bin are added up in month-windows after removing the initial month. Removing the initial month is done to reflect the fact that the size at 'birth' of a project is not of interest when measuring growth. Thus the problems of forks and projects that started privately are also addressed 3 .
We chose added SLoC per month because it represents all developers as opposed to choosing the number of projects started per month which would only represent those who started a project. Thus our approach is representative of the behavior of the entire developer population.
Distinction of License-Types
The model for permissive and restrictive licenses in this paper is based on the model proposed by Lerner and Tirole [START_REF] Lerner | The scope of open source licensing[END_REF]. It was expanded by additional licenses that occur in the data set. All licenses are required to be approved by the OSI. Our sample contains 1861 projects in the category 'permissive' and 3257 projects in the category 'restrictive'. Projects offering both restrictive and permissive licenses are counted in both sets. Projects under 'mildly restrictive' or 'weak copyleft' licenses like the LGPL have been omitted for the sake of simplicity. Table 1 lists the number of occurrences in the sample.
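To make the binning rule concrete, the following sketch (class name and structure are ours; the license sets correspond to Table 1) shows how a project offering licenses from both groups ends up in both bins:

import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of binning projects by license type.
class LicenseBinner {
    static final Set<String> PERMISSIVE = new HashSet<>(Arrays.asList(
        "BSD", "MIT", "Apache", "zlib/libpng", "Public Domain",
        "Artistic License", "Python license", "Zope", "Vovida"));
    static final Set<String> RESTRICTIVE = new HashSet<>(Arrays.asList("GPL", "CC-BY-SA"));

    // Returns the bins a project belongs to; a project with licenses
    // from both groups is counted in both bins.
    static Set<String> bins(Collection<String> projectLicenses) {
        Set<String> result = new HashSet<>();
        for (String license : projectLicenses) {
            if (PERMISSIVE.contains(license)) result.add("permissive");
            if (RESTRICTIVE.contains(license)) result.add("restrictive");
        }
        return result;
    }
}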
The total number of projects included in the analysis is 5118 which is too large to list the individual projects in here. At its time, it constituted about 30% of all active open source projects.
3 Note that this does not account for the case when a project becomes open source but the history of the revision control system is preserved or when a fork imports the history, too.
Research Results
Fig. 2 shows the total added SLoC per month for the permissive and restrictive set. For the remainder of this paper, the data for the permissive set in each figure is on the left side and the restrictive set on the right. The blue curve is a smooth nonparametric fit obtained with the Loess method [START_REF] Fahrmeir | Regression -Modelle, Methoden und Anwendungen[END_REF]. The curve shape is not influenced by a-priori considerations, it is solely data driven, and can be used as a visual aid in the comparison of descriptive models introduced below. The gray shaded area around the Loess curve represents the 95% confidence interval. From the functions that returned a fit, we used Pearson's r² and visual inspection of the graphs to determine the best fit. For both sets the exponential model returned the highest Pearson's r² (0.960 for the permissive and 0.937 for the restrictive set) and best visual compliance. Equation [START_REF] Lerner | Some simple economics of open source[END_REF] shows the formula for the exponential model.
y ∼ y0 ⋅ exp(a ⋅ x)    (1)
As a remedy for the heteroscedasticity that can be seen in Fig. 2 we log-transformed the response. The graphs with Loess curve in blue are shown in Fig. 3. We found by visual inspection that for both sets there are two distinct periods of growth with a changing-point around 2000 to 2002. We estimated the points by conducting a segmented linear regression. The results are summarized in Table 2. The ordinary least-squares (OLS) estimator used for the linear regression is sensitive to autocorrelation in the data. We computed the Durbin-Watson statistic 5 for both segmented linear models, which returned significant autocorrelation at lag 1 for the permissive set and marginal autocorrelation at lag 1 for the restrictive set, as listed in Table 3. To take the autocorrelation into account, for both models the two segments were re-fitted using the generalized least-squares (GLS) estimator, which works as a maximum-likelihood estimator even under the presence of correlation. The resulting fits are shown in Fig. 4.
Fig. 4. Segmented linear models on log-scale of total added SLoC using GLS with blue Loess curve
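For reference, the segmented model fitted on the log scale has the form (in our notation, consistent with footnote 6)

log y = β0 + β1 ⋅ x + β2 ⋅ (x − ψ)_t + ε,

where ψ is the break-point and (x − ψ)_t is 0 before it, so the slope is β1 before the break and β1 + β2 after it. The Durbin-Watson statistic on the residuals e_t is

d = Σ_{t=2..n} (e_t − e_{t−1})² / Σ_{t=1..n} e_t²,

which lies near 2 when there is no lag-1 autocorrelation.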
The residuals are shown in Fig. 5 and the quantile-quantile (QQ)-Plots [START_REF] Fahrmeir | Regression -Modelle, Methoden und Anwendungen[END_REF] in Fig. 6. Table 4 lists the slopes for both periods with 75% and 95% confidence intervals. During the first period, the restrictive set grows faster with a confidence of 95%, while the trend reverses in the second period, where the permissive set is growing faster with a confidence of 75%. Note that after the changing-point, both sets grow slower. But the restrictive set shows a stronger slowdown than the permissive one. An overview of the confidence levels regarding the differences-in-slopes is shown in Table 5: Beyond the changing-point, the different growth speeds cannot be distinguished with 95% confidence, yet the results indicate that the initial trend was reversed and the permissive set has been growing faster since then. Fig. 7 shows the models transformed to the original non-logarithmic scale. The restrictive model visually deviates from the Loess curve towards the end, an effect that is intensified by the high slope in that area. In the future the curves would intersect again. The back-transformed models listed in Table 6 are:
Permissive: y = 0.00694 ⋅ e^(0.00160⋅x) ⋅ e^(−0.000277⋅(x−ψ)_t) ⋅ e^ε
Restrictive: y = 0.00017 ⋅ e^(0.00205⋅x) ⋅ e^(−0.000922⋅(x−ψ)_t) ⋅ e^ε
The back-transformed models include the error term, because the error roughly has a mean of zero for the linear models on the log-transformed response, which is no longer the case when the models get transformed back to normal scale. An estimate of the bias was conducted using the "smearing estimate of bias" method for residuals that are not normally distributed [START_REF] Newman | Regression analysis of log-transformed data: Statistical bias and its correction[END_REF]. The bias needs to be taken into account when the models are used for prediction and is 1.049 (4.9%) for the permissive set and 1.035 (3.5%) for the restrictive one. We emphasize that this correction does, naturally, not eliminate the other complications associated with predictions from non-mechanistic models.
6 (x − ψ)_t defines a function where ψ is the break-point and (x − ψ)_t is 0 for x < ψ.
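In our notation, the smearing correction mentioned above multiplies the back-transformed prediction by the mean of the exponentiated residuals of the log-scale fit,

ŷ = exp(β̂0 + β̂1 ⋅ x + β̂2 ⋅ (x − ψ)_t) ⋅ (1/n) ⋅ Σ_{i=1..n} exp(ê_i),

which is how correction factors such as 1.049 and 1.035 arise.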
Limitations
The quantitative analysis has shortcomings in regard to the database used:
• Sample size: The sample consisted of 1861 projects in the category 'permissive' and 3257 projects in the category 'restrictive'. The real number of active projects in both categories was much larger during the analyzed time-frame. Deshpande and Riehle [START_REF] Deshpande | The Total Growth of Open Source[END_REF] have estimated that the database holds roughly 30% of the active open source projects of the analyzed timeframe, a proportion we consider relevant to examine overall trends.
• Aggregation: Also, we have only been looking at aggregate growth of open source projects, not at the growth of individual projects. We believe this to be justified, given the research question of this work. While some projects are large, the overall project size distribution has a long tail, making it impossible for any single project to have a substantial effect on the overall growth.
• Reproducibility: All data is publicly accessible and can be derived from the projects. The easiest way is to use the original data service, Ohloh.net, which has recently opened API access to its database for the general public.
Conclusions
This paper presents an empirical study of open source project growth using a large data set (about 30% of all active open source projects at its time). It repeats the prior finding that open source software code is growing at an exponential rate. It adds to that original finding a higher precision of the closed-form mathematical models of that growth. In addition, the paper looks at a project's open source license choice and provides a growth analysis binned by two dominant license types: permissive and restrictive (reciprocal) licenses. The paper provides analytically closed models for both license types and finds that both models are exponential as well. Surprisingly, both the permissively licensed and the restrictively licensed project data sets are best modeled by two separate exponential models with a changing point at around 2001 for both types of projects. Even more surprising, we find that restrictively licensed projects were growing faster than permissively licensed projects around until that changing point in 2001, and permissively licensed projects have been growing faster since then.
We attribute this finding to the growth of commercially sponsored open source communities, for example, the Linux, Apache, or the Eclipse Foundation [START_REF] Riehle | The Economic Case for Open Source Foundations[END_REF]. Corbet found that most of the new code written for the Linux Kernel 2.6.20 was paid for by companies [START_REF] Corbet | Who wrote 2.6.20?[END_REF]. Similarly, in other yet unpublished work we have found an increasing and broad investment of company resources into community-owned open source. Such investments into a common good only make economic sense if companies can reap benefits through complementary products that build on the common good. A restrictive license would restrict the creation of a competitively differentiated complementary product, so we believe that most companies will prefer a permissive license for the common good. The combined effect of increased commercial investment with the need for competitively differentiated products built on top of that shared investment has led to an increase of permissively licensed projects, evidently to such an extent that the number and size of permissively licensed projects have overtaken those of restrictively licensed projects. From this argument, we can only expect this trend to accelerate.
Fig. 1. Time-frame of the analysis.
1 Both licenses are available in multiple versions now, like the 2-clause, 3-clause and 4-clause BSD license or the X11 license.
2 Yet there are still restrictions, like in the 'New BSD License', which does not permit advertising of derived products with the name of the licensor.
Fig. 2. Total SLoC added per month with blue Loess curve
Fig. 3. Total SLoC added per month on log-scale
Fig. 5. Fitted values of segmented linear models using GLS on logarithmic data against residuals with Loess-curve in blue
Fig. 7. Segmented linear models on normal scale
Table 1. Licenses by Type. Multiple versions of a license are counted as one. For example GPL v1, v2 and v3 are listed as GPL only. Some projects have multiple licenses.
Permissive (License Name: Observations): BSD: 730, MIT: 378, Apache: 479, zlib/libpng: 26, Public Domain 4: 34, Artistic License: 210, Python license: 17, Zope: 8, Vovida: 1
Restrictive (License Name: Observations): GPL: 3248, CC-BY-SA: 24
Table 2. Estimated changing-points
License-type | Estimated changing-point | 95% Confidence (2.5% / 97.5%)
Permissive | 2001-12 | 2000-06 / 2003-05
Restrictive | 2000-02 | 1999-08 / 2000-08
Table 3. Autocorrelation and Durbin-Watson statistic for the segmented linear models up to lag 3
5 The Durbin-Watson statistic is approximately 2 for no autocorrelation; values toward 0 or 4 indicate positive or negative autocorrelation [20].
Table 4. Comparison of the slope of the segmented linear models using GLS on log-transformed response for the restrictive and permissive set including confidence intervals.
Type | Period | Slope | 75% CI (12.5% / 87.5%) | 95% CI (2.5% / 97.5%)
Perm. | 1 | 0.00160 | 0.00153 / 0.00167 | 0.00148 / 0.00172
Perm. | 2 | 0.00133 | 0.00123 / 0.00143 | 0.00115 / 0.00150
Restr. | 1 | 0.00205 | 0.00198 / 0.00210 | 0.00194 / 0.00215
Restr. | 2 | 0.00112 | 0.00103 / 0.00122 | 0.00097 / 0.00128
Table 5. Confidence levels
Period Total Growth Confidence
1995-2001 Restrictive > Permissive > 95%
2001-2007 Restrictive < Permissive 75%
Table 6 lists the model formulas.
Table 6. Models on normal scale
Type | Model 6
• Data incompleteness: The collection process began in 2005, a date where some open source projects had already discarded the earlier history, for example when moving to another RCS. However, this does not affect the results regarding the differences in growth between the permissive and the restrictive set, since the selection bias does not differentiate between licenses.
• Project source: The snapshot of the Ohloh.net database had only connected to CVS, Subversion and Git source code repositories. Since almost all open source projects were maintained in one of these repositories during the analyzed time-frame, this is only a minor limitation.
• Copy and paste: The database does not account for copy and paste. Copy and paste introduces a bias towards restrictively-licensed projects because a restrictive project can incorporate code from a permissive one but not vice versa. To analyze the influence of this bias is a suggestion for further research.
Public Domain is considered a permissive 'license' in this paper. | 21,073 | [
"982508",
"1001487"
] | [
"311714",
"311714",
"311714",
"300755"
] |
01467577 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467577/file/978-3-642-38928-3_20_Chapter.pdf | Andrea Janes
email: [email protected]
Danila Piatov
email: [email protected]
Alberto Sillitti
email: [email protected]
Giancarlo Succi
email: [email protected]
How to calculate software metrics for multiple languages using Open Source parsers
Source code metrics help to evaluate the quality of the code, for example, to detect the most complex parts of the program. When writing a system which calculates metrics, especially when it has to support multiple source code languages, the biggest problem which arises is the creation of parsers for each supported language. In this paper we suggest an unusual Open Source solution, that avoids creating such parsers from scratch. We suggest and explain how to use parsers contained in the Eclipse IDE as parsers that support contemporary language features, are actively maintained, can recover from errors, and provide not just the abstract syntax tree, but the whole type information of the source program. The findings described in this paper provide to practitioners a way to use Open Source parsers without the need to deal with parser generators, or to write a parser from scratch.
Introduction
Measurement in software production is essential for understanding, controlling, and improving the result of the development process. Since software is highly complex, practitioners need tools to measure and analyze the quality of the source code (see e.g., [START_REF] Sillitti | Measures for mobile users: an architecture[END_REF][START_REF] Jermakovics | Lagrein: Visualizing user requirements and development effort[END_REF][START_REF] Jermakovics | Visualizing software evolution with lagrein[END_REF][START_REF] Di Bella | A multivariate classification of open source developers[END_REF][START_REF] Janes | The dark side of agile software development[END_REF]).
To calculate various source code metrics for different programming languages we developed an extensible metrics calculation system based on two component types:
- parsers: for every supported language, a parser produces an intermediate representation of the source code, used for further analysis;
- analyzers: analyzers take the intermediate representation of the source code and calculate the desired source code metric independently from the originating language.
The main problem we encountered was writing the parsers themselves. This paper describes our findings in the creation of parsers based on Open Source components. Moreover, it gives a brief overview of the architecture of the metrics calculation system we developed.
Parsing C/C++ is difficult [START_REF] Werther | A modest proposal: C++ resyntaxed[END_REF][START_REF] Birkett | Parsing C++[END_REF][START_REF] Piatov | Using the Eclipse C/C++ development tooling as a robust, fully functional[END_REF] and therefore robust (can recover from errors), fully functional, actively maintained, and Open Source C/C++ parsers are hard to find. To cite a Senior Engineer at Amazon.com: "After lots of investigation, I decided that writing a parser/analysis-tool for C++ is sufficiently difficult that it's beyond what I want to do as a hobby." [START_REF] Birkett | Parsing C++[END_REF].
Due to the inherited syntactic issues from C, for example that "declaration syntax mimics expression syntax [START_REF] Ritchie | The development of the C language[END_REF]" the C++ grammar is context-dependent and ambiguous. This makes the creation of a C++ parser a complex hence difficult task (not to mention the complex template mechanism present in C++). An example for the statement above is that a construct like a*b in C/C++ can either mean a multiplication of a and b or, if a is a type, a declaration of the variable b with type a*, i.e., a pointer to a.
Strictly speaking, a "parser" performs a syntactic analysis of the source code. However, to correctly interpret the parsing results, we needed more than just a syntactic analysis, namely:
1. Particularly for C/C++, preprocessing to expand includes and macros. 2. Semantic analysis, i.e., obtaining type information and bindings is also part of the parsing process. To completely parse C/C++ code, the parser has to resolve the types of all symbols. Resolving bindings means to understand which declaration (e.g., a function, type, namespace) some reference is pointing to.
Moreover, in the particular case which we describe in this article, we had the following additional requirements:
3. The parser has to be able to ignore syntax errors and to continue parsing the remaining source code correctly. 4. C/C++ is continuously developing, and new language features are introduced from time to time, therefore a parser needs to be well-maintained.
Open source compilers like the GNU C compiler (GCC) or LCC are available that fulfill the criteria described above, but-to the best of our knowledge-they do not provide an API to obtain the (intermediate) parsing results. Moreover, LCC supports only C, and not C++.
The availability of a functioning C/C++ parser is important for developers of the Open Source community to evaluate and improve their software [START_REF] Russo | Agile Technologies in Open Source Development[END_REF]. This applies particularly to larger projects which need a quantitative approach to quality. Unfortunately such a parser is so hard to find. To overcome this issue, we evaluated writing a C/C++ parser from scratch (using a parser generator) or to identify or adapt an Open Source alternative. The next section describes how we decided.
Related works
We found the following 7 Open Source parsers for C++: clang [START_REF]clang: a C language family frontend for LLVM[END_REF], cppripper [START_REF] Diggins | cpp-ripper, An open-source C++ parser written in C#[END_REF], Elsa [START_REF] Mcpeak | The Elkhound-based C/C++ parser[END_REF], GCC [START_REF]GCC, the GNU Compiler Collection[END_REF] using the -fdump-translation-unit option, GCC XML [START_REF] King | GCC-XML, the XML output extension to GCC[END_REF], the Eclipse CDT C/C++ parser [START_REF]Eclipse CDT (C/C++ Development Tooling[END_REF], and Doxyparse used within the Analizo metric system [START_REF] Terceiro | Analizo: an extensible multi-language source code analysis and visualization toolkit[END_REF], a parser based on Doxygen [18].
The time it takes for existing Open Source communities to develop such parsers as well as comments on various blogs (e.g., [START_REF] Stackoverflow | How much time would it take to write a C++ compiler using flex/yacc[END_REF]) induced us to think that to write a parser from scratch would take us months, if not years:
- it took clang 8 years from the 1st commit in July 2001 [START_REF]First revision of clang[END_REF] to the release 1.0 in October 2009 [START_REF] Lattner | Llvm 2.6 release![END_REF];
- GCC XML has a first commit more than 12 years ago (August 2000) [START_REF]GCC-XML, the XML output extension to GCC repository[END_REF] and its authors still do not consider it production-ready, the current release being version 0.9;
- the first release of the Eclipse CDT C/C++ parser was more than 10 years ago (August 2003 [START_REF]Eclipse CDT (C/C++ Development Tooling) repository[END_REF]) and it is still constantly updated;
- several projects are abandoned, such as cpp-ripper (first and last commit during September 2009 [24]) or Elsa (last release August 2005, after 3 years of development [START_REF]Elkhound: A glr parser generator and elsa: An elkhound-based c++ parser[END_REF]).
Moreover, writing such a parser from scratch, we would need to keep it up-to-date with the new language features introduced for C++, which would require a continuous effort.
Based on such information, we decided not to write the parser from scratch but to select an Open Source C/C++ parser that can be instrumented to correspond to our requirements. Moreover, we added the following requirement for such a parser:
To avoid learning new parsing technologies for different languages, we prefer Open Source parser solutions that are able to parse several languages, not only C++.
Results and implementation of a proof of concept
We evaluated the candidates using the 5 requirements stated above. Three parsers fulfilled the requirements 1-4: clang, the Eclipse CDT parser, and Doxyparse. Because of requirement 4, we excluded parsers that are not maintained anymore:
-GCC XML does not parse function bodies, a feature we need to calculate software metrics for functions [START_REF]Frequently Asked Questions[END_REF]. -GCC, using the option -fdump-translation-unit, is able to "dump a representation of the tree structure for the entire translation unit to a file" [START_REF]GCC, the GNU Compiler Collection[END_REF], but this option is only designed for use in debugging the compiler and is not designed for use by end-users [START_REF] Mitchell | GCC Bugzilla Bug 18279[END_REF].
Doxygen, on which Doxyparse is based, does not parse the source code completely. To calculate metrics, the authors of Doxyparse had to "hack" the code of Doxygen and, for instance, modify the Lex [START_REF]Lex -A Lexical Analyzer Generator[END_REF] source file for C to support the calculation of McCabe's cyclomatic complexity on the lexical analysis stage. Since the information provided by Doxyparse is not complete, we ruled it out.
Both clang and Eclipse CDT fulfill our requirements 1-4, but only the parsers provided by Eclipse fulfill requirement 5, i.e., it is an environment not only for C/C++, but also for Java, JavaScript, and other languages.
We successfully developed a proof of concept that instruments the Eclipse C/C++ parser. The Eclipse C/C++ parser is installed together with the Eclipse CDT development environment [START_REF]Eclipse CDT (C/C++ Development Tooling[END_REF]. However, as we found out, it is possible to use it as a Java library, without initializing the whole Eclipse platform. Since the instrumentation of the Eclipse CDT parser is undocumented, we briefly describe here how we accomplished this.
To use the parser in our application, we added the CDT core library org.eclipse.cdt.core *.jar, as well as the other libraries in the CDT installation folder on which the plugin depends, to the classpath of our application.
As shown in listing 1, we pass to the parser a list of preprocessor definitions and a list of include search paths (lines 3 and 4). By extending InternalFileContentProvider and using this class instead of the empty files provider we are able to instruct the parser to load and parse all included files (line 6).
InternalFileContentProvider allows the use of the interface IIncludeFileResolutionHeuristics, which can be implemented to heuristically find include files for those cases in which include search paths are misconfigured. To use the C parser instead of C++, the GCCLanguage class should be used instead of the GPPLanguage class (line 10). Listing 2 shows a C++ parsing example. It outputs "C C f f ", i.e., each encountered name in the abstract syntax tree of the parsed code.
The parser returns an abstract syntax tree (AST) as the result of parsing the code. The AST is the representation of the structure of the program as a tree of nodes. Each node corresponds to a syntactic construct of the code, e.g. a function definition, an if-statement, or a variable reference. This AST is used to calculate metrics, e.g., one could count all conditional statements to estimate McCabe's cyclomatic complexity.
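As an illustration (our own sketch, not part of the system described in this paper), such a count can be implemented with another ASTVisitor. The CDT types below are real; the class name BranchCounter and the field decisionPoints are ours, and the count ignores conditional expressions and short-circuit operators, so it underestimates the full McCabe value.

import org.eclipse.cdt.core.dom.ast.ASTVisitor;
import org.eclipse.cdt.core.dom.ast.IASTCaseStatement;
import org.eclipse.cdt.core.dom.ast.IASTDoStatement;
import org.eclipse.cdt.core.dom.ast.IASTForStatement;
import org.eclipse.cdt.core.dom.ast.IASTIfStatement;
import org.eclipse.cdt.core.dom.ast.IASTStatement;
import org.eclipse.cdt.core.dom.ast.IASTWhileStatement;

// Illustrative sketch: count branching statements in a translation unit.
class BranchCounter extends ASTVisitor {
    int decisionPoints = 0;

    BranchCounter() {
        shouldVisitStatements = true; // public flag field of ASTVisitor
    }

    @Override
    public int visit(IASTStatement statement) {
        if (statement instanceof IASTIfStatement
                || statement instanceof IASTForStatement
                || statement instanceof IASTWhileStatement
                || statement instanceof IASTDoStatement
                || statement instanceof IASTCaseStatement) {
            decisionPoints++;
        }
        return PROCESS_CONTINUE;
    }
}

// Usage: BranchCounter counter = new BranchCounter();
//        translationUnit.accept(counter);
//        // a rough per-unit complexity estimate is counter.decisionPoints + 1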
We briefly evaluated the functioning of the Eclipse C/C++ parser by parsing the Linux kernel version 2.6.27.62. It worked without problems out of the box. The parser counted 19,744 files, 8.4M lines of code, and 2.6M logical lines of code. It took 11 minutes to parse and calculate metrics on 2 CPUs and 3 minutes on 12 CPUs. We successfully applied the same method to instrument the parsers for Java and JavaScript that are part of the Eclipse JDT1 and JSDT2 projects, but due to space constraints, we can only provide the UML component diagram of our implementation in figure 1. In this implementation, parser wrappers execute the parsers and convert the output to the intermediate representation used in the system. The analyzer then uses this information to calculate the metrics independently from the source language. The C# parser (NRefactory [START_REF]The NRefactory library[END_REF]) depicted also in figure 1 is written in C# and has to be executed on a Windows machine. This is why we execute it through a proxy on a remote machine.
The component "metrics calculator" wraps all parser wrappers and the analyzer into one component that is able to parse and extract metrics of a variety of languages. It requires the presence of two additional components: a database to store the extracted code structure and metrics, as well as a component to interface the source repository.
Conclusion and future works
This article deals with an (apparently) simple problem: to "create or find a working C/C++ parser". We defined four requirements for such a parser which are relevant for us and we think also for the research community: we ask for a parser that supports contemporary language features, is actively maintained, can recover from errors, and provides not just the abstract syntax tree, but the whole type information of the source program.
Due to the complexity of writing a C/C++ parser ourselves, we decided to evaluate existing Open Source parsers adding a fifth requirement: that we look for parser solutions that support multiple languages.
The conclusions of our research are that Eclipse fulfills all five requirements: it contains a C/C++ parser, as well as parsers for other languages such as Java and JavaScript. Unfortunately, there is no official documentation about using the Eclipse parsers as a library outside of Eclipse. In the last section we provide an example of how to achieve this and give a bird's-eye view of the architecture of our measurement system that is based on the parsers contained in Eclipse.
In the future we intend to adopt a systematic approach to evaluate the maturity of parsers, e.g., using the Open Maturity Model [START_REF] Petrinja | Introducing the opensource maturity model[END_REF] or automatic measurement techniques [START_REF] Sillitti | Monitoring the development process with eclipse[END_REF][START_REF] Scotto | A non-invasive approach to product metrics collection[END_REF].
    1   private static IASTTranslationUnit parse(char[] code) throws Exception {
    2       FileContent fc = FileContent.create("/Path/ToResolveIncludePaths.cpp", code);
    3       Map<String, String> macroDefinitions = new HashMap<String, String>();
    4       String[] includeSearchPaths = new String[0];
    5       IScannerInfo si = new ScannerInfo(macroDefinitions, includeSearchPaths);
    6       IncludeFileContentProvider ifcp = IncludeFileContentProvider.getEmptyFilesProvider();
    7       IIndex idx = null;
    8       int options = ILanguage.OPTION_IS_SOURCE_UNIT;
    9       IParserLogService log = new DefaultLogService();
    10      return GPPLanguage.getDefault().getASTTranslationUnit(fc, si, ifcp, idx, options, log);
    11  }
Listing 1. Calling the CDT parser
    public static void main(String[] args) throws Exception {
        String code = "class C { private : C f(); }; int f();";
        IASTTranslationUnit translationUnit = parse(code.toCharArray());
        ASTVisitor visitor = new ASTVisitor() {
            @Override
            public int visit(IASTName name) {
                System.out.print(name.toString() + " ");
                return PROCESS_CONTINUE;            // reconstructed: return value garbled in the source
            }
        };
        // reconstructed (garbled in the source): enable name visits and walk the AST
        visitor.shouldVisitNames = true;
        translationUnit.accept(visitor);
    }
Listing 2. Parsing "class C { private : C f(); }; int f();"
Fig. 1. UML component diagram of the whole metrics calculation architecture.
http://www.eclipse.org/jdt
http://www.eclipse.org/webtools/jsdt | 15,638 | [
"1001491",
"1001492",
"1001484",
"989715"
] | [
"463159",
"463159",
"463159",
"463159"
] |
01467578 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467578/file/978-3-642-38928-3_21_Chapter.pdf | Adina Barham
email: [email protected]
The Emergence of Quality Assurance Practices in Free/Libre Open Source Software: A Case Study
Keywords: quality assurance, test, social network analysis, information flow
As the user base of Free/Libre Open Source Software (FLOSS) diversifies, the need for higher quality is becoming more evident. This implies a more complex development model that includes various steps which were previously associated exclusively with proprietary development such as a formal quality assurance step (QA). However, little research has been done on how implementing formal quality assurance impacts the structure of FLOSS communities. This study aims to start filling this gap by analyzing interactions within such a community. Plone is just one among many FLOSS projects that acknowledged the importance of verification by implementing a quality assurance step.
Introduction
A previous preliminary study [START_REF] Barham | The emergence of quality assurance in open source software development[END_REF] established that almost one third of the top 50 FLOSS software products ranked by number of downloads on www.ohloh.net had implemented explicit QA procedures. Furthermore, more than a quarter of the top 100 products ranked by number of users have some kind of QA. Verification, and more specifically quality procedures, under the FLOSS development model have attracted a lot of interest within the academic community [START_REF] Halloran | High quality and open source software practices[END_REF][START_REF] Hedberg | Assuring Quality and Usability in Open Source Software Development[END_REF][START_REF] Michlmayr | Quality practices and problems in free software projects[END_REF][START_REF] Schmidt | Leveraging open-source communities to improve the quality & performance of open-source software[END_REF][START_REF] Chengalur-Smith | Sustainability of Free/Libre Open Source Projects: A Longitudinal Study[END_REF][START_REF] Spinellis | Evaluating the Quality of Open Source Software[END_REF][START_REF] Aberdour | Achieving Quality in Open Source Software[END_REF][START_REF] Zhao | Quality assurance under the open source development model[END_REF]. The structure of communities behind FLOSS has also been extensively researched. Studies focus on many community aspects such as structure and dynamics [START_REF] Crowston | The social structure of Free and Open Source software[END_REF], communication patterns between core and periphery [START_REF] Oezbek | The Onion has Cancer: Some Social Network Analysis Visualizations of Open Source Project Communication[END_REF][START_REF] Masmoudi | Peeling the Onion': The Words and Actions that Distinguish Core from Periphery in Bug Reports and How Core and Periphery Interact Together[END_REF], or migration within the hierarchy of FLOSS projects [START_REF] Jensen | Role Migration and Advancement Processes in OSSD Projects: A Comparative Case Study[END_REF]. However, little research has been done on how implementing formal QA affects the community. This research aims to start filling that gap by improving our understanding of how QA fits into the organizational structure of FLOSS communities.
A single open source project was chosen as a pilot case study in order to develop research questions that can then be applied in a wider comparative study of QA in open source projects. The Python-based content management system Plone was selected because it is a mature project (began in 1999) and because its development process includes a QA step [14]. The QA team has a dedicated webpage where one can find basic information such as activity description, communication channels and team leaders [15]. QA activities include triaging new bugs, validating submitted patches, ensuring that new releases are usable and generally help in the release process.
Research Questions
Q1: How is the QA layer included in the Plone community structure? We aim to find out how much contributors work only on QA and how much they work on other aspects of the project. Also, we ask how much peripheral members perform QA tasks. Previous research has approached the latter issue for Firefox and it has been shown that the percentage of periphery contributions is 20-25% [START_REF] Masmoudi | Peeling the Onion': The Words and Actions that Distinguish Core from Periphery in Bug Reports and How Core and Periphery Interact Together[END_REF].
Q2: What are the characteristics of activities performed by members of the Plone QA team? It is logical to draw the conclusion that some members will be more active than others but it would be interesting to investigate if members are equally active on all communication channels or if their tasks are limited to certain areas of the project.
Q3: How does the QA team communicate with other teams? Previous research has shown that participants who have better access to information are able to contribute more efficiently [START_REF] Aral | Productivity Effects of Information Diffusion in E-mail Networks[END_REF]; therefore, interrupting the information flow might negatively affect the project's evolution. For this reason it is important to determine whether there are any members who control the information flow. Because social networks change continuously [START_REF] Watts | Twenty-first century science[END_REF], it might also be useful to establish the stages that the community went through before reaching its current state.
Data and Research Method
In order to measure QA activity levels, issue tracker data as well as mailing list data were taken into account. Data was retrieved in December 2012 -January 2013 and stored locally. The issue tracker data contained 13026 bugs with 55883 associated comments, and was downloaded using a web crawler. 29525 e-mails were downloaded from all the Plone mailing list archives that were parsable using MailingListStats [18]. In addition, a list of Plone contributors containing names and nicknames used in code repositories was downloaded from Ohloh.net [19].
Data from the QA mailing list which started in 2011 included 41 members of whom approximately 70% had sent only one e-mail at the time of data collection. Of the remaining 12 members only 1 had sent more than 10 e-mails, 4 were not active on other mailing lists and 7 were not listed as code contributors. The 4 members who were not active on other mailing lists were also not listed as code contributors. However, their activity on the QA mailing list was low: none sent more than 5 e-mails. Therefore QA does not constitute a separate layer in the Plone community.
An interesting fact that can be observed from Fig. 1 is that there is no correlation between time progression and activity levels. In addition, there seems to be no correlation between the number of bugs posted and the number of associated comments, which raises the question of what lies behind these spikes in activity levels. To create the network graph, only authors who had replied to someone were taken into consideration. The next step consisted of eliminating loops, i.e., arcs starting from and pointing to the same vertex. An additional reduction was performed in order to remove vertices that had no connections with other vertices. The resulting network contains 3414 vertices connected by a total of 16042 arcs, of which 5093 have a value greater than 1. This means that 10949 connections (68%) are created by only one interaction. These members are occasional or peripheral contributors. Of the remaining arcs, 31% have values between 2 and 79, which means that arcs with values between 1 and 79 account for almost 99% of all arcs. The average degree is 9.39, which means that, on average, a person interacts with approximately 9 other people. The network was then processed by transforming arcs into edges by summing up their values. Members who interacted with only one other member represented 35% of the whole community. 86% of community members interacted with a lower than average number of members (i.e. < 9).
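As a rough illustration of these reduction steps (not the actual analysis code, and with made-up author names), the following sketch builds an undirected interaction network from reply pairs, drops self-loops, and computes the average degree:

    import java.util.*;

    public class InteractionNetwork {
        public static void main(String[] args) {
            // hypothetical (replier, replied-to) pairs extracted from the tracker and mailing lists
            String[][] replies = { {"alice", "bob"}, {"bob", "alice"}, {"carol", "carol"}, {"dave", "bob"} };
            Map<String, Set<String>> neighbours = new HashMap<>();
            for (String[] r : replies) {
                if (r[0].equals(r[1])) continue;                       // drop loops (self-replies)
                neighbours.computeIfAbsent(r[0], k -> new HashSet<>()).add(r[1]);
                neighbours.computeIfAbsent(r[1], k -> new HashSet<>()).add(r[0]); // direction ignored, as when arcs are merged into edges
            }
            int degreeSum = neighbours.values().stream().mapToInt(Set::size).sum();
            System.out.printf("vertices=%d, average degree=%.2f%n",
                    neighbours.size(), (double) degreeSum / neighbours.size());
        }
    }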
To assess task distribution among community members the graph was divided into clusters. The cluster that contains QA mailing list members represents 1.06% of the whole community, and contains a sub-cluster of members who do not contribute code accounting for 0.46% of the community.
Social networks are dynamic as they change their structure over time and for this reason it is important to consider time frames [START_REF] Howison | Validity Issues in the Use of Social Network Analysis for the Study of Online Communities[END_REF][START_REF] Christley | Global and Temporal Analysis of Social Positions at SourceForge[END_REF]. Social network analysis methods were applied using 6-month time frames to analyze the states through which the community went after dedicating a communication channel to the QA team. The size of the network varied between 226 and 746 vertices. The variation in size was expected considering the fact that many arcs were created after only one interaction. In addition the highest degree was 7.52 while the lowest was 4.29.
Conclusions
Q1: How is the QA layer included in the Plone community structure? Considering that the most active QA members also contributed code and were active on other mailing lists, QA is not a separate layer in the community. However, non-QA tasks might have been performed in different time frames than the ones in which members were part of the QA team; comparing time frames would allow us to clarify this. Furthermore, approximately 70% of QA mailing list members have sent only one e-mail, which might suggest that these are periphery members performing QA tasks. Only 30% of QA mailing list participants have not been active on other channels. In addition, members active on the QA mailing list account for only 1% of the whole community. Q2: What are the characteristics of activities performed by members of the Plone QA team? An interesting phenomenon that occurs within the Plone community is the increase in activity levels that seems not to be linked to time progression. In addition, it seems that the number of bugs opened does not directly relate to the increase in comment activity levels. This suggests that there are other variables that influence activity levels. Q3: How does the QA team communicate with other teams? The community seems to form a large component that spans both the issue tracker and the mailing lists, with the exception of a few small sub-networks. This means that there is a lower risk of some members controlling the information flow and jeopardizing communication. However, there is a small group of people who are highly engaged in communicating with other members (86% of community members interacted with fewer than 9 other members, 9 being the average number of members a person interacts with). In line with this conclusion, a large percentage of community members (68%) create links defined by only one interaction. This means that the rest of the community has somewhat stronger connections, whereas a small percentage of users (1%) have very strong connections defined by more than 79 interactions. In addition, after analyzing the networks created using time frames, one could reach the conclusion that, due to the drastic decrease in community size, many participants were occasional contributors or, in other words, members of the periphery.
Limitations and Further Research
A number of limitations should be noted. First, only limited data cleaning was carried out. Second, it is possible that community members have used other communication channels than those listed on the relevant Plone websites. Third, it is possible that some members of the QA team did not actively participate in the mailing list. For these reasons, it would be desirable to conduct follow-up interviews with members of the community. These interviews would also shed light on the reasons for peaks in activity levels.
It could also be useful to re-run the community analysis using smaller time frames, as 6 months may be too big a window for a community of this size. In addition, the community's evolution could be analyzed using time frames covering the period before the formal adoption of QA in order to track potential migration from one layer to the other.
A single case study cannot provide a recipe for success that can be applied to all FLOSS projects, but can be used to create hypotheses to be validated in future studies. Based on the findings of this paper the following hypotheses were formulated: H1: The majority of the QA team members perform non-QA tasks as well.
H2: Approximately 80% of QA tasks are performed by a small percentage of the community. H3: Increase in activity levels is not linked to time progression. H4: Members performing QA are not an isolated layer in the community.
Fig. 1. Clusters within the Plone community
"1001493"
] | [
"485038"
] |
01467579 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467579/file/978-3-642-38928-3_2_Chapter.pdf | Keisuke Tanihana
email: [email protected]
Tetsuo Noda
email: [email protected]
Empirical Study of the Relation between Open Source Software Use and Productivity of Japan's Information Service Industries
This paper analyzes the relation between OSS (Open Source Software) use and the performance of the Japanese information service industry. We first estimate the market value of OSS, an issue which only a few studies have specifically addressed. The results are then used to analyze the economic effect of OSS. Although our study has some methodological limitations regarding the calculation of the market value of OSS, we demonstrate that the economic effect of OSS is generally positive.
Introduction
This paper examines the relation between the use of Open Source Software (OSS) and the productivity of the Japanese information service industry from an economic perspective. OSS has recently become an indispensable resource in the information service industry. As [START_REF] Raymond | Cathedral and Bazarr[END_REF] has shown, the spread of open software development is associated with the increasing participation and contribution of information service enterprises. These enterprises use OSS primarily to enhance their competitive advantage. The open source community continually updates the software on a voluntary basis and, although this input is quite independent of enterprise needs, information service enterprises are the beneficiaries of this process. [START_REF] Chesbrough | [END_REF] aptly described this process as "open innovation", denoting not only the enrichment of an enterprise's "inner" resources, but also the increasing importance of using "outer" resources such as OSS, especially when enterprises regard their competitiveness and growth in productivity as indispensable.
We adopt the view of OSS as an "outer" resource to analyze the business model in the Japanese information service industry in terms of its productivity.
Research Design
Tanihana and Noda (2011) used the concept of "open innovation" to study the relation between software development style and the use of OSS by showing the connections between "inner" and "exterior" resources. This section of our paper is devoted to explaining the theoretical background of OSS.
The Concept of "Connection"
Although the cost of duplicating software is almost negligible, the results of OSS development are made freely available. Therefore, unless there is a scarcity of talented developers, OSS may be regarded as a kind of public good that is noncompetitive and non-exclusive. In the view of [START_REF] Ghosh | Cooking-pot Markets: An Economic Model for the Trade in Free Goods and Services on the Internet[END_REF], all enterprises (i=1, 2, n) can use OSS because of these two attributes. This relation is defined by formula [START_REF] Ashauer | Is Public Expenditure Productive?[END_REF]:
OSS 1 =OSS 2 =…=OSS n (1)
Ghosh (1998) highlighted the ease of duplicating software. In our view, the open-style business model that accompanies the information-oriented economy, together with copyright, strengthens his view of OSS. We will examine the concept of "open" from the perspective of changes to an enterprise's business style 1 .
The use of OSS reduces an enterprise's transaction costs, as defined by Coase (1937), and changes its economic structure and business model from economy of scale to economy of "connection," as noted by Miyazawa (1986a[START_REF] Miyazawa | Industrialized Society: Pursuit of the Economy of Connection through synergy between knowledge and technology[END_REF][START_REF] Miyazawa | Economics of System and Information[END_REF]). Economy of "connection" is a concept that does not apply to a single enterprise, but rather to the synergy effect produced by many enterprises sharing their technological expertise. On the other hand, Raymond (1998) used the expression "cathedral and bazaar" to distinguish between software development styles, likening the OSS development style, which is open to any other developer, to a bazaar. In this style, OSS developers are "connected" through the Internet, thereby producing value 2 .
Open Innovation and OSS Utilization
In considering the development of OSS, the connection of "inner" resources with "outer" resources is a vital point. In our view, the development of OSS and its business model is based on the concept of "open innovation" advocated by [START_REF] Chesbrough | [END_REF]. To strengthen their competitive advantage, enterprises generally keep the results of their R&D activities secret, an attitude which [START_REF] Kokuryo | [END_REF] argues may be characterized as a manifestation of independent management. However, under an "open innovation" process, an enterprise's R&D activities are connected to "outer" resources, thereby creating new value. In other words, changes to a business model occur from the reductions in transaction costs, as defined by [START_REF] Coase | The Nature of the Firm[END_REF], with the economy of "connection" serving as a driving force for "open innovation".
The Linux Foundation (2010) has shown that approximately 70% of contributions to OSS are by business enterprises, and that OSS activities are included in profit growth activities. Thus, enterprises use OSS to enhance their competitiveness. [START_REF] Kunai | Let's go with Linux[END_REF] points out that revenue is generated from a contribution to the OSS community, whereas Fukuyasu (2011) considers the leveraging effect of using OSS. In the latter's view, the cost of developing OSS is shared among contributors, and all market players can enjoy the results produced by the contributors. Consequently, the positive economic effect of OSS takes the form of cost reductions and increased profits because of the leveraging effect of R&D activities.
Thus, for business models that use OSS, the vital point is that value, which cannot be created through independent management, is produced through the "connection" between "inner" and "outer" resources. In addition, OSS is a standard technology produced by the community independently of the enterprises contributing to it. Thus, it is a kind of infrastructure characterized by non-competition and nonexclusion.
We shall consider the business model that uses OSS from the perspective of value addition. Figure 1 shows the relation between sales and costs in information service enterprises. We define the difference between the sale of products or services and development or management costs as value added to an information service enterprise. It is well established that a fundamental objective of all enterprises is to maximize their value added activities.
However, as the center of Figure 1 shows, competitive pressure on services and products is a daily and commonplace necessity. Because each enterprise has to deal with this pressure, price competition will be unavoidable. As a result, there is a fall in sales. On the other hand, under competitive pressure, it is necessary for each enterprise to produce more attractive products and services with a greater market appeal than before. This triggers a rise in development costs. That is, under competitive pressure, because its sales fall and its costs rise, each enterprise will face a reduction in its added value.
The context in which OSS is used is shown on the left side of Figure 1. Because OSS is generally made freely available, enterprises are able to reduce development costs by replacing internal resources with external resources, or by "connecting" both. Similarly, it will be possible for each enterprise to generate profits and surpass its rivals in technology and competitive advantage by "connecting" its own products or services with OSS-based technology, which is developed out-side the enterprise. This process appears to facilitate competitive pricing by enterprises. As already mentioned, our hypothesis is that using OSS can potentially enhance an information service enterprise's capacity to create added value. Therefore, we attempt to grasp the structural and quantitative effects of OSS in this paper.
Model for Empirical Analysis
To analyze the economic effects of OSS, we have to consider the effect of "connection" with respect to resources. If the "connection" between "inner" resources and OSS as "outer" resource is considered, an information service enterprise i's production structure can be expressed by formula (2), as follows:
V_{i,t} = A \, K_{i,t}^{\alpha} \, L_{i,t}^{\beta} \, OSS_t^{\gamma} \qquad (2)
Formula (2) is a kind of Cobb-Douglas production function 3 , according to which, given technology A, the value added of an information service enterprise i, V_{i,t}, is produced from its capital input K_{i,t} and labor input L_{i,t}. These resources exist in the "inner" organization. On the other hand, OSS_t constitutes the "outer" resource, which develops independently of each enterprise i. In other words, the relation indicated in formula (2) shows the "connection" between "inner" resources and "outer" resources that lies behind the "open innovation" business model 4 .
If formula (2) is log-transformed and rearranged, it is possible to obtain formula (3), which specifies the determinants of labor productivity in an information service enterprise.
\ln(V_{i,t}/L_{i,t}) = \ln A + \alpha \ln(K_{i,t}/L_{i,t}) + \gamma \ln OSS_t \qquad (3)
In formula (3), the level of labor productivity (V/L) depends on the ratio of capital equipment (K/L) and on OSS input. In this formula, the value of the coefficient γ on the OSS term is important: when γ is positive and statistically significant, OSS has a positive effect on the level of labor productivity in Japan's information service industry. The data used for each variable are described in Section 3.1.
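For readability, a sketch of the step from formula (2) to formula (3), assuming constant returns to scale (\alpha + \beta = 1); this intermediate step is not spelled out above:

\begin{align*}
V_{i,t} &= A\,K_{i,t}^{\alpha}\,L_{i,t}^{1-\alpha}\,OSS_t^{\gamma}\\
\frac{V_{i,t}}{L_{i,t}} &= A\left(\frac{K_{i,t}}{L_{i,t}}\right)^{\alpha} OSS_t^{\gamma}\\
\ln\frac{V_{i,t}}{L_{i,t}} &= \ln A + \alpha\,\ln\frac{K_{i,t}}{L_{i,t}} + \gamma\,\ln OSS_t
\end{align*}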
Data Resources and OSS Market Value
Calculation of OSS Market Value
Although software is an intangible asset, quantitative evaluations, such as price, are required to determine the economic effect of OSS. In software development, the person-month scale is generally applied. On the other hand, the value of software is evaluated in terms of market transactions. In our view, the difference between the standards of the person-month scale and the market price makes it difficult to evaluate the economic effect of OSS.
Moreover, there are no official statistics, such as price and market value, for OSS. Although OSS has increasingly come to resemble economic activity in recent years, it was not necessarily developed to obtain evaluation and market share data at the outset. This is one reason why it does not lend itself easily to quantitative assessments in relation to measurements such as price and market value. Therefore, to evaluate the market value of OSS, it is necessary to change from an obscure standard such as the person-month to something more obvious, such as price.
Other scholars have undertaken a few quantitative assessments of OSS. MacPherson et al. (2008) calculated the development cost of Fedora9, and estimated it at 10.8 billion dollars. Glott and Haaland (2009) also evaluated the development cost of Debian as approximately 12 billion euros. Furthermore, Garcia-Garcia and Magdaleno (2010) estimate the market value of the Linux kernel (version 2.6.30) at 1 billion euros. For their monetary assessments of OSS, MacPherson et al. (2008) and Garcia-Garcia and Magdaleno (2010) used a Constructive Cost Model (COCOMO), which calculates the effort and the period required for software development 5 . COCOMO is a method advocated in [START_REF] Boehm | Software Engineering Economics[END_REF]; it can estimate the effort that goes into software development on a person-month scale, based on lines of source code. Multiplying such effort estimates by wages makes it possible to evaluate the monetary value of OSS. We shall use the same method to estimate the market value of OSS in this paper. For this purpose, we shall use the following formula.
EFFORT_t = a \cdot (KSLOC_t)^{b} \cdot \prod_{j} C_j \qquad (4)
Formula (4) is the basic COCOMO, in which the person-month effort EFFORT_t depends on the kilo source lines of code KSLOC_t and the cost factors C_j. The coefficients a and b are parameters determined by the scale and the environment of software development; these are computed by regressing past development projects, as reported in [START_REF] Boehm | Software Engineering Economics[END_REF]. Based on MacPherson (2008), we set coefficient a at 2.4 and b at 1.05 6 . For the number of lines of source code, we adopt statistics released by Ohloh, which is a project that publishes statistics and information about OSS 7 8 .
5 The development effort calculated by basic COCOMO does not depend on the kind of programming language, but rather on the number of lines of source code. On this point, it seems that COCOMO maintains the objectivity of the calculation. However, it is necessary to realize that this technique does not take the programming language into consideration.
6 There can be three levels to a software development project in COCOMO: Organic, Semi-detached, and Embedded. That is, each parameter is set as a=2.4, b=1.05 for Organic, a=3.0, b=1.12 for Semi-detached, and a=3.6, b=1.20 for Embedded.
In COCOMO, cost factors are composed of 15 variables, including hardware, human resources, and project environments. A coefficient was assigned to each of these factors. Then, based on [START_REF] Wheeler | SLOCCount User's Guide[END_REF], we assigned to the development of OSS a cost factor parameter of 2.4. [START_REF] Wheeler | SLOCCount User's Guide[END_REF] has demonstrated that software development not only requires a labor force, but also the capacity to meet the costs of tests, facilities, and management. Because the proportion of labor force input is one of the cost factors in software development, it is necessary to count the variable multiplied by a cost factor coefficient of 2.4 as a development cost.
According to MacPherson et al. ( 2008) and Garcia-Garcia and Magdaleno (2010), OSS development costs and market value can be calculated by multiplying effort by wages. Based on these studies, we define the market value of OSS by formula ( 5), as follows.
OSS_t = \frac{EFFORT_t}{12} \times w_t \qquad (5)
In formula ( 5), man-year scale effort (EFFORT t /12) is obtained by dividing the man-month effort by 12 (i.e. 12 months). Next, multiplying person-year effort by programmer's wages w t enables us to calculate the market value of OSS for every year as OSS t 9 . Because in economic theory, marginal cost and price are equal at market equilibrium point, effort for OSS development is equal to its market value 10 .
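As an arithmetic illustration of formulas (4) and (5) — a minimal sketch with made-up inputs, not a reproduction of the estimates in Table 2:

    public class OssMarketValue {
        public static void main(String[] args) {
            double a = 2.4, b = 1.05;          // basic COCOMO parameters (Organic level)
            double costFactors = 2.4;          // overall cost-factor product used in the text
            double ksloc = 12000;              // hypothetical: 12 million lines of code
            double annualWage = 3500000;       // hypothetical programmer wage in yen per year
            double effortPersonMonths = a * Math.pow(ksloc, b) * costFactors;  // formula (4)
            double effortPersonYears = effortPersonMonths / 12.0;
            double marketValue = effortPersonYears * annualWage;               // formula (5)
            System.out.printf("effort = %.0f person-years, market value = %.0f yen%n",
                    effortPersonYears, marketValue);
        }
    }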
Empirical Result
Labor Productivity
Table 1 shows the trend in labor productivity per person-hour in Japan's information service industry for the period 2001 to 2009. Based on the "Currently Survey of Selected Service Industries" published by the Ministry of Economy, Trade and Industry, we divided this industry into 8 enterprise groups according to the number of workers, as shown in Table 1.
7 Ohloh is a project which supplies information about OSS development, publishing about 550,000 project trends as of April 2012. See <http://www.ohloh.net>.
8 Ohloh divides the number of lines of source code into "code", "comments", and "blanks". In this paper we use "code" to refer to the number of lines of source code.
9 There are two problems concerning programmers' wages. First, the wage level in Japan is lower than that of other countries. For example, programmers in Japan earned on average 3.5 million yen in the 2000s. However, MacPherson et al. (2008) estimated that programmers earned $7.5 thousand in 2008, and Garcia-Garcia and Magdaleno (2010) estimate that programmers in Europe earned €31 thousand in 2006. Compared with these results, Japanese programmers earn a low wage. Therefore, the possibility of an underestimation of the market value of OSS cannot be denied. Second, OSS development is global, so using the Japanese average wage level is somewhat unfavorable. Ideally, we would use time series data revealing the developers' nationalities and the proportion of their respective contributions; however, we were unable to obtain the data for such a calculation.
10 Then, we calculated the real OSS market value with the price index.
Regarding the labor productivity trend, three points can be deduced from Table 1. First, the level of labor productivity in an enterprise employing 500 or more workers is higher than that of other enterprises. This accounts for the discrepancies among enterprises within the same industry, as noted by [START_REF] Tanihana | Study on Production Structure of Information Service Industry in Japan[END_REF]. Second, Table 1 shows that the fluctuation of labor productivity decreases as the scale of business expands 11 . Third, after 2009 the level of labor productivity in this table declines, which was caused by the economic recession of those days.
From these features, it is obvious that an information service enterprise with over 500 workers, a so-called "big vendor", firmly maintains a high level of productivity. Again, it is possible to see the difference between big vendors and small vendors in terms of labor productivity in the Japanese information service industry.
OSS Market Value
Table 2 shows the OSS market values estimated using formulas (4) and (5). This paper discusses the Linux kernel, MySQL, PostgreSQL, Apache HTTP Server, Perl, Ruby, Python, PHP, Ruby on Rails, and Open Office, which were the objects of this study. Today, each of these ten is well known and widely used in the field of business. In Table 2, the estimation period ranges from 2001 to 2010, with a few exceptions. First, the Linux kernel typifies today's OSS movement. Table 2 shows that its market value was approximately 173.5 billion yen in 2010. Table 2 clearly shows that no other OSS has had the same market value as the Linux kernel during the period of this study. The reason for the high economic value of the Linux kernel is the high accumulation of its source code 12 .
Open Office is one of the office suite programs originally developed by Sun Microsystems and released as OSS in 2000. According to table 2, its market value was approximately 50 billion Yen in 2010. This figure ranks second to that of the Linux kernel.
On the other hand, Ruby on Rails is OSS that was still in the developmental stage. Because Ruby on Rails appeared in 2004, sometime after the other OSS, the accumulation level of its source code is low. This seems to explain its lower economic value.
Contribution of OSS to Labor Productivity
Table 3 shows the estimated contribution of OSS to the Japanese information service industry, obtained by regressing formula (3). Our research objects are enterprises, which are categorized into 8 groups based on the "Current Survey of Selected Service Industries."
12 The market value of the Linux kernel grew rapidly between 2005 and 2006. This was due to the rapid increase in code accumulation.
The estimated elasticity of the Apache HTTP Server is indicated in the sixth line of the upper section, showing an estimated result of 0.278. From this, it is plausible to conclude that the Apache HTTP Server has had a positive effect on the labor productivity of the information service industry in Japan. As regards programming languages, we estimated the economic effect of four languages: Perl, Ruby, Python, and PHP. The elasticity estimates are shown in the seventh line of the upper section, as well as the second, third, and fourth lines of the lower section. Although the estimated results are limited by the number of OSS selected for our study, we can still conclude that, generally speaking, OSS programming languages have had a positive effect on the Japanese information service industry.
The estimated results for Ruby on Rails are shown in the fifth line of the lower section. Although its elasticity was -0.048, which indicated a negative economic effect, its t-value was 0.491. Therefore, Ruby on Rails seems not to have had any economic effect on the Japanese information service industry.
We estimated the economic effect of Open Office as an office suite OSS. The results are shown in the sixth line of the lower section. Judging from our results, we can conclude that Open Office has had a positive economic effect on Japan's information service industry.
Conclusion
The paper has discussed OSS from the viewpoint of "open innovation". The free availability of OSS and the near-zero cost of duplicating it are both aspects that are guaranteed by licenses, which leads us to view OSS as a kind of infrastructure or public good.
Information service enterprises use OSS in the bid to strengthen their productivity and competitive advantage. Through the "bazaar" process, OSS is developed in a community that is organized independently from the enterprises contributing to such development.
Information service enterprises are able to create value by "connecting" OSS with their "internal" resources. Chesbrough defines this process as "open innovation" to denote a change in business model from a scale-intensive to a "connection"intensive approach. It is therefore necessary to consider the effect of such a "connection" when analyzing the economic effect of OSS.
To conduct an analysis from an economic viewpoint, it is necessary to factor in the market value of OSS. For this purpose, we used COCOMO to calculate the market value. This calculation depended on effort. We found that the Linux kernel and the Open Office software have a high market value. In the case of the Linux kernel, this is because its early appearance has facilitated the buildup of its source code, while Open Office's high value is attributable to the fact that it was originally developed as proprietary software and has built up its source code since then. However, to calculate the market value of OSS, we had to rely on statistics for only Japan. Since the development of OSS is global, we concede that our exclusive reliance on Japanese statistics is not ideal. We had no other choice, though, because we were unable to obtain any data that provided a percentage breakdown of developers' respective nationalities and contributions. We would like to improve upon the method used in calculating the market value in a future study.
The calculation of the market value of OSS enabled us to estimate the contribution of OSS to labor productivity in the Japanese information service industry. We chose ten different kinds of OSS as our research objects. The results of our analysis show, broadly speaking, that although OSS has a positive economic effect on Japan's information service industry, the economic effects vary across the individual OSS.
OSS has recently become an indispensable resource in the information service industry. This paper clarifies the relation between technical and economic productivity. In this connection, we can infer from our results that, in order to create value and strengthen their competitiveness, information service enterprises will face choices about how they use OSS. In other words, it will be essential to decide which OSS will have a significant economic effect on the information service enterprise. However, to profit from the use of OSS, improvements to internal resources, such as talented developers, will also have to be made by such enterprises.
Fig. 1. Structure of Sale, Cost and OSS in Information Service Enterprise
3.1 Data Resources
V_{i,t}: Value added in information service enterprises. For the data on value added, we used the "Currently Survey of Selected Service Industries" released by Japan's Ministry of Economy, Trade and Industry. <http://www.meti.go.jp/statistics/tyo/tokusabido/index.html>
K_{i,t}: Capital input in information service enterprises. For the above, we used data from the "Currently Survey of Selected Service Industries" released by Japan's Ministry of Economy, Trade and Industry. <http://www.meti.go.jp/statistics/tyo/tokusabido/index.html>
L_{i,t}: Labor input in information service enterprises. Labor input is composed of the number of workers and the number of labor hours in this paper. For data on the number of workers, we used the "Currently Survey of Selected Service Industries" released by Japan's Ministry of Economy, Trade and Industry. We obtained the labor hours data from the "Monthly Labor Survey Statistics" released by Japan's Ministry of Health, Labor and Welfare. <http://www.mhlw.go.jp/toukei_hakusho/toukei/>
w_t: Wages for programmers. We used the "Basic Survey on Wage Structure" published by Japan's Ministry of Health, Labor and Welfare for the data on the wages of programmers. <http://www.mhlw.go.jp/toukei_hakusho/toukei/>
Table 2. OSS Market Value (Millions of Japanese Yen)
Regarding changing business styles in enterprises,[START_REF] Kokuryo | [END_REF] points out the change in the form of information processing from centralized to distributed processing.
Raymond's (1998) statement that "Given enough eyeballs, all bugs are shallow" is another interesting way of expressing the "connection" effect.
This production function is a fundamental technique in economic growth factor analysis.
This relationship revealed by formula (2) is based on the public goods economic analysis model, which is studied by Ashauer (1989) and Ford and Porter (1991).
This figure shows that management becomes stable according to expansion of business scale. The gap of labor productivity in this figure results from vertical Japanese industrial (keiretsu-like) structure. See[START_REF] Tanihana | Study on Production Structure of Information Service Industry in Japan[END_REF].
Note: t-values are in parentheses. **Significant at 5% level, ***Significant at 1% level.
First, the estimated result shown in the second line of the upper section is the production structure of enterprises that do not use OSS. Because the elasticity of the ratio of the capital equipment is 0.292, it is reasonable to conclude that the Japanese information service industry can enhance its productivity or ability to create added value through the enrichment of "inner" resources.
The estimated results of the Linux kernel are produced in the third line of the upper section, which shows that the contribution of the Linux kernel, when measured in terms of its elasticity, was 0.307. We can conclude from this that the use of the Linux kernel has a positive economic effect on the Japanese information service industry.
We chose MySQL and PostgreSQL as the research objects in database server. Their results are shown in the fourth and fifth line of the upper section, respectively. The estimated elasticity of MySQL was 0.357 and for PostgreSQL it was 0.486. Based on these results, the Japanese information service industry could possibly enhance their labor productivity with these OSS.
"1001494",
"989698"
] | [
"466675",
"466675"
] |
01467580 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467580/file/978-3-642-38928-3_3_Chapter.pdf | James Piggot
Chintan Amrit
How Healthy is my Project? Open Source Project Attributes as Indicators of Success
Determining what factors can influence the successful outcome of a software project has been labeled by many scholars and software engineers as a difficult problem. In this paper we use machine learning to create a model that can determine, with some accuracy, the stage a software project has reached. Our model uses 8 Open Source project metrics to determine the stage a project is in. We validate our model using two performance measures: the exact success rate of classifying an Open Source Software project, and the success rate within an interval of one stage of its actual status, using different scales of our dependent variable. In all cases we obtain an accuracy of above 70% with one-away classification (a classification that is off by at most one stage) and about 40% accuracy with an exact classification. We also determine the factors that influence the health of an OSS project, according to one classifier that uses only eight of the variables available in SourceForge.
Introduction
Determining what makes a software project successful has been a research topic for well over 20 years. The first model that defined the factors influencing software success was published in 1992 by Delone and McLean [START_REF] Delone | The DeLone and McLean model of information systems success: a ten-year update[END_REF], as the Information Systems Success Model. Since then there has been a considerable effort in research to determine what can be done to minimize project failure. However, the factors that influence commercial projects differ from those that influence projects known as FLOSS, or Free/Libre Open Source Software. Attempts at remedying this gap have focused on statistical models that address certain aspects of the software development lifecycle. Only recently has historical data been used to determine the changing nature of success factors during a project's lifecycle [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF].
In this paper we use machine learning, in the form of decision trees, to predict the development stage of an Open Source project based on project metrics 1 , project constraints and circumstances. This model will serve as an indicator of OSS project health that will enable developers to determine accurately what stage their project is in and what is necessary to improve project success. For organizations seeking to use OSS, it can also be used to determine what risks are associated with sponsoring a project.
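As a rough sketch of the kind of workflow this implies — assuming a Weka-style toolchain with a J48 (C4.5) decision tree; the file name "projects.arff" and the attribute layout are illustrative assumptions, not artifacts of this study:

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class StagePredictor {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("projects.arff").getDataSet(); // project metrics plus status
            data.setClassIndex(data.numAttributes() - 1);                  // project status is the class
            J48 tree = new J48();                                          // C4.5 decision tree
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));        // 10-fold cross-validation
            System.out.println(eval.toSummaryString());
            tree.buildClassifier(data);                                    // inspect which metrics split first
            System.out.println(tree);
        }
    }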
Previous research has tried to understand which indicators influence a project's success and how these indicators are interrelated, but there have been very few working models [START_REF] Crowston | Information systems success in free and open source software development: theory and measures[END_REF]. What this paper proposes is that, through machine learning, we can model which available project metrics are of importance in determining OSS project health. Our method differs from previous attempts at building a model, which were based on statistical correlations that approximated success factors without revealing how they actually influenced a project's status [START_REF] Comino | From planning to mature: On the success of open source projects[END_REF][START_REF] Lee | Measuring open source software success[END_REF][START_REF] Midha | Factors affecting the success of Open Source Software[END_REF].
In the last few years a considerable number of papers have been published that have tried to determine what the indicators of OSS project success are and how these indicators are interrelated. Often a number of these metrics are empirically tested on OSS projects found on SourceForge 2 , a key OSS repository. In this paper we try to estimate the status of a project based on various metrics related to an OSS project. We also determine the accuracy of the subjective status of an OSS project in SourceForge that is provided by the OSS project leader. In doing so, we extend the research work on the problems with reporting the status of a software project [START_REF] Snow | The challenge of accurate software project status reporting: a two-stage model incorporating status errors and reporting bias[END_REF] to OSS projects. For the purposes of this research we use SourceForge to obtain a data sample and use longitudinal data collected from 2006 to 2009. We have limited the sample to projects starting in 2005, in order to observe all stages of a project's lifecycle.
Literature Review
Recent research on OSS success factors has focused on enlarging the scope of influences, common elements found cite factors such as user/developer interest [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF], the critical number of active developers [START_REF] Mockus | Two Case Studies of Open Source Software Development: Apache and Mozilla[END_REF] and software quality [START_REF] Lee | Measuring open source software success[END_REF].
Ever since the publication of the Information Systems Success model by Delone and Mclean [START_REF] Delone | The DeLone and McLean model of information systems success: a ten-year update[END_REF], researchers have attempted to define in what way the factors that influence Open Source Software differ from those of commercial software. Early research showed that, due to the geographical dispersal of developers and the lack of formal managerial methods, coordination becomes more difficult [START_REF] Wang | Survival factors for Free Open Source Software projects: A multi-stage perspective[END_REF]; this has been offset by the proliferation of software forges, which act as a single locale for a project's communication and development as well as a download site for users. To determine which metrics found on a software forge can be used to determine project success, an explanation of the IS success model is in order.
Open Source Health
In order to gauge the success of the Open Source projects we studied in this paper, we looked into literature on measuring Open Source success.
Crowston et al. [START_REF] Crowston | Information systems success in free and open source software development: theory and measures[END_REF] collect data on the bug tracker and the mailing list of the projects to determine the health of the projects. They propose that the structure of the OSS community determines the health of the community and state that an onion structure is one of the better OSS community structures. Subramanian et al. [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF] measure an Open Source project"s success by measuring user interest, project interest and developer interest. They measure user interest by calculating the number of project downloads and to measure the developer interest in the project, Subramanian et al. [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF] count the number of active developers in the project. Finally, they measure project activity by calculating the number of files released in the project [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF].
Other authors such as Stewart et al. [START_REF] Stewart | Impacts of license choice and organizational sponsorship on user interest and development activity in open source software projects[END_REF] find that licence choice (i.e. how restrictive the licence is) and organizational sponsorship (i.e. its affiliation with a for-profit company or university) determine how successful the OSS projects are. In addition to these measures, Sen et al. [START_REF] Sen | Open source software licenses: Strong-copyleft, non-copyleft, or somewhere in between?[END_REF] also find subscriber base (i.e. the number of individuals who chose to be updated about the project developments) and number of developers to reflect the "healthiness" of an OSS project [START_REF] Sen | Open source software licenses: Strong-copyleft, non-copyleft, or somewhere in between?[END_REF]. Chengalur-Smith et al. [START_REF] Chengalur-Smith | Sustainability of free/libre open source projects: A longitudinal study[END_REF] work on similar lines, and predict that an OSS project's age and size help in the sustainability of the project (i.e. its ability to retain interest and continue to attract developers) [START_REF] Chengalur-Smith | Sustainability of free/libre open source projects: A longitudinal study[END_REF]. Amrit and Hillegersberg [START_REF] Amrit | Exploring the impact of socio-technical coreperiphery structures in open source software development[END_REF], on the other hand, explore the core-periphery shifts of development activity and their impact on OSS project health. They find that a steady movement of developers away from the core of the software code is indicative of an unhealthy OSS project [START_REF] Amrit | Exploring the impact of socio-technical coreperiphery structures in open source software development[END_REF].
Regarding the techniques used to analyse OSS data, English and Schweik [START_REF] English | Identifying success and abandonment of FLOSS commons: A classification of Sourceforge. net projects[END_REF] produce a six-part classification for OSS projects. They base this classification on phone interviews with OSS developers, manual coding of a sample of OSS projects from SourceForge.net, and theoretical insights from Hardin's "Tragedy of the Commons". English and Schweik operationalize these definitions and test them on 110,933 SourceForge projects, with low error rates. Wiggins and Crowston [START_REF] Wiggins | Reclassifying success and tragedy in FLOSS projects[END_REF] extend this research and analyse another SourceForge data set. Of 117,733 projects, they classify 31% as abandoned at the Initiation stage, 28% as abandoned at the Growth stage, and 14% as successful at both the Initiation and Growth stages.
Though the dependent variables of English and Schweik [START_REF] English | Identifying success and abandonment of FLOSS commons: A classification of Sourceforge. net projects[END_REF] are well thought out, they do not explore the relationship of their classification with the existing classification of projects in SourceForge. Furthermore, their focus is by and large to determine the number of successful and unsuccessful OSS projects and to classify projects into their six categories. In this paper, we also try to determine the factors that affect the health of an OSS project. Specifically, we try to predict the subjective classification provided by the project managers and developers of the different SourceForge projects in order to (1) check the validity of the subjective classification and (2), if the classification is indeed valid, use the classifier to determine the variables that affect project health.
Success Factors
Previous research has focused on using three well-known metrics to determine project success, with the added benefit that they have corresponding measures on SourceForge [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF][START_REF] Crowston | Information systems success in free and open source software development: theory and measures[END_REF].
The use of longitudinal data from past projects hosted by SourceForge.net to determine OSS success is also an innovation in recent studies [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF]. They divide the independent variables into two groups, time-variant and time-invariant, and determine how each affects the success measures. The outcome of this study validates the idea of using historical data, as it shows that past levels of developer and user interest influence present interest. The practical consequence of this change in popularity is that lead developers and project managers can better anticipate the future need for resources and manage both the internal and external network size of a project [START_REF] Sharda | Predicting box-office success of motion pictures with neural networks[END_REF].
The choice of software license can also have a detrimental influence on the success of a project [START_REF] Comino | From planning to mature: On the success of open source projects[END_REF]. The authors find that if more effort is necessary to complete the project, developers tend to choose less restrictive licenses, such as those from the No-Copyleft or Weak-Copyleft categories, as opposed to Strong-Copyleft. This holds true even when developers would prefer to use more restrictive licenses to ensure that derivative work is adequately protected. The choice of license can be influenced by external factors such as royalties and network effects. As such, the preferred license can differ from the optimal license. Other research has shown that when a project has managed to pass through the initial stages of its lifecycle with a less than optimal license, this will not severely influence future success [START_REF] Wang | Human agency, social networks, and FOSS project success[END_REF].
A difficult topic to study is determining what factors influence OSS projects in both the initial stage and the growth stage. As data from a project's initial stages is often absent, this reduces to determining what time-invariant factors can influence the growth stage. Research has found that the initial stage of an OSS project is indeed the most vulnerable time period, as a project competes for legitimacy with other similar projects in attracting developers.
Methodology
Data and variable definitions
In order to build a model that approximates the status value of a software project on SourceForge, we need to gather factors that might influence developers to assign a particular status value.
Dependent variables
To determine the success of a software project we try to categorize what status (stage) of development the project has reached.
Project Status
The current progress of a software project can be placed in one of five development stages according to the system development life cycle: Requirements Planning, Analysis, Design, Development and Maintenance [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF]. Another way to describe a project's progress is through terms such as Planning, Alpha, Beta and Stable. Shifts in progress are marked by improvements in completeness, consistency, testability, usability and reliability [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF].
SourceForge.net maintains a system of 7 status designations (Table 1). The numbers 1 through 6 stand for Planning, Pre-Alpha, Alpha, Beta, Production/Stable and Mature. The last status is an outside category for projects that are Inactive. Previous research [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF] makes it clear that projects reaching advanced stages of their life cycle can be expected to be more in favor with users than those in earlier stages, as their input goes beyond mere maintenance. As more users make use of the software, they also generate more bug reports, feature requests and feedback/suggestions. In turn developers develop more patches. As such, the later stages of development are marked by more development activity related to patches, bugs and feature requests. The use of historical data in previous research [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF] shows that increased numbers of developers and users show up later in increased project activity. To better capture this relationship between different time periods within a project, we have selected projects on SourceForge that have valid data for a period of four years, from 2006 to 2009. The 7 stage status category used by SourceForge is considered by some [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF][START_REF] Sen | Open source software licenses: Strong-copyleft, non-copyleft, or somewhere in between?[END_REF] an awkward use of the typical lifecycle definitions used in software development. SourceForge uses only vague descriptions for each status, and much is left to developers to decide which status their project is in. Especially the difference between Pre-Alpha and Alpha, as well as between Production/Stable and Mature, may be cause for confusion. To overcome this, we also consider the binary project status representing whether the project is active, and a project status variable that has four categories, namely Planning, Alpha, Beta and Stable. To obtain the four stages, we collapsed the Inactive and Planning stages into the Planning stage, aggregated Pre-Alpha and Alpha into the Alpha stage, and merged Production/Stable and Mature into the Stable category.
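The recoding described above can be expressed as a simple mapping, for instance (a sketch with pandas; the column and variable names are illustrative, not those of our database):

    import pandas as pd

    # 'status' holds the SourceForge code 1-7 (1 Planning, 2 Pre-Alpha, 3 Alpha,
    # 4 Beta, 5 Production/Stable, 6 Mature, 7 Inactive).
    projects = pd.DataFrame({"status": [1, 2, 3, 4, 5, 6, 7]})

    # Four-stage recoding: Planning, Alpha, Beta, Stable.
    four_stage = {1: "Planning", 7: "Planning",   # Planning and Inactive collapsed
                  2: "Alpha", 3: "Alpha",         # Pre-Alpha and Alpha collapsed
                  4: "Beta",
                  5: "Stable", 6: "Stable"}       # Production/Stable and Mature collapsed
    projects["status_4"] = projects["status"].map(four_stage)

    # Binary recoding: active unless the project is still planning or inactive.
    projects["active"] = (~projects["status"].isin([1, 7])).astype(int)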
Projects on SourceForge.net can also be differentiated along a different dimension, that of project activity. It can reasonably be assumed that projects in the planning stage and those that are inactive have neither code to download nor developers working on the project. As such they should be markedly different from projects for which code is available for download and alteration. We propose to check for ways in which inactive projects differ from active ones, apart from the aforementioned variables.
Time-invariant variables
Variables that can influence the success of an OSS project can be divided into two groups: time-invariant factors and time-variant factors. The variables included have been previously identified in the literature as affecting OSS success [START_REF] Crowston | Information systems success in free and open source software development: theory and measures[END_REF][START_REF] Sen | Open source software licenses: Strong-copyleft, non-copyleft, or somewhere in between?[END_REF].
For the time-invariant variables we have chosen those that define a project in general terms, such as the license [START_REF] Comino | From planning to mature: On the success of open source projects[END_REF], the operating system that can be used and the programming language [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF] in which the code is written, to determine if they have an influence on the project status. Each variable is divided into binary category variables, such as Strong-Copyleft, Weak-Copyleft and No-Copyleft for license, and each project is assigned either the value 0 or 1 to show whether it supports a particular feature.
The time-invariant variables have been further augmented with simple numerical variables that list the number of features of each category that a project supports. For license, for example, there is a variable that counts the number of licenses used by a project.
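A minimal sketch of this encoding for the license variables (names and data illustrative; the same pattern applies to the operating system and language variables):

    import pandas as pd

    # Hypothetical input: one row per project, license categories stored as labels.
    projects = pd.DataFrame({
        "project": ["a", "b", "c"],
        "license_categories": [["Strong-Copyleft"], ["No-Copyleft", "Weak-Copyleft"], []],
    })

    # Binary indicator (0/1) per category ...
    for category in ["Strong-Copyleft", "Weak-Copyleft", "No-Copyleft"]:
        projects["lic_" + category] = projects["license_categories"].apply(
            lambda cats, c=category: int(c in cats))

    # ... plus a count variable giving the number of licenses used by the project.
    projects["lic_count"] = projects["license_categories"].apply(len)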
License
The license used by a project can influence the amount of support it gets, as it affects the interests of users and developers [START_REF] Stewart | Impacts of license choice and organizational sponsorship on user interest and development activity in open source software projects[END_REF]. Software licenses can be broadly divided into three groups based on their level of restrictiveness, which determines whether users can distribute derivatives or modify the software (copyfree).
These categories are Strong-Copyleft, Weak-Copyleft and No-Copyleft. Licenses such as the GPL (General Public License) and the BSD (Berkeley Software Distribution) License are grouped into these categories depending on whether they support properties such as "copyfree" or not. Various research papers already use this division of licenses and the assignment of individual licenses to these categories [START_REF] Wang | Survival factors for Free Open Source Software projects: A multi-stage perspective[END_REF][START_REF] Stewart | Impacts of license choice and organizational sponsorship on user interest and development activity in open source software projects[END_REF]. However, numerous licenses cannot be exactly assigned to any of the three categories because they do not conform to the GPL format. As far as possible, they are assigned based on the effect these licenses have on user and developer choices.
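The assignment itself reduces to a lookup with a manual fallback for licenses that do not fit any group cleanly; the mapping below is an illustrative and deliberately incomplete sketch:

    # Illustrative mapping from license families to restrictiveness categories.
    LICENSE_CATEGORY = {
        "GPL": "Strong-Copyleft",
        "LGPL": "Weak-Copyleft",
        "MPL": "Weak-Copyleft",
        "BSD": "No-Copyleft",
        "MIT": "No-Copyleft",
    }

    def categorize_license(name: str) -> str:
        # Licenses outside the usual groups are flagged for manual assignment,
        # based on their effect on user and developer choices.
        return LICENSE_CATEGORY.get(name, "Needs-manual-assignment")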
Operating System
The operating system used for the development and use of a project can have a severe impact on its popularity, as it determines how many users the project could potentially reach as well as what type of license the developer intends to use [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF]. Traditionally, open source software has used UNIX, Linux and their derivatives for development, which caused OSS to be somewhat excluded from other operating systems such as Windows and Mac OS X. With the popularity of languages such as Java that make portability possible, Windows has become increasingly popular among OSS developers. Previous research [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF] has indeed focused on three categories of operating system: the Windows family, UNIX (which also includes Linux and POSIX) and an "other" category that includes Mac OS X. They show that UNIX and Linux type operating systems have a negative correlation with user interest but a positive correlation with developer interest, and explain this by the roots of the OSS community, whose members frequently started their careers on UNIX and Linux machines.
We have expanded the number of OS groupings to also include "OS Independent" as a category, to reflect the increasing popularity of portability. Mac OS X has also been grouped into a separate category as an acknowledgement of its increasing popularity. Other operating systems were left out of this study. The increased number of categories should allow better rules to be deduced from our data mining efforts.
Similar to the license variables, these categories denote binary variables, and an additional variable has been added that counts the number of operating systems supported by a project.
Programming Language
Earlier work on the effects of the programming language used in a project has focused solely on the C family of languages, while other languages were either excluded from study or aggregated into one category [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF][START_REF] Crowston | Information systems success in free and open source software development: theory and measures[END_REF]. This study intends to rectify this deficiency by also including popular languages such as Java and PHP as separate categories, without denying the continued importance of C-type languages.
Because the C programming language was used for the implementation of UNIX, it has remained popular with UNIX and Linux developers ever since [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF]. Despite memory allocation problems it has remained a favorite for projects that have more stringent processing and real-time requirements. Through the prevalence of high quality compilers and the importance of derivative languages (C++, C# and Visual C++), the use of C can be associated with more developers and project activity [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF].
For our study we have expanded the number of language categories to 5 and included "C-family", "Java", "PHP", "Python" and "Others" as separate categories.
Time-variant variables
The three success measures previously mentioned that have their roots in the IS Success Model also have their equivalents in OSS projects found on SourceForge. These are Project Activity (number of files, bug fixes and patches released), User Activity (number of downloads) and Developer Activity (number of developers per project). Crowston et al. [START_REF] Crowston | Information systems success in free and open source software development: theory and measures[END_REF] discovered that these measures are interrelated: as developers are often users, the number of downloads and the number of developers are correlated. Project Activity is also closely correlated with User Activity; as users download the latest software releases, developers tend to flock to such projects as well. We use the above three metrics as the basis for our time-variant variables.
Other variables include the number of donors, forum posts, mailing lists, feature requests and "Service Requests" that allow users to ask for help from developers. Our dataset also includes the project age in days, counted backwards from 2009, as a control variable. Combined, we have constructed a dataset that contains 38 variables: 35 independent variables and three variations of one dependent variable, i.e. project status with 7, 4 and 2 categories.
Dataset sampling
SourceForge.net is the largest web portal for the development of Open Source Software. It acts as a repository for code, as a tracking system for bugs and features, and as a communication outlet for those involved in software development. As of November 2012 it hosts some 300,000 projects that differ in a wide range of categories, such as intended audience, the topic of the project and the license used, as well as in technical attributes by which projects can distinguish themselves: programming language, supported OS and the graphical user interface used. For the purpose of this study it was impossible to gather data directly from SourceForge through a screen scraper, as the servers of SourceForge.net cannot distinguish this activity from more nefarious ones such as a Denial of Service attack.
The dataset used has thus been obtained from a third party which has made the data available [START_REF] Howison | FLOSSmole: A collaborative repository for FLOSS research data and analyses[END_REF]. FLOSSmole.org contains data collected for the period 2006 to December 2009, from which a dataset of 125,700 projects was compiled. Unfortunately, many projects had missing data, because developers entered no data, project portals were not maintained, or the screen scraper that collected the data often did so incorrectly, which corrupted portions of the dataset.
Our dataset initially contained 125,700 projects, and most projects had incomplete data for the time period 2006 to 2009. Upon cleaning the data we were left with 28,282 rows in our database.
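The cleaning step amounts to dropping projects with missing values over the observation window, roughly as follows (file name and column layout are assumed for illustration; this is not the exact script used):

    import pandas as pd

    # Hypothetical FLOSSmole extract with one row per project and
    # yearly columns for 2006-2009.
    raw = pd.read_csv("flossmole_2006_2009.csv")   # assumed file name

    # Keep only projects with complete data over the whole window.
    clean = raw.dropna()
    print(len(raw), "projects before cleaning,", len(clean), "after")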
Experiment Methodology
We used SPSS 2.0 decision tree analysis to analyze the data and predict project status. In our research we chose the CHAID [START_REF] Kass | An exploratory technique for investigating large quantities of categorical data[END_REF] and CART [START_REF] Breiman | Classification and regression trees: Chapman & Hall/CRC[END_REF] methods of classification, in order to handle over 35 independent variables, some of them categorical, some numeric and non-parametric.
Decision trees can suffer from over-training, whereby the tree continues to grow and might afterwards fail to generalize to test data, because it uses rules learned from the training data that are incompatible with the test data. Both CHAID and CART use different ways to limit the growth of decision trees.
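The study itself relies on SPSS, but as an illustration of limiting tree growth up front, a CART-style tree could be fitted as follows with scikit-learn (a sketch only; X_train/y_train and X_test/y_test are assumed to hold the encoded independent variables and status labels from a prior split, and the growth limits are illustrative; CHAID is not available in scikit-learn and would require a dedicated package):

    from sklearn.tree import DecisionTreeClassifier

    # CART-style tree; growth is limited up front to reduce over-training.
    tree = DecisionTreeClassifier(
        criterion="gini",     # CART impurity measure
        max_depth=6,          # illustrative limits, not the study's settings
        min_samples_leaf=50,
        random_state=0,
    )
    tree.fit(X_train, y_train)
    print("training accuracy:", tree.score(X_train, y_train))
    print("test accuracy:", tree.score(X_test, y_test))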
CHAID
CHAID, which stands for "Chi-squared Automatic Interaction Detection", uses a statistical stopping rule to keep the tree from growing to impossible sizes. CHAID has the advantage of being able to handle categorical variables. Other research using this method indicates that it excels at analyzing all the factors that can possibly influence a dependent variable, but its performance at predicting these values on subsequent data samples is often poor.
CART
CART, also known as "Classification and Regression Trees", builds a tree based on theory quite different from CHAID. CART uses a non-parametric approach which can work with both categorical and numerical variables, and also has the ability to model the complex interactions among the variables [START_REF] Breiman | Classification and regression trees: Chapman & Hall/CRC[END_REF]. It first grows the tree to its full size and afterwards prunes the tree until the accuracy of the tree is similar for both the training dataset and the test dataset.
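Outside SPSS, this grow-then-prune behaviour can be approximated with cost-complexity pruning, for example (a sketch under the same X_train/y_train, X_test/y_test assumptions as above; the selection rule mirrors keeping the pruned tree whose training and test accuracies are closest):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Compute the pruning path of a fully grown tree, then pick the pruned tree
    # whose training and test accuracies are closest to each other.
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
    best_tree, smallest_gap = None, np.inf
    for alpha in path.ccp_alphas:
        candidate = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
        candidate.fit(X_train, y_train)
        gap = abs(candidate.score(X_train, y_train) - candidate.score(X_test, y_test))
        if gap < smallest_gap:
            best_tree, smallest_gap = candidate, gap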
The reason why we chose CHAID and CART is that both classifiers can work with categorical variables and use different theoretical models; they are also comparable in some respects (lift in response) [START_REF] Haughton | Direct marketing modeling with CART and CHAID[END_REF].
Cross Validation
With both methods of growing a decision tree we have used our dataset in two ways. The first is cross-validation of the entire data sample, whereby the data is partitioned multiple times into training sets to build the tree and test sets to validate it. The second is a data split, a process whereby the data is manually split into training and test sets. For the purposes of this study we used a 50-50 data split in order to avoid overtraining.
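Expressed outside SPSS, the two validation schemes look roughly as follows (a sketch; X and y are assumed to hold the encoded predictors and status labels, and the ten-fold setting is illustrative):

    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # 50-50 split: one half grows the tree, the other half validates it.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)

    # Cross-validation of the entire sample: repeated partitioning into
    # training and test folds, with scores averaged over the folds.
    scores = cross_val_score(DecisionTreeClassifier(max_depth=6, random_state=0), X, y, cv=10)
    print("mean cross-validation accuracy:", scores.mean())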
Results
Below are the results for each of the three variants of project status. The results include the CHAID tests as well as the cross-validation and data-split methods.
The accuracy with which our classification tree has been able to determine the correct project status can be seen in Tables 2 and 3.
Data split
The results in Table 2 have been obtained through a 50-50 data split (to prevent overtraining) and represent the results of the training set. The method yielded two results of importance. The first is the exact match of 39.6%, whereby 9,729 of the 24,582 data samples were assigned the correct status value. For a 7-fold category this result can be considered acceptable (compared to 1/7 ~ 0.14 for random chance). The second measure, the 1-away result, shows what percentage of data samples either had exactly the correct status value or were just one value off the mark. The accuracy for this is 76.2% and suggests that the results are closely distributed around the correct value. This result validates the decision tree that was grown from the rules deduced from this test.
Table 2: Results of the CHAID decision tree with 7 stage project status.
Exact match = 39.6% (number of hits / total cases = 9729 / 24582). 1-away match = 76.2% ((number of hits + number of 1-away hits) / total cases = (9729 + 9013) / 24582).
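For reference, the two measures reported in the tables can be computed directly from reported and predicted status codes, for instance (a sketch; y_true and y_pred are assumed to be integer status codes):

    import numpy as np

    def exact_and_one_away(y_true, y_pred):
        """Return the share of exact hits and the share of predictions
        at most one status value away from the reported status."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        exact = np.mean(y_true == y_pred)
        one_away = np.mean(np.abs(y_true - y_pred) <= 1)
        return exact, one_away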
Cross-validation
For the cross-validation method, the final scores closely match those of the data-split method. However, for the status values corresponding to Planning (1) and Inactive [START_REF] Snow | The challenge of accurate software project status reporting: a two-stage model incorporating status errors and reporting bias[END_REF] the results differ significantly; as this method partitions the data set multiple times, it averages out the more extreme values obtained through the data-split method. Both results validate our method to classify software projects found on SourceForge.
Four stage project status: Planning, Alpha, Beta and Stable
Table 4 shows the results of our efforts to classify projects into the four categories popularly described in the literature. The test results were obtained using the CHAID method with a 50-50 split of the dataset. The accuracy of 45.4% is better than the score for the 7-fold status category, though its predictive value is undermined in particular by the low score when classifying projects in the Beta stage. This could be seen as evidence that this stage is a subjective one that is hard to classify through machine learning. The 1-away score of 86.0% once again shows that the scores are distributed around the correct value, though for a 4-fold category this measure is less informative.
This result validates that our method works to determine the correct lifecycle stage a software project is in.
Binary Project Status: Active and Inactive
We get an accuracy of 82.4% for the classification of projects based on a binary status of active or inactive. The results can be considered even better if we take into account that the inactive state is an aggregation of the Planning status and the Inactive status: they share much in common but also have a crucial difference, as the former can have developers assigned to it.
Table 5: Results of CHAID for a binary project status
Cross validation result.
The cross-validation method seems better able to determine whether a project is active (1). Because the method splits the dataset 10 times and tests each iteration, we can presume that the lower score for the 50-50 data split above is an aberration. This result of 82.8% accuracy shows that our method can successfully distinguish active projects from inactive projects.
Discussion
The nearly 40% accuracy for an exact match of the subjective classification and the 76% 1-away match (Table 2) indicate that the subjective classification performed by the OSS project leaders is quite accurate and correlates with the project data. This is quite unlike what is reported for commercial projects [START_REF] Snow | The challenge of accurate software project status reporting: a two-stage model incorporating status errors and reporting bias[END_REF]. The errors and the implications of this finding can be a subject for future research.
By using the CART method of decision tree analysis, we obtained a model for status classification, as shown in Figure 3. The tree shows that numerical metrics such as downloads, donors, developers and forum posts are far more explanatory of project health than time-invariant metrics such as the license used or the operating system supported.
Figure 2: The CART decision tree for our data.
This is in line with earlier research [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF]. However, this should not be surprising, as those time-invariant metrics are usually decided upon when the project is initiated and change little over its lifecycle. When they do change, they change only to suit users and developers. On the other hand, time-variant metrics, by their definition, gauge what popularity a project has presently obtained. The order of importance that the metrics have taken in the model is also as expected and follows the established literature [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF][START_REF] Crowston | Information systems success in free and open source software development: theory and measures[END_REF].
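The relative weight of time-variant versus time-invariant metrics can also be read off a fitted tree directly, e.g. (a sketch reusing the tree and an assumed feature_names list from the earlier sketches):

    import pandas as pd

    # Rank the variables by their contribution to the fitted tree.
    importances = pd.Series(tree.feature_importances_, index=feature_names)
    print(importances.sort_values(ascending=False).head(8))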
In the early stages of a project's lifecycle, the ability to attract developers is of vital importance in order to be able to develop the software along the stated goals. In the later stages of the software life cycle, it can be expected that users and developers generate more forum posts. This model also shows that the number of donors is an indication as to whether the project has status 4, 5 or 6. This can serve as an indicator, for example, that a project sponsor can have a positive influence on project success. This is in line with the findings of Stewart et al. [START_REF] Stewart | Impacts of license choice and organizational sponsorship on user interest and development activity in open source software projects[END_REF].
The number of SVN commits relates to the number of changes developers have uploaded to the central software repository on SourceForge. In the model (Figure 3) it is closely related to the number of developers on a project in the early stages of the software lifecycle. The number of CVS commits denotes the number of official software releases and, surprisingly, is not part of the model obtained.
The choice of license is also not an important factor in determining whether a project will be able to continue to succeed in the growth stage. This validates other research [START_REF] Stewart | Impacts of license choice and organizational sponsorship on user interest and development activity in open source software projects[END_REF][START_REF] Wang | Human agency, social networks, and FOSS project success[END_REF] but for the most part contradicts long-established views on how a project competes for resources.
Even though our dependent variable is the SourceForge subjective classification done by the OSS project leaders, we can say that, given the predictive accuracy of the 1-away classification, the classification model does reflect the stage/health of the OSS projects. As validation of this claim, we find that most of the important variables are also mentioned by other authors [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF][START_REF] Crowston | Information systems success in free and open source software development: theory and measures[END_REF][START_REF] Lee | Measuring open source software success[END_REF][START_REF] Midha | Factors affecting the success of Open Source Software[END_REF]. With our model we have managed to predict the status of a project with reasonable accuracy. The model in Figure 3 shows this when status 7 (inactive) is reached after a combination of few downloads, few active developers and only a small number of bug reports.
Conclusion
We make two primary contributions with this research: (i) we demonstrate that the subjective project status (especially the 1-away value) reflects the actual health of the OSS project; this finding is in line with that of [START_REF] Subramaniam | Determinants of open source software project success: A longitudinal study[END_REF] and shows that, in this respect, OSS projects differ from commercial projects [START_REF] Snow | The challenge of accurate software project status reporting: a two-stage model incorporating status errors and reporting bias[END_REF]; (ii) we determine the variables that affect project status, and in turn project health, based on nearly 30,000 projects over a period of four years.
Our research shows that with a limited set of just 8 variables (Figure 3) we can gauge the status of a software project on SourceForge. Analyzing these 8 attributes of an OSS project can help alert project controllers that their project is either poorly supported or will become obsolescent in the near future due to lack of developer interest. For prospective developers and sponsors this model can give an idea of whether a project is on track to pass through the early, difficult stages of a software life cycle on schedule and is in fact not already failing.
We think our results can provide further research opportunities for projects that also suffer from users and developers being flooded with data whose accuracy cannot be interpreted easily. Crowdfunding sites such as Kickstarter offer an index of projects that are considered "popular" and "most funded", but these may be lopsided metrics, as project size, ambition and accessibility can negatively influence them.
Exact match = 40.7%. Number of hits / total cases = 19929 / 48966. 1-away match = 76.0% (number of hits + number of 1-away hits) / total cases = (19929 + 17294) / 48966.
Table 1: Overview of SourceForge's status classification
Classification Number   Development Stage
1                        Planning
2                        Pre-Alpha
3                        Alpha
4                        Beta
5                        Production/Stable
6                        Mature
Table 3: Status results through cross-validation.
Table 4: Results of CHAID for 4 stage project status
Category   1      2      3      4      Exact match
1          1877   2081   157    73     44.8%
2          1270   5049   1274   858    59.4%
3          424    2405   1837   1327   30.7%
4          201    1621   1384   2094   39.5%
Overall: exact match = 45.4%, 1-away match = 86.0%

Figure 1: Stage results (growing method: CHAID).
Exact match = 45.4% (number of hits / total cases = 10857 / 23932). 1-away match = 86.0% ((number of hits + number of 1-away hits) / total cases = 20598 / 23932).
Table 6: Results of CHAID, CV for binary project status
We use the terms metric and attribute to mean the same concept in this paper
http://www.sourceforge.net
http://www.kickstarter.com/
"1001495",
"1001496"
] | [
"303060",
"303060"
] |
Robert Viseur
email: [email protected]
Identifying Success Factors for the Mozilla Project
The publication of the Netscape source code under a free software license and the launch of the Mozilla project constitute a pioneering initiative in the field of free and open source software. However, the five years following the publication were years of decline. Market shares rose again after 2004 with the lighter Firefox browser. We propose a case study covering the period from 1998 to 2012. We identify the factors that explain the evolution of the Mozilla project. Our study deepens different success factors identified in the literature. It is based on the author's experience as well as the abundant literature dedicated to the Netscape company and the Mozilla project. It particularly highlights the importance of the complexity of the source code, its modularity, the responsibility assignment and the existence of an organisational sponsorship.
Introduction
After the launch of the GNU project in 1984 and the emergence of Linux in 1991, the Mozilla project was probably one of the most important events in the field of free and open source software at the end of the twentieth century [START_REF] Viseur | Associer commerce et logiciel libre : étude du couple Netscape / Mozilla, 16ème conférence de[END_REF]. It was a pioneering initiative in the opening up of proprietary software, while commercial involvement in the development of free and open source software has accelerated over the last ten years [START_REF] Fitzgerald | The transformation of open source software[END_REF]. Netscape was the initiator. The company had over 50% of the browser market, facing up to Microsoft. The challenge was huge. The release had to allow Netscape to maintain its pace of innovation and to sustain its business against one of the main actors in the software market.
The Netscape decision was inspired by Eric Raymond's essay entitled "The Cathedral and the Bazaar". The author presented his experience with the Fetchmail free software and proposed a set of best practices. "Release early. Release often." is still remembered [START_REF] Raymond | The Cathedral & the Bazaar (Musings on Linux and Open Source by an Accidental Revolutionary)[END_REF]. However, the Mozilla project was not all smooth sailing. Netscape saw its market share decline over the following years. The development team was finally laid off in 2003. The project's success resurfaced in 2004 with the lighter Firefox browser. In December 2012, Firefox had a market share of almost 30% in Europe (statcounter.com).
This adventure was studded with victories as well as failures. The implementation of the project required solutions to organisational, legal, technical and economic issues.
Our study is based on the author's knowledge but also on the abundant scientific literature dedicated to the Netscape company and the Mozilla project. Our goal is to identify the success factors for the Mozilla project.
We organised this study into four sections. The first section presents the historical context. The Mozilla project comes from the Netscape browser that dominated the Web browser market between 1993 and 1997. The second section presents success factors that we will use in our case study. The third section develops the case study. The fourth section discusses the results.
Historical Context
The authorship of the World Wide Web is attributed to CERN (info.cern.ch). Tim Berners-Lee and Robert Cailliau implemented the technical foundations for an open sharing of information on the Internet [START_REF] Berners-Lee | The World-Wide Web[END_REF][START_REF] Ceruzzi | Aux origines américaines de l'Internet : projets militaires, intérêts commerciaux, désirs de communauté[END_REF][START_REF] Grosskurth | Architecture and evolution of the modern web browser[END_REF]. CERN put the system into operation at Christmas 1990. It also published a library ("libwww") and a line mode browser. The merit of making the W3 accessible to a wider audience is often attributed to NCSA (www.ncsa.illinois.edu). In 1993, Marc Andreessen presented the Mosaic graphical Web browser. It offered a significantly improved user experience and was massively distributed over the network. Part of the Mosaic development team joined Netscape, of which Marc Andreessen was a co-founder. Netscape gave birth to popular Web browsers. In 1994 Netscape developed its software (clients and servers) for the World Wide Web. Netscape dominated the market until 1997 [START_REF] Cusumano | Competing on Internet time[END_REF].
Microsoft lagged behind the market. However, the commercial stakes went beyond the Web browser market [START_REF] Halloul | Le réseau stratégique et la concurrence illustrés par le cas M/N (Microsoft versus Netscape)[END_REF]. Indeed, Microsoft faced the risk that Netscape Navigator and Java would lead to a platform able to replace the Windows operating system [START_REF] Sebenius | Negociating Lessons from the Browser Wars[END_REF]. Microsoft succeeded in offering a product similar to Netscape's in about six months. Microsoft relied on a strategy of vertical integration. Internet Explorer was tightly integrated with the Microsoft Windows operating system and came free with it [START_REF] Wang | The revival of Mozilla in the browser war against Internet Explorer[END_REF]. Microsoft Windows dominated the operating systems for personal computers. Its users were fairly indifferent to the browser itself and tended to use the software installed by default. Some aspects of Microsoft's strategy could be criticised. After the failure of a market sharing agreement, Microsoft used its dominant position within its network to prevent Netscape from selling its browser at a profit [START_REF] Halloul | Le réseau stratégique et la concurrence illustrés par le cas M/N (Microsoft versus Netscape)[END_REF]. Microsoft imposed Internet Explorer through agreements (for example with computer manufacturers). It organised a policy of systematic incompatibility (format war) [START_REF] Liotard | Les nouvelles facettes de la propriété intellectuelle : stratégies, attaques et menaces[END_REF]. In 1998 Netscape joined legal actions against Microsoft for anti-competitive practices [START_REF] Descombes | Saga Netscape/Microsoft: histoire d'un renversement[END_REF][START_REF] Halloul | Le réseau stratégique et la concurrence illustrés par le cas M/N (Microsoft versus Netscape)[END_REF] (Krishnamurthy, 2009; Wang et al., 2005). The Netscape company announced the release of the source code of Communicator 5 (including the browser) in January 1998. The source code was made available in May 1998. The stakes were high: to allow Netscape to maintain its pace of innovation and sustain its business against a competitor playing a key role in the software industry.
The fifth version of Netscape never reached a commercial release. The sixth version was released in late 2000. Its instability, its incompatibility with many sites and its heaviness disappointed fans and users, resulting in a significant loss in market share. The number of downloaded copies continued to rise steadily. However, in terms of market share, after a peak of 90% in mid-1996, Netscape's share decreased gradually in favour of Microsoft, at a rate of about 1% per month [START_REF] Cusumano | Competing on Internet time[END_REF][START_REF] Descombes | Saga Netscape/Microsoft: histoire d'un renversement[END_REF]. Our research covers the period from 1998 (release of the Communicator 5 source code and launch of the Mozilla project) to 2012.
Success Factors for Open Source Projects
Several investigations have been devoted to the topic of success factors for open source projects. Comino et al. [START_REF] Comino | From planning to mature: On the success of open source projects[END_REF] are concerned with the definition of success. They identified different types of measures: the use of the software, the size of the community (or the level of activity, measured for example by the output per contributor) and the technical achievement of the project. Their study leads to three conclusions: first, projects under restrictive licenses, such as the GPL, have a lower probability of reaching a stable or mature release; second, projects dedicated to advanced users have a higher probability of advancing in development status; and third, the size of the developer community increases the chances of progress up to a certain size, after which this effect decreases. This might be explained by the appearance of coordination problems in large projects. Fershtman and Gandal (2007) were specifically interested in the output per contributor (lines of code). They concluded that this output is lower if the license is restrictive, higher if the software is dedicated to developers (rather than end users) and lower if the software is written specifically for Linux. The restrictiveness is defined according to three levels (rather than two), with a distinction between permissive licenses (e.g. BSD), licenses with weak reciprocity (e.g. LGPL) and licenses with strong reciprocity (e.g. GPL).
Stewart, Ammeter and Maruping (2005) focused on the type of license and the existence of an organisational sponsorship. They showed that a sponsored project becomes more popular over time, and that the popularity of the project (i.e. its success among users) has a positive effect on its vitality (i.e. its success among developers). However, the impact of the type of license is not resolved by this study. These results must be seen alongside those of Roberts et al. [START_REF] Roberts | Understanding the Motivations, Participation, and Performance of Open Source Software Developers: A Longitudinal Study of the Apache Projects[END_REF] regarding the Apache project. They showed that being paid to contribute is associated with a higher level of contribution to the source code.
Midha and Palvia (2012) used the "cue utilisation theory" (CUT) to determine the factors influencing the success of open source projects. The factors are categorised as extrinsic and intrinsic. The extrinsic factors are the type of license, the number of available translations, the size of the user base (since the software exists), the size of the developer base and the responsibility assignment. The intrinsic factors are the complexity and the modularity of the source code. The authors also distinguished two kinds of success: technical success (i.e. the activity of developers) and current market success (i.e. the popularity of the project).
They reached the following conclusions: -The technical success does not influence the market success.
-The negative impact of restrictive licenses depends on the status of the project. If we consider the criterion of market success, the negative impact of restrictive licenses only takes place for the first version of the software. It tends to disappear with time. If we consider the criterion of technical success, the negative impact of restrictive licenses does not occur in the early stages of the project but in the following stages. The authors explained this finding by the fact that the license is one of the only pieces of information available to users when the software appears and that the first developers see the restrictive licenses as a protection against the risks of ownership.
-The user base positively influences the market success and the technical success (except in the early stages of the project).
-The number of translations positively influences the success on the market.
-The technical success is positively impacted by the delegation of responsibilities, negatively influenced by the complexity and positively influenced by the modularity.
For our case study, we retain the following factors: the complexity, the modularity, the type of license, the number of available translations, the size of the user base, the size of the developer base, the responsibility assignment and the existence of an organisational sponsorship.
These factors are included in the CUT theory. We added the organisational sponsorship (as an extrinsic factor), whose positive impact is underscored by several studies.
Case Study: the Mozilla Project
The Complexity
Jamie Zawinski [START_REF] Zawinski | Resignation and postmortem[END_REF] was a veteran of Netscape and one of the initiators of the Mozilla project. He pointed to the fact that the released code was too complicated and difficult to change. As a result, few people contributed and a complete rewrite of the browser was necessary, which delayed the project by six to ten months. This rewrite gave birth to a new rendering engine named Gecko/Raptor. The need for rewriting was also justified by the objective of providing a rendering engine complying with Web standards and by a design erosion resulting from the iterative development practices implemented by Netscape [START_REF] Cusumano | Extreme programming compared with Microsoft-style iterative development[END_REF][START_REF] Reis | An Overview of the Software Engineering Process and Tools in the Mozilla Project[END_REF] (van Gurp and Bosch, 2002).
The Modularity
The release of Netscape and the launch of the Mozilla project were accompanied by the provision of valuable development tools. These tools are used to develop the software itself (XUL, Gecko, etc.) and for the organisation of collaborative work (Bugzilla, Tinderbox, etc.) [START_REF] Reis | An Overview of the Software Engineering Process and Tools in the Mozilla Project[END_REF].
After the rewrite of the released software, the Mozilla project gradually moved towards a stable release. Mozilla 1.4 can be considered the first fully usable release. The Mozilla project has a modular structure. Mozilla technologies can be used by other software and benefit from additional external contributions. The HTML rendering engine, called Gecko, is used as a basis for other browsers like Camino or Galeon. In response to heavy criticism, Mozilla was subsequently split into lighter applications. Mozilla Firebird (now Mozilla Firefox) and the Mozilla Thunderbird mail client emerged in 2004.
The extensions (addons.mozilla.org) are considered an important competitive advantage and mobilise a large community of developers. They can meet very specific needs and attract new users [START_REF] Krishnamurthy | CASE: Mozilla vs. Godzilla -The Launch of the Mozilla Firefox Browser[END_REF]. In July 2012, more than 85% of Firefox users used at least one extension, with an average of five extensions per user. These extensions cover features as diverse as ad blocking, video downloading, browser interface customisation and Web page debugging. The Mozilla Foundation also claims more than 25,000 extension developers and three billion downloads worldwide.
The Type of License
The creation of a new software license, called the Mozilla Public License (MPL), was part of the project. It was written to regulate the coexistence between the community and the companies. More than 75 third-party modules were used in the browser. Lawyers regard the MPL as original [START_REF] Montero | Les logiciels libres face au droit[END_REF]. It is a license with weak copyleft. Its originality notably lies in the distinction between modified works and larger works. While the former must remain under the MPL, the latter can use a third-party license for the added parts. Sun Microsystems was inspired by this license for the Common Development and Distribution License (CDDL). The Open Source Initiative approved MPL 1.1 as well as the newer MPL 2.0.
The choice of a software license to cover the new project was a project within the project [START_REF] Hamerly | Freeing the Source: The Story of Mozilla[END_REF]. The very permissive BSD license did not protect the developers' contributions enough in case of use without compensation by a third party. The GPL was more restrictive (it imposes the conservation and propagation of the license) but was considered untenable for commercial developers. A new license, called the Netscape Public License (NPL), was submitted to developers in March 1998. It proved very unpopular and was quickly deemed unacceptable by the community because Netscape had been granted special rights.
The NPL then became the Mozilla Public License (MPL). It was identical to the NPL but dropped the clause giving Netscape the possibility to transfer code licensed under the NPL into a product not covered by this license. The source code of the browser subsequently evolved from the original NPL coverage to the MPL coverage [START_REF] Penta | An Exploratory Study of the Evolution of Software Licensing[END_REF]. The LGPL and GPL later appeared in the development, in order to address problems of incompatibility with third-party projects [START_REF] De Laat | Copyright or copyleft ? An analysis of property regimes for software development[END_REF].
The Size of the User Base
The Mozilla project had its roots in the Netscape products and could benefit from the sympathy of a portion of the former user base.
Firefox 1.0 was downloaded two million times in the first two days [START_REF] Baker | The Mozilla Project: Past and Future[END_REF] and surpassed one billion downloads in late July 2009 [START_REF] Shankland | Firefox: 1 billion downloads only part of the story[END_REF]. The Foundation then estimated the number of users at 300 million, compared to 175 million a year earlier.
The Size of the Developer Base
Recruiting new developers to start the project was not easy. According to Jamie Zawinski [START_REF] Zawinski | Resignation and postmortem[END_REF], the scope of the project entailed a significant delay before newcomers could contribute. As a result, a small change could take several hours. The complexity of the code further aggravated this problem. The efforts of the leading contributors were not rewarded. The contributors expected compensation, such as having the ability to change the commonly used browser version and thus see the effects of their own changes. However, the released sources were not those of the commercially distributed version of the browser (the 4.x branch). Netscape released code which lacked many features and was full of bugs. The development capabilities were shared between two versions: the 4.x branch (which underwent several changes, from 4.5 to 4.78) and the Mozilla branch. The activity on the proprietary branch did not benefit from the open source branch. The open source development was slowed down by this dispersion of resources. The same applied to the commercial branch.
Rewriting the project allowed it to reach cruising speed. However, changes were then made following the withdrawal of AOL Time Warner, although these were not necessarily visible to an outside observer. The transition was accompanied by a significant decrease in developer activity and a significant change in the composition of the group of most active developers [START_REF] Gonzalez-Barahona | Impact of the Creation of the Mozilla Foundation in the Activity of Developers[END_REF].
The adoption of Firefox slowed after the release of Google Chrome / Chromium. This was accompanied by tensions within the community, and some departures occurred [START_REF] Champeau | Mozilla interroge sur son avenir et ménage sa communauté[END_REF]. The discord focused on the relationship between the contributors in the community and the official structures of the Mozilla project. The fast release cycle that Firefox inaugurated in 2011 in the wake of Chrome (a cycle particularly interesting from a marketing standpoint) was also the subject of heated debate. The results of this new policy did not seem negative, with largely preserved stability and faster correction of errors [START_REF] Khomh | Do faster releases improve software quality? An empirical case study of Mozilla Firefox[END_REF].
The Responsibility Assignment
The Mozilla project maintains a centralised decision-making structure. The whole project is managed by a small group of developers [START_REF] Krishnamurthy | About Closed-door Free/Libre/Open Source (FLOSS) Projects: Lessons from the Mozilla Firefox Developer Recruitment Approach[END_REF][START_REF] Mockus | Two case studies of open source software development: Apache and Mozilla[END_REF][START_REF] Nurolahzade | The role of patch review in software evolution: an analysis of the Mozilla Firefox[END_REF]. This small group delegates tasks and roles. In particular, it appoints the module owners, who are responsible for selecting and implementing changes.
Users can share their bug reports, vote to prioritise them and discuss their resolution. In practice, the development of Mozilla is bug-driven, in other words led by the bugs [START_REF] Dalle | Voting for Bugs in Firefox[END_REF][START_REF] Reis | An Overview of the Software Engineering Process and Tools in the Mozilla Project[END_REF]. The term "bug" covers defects, enhancements and changes in functionality. The bug reports are encoded, numbered and sorted. If they are confirmed, their correction is planned according to severity and priority. A manager is appointed for each bug. The proposed changes to the source code are finally attached to the bug in the form of patches. These operations are managed through Bugzilla, a Web-based bug tracking system developed in Perl for the needs of the Mozilla project.
The design process is open and User Experience practitioners work in the bug tracker. However, a User Experience director leads the design and "some decisions are made behind the scenes" [START_REF] Bach | FLOSS UX Design: An Analysis of User Experience Design in Firefox and OpenOffice[END_REF]. The community can be conservative with design explorations.
The hierarchical organisation of the Mozilla project is the source of frequent disputes but ensures the coherence of development. This structure is perhaps a relic of the past strong involvement of a commercial organisation in the development process (as well as of the below-expectation contributions after the launch of the project). The history of the project is also that of a transition from a commercial to an open source development model [START_REF] Baker | The Mozilla Project: Past and Future[END_REF]. However, the Mozilla project is distinguished by the large number of contributions made by members at the periphery of the community, which tends to show the relevance of this mode of organisation for a project of this scale [START_REF] Wang | Behind Linus's law: A preliminary analysis of open source software peer review practices in Mozilla and Python[END_REF]. The votes cast at the periphery of the community, however, appear to be taken into account only to a limited extent [START_REF] Dalle | Channeling Firefox Developers: Mom and Dad Aren't Happy Yet[END_REF].
The gradual shift in the market towards the WebKit open source rendering engine raises questions about the ability of the Mozilla project to accommodate large external contributors.
The Organisational Sponsorship
From May 1998 to July 2003, the Mozilla project was supported by Netscape, owned since March 1999 by AOL Time Warner. Since July 2003 the Mozilla Foundation has overseen the development of the Mozilla project; it received a $2 million donation from AOL Time Warner [START_REF] Baker | The Mozilla Project: Past and Future[END_REF]. The Mozilla Foundation derives its income from agreements with commercial search engines. Google accounts for 95% of its revenues. The Mozilla Foundation (www.mozilla.org) announced that it earned $52.9 million in 2005 with its browser. In comparison, the revenues amounted to $2.4 million in 2003 and $5.8 million in 2004. The Foundation employs 70 people on a permanent basis. It is dependent on Google and owes its financial survival to this agreement. Mozilla Foundation and Mozilla Corporation (its trading subsidiary) expenditures rose to $8.2 million in 2005. Software development accounted for $6,075,474, compared to $768,701 and $1,328,353 for marketing expenses and general administrative expenses respectively.
Market Success
Promoting the Project
The Mozilla project was well promoted in 2003. Awards were given to the project by publications such as Linux Journal, which named Mozilla one of the 10 best Linux products. The marketing efforts of the Mozilla Foundation amplified this comeback.
From a marketing perspective, the Mozilla Foundation became more focused on end users [START_REF] Baker | The Mozilla Project: Past and Future[END_REF][START_REF] Viseur | Associer commerce et logiciel libre : étude du couple Netscape / Mozilla, 16ème conférence de[END_REF]. It changed the former site, which contained some major flaws, especially from a "commercial" point of view: it was presented as a site for developers, with too much complexity for the normal user. There was too much information, and it was also poorly structured. For example, multiple versions of Mozilla were proposed in footnotes. A beta version could thus mask a stable version, while the end user may not know what beta involves (the second trial period of a software product before its official release). Improvements were made with the release of version 1.4 of the browser (mid-2003). The latest stable version was clearly shown (upper left corner of the page). The installation files were classified by operating system (Linux, Mac and Windows). The information sought by the user was directly accessible. Mozilla also introduced the concept of "product". This is familiar to end users, considers the customer and involves a minimum of strategic thinking. A line of products was highlighted. The end user could choose between the classic Mozilla browser and a new lighter browser called Firebird (now Firefox). This development of a product line was associated with the development of a brand strategy (Mozilla Application Suite, Thunderbird, Firebird, etc.). The name Firefox was registered as a trademark. References (such as awards from magazines and quotes from recognised people) were clearly identified in order to give credibility to the products and to reassure new users attracted to the Web site. Contributors were not neglected. Information intended for testers and developers appeared in a different colour at the bottom of the page. The supported operating systems and the available tools were also highlighted. New initiatives, such as telephone support, sales of CDs and gifts, were pointed out across the site.
The Mozilla Foundation subsequently continued its marketing efforts. It launched the Spread Firefox initiative in September 2004 [START_REF] Baker | The Mozilla Project: Past and Future[END_REF]. After testing traditional advertising with the purchase of an advertising page (funded by the community) in the American press, the Mozilla Foundation tested viral marketing in 2005. The FunnyFox site (www.funnyfox.fr) hosted three humorous ads made by the French communication agency Pozz (www.pozz-agency.com): The Office, The Mobile and The Notebook. The goal was for these three Flash movies to be widely distributed at the initiative of users and to spread all over the Internet. The Foundation then went further with co-creation. This marketing technique, which involves customers and users in the design, development, promotion or support of new products and services, was applied to the creation of advertisements. Mozilla launched a contest called Firefox Flicks (www.firefoxflicks.com) in late 2005. The goal was to create a marketing campaign tapping into the creative energy of amateur and professional filmmakers in order to present Firefox to a wider audience. The community was also mobilised. Krishnamurthy [START_REF] Krishnamurthy | CASE: Mozilla vs. Godzilla -The Launch of the Mozilla Firefox Browser[END_REF] identifies three missions supported by the community: the development of the brand, the creation of traffic and the conversion of new users (providing banners, writing positive comments, voting on websites dedicated to software, etc.).
Organisation of the Competitors
The new browser war may have reached a turning point in September 2008 with the launch of Google Chrome (www.google.com/chrome). Google was motivated by improving the user experience with rich Internet applications [START_REF] Corley | Why are People Using Google's Chrome Browser ?[END_REF], i.e. applications running in a Web browser with characteristics similar to applications running on the workstation. Google offers this type of product (Youtube, GMail, etc.). Google faced stability and speed problems with existing browsers during testing and therefore decided to release its own browser, built on the Webkit open source rendering engine, a project initiated by Apple [START_REF] Grosskurth | Architecture and evolution of the modern web browser[END_REF][START_REF] Viseur | Forks impacts and motivations in free and open source projects[END_REF]. Apple was joined on this project by Nokia, Palm and RIM (in practice, Webkit-based browsers dominate the mobile platforms). Chromium is the free software release of Chrome (www.chromium.org). The Mozilla ecosystem is particularly fuelled by companies wishing to have a functional browser on platforms other than Microsoft Windows. It is now challenged by a new ecosystem mainly composed of companies that sometimes compete head-on (Google and Apple, for example).
Google Chrome has proven to be fast and stable (process isolation). New versions of Chrome are released frequently and its installed base is quickly renewed through an automatic update system [START_REF] Baysal | A Tale of Two Browsers[END_REF]. The new product was vigorously promoted by Google, for example with video spots. It quickly became the third most widely used browser, behind Internet Explorer and Firefox. In practice, the evolution of market shares shows a stabilisation of the Firefox browser, a decrease for Internet Explorer and a growth of Google Chrome (gs.statcounter.com). In May 2012, Chrome had a market share of 32.43%, compared to 32.12% and 25.55% for Internet Explorer and Firefox respectively. The competition from Chrome therefore tends to erode the market share of the common competitor, Internet Explorer.
This raises new questions about the future of the agreement between Google and the Mozilla Foundation, and about the financial sustainability of the Foundation. In the long term, the motivations behind the release of Chrome also raise questions about the evolution of the Web desired by major companies. The border between the Internet and the desktop, as between Web pages and applications, tends to blur gradually.
The competition from Google Chrome / Chromium also goes hand in hand with that of the Webkit rendering engine, especially on mobile systems. Most mobile browsers use Webkit, and Mozilla is marginalised on these platforms as they become proportionately more important. The use of tablets and smartphones is already becoming dominant, for example, for consulting media Web sites during certain periods of the day (comScore). Mozilla is trying to reposition itself in this market with the Boot to Gecko project, an open operating system for mobile devices.
Discussion and Perspectives
The influence of intrinsic factors is well illustrated in this case study. The excessive complexity of the released code hindered the start-up of the project and kept potential contributors away, which resulted in a small number of contributions and a complete rewrite of the source code. This suggests the need for thorough preparation before publishing source code. The rewrite was accompanied by a significant modularisation of the source code, and the modularity of the browser was further enhanced with Firefox, which proposed a very popular system of add-ons.
Two external factors stand out: the responsibility assignment and the organisational sponsorship.
The Mozilla project has a centralised decision-making structure and differs from other projects by a module ownership mechanism. It is sometimes a source of tension but this organisation ensures the coherence of the project. The project continues to benefit from many contributions, such as bug reports or corrections ("patches").
The Mozilla Foundation ensures coherence in the project's development and carries out the promotional activity needed to popularise its software with a wider audience. The dissolution of the Netscape teams also highlighted the impact of organisational support on the development process of a large-scale and complex project.
In the case of Mozilla, no firm conclusion can be drawn regarding the impact of the type of license on the success of the software. However, the controversy that followed the publication of the NPL license illustrated the importance of the license in the eyes of the community and the attention paid to the risks of appropriation.
Note that the main competitor (Google Chrome / Chromium) is based on a rendering engine that is also covered by a license with weak copyleft.
The technical success depends on the point of view. The Mozilla Firefox project helped to impose open standards for Web site development, so one objective of the project was achieved. The market success is undoubtedly linked to technical successes such as securing the application or offering modularity ("add-ons"). The success of the Google Chrome / Chromium competitor is also explained by its technical quality and its better behaviour with websites using rich-client technologies (e.g. Flash and Ajax). On the downside, the technologies on which the Mozilla browsers are based (e.g. Gecko), or those derived from them (e.g. XULRunner), had difficulty being exported to other projects. It is interesting that most software vendors using an open source rendering engine based their products on Webkit and not on Gecko. Regarding further research on this point, it might be instructive to identify the factors that encourage or discourage the reuse of open source components. The highly centralised project organisation may explain the difficulty of collaborating with other key partners.
The market success came back in 2003 and especially in 2004 with the release of Firefox, which was lighter and modular. It benefited not only from its technical qualities but also from numerous marketing campaigns. Worth emphasising is the originality of implementing marketing techniques without a classic commercial structure.
The project now faces a major challenge due to the evolution of the market and the arrival of new competitors. Under pressure from companies, the Web is gradually evolving from a world of documents, developed using standards, to a world of heavier and more complex applications (Taivalsaari and [START_REF] Mikkonen | Apps vs. Open Web: The Battle of the Decade[END_REF]). The release of Google Chrome fits into this context. Another development is the rise of the mobile Web. While the situation on the workstation is currently competitive, with balanced market shares between Internet Explorer, Firefox and Chrome, the mobile market is evolving towards a monopoly in favour of the Webkit rendering engine that powers most mobile browsers [START_REF] Hernandez | War of Mobile Browsers[END_REF]. Mobile operating systems ship with a preinstalled browser, and given the growing importance of the mobile Web, the success of Boot to Gecko will determine the future of the Foundation.
Fig. 1. Netscape market share (first browser war)

From 18% in early 2001, the Netscape market share decreased to 12% in mid-2001. Version 7.1 was released in June 2003 and was based on Mozilla 1.4; it corrected the most troublesome defects, but the harm was done and Microsoft won the first round of the browser war. Netscape had been acquired by AOL Time Warner in March 1999, and in July 2003 AOL Time Warner decided to fire the last Netscape developers. The development of the Netscape browser was stopped (however, several Netscape-branded products were sporadically released after 2003). The Mozilla project was then supported by the Mozilla Foundation, established on 15th July 2003. The Firefox browser was released in 2004. Its market share increased regularly, mainly to the detriment of Microsoft Internet Explorer. This growth came to a halt in 2008 with the launch of Google Chrome / Chromium, based on the Webkit open source rendering (or layout) engine. The Mozilla Foundation is absent from the mobile Web market and has tried to regain the initiative since 2011 with the launch of the Boot to Gecko project (renamed Firefox OS).
Fig. 2. Firefox market share
Fig. 3. Google Chrome / Chromium market share
Table 1. Important milestones of the Mozilla project

Year Event
1993 Release of NCSA Mosaic.
1994 Creation of Netscape Communications Corporation. Release of Netscape Navigator.
1995 Release of Microsoft Internet Explorer.
1998 Launch of Mozilla project.
1999 Acquisition of Netscape by AOL Time Warner.
2000 Release of Netscape 6 based on Mozilla source code.
2003 Death of Netscape. Creation of Mozilla Foundation.
2004 Release of Firefox Web browser.
2008 Release of Google Chrome / Chromium Web browser based on Webkit.
2011 Launch of Boot to Gecko project.
Table 2. Organisational sponsorship for the Mozilla project

Period Sponsor
May 1998 -March 1999 Netscape.
March 1999 -July 2003 Netscape / AOL Time Warner.
July 2003 -present Mozilla Foundation.
Support also came from Unix vendors such as IBM, HP and Sun, who pooled their research and common development capabilities in the Mozilla project and were able to focus on their own needs [START_REF] West | Patterns of Open Innovation in Open Source Software[END_REF].
Language Translations
In 2012, Firefox was available in over 70 languages. The Mozilla project has made sustained efforts to facilitate translation (see in particular Mozilla Translator).
Technical Success
The release of the Netscape code and the launch of the Mozilla project were accompanied by the provision of valuable development tools. These tools are used both to develop the software itself (XUL, Gecko, etc.) and to organise the collaborative work (Bugzilla, Tinderbox, etc.) [START_REF] Baker | The Mozilla Project: Past and Future[END_REF][START_REF] Reis | An Overview of the Software Engineering Process and Tools in the Mozilla Project[END_REF].
After the rewrite of the code, the Mozilla project has a modular structure. Moreover, Mozilla technologies can be used by other software and thus benefit from additional external contributions. The HTML rendering engine, called Gecko, was used as a basis for other browsers like Camino or Galeon.
In hindsight, it appears that the development tools created for the Mozilla project sometimes struggled to spread beyond the Foundation's own projects. This is especially true for the XULRunner application framework launched in 2007 [START_REF] Stearn | XULRunner: A New Approach for Developing Rich Internet Applications[END_REF]. XULRunner was meant to provide a runtime environment, independent of the operating system, for applications based on XUL, an XML-based description language for graphical user interfaces created as part of the Mozilla project. The lack of a dedicated development environment for these tools, gaps in the documentation and the gradual rise of Ajax frameworks help to explain this mixed result.
The market success came back in 2004, especially with Firefox. This Firefox breakthrough restarted the browser war that Microsoft won a few years earlier. Several technical elements contributed to the success of Firefox: the protection against popups, the software security (Internet Explorer is regularly criticised in this regard, and viruses and security issues caused inconvenience in the summer of 2004), the tab system, the compliance with Web standards and the ability to create extensions [START_REF] Baker | The Mozilla Project: Past and Future[END_REF][START_REF] Krishnamurthy | CASE: Mozilla vs. Godzilla -The Launch of the Mozilla Firefox Browser[END_REF]. | 40,026 | [
"184276"
] | [
"160918",
"457493"
] |
01467583 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467583/file/978-3-642-38928-3_6_Chapter.pdf | Anna Hannemann
email: [email protected]
Ralf Klamma
email: [email protected]
Community Dynamics in Open Source Software Projects: Aging and Social Reshaping
An undeniable factor in the success of an open source software (OSS) project is a vital community built around it. An OSS community not only needs to be established, it also needs to persist. This is not guaranteed, considering the voluntary nature of participation in OSS. Dynamic analysis of OSS community evolution can be used to extract indicators that rate the current stability of a community and predict its future development. Despite the great number of studies on mining project communication and development repositories, the evolution of OSS communities is rarely addressed. This paper presents an approach to analyze OSS community history. We combine adapted demography measures to study community aging with social analysis to investigate the dynamics of community structures. The approach is applied to the communication and development history of three bioinformatics OSS communities over eleven years. First, a survival rate pattern is identified in all three projects. This finding allows us to define the minimal number of newcomers required for further positive community growth. Second, dynamic social analysis shows that node betweenness in combination with network diameter can be used as an indicator of significant changes in the community core and of the quality of community recovery after these changes.
Introduction
There are about 300,000 OSS projects registered on sourceforge.net, but only a few of them succeed [START_REF] Madey | The open source software development phenomenon: An analysis based on social network theory[END_REF]. Most Open Source Software (OSS) projects are started by a very small group of people bound by a goal they want to reach with the project. Later on, successfully developing projects gain a community of peripheral developers, bug fixers, bug reporters and peripheral users. This project community needs to achieve a critical mass of people for the project to break through. The benefits of an OSS community are manifold, e.g. community members bring new ideas to the project, represent a kind of social reward for the developers' effort and increase the "market share" of the project by spreading the word [START_REF] Raymond | The Cathedral and the Bazaar[END_REF], [START_REF] Eric | Open source software and the "privatecollective" innovation model: Issues for organization science[END_REF], [START_REF] Ye | The co-evolution of systems and communities in free and open source software development[END_REF]. Considering the voluntary nature of OSS development, the sustainability of an OSS project depends on the sustainability of its community. The analysis of OSS community evolution can be used to extract indicators that rate the current stability of a community and predict its possible future development.
The study of the OSS movement in general, and of OSS development principles in particular, has evolved into a separate research field on community-intensive socio-technical projects [START_REF] Scacchi | The future research in free/open source software development[END_REF]. Plenty of those studies address the evolution of OSS systems [START_REF] Hongyu | A systematic review of studies of open source software evolution[END_REF]. However, dynamic analysis of OSS communities -the social component of OSS projects -is rare. The existing research papers on OSS community dynamics either present a set of static measurements for a certain cut-off of a project's history [START_REF] Georg Von Krogh | Community, joining, and specialization in open source software innovation: a case study[END_REF], [START_REF] Jensen | Joining free/open source software communities: An analysis of newbies' first interactions on project mailing lists[END_REF] or are restricted to the developer sub-communities only [START_REF] Robles | Contributor turnover in libre software projects[END_REF], [START_REF] Bird | Open borders? Immigration in open source projects[END_REF]. In this paper, we combine demographic analysis of an OSS project community as an aging population with dynamic analysis of an OSS community as a social structure. We apply our approach to the whole communities of three bioinformatics OSS projects. The selected projects, BioJava, Biopython and Bioperl, are very similar in their goals, scientific communities and infrastructures (all three are supported by "The Open Bioinformatics Foundation"). We thus hope to avoid the bias that would arise from comparing communities that differ too much in terms of policies, culture, lifetime, domain or organization.
The rest of the paper is organized as follows. Section 2 presents an overview of the existing OSS community studies: statistical studies of community dynamics are described in Section 2.1 followed by an overview of OSS social evolution studies in Section 2.2. In Section 3 we present our approach to analyze OSS community evolution and the data used for validation. The results are described in Section 4. Section 5 discusses the results and concludes the paper. An outlook is given in Section 6.
Related Work
A large number of studies have been carried out on publicly available communication and development repositories of OSS projects during the last decade [START_REF] Scacchi | The future research in free/open source software development[END_REF]. Many of those studies address the evolution of OSS systems. In their systematic literature review, Breivold et al. [START_REF] Hongyu | A systematic review of studies of open source software evolution[END_REF] identify four main research topics on OSS system evolution: software trends and patterns, evolution process support, evolvability characteristics addressed in OSS evolution, and examining OSS at the software architecture level. The researchers provide an overview of metrics that are used to analyze OSS evolution over time: software growth metrics, system growth metrics, etc. address only the technical aspects of OSS projects. However, the success of an OSS project depends not only on the technical quality of the developed system, but also on the social state of its community [START_REF] Ye | The co-evolution of systems and communities in free and open source software development[END_REF]. The attention of researchers has also been drawn to the analysis of OSS communities: motivation for voluntarism, participation and interaction patterns, social structure, etc. However, dynamic analysis of OSS communities is rare, and no metrics for measuring the social quality of an OSS project have been developed so far. In Section 2.1 we give an overview of studies that address the evolution of community composition, while Section 2.2 presents the existing investigations of community restructuring in social terms.
Population Evolution
In [START_REF] Ye | The co-evolution of systems and communities in free and open source software development[END_REF] Ye et al. present a conceptual framework of OSS evolution. An OSS community is defined as an example of a community of practice (CoP) with legitimate peripheral participation (LPP) [START_REF] Wenger | Community of Practice: Learning, Meaning, and Identity[END_REF]. According to the LPP concept, through continuous learning newcomers become experienced community members and thus move from the periphery to the community center. Ye et al. call this process "role transformation" and thereby extend the static onion model of OSS communities with a time dimension. Role transformation in open source leads to evolution of the community's social structure and composition, which in turn results in evolution of developer skills and organizational principles. The authors also define the term "second generation", which is reached when an OSS community core has evolved from a single project leader to a group of core members.
Von Krogh et al. in [START_REF] Georg Von Krogh | Community, joining, and specialization in open source software innovation: a case study[END_REF] study the early stage of community establishment in the Freenet project (year 2000). The researchers investigate which behavioral patterns (level of activity and specialization) increase the chances of being granted developer privileges (role transformation). However, this study is restricted to one OSS project in its early stage.
In contrast to [START_REF] Georg Von Krogh | Community, joining, and specialization in open source software innovation: a case study[END_REF], Jensen et al. in [START_REF] Jensen | Joining free/open source software communities: An analysis of newbies' first interactions on project mailing lists[END_REF] study joining behavior across four different OSS projects. The projects are analyzed not at their early stage, but after they were already widely acknowledged and supported by a bigger community. The authors estimate a "survival rate" of newcomers in the mailing lists: only 9.4% of those who entered the project within a three-month period (643 people) were still participating in the mailing lists after a six-month period. However, only 9 months of the projects' history are taken into consideration.
Robles et al. in [START_REF] Robles | Evolution of volunteer participation in libre software projects: Evidence from debian[END_REF] investigated the evolution of volunteer participation within the Debian project. Their finding that a package is very likely to be abandoned if its package leader leaves the project shows the importance of understanding and even predicting community restructuring.
In [START_REF] Robles | Contributor turnover in libre software projects[END_REF] Robles et al. use the term "generation" to describe projects in which the core developers change over time. The results show that the core remains stable only in rare cases (3 of 21 projects), and thus support the expected strong evolution of the leading group and the constant need to fill the emerging gaps. However, the study is restricted not only to developers, but to the core group among them (the most active 20% of committers).
To summarize, the studies described above consider OSS communities as a population: concepts like "generation", "survival rate" and "migration" are applied. Demographic methods and models therefore present one possible basis for a quantitative analysis of OSS community evolution.
Social Dynamics
Besides the quantitative analysis of an OSS community, its social state can be estimated. To this end, an OSS community is mapped to a graph: the nodes of the graph represent OSS project members and the edges between the nodes represent relationships (friendship, family relatedness, etc.) between them. Plenty of social network analysis measures can then be calculated on the OSS community graph. Similarly, a social status can be estimated for each project member.
In [START_REF] Bird | Open borders? Immigration in open source projects[END_REF] Bird et al. study the chances of migration from non-developer to full developer, among others as a function of social status. Their analysis of the Apache, Postgres and Python communities shows that the importance of the different aspects varies significantly from project to project. The evolution of newcomers is not considered.
The dynamics of social characteristics are further addressed in [START_REF] Howison | Social dynamics of free and open source team communications[END_REF] by Howison et al. Using bug-tracker data from 120 different projects hosted on sourceforge.net, social networks are constructed based on direct interaction on submitted bugs. In order to analyze the evolution of outdegree centralization, the data is sampled in 90-day overlapping windows. Strong variation in community social structure is detected across different projects and within one project over time. Participation behavior proves to follow a power-law distribution: most project members join the project only for a short period of time. However, the study considers only a relatively short period of project lifetime.
In [START_REF] Wiggins | Social dynamics of floss team communication across channels[END_REF] Wiggins et al. adapted the analysis methods from [START_REF] Howison | Social dynamics of free and open source team communications[END_REF] and applied them to investigate the centralization dynamics of Gaim and Fire. This study shows a significant evolution of communication centralization; for example, a project management activity can reshape the community into a highly centralized network structure.
To summarize, there is a growing interest in OSS community evolution. Monitoring the social state of a community can be used to detect important internal and external events and thus to support the sustainability of OSS communities. Both demographic and social network analysis methods and concepts have been applied to analyze OSS communities dynamically. However, most of the existing studies concentrate mainly on the migration from non-developers to developers; the whole community is rarely addressed, and often only a short slice of the project history is used for the analysis. To our knowledge, the only studies which combine statistical and social evolution of communities are [START_REF] Howison | Social dynamics of free and open source team communications[END_REF] and [START_REF] Wiggins | Social dynamics of floss team communication across channels[END_REF].
Methods
In this study we approach an OSS community as an aging population on the one hand and as a social structure on the other hand. We adapt methods of population projection and dynamic social network analysis and apply them to three bioinformatics OSS projects.
Data
In this study, we use the data from three well-established bioinformatics OSS projects. Bioinformatics is an interdisciplinary research field in which innovative computer science techniques and algorithms are applied to answer emerging research questions of computational biology. There is a branch of commercial bioinformatics applications; however, according to [START_REF] Mangalam | The bio* toolkits-a brief overview[END_REF], "most of them are not scientific for the level of data analysis required in bioinformatics research. It was partly the frustration with commercial suits that drove the foundation of the Bio* groups." All open source projects used in this study -BioPerl, BioJava and Biopython -belong to the Open Bioinformatics Foundation. The selected projects are very similar in the problems they address, the community they are intended for, and the policy and organizational issues they experience. In all cases, the infrastructure used for project management consists of a wiki page, mailing lists and a code repository.
The communication data from the project mailing lists and the development history from the project code repositories for a period of eleven years (2000-2010) were crawled, filtered from spam and stored in a local database [START_REF] Hannemann | Adaptive filter-framework for quality improvement of open source software analysis[END_REF]. Multiple aliases of the same individuals were semi-automatically detected and consolidated. We detected 5507 distinct users; 3259 of them had written more than one posting and had received at least one reply. The mailing list aliases were mapped to the developer aliases. Further insights into the project history were collected from the project wiki pages and from project participants via private emails.
Analysis Procedures
For our study, we monitored the population evolution over time in combination with the changes in the social structure of the OSS communities. The data was divided into equal one-year periods {01.01.x - 31.12.x, ∀x ∈ (2000, ..., 2010)}.
Population Ecology
In order to study the evolution of OSS communities, we defined the population characteristics in the following way: Year of Birth is a time point $t_0^{p_i}$ when a participant $p_i$ entered an OSS project.
In the context of this study it is the time point when a user has written his/her first posting in a project mailing list.
Age Group (x, x+1) at time t consists of all active project members who, at the given time point, have participated in the project for at least x and at most x+1 years. In the context of this study, a user is defined to be active in a project if he/she has written at least one posting in a mailing list of the project in the current year. In order to visualize the population age structure, population pyramids are applied. Population pyramids are an effective graphical way to visualize population development and to detect trends and outliers, which can point to environmental and historical events; they can also help to indicate the likelihood of the continuation of the population under study. The X-axis of a population pyramid represents age or age groups, while the number of people in each age group is plotted along the Y-axis.
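To make the age-group bookkeeping concrete, the following Python sketch (illustrative only; the function and input names are assumptions, and the study's own calculations were done in R) derives the age groups of a given year from the year of each member's first posting and the years in which they were active.

```python
from collections import Counter

def age_groups(first_post_year, active_years, year):
    """Count active members per age group (x, x+1) in a given year.

    first_post_year: dict mapping member id -> year of first posting ("year of birth")
    active_years:    dict mapping member id -> set of years with at least one posting
    Returns a Counter {x: number of members aged between x and x+1 project years}.
    """
    groups = Counter()
    for member, birth in first_post_year.items():
        if year in active_years.get(member, set()):  # member is active this year
            groups[year - birth] += 1                # completed project years
    return groups

# Example: population pyramid data for 2010
# pyramid = age_groups(first_post_year, active_years, 2010)
# for age in sorted(pyramid):
#     print(age, "#" * pyramid[age])
```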
Social Network Analysis (SNA) is applied to study the social characteristics of an OSS community modeled as a graph. Individual participants of a community are modeled as nodes of the graph and their relationships (friendship, family relatedness, etc.) are reflected by network ties. The BioJava, Biopython and BioPerl participants are mapped to the nodes V of the project networks. If there exists at least one thread to which both participants $v_i$ and $v_j \in V$ have submitted at least one posting, the link $(v_i, v_j)$ is added to the graph. The edges are binary: either there is a link or not. To analyze the project networks we applied the following SNA measures [8]:
- Shortest Path: $\sigma_{st}$ is the minimal length of a path between two nodes $s, t$
- Diameter: the length of the longest shortest path, $d = \max_{s,v \in V} \sigma_{sv}$
- Node Betweenness: the fraction of shortest paths between two nodes $s$ and $t$ that contain node $v_i$, $g(v_i) = \frac{\sigma_{st}(v_i)}{\sigma_{st}}$
- Largest Connected Component: identifies the maximal connected components of a graph
- Density: the ratio of the number of edges to the number of possible edges
- Transitivity (= Clustering Coefficient): measures the probability that the adjacent nodes of a node are connected
- Edge Betweenness Clustering: a method to detect densely interconnected node subsets (communities) within social networks, with sparse connections to the outside of the cluster
Dynamic network analysis (DNA) extends SNA with the time domain. To analyze the development of social characteristics of the OSS projects over time, we generated the project networks for each year.
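As an illustration of how such yearly measures could be derived, the sketch below uses the networkx Python library rather than R, which was used for the original calculations; the edge list for one year is an assumed input, and the edge-betweenness (Girvan-Newman) clustering mentioned above is available separately in networkx.algorithms.community.

```python
import networkx as nx
# from networkx.algorithms.community import girvan_newman  # edge-betweenness clustering

def yearly_sna_measures(links):
    """Compute the SNA measures listed above for one yearly project network.

    links: iterable of (participant_i, participant_j) pairs, one pair for each
           two members who posted to at least one common thread in that year.
    """
    g = nx.Graph()
    g.add_edges_from(links)                         # binary, undirected edges
    largest = max(nx.connected_components(g), key=len)
    core = g.subgraph(largest)                      # diameter/path length need a connected graph
    return {
        "diameter": nx.diameter(core),
        "avg_path_length": nx.average_shortest_path_length(core),
        "max_betweenness": max(nx.betweenness_centrality(g).values()),
        "largest_component": len(largest),
        "density": nx.density(g),
        "transitivity": nx.transitivity(g),
    }
```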
Results
The following sections present the results of the previously described dynamic analysis procedures applied to three bioinformatics OSS.
Demographic Forecast
For each project under study, the number of distinct users in each age group (x, x+1) per year was calculated. Starting from the year 2006, we estimated the survival rates for users of each age group. For example, for 2006 the following survival rates were calculated: (0,1) → (1,2), (1,2) → (2,3), ..., (3,4) → (4,5). Accordingly, for the year 2010 the survival rates of the oldest project members are represented in (7,8) → (8,9). On average, our investigations showed a pattern of survival rates for certain age groups in all three bioinformatics OSS projects (cf. Figure 1). The ratio $P_x$ of project participants aged x to x+1 at time t who are still active in the age group x+1 to x+2 at time t+1 follows certain rules:

- $P_0 = [(0,1) \rightarrow (1,2)] \approx 20\%$
- $P_1 = [(1,2) \rightarrow (2,3)] \approx 40\%$
- $P_m = [(x,x+1) \rightarrow (x+1,x+2)] \approx 90\%, \quad \forall x > 1$
The results showed that 20% of the people who were newcomers in year n remain active in year n+1. Of those who had already remained with a project for one year (the (1,2) age group) in year n, only about 40% remain in year n+1. The other age groups have a survival rate of about 90%. The population pyramids of BioJava, Biopython and BioPerl in the year 2010 in Figure 1 provide a visual representation of the identified pattern. To summarize, the distribution of survival rates in the investigated OSS projects follows a power law. Additionally, a phenomenon of "rebirth" can be observed in the OSS projects: some project participants leave the project for several years and after some period of time come back to the community. In the years of their absence, these users do not appear in our measurements. However, when they reactivate their participation, we still consider the date of their first posting as the date of their entrance into the project. These users are no longer newcomers, as they already have some experience with the project. Therefore, it can happen that the age group (x+1, x+2) contains more people than the age group (x, x+1) contained a year before. Thus, especially for older age groups, a survival rate higher than 100% is possible. This finding leads to the conclusion that if a user has actively participated in a project for more than two years, he/she will probably become "attached" to it on a long-term basis. In turn, the percentage of those who survive over two years is very low. Based on the identified pattern, the minimal number of newcomers required to support the same level of participation, and thus the continuation of the project population in the next year, can be estimated as follows:
$$|newcomer|_{t+1} \geq |(0,1)_t| \cdot 0.2 + |(1,2)_t| \cdot 0.4 + \dots + |(x,x+1)_t| \cdot 0.9 \quad (1)$$
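A direct transcription of this estimate (Eq. 1) into code might look as follows; the age-group sizes are assumed to be given as a dictionary, and the 20-40-90% survival rates are the empirically observed values from above.

```python
def min_newcomers_needed(group_sizes):
    """Estimate the minimal newcomer inflow for year t+1 according to Eq. (1).

    group_sizes: dict {x: number of active members in age group (x, x+1) at year t}
    """
    def survival_rate(x):
        # observed 20-40-90% survival pattern
        return 0.2 if x == 0 else 0.4 if x == 1 else 0.9

    return sum(size * survival_rate(x) for x, size in group_sizes.items())

# Example: 100 newcomers, 20 one-year members and 15 older members active in year t
# print(min_newcomers_needed({0: 100, 1: 20, 2: 10, 3: 5}))  # -> 41.5
```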
The history of newcomer numbers can be investigated in order to uncover the events which influenced the rate of user inflow into a project. Figure 1 illustrates the newcomer numbers in all three projects under study. The highest inflow of new users in all three projects can be observed during 2001-2004. This observation can be linked to the fact that in those years bioinformatics gained a lot of attention due to the announcement of the sequencing of the human genome on June 26, 2000. Also, the "Bioinformatics Open Source Conference" started in the year 2000, attracting the attention of scientists to open source software for computational biology. We can conclude that the newcomer rates depend, among others, on events within the project domain outside the project community.
Despite the rise in newcomer numbers in all three projects during the mentioned time period, the absolute numbers differ considerably between BioJava (over 100), Biopython (less than 100) and BioPerl (over 250). In turn, different newcomer numbers result in different absolute numbers of users who get involved in the project on a long-term basis, even if the percentage of survival is almost the same (cf. Figure 1). This can be linked to the different development stages of the three projects at the mentioned time period. While BioJava and Biopython were started around 1999, BioPerl had already been under development since 1995. Hence, in the early 2000s, BioPerl was the most established project in comparison to the other two and could attract more people, even though the topics addressed by all three are quite similar. Thus, we observe an interplay among similar OSS projects: a "the rich get richer" effect.
Social Evolution
For every year and for each project we generated a network of the currently active project participants. For each of these networks six SNA measures were calculated: diameter, average path length, maximal betweenness, size of the largest connected component, density and transitivity. Some remarkable outlier values in the diameter and maximal betweenness series were identified in the Biopython and BioJava projects (cf. Figure 2). In most cases, the diameter of a network was 6, which is consistent with the well-known small-world phenomenon of social networks [START_REF] Mark | The structure and function of complex networks[END_REF]. In 2005 in the Biopython network and in 2006 in the BioJava network, the diameter values reached 12 and 11 respectively. In these periods, the networks do not show their previous compactness. Also noteworthy are the low values of the maximal node betweenness in the same period in both projects. Figure 4 shows the social networks of the BioJava, Biopython and BioPerl communities in 2006, 2005 and 2004 respectively. The presented networks are clustered using the edge betweenness clustering provided by www.yworks.com. The diameter of a node's representation reflects its social importance in the network.
Node betweenness is a centrality measure which determines the dominance of a node within a network. Assuming that information flow takes the shortest path, node betweenness lets us estimate the fraction of information going through a node. The information does not always flow along the shortest path, so this assumption is only an approximation; nevertheless, it allows us to estimate quite well the substantial influence of the network nodes. Nodes with high betweenness values often present an interlink between network clusters (community subgroups). Thus, a low value of maximal betweenness can be an indicator that a central node loses its influence or leaves the network. Therefore, we identified the most central and active project members in BioJava, Biopython and BioPerl for each year.
We detected a change of "main actors" within the Biopython community in 2005: Jeffrey C., Andrew D. and Brad C. were "substituted" by Peter C. and Michiel H. However, this takeover was not very smooth. Figure 3(b) presents the contribution level of each central member in each year. Only after Jeffrey C. and Brad C. had already remarkably reduced their input to the project did Andrew D. and Michiel H. bring the project back to its previous level of progress. This could be a reason for the low maximal betweenness and the increased diameter values. The Biopython network was almost breaking apart when its core members left the community (cf. Figure 4(a)).
In the BioJava community there were three central members until 2004: Thomas D., Matthew P. and Mark Sh. In 2005, two of them, Thomas D. and Matthew P., left the project (cf. Figure 3(a) and Figure 4(b)). This period is marked by low maximal betweenness and a low value of transitivity, but by an almost "normal" diameter value of 7. Hence, the community shrank due to user "retirement", but it remained held together by the third central member, Mark Sh. (who has remained in the project from the beginning until the present). In 2006, many new active "actors" entered the BioJava community. As a result, the community became less centralized, resulting in a higher diameter value. Later (around 2007), Andreas P. and Richard H. gained the central role in the project. The community again presented a hierarchical, very centralized structure with small diameter and high maximal betweenness values.
In BioPerl, in 2003-2005, the diameter value rises only up to 9, while the maximal betweenness values during that period are very high. This period overlaps with the highest newcomer inflow in BioPerl (cf. Figure 1(c)), which resulted in community expansion (and thus in higher diameter values). The power within the community stayed in the hands of the same leading people; the community expansion just increased their "power" (resulting in an increase of maximal betweenness). There was also a switch in roles among the "main actors" in BioPerl (cf. Figure 3(c)). Until 2006 the maximal node betweenness was held by Jason S. In 2004, Chris F. entered BioPerl and achieved the maximal betweenness in 2006. However, Jason S. continued to contribute to the project actively.
The core of BioPerl is much bigger than in the other two projects. There are on average 24 active distinct developers in BioPerl, while Biopython and BioJava are supported on average by 7 and 11 respectively. A more detailed investigation of the BioPerl community shows that, in contrast to Biopython and BioJava, where only a core (very active and socially central actors, experts) and a periphery (very passive actors, lurkers) are present, an intermediate layer of "contributors" has been established. Although the project members of this layer put in much less effort than the core, they still provide some active contributions to the project. The edge betweenness clustering of the BioPerl network in 2004 detects one very big cluster, which includes almost all project participants (cf. Figure 4(c)). The intermediate layer of active contributors can be a reason for the strong community interconnection and the better resistance against the "retirement" of core members.
Social Evolution and Demographic Forecast
In Figure 1(b) an increase of the newcomer rate in the year 2009 in Biopython can be observed. At the same time, a rise in the number of commits and releases per year can be detected starting from 2007 (cf. Figure 5). More detailed investigation shows that these changes in release and effort culture were introduced by the new leading people in the Biopython community (cf. Figure 3(b)). These organizational and development modifications made the Biopython project more attractive for newcomers.
Discussion and Conclusions
In this paper, we adopted demographic concepts to analyze OSS communities as an aging population and applied several SNA measures to trace the social evolution of OSS communities. A 20-40-90% survival rate pattern was identified within the communities of three bioinformatics projects: only 20% of the newcomers "survive" their first year in a project, about 40% of those survive the second year, and about 90% of the remaining members survive each of the following years. This pattern leads to the following conclusions:
- The identified pattern allows predicting the minimal number of newcomers required to support the same level of participants in the community.
- There is a very high probability that a user who has remained with an OSS project for longer than two years will remain with the community further.
- The fraction of users who "survive" longer than three years is only about 7.2%. This very low survival rate is consistent with the results presented in [START_REF] Jensen | Joining free/open source software communities: An analysis of newbies' first interactions on project mailing lists[END_REF].
- Within ten years of project history, no maximal possible participation duration was identified, causing continuous community growth even with slightly decreasing newcomer rates.
- The core group of each OSS project evolves strongly (consistent with the results from [START_REF] Robles | Contributor turnover in libre software projects[END_REF]).
- Retirement of central community member(s) presents a danger for project sustainability (consistent with the results from [START_REF] Robles | Evolution of volunteer participation in libre software projects: Evidence from debian[END_REF]).
- There is a phenomenon of "rebirth" within an OSS community. Especially those who get deeply involved in the project for several years and then leave it tend to return to the project later on.
- The number of "oldies" gets continuously bigger. This can lead to the community becoming closed towards newcomers. The concept of a "contribution barrier" from [START_REF] Georg Von Krogh | Community, joining, and specialization in open source software innovation: a case study[END_REF] should be extended with social aspects.
The SNA results show that the combination of an increasing diameter and a falling maximal betweenness can be used as an indicator of the retirement of a central community member, with the risk of the community breaking apart. In the history of all three projects there was a change of the central person within the community after about 5-6 years. In the BioPerl project the change seems to have had no strong effect: the community participants remained strongly interconnected, due to the relatively big and well-developed hierarchical community center. On the contrary, the Biopython and BioJava communities show a very loose structure in the period of the change. The BioJava project seems to have executed the change more smoothly than Biopython, thanks to the overlap of the central users' participation time. Many other active members left the community together with the central actors, and both the Biopython and BioJava communities experienced a great restructuring. In Biopython we also observe a complete modification of the development principles. The findings confirm the OSS problem that the knowledge is concentrated in the core of the community, bringing the danger of its total loss if the core members leave the project. This is especially critical considering that the retirement of a central member can induce a further outflow of project members from an OSS project.
Our findings indicate that a combination of maximal betweenness and diameter values can be used as a metric for measuring the social stability of an OSS community. Survival rate and newcomer inflow can be applied in order to detect important internal and external events.
Threats to Validity
The presented findings may not be directly transferable to all OSS projects. The bioinformatics OSS projects are mostly driven by bioinformatics scientists, mainly PhD students working on their theses. Once they finish their PhD, they may lose the interest and/or time for contributing, which can be one reason for the observed survival pattern. Further, the quality of any dynamic analysis may be influenced by the selected step size. So far, we have also performed the population analysis on BioJava communication data cut at the time point of each release, and the results are very similar to those presented in this paper. However, there is about one release per year in BioJava; the survival pattern in projects with a different release culture still has to be investigated.
Future Work
Considering the socio-technical nature of OSS projects, the social evolution of OSS communities presents a large area for further studies. To validate the results of this study, the proposed measurements should be applied to other OSS projects. Moreover, there are many possibilities to extend the proposed methods for dynamic analysis of OSS communities with additional parameters. For example, the analysis of participation duration can be combined with information about participants' activity. For each OSS project member we can define a time series: a sequence of contribution numbers within uniform time intervals (e.g. per month). Using statistical methods like Principal Component Analysis, we can detect different "activity-participation duration" patterns.
Fig. 1. Newcomers vs. Survived Users
Fig. 4. Project Social Networks
Fig. 5. Biopython: Release and Commit Numbers
http://www.r-project.org/ is used for calculations
http://www.ornl.gov/sci/techresources/Human_Genome/project/clinton1.shtml | 34,715 | [
"1001500",
"1001501"
] | [
"303510",
"303510"
] |
01467584 | en | [
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467584/file/978-3-642-38928-3_7_Chapter.pdf | Andrejs Jermakovics
email: ajermakovics@www
Alberto Sillitti
email: asillitti@www
Giancarlo Succi
email: [email protected]@www
Exploring collaboration networks in opensource projects
Keywords: Collaboration networks, version control systems, open source
Analysis of developer collaboration networks presents an opportunity for understanding and thus improving the software development process. Discovery of these networks, however, presents a challenge since the collaboration relationships are initially not known. In this work we apply an approach for discovering collaboration networks of open source developers from Version Control Systems (VCS). It computes similarities among developers based on common file changes, constructs the network of collaborating developers and applies filtering techniques to improve the readability of the visualized network. We use the approach in case studies of three different projects from open source (phpMyAdmin, Eclipse Data Tools Platform and Gnu Compiler Collection) to learn their organizational structure and patterns. Our results indicate that with little effort the approach is capable of revealing aspects of these projects that were previously not known or would require a lot of effort to discover manually via other means, such as reading project documentation and forums.
Introduction
Analysis of collaboration networks in a development environment can provide decision support for improving the software development process [START_REF] Petrinja | Introducing the OpenSource Maturity Model[END_REF]. It has already been widely used in open source [START_REF] Di Bella | A multivariate classification of open source developers[END_REF] and closed source software for exploring collaboration [START_REF] Wolf | Mining Task-Based Social Networks to Explore Collaboration in Software Teams[END_REF], predicting faults [START_REF] Fronza | Failure prediction based on log files using Random Indexing and Support Vector Machines[END_REF][START_REF] Meneely | Predicting failures with developer networks and social network analysis[END_REF][START_REF] Pinzger | Can developer-module networks predict failures?[END_REF], studying code transfer [START_REF] Mockus | Succession: Measuring transfer of code and developer productivity[END_REF] and many other activities [START_REF] Coman | Automated Identification of Tasks in Development Sessions[END_REF][START_REF] Bella | Pair Programming and Software Defects -a large, industrial case study[END_REF]. Moreover, since software artifact structure is strongly related to the organization's structure (Conway's Law) [START_REF] Conway | How Do Committees invent?[END_REF] it becomes important to understand the developer networks involved. The analysis of these networks is often leveraged using visualizations and their appearance plays a significant role in how people interpret the networks [START_REF] Huang | How people read sociograms: a questionnaire study[END_REF], [START_REF] Mccarey | Rascal: A Recommender Agent for Agile Reuse[END_REF]. It is, therefore, important that the network visualizations are easy to interpret and represent the actual network as closely as possible.
A common problem is that the actual social networks are not known and need to be discovered. Many existing approaches rely on communication archives to discover the networks [START_REF] Di Bella | A multivariate classification of open source developers[END_REF][START_REF] Crowston | The Social Structure of Free and Open Source Software[END_REF][START_REF] Sarma | Tesseract: Interactive visual exploration of socio-technical relationships in software development[END_REF]; however these are not always available and do not necessarily reflect collaboration on code [START_REF] Meneely | Socio-technical developer networks: should we trust our measurements[END_REF]. Additionally mapping people across multiple communication systems (issue trackers, forums, email) involves considerable manual effort [START_REF] Pinzger | Dynamic analysis of communication and collaboration in OSS projects[END_REF]. Another possibility is to use dedicated software for tracking development time and pair programming effort [9] but such studies require prior setup.
The most common source of developer networks is Version Control Systems (VCS) [START_REF] Huang | Mining version histories to verify the learning process of legitimate peripheral participants[END_REF][START_REF] Lopez-Fernandez | Applying social network analysis to the information in CVS repositories[END_REF][START_REF] De Souza | Supporting collaborative software development through the visualization of socio-technical dependencies[END_REF]. The underlying idea is that frequent access to and modification of the same code implies communication and sharing. The advantages of using VCS are that they are commonly available for all software development activities, can be mined automatically without human involvement and directly reflect collaboration on code.
Once a developer network is constructed it can be analyzed to improve understanding of the software development process and the organizational structure. The recovered organizational structure then can be used in informing new collaborators or observing the integration of new members. Additional applications include tracking code sharing, finding substitute developers with related code knowledge and assembling communities with prior collaboration experience.
In this work we study the collaboration networks of three open source projects in order to learn their organizational structure and collaboration patterns. To do so we use an approach that mines collaboration networks from version control systems and computes similarities between developers based on commits to common files. Once the similarities are computed the network is visualized using a force-directed graph layout.
Approach
Our proposed approach [START_REF] Jermakovics | Mining and visualizing developer networks from version control systems[END_REF] uses VCS to mine commits to source code files that developers make. It then computes similarities between committers and visualizes them in a network using similarities as link strengths. In cases when the network is too dense it offers multiple filtering techniques to reduce the number of links.
A crucial part of the approach is the similarity measure because it is used as the basis for visualization and filtering. For our purposes, we adopted an established similarity measure, which is also used in Collaborative Filtering techniques [START_REF] Resnick | GroupLens: An Open Architecture for Collaborative Filtering of Netnews[END_REF], [START_REF] Shardanand | Social information filtering: algorithms for automating "word of mouth[END_REF] of Recommender Systems. A number of user similarity measures are derived from the ratings that users assign to items. In our case, we use source files as the items and the number of changes as the rating, which is similar to an approach for recommending software components [START_REF] Mccarey | Rascal: A Recommender Agent for Agile Reuse[END_REF].
Cosine similarity between two developers is obtained by calculating the cosine of the angle between their corresponding vectors d i and d j :
$$similarity(d_i, d_j) = \cos(d_i, d_j) = \frac{d_i \cdot d_j}{\|d_i\| \, \|d_j\|} = \frac{\sum_k d_{i,k} \, d_{j,k}}{\sqrt{\sum_k d_{i,k}^2} \, \sqrt{\sum_k d_{j,k}^2}} \quad (1)$$
The similarity accounts for files that were modified only a few times and also for files that many people modify. Thus it is greater if the two developers modified the same files a large number of times and 0 if they did not share any files.
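For illustration, the similarity of Eq. (1) can be computed from per-developer change counts as in the hypothetical sketch below; the mining step that extracts the change counts from the VCS is assumed to have been done already, and the file names in the usage example are made up.

```python
from math import sqrt

def developer_similarity(changes_i, changes_j):
    """Cosine similarity between two developers, as in Eq. (1).

    changes_i, changes_j: dicts mapping file path -> number of commits by
    developer i and j, respectively, that touched the file.
    """
    shared = set(changes_i) & set(changes_j)
    if not shared:
        return 0.0                              # no common files -> similarity 0
    dot = sum(changes_i[f] * changes_j[f] for f in shared)
    norm_i = sqrt(sum(c * c for c in changes_i.values()))
    norm_j = sqrt(sum(c * c for c in changes_j.values()))
    return dot / (norm_i * norm_j)

# Usage example with made-up file names:
# a = {"index.php": 5, "sql.php": 2}
# b = {"index.php": 1, "db.php": 7}
# developer_similarity(a, b)   # > 0 because both changed index.php
```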
The approach has been previously validated [START_REF] Jermakovics | Mining and visualizing developer networks from version control systems[END_REF] on two projects where the structure was known and was able to discover actual developer networks. It is implemented in a software visualization tool Lagrein [START_REF] Jermakovics | Lagrein: Visualizing User Requirements and Development Effort[END_REF][START_REF] Jermakovics | Visualizing Software Evolution with Lagrein. 22nd Object-Oriented Programming[END_REF] which shows software metrics together with collaboration networks. The tool provides interactive exploration of collaboration networks and user adjustable link filtering. Since networks initially appear very dense due to a large number of links, the tool allows removing low weight links to view the network at different levels of detail.
Network Visualization
A common choice for social network visualizations [START_REF] Heer | Vizster: Visualizing Online Social Networks[END_REF][START_REF] Sarma | Tesseract: Interactive visual exploration of socio-technical relationships in software development[END_REF] is to use multidimensional scaling (MDS) [START_REF] Borg | Modern Multidimensional Scaling: Theory and Applications[END_REF][START_REF] Freeman | Visualizing Social Networks[END_REF] or force-directed algorithms for graph layout [START_REF] Fruchterman | Graph Drawing by Force-Directed Placement[END_REF].
Force-directed algorithms model graph vertices as having physical forces of attraction and repulsion. They iteratively compute vertex positions until the difference between desired and actual distances is minimized. Their advantage is that groups with high connectivity are placed together and similar vertices are placed closer than dissimilar ones.
We apply Fruchterman-Reingold [START_REF] Fruchterman | Graph Drawing by Force-Directed Placement[END_REF] force-directed graph layout to the constructed network by setting the link lengths to developer dissimilarity in order to place similar developers together and dissimilar ones apart. The size of each node in the network is proportional to the number of commits the developer made and no link is created between developers with similarity 0. In cases when the visualization appears complex due to a large number of links, we apply filtering to remove low similarity links.
For visual appeal, links are visualized with transparency (using alpha blending). The transparency of a link is proportional to the similarity -high similarity links are solid while low similarity links are transparent. Thus strong links can be spotted immediately and the viewer can see which edges will disappear first during filtering.
Most developer networks are initially too dense due to large number of links. This makes the networks harder to read and hinders force-directed graph layout. For these reasons we filter out low weight edges using a user specified threshold and reapply graph layout.
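A possible rendering of the filtering and layout step is sketched below. It assumes a similarity dictionary computed as in the previous sketch and uses networkx, whose spring_layout implements the Fruchterman–Reingold algorithm; using similarity as the spring weight approximates the idea of placing more similar developers closer together. Drawing (node sizes, edge transparency) is omitted here.

```python
# Sketch: build the developer network, drop low-weight links, and lay it out.
import networkx as nx

def build_network(similarity, commit_counts, threshold=0.1):
    G = nx.Graph()
    for (a, b), s in similarity.items():
        if s > threshold:                      # filter out low-weight links
            G.add_edge(a, b, weight=s)
    for dev, n in commit_counts.items():
        if dev in G:
            G.nodes[dev]["commits"] = n        # node size would be ∝ number of commits
    pos = nx.spring_layout(G, weight="weight", seed=42)   # force-directed layout
    return G, pos

sim = {("alice", "bob"): 0.7, ("alice", "carol"): 0.05, ("bob", "carol"): 0.4}
G, pos = build_network(sim, {"alice": 120, "bob": 80, "carol": 15})
print(G.edges(data=True), list(pos))
```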
Case Study: phpMyAdmin project
Initially, we repeated an existing study [START_REF] Huang | Mining version histories to verify the learning process of legitimate peripheral participants[END_REF] of the phpMyAdmin1 project to compare the resulting developer networks. The project is a web application for administering MySql relational database management systems and is written in PHP, HTML and JavaScript. It lets database administrators create/manage databases and tables, edit data and execute SQL statements. The tool is widely used by system administrators and after fifteen years is a stable and mature product. It has also received multiple awards and several books have been written about its usage. Details about the project's development are outlined in Table 1.
Project History
The project was started as a web frontend for MySQL in 1998 by Tobias Ratschiller who was an IT consultant at the time. Due to lack of time he abandoned the project but it already was one of the most popular MySQL administration tools having a large number of people willing to contribute (Table 2) and a large number of patches coming in. At that time in 2001 the project was taken over by Marc Delisle, Olivier Müller and Loïc Chapeaux who registered the project on SourceForge.net2 project hosting site and the development has continued there ever since.
Collaboration network
Similarly to our approach, the developer network was extracted from the version control system, however the links were created when two developers committed to the same directory and all links had the same weight. With a network obtained using such approach the authors notice that it is impossible to determine the importance of each developer and conclude that all developers play the same role. They also mention that the network might be misleading due to link computation at directory level. We confirm this observation and discover a different structure in the project's network using our approach for the same period (until 2004).
First we compute the links the same way -at directory level and without assigning weights to them. The resulting network is similar in layout to the network in the previous study and, indeed, no particular structure is evident (Fig. 1) due to the density of edges. Afterwards, we apply the proposed approach together with link filtering. We can notice that there are two main groups. By looking at the changed files, we noticed that these groups work on different sets of files; however, most of these files are located in the root directory of the project. For this reason, link computation at directory level produced a dense and compact network, and we conclude that the computation is more reliable at file level. While exploring the modified file list we noticed multiple developers who are the only committers to some files. They work on their own subset of files and no one else works on these files. They make many commits (large nodes in the graph) and are also well connected to other developers, indicating significant collaboration activity. Thus we conclude that not all developers play the same role and there are some with more central and important roles. We later confirmed this by examining sourceforge.net3 and the project's home page, where several such developers (lem9, nijel, swix, loic1) are mentioned as project managers and maintainers.
Case Study: Eclipse DTP Project
Having experimentally selected [START_REF] Jermakovics | Mining and visualizing developer networks from version control systems[END_REF] Cosine similarity and filtering as effective methods for discovering team structure we proceeded to apply the technique to the Eclipse Data Tools Platform (DTP) project 4 . The choice of the project is arbitrary and the goal of the study is to demonstrate the use of the approach in revealing aspects of the organizational structure that are not known beforehand. Details of the project's development are outlined in Table 3. Eclipse Data Tools Platform is a collection of tools and frameworks for database handling and provides an abstract API over database drivers, connections and data so that they can be used in a generic way. It also provides UI inside Eclipse to define database connections and to execute SQL statements. Originally it was started by Sybase in 2005 and later attracted a large community, which is managed by a committee consisting of Sybase, IBM and Actuate. The project is large (1.4 MLOC) and is composed of several subprojects: Connectivity, Enablement, Incubator, Model Base and SQL Development Tools.
The Eclipse project provides information on its committers using its Commits Explorer 5 . This application allowed us to learn that contributions to the project have been made by many individual committers and multiple organizations including Actuate, IBM, Red Hat and Sybase. Using its CVS repository we constructed the developer network of 25 people for the period 2005-2010. Fig. 3 shows the resulting network obtained with link filtering and having each company colored in different color.
The visualization of the network allows us to gain quick insight into the organizational structure of the project. We can see a large contribution from IBM in terms of number of people however Red Hat stands in a more central role. An interesting aspect is that contributors do not contribute equally to all parts of the project. They collaborate closely with other contributors from the same company and to a much lesser extent with contributors from other companies.
Case Study: Gnu Compiler Collection (GCC) Project
The GNU Compiler Collection (GCC)6 is a large system of compilers that supports numerous programming languages (C, C++, Java, Objective-C, Fortran, Ada, Go) and compiles to native code of many processor architectures. It is one of the oldest open source projects and has a large number of contributors developing its numerous front-end and back-end projects. It is produced by the GNU Project 7 with Richard Stallman (a.k.a RMS) and now is widely used as the standard compiler on popular Unix-like operating systems, including Linux and BSD.
The organizational process of the project is described as "cathedral" style by Eric S. Raymond [START_REF] Raymond | The Cathedral and the Bazaar[END_REF] due to the fact that the project was under strict control by the Free Software Foundation (FSF). Many developers that were not satisfied with this model started their own forks of projects and formed the EGCS (Experimental GNU Compiler System). EGCS saw more activity than the GCC development and therefore later was made the official version of GCC. As a result the project opened up more and adopted a more "bazaar" style development model to allow more contributions. Studies of the project [START_REF] Yamauchi | Collaboration with Lean Media: how open-source software succeeds[END_REF] show that the development process is largely maintenance and less new software creation. The details of the project's development are outlined in Table 4. One part of GCC is the GNU Fortran (GFortran) project whose purpose is to develop the Fortran compiler front end and run-time libraries for GCC. GFortran development is part of the GNU Project. Initially in 2000 it was developed by Andrew Vaught as project g95 that was a free Fortran 95 standard compiler using GCC backend. Andrew wrote most of the parser and for a while work on g95 continued to be collaborative until the late 2002 when he decided to be the sole developer of g95. Project GFortran then was forked from the g95 codebase and collaborative development has continued there since together with integration with the GCC codebase. Since the forking both codebases have significantly diverged. Most of the interface with GCC was written by Paul Brook.
We extracted and analyzed their Subversion commit log in the period from 1988-2010 containing commits from 349 committers. From the collaborator network we can immediately see a large and strongly connected core and a lot of scattered contributors in the periphery around the core. We also can see smaller strongly connected groups suggesting that there are sub-communities within the bigger GCC community as observed in other open-source projects [START_REF] Di Bella | A multivariate classification of open source developers[END_REF].
A particularly interesting aspect is that this network also contains a strongly connected group separate from the core (marked red). By looking at the changes of this group, we can see that they are developing the Fortran front-end because most of their commits were to /gcc/fortran and /gcc/libgfortran directories. These directories are listed on the project's homepage as the ones where contribution takes place. Thus we can discover that this community is rather closed because it mostly collaborates among its own members and to a much lesser extent with the rest of the GCC contributors.
When we zoom in we can see another closed group to the left of Fortran community. That group (highlighted in yellow) mostly works on the ARM architecture since their commits were in the /gcc/config/arm subdirectory. One committer (pbrook) stands out in the middle between the ARM and the Fortran communities indicating a lot of involvement in both. We verified this information using the project's contributions page 8 , which acknowledges Paul Brook exactly for his work on GNU Fortran and the ARM architecture. He is also listed to have written most of the GFortran code that interfaces with the rest of GCC. Thus by viewing the network we are actually able to identify communities and roles without the need to go through published information or communication archives.
To summarize, by applying the method to open source projects we conclude that it is able to discover various aspects of the projects that were not evident before. We verified these findings using additional information published by the projects; discovering them through the visualization, however, requires much less effort. These results also add confidence in the credibility of the approach, and we conclude that it can be useful even if its full scope of usefulness is not yet established.
Conclusions
In this work we studied collaboration networks of three open-source projects using visualizations. The networks were automatically discovered using an approach that analyses software repositories and finds similarities among developers based on commit counts to common files. Collaborating developers are placed closer together so that clusters and close collaboration becomes noticeable. Initially the networks appear very dense therefore we apply filtering to remove low weight links.
We found in phpMyAdmin project that there are developers with central roles and other contributors with peripheral roles. In Eclipse DTP project we noticed that there are contributions from multiple large companies however more collaboration is happening among developers within each company than between companies. Finally in GCC project we have observed that it consists of multiple sub-communities and that Fortran community is more separated from the other communities.
Overall open source projects vary greatly in their organization and collaboration patterns however these are often not documented. Automatic approaches for discovering collaboration networks can thus shed light on the structure of these projects and reveal information that was previously not known.
Fig. 1. PhpMyAdmin collaboration network computed at directory level. All contributors appear to have an equal role in the project.
Fig. 2. phpMyAdmin collaboration network computed at file level and filtered links. Some contributors appear to have a more central role.
Fig. 3. Eclipse DTP project committer network. There is higher collaboration among committers from the same company than across companies.
Fig. 4. GCC collaboration network with the Fortran community in top-right (red).
Fig. 5. ARM and Fortran communities in GCC.
Table 1. PhpMyAdmin project details
Property          Value
Repository type   Subversion (SVN)
Analyzed period   2001-2004
Codebase size     100 KLOC
Commits           10K
Languages         PHP, HTML, JavaScript
Committers        16
Table 2. PhpMyAdmin project contributors
Contributor name   Committer id    Role in the project
Marc Delisle       lem9            Project Manager / Founder
Olivier Müller     swix            Developer / Founder
Loïc Chapeaux      loic1           Developer / Founder
Michal Čihař       nijel           Project Manager
Robin Johnson      robbat2         Developer
Garvin Hicking     garvinhicking   Developer
Table 3. Eclipse DTP project details
Property          Value
Repository type   Concurrent Versions System (CVS)
Analyzed period   2005-2010
Codebase size     1.4 MLOC
Commits           90K
Languages         Java, XML
Committers        25
Subprojects       5
Table 4. GCC project details
Property          Value
Repository type   Subversion (SVN)
Analyzed period   1988-2010
Codebase size     8 MLOC
Commits           100 K
Languages         C/C++, Java, Fortran and others
Committers        350
http://www.phpmyadmin.net
http://sourceforge.net/projects/phpmyadmin/
http://sourceforge.net/projects/phpmyadmin/
http://www.eclipse.org/datatools/
http://dash.eclipse.org/
http://gcc.gnu.org/
http://www.gnu.org/
http://gcc.gnu.org/onlinedocs/gcc/Contributors.html | 23,852 | [
"1001502",
"1001484",
"989715"
] | [
"463159",
"463159",
"463159"
] |
00804268 | en | [
"math"
] | 2024/03/04 23:41:44 | 2014 | https://hal.science/hal-00804268/file/InverseSchrodinger200413.pdf | S A Avdonin
V S Mikhaylov
K Ramdani
Reconstructing the potential for the 1D Schrödinger equation from boundary measurements
We consider the inverse problem of determining the potential in the dynamical Schrödinger equation on the interval by the measurement on the boundary. We use the Boundary Control method to recover the spectrum of the problem from the observation at either left or right end points. Using the specificity of the one-dimensional situation we recover the spectral function, reducing the problem to the classical one which could be treated by known methods. We apply the algorithm to the situation when only the finite number of eigenvalues are known and prove the convergence of the method.
Introduction
We consider the problem of determining the potential in a one dimensional Schrödinger equation from two boundary measurements. More precisely, given a real potential q ∈ L 1 (0, 1) and a ∈ H 1 0 (0, 1), we consider the following initial boundary value problem:
iu_t(x, t) - u_{xx}(x, t) + q(x)u(x, t) = 0,   t > 0, 0 < x < 1,
u(0, t) = u(1, t) = 0,   t > 0,
u(x, 0) = a(x),   0 < x < 1.
(1.1)
Assuming that the initial datum a is unknown, the inverse problem we are interested in is to determine the potential q from the trace of the derivative of the solution u to (1.1) on the boundary: {r 0 (t), r 1 (t)} := {u x (0, t), u x (1, t)}, t ∈ (0, 2T ),
where T > 0 is fixed (it may be arbitrary small). Once the potential has been determined, one can use e.g. the method of iterative observers recently proposed in [START_REF] Ramdani | Recovering the initial state of an infinite-dimensional system using observers[END_REF] to recover the initial data.
The multidimensional inverse problems of determining the potential by one measurement were considered in [START_REF] Baudouin | Uniqueness and stability in an inverse problem for the Schrödinger equation[END_REF][START_REF] Baudouin | Corrigendum. Uniqueness and stability in an inverse problem for the Schrödinger equation[END_REF][START_REF] Bellassoued | Logarithmic stability in the dynamical inverse problem for the Schrödinger equation by arbitrary boundary observation[END_REF][START_REF] Mercado | Carleman inequalities and inverse problems for the Schrödinger equation[END_REF]. Using techniques based on Carleman estimates, the authors established global uniqueness and stability results for different geometrical conditions on the domain in arbitrary small time under certain restrictions on the initial source. No reconstruction procedure (even in 1-d case) has been provided. We also mention the approach proposed by Boumenir and Tuan for the heat equation in [START_REF] Boumenir | Inverse problems for multidimensional heat equations by measurements at a single point on the boundary[END_REF][START_REF] Boumenir | An inverse problem for the heat equation[END_REF][START_REF] Boumenir | Recovery of a heat equation by four measurements at one end[END_REF]. Using the boundary observation for t ∈ (0, ∞), the authors were able to recover the spectrum of the string provided the source is generic (see the definition in the beginning of Section 2). Then choosing another boundary condition, they solved the inverse problem from the two recovered spectra by the classical Levitan-Gasymov method [START_REF] Levitan | Inverse Sturm-Liouville Problems[END_REF][START_REF] Levitan | Determination of a differential equation by two of its spectra[END_REF].
In this paper we propose a different approach. We introduce the unbounded operator A on L²(0, 1) defined by
Aφ = -φ'' + qφ,   D(A) := H²(0, 1) ∩ H¹₀(0, 1).   (1.2)
Using the Boundary Control (BC) method, see [START_REF] Avdonin | Spectral estimation and inverse initial boundary value problems[END_REF][START_REF] Belishev | Recent progress in the boundary control method[END_REF], we first show that the eigenvalues of A can be recovered from the data r 0 (or r 1 ) in arbitrary small interval provided the source is generic. To achieve this, we derive a generalized eigenvalue problem involving an integral operator (see (2.5)), whose solution leads to the recovery of the eigenvalues of A. Then, using the peculiarity of the one dimensional case, we recover the spectral function associated with A, reducing the original inverse problem to the more "classical" one of recovering an unknown potential from spectral data. At this step we use the observations at both boundary points. Thus we establish an algorithm for recovering both potential and the initial data in a very natural setting: we use the observation on the whole boundary in the arbitrary small time, which corresponds to the uniqueness results from [START_REF] Baudouin | Uniqueness and stability in an inverse problem for the Schrödinger equation[END_REF][START_REF] Baudouin | Corrigendum. Uniqueness and stability in an inverse problem for the Schrödinger equation[END_REF][START_REF] Bellassoued | Logarithmic stability in the dynamical inverse problem for the Schrödinger equation by arbitrary boundary observation[END_REF]. We do not use the observation in infinite time and do not change the boundary conditions (cf. [START_REF] Boumenir | Inverse problems for multidimensional heat equations by measurements at a single point on the boundary[END_REF][START_REF] Boumenir | An inverse problem for the heat equation[END_REF][START_REF] Boumenir | Recovery of a heat equation by four measurements at one end[END_REF].)
In the last part of the paper we show how to adapt our algorithm to the more realistic situation where not all but only a finite number of eigenvalues of A are known. We answer the question of how many eigenvalues one needs to recover in order to achieve a prescribed accuracy.
Remark 1.1. The method of solving the inverse problem presented in this paper could be applied to the case of wave and parabolic equations on the interval.
The paper is organized as follows. In Section 2, we detail the different steps of our algorithm. In particular, we describe how to recover the spectral data of A from the boundary data and point out two methods that can be used to recover the potential. In Section 3 we provide the result on the convergence of the method of recovering the potential by a finite number of spectral data.
2 Inverse problem, application of the BC method
From boundary data to spectral data
It is well known that the selfadjoint operator A defined by (1.2) admits a family of eigenfunctions {φ_k}_{k=1}^∞ forming an orthonormal basis in L²(0, 1), and an associated sequence of eigenvalues λ_k → +∞. Using the Fourier method, we can represent the solution of (1.1) in the form
u(x, t) = ∑_{k=1}^∞ a_k e^{iλ_k t} φ_k(x),   (2.1)
where a k are Fourier coefficients:
a k = (a, φ k ) L 2 (0,1; dx) .
Definition 2.1. We call the initial data a ∈ H¹₀(0, 1) generic if a_k ≠ 0 for all k ≥ 1.
For the boundary data r 0 , r 1 , we have the representation
{r_0(t), r_1(t)} = { ∑_{k=1}^∞ a_k e^{iλ_k t} φ'_k(0),  ∑_{k=1}^∞ a_k e^{iλ_k t} φ'_k(1) }.   (2.2)
One can prove that r 0 , r 1 ∈ L 2 (0, T ). This follows from:
(i) the estimates
0 < inf_k |φ'_k(0)|/k ≤ sup_k |φ'_k(0)|/k < ∞,   (2.3)
(see, e.g. [START_REF] Vinokurov | Asymptotics of any order for the eigenvalues and eigenfunctions of the Sturm-Liouville boundary value problem on a segment with a summable potential[END_REF]);
(ii) the equivalence of the inclusions {ka k } ∞ k=1 ∈ l 2 and a ∈ H 1 0 (0, 1); (iii) the fact that the family e iλ k t ∞ k=1 forms a Riesz basis in the closure of its linear span in L 2 (0, T ) (see, e.g. [START_REF] Avdonin | Determining the potential in the Schrödinger equation from the Dirichlet to Neumann map by the boundary control method[END_REF]).
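For intuition, the representation (2.2) can be used to generate synthetic boundary data. The short Python sketch below does this for the free case q ≡ 0, where λ_k = (kπ)² and φ_k(x) = √2 sin(kπx), so that φ'_k(0) = √2 kπ and φ'_k(1) = (-1)^k √2 kπ; the truncation level and all names are our own choices and not part of the paper.

```python
# Illustrative forward computation: synthetic complex-valued boundary data r_0, r_1
# from (2.2) for q ≡ 0, truncated at K modes.
import numpy as np

def boundary_data(a_coeffs, t, K=None):
    K = K or len(a_coeffs)
    k = np.arange(1, K + 1)
    lam = (np.pi * k) ** 2
    phase = np.exp(1j * np.outer(t, lam))            # e^{iλ_k t}
    d0 = np.sqrt(2) * np.pi * k                      # φ'_k(0)
    d1 = d0 * (-1.0) ** k                            # φ'_k(1)
    r0 = phase @ (a_coeffs[:K] * d0)
    r1 = phase @ (a_coeffs[:K] * d1)
    return r0, r1

t = np.linspace(0, 2 * 0.5, 400)                     # observation window (0, 2T), T = 0.5
a = 1.0 / np.arange(1, 21) ** 2                      # a generic, decaying Fourier sequence
r0, r1 = boundary_data(a, t)
```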
Throughout the paper, we always assume that the source a is generic. The reason for this requirement can be seen from (2.2): if we assume that a n = 0 for some n, then from the representation (2.2) it is easy to see that the pair {r 0 (t), r 1 (t)} does not contain information about the triplet {λ n , φ n (0), φ n (1)}, and, consequently the potential which corresponds to these data is not unique.
Using the method described in [START_REF] Avdonin | Spectral estimation and inverse initial boundary value problems[END_REF] we can recover the spectrum λ_k and the products a_k φ'_k(0) and a_k φ'_k(1) by the following procedure. We construct the operator C_0^T : L²(0, T) → L²(0, T) by the rule:
(C_0^T f)(t) = ∫_0^T r_0(2T - t - τ) f(τ) dτ,   0 ≤ t ≤ T,   (2.4)
and consider the following generalized eigenvalue problem:
Find (µ, f) ∈ C × H¹₀(0, T), C_0^T f ≠ 0, such that
∫_0^T ṙ_0(2T - t - τ) f(τ) dτ = µ ∫_0^T r_0(2T - t - τ) f(τ) dτ,   0 ≤ t ≤ T.   (2.5)
Then, one can prove [START_REF] Avdonin | Spectral estimation and inverse initial boundary value problems[END_REF] that the problem above admits a countable set of solutions (µ_n, f_n), n ≥ 1. Moreover, for the eigenvalues we have µ_n = iλ_n, where λ_n are the eigenvalues of A; the family {f_n(t)}_{n=1}^∞ is biorthogonal to {e^{iλ_n(T-t)}}_{n=1}^∞. A priori we do not suppose that ṙ_0 ∈ L²(0, 2T) and therefore, in general, the integral ∫_0^T ṙ_0(2T - t - τ) f(τ) dτ should be understood as the action of the functional ṙ_0 ∈ H^{-1}(0, 2T) on f ∈ H¹₀(0, T). For algorithmic purposes it is convenient to rewrite equation (2.5) in the form
∫_0^T [r_0(2T - t - τ) - µ R(2T - t - τ)] h(τ) dτ = 0,   0 ≤ t ≤ T,   (2.6)
where
R(t) = ∫_0^t r_0(τ) dτ   and   f(t) = ∫_0^t h(τ) dτ.
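The generalized eigenvalue problem (2.6) lends itself to a simple discretization: sampling t and τ on a grid turns it into a matrix pencil A h = µ B h. The sketch below is our own (crude trapezoidal/rectangle quadrature, illustrative names); in practice only a limited number of the computed µ_n can be expected to be reliable.

```python
# Sketch: discretize (2.6) as a matrix pencil A h = µ B h and solve it with SciPy.
# r0_samples are assumed to be (complex) values of r_0 on a uniform grid of [0, 2T].
import numpy as np
from scipy.linalg import eig
from scipy.integrate import cumulative_trapezoid

def recover_mu(r0_samples, T, n_grid=200):
    s = np.linspace(0, 2 * T, len(r0_samples))
    R = cumulative_trapezoid(r0_samples, s, initial=0.0)      # R(t) = ∫_0^t r_0
    t = tau = np.linspace(0, T, n_grid)
    dt = tau[1] - tau[0]
    arg = 2 * T - t[:, None] - tau[None, :]
    A = np.interp(arg, s, r0_samples) * dt                    # kernel r_0(2T - t - τ)
    B = np.interp(arg, s, R) * dt                             # kernel R(2T - t - τ)
    mu, _ = eig(A, B)                                         # generalized eigenvalues
    return mu                                                 # λ_n = -i µ_n  (since µ_n = iλ_n)
```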
Next, we consider the equation of the form (2.5) with r_0(t) replaced by its complex conjugate r̄_0(t). This equation yields the sequence {-iλ_k, g_k(t)}_{k=1}^∞ (note that -iλ_n = µ̄_n).
Let us normalize functions f k , g k by the rule:
δ_{nk} = (C_0^T f_n, g_k) = ∫_0^T ∫_0^T r_0(2T - t - τ) f_n(τ) g_k(t) dτ dt   (2.7)
and introduce constants γ k , β k defined by:
γ_k = ∫_0^T r_0(T - τ) f_k(τ) dτ,   (2.8)
β_k = ∫_0^T r_0(T - τ) g_k(τ) dτ.   (2.9)
It was proved in [START_REF] Avdonin | Spectral estimation and inverse initial boundary value problems[END_REF] (see formula (2.28)) that the product a_k φ'_k(0) is given by
a_k φ'_k(0) = γ_k β_k.   (2.10)
Similarly we can introduce the integral operator C_1^T associated with the response r_1(t) at the right endpoint, and repeat the procedure described above to find the quantities a_k φ'_k(1). Summing up, using the method from [START_REF] Avdonin | Spectral estimation and inverse initial boundary value problems[END_REF] we are able to recover the eigenvalues λ_k of A and the products φ'_k(0)a_k and φ'_k(1)a_k. As a result, we can say that we recovered the spectral data consisting of
D := { λ_k, φ'_k(1)/φ'_k(0) }_{k=1}^∞.   (2.11)
Precisely this data was used in [START_REF] Poschel | Inverse spectral theory[END_REF], where the authors have shown the uniqueness of the inverse spectral problem and provided the method of recovering the potential. Instead of doing this, we recover the spectral function associated to A and thus reduce the inverse source problem to the classical one of determining an unknown potential from the spectral data.
Given λ ∈ C, we introduce the solution y(•, λ) of the following Cauchy problem on (0, 1):
-y''(x, λ) + q(x)y(x, λ) = λy(x, λ),   0 < x < 1,   (2.12)
y(0, λ) = 0,   y'(0, λ) = 1.   (2.13)
Then the eigenvalues of the Dirichlet problem for A are exactly the zeroes of the function y(1, λ), while a family of normalized corresponding eigenfunctions is given by
φ_k(x) = y(x, λ_k) / ‖y(·, λ_k)‖.
Thus we can rewrite the second components in D in the following way:
φ'_k(1)/φ'_k(0) = y'(1, λ_k)/y'(0, λ_k) = y'(1, λ_k) =: A_k.   (2.14)
Let us denote by dot the derivative with respect to λ. We use the following fact (see [16, p. 30]):
Lemma 2.2. If λ_n is an eigenvalue of A, then ‖y(·, λ_n)‖²_{L²} = y'(1, λ_n) ẏ(1, λ_n).
The lemma below can be found in [START_REF] Poschel | Inverse spectral theory[END_REF] for the case q ∈ L²(0, 1); it holds true for q ∈ L¹(0, 1) as well. Its meaning is that the function y(1, λ) is completely determined by its zeroes, which are precisely the eigenvalues of A.
Lemma 2.3. For q ∈ L¹(0, 1) the following representations hold
y(1, λ) = ∏_{k≥1} (λ_k - λ)/(k²π²),
ẏ(1, λ_n) = -(1/(n²π²)) ∏_{k≥1, k≠n} (λ_k - λ_n)/(k²π²).
Lemmas 2.2 and 2.3 imply that the data D we have recovered (see (2.11)) allow us to evaluate the norm
‖y(·, λ_n)‖²_{L²} = A_n B_n =: α²_n,   (2.15)
where we introduced the notation
B_n := -(1/(n²π²)) ∏_{k≥1, k≠n} (λ_k - λ_n)/(k²π²).   (2.16)
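Formulas (2.14)-(2.16) are directly computable once finitely many eigenvalues are known. The following sketch (ours, with illustrative names) evaluates α_n² = A_n B_n with the product truncated at N terms, which is exactly the approximation B_{n,N} studied in Section 3; the free case q ≡ 0 serves as a sanity check.

```python
# Sketch: norming constants α_n² = A_n B_n from (2.14)-(2.16), with the infinite
# product truncated at N known eigenvalues (the B_{n,N} of Section 3).
import numpy as np

def alpha_squared(n, lam, A, N=None):
    """lam[k-1] = λ_k, A[k-1] = A_k; n is 1-based."""
    N = N or len(lam)
    k = np.arange(1, N + 1)
    factors = (lam[:N] - lam[n - 1]) / (k * np.pi) ** 2
    factors = np.delete(factors, n - 1)                 # skip k = n
    B_nN = -np.prod(factors) / (n * np.pi) ** 2
    return A[n - 1] * B_nN

# Free case q ≡ 0: λ_k = (kπ)², A_k = y'(1, λ_k) = cos(kπ) = (-1)^k, exact α_k² = 1/(2k²π²).
lam = (np.pi * np.arange(1, 201)) ** 2
A = (-1.0) ** np.arange(1, 201)
print(alpha_squared(3, lam, A), 1 / (2 * (3 * np.pi) ** 2))   # truncation error ~ n²/N
```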
Reconstructing the potential from the spectral data
The set of pairs
{λ k , y(•, λ k ) 2 L 2 } ∞ k=1 is a "classical" spectral data.
The potential can thus be recovered by Gelfand-Levitan, Krein method or the BCM (see [START_REF] Avdonin | The boundary control approach to inverse spectral theory[END_REF]). Below we outline two possible methods of recovering the potential.
We introduce the spectral function associated with A:
ρ(λ) = { -∑_{λ ≤ λ_k ≤ 0} 1/α²_k,   λ ≤ 0;      ∑_{0 < λ_k ≤ λ} 1/α²_k,   λ > 0,   (2.17)
which is a monotone increasing function having jumps at the points of the Dirichlet spectra. The regularized spectral function is introduced by
σ(λ) = { ρ(λ) - ρ⁰(λ),   λ ≥ 0;      ρ(λ),   λ < 0,    where   ρ⁰(λ) = ∑_{0 < λ⁰_k ≤ λ} 1/(α⁰_k)²,   λ > 0,   (2.18)
and ρ⁰ is the spectral function associated with the operator A with q ≡ 0. In the definition above the eigenvalues and norming coefficients are λ⁰_k = π²k², (α⁰_k)² = 1/(2π²k²), and the solution to (2.12)-(2.13) for q ≡ 0 is
y_0(x, λ) = sin(√λ x)/√λ.
Let us fix τ ∈ (0, 1] and introduce the kernel c τ (t, s) by the rule (see also [START_REF] Avdonin | The boundary control approach to inverse spectral theory[END_REF]):
c^τ(t, s) = ∫_{-∞}^{∞} [ sin(√λ(τ - t)) sin(√λ(τ - s)) / λ ] dσ(λ),   s, t ∈ [0, τ].   (2.19)
Then the so-called connecting operator (see [START_REF] Belishev | Recent progress in the boundary control method[END_REF][START_REF] Avdonin | The boundary control approach to inverse spectral theory[END_REF]) C^τ : L²(0, τ) → L²(0, τ) is defined by the formula
(C^τ f)(t) = (I + K^τ)f(t) = f(t) + ∫_0^τ c^τ(t, s) f(s) ds,   0 < t < τ.   (2.20)
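For a truncated spectral data set the measure dσ in (2.19) is a finite combination of point masses, so the kernel reduces to a finite sum (the restricted kernel of Section 3). A possible implementation, assuming all recovered eigenvalues are positive and using our own naming, is:

```python
# Sketch: the kernel (2.19) for the truncated (discrete) measure dσ.
# lam/alpha2 are the recovered λ_k, α_k²; the free-case data λ⁰_k = (kπ)²,
# (α⁰_k)² = 1/(2π²k²) is subtracted off, as in (2.18).
import numpy as np

def kernel_c(t, s, lam, alpha2, tau=1.0):
    k = np.arange(1, len(lam) + 1)
    lam0 = (np.pi * k) ** 2
    alpha2_0 = 1.0 / (2 * np.pi ** 2 * k ** 2)

    def term(l, a2):
        sl = np.sqrt(l)       # assumes λ > 0; negative eigenvalues would need sinh terms
        return np.sin(sl * (tau - t)) * np.sin(sl * (tau - s)) / (l * a2)

    return float(np.sum(term(np.asarray(lam), np.asarray(alpha2)) - term(lam0, alpha2_0)))
```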
Using the BCM leads to the following: for the fixed τ ∈ (0, 1) one solves the equation
(C^τ f^τ)(t) = τ - t,   0 < t < τ.   (2.21)
Setting
µ(τ) := f^τ(+0),   (2.22)
and then varying τ ∈ (0, 1), the potential at the point τ is recovered by
q(τ) = µ''(τ)/µ(τ).   (2.23)
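A rough numerical rendering of the BCM step (2.20)-(2.23) is sketched below. It assumes the routine kernel_c from the previous sketch is in scope, uses rectangle-rule quadrature and finite differences, and is meant only to illustrate the workflow, not as a validated solver.

```python
# Sketch of the BCM step: for each τ solve the discretized (I + K^τ) f = τ - t,
# read off µ(τ) = f^τ(+0), then q = µ''/µ by finite differences (cf. (2.21)-(2.23)).
import numpy as np

def recover_q(lam, alpha2, taus, m=60):
    mu = []
    for tau in taus:
        t = np.linspace(0, tau, m)
        h = t[1] - t[0]
        K = np.array([[kernel_c(ti, sj, lam, alpha2, tau) for sj in t] for ti in t])
        A = np.eye(m) + K * h                     # discretized (I + K^τ)
        f = np.linalg.solve(A, tau - t)           # equation (2.21)
        mu.append(f[0])                           # µ(τ) = f^τ(+0)
    mu = np.array(mu)
    d2mu = np.gradient(np.gradient(mu, taus), taus)
    return d2mu / mu                              # q(τ) = µ''(τ)/µ(τ)
```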
We can also make use of the Gelfand-Levitan theory. According to this approach, for τ ≡ 1, the kernel c τ satisfies the following integral equation with unknown V :
V(y, t) + c^τ(y, t) + ∫_y^τ c^τ(t, s) V(y, s) ds = 0,   0 < y < t < 1.   (2.24)
Solving the equation (2.24) for all y ∈ (0, 1) we can recover the potential using
q(y) = 2 (d/dy) V(τ - y, τ - y).   (2.25)
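The Gelfand-Levitan route can be discretized in the same spirit: for each y one solves a small linear system for V(y, ·) and then differentiates the diagonal as in (2.25). Again this is only a sketch under the same assumptions as above (kernel_c in scope, λ_k > 0, crude quadrature, noisy numerical differentiation).

```python
# Sketch of the Gelfand-Levitan step (2.24)-(2.25) with τ = 1.
import numpy as np

def recover_q_gl(lam, alpha2, n_y=49, m=80, tau=1.0):
    ys = np.linspace(0.02, 0.98, n_y)             # symmetric grid about τ/2
    diag = []
    for y in ys:
        s = np.linspace(y, tau, m)
        h = s[1] - s[0]
        C = np.array([[kernel_c(ti, sj, lam, alpha2, tau) for sj in s] for ti in s])
        rhs = -np.array([kernel_c(y, ti, lam, alpha2, tau) for ti in s])
        V = np.linalg.solve(np.eye(m) + C * h, rhs)   # V(y, t) on the grid t ∈ [y, τ]
        diag.append(V[0])                             # w(y) := V(y, y)
    w = np.array(diag)
    dw = np.gradient(w, ys)
    q = -2 * dw[::-1]        # q(x) = 2 d/dx w(τ - x) = -2 w'(τ - x); uses symmetry of ys
    return ys, q
```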
Once the potential has been found, we can recover the eigenfunctions φ_k, the traces φ'_k(0) and, using (2.10), the Fourier coefficients a_k, k = 1, . . . , ∞. Thus, the initial state can be recovered via its Fourier series. We can also use the method of observers (see [START_REF] Ramdani | Recovering the initial state of an infinite-dimensional system using observers[END_REF]).
The algorithm
1) Take r 0 (t) := u x (0, t) and solve the generalized spectral problem (2.5).
Denote the solution by {µ n , f n (t)} ∞ n=1 and note the connection with the spectra of A: λ n = -iµ n .
2) Take the function r̄_0(t) and repeat step one. This yields the sequence {µ̄_n, g_n(t)}_{n=1}^∞.
3) Define the operator C_0^T by (2.4) and normalize f_n, g_n according to equation (2.7): (C_0^T f_n, g_n) = 1.
4) Find the quantities a_n φ'_n(0) by (2.8), (2.9) and (2.10).
5) Repeat steps 1)-4) for the function r_1(t) := u_x(1, t) to find a_k φ'_k(1).
6) Define the spectral data D by (2.11) and find the norming coefficients α_k by using (2.15) and (2.14), (2.16).
7) Introduce the spectral function ρ(λ), the regularized spectral function σ(λ) and the kernel c^τ respectively defined by (2.17), (2.18) and (2.19).
8) Solve the inverse problem by either the BCM using (2.21), (2.22), (2.23) or the Gelfand-Levitan method using equations (2.24), (2.25).
9) Use the method of iterative observers described in [START_REF] Ramdani | Recovering the initial state of an infinite-dimensional system using observers[END_REF] or Fourier series to recover the initial data.
Our approach yields the following uniqueness result for the inverse problem for 1-d Schrödinger equation: Theorem 2.4. Let the source a ∈ H 1 0 (0, 1) be generic and T be an arbitrary positive number. Then the potential q ∈ L 1 (0, 1) and the initial data are uniquely determined by the observation {u x (0, t), u x (1, t)} for t ∈ (0, T ).
The method can be applied to the inverse problem for the wave and parabolic equations with a potential on the interval as well. The details of recovering the spectrum λ_k and the quantities a_k φ'_k(0), a_k φ'_k(1) can be found in [START_REF] Avdonin | Spectral estimation and inverse initial boundary value problems[END_REF]. The following important remark, connected with the types of controllability of the corresponding systems, should be taken into consideration:
Remark 2.5. The time T of the observation can be arbitrary small for the case of Schrödinger and parabolic equations and it is equal to the double length of the space interval (i.e., T = 2 in our case) for the wave equation with the potential.
For the details see [START_REF] Avdonin | Spectral estimation and inverse initial boundary value problems[END_REF].
3 Stability of the scheme : the case of truncated spectral data
In view of applications, in this section we consider the case where only a finite number of eigenvalues of A are available. More precisely, let us assume that we recovered the exact values of the first N eigenvalues λ n and traces A n (see (2.14)), for n = 1, . . . , N . Then we can introduce the approximate normalizing coefficients α n,N by the rule
α_{n,N} = A_n B_{n,N},   where   B_{n,N} := -(1/(n²π²)) ∏_{k=1, k≠n}^{N} (λ_k - λ_n)/(k²π²).   (3.1)
We can estimate
|α_n - α_{n,N}| ≤ |A_n| |B_{n,N}| | 1 - ∏_{k=N+1}^{∞} (λ_k - λ_n)/(k²π²) |.   (3.2)
Since the infinite product (2.16) converges, the right hand side of the above inequality is small as soon as N is big enough, provided n is fixed. But the following remark should be taken into the account. Let us remind the following asymptotic formulas for the eigenvalues and norming coefficients:
λ_k = π²k² + ∫_0^1 q(s) ds + O(1/k²),   k → ∞,   (3.3)
α²_k = 1/(2π²k²) + O(1/k⁴),   k → ∞.   (3.4)
Then the infinite product in the right hand side of (3.2) can be rewritten as
∞ k N +1 1 - n 2 + O 1 n 2 k 2 + O 1 k 4 (3.5)
We can see that if n is close to N , then the terms
(n² + O(1/n²))/k²
are close to one, and consequently the first factors in (3.5) and the whole product are small. This implies the infinite product in the right hand side in (3.2) is not close to one. This simple observation yields that we can not guarantee the good estimate in (3.2) when n is close to N .
We set up the following question: how many eigenvalues (we call their number by N ) we need to know in order to recover first n approximate normalizing coefficients with a good accuracy, using formula (3.1). In other words, assuming the infinite product in (3.2) or in (3.5) to be close to one, we need to find out the admissible relationship between N and n. Let the parameter γ be such that |γ| < x, we notice that if 0
< (x + γ) < θ < 1 then | ln (1 -x + γ)| < 2 | ln (1-θ)| θ x.
Using this observation, we can estimate for n 2
(N +1) 2 θ: ln ∞ k N +1 1 - n 2 k 2 + O 1 n 2 k 2 + O 1 k 4 ∞ k N +1 ln 1 - n 2 k 2 + O 1 n 2 k 2 + O 1 k 4 2 ∞ k N +1 | ln (1 -θ)| θ n 2 k 2 = 2n 2 | ln (1 -θ)| θ π 2 6 - N k=1 1 k 2 (3.6)
We fix some ε > 0 and choose N and n such that the right hand side of (3.6) is less than ε, then
e^{-ε} < ∏_{k=N+1}^{∞} (1 - n²/k²) < 1.
Consequently, for such N and n we have (see (3.2)):
|α_n - α_{n,N}| ≤ |A_n| |B_{n,N}| (1 - e^{-ε}).   (3.7)
Notice that since
A n = 1 + o(1) and α n = 1 √ 2πn + o(1) as n → ∞, the product |A n || B n, N | is bounded by some positive C < 4. Using formula π 2 6 = N k=1 1 k - 1 2k 2 + O 1 N 3 from [19]
, we summarize all our observations in the lemma:
Lemma 3.1. Let 0 < ε < 1. If n and N satisfy
n²/N ≤ ε/(2 ln 2),   (3.8)
then there exists an absolute constant C > 0 such that
|α_k - α_{k,N}| ≤ C (1 - e^{-ε}),   ∀k = 1, . . . , n.   (3.9)
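Condition (3.8) translates directly into a rule of thumb for how many eigenvalues must be recovered; a short illustration (ours) follows.

```python
# Sketch: number of eigenvalues N needed for a given n and ε, according to (3.8).
import math

def eigenvalues_needed(n, eps):
    return math.ceil(2 * math.log(2) * n ** 2 / eps)

for n in (5, 10, 20):
    print(n, eigenvalues_needed(n, eps=0.1))   # e.g. n = 10 requires N ≈ 1386
```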
Let σ(λ) be the regularized spectral function [START_REF] Avdonin | The boundary control approach to inverse spectral theory[END_REF]. We recall the representations (see [START_REF] Avdonin | The boundary control approach to inverse spectral theory[END_REF]) for the response function and the kernel c τ (in our case τ = 1): Lemma 3.2. Assume that q ∈ L 1 (0, 1). Then the following representation for the response function r,
r(t) = ∫_{-∞}^{∞} (sin(√λ t)/√λ) dσ(λ),   (3.10)
holds for almost all t ∈ (0, 2τ ). The kernel c τ (t, s) admits the representation (2.19) with the integral in the right-hand side converging uniformly on
[0, τ ] × [0, τ ].
The function c τ (t, s) can also be represented as:
c^τ(t, s) = p(2τ - t - s) - p(t - s),   (3.11)
where p(t) is defined by
p(t) := (1/2) ∫_0^{|t|} r(s) ds.
The next useful formula follows directly from (3.11):
c^τ(t, t) = (1/2) ∫_0^{2(τ-t)} r(s) ds.   (3.12)
If the exact values of the first n eigenvalues λ k and normalizing factors α 2 k , k = 1, . . . , n, were known, then one could construct the "restricted" response functions and kernels defined by
r n (t) = n k=1 sin √ λ k t √ λ k sign λ k α 2 k - sin λ 0 k t λ 0 k 1 (α 0 k ) 2 , c τ n (t, s) = n k=1 sin √ λ k (τ -t) sin √ λ k (τ -s) λ k sign λ k α 2 k - sin λ 0 k (τ -t) sin λ 0 k (τ -s) λ 0 k 1 (α 0 k ) 2 .
from the representation it follows that every r n ∈ C ∞ (0, 2τ ) and Lemma
c τ n, N (t, s) = n k=1 sin √ λ k (τ -t) sin √ λ k (τ -s) λ k sign λ k α 2 k, N - sin λ 0 k (τ -t) sin λ 0 k (τ -s) λ 0 k 1 (α 0 k ) 2 .
(3.16)
We can estimate the difference
‖c^τ_n - c^τ_{n,N}‖_∞ ≤ ∑_{k=1}^n |α²_{k,N} - α²_k| / (λ_k α²_{k,N} α²_k) = ∑_{k=1}^n |α_{k,N} + α_k| |α_{k,N} - α_k| / (λ_k α²_{k,N} α²_k).   (3.17)
Using (3.9) and the asymptotic expansions for the eigenvalues and norming coefficients (3.3) (3.4), we deduce from (3.17) that (below, C denotes an absolute constant that might change from line to line):
‖c^τ_n - c^τ_{n,N}‖_∞ ≤ C ∑_{k=1}^n |α²_{k,N} - α²_k| / (λ_k α²_{k,N} α²_k) ≤ C ∑_{k=1}^n k |α_{k,N} - α_k|.
Let us fix some ε > 0 and n ∈ N. Applying Lemma 3.2 and choosing N such that estimate (3.9) holds, we finally get
‖c^τ_n - c^τ_{n,N}‖_∞ ≤ C (n(n + 1)/2) (1 - e^{-ε}).   (3.18)
We fix now some δ > 0. According to (3.14) we can take n ∈ N such that
‖c^τ_n - c^τ‖_∞ ≤ δ/2.   (3.19)
We estimate the difference
‖c^τ_{n,N} - c^τ‖_∞ ≤ ‖c^τ_{n,N} - c^τ_n‖_∞ + ‖c^τ_n - c^τ‖_∞.
Assuming that n is chosen such that (3.18) and (3.19) hold simultaneously, we obtain the existence of a constant C* > 0 such that
‖c^τ_{n,N} - c^τ‖_∞ ≤ C* n² (1 - e^{-ε}) + δ/2.   (3.20)
Then by choosing an appropriate ε in (3.20) (which results in possible increasing of N , see (3.8)), we achieve
‖c^τ_{n,N} - c^τ‖_∞ ≤ δ.
We summarize the above observations in the following statement.
Proposition 3.3. Let δ > 0 be fixed. Let n be chosen such that
‖c^τ_n - c^τ‖_∞ ≤ δ/2
(this is possible thanks to (3.14)). Next, take ε > 0 such that
n² (1 - e^{-ε}) ≤ δ/2.
Finally, choose N such that
n²/N ≤ ε/(2 ln 2).
Then, there exists an absolute constant C > 0 such that the following estimate holds true: ‖c^τ_{n,N} - c^τ‖_∞ ≤ Cδ, where the approximate restricted kernel c^τ_{n,N} and the approximate normalizing coefficients α_{k,N}, k = 1, . . . , n, are defined respectively by (3.16) and (3.1).
In particular, c τ n, N converges uniformly to c τ when n tends to infinity and N is chosen as above.
Along with the equation (2.24) we consider the equation for the approximate restricted kernel c^τ_{n,N}:
V_{n,N}(y, t) + c^τ_{n,N}(y, t) + ∫_y^τ c^τ_{n,N}(t, s) V_{n,N}(y, s) ds = 0,   0 < y < t < τ.   (3.21)
Picking δ > 0 we can use Proposition 3.3 to find n and N such that ‖c^τ_{n,N} - c^τ‖_∞ ≤ δ. From now on, we always assume that n and N are chosen according to Proposition 3.3. Note that in particular we have N → +∞ as n → +∞. We introduce the operator K^{n,N}_τ given by the integral part of (2.20) with the kernel c^τ substituted by c^τ_{n,N}: K^{n,N}_τ f(t) = ∫_0^τ c^τ_{n,N}(t, s) f(s) ds. The closeness of c^τ and c^τ_{n,N} implies that the operator I + K^{n,N}_τ is invertible along with I + K^τ. The latter in turn implies the existence of the potential q_{n,N} that produces the response function
r n, N (t) = n k=1 sin √ λ k t √ λ k sign λ k α 2 k, N - sin λ 0 k t λ 0 k 1 (α 0 k ) 2 ,
and the unique solvability of (3.21) (see [START_REF] Avdonin | The boundary control approach to inverse spectral theory[END_REF], [START_REF] Belishev | Characterization of data in the dynamic inverse problem for a two-velocity system[END_REF], [START_REF] Avdonin | Matrix inverse problem for the equation u tt -u xx + Q(x)u = 0[END_REF]).
We define M := (I + K τ ) -1 . The invertibility of I + K τ and I + K n, N τ implies the norms of the solutions V (y, •) L 2 and V n, N (y, •) L 2 to be bounded.
Let us write down the difference (2.24) and (3.21) in the form
V(y, t) - V_{n,N}(y, t) + ∫_y^τ c^τ(t, s) [V(y, s) - V_{n,N}(y, s)] ds = c^τ_{n,N}(y, t) - c^τ(y, t) + ∫_y^τ [c^τ(t, s) - c^τ_{n,N}(t, s)] V_{n,N}(y, s) ds,   0 < y < t < τ.   (3.22)
The equality above and the invertibility of I + K τ implies the estimate:
‖V(y, ·) - V_{n,N}(y, ·)‖_{L²(y,τ)} ≤ M (1 + max_{0≤y≤τ} ‖V_{n,N}(y, ·)‖_{L²}) ‖c^τ - c^τ_{n,N}‖_∞.   (3.23)
To estimate the L^∞ norm, we write down (3.22) in the form
V(y, t) - V_{n,N}(y, t) = ∫_y^τ c^τ(t, s) [V_{n,N}(y, s) - V(y, s)] ds + c^τ_{n,N}(y, t) - c^τ(y, t) + ∫_y^τ [c^τ(t, s) - c^τ_{n,N}(t, s)] V_{n,N}(y, s) ds,   0 < y < t < τ.   (3.24)
This leads to the following inequality:
‖V(y, ·) - V_{n,N}(y, ·)‖_∞ ≤ ‖c^τ‖_∞ max_{0≤y≤τ} ∫_y^τ |V_{n,N}(y, s) - V(y, s)| ds + ‖c^τ_{n,N} - c^τ‖_∞ + ‖c^τ - c^τ_{n,N}‖_∞ max_{0≤y≤τ} ∫_y^τ |V_{n,N}(y, s)| ds.   (3.25)
Using (3.23) and (3.25) we finally get
‖V - V_{n,N}‖_∞ ≤ ‖c^τ - c^τ_{n,N}‖_∞ (M ‖c^τ‖_∞ + 1) (1 + max_{0≤y≤τ} ‖V_{n,N}(y, ·)‖_{L²}).
The last inequality in particular implies the uniform convergence of V_{n,N}(y, y) to V(y, y). Using (2.25), we conclude that the potentials q_{n,N} and q corresponding respectively to V_{n,N} and V satisfy
∫_0^t q_{n,N}(s) ds ⇒ ∫_0^t q(s) ds,   n → ∞, uniformly in t.   (3.26)
The latter, in turn, implies that q_{n,N} → q in H^{-1}(0, 1).
In fact, (3.26) yields more than that: we have the following
Theorem 3.4. If n, N satisfy the conditions from Proposition 3.3, then
(1/ε) ∫_t^{t+ε} q_{n,N}(s) ds → (1/ε) ∫_t^{t+ε} q(s) ds as n → ∞, uniformly in t, ε.   (3.27)
We remark that (3.27) is still not enough to guarantee the convergence of q_{n,N} to q almost everywhere on (0, 1).
On the other hand, let us restrict (2.24) to the diagonal y = t:
V(y, y) + c^T(y, y) + ∫_y^T c^T(y, s) V(y, s) ds = 0,   0 < y < 1,   (3.28)
and, recalling (2.25) and (3.12), we see that the best possible result one can expect is convergence of q_{n,N} to q almost everywhere on (0, 1).
Remark 3.5. The stability of the scheme crucially depends on the type of the convergence r n → r. For now we know (see [START_REF] Avdonin | The boundary control approach to inverse spectral theory[END_REF]) that the convergence is pointwise almost everywhere on the interval. The significant progress in the proving of the stability could be achieved by the improvement of this result.
Funding
This work was supported by RFFI 11-01-00407A, RFFI 12-01-31446, the Chebyshev Laboratory (Department of Mathematics and Mechanics, St.
Petersburg State University) under RF Government grant 11.G34.31.0026 and the FRAE (Fondation de Recherche pour l'Aéronautique et l'Espace, research project IPPON).
The final version of the paper was prepared when the first author took part in the semester on Inverse Problems in the Institute of Mittag-Leffler. The author is grateful to the Institute for their hospitality. The authors are also very thankful to the anonymous referee for valuable remarks. | 27,883 | [
"3474"
] | [
"154874",
"217892",
"2368"
] |
01467658 | en | [
"math"
] | 2024/03/04 23:41:44 | 2018 | https://hal.science/hal-01467658/file/traces-nevanlinna-def.pdf | A Hartmann
email: [email protected]
X Massaneda
email: [email protected]
A Nicolau
TRACES OF THE NEVANLINNA CLASS ON DISCRETE SEQUENCES
Keywords: 2000 Mathematics Subject Classification. 30D55, 30E05, 42A85 Interpolating sequences, Nevanlinna class, Divided differences
DEFINITIONS AND STATEMENT
This note deals with some properties of the classical Nevanlinna class consisting of the holomorphic functions in the unit disk D for which log + |f | has a positive harmonic majorant. We denote by Har + (D) the set of non-negative harmonic functions in D. Equivalently,
N = { f ∈ Hol(D) : lim_{r→1} (1/2π) ∫_0^{2π} log⁺ |f(re^{iθ})| dθ < ∞ }.
Definition. A discrete sequence of points Λ in D is called interpolating for N (denoted Λ ∈ Int N ) if the trace space N|Λ is ideal, or equivalently, if for every v ∈ ℓ ∞ there exists f ∈ N such that f (λ n ) = v n , n ∈ N.
Interpolating sequences for the Nevanlinna class were first investigated by Naftalevitch [START_REF] Naftalevič | On interpolation by functions of bounded characteristic (Russian), Vilniaus Valst[END_REF]. A rather complete study was carried out much later in [START_REF] Hartmann | Interpolation in the Nevanlinna and Smirnov classes and harmonic majorants[END_REF]. Let N(Λ) denote the set of sequences ω(Λ) = {ω(λ)}_{λ∈Λ} for which there exists H ∈ Har⁺(D) such that |ω(λ)| ≤ e^{H(λ)}, λ ∈ Λ. Other properties and characterizations of Nevanlinna interpolating sequences have been given recently in [START_REF] Hartmann | Finitely generated Ideals in the Nevanlinna class[END_REF]. In these terms Λ ∈ Int N when for every sequence ω(Λ) ∈ N(Λ) there exists f ∈ N such that f(λ) = ω(λ), λ ∈ Λ.
R_Λ : N → N(Λ),   f ↦ {f(λ)}_{λ∈Λ},
Λ is interpolating when R Λ (N ) = N (Λ).
Definition 1.1. Let Λ be a discrete sequence in D and ω a function given on Λ. The pseudohyperbolic divided differences of ω are defined by induction as follows
∆⁰ω(λ_1) = ω(λ_1),
∆^j ω(λ_1, . . . , λ_{j+1}) = [ ∆^{j-1}ω(λ_2, . . . , λ_{j+1}) - ∆^{j-1}ω(λ_1, . . . , λ_j) ] / b_{λ_1}(λ_{j+1}),   j ≥ 1.
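For readers who want to experiment, the recursion of Definition 1.1 is straightforward to implement; the sketch below (ours) assumes the standard normalization b_a(z) = (a - z)/(1 - āz) of the Blaschke factor.

```python
# Sketch (illustrative only): pseudohyperbolic divided differences of Definition 1.1.
def blaschke(a, z):
    return (a - z) / (1 - a.conjugate() * z)

def divided_difference(omega, pts):
    """omega: dict λ -> ω(λ); pts: tuple of distinct points of Λ; returns ∆^{len(pts)-1}ω."""
    if len(pts) == 1:
        return omega[pts[0]]
    upper = divided_difference(omega, pts[1:])    # ∆^{j-1}ω(λ_2, ..., λ_{j+1})
    lower = divided_difference(omega, pts[:-1])   # ∆^{j-1}ω(λ_1, ..., λ_j)
    return (upper - lower) / blaschke(pts[0], pts[-1])

lam = (0.1 + 0j, 0.5j, -0.3 + 0.2j)
omega = {l: l ** 2 for l in lam}                  # sample values of a function on Λ
print(divided_difference(omega, lam))             # ∆²ω(λ_1, λ_2, λ_3)
```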
For any n ∈ N, denote
Λ^n = {(λ_1, . . . , λ_n) ∈ Λ × ··· × Λ : λ_j ≠ λ_k if j ≠ k},
and consider the set X n-1 (Λ) consisting of the functions defined in Λ with divided differences of order n -1 uniformly controlled by a positive harmonic function H i.e., such that for some
H ∈ Har⁺(D),
sup_{(λ_1,...,λ_n) ∈ Λ^n} |∆^{n-1}ω(λ_1, . . . , λ_n)| e^{-[H(λ_1)+···+H(λ_n)]} < +∞.
Lemma 1.2. Let n ∈ N. For any sequence Λ ⊂ D, we have X_n(Λ) ⊂ X_{n-1}(Λ) ⊂ ··· ⊂ X_0(Λ) = N(Λ).
Proof. Assume that ω(Λ) ∈ X n (Λ), that is,
sup (λ 1 ,...,λ n+1 )∈Λ n+1 ∆ n-1 ω(λ 2 , . . . , λ n+1 ) -∆ n-1 ω(λ 1 , . . . , λ n ) b λ 1 (λ n+1 ) e -[H(λ 1 )+•••+H(λ n+1 )] < ∞ .
Then, given (λ 1 , . . . , λ n ) ∈ Λ n and taking λ 0 1 , . . . , λ 0 n from a finite set (for instance the n first λ 0 j ∈ Λ different of all λ j ) we have
∆ n-1 ω(λ 1 , . . . , λ n ) = ∆ n-1 ω(λ 1 , . . . , λ n ) -∆ n-1 ω(λ 0 1 , λ 1 , . . . , λ n-1 ) b λ 0 1 (λ n ) b λ 0 1 (λ n )+ + ∆ n-1 ω(λ 0 1 , λ 1 , . . . , λ n-1 ) -∆ n-1 ω(λ 0 2 , λ 0 1 , . . . , λ n-2 ) b λ 0 2 (λ n-1 ) b λ 0 2 (λ n-1 ) + • • • + ∆ n-1 ω(λ 0 n-1 , . . . , λ 0 1 , λ 1 ) -∆ n-1 ω(λ 0 n , . . . , λ 0 1 ) b λ 0 n (λ 1 ) b λ 0 n (λ 1 ) + ∆ n-1 ω(λ 0 n , . . . , λ 0 1 )
Since ω ∈ X n-1 (Λ) there exists H ∈ Har + (D) and a constant K(λ 0 1 , . . . , λ 0 n ) such that λn) , and the statement follows.
∆ n-1 ω(λ 1 , . . . , λ n ) ≤ e H(λ 0 1 )+H(λ 1 )•••+H(λn) ρ(λ 0 1 , λ n ) + e H(λ 0 1 )+H(λ 0 2 )•••+H(λ n-1 ) ρ(λ 0 2 , λ n-1 )+ + • • • + e H(λ 0 1 )+•••+H(λ 0 n )+H(λ 1 ) ρ(λ 0 n , λ 1 ) + ∆ n-1 ω(λ 0 n , . . . , λ 0 1 ) ≤ K(λ 0 1 , . . . , λ 0 n ) e H(λ 1 )+•••+H(
The main result of this note is modelled after Vasyunin's description of the sequences Λ in D such that the trace of the algebra of bounded holomorphic functions H ∞ on Λ equals the space of pseudohyperbolic divided differences of order n (see [START_REF] Vasyunin | Traces of bounded analytic functions on finite unions of Carleson sets (Russian). Investigations on linear operators and the theory of functions[END_REF], [START_REF] Vasyunin | Characterization of finite unions of Carleson sets in terms of solvability of interpolation problems (Russian). Investigations on linear operators and the theory of functions[END_REF]). Similar results hold also for Hardy spaces (see [START_REF] Bruna | A note on interpolation in the Hardy spaces of the unit disc[END_REF] and [START_REF] Hartmann | Une approche de l'interpolation libre gnralise par la thorie des oprateurs et caractrisation des traces H p |Λ. (French) [An approach to generalized free interpolation using operator theory and characterization of the traces H p |Λ[END_REF]) and the Hörmander algebras, both in C and in D [START_REF] Massaneda | Traces of Hörmander algebras on discrete sequences[END_REF]. The analogue in our context is the following.
Main Theorem. The identity N |Λ = X n-1 (Λ) holds if and only if Λ is the union of n interpolating sequences for N .
GENERAL PROPERTIES
Throughout the proofs we will use repeatedly the well-known Harnack inequalities: for H ∈ Har + (D) and z, w ∈ D,
(1 - ρ(z, w)) / (1 + ρ(z, w)) ≤ H(z)/H(w) ≤ (1 + ρ(z, w)) / (1 - ρ(z, w)).
We shall always assume, without loss of generality, that H ∈ Har + (D) is big enough so that for z ∈ D(λ, e -H(λ) ) the inequalities 1/2 ≤ H(z)/H(λ) ≤ 2 hold. Actually it is sufficient to assume inf{H(z) : z ∈ D} ≥ log 3.
We begin by showing that one of the inclusions of the Main Theorem is inmediate.
Proposition 2.1. For all n ∈ N, the inclusion N |Λ ⊂ X n-1 (Λ) holds.
Proof. Let f ∈ N . Let us show by induction on j ≥ 1 that there exists H ∈ Har + (D) such that
|∆ j-1 f (z 1 , . . . , z j )| ≤ e H(z 1 )+•••+H(z j )
for all (z 1 , . . . , z j ) ∈ D j .
As f ∈ N , there exists
H ∈ Har + (D) such that |∆ 0 f (z 1 )| = |f (z 1 )| ≤ e H(z 1 ) , z 1 ∈ D.
Assume that the property is true for j and let (z 1 , . . . , z j+1 ) ∈ D j+1 . Fix z 1 , . . . , z j and consider z j+1 as the variable in the function
∆ j f (z 1 , . . . , z j+1 ) = ∆ j-1 f (z 2 , . . . , z j+1 ) -∆ j-1 f (z 1 , . . . , z j ) b z 1 (z j+1 ) .
By the induction hypothesis, there exists H ∈ Har + (D) such that
|∆ j f (z 1 , . . . , z j+1 )| ≤ 1 ρ(z 1 , z j+1 ) e H(z 2 )+•••+H(z j+1 ) + e H(z 1 )+•••+H(z j ) . If ρ(z 1 , z j+1 ) ≥ 1/2 we get directly |∆ j f (z 1 , . . . , z j+1 )| ≤ 4e H(z 1 )+•••+H(z j+1 ) ,
and choosing for instance H = H + log 4 we get the desired estimate.
If ρ(z 1 , z j+1 ) ≤ 1/2 we apply the maximum principle and Harnack's inequalities
|∆ j f (z 1 , . . . , z j+1 )| ≤ sup ξ:ρ(ξ,z j+1 )=1/2 |∆ j f (z 1 , . . . , z j , ξ j+1 )| ≤ sup ξ:ρ(ξ,z j+1 )=1/2 4e H(z 1 )+•••+H(z j )+H(ξ) ≤ 4e 2[H(z 1 )+•••+H(z j )+H(z j+1 )] .
Choosing here H = 2H + log 4 we get the desired estimate.
Definition 2.2. A sequence Λ is weakly separated if there exists H ∈ Har + (D) such that the disks D(λ, e -H(λ) ), λ ∈ Λ, are pairwise disjoint. Remark 2.3. If Λ is weakly separated then X 0 (Λ) = X n (Λ), for all n ∈ N.
By Lemma 1.2, to see this it is enough to prove (by induction) that X 0 (Λ) ⊂ X n (Λ) for all n ∈ N.
For n = 0 this is trivial. Assume now that X 0 (Λ) ⊂ X n-1 (Λ) and take ω(Λ) ∈ X 0 (Λ). Since ρ(λ 1 , λ n+1 ) ≥ e -H 0 (λ 1 ) for some H 0 ∈ Har + (D) we have
|∆ n ω(λ 1 , . . . , λ n+1 )| = ∆ n-1 ω(λ 2 , . . . , λ n+1 ) -∆ n-1 ω(λ 1 , . . . , λ n ) b λ 1 (λ n+1 ) ≤ e H 0 (λ 1 ) e H(λ 2 )+•••+H(λ n+1 ) + e H(λ 1 )+•••+H(λn)
for some H ∈ Har + (D). Taking H = H + H 0 we are done.
Lemma 2.4. Let n ∈ N and let Λ be a discrete sequence in D. The following are equivalent:
(a) Λ is the union of n weakly separated sequences;
(b) there exists H ∈ Har⁺(D) such that sup_{λ∈Λ} #[Λ ∩ D(λ, e^{-H(λ)})] ≤ n;
(c) X_{n-1}(Λ) = X_n(Λ).
Proof. (a) ⇒(b). This is clear, by the weak separation. (b) ⇒(a). We proceed by induction on j = 1, . . . , n. For j = 1, it is again clear by the definition of weak separation. Assume the property true for j -1.
Let H ∈ Har + (D) , inf{H(z) : z ∈ D} ≥ log 3, be such that sup λ∈Λ #[Λ∩D(λ, e -H(λ) )] ≤ j. We split the sequence Λ = Λ a ∪Λ b where Λ a = {λ∈Λ:#(Λ∩D(λ,e -10H(λ) ))=j} (Λ ∩ D(λ, e -10H(λ) )) Λ b = Λ \ Λ a
Now, for every λ ∈ Λ b , we have #(Λ∩D(λ, e -10H(λ) )) ≤ j -1, and by the induction hypothesis, Λ b splits into j -1 separated sequences Λ 1 , . . . , Λ j-1 .
In the case λ ∈ Λ a , there is obviously no point in the annulus D(λ, e -H(λ) ) \ D(λ, e -10H(λ) ) which means that the j points in D(λ, e -10H(λ) )) are far from the other points of Λ. So we can add each one of these j points in a weakly separated way to one of the sequences Λ 1 , . . . , Λ j-1 , and the j-th point in a new sequence Λ j (which is of course weakly separated since the groups Λ ∩ D(λ, e -10H(λ) ) appearing in Λ a are weakly separated).
(b)⇒(c). It remains to see that X n-1 (Λ) ⊂ X n (Λ). Given ω(Λ) ∈ X n-1 (Λ) and points (λ 1 , . . . , λ n+1 ) ∈ Λ n+1 , we have to estimate ∆ n ω(λ 1 , . . . , λ n+1 ). Under the assumption (b), at least one of these n + 1 points is not in the disk D(λ 1 , e -H(λ 1 ) ). Note that Λ n is invariant by permutation of the n + 1 points, thus we may assume that ρ(λ 1 , λ n+1 ) ≥ e -H(λ 1 ) . Using the fact that ω(Λ) ∈ X n-1 (Λ), there exists H 0 ∈ Har + (D) such that
|∆ n ω(λ 1 , . . . , λ n+1 )| ≤ |∆ n-1 ω(λ 2 , . . . , λ n+1 )| + |∆ n-1 ω(λ 1 , . . . , λ n )| ρ(λ 1 , λ n+1 ) ≤ e H(λ 1 ) e H 0 (λ 2 )+•••+H 0 (λ n+1 ) + e H 0 (λ 1 )+•••+H 0 (λn) ≤ 2e H(λ 1 ) e H 0 (λ 1 )+•••+H 0 (λ n+1 ) .
Taking H = H 0 + H + log 2 we get the desired estimate. (c)⇒(b). We prove this by contraposition. Assume that for all H ∈ Har + (D) there exists λ ∈ Λ such that
#[Λ ∩ D(λ, e -H(λ) )] > n . (2)
Consider the partition of D into the dyadic squares
Q_{k,j} = { z = re^{iθ} ∈ D : 1 - 2^{-k} ≤ r < 1 - 2^{-k-1},  j 2π 2^{-k} ≤ θ < (j + 1) 2π 2^{-k} },
where k ≥ 0 and j = 0, . . .
2 k -1. Let Λ k,j = Λ ∩ Q k,j and r k,j = inf{r > 0 : ∃λ ∈ Λ k,j : #(Λ ∩ D(λ, r)) ≥ n + 1}. Take α k,j ∈ Λ k,j such that #(Λ ∩ D(α k,j , r k,j )) ≥ n + 1. Claim: For all H ∈ Har + (D), inf k,j r k,j e -H(α k,j ) = 0 .
To see this assume otherwise that there exist H ∈ Har + (D) and η > 0 with r k,j e -H(α k,j ) ≥ η .
In particular, by Harnack's inequalities,
(3) log 1 r k,j ≤ 3H(z) + log( 1 η ), z ∈ Q k,j .
Let H := log(2/η) + 4H ∈ Har + (D). By the hypothesis (2) there exist k 0 ≥ 0, j 0 ∈ {0, . . . ,
2 k 0 -1}, λ k 0 ,j 0 ∈ Λ k 0 ,j 0 such that # Λ ∩ D(λ k 0 ,j 0 , e -H(λ k 0 ,j 0 ) ) ≥ n + 1 .
In particular, by definition of r k,j , we have r k 0 ,j 0 ≤ e -H(λ k 0 ,j 0 ) , that is
log 1 r k 0 ,j 0 ≥ H(λ k 0 ,j 0 ) = log( 2 η ) + 4H(λ k 0 ,j 0 ),
which contradicts (3). Now take a separated sequence L ⊂ {α k,j } k,j for which the disks D(α, r α ), α ∈ L, are disjoint, where for α = α k,j ∈ L we denote r α = r k,j . Given α ∈ L, let λ α 1 , . . . , λ α n be its n nearest (not necessarily unique) points, arranged by increasing distance. Notice that ρ(α, λ α n ) = r α .
In order to construct a sequence ω(Λ)
∈ X n-1 (Λ) \ X n (Λ), put ω(α) = n-1 j=1 b α (λ α j ), for all α ∈ L ω(λ) = 0 if λ ∈ Λ \ L.
To see that ω(Λ) ∈ X n-1 (Λ) let us estimate ∆ n-1 ω(λ 1 , . . . , λ n ) for any given (λ 1 , . . . , λ n ) ∈ Λ n . By the separation conditions on L, we know that none of the λ α j is in L. Hence, we may assume that at most one of the points is in L. On the other hand, it is clear that ∆ n-1 ω(λ 1 , . . . , λ n ) = 0 if all the points are in Λ \ L. Thus, taking into account that ∆ n-1 is invariant by permutations, we will only consider the case where λ n is some α ∈ L and λ 1 , . . . , λ n-1 are in Λ \ L. In that case,
|∆ n-1 ω(λ 1 , . . . , λ n-1 , α)| = |ω(α)| n-1 j=1 ρ(α, λ j ) -1 = n-1 j=1 ρ(α, λ α j ) ρ(α, λ j ) ≤ 1,
as desired.
On the other hand, a similar computation yields
|∆ n ω(λ α 1 , . . . , λ α n , α)| = |ω(α)| n j=1 ρ(α, λ α j ) -1 = ρ(α, λ α n ) -1 = r -1 α .
The Claim above prevents the existence of H ∈ Har + (D) such that
r -1 α = |∆ n ω(λ α 1 , . . . , λ α n , α)|e -(H(λ α 1 )+•••+H(λ α n )+H(α))
≤ C , since otherwise, again by Harnack's inequalities, we would have
r -1 α ≤ e 3(n+1)H(α) , α ∈ L .
It is clear from the characterization (1) of interpolating sequences for N that such sequences must be weakly separated. The previous result gives another way of showing it.
Corollary 2.5. If Λ is an interpolating sequence, then it is weakly separated.
Proof. If Λ is an interpolating sequence, then N |Λ = X 0 (Λ). On the other hand, by Proposition 2.1, N |Λ ⊂ X 1 (Λ). Thus X 0 (Λ) = X 1 (Λ). We conclude by the preceding lemma applied to the particular case n = 1.
The covering provided by the following result will be useful.
Lemma 2.6. Let Λ 1 , . . . , Λ n be weakly separated sequences. There exist
H ∈ Har + (D), positive constants α, β, a subsequence L ⊂ Λ 1 ∪ • • • ∪ Λ n and disks D λ = D(λ, r λ ), λ ∈ L, such that (i) Λ 1 ∪ • • • ∪ Λ n ⊂ ∪ λ∈L D λ , (ii) e -βH(λ) ≤ r λ ≤ e -αH(λ) for all λ ∈ L, (iii) ρ(D λ , D λ ′ ) ≥ e -βH(λ) for all λ, λ ′ ∈ L, λ = λ ′ .
(iv) #(Λ j ∩ D λ ) ≤ 1 for all j = 1, . . . , n and λ ∈ L.
Proof. Let H ∈ Har + (D) be such that
(4) ρ(λ, λ ′ ) ≥ e -H(λ) , ∀λ, λ ′ ∈ Λ j , λ = λ ′ , ∀j = 1, . . . , n .
We will proceed by induction on k = 1, . . . , n to show the existence of a subsequence
L k ⊂ Λ 1 ∪ • • • ∪ Λ k such that: (i) k Λ 1 ∪ • • • ∪ Λ k ⊂ ∪ λ∈L k D(λ, R k λ ), (ii) k e -β k H(λ) ≤ R k λ ≤ e -α k H(λ) , (iii) k ρ(D(λ, R k λ ), D(λ ′ , R k λ ′ )) ≥ e -β k H(λ) for any λ, λ ′ ∈ L k , λ = λ ′ . Then it suffices to chose L = L n , α = α n , β = β n , r λ = R n λ .
The weak separation and the fact that r λ < e -H(λ) /3 implies that #Λ j ∩ D(λ, r λ ) ≤ 1, j = 1, . . . , k, hence the lemma follows.
For k = 1, the property is clearly verified with L 1 = Λ 1 and R 1 λ = e -CH(λ) , with C big enough so that (iii) 1 holds (C = 3, for instance). Properties (i) 1 , (ii) 1 follow immediately.
Assume the property true for k and split
L k = M 1 ∪ M 2 and Λ k+1 = N 1 ∪ N 2 ,
where
M 1 = {λ ∈ L k : D(λ, R k λ + 1/4 e -β k H(λ) ) ∩ Λ k+1 = ∅}, N 1 = Λ k+1 ∩ λ∈L k D(λ, R k λ + 1/4 e -β k H(λ) ), M 2 = L k \ M 1 , N 2 = Λ k+1 \ N 1 .
Now, we put L k+1 = L k ∪ N 2 and define the radii R k+1 λ as follows:
R k+1 λ = R k λ + 1/4 e -β k H(λ) if λ ∈ M 1 , R k λ if λ ∈ M 2 , 1/8 e -β k H(λ) if λ ∈ N 2 .
It is clear that (i) k+1 holds:
Λ 1 ∪ • • • ∪ Λ k+1 ⊂ λ∈L k+1 D(λ, R k+1 λ ) .
Also, by the induction hypothesis,
1 8 e -β k H(λ) ≤ R k+1 λ ≤ e -α k H(λ) + 1 4 e -β k H(λ) .
Thus, to see (ii) k+1 there is enough to choose α k+1 , β k+1 such that
e -α k H(λ) + 1/4 e -β k H(λ) ≤ e -α k+1 H(λ) ,
for instance α k+1 = α k -1, and
(5)
1/8 e -β k H(λ) ≥ e -β k+1 H(λ) , that is β k+1 H(λ) ≥ β k H(λ) + log 8.
Assuming without loss of generality that H(λ) ≥ log 8, there is enough choosing
β k+1 ≥ β k + 1.
In order to prove (iii)_{k+1} take now λ, λ′ ∈ L_{k+1}, λ ≠ λ′. Notice that
ρ(D(λ, R^{k+1}_λ), D(λ′, R^{k+1}_{λ′})) = ρ(λ, λ′) - R^{k+1}_λ - R^{k+1}_{λ′}.
Split into four different cases:
1. λ, λ′ ∈ L_k. Assume without loss of generality that H(λ) ≤ H(λ′). Then, by the definition of R^{k+1}_λ, we see that
ρ(D(λ, R^{k+1}_λ), D(λ′, R^{k+1}_{λ′})) ≥ ρ(λ, λ′) - R^k_λ - R^k_{λ′} - (1/4) e^{-β_k H(λ)} - (1/4) e^{-β_k H(λ′)}.
By the inductive hypothesis, ρ(λ, λ′) - R^k_λ - R^k_{λ′} = ρ(D(λ, R^k_λ), D(λ′, R^k_{λ′})) ≥ e^{-β_k H(λ)}. Thus, by (5),
ρ(D(λ, R^{k+1}_λ), D(λ′, R^{k+1}_{λ′})) ≥ e^{-β_k H(λ)} - (1/2) e^{-β_k H(λ)} = (1/2) e^{-β_k H(λ)} ≥ e^{-β_{k+1} H(λ)}.
2. λ, λ′ ∈ N_2. Assume also H(λ) ≤ H(λ′). Condition (4) implies ρ(λ, λ′) ≥ e^{-H(λ)}, hence
ρ(D(λ, R^{k+1}_λ), D(λ′, R^{k+1}_{λ′})) ≥ e^{-H(λ)} - (1/4) e^{-β_k H(λ)}.
If β_k ≥ 2, by (5) we have
ρ(D(λ, R^{k+1}_λ), D(λ′, R^{k+1}_{λ′})) ≥ e^{-2H(λ)} ≥ e^{-β_k H(λ)} ≥ e^{-β_{k+1} H(λ)}.
3. λ ∈ M_1, λ′ ∈ N_2. By definition of M_1 there exists β ∈ N_1 such that ρ(λ, β) ≤ R^k_λ + (1/4) e^{-β_k H(λ)}. Then, using (4) on β, λ′ ∈ Λ_{k+1}, we have, by Harnack's inequalities (if β_k ≥ 4),
ρ(λ, λ′) ≥ ρ(β, λ′) - ρ(λ, β) ≥ e^{-H(β)} - R^k_λ - (1/4) e^{-β_k H(λ)} ≥ e^{-2H(λ)} - (5/4) e^{-β_k H(λ)} ≥ e^{-4H(λ)} ≥ e^{-β_k H(λ)} ≥ e^{-β_{k+1} H(λ)}.
4. λ ∈ M_2, λ′ ∈ N_2. Taking into account the definition of R^{k+1}_λ, R^{k+1}_{λ′} we have
ρ(D(λ, R^{k+1}_λ), D(λ′, R^{k+1}_{λ′})) = ρ(λ, λ′) - R^k_λ - (1/8) e^{-β_k H(λ)}.
Since ρ(λ, λ′) - R^k_λ ≥ ρ(D(λ, R^k_λ), D(λ′, R^k_{λ′})), by the inductive hypothesis and by (5),
ρ(D(λ, R^{k+1}_λ), D(λ′, R^{k+1}_{λ′})) ≥ (1/4) e^{-β_k H(λ)} - (1/8) e^{-β_k H(λ)} ≥ e^{-β_{k+1} H(λ)}.
All together, it is enough to start with C > n, define α_1 = β_1 = C, and then define α_k, β_k inductively by
α_{k+1} = α_k - 1 = ··· = C - k, β_{k+1} = β_k + 1 = ··· = C + k.
PROOF OF MAIN THEOREM. NECESSITY
Assume N|_Λ = X_{n-1}(Λ), n ≥ 2. Using Proposition 2.1, we have X_{n-1}(Λ) = X_n(Λ), and by Lemma 2.4 we deduce that Λ = Λ_1 ∪ ··· ∪ Λ_n, where Λ_1, ..., Λ_n are weakly separated sequences. We want to show that each Λ_j is an interpolating sequence.
Let ω(Λ_j) ∈ N(Λ_j) = X_0(Λ_j). Let ∪_{λ∈L} D_λ be the covering of Λ given by Lemma 2.6. We extend ω(Λ_j) to a sequence ω(Λ) which is constant on each D_λ ∩ Λ in the following way:
ω|_{D_λ∩Λ} = 0 if D_λ ∩ Λ_j = ∅, and ω|_{D_λ∩Λ} = ω(α) if D_λ ∩ Λ_j = {α}.
We verify by induction that the extended sequence is in X_{k-1}(Λ) for all k ≤ n. It is clear that it belongs to X_0(Λ).
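The finite differences ∆^j are defined earlier in the paper; purely for readability, the recursion actually used in the estimate below can be restated as follows. The base case ∆^0ω(α) = ω(α) is an assumption added here for completeness; the recursive step is the quotient visible in the display that follows.

```latex
% Recursion for the divided differences used in the induction step below.
% Base case stated as an assumption for readability; the recursive step
% matches the quotient appearing in the estimate of |\Delta^{k-1}\omega|.
\[
\Delta^{0}\omega(\alpha_{1}) = \omega(\alpha_{1}), \qquad
\Delta^{j}\omega(\alpha_{1},\dots,\alpha_{j+1})
  = \frac{\Delta^{j-1}\omega(\alpha_{2},\dots,\alpha_{j+1})
          - \Delta^{j-1}\omega(\alpha_{1},\dots,\alpha_{j})}
         {b_{\alpha_{1}}(\alpha_{j+1})}, \qquad j \ge 1 .
\]
```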
Assume that ω ∈ X_{k-2}(Λ), k ≥ 2, and consider (α_1, ..., α_k) ∈ Λ^k. If all the points are in the same D_λ then ∆^{k-1}ω(α_1, ..., α_k) = 0, so we may assume that α_1 ∈ D_λ and α_k ∈ D_{λ′} with λ ≠ λ′. Then we have, for some H_0 ∈ Har_+(D),
ρ(α_1, α_k) ≥ e^{-βH_0(α_1)}.
With this and the induction hypothesis it is clear that for some H ∈ Har_+(D),
|∆^{k-1}ω(α_1, ..., α_k)| = |∆^{k-2}ω(α_2, ..., α_k) - ∆^{k-2}ω(α_1, ..., α_{k-1})| / |b_{α_1}(α_k)| ≤ e^{βH_0(α_1)} [ e^{H(α_2)+···+H(α_k)} + e^{H(α_1)+···+H(α_{k-1})} ].
Taking for instance H̃ = H + βH_0 + log 2 we get
|∆^{k-1}ω(α_1, ..., α_k)| ≤ e^{H̃(α_1)+···+H̃(α_k)},
thus ω(Λ) ∈ X_{k-1}(Λ).
By assumption there exists f ∈ N interpolating the values ω(Λ). In particular f interpolates ω(Λ_j).
PROOF OF THE MAIN THEOREM. SUFFICIENCY
Assume Λ = Λ_1 ∪ ··· ∪ Λ_n, where Λ_j ∈ Int N, j = 1, ..., n, and denote Λ_j = {λ^{(j)}_k}_{k∈N}. Denote also by B_j the Blaschke product with zeros on Λ_j. We will use the following property of Nevanlinna interpolating sequences (see Theorem 1.2 in [START_REF] Hartmann | Finitely generated Ideals in the Nevanlinna class[END_REF]): there exists H_1 ∈ Har_+(D) such that
|B(z)| ≥ e^{-H_1(z)} ρ(z, Λ), z ∈ D.
According to Proposition 2.1 we only need to see that X_{n-1}(Λ) ⊂ N|_Λ. Let then ω(Λ) ∈ X_{n-1}(Λ) and split it as {ω(λ)}_{λ∈Λ} = {ω^{(1)}_k}_{k∈N} ∪ ··· ∪ {ω^{(n)}_k}_{k∈N}, where ω^{(j)}_k = ω(λ^{(j)}_k), j = 1, ..., n, k ∈ N. By Lemma 1.2 and the hypothesis, {ω^{(1)}_k}_{k∈N} ∈ X_0(Λ_1), hence there exists f_1 ∈ N such that f_1(λ^{(1)}_k) = ω^{(1)}_k, k ∈ N.
In order to interpolate also the values {ω^{(2)}_k}_k consider functions of the form f_2(z) = f_1(z) + B_1(z) g_2(z). Immediately f_2(λ^{(1)}_k) = f_1(λ^{(1)}_k) = ω^{(1)}_k, k ∈ N, and we will have f_2(λ^{(2)}_k) = ω^{(2)}_k as soon as we find g_2 ∈ N such that
g_2(λ^{(2)}_k) = [ω^{(2)}_k - f_1(λ^{(2)}_k)] / B_1(λ^{(2)}_k), k ∈ N.
Since Λ_2 ∈ Int N, such a g_2 will exist as soon as the sequence on the right hand side is majorized by a sequence of the form {e^{H(λ^{(2)}_k)}}_k. Given λ^{(2)}_k ∈ Λ_2 pick λ^{(1)}_k such that ρ(λ^{(2)}_k, Λ_1) = ρ(λ^{(2)}_k, λ^{(1)}_k). There is no restriction in assuming that ρ(λ^{(2)}_k, λ^{(1)}_k) ≤ 1/2. Then, by Lemma 4.1 there exists H_1 ∈ Har_+(D) such that |B_1(λ^{(2)}_k)| ≥ e^{-H_1(λ^{(2)}_k)} ρ(λ^{(1)}_k, λ^{(2)}_k), k ∈ N. Now, since f_1(λ^{(1)}_k) = ω^{(1)}_k, we have
|[ω^{(2)}_k - f_1(λ^{(2)}_k)] / B_1(λ^{(2)}_k)| ≤ |[ω^{(2)}_k - ω^{(1)}_k] / B_1(λ^{(2)}_k)| + |[f_1(λ^{(1)}_k) - f_1(λ^{(2)}_k)] / B_1(λ^{(2)}_k)| ≤ [ ∆_1(ω^{(1)}_k, ω^{(2)}_k) + ∆_1(f_1(λ^{(1)}_k), f_1(λ^{(2)}_k)) ] e^{H_1(λ^{(2)}_k)}.
By hypothesis, and since f_1 ∈ N, there exists H_2 ∈ Har_+(D) such that
∆_1(ω^{(1)}_k, ω^{(2)}_k) + ∆_1(f_1(λ^{(1)}_k), f_1(λ^{(2)}_k)) ≤ e^{H_2(λ^{(1)}_k) + H_2(λ^{(2)}_k)},
and therefore, by Harnack's inequalities,
|[ω^{(2)}_k - f_1(λ^{(2)}_k)] / B_1(λ^{(2)}_k)| ≤ e^{H_2(λ^{(1)}_k) + H_2(λ^{(2)}_k)} e^{H_1(λ^{(2)}_k)} ≤ e^{3(H_1+H_2)(λ^{(2)}_k)}.
In general, assume that we have f_{n-1} ∈ N such that
f_{n-1}(λ^{(j)}_k) = ω^{(j)}_k, k ∈ N, j = 1, ..., n-1.
We look for a function f_n ∈ N interpolating the whole Λ of the form
f_n = f_{n-1} + B_1 ··· B_{n-1} g_n.
We need then g_n ∈ N with
g_n(λ^{(n)}_k) = [ω^{(n)}_k - f_{n-1}(λ^{(n)}_k)] / [B_1(λ^{(n)}_k) ··· B_{n-1}(λ^{(n)}_k)], k ∈ N.
Let us see that the sequence of values on the right hand side of this identity has a majorant of the form {e^{H(λ^{(n)}_k)}}_k. Pick λ^{(j)}_k ∈ Λ_j, j = 1, ..., n-1, such that ρ(λ^{(n)}_k, Λ_j) = ρ(λ^{(n)}_k, λ^{(j)}_k). There is no restriction in assuming that ρ(λ^{(n)}_k, λ^{(j)}_k) ≤ 1/2. Since f_{n-1}(λ^{(j)}_k) = ω^{(j)}_k, j = 1, ..., n-1, an immediate computation shows that
ω^{(n)}_k - f_{n-1}(λ^{(n)}_k) = [ ∆_{n-1}(ω^{(1)}_k, ..., ω^{(n-1)}_k, ω^{(n)}_k) - ∆_{n-1}(f_{n-1}(λ^{(1)}_k), ..., f_{n-1}(λ^{(n-1)}_k), f_{n-1}(λ^{(n)}_k)) ] b_{λ^{(1)}_k}(λ^{(n)}_k) ··· b_{λ^{(n-1)}_k}(λ^{(n)}_k).
Again by Lemma 4.1, there exists H_1 ∈ Har_+(D) such that |B_j(λ^{(n)}_k)| ≥ e^{-H_1(λ^{(n)}_k)} ρ(λ^{(j)}_k, λ^{(n)}_k), k ∈ N, j = 1, ..., n-1. Hence, by hypothesis and the fact that f_{n-1} ∈ N, there exists H ∈ Har_+(D) such that
|[ω^{(n)}_k - f_{n-1}(λ^{(n)}_k)] / [B_1(λ^{(n)}_k) ··· B_{n-1}(λ^{(n)}_k)]| ≤ [ |∆_{n-1}(ω^{(1)}_k, ..., ω^{(n)}_k)| + |∆_{n-1}(f_{n-1}(λ^{(1)}_k), ..., f_{n-1}(λ^{(n)}_k))| ] e^{(n-1)H_1(λ^{(n)}_k)} ≤ e^{H(λ^{(1)}_k)+···+H(λ^{(n-1)}_k)+H(λ^{(n)}_k)+(n-1)H_1(λ^{(n)}_k)}.
Finally, by Harnack's inequalities, this is bounded by e^{2n(H(λ^{(n)}_k)+H_1(λ^{(n)}_k))}.
Notation and auxiliary results restated from earlier in the paper: B denotes the Blaschke product associated to a Blaschke sequence Λ; b_λ(z) = (z-λ)/(1-λ̄z) and B_λ(z) = B(z)/b_λ(z). We also consider the pseudohyperbolic distance in D, defined as ρ(z,w) = |z-w|/|1-z̄w|, and the corresponding pseudohyperbolic disks D(z,r) = {w ∈ D : ρ(z,w) < r}. According to [4, Theorem 1.2], Λ ∈ Int N if and only if there exists H ∈ Har_+(D) such that |B_λ(λ)| = (1-|λ|)|B′(λ)| ≥ e^{-H(λ)}, λ ∈ Λ. Moreover, in such case the trace space is N(Λ) = { {ω(λ)}_{λ∈Λ} : ∃H ∈ Har_+(D), log^+|ω(λ)| ≤ H(λ), λ ∈ Λ }.
Lemma 2.4. Let n ≥ 1. The following assertions are equivalent: (a) Λ is the union of n weakly separated sequences; (b) there exists H ∈ Har_+(D) such that
Lemma 4.1. Let Λ ∈ Int N and let B be the Blaschke product associated to Λ. There exists H_1 ∈ Har_+(D) such that |B(z)| ≥ e^{-H_1(z)} ρ(z, Λ), z ∈ D.
Second and third authors supported by the Generalitat de Catalunya (grants 2014 SGR 289 and 2014 SGR 75) and the Spanish Ministerio de Ciencia e Innovación (projects MTM2014-51834-P and MTM2014-51824-P).
| 22,481 | [ "13447" ] | [ "27730", "152774", "460569" ] |
01467781 | en | [ "shs", "info" ] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467781/file/978-3-642-38862-0_13_Chapter.pdf
Kawaljeet Kapoor
Yogesh K Dwivedi
email: [email protected]
Michael D Williams
email: [email protected]
Role of innovation attributes in explaining the adoption intention for the Interbank Mobile Payment Service in an Indian context
Keywords: Innovation, Adoption, Mobile Payment, IMPS, Diffusion of Innovations
This study presents an investigation of the role of innovation attributes that significantly influence potential consumers' behavioural intention towards, and actual adoption of, the interbank mobile payment service. Using attributes from Rogers' diffusion of innovations theory, along with one additional attribute, cost, the diffusion of the IMPS application has been studied. The proposed model was empirically tested against data gathered from both adopters and non-adopters of this technology. The SPSS analysis tool was used to perform the reliability tests, and linear and logistic regressions. While relative advantage, compatibility, complexity and trialability displayed significant relationships, observability exhibited no significant impact on behavioural intention. On the other hand, behavioural intention and cost showed significant impacts on the adoption of the IMPS application. The theoretical background, discussions, key conclusions, and limitations, alongside research implications of this study, have been presented.
Introduction
The Interbank Mobile Payment Service (IMPS) is a 24/7 interbank electronic fund transfer service available as a mobile application, enabling customers to access their bank accounts via mobile phones to make the required fund transfers in a secured manner (NPCI, 2012). There are two types of this service -(a) Person to Person -where fund transfers between two individuals are allowed; (b) Person to Merchant -where fund transfers between a customer and a merchant are allowed (South Indian Bank, 2012). [START_REF] Liang | Adoption of mobile technology in business: a fit-viability model[END_REF] emphasize that one of the major advantages of using services on mobile phones is the ability to access services ubiquitously, through various devices. The positives of the IMPS application are clearly its round-the-clock availability, savings in terms of time and cost, transactions in a safe and secure mode, and the ability to transfer funds instantly. As of October 2012, the National Payments Corporation of India [npci.org.in] claims that fifty banks have become IMPS member banks. An article available on Business Standard's (2012) website points out that the adoption rate of this technology has been low; industry analysts point to this mobile application's availability to only smart phone users as a possible reason for the low adoption rate. PayMate has now addressed this concern and provided basic phone users with a hybrid SMS-IVR solution. However, PayMate has only partnered with three banks to date, which does not effectively provide a large scale solution to this problem. Shyamala Gopinath, DG, RBI, at the inauguration of IMPS of the NPCI in Mumbai stated (22/11/2010) -"the twin challenge for India is to succeed in reducing the use of cash, while encouraging the spread and use of mobile wallet to reap the full benefits of this ubiquitous product". She identified three stakeholders of this IMPS application -the telecom partners, the member banks, and the participating merchants, who will have to invest combined efforts to deliver true value to their consumers, whilst sharing the costs and generated revenues amongst themselves. While 50 banks have partnered with IMPS, more banks are yet to adopt this technology. The higher the success rate of this application with the member banks, the more other banks will want to join the IMPS league. The potential members would be interested in obtaining an insight into the factors that influence customers to use such a mobile payment facility. What do we know about the factors that encourage consumers to use IMPS? How can the low adoption rate of this technology be explained? Is it simply a lack of awareness? Unfortunately, there are no empirical studies or official reports available to address these questions.
Although there have been numerous studies examining mobile payment in the m-commerce context [START_REF] Barnes | The mobile commerce value chain: analysis and future developments[END_REF][START_REF] Siau | Building customer trust in mobile commerce[END_REF][START_REF] Wu | What drives mobile commerce?: An empirical evaluation of the revised technology acceptance model[END_REF], IMPS is a very recent technology in the Indian context. The features of this particular application differ from those of other mobile technologies. Since the technology is very new in India, no studies have yet been published on this application. Therefore, an empirical investigation is necessary to learn about the factors that entice consumers to use IMPS. Hence, the research aim of this study is to empirically examine the role of innovation attributes in the adoption of the IMPS technology in an Indian context. Comprehending these influential attributes could assist the stakeholders in developing competitive strategies that promote a wider acceptance of this technology. The next section provides the theoretical basis for this study and proposes a conceptual model with hypotheses; after which, the research method is explained, followed by a findings section presenting the SPSS statistics, which are then discussed against the proposed hypotheses; at closure, the key conclusions, limitations, and research implications of this study are presented.
Theoretical Basis and Development of the Conceptual Model
IMPS is clearly an integration of internet banking and mobile payment services. This application allows customers to access their bank accounts on mobile phones, using the internet connection from their mobile network providers to make a payment to another person or merchant. There have been numerous publications on internet banking (IB) and mobile payment services in the past -two studies on IB adoption in Hong Kong used TAM [START_REF] Yiu | Factors affecting the adoption of internet banking in Hong Kongimplications for the banking sector[END_REF] and extended TAM models (Cheng et al. 2003) and found that perceived usefulness and perceived web security have a direct effect on use intention; the former study extends implications for retail banks in Hong Kong. [START_REF] Brown | Environment on the Adoption of Internet Banking: Comparing Singapore and South Africa[END_REF] studied the impact of the national environment on IB adoption in South Africa, confirming that attitudinal and behavioural control factors influence the IB adoption decision. [START_REF] Chen | A model of consumer acceptance of mobile payment[END_REF] expanded the TAM model and IDT to examine consumer acceptance of mobile payments. They found that perceived usefulness, perceived ease of use, perceived risk, and compatibility were the determinants of adoption. [START_REF] Mallat | Exploring consumer adoption of mobile payments-A qualitative study[END_REF] conducted an exploratory study on mobile payments which showed that relative advantage and compatibility influenced adoption decisions. [START_REF] Schierz | Understanding consumer acceptance of mobile payment services: An empirical analysis[END_REF], in their empirical study, found that compatibility, individual mobility and subjective norm significantly influence consumer acceptance of m-payment services. These studies show that there has been no study on IMPS in the Indian context, most obviously because the IMPS application is very new in India. Our study therefore focuses on this application to gain an understanding of the significant determinants of IMPS acceptance. One of the most common problems faced by many individuals and organizations in an innovation adoption and diffusion process, despite the obvious advantages their innovation has to offer, is how to speed up the rate of diffusion of their innovation(s) [START_REF] Rogers | Diffusion of Innovations[END_REF]. To address this concern with respect to the IMPS application, our study uses Rogers' innovation attributes alongside one added attribute, cost, the justification for which is provided later in this section. There are many models available for predicting user behaviour towards a given innovation, such as TAM, UTAUT, the theory of reasoned action, and the theory of planned behaviour, but all of these models tend to use similar attributes. The diffusion of innovations theory is well established and uses a set of attributes different from those used by the above mentioned models. Thus, our study borrows attributes from Rogers' innovation diffusion theory for exploration purposes in the IMPS context.
To explain the process of diffusion, Rogers recognized five attributes -Relative Advantage, Compatibility, Complexity, Trialability, and Observability -as the innovation attributes. Over the years, the majority of studies have chosen to use and study these five innovation attributes [START_REF] Tornatzky | Innovation Characteristics and Innovation Adoption-Implementation: A Meta-Analysis of Findings[END_REF] (Greenhalgh et al., 2004; Legare et al., 2008; [START_REF] Hester | A conceptual model of wiki technology diffusion[END_REF]). TRA [START_REF] Ajzen | Understanding attitudes and predicting behaviour[END_REF] studied only the effect of intention on adoption. TPB [START_REF] Ajzen | Behavioural interventions based on the theory of planned behavior[END_REF] studied the effects of intention and actual behavioural control on adoption. The decomposed theory of planned behaviour [START_REF] Taylor | Understanding information technology usage: a test of competing models[END_REF] studied the effects of intention and perceived behavioural control on adoption. TAM [START_REF] Davis | User acceptance of computer technology: A comparison of two theoretical models[END_REF] also studied only the effect of behavioural intention on adoption. Based on the aforementioned theories, our study decided that Rogers' five attributes would be studied against behavioural intention, and behavioural intention in turn would be studied against adoption. Since IMPS is an innovation involving cost, this attribute was looked upon with interest. Previous studies on mobile payments consider cost an important influencing factor in the adoption decision - [START_REF] Dahlberg | Past, present and future of mobile payments research: A literature review[END_REF] point out that researchers find costs in the form of transaction fees to be a barrier to adoption. Another qualitative study on the adoption of mobile payments found the premium pricing of payments to be a barrier to adoption [START_REF] Mallat | Exploring consumer adoption of mobile payments-A qualitative study[END_REF]. In addition to the apparent costs of adopting an m-commerce innovation, a consumer is often subjected to relatively hidden transaction charges which could considerably influence the adoption decision for that particular innovation (Hung et al., 2003;[START_REF] Wu | What drives mobile commerce?: An empirical evaluation of the revised technology acceptance model[END_REF]). Interestingly, all of the above mentioned studies discuss the influence of cost on adoption and not on intention. This could be because a consumer can form favourable intentions towards such innovations (internet banking/mobile payments) without having to actually use (spend money on) that innovation, but when it comes to making an actual transaction, the different charges associated with that transaction come into the picture. This is when the consumer makes the critical decision of adoption or rejection, based on how appealing or unappealing the charge associated with that transaction is. Therefore, it was decided that cost would be regressed only against adoption. It was thereby concluded that relative advantage, compatibility, complexity, trialability, observability and cost will be studied to examine the adoption intention and the actual adoption of the IMPS application in an Indian context (fig. 1).
Relative Advantage
By definition, relative advantage is the degree to which an innovation is better than the idea that it is superseding [START_REF] Rogers | Diffusion of Innovations[END_REF]. This attribute has been studied across many different technologies -mobile internet [START_REF] Hsu | Adoption of the mobile Internet: An empirical study of multimedia message service (MMS)[END_REF] and online portal [START_REF] Shih | Continued use of a Chinese online portal: an empirical study[END_REF] studies revealed in their findings that a higher degree of offered advantage is related to increased levels of adoption intention. An organizational study on the intention to adopt distributed work arrangements found relative advantage was positively related to use intention [START_REF] Sia | Effects of environmental uncertainty on organizational intention to adopt distributed work arrangements[END_REF]. Since the IMPS application supersedes the idea of performing financial transactions on laptops/personal computers, and allows the same transactions on the go, this attribute was deemed appropriate.
H1: Relative Advantage will significantly influence the users' behavioural intentions.
Compatibility
Compatibility is the degree to which an innovation is consistent with the existing values, past experiences and needs of potential adopters [START_REF] Rogers | Diffusion of Innovations[END_REF]. This attribute has been studied across different mobile technologies. Examples include mobile network and mobile internet studies which reveal that compatibility has a significant and positive influence on consumer intentions [START_REF] Hsu | Adoption of the mobile Internet: An empirical study of multimedia message service (MMS)[END_REF][START_REF] Shin | MVNO services: Policy implications for promoting MVNO diffusion[END_REF]. An empirical study on mobile ticketing service adoption found that compatibility was a strong predictor of use intention [START_REF] Mallat | An empirical investigation of mobile ticketing service adoption in public transportation[END_REF]. In the IMPS context, this attribute was studied in further detail in order to learn how similar or different consumers find transferring money to the desired client/merchant accounts on a mobile phone.
H2: Compatibility will positively influence the users' behavioural intentions.
Complexity
It is the degree to which an innovation is perceived as difficult to understand and use [START_REF] Rogers | Diffusion of Innovations[END_REF]. Greater complexity implies an increased degree of difficulty in understanding the use of a given innovation. Therefore, complexity is assumed to be negatively associated with use intentions. A mobile marketing adoption study shows that complexity has a direct influence on users' decision intentions [START_REF] Tanakinjal | Third Screen Communication and the Adoption of Mobile Marketing: A Malaysia Perspective[END_REF]. Studies on online portal use [START_REF] Shih | Continued use of a Chinese online portal: an empirical study[END_REF] and an automatic cash payment system [START_REF] Yang | An implementation and usability evaluation of automatic cash-payment system for hospital[END_REF] found that complexity had a significant negative impact on use intention. Using the IMPS application on a mobile phone may be perceived differently by different users on the complexity scale, based on their skill and adaptability levels.
H3: Lower complexity will positively influence the users' behavioural intentions.
Trialability
[START_REF] Rogers | Diffusion of Innovations[END_REF] defines trialability as the degree to which an innovation is available to be experimented with for a limited period prior to its actual adoption or rejection. Many internet banking and mobile internet studies are available -a mobile internet study found that trialability was not a significant predictor of adoption intention [START_REF] Hsu | Adoption of the mobile Internet: An empirical study of multimedia message service (MMS)[END_REF]. [START_REF] Arts | Generalizations on consumer innovation adoption: A meta-analysis on drivers of intention and behaviour[END_REF], in generalizing consumer innovation adoption, argue that trialability enhances consumer readiness and has a stronger effect at the behaviour stage, negatively affecting adoption behaviour. It is important to understand how the trialability of IMPS can affect its adoption intention.
H4: Trialability will significantly influence the users' behavioural intentions.
Observability
Observability is defined as the degree to which the results of an innovation become clearly visible to others [START_REF] Rogers | Diffusion of Innovations[END_REF]. A technology products study [START_REF] Vishwanath | An examination of the factors contributing to adoption decisions among latediffused technology products[END_REF] found observability significantly impacted intention. [START_REF] Arts | Generalizations on consumer innovation adoption: A meta-analysis on drivers of intention and behaviour[END_REF], in their meta-analysis on drivers of intention and behaviour, showed partial support for the notion that observability will have a stronger effect at the intention stage. In order to receive more clarity on the effect of this attribute, it has been posited as:
H5: Observability will significantly influence the users' behavioural intentions.
Cost
[START_REF] Tornatzky | Innovation Characteristics and Innovation Adoption-Implementation: A Meta-Analysis of Findings[END_REF] posited cost to be negatively associated with the adoption of an innovation. According to them, the lower the cost of adopting an innovation, the higher the probability of it being adopted immediately. A study on mobile virtual networks hypothesized that cost would negatively influence usage behaviour [START_REF] Shin | MVNO services: Policy implications for promoting MVNO diffusion[END_REF]. More conclusions on the effect of cost on the adoption of an innovation (from previous studies) have already been discussed in section 2. In using the IMPS application, consumers incur a certain charge per transaction; moreover, this application is not compatible with all mobile phones and runs best on smart phones. In order to account for these potential costs, this attribute was taken into consideration.
H6: Lower Costs will positively influence the adoption of IMPS.
Behavioural Intention
Apart from the aforementioned six attributes, the effect of behavioural intention on adoption was also included to be measured. [START_REF] Gumussoy | Understanding factors affecting e-reverse auction use: An integrative approach[END_REF] cite [START_REF] Ajzen | Understanding attitudes and predicting behaviour[END_REF] to define behavioural intention as a measure of the likelihood of a person getting involved in a given behaviour. They point at behavioural intention to be an immediate determinant of actual use. Stronger the intention, greater will be the probability of use. Most studies supported for this attribute to have a positive influence on the actual use (Chen et al., 2002;[START_REF] Ajjan | Investigating faculty decisions to adopt Web 2.0 technologies: Theory and empirical tests[END_REF][START_REF] Gumussoy | Understanding factors affecting e-reverse auction use: An integrative approach[END_REF]; [START_REF] Ajjan | Investigating faculty decisions to adopt Web 2.0 technologies: Theory and empirical tests[END_REF] also cite [START_REF] Ajzen | The theory of planned behaviour[END_REF] suggesting that behavioural intention acts as the most important determinant of the adoption decision.
H7: Behavioural Intention positively influences the adoption of IMPS.
Research Method
Survey Instrument
The instrument used for data collection was a questionnaire comprising 36 questions, of which eight were demographic in nature -four of these were multiple choice, respondent-specific questions focussed on age, gender, education and occupation of the respondent; the remaining four were multiple choice, technology-specific questions focussed on the adoption factor, innovation type, duration of adoption, and frequency of usage. A seven-point Likert scale was used to measure the attitude of the respondents to the remaining 28 questions. These 28 questions were designed to cover the seven shortlisted constructs. The seven attributes were made up of four items each (Table 1).
Table 1 Constructs-Questions Mapping
Pilot Study
The questionnaire was tested against a small sample to improve the instrument design prior to the full-scale roll-out of this study. The pilot study was done on a sample of 30 respondents. It was ensured that the population for this study included respondents from all age groups, to check the understandability of the questionnaire. The respondents' feedback revealed that although the questionnaire was clear and simple to understand, it appeared repetitive. The minor suggestions that were made were addressed and the questionnaire was amended suitably.
Data Collection
All-India data was to be accumulated, and it was therefore decided to collect an equal number of responses from each of the four regions of India -northern (Delhi City), eastern (Kolkata City), western (Mumbai City) and southern (Bangalore City). A total of 330 respondents participated in this survey. Upon receipt of the questionnaires, it was found that seven were incomplete. In the interest of data accuracy and reliability, these seven questionnaires were discarded, and a total of 323 questionnaires were subjected to further analyses. The SPSS data analysis software was used to produce results on the gathered data, the findings of which are presented in section 4. The findings section provides results from (a) frequency tests on the demographic characteristics, (b) a reliability test showing the internal consistencies of the construct items, (c) a descriptive test generating the means and standard deviations for all seven constructs, (d) regression analyses, both linear and logistic, to test the stated hypotheses, and (e) a multicollinearity test to check for correlation amongst the predictor variables.
FINDINGS
Demographics
Table 2 is descriptive of the demographic characteristics for this study's respondent-profile. Clearly, the 18-24 age group, the male respondents (53.6%), and the graduates (38.1%) formed the largest proportion groups for our dataset.
Table 2 Demographic Characteristics
Table 3 discloses the demographics specific to use of IMPS, and shows that out of the 323 respondents, there were 249 non-adopters and only 74 adopters (22.9%) of IMPS.
Table 3 Use-Specific Demographic Characteristics
Reliability Test
A reliability test was carried out to learn the internal consistencies of the individual items forming each of the utilized constructs (Table 4). There were four constructs for which one item each was deleted in order to arrive at better α values. Hinton et al. (2004) illustrated that, as a representative of reliability, Cronbach's alpha can be read across four different reliability bands: ≥ 0.90 -excellent; 0.70-0.90 -high; 0.50-0.70 -moderate; and ≤ 0.50 -low. Out of the seven constructs, four showed high and three showed moderate reliabilities. The higher the Cronbach's alpha value, the greater the consistency amongst the individual items making up a given construct.
Table 4 Reliability Test
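Although the analysis was run in SPSS, a minimal sketch of how Cronbach's alpha for one construct could be reproduced with open-source tools is shown below; the file name and the item column names (e.g. ra1–ra4 for the four relative advantage items) are hypothetical, as the actual item labels are not reported here.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (one column per item)."""
    items = items.dropna()
    k = items.shape[1]                              # number of items in the construct
    item_variances = items.var(axis=0, ddof=1)      # variance of each individual item
    scale_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / scale_variance)

# Hypothetical usage with assumed file and column names.
survey = pd.read_csv("imps_survey.csv")
print(cronbach_alpha(survey[["ra1", "ra2", "ra3", "ra4"]]))
```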
Descriptive Statistics
Table 5 provides the results of the descriptive test. The statistics are presented in ascending order of the mean values.
Table 5 Descriptive Statistics: Importance of various innovation-attributes
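Continuing the same hedged sketch, the construct means and standard deviations reported in Table 5 could be tabulated as follows; the construct score columns are assumed to have already been computed (e.g. as item averages) and their names are illustrative only.

```python
import pandas as pd

survey = pd.read_csv("imps_survey.csv")   # hypothetical file of construct scores
constructs = ["relative_advantage", "compatibility", "complexity",
              "trialability", "observability", "cost", "behavioural_intention"]

# Mean and standard deviation per construct, listed in ascending order of mean.
print(survey[constructs].agg(["mean", "std"]).T.sort_values("mean"))
```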
Regression analysis
Regression analysis is a statistical technique that predicts the values of one dependent variable using the values of one or more other independent variables [START_REF] Allen | Understanding Regression Analysis[END_REF]. This study employed two types of regression analysis -(a) linear regression and (b) logistic regression -which were performed on a total of 323 cases.
Linear Regression
Worster et al. (2007) stated that linear regression assumes a linear relationship between the dependent and independent variable(s). A linear regression was performed taking behavioural intention as the dependent variable and Rogers' five attributes as the independent variables (Table 6). The resultant model significantly predicted the behavioural intention of the target population towards IMPS (F(5, 323) = 40.919, p = 0.000). The model explains 39.3% of the variance. While four variables were found to have a significant effect, observability did not have any effect on behavioural intention.
Table 6 Linear Regression
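A corresponding sketch of the linear regression using statsmodels is given below, again under the assumption that construct scores are held in hypothetically named columns; it illustrates the analysis and is not the study's actual SPSS procedure.

```python
import pandas as pd
import statsmodels.api as sm

survey = pd.read_csv("imps_survey.csv")   # hypothetical file of construct scores
predictors = ["relative_advantage", "compatibility", "complexity",
              "trialability", "observability"]

X = sm.add_constant(survey[predictors])   # add the intercept term
y = survey["behavioural_intention"]

ols_model = sm.OLS(y, X).fit()
print(ols_model.summary())                # coefficients, R-squared, F-statistic
```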
Multicollinearity Test
According to [START_REF] Brace | SPSS for psychologists: a guide to data analysis using SPSS for windows[END_REF], multicollinearity is a situation where a high correlation is detected between two or more predictor variables, which causes problems in drawing inferences about the relative contribution of each predictor variable to the success of the model. The VIF values for this regression analysis vary between 1.456 and 1.904 (Table 6).
Clearly, these values are well below the maximum acceptable value of 10 [START_REF] Irani | Understanding Consumer Adoption of Broadband: An Extension of Technology Acceptance Model[END_REF]. Thus, the independent variables for this study are free from the multicollinearity problem, and the variance explained by these independent variables is therefore likely to be close to the true situation.
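The VIF check could be sketched in the same hedged way; the column names remain assumptions made only for illustration.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

survey = pd.read_csv("imps_survey.csv")   # hypothetical file of construct scores
X = sm.add_constant(survey[["relative_advantage", "compatibility",
                            "complexity", "trialability", "observability"]])

# One VIF per column; values well below 10 indicate no multicollinearity problem.
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
print(vif.drop("const"))
```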
Logistic Regression
According to [START_REF] Worster | Understanding linear and logistic regression analyses, pedagogical tools and methods[END_REF], in logistic regression the outcome variable must be categorical with two possible outcomes, i.e. it should be dichotomous. A logistic regression was performed with adoption as the dependent variable, and behavioural intention and cost as the predictor variables; the results are available in Table 7. The full model significantly predicted the adoption decision of the IMPS users. The model accounted for between 11.1% and 16.8% of the variance in the adoption decision (Table 8). As shown in Table 9, 95.2% of the non-adopters were successfully predicted. However, only 6.8% of the predictions for the adopter group were accurate. Overall, 74.9% of the predictions were accurate. Table 10 gives the coefficients, Wald statistics, associated degrees of freedom, and probability values for the two predictor variables.
Table 7 Omnibus Tests of Model Coefficients
Table 8 Model Summary
Table 9 Classification Table
Table 10 Variables in the equation
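Finally, a hedged sketch of the logistic regression on the adoption decision is shown below; the binary coding of adoption (1 = adopter, 0 = non-adopter) and the column names are assumptions made for illustration, and the pseudo-R² printed by statsmodels differs from the Cox and Snell and Nagelkerke measures quoted from SPSS.

```python
import pandas as pd
import statsmodels.api as sm

survey = pd.read_csv("imps_survey.csv")   # hypothetical file of construct scores
X = sm.add_constant(survey[["behavioural_intention", "cost"]])
y = survey["adoption"]                    # assumed coding: 1 = adopter, 0 = non-adopter

logit_model = sm.Logit(y, X).fit()
print(logit_model.summary())              # coefficients, z-statistics, p-values

# Classification accuracy at the usual 0.5 cut-off, analogous to Table 9.
predicted = (logit_model.predict(X) >= 0.5).astype(int)
print("overall accuracy:", (predicted == y).mean())
```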
DISCUSSION
Hypotheses Testing
A total of seven hypotheses were formulated and tested to examine the influence of the independent variables on the dependent variables (adoption and behavioural intention). Six of these seven hypotheses were supported by the data (H1, H2, H3, H4, H6, and H7). As posited, the data confirm that relative advantage, compatibility, complexity and trialability have significant impacts on the behavioural intentions of the targeted consumers in the IMPS context. Internet banking and telebanking can be seen as the predecessors of IMPS in the Indian context. In terms of compatibility, IMPS is much faster than telebanking. IMPS provides consumers with quicker access to their bank accounts, and offers greater flexibility in terms of the type of payment they need to make. Along with its 24/7 availability, the mobility feature of the IMPS application surpasses internet banking, in that IMPS allows consumers access from anywhere, anytime, via their mobile networks, without the need to connect through routers/modems to gain internet/Wi-Fi access. From the data results, clearly, the users perceive IMPS to be an easy to use mobile application. Past studies support these facts - [START_REF] Slyke | The impact of perceived innovation characteristics on intention to use groupware[END_REF] used IDT in studying groupware applications and found that relative advantage, complexity and compatibility significantly influenced intention. Chen et al. (2002) applied IDT to study consumer attitudes towards virtual stores and found compatibility to be a strong determinant of consumer intentions. [START_REF] Hsu | Adoption of the mobile Internet: An empirical study of multimedia message service (MMS)[END_REF] studied the adoption of MMS using IDT, and concluded that relative advantage and compatibility significantly influence user intentions. [START_REF] Lee | An empirical investigation of anti-spyware software adoption: A multitheoretical perspective[END_REF] combined IDT, TPB, IT ethics, and morality in an empirical investigation of anti-spyware software adoption, and found that relative advantage and compatibility showed a significant effect on adoption intention. Trialability also succeeded in explaining the consumers' adoption intention (hypothesis 4). [START_REF] Meuter | Choosing among alternative service delivery modes: An investigation of customer trial of self-service technologies[END_REF] concluded that trialability clarifies the role of potential adopters by helping them evaluate their ability to use an innovation, thus enhancing consumer readiness towards that innovation. IMPS is an application which comes with no installation charge or usage clause, i.e. it is a service available for consumers to use if and when required. In other words, IMPS comes with an unlimited trial period. Consumers can opt to use this application once, or any number of times, without any trial obligations, and return to using it again if the service appeals to them, or simply quit using it otherwise.
Hypothesis 5 of this study was not supported by the data: observability failed to make a significant impact on consumer intention to adopt IMPS. A recent study on consumer innovation adoption also found that observability was not significantly related to intention [START_REF] Arts | Generalizations on consumer innovation adoption: A meta-analysis on drivers of intention and behaviour[END_REF]. According to [START_REF] Meuter | Choosing among alternative service delivery modes: An investigation of customer trial of self-service technologies[END_REF], observability may assist in showing positive outputs, which in turn may motivate adopters to receive that innovation's rewards. IMPS is purely a mobile application, and the visibility of this innovation is not that apparent. To illustrate in more detail, a study on e-book readers [START_REF] Jung | Factors affecting e-book reader awareness, interest, and intention to use[END_REF] found that observability had a significant relationship with consumers' intention to use. This is because an e-reader is a whole instrument in itself, which is visible when carried around and whose outcomes can be observed on sight, thereby significantly affecting potential consumers' intentions. The case of IMPS is the complete opposite. The use of the IMPS application by an active user will not be evidently visible to others, unless the use and the outcomes of using the IMPS application are explicitly discussed with its active users. This effectively makes IMPS less observable in comparison to other innovative products like tablets, e-readers, smart phones, etc. This, in turn, can explain the insignificance displayed by our study's data for observability towards the use intentions.
The data for this study also confirms the hypotheses with respect to the adoption of the IMPS application. It was confirmed that both, cost and behavioural intention significantly affected the adoption of IMPS. [START_REF] Tornatzky | Innovation Characteristics and Innovation Adoption-Implementation: A Meta-Analysis of Findings[END_REF] identified increased costs as inhibitors of the adoption of any innovation. As previously explained in section 2, the former studies are in conformance with the fact that high costs act as retarding agents in the acceptance of a diffusing innovation. In terms of cost, clearly an application like IMPS can incur three kinds of charges -(a) charge per transaction (b) cost of the compatible mobile phone (c) charges on the data being used from the mobile network providers. Our study's results however show that the existing and potential consumers find these charges to be affordable. Thus, it can be concluded that although IMPS comes with a cost, this application is viewed as inexpensive which in turn encourages adoption.
Similarly, with behavioural intention, past studies are in accordance with our findings - Taylor and Todd (1997), while studying the determinants of consumer composting behaviour, found that behaviour was significantly influenced by behavioural intention. [START_REF] Shin | MVNO services: Policy implications for promoting MVNO diffusion[END_REF], in studying the policy implications of mobile virtual network adopter diffusion, also found behavioural intention to have a significant effect on actual behaviour. [START_REF] Hartshorne | Examining student decisions to adopt Web 2.0 technologies: theory and empirical tests[END_REF] cite Sheppard et al., (1998) and [START_REF] Ajzen | The theory of planned behaviour[END_REF] to support that the previous literature also finds a strong association between actual behaviour and behavioural intention, which has also been confirmed in the IMPS context in our study.
Validated Conceptual Model
Figure 2 represents the validated model of the factors influencing the behavioural intention and adoption of the IMPS application, as proposed in section 2. The dotted line running from observability to behavioural intention represents that path as insignificant, and the remaining paths from the remaining attributes to behavioural intention are shown to be significant. Similarly, the paths from cost and behavioural intention to adoption are shown as significant. In terms of performance, the R² values for adoption were measured using the Cox and Snell R² (0.111) and Nagelkerke R² (0.168). These two values were found to be comparatively lower than the values from earlier studies measuring the influences of different independent variables on the adoption of a given innovation [START_REF] Ungan | Factors affecting the adoption of manufacturing best practices[END_REF][START_REF] Gounaris | Investigating the drivers of internet banking adoption decision: A comparison of three alternative frameworks[END_REF][START_REF] Li | An empirical investigation on the determinants of E-procurement adoption in Chinese manufacturing enterprises[END_REF]. To exemplify a few, [START_REF] Ramamurthy | An empirical investigation of the key determinants of data warehouse adoption[END_REF] reported a Cox and Snell R² of 0.412 and a Nagelkerke R² of 0.550; [START_REF] Wang | Understanding the determinants of RFID adoption in the manufacturing industry[END_REF] reported a Cox and Snell R² value of 0.51 and a Nagelkerke R² value of 0.69, which are again higher than the values reported by our study (the variance is not very well explained). The adjusted R² value for behavioural intention was 0.393. This R² value is comparable to the values from former studies (Taylor and Todd, 1997;[START_REF] Ajjan | Investigating faculty decisions to adopt Web 2.0 technologies: Theory and empirical tests[END_REF][START_REF] Hartshorne | Examining student decisions to adopt Web 2.0 technologies: theory and empirical tests[END_REF]. For instance, [START_REF] Gumussoy | Understanding factors affecting e-reverse auction use: An integrative approach[END_REF] reported an adjusted R² value of 0.14, which is much lower than the R² value of 0.393 reported by our study. Similarly, another study by [START_REF] Lin | Predicting consumer intentions to shop online: An empirical test of competing theories[END_REF] reported an R² value of 0.30 for a TAM model and an R² value of 0.35 for a decomposed TPB model, both of which are clearly lower than the reported R² value of this study. The above comparisons suggest that the performance of the validated model (figure 2) is satisfactory (the variance is well explained).
Research Contributions and Practical Implications
This piece of work is a contribution to the existing literature on the diffusion of innovation attributes, as Rogers' five attributes were studied and tested in a new context with this study: IMPS application in the Indian context. According to the authors' best knowledge, the IMPS technology is very new in the Indian context, and there have been no research publications made on this technology yet. Hence, the findings from this study should succeed in providing the first insights into how Rogers' attributes, alongside cost, behave with behavioural intention and adoption aspects of the IMPS application. Both, adoption and intention have been studied in parallel to augment the existing research paradigm with more constructive and broader results.
Considering the statement made by the Deputy Governor of the RBI, from the commercial perspective the mobile wallet is becoming a vital part of our transaction systems. Thus, in order to work towards its broader acceptance, the issues of building, promoting, and maintaining consumers' interest in using such a service become important. The findings from this study showed that the observability of IMPS was poor, because of which this construct made no impact in building positive intentions of the consumers towards the IMPS application. This result thus indicates that it is important for banks to rethink strategies on educating the target mass and making them aware of the positives of IMPS, to promote this application in the interest of improving and attaining the desired type of financial transaction system in India. The results reveal that out of the 323 respondents, only 22.9% (Table 3) were adopters of IMPS. As discussed in section 5, with the prior existence of systems like internet banking and telebanking, consumers already have established banking styles and finance management systems. The low adoption rates indicate that, for mobile banking to overpower these existing systems, the real challenge for the banks promoting the IMPS application will be to offer consumers not just services equivalent to those of its predecessors, but more attractive, easy to use features to draw more consumers towards its adoption.
Conclusions
This study affirms the many established innovation adoption and diffusion notions established by former studies by extending them in the IMPS context. Using Rogers' innovation attributes, alongside cost and behavioural intention, we develop an integrative model to study the influence of these attributes on the adoption of the IMPS application in the Indian context. The results from this study yielded key insights concerning the determinants of IMPS adoption from the proposed conceptual model. The model confirmed that a consumers' usage of IMPS can be predicted from their intentions. It also revealed that IMPS was perceived as inexpensive, and that the low costs associated to this mobile application facilitated its adoption. On the other hand, relative advantage, compatibility, lower complexity and trialability were found to be the significant determinants of the consumers' intention to use the IMPS application. The model also rendered observability as an insignificant determinant of the consumer's intention to use IMPS.
Limitations and Future Research Directions
Although the current research aims to study the diffusion of IMPS in an Indian context, the data collected were limited to only four cities, representing the north, east, west, and south regions of India. Other cities of the country may bear certain cultural differences that may facilitate or impede the adoption of IMPS. Future researchers may focus on cultural factors, and more importantly on gathering data from a larger number of cities in the country, to bring to light any regional differences in the adoption of this application. Also, future researchers may want to investigate issues such as social influences using qualitative data, which may also fairly impact the adoption of such mobile payment innovations.
This study restricted its focus to only five of Rogers' innovation attributes, alongside cost as an added attribute of study. However, there are other innovation attributes apart from Rogers' five attributes that have been used and reviewed in the past, but not as much as Rogers' attributes. One study that has remarkably reviewed and listed more of such innovation attributes is the meta-analysis presented by Tornatzky and Klein from 1982. They recognized 25 other attributes as innovation attributes, in addition to Rogers' five attributes. Another significant contribution in this field has been a study by Moore and Benbasat from 1991, wherein they developed an instrument to measure individual perceptions taking a total of eight attributes into consideration. It would be interesting to get an insight into how the adoption of IMPS is affected by these other innovation attributes. Therefore, the future research may shift focus towards studying these other innovation attributes in the IMPS context to attain a deeper understanding of its diffusion process.
Our study focussed on the relationship between Rogers' innovation attributes and behavioural intention only; future researchers may consider studying the direct influence of Rogers' attributes on the adoption of IMPS. The findings from the logistic regression (Table 8) showed low Cox and Snell and Nagelkerke R² values, indicating that the total variance explained for adoption of IMPS is somewhat low for our model. Future researchers may consider incorporating a larger number of adoption attributes to attain a better explanation of the variance. Finally, as Rogers (2003) states, diffusion is a process by which an innovation is communicated through certain channels over time. Given how new the IMPS application is, to have a more collective and constructive understanding, its adoption and diffusion process will have to be empirically investigated at different points in time.
Fig. 1. Proposed conceptual model for examining intention and adoption of IMPS
Fig. 2. Validated model illustrating attributes influencing the intention and adoption of IMPS.
| 43,144 | [ "1001523", "999540", "999508" ] | [ "322817", "322817", "322817" ] |
01467782 | en | [ "shs", "info" ] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467782/file/978-3-642-38862-0_15_Chapter.pdf
Nripendra P Rana
Yogesh K Dwivedi
email: [email protected]
Michael D Williams
email: [email protected]
Examining the Factors Affecting Intention to Use of, and User Satisfaction with Online Public Grievance Redressal System (OPGRS) in India
Keywords: Online public grievance redressal system, OPGRS, e-government, DeLone and McLean, Seddon, India
The purpose of this paper is to examine the success (by measuring intention to use and user satisfaction) of the online public grievance redressal system (OPGRS) from the perspective of the citizens of India. This is the first time that the success of this e-government system has been examined using an IS success model. The model developed includes constructs such as system quality, information quality, perceived usefulness, user satisfaction, and intention to use. The empirical outcomes provided positive and significant support for all eight hypothesized relationships between the five constructs. The empirical evidence and discussion presented in the study can help the Indian government to improve upon and fully utilize the potential of OPGRS as a useful tool for a transparent and corruption-free country.
Introduction
Starting from the early 1990s, the revolution in information and communication technologies (ICTs) has made major and brisk changes in the day-to-day life of people and governments [START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF]. Realizing this, many governments across the world are transforming into a new form of government called electronic government (hereafter, e-government) [START_REF] Akman | E-Government: A global view and an empirical evaluation of some attributes of citizens[END_REF] to reinforce and maintain their positions in the global competition [START_REF] Sharifi | An adaptive approach for implementing e-government in I.R. Iran[END_REF]. Though e-government provides obvious benefits to governments, professionals, and organizations, it is citizens who are actually predicted to receive a number of benefits [START_REF] Jaeger | The endless wire: E-Government as global phenomenon[END_REF]. In this light, one of the most significant requirements of citizens' day-to-day life in a country like India is the handling of their grievances against government systems, officials, organizations, and bureaucratic structures. As governments develop e-government systems to deliver services to the people, there is a need for evaluation efforts that could examine their effectiveness [START_REF] Wang | Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success[END_REF] and success. OPGRS is one such e-government system, primarily meant for addressing the grievances, issues, and problems of citizens' everyday life and getting them resolved online by the high-level government officials designated for this purpose. It provides a huge benefit to the people by resolving their problems without much hassle.
A grievance redress mechanism is part and parcel of the machinery of any administration. No administration can claim to be answerable, responsive, and user-friendly unless it has established a proficient and effective grievance redress mechanism. In fact, the grievance redress mechanism of an organization serves as a gauge of its efficiency and effectiveness, as it provides significant feedback on the working of the administration. Grievances from the public are accepted at various points in the Government of India. There are mainly two designated agencies in the central government handling these grievances, namely the Department of Administrative Reforms and Public Grievances (under the Ministry of Personnel, Public Grievances and Pensions) and the Directorate of Public Grievances (under the Cabinet Secretariat). The public grievance redress mechanism in India functions on a decentralized basis. An officer of the level of Joint Secretary is designated as the Director of Grievances of the Ministry/Department/Organization.
The major reasons for grievances primarily include socio-economic factors such as the corruption prevalent in ministries, government organizations, and bureaucratic systems, which is ubiquitous in the current society. People feel helpless against it and are bound to tolerate it in their day-to-day lives. Moreover, factors such as lack of awareness and lack of relevant information about whom to complain to make this process even more tedious. In view of this, OPGRS has been designed and developed to take care of such problems without people having to step into the offices of ministries and government organizations, or even, at times, without knowing where to go to lodge their complaints. In the majority of cases, they do not even know who is accountable for listening to their problems. Therefore, the significance of such e-government systems is felt even more for the smooth, transparent and impartial running of governments. The success of this system can be measured only when a large section of the society adopts it and the government responds properly to their problems, leading to citizens' satisfaction.
Although OPGRS offers several advantages (as outlined above), its adoption is currently low. Despite the low adoption rate, the existing literature has not yet attempted to examine citizens' adoption behaviour and their satisfaction with the use of such an important public administration system. It is evident from the discussion presented above that it would be useful to study the intention to adopt, adoption of, and satisfaction with this system. Hence, the aim of this study is to undertake an exploratory study to examine the success of OPGRS by exploring citizens' adoption intention.
The remaining paper is organised as follows to fulfill the desired aim: the next section undertakes a review of e-government literature based on IS success models; this is followed by a brief discussion of the utilised theoretical background of DeLone and McLean's (1992, 2003) and [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] IS success models. Section 4 then provides an overview of the proposed research model and justification for the proposed hypotheses, followed by a brief discussion of the utilised research method. Findings are presented and discussed in Sections 6 and 7. Finally, the conclusion, including limitations, future research directions, and implications for theory and practice, is presented in Section 8.
Literature Review
As far as e-government adoption research is concerned, some studies [START_REF] Chai | Repeated Use of E-Gov Web Sites: A Satisfaction and Confidentiality Perspective[END_REF][START_REF] Chen | Impact of quality antecedents on taxpayer satisfaction with online tax-filing systems -An empirical study[END_REF][START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF][START_REF] Gotoh | Critical factors increasing user satisfaction with e-government services[END_REF][START_REF] Hsu | Understanding Information Systems Usage Behavior in E-Government: The Role of Context and Perceived Value[END_REF][START_REF] Hu | Determinants of Service Quality and Continuance Intention of Online Services: The Case of eTax[END_REF][START_REF] Sambasivan | User acceptance of a G2B system: A case of electronic procurement system in Malaysia[END_REF][START_REF] Scott | Understanding Net Benefits: A Citizen-Based Perspective on E-government Success[END_REF][START_REF] Teo | Trust and Electronic Government Success: An Empirical Study[END_REF] have used IS success models to analyse the use of, intention to use, and satisfaction toward adopting an e-government system. In recent years, many citizen-centric, Internet-based enhanced services have been implemented by the governments of various countries, including India. As governments develop e-government systems to offer such enhanced services to citizens, further assessment efforts are required to measure the effectiveness of the e-government systems. Such evaluation efforts would allow government agencies to determine whether they are capable of delivering what citizens require and providing the expected services accordingly [START_REF] Gupta | EGovernment evaluation: A framework and case study[END_REF][START_REF] Wang | Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success[END_REF].
Analysing the findings of various studies in the literature, [START_REF] Chai | Repeated Use of E-Gov Web Sites: A Satisfaction and Confidentiality Perspective[END_REF] implied that the success of e-government depends on how well governments offer high-quality, user-oriented e-government services to citizens. One of the major factors in e-government success is the government website. The relationship between the quality of a website and its success has been analysed in several research papers [START_REF] Chai | Repeated Use of E-Gov Web Sites: A Satisfaction and Confidentiality Perspective[END_REF]. [START_REF] Palmer | Web site usability, design, and performance metrics[END_REF] argued that the quality of a website can be measured by its connection speed, navigability, interactivity, responsiveness, and the quality of its substance. It has also been found that website quality is positively linked to the development of trusting intentions toward an e-commerce website [START_REF] Mcknight | Trust in e-commerce vendors: A two-stage model[END_REF]. Website service quality can therefore be considered one of the strong predictors of e-government success and of users' intention to continue using an e-government website [START_REF] Chai | Repeated Use of E-Gov Web Sites: A Satisfaction and Confidentiality Perspective[END_REF]. [START_REF] Hsu | Understanding Information Systems Usage Behavior in E-Government: The Role of Context and Perceived Value[END_REF] provided an alternative conceptualization of the IS success model for examining IS use behaviour in e-government in the Taiwanese context. Their analysis indicated that users' intention to use IS in e-government is governed by social value (i.e. normative pressure) and functional value (i.e. information and system quality) rather than by conditional value (i.e. system quality) and satisfaction. [START_REF] Teo | Trust and Electronic Government Success: An Empirical Study[END_REF] analysed the influence of trust in specific e-government systems on the quality constructs (i.e. information, system, and service quality) of the IS success model. They argued that a higher level of citizen trust would be positively associated with the information, system, and service quality of the systems [START_REF] Teo | Trust and Electronic Government Success: An Empirical Study[END_REF]. Similarly, backed by the IS success model, [START_REF] Wang | Building the Model of Sustainable Trust in E-government[END_REF] devised a model of citizens' sustainable trust in e-government. [START_REF] Gotoh | Critical factors increasing user satisfaction with e-government services[END_REF] undertook a similar analysis of the online tax declaration services of the Japanese government and examined them quantitatively to elucidate the factors that enhance users' satisfaction with such services. That paper used the IS success model with two amendments, in which preparation quality and result quality were used as constructs in addition to system quality, which was drawn directly from the IS success model. [START_REF] Hu | Determinants of Service Quality and Continuance Intention of Online Services: The Case of eTax[END_REF] examined the determinants of service quality and continuance intention for the eTax system in the context of Hong Kong. Their data analysis supported both service traits (i.e. security and convenience) and one technology trait (i.e. perceived ease of use) as key determinants of service quality.
They also observed that it was service quality, rather than perceived usefulness, that was the strongest predictor of continuance intention. [START_REF] Scott | Understanding Net Benefits: A Citizen-Based Perspective on E-government Success[END_REF] provided a multi-faceted framework for understanding the success of e-government websites from citizens' perspectives. They established the role of net benefits in the evaluation of e-government success and extended knowledge of e-government success by determining the influence of IT quality constructs. Chen (2010) discussed taxpayers' satisfaction with the online system for filing individual income tax returns in the context of Taiwan. The study covered the system's information, system, and service qualities, which are precursors of users' satisfaction with any system. Using DeLone and McLean's IS success model, the author demonstrated how use of the system could be enhanced by increasing satisfaction with it, and found that information and system quality are significant factors toward achieving this goal [START_REF] Chen | Impact of quality antecedents on taxpayer satisfaction with online tax-filing systems -An empirical study[END_REF]. [START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF] measured the success of the Greek taxation information system (TAXIS) from the perspective of expert employees using constructs including information, system, and service quality, perceived usefulness, and user satisfaction, and found strong links between the five success constructs. However, they found the effect of system quality on perceived usefulness to be quite low, and its effect on user satisfaction to be non-significant. [START_REF] Sambasivan | User acceptance of a G2B system: A case of electronic procurement system in Malaysia[END_REF] used an extended version of the [START_REF] Delone | The DeLone and McLean model of information systems success: A ten-year update[END_REF] IS success model to examine the factors that influence the intention to use and the actual use of the electronic procurement system by various ministries of the Malaysian government. They extended the DeLone and McLean IS success model with factors such as trust, facilitating conditions, and web-design quality and found them strongly linked with intention to use.
3 Research Model Development and Hypotheses
Theoretical Background -IS Success Models
There are primarily three models in the area of IS success. The first IS success model was given by DeLone and McLean (1992) with six factors, namely system quality, information quality, use, user satisfaction, individual impact, and organizational impact [START_REF] Delone | Information systems success: The quest for the dependent variable[END_REF]. In order to address criticism by several studies (such as [START_REF] Seddon | A partial test and development of DeLone and McLean's model of IS success[END_REF]) relating to some of its constructs, such as individual and organizational impact and use, [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] introduced a re-specified model of DeLone and McLean in which use of the system was considered to have results of various types, and perceived usefulness was introduced into the model as an IS success measure. Later, in 2003, DeLone and McLean discussed many of the significant IS research efforts that had applied, validated, challenged, and offered enhancements to their original model. The updated IS success model (DeLone and McLean, 2003) incorporated a new construct, service quality, and substituted the variables individual and organizational impact with net benefits, accounting for benefits at different levels of analysis.
Overview of Proposed Research Model for Examining Success Factors of OPGRS
The theoretical development is based on the IS success models described above [START_REF] Delone | Information systems success: The quest for the dependent variable[END_REF][START_REF] Jaeger | The endless wire: E-Government as global phenomenon[END_REF][START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF]. The decision not to consider certain constructs of these models in the research model proposed for this study is based on logical grounds. For example, service quality is concerned with measuring the quality of service provided by IT departments, as opposed to specific IT applications; it mainly examines users' beliefs about and perceptions of the IT department [START_REF] Petter | Measuring information systems success: models, dimensions, measures, and interrelationships[END_REF]. Since this study is concerned with a specific application (i.e. OPGRS), it was deemed inappropriate to include service quality in the proposed research model. The construct 'use' was also excluded from the proposed model because the respondents of this study were potential adopters ('not actual users') of the system. Although they were shown how the system works and its benefits, and are expected to use the system in the future, they did not yet have experience with it. For the purpose of this study, the perceived usefulness construct from the [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] model was added to replace use. [START_REF] Seddon | A partial test and development of DeLone and McLean's model of IS success[END_REF] and [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] argued that non-use of a system does not necessarily indicate that it is not useful; it may simply indicate that the potential users have other tasks to perform [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF][START_REF] Seddon | A partial test and development of DeLone and McLean's model of IS success[END_REF]. Such arguments further justify the inclusion of perceived usefulness as an appropriate construct in the proposed model. Based on the above discussion, the proposed research model (see Fig. 1) postulates that system quality and information quality will have a significant influence on perceived usefulness, and that these three constructs together will be significantly related to intention to use and user satisfaction. Testing the postulated relationships (see Table 2 for the list of hypotheses) can help to assess the success of online public grievance redressal systems.
Hypotheses Development
As illustrated in Fig. 1, a total of eight hypotheses are proposed based on the relationships between five constructs. The core (independent) constructs are listed and defined in Table 1.
Table 1. Definitions of core constructs used in proposed model (Adopted from Seddon, 1997)
System Quality: System quality is concerned with whether or not there are 'bugs' in the system, the consistency of the user interface, ease of use, quality of documentation, and, sometimes, quality and maintainability of the program code.
Information Quality: Information quality is concerned with issues such as the relevance, timeliness, and accuracy of information generated by an information system. Not all applications of IT involve the production of information for decision-making (e.g., a word processor does not produce any information), so information quality is not a measure that can be applied to all systems.
Perceived Usefulness: Perceived usefulness is a perceptual indicator of the degree to which the stakeholder believes that using a particular system has enhanced his or her job performance, or his or her group's or organization's performance.
Table 2. A list of proposed hypotheses
H1: System quality has a positive and significant relationship with perceived usefulness
H2: System quality has a positive and significant relationship with intention to use
H3: System quality has a positive and significant relationship with user satisfaction
H4: Information quality has a positive and significant relationship with perceived usefulness
H5: Information quality has a positive and significant relationship with intention to use
H6: Information quality has a positive and significant relationship with user satisfaction
H7: Perceived usefulness has a positive and significant relationship with user satisfaction
H8: Perceived usefulness has a positive and significant relationship with intention to use
System Quality → Perceived Usefulness
Seddon and Kiew (1994) examined a part of the DeLone and McLean (1992) IS success model and replaced 'use' with 'usefulness'; their outcomes partially supported the DeLone and McLean (1992) IS success model. After replacing use with usefulness, the influence of system quality on perceived usefulness was found to be very significant. Later, Fraser and Salter (1995) obtained a very similar result after replicating the [START_REF] Seddon | A partial test and development of the DeLone and McLean model of IS success[END_REF] study. Ease of use has been devised as one of the characteristics of system quality, and the meta-analytic results for this frequently examined relationship (in other words, the impact of ease of use on usefulness) have been found positive and significant across various categories [START_REF] King | A meta-analysis of the technology acceptance model[END_REF]. Examining the success of the Greek taxation information system (TAXIS), [START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF] also investigated the link between system quality and perceived usefulness; however, the effect was found to be very low. It was argued that the sample consisted largely of computer and Internet literates, and hence system quality might not be a crucial factor for determining perceived usefulness. Therefore, we hypothesize:
H1: System quality has a positive and significant effect on perceived usefulness of the online public grievance redressal systems.
System Quality → Intention to Use
The individual meta-analysis results for the [START_REF] Delone | The DeLone and McLean model of information systems success: A ten-year update[END_REF] IS success model supported and indicated a positive and highly significant relationship between system quality and intention to use [START_REF] Petter | A meta-analytic assessment of the DeLone and McLean IS success model: An examination of IS success at the individual level[END_REF]. The impact of ease of use, which is considered one of the measures of system quality, is already established as positive and significant on intention to use in the majority of cases (58 out of 101 relationships found significant as per [START_REF] Lee | The Technology Acceptance Model: Past, Present, and Future[END_REF]) and through meta-analysis under various categories [START_REF] King | A meta-analysis of the technology acceptance model[END_REF] as well. Therefore, we hypothesize:
H2: System quality has a positive and significant effect on intention to use of the online public grievance redressal systems.
System Quality → User Satisfaction
Prior empirical findings [START_REF] Iavari | An empirical test of the DeLone-McLean model of information system success[END_REF][START_REF] Rai | Assessing the validity of IS success models: An empirical test and theoretical analysis[END_REF][START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF][START_REF] Seddon | A partial test and development of DeLone and McLean's model of IS success[END_REF][START_REF] Wang | Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success[END_REF] have supported the positive and significant impact of system quality on user satisfaction discussed in DeLone and McLean's model. This indicates that higher levels of system quality are positively associated with higher levels of user satisfaction. However, analysing TAXIS in the context of Greece, [START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF] found this effect to be non-significant. The authors argued that system quality may not be a prominent factor in determining satisfaction, given that the sample was sufficiently computer and Internet literate. In contrast, measuring e-government system success through a validation of the DeLone and McLean model, [START_REF] Wang | Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success[END_REF] found a significant impact of system quality on user satisfaction. Therefore, we hypothesize:
H3: System quality has a positive and significant effect on user satisfaction of the online public grievance redressal systems.
Information Quality → Perceived Usefulness
This relationship is supported by the [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] IS success model, which substituted 'IS use' in the [START_REF] Delone | Information systems success: The quest for the dependent variable[END_REF] success model with perceived usefulness. [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] pointed out that perceived usefulness is impacted directly by beliefs about information quality. Later, [START_REF] Rai | Assessing the validity of IS success models: An empirical test and theoretical analysis[END_REF] analysed and validated the [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] model and its amended versions and found the effect of information quality on perceived usefulness to be positive and significant. Similarly, [START_REF] Franz | Organisational context, user involvement and the usefulness of information systems[END_REF], [START_REF] Kraemer | The usefulness of computer-based information to public managers[END_REF] and [START_REF] Seddon | A partial test and development of DeLone and McLean's model of IS success[END_REF] have argued that augmented information quality leads to enhanced usefulness. Moreover, [START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF] explored the effect of information quality on perceived usefulness in the context of the Greek TAXIS system and confirmed the [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] argument. Therefore, we hypothesize:
H4: Information quality has a positive and significant effect on perceived usefulness of the online public grievance redressal systems.
Information Quality → Intention to Use
DeLone and McLean's updated IS success model [START_REF] Delone | The DeLone and McLean model of information systems success: A ten-year update[END_REF] hypothesized and supported the link between information quality and intention to use. Moreover, the effect of information quality on intention to use is strongly supported by the meta-analytic results for the DeLone and McLean IS success model, which showed a strong relationship between these variables even though this pairing had the smallest overall sample size in the meta-analysis [START_REF] Petter | A meta-analytic assessment of the DeLone and McLean IS success model: An examination of IS success at the individual level[END_REF]. E-government based studies, however, have yet to validate this relationship. Hence, the following hypothesis can be formulated:
H5: Information quality has a positive and significant effect on intention to use of the online public grievance redressal systems.
Information Quality → User Satisfaction
Several prior studies on IS success have demonstrated support for the argument that a higher degree of information quality leads to enhanced user satisfaction [START_REF] Chae | Information quality for mobile internet services: A theoretical model with empirical validation[END_REF][START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF][START_REF] Iavari | An empirical test of the DeLone-McLean model of information system success[END_REF][START_REF] Mcgill | User-developed applications and information systems success: A test of DeLone and McLean's model[END_REF][START_REF] Rai | Assessing the validity of IS success models: An empirical test and theoretical analysis[END_REF][START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF][START_REF] Seddon | A partial test and development of DeLone and McLean's model of IS success[END_REF][START_REF] Wang | Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success[END_REF][START_REF] Zhang | A framework of ERP systems implementation success in China: An empirical study[END_REF]. The meta-analytic assessment of the DeLone and McLean model by [START_REF] Petter | A meta-analytic assessment of the DeLone and McLean IS success model: An examination of IS success at the individual level[END_REF] also strongly supported the effect of information quality on user satisfaction. In the context of e-government adoption research, [START_REF] Wang | Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success[END_REF] presented and validated a model of e-government system success (based on the DeLone and McLean (2003) IS success model) and found that the influence of information quality on user satisfaction was significantly supported. Similar results were obtained for the TAXIS system analysed by [START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF], whose findings indicated information quality to be an important and strong determinant of employees' satisfaction. Hence, we hypothesize:
H6: Information quality has a positive and significant effect on user satisfaction of the online public grievance redressal systems.
Perceived Usefulness → User Satisfaction
[START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] validated and supported the positive effect of perceived usefulness on user satisfaction. [START_REF] Rai | Assessing the validity of IS success models: An empirical test and theoretical analysis[END_REF] specified and empirically tested the [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] model and its amended IS success model and found a significant correlation between perceived usefulness and user satisfaction. Moreover, the findings of [START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF] confirmed that perceived usefulness is a strong determinant of user satisfaction in the context of the Greek TAXIS system; they also noted that this relationship has not been explored much in the extant literature [START_REF] Floropoulos | Measuring the success of the Greek Taxation Information System[END_REF]. Studies such as [START_REF] Franz | Organisational context, user involvement and the usefulness of information systems[END_REF] and [START_REF] Seddon | A partial test and development of DeLone and McLean's model of IS success[END_REF] have also supported a positive correlation between these constructs. Therefore, we hypothesize:
H7: Perceived usefulness has a positive and significant effect on user satisfaction of the online public grievance redressal systems.
Perceived Usefulness → Intention to Use
Building on prior IS research, the technology acceptance model (TAM) conceptualized usefulness as one of the significant perceptions leading to the intention to adopt new systems [START_REF] Lee | The Technology Acceptance Model: Past, Present, and Future[END_REF]. Research has shown that perceived usefulness influences the intended adoption of IT [START_REF] Gefen | The Relative Importance of Perceived Ease of Use in IS Adoption: A Study of E-Commerce Adoption[END_REF]. As far as e-government adoption research is concerned, this relationship has been examined through models such as TAM and the extended TAM (TAM2); of the 24 times it has been examined across different studies, it was found significant in 21 cases. Meta-analysis has also found the collective effect of perceived usefulness on intention to use to be significant. However, to the best of our knowledge, this relationship has not been explored in the context of the IS success model in e-government research. Considering the overall performance of this relationship across IS research in general and e-government adoption research in particular, the following hypothesis can be formulated:
H8: Perceived Usefulness has a positive and significant influence on intention to use the online public grievance redressal systems.
Research Methodology
For the purpose of examining the e-government system success of OPGRS, the researchers considered a survey to be an appropriate research method [START_REF] Cornford | Project Research in Information Systems: A Student's Guide[END_REF][START_REF] Choudrie | Investigating the research approaches for examining technology adoption issues[END_REF]. There are various ways to capture data; however, a self-administered questionnaire was found to be suitable as the primary data collection instrument in this research. This is because it addresses the reliability of the information collected by reducing variation in the way the questions are asked and presented (Cornford and Smithson, 1996). Moreover, collecting data from a majority of respondents within a short and specific period of time was a critical issue for this research [START_REF] Fowler | Survey Research Methods[END_REF]. Therefore, only closed and multiple-choice questions were included in the questionnaire. The final questionnaire consisted of 30 questions in total, including 10 questions on respondents' demographic characteristics and 20 questions on the five constructs of the proposed research model. All non-demographic questions were closed-ended, seven-point Likert scale items, with anchors ranging from "strongly disagree" (1) to "strongly agree" (7) [START_REF] Wang | Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success[END_REF]. Appendix A lists all the items for the constructs used in this study.
The sample of the study consists of a wide spectrum of respondents from different cities of India, including New Delhi, Pune, Mumbai, Bangalore, Patna, Siliguri, and Gangtok. From the literature on IS success models, five factors were identified, and a questionnaire for examining intention to use and satisfaction was then created and pilot tested with 34 respondents. While the pilot test indicated a valid and reliable measuring instrument, the researchers agreed that further analysis could reduce the set of factors and that further validation efforts were required [START_REF] Griffiths | User satisfaction as a measure of system performance[END_REF]. Following the success of the pilot test, a total of 1500 questionnaires were distributed to respondents through one-to-one and group interactions. The respondents were briefed on, and given a demonstration of, the functioning of the online public grievance redressal system, and in some cases they were given a maximum of two days to complete the questionnaire, in view of the long list of questions. Some of the questionnaires, however, were completed on the spot. A total of 485 completed questionnaires were received back. Further scrutiny revealed that 66 of them were only partially completed, and these were excluded from the subsequent analysis. Hence, we were left with 419 usable responses, which formed the basis of the empirical analysis for measuring the IS success of OPGRS. The overall response rate was 32.3%, with 27.9% valid questionnaires.
5 Research Findings
Respondents' Demographic Profile
This section analyses the demographic data (in Table 3) obtained from the respondents. As per the questionnaire results, most respondents were aged between 20 and 34, with males accounting for 67.8% of the sample and females for 32.2%. The majority of respondents (56.1%) belong to the student community, with a fair representation of public- and private-sector employees (29.3%). As far as educational qualifications are concerned, 82% of the respondents hold at least a graduate degree. The computer and Internet literacy and awareness of the respondents can be judged from their very high proportion of computer and Internet experience (≈ 96%). This is also supported by their computer and Internet access at various places and their very high frequency of Internet use. It is therefore argued that the sample represents well-suited potential users and adopters of systems such as the online public grievance redressal system.
Reliability Analysis -Cronbach's Alpha (α)
Reliability analysis was performed using Cronbach's alpha, which indicates the internal consistency of the items measuring the same construct [START_REF] Hair | Multivariate data analysis, with readings[END_REF][START_REF] Zikmund | Business research methods[END_REF]. The Cronbach's alpha reliability for all constructs except system quality lies in the range 0.796-0.881, which is quite good; a Cronbach's alpha greater than 0.7 is considered good [START_REF] Nunnaly | Psychometric theory[END_REF][START_REF] Hair | Multivariate data analysis, with readings[END_REF]. The alphas therefore imply strong reliability for all constructs except system quality, which is at a satisfactory level.
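For readers who wish to reproduce this reliability check, the computation is straightforward. The sketch below is a minimal, illustrative Python example (the original analysis was run in SPSS); the item column names and the randomly generated placeholder responses are hypothetical, following only the item labels in Appendix A.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-scale items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]                         # number of items in the construct
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage: one column per item of a construct (e.g. IQ1..IQ4).
responses = pd.DataFrame(
    np.random.randint(1, 8, size=(419, 4)),    # placeholder 7-point Likert data
    columns=["IQ1", "IQ2", "IQ3", "IQ4"],
)
print(round(cronbach_alpha(responses), 3))
```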
Descriptive Statistics
Table 5 presents the mean and standard deviation (S.D.) for all the five constructs and their individual items.
The high overall as well as individual items' means for most of the constructs, except user satisfaction, indicate that respondents react favourably to the IS success measures examined.
Hypotheses Testing
Tables 6, 7, and 8 present the output of the linear regression models analysed using SPSS 19.0. The analysis presented in Table 6 supported all the hypotheses on intention to use (i.e. H2, H5, and H8) as positive and significant. The constructs SQ, PU, and IQ explain 25.1% (adjusted R²) of the variance in respondents' intention to use the online public grievance redressal system. Since the overall model is significant (F=47.811, p=0.000), the significance of the individual independent variables was further examined. All independent variables were found significant at the 1% significance level except IQ, which was significant at the 5% level. Therefore, all three hypotheses H2, H5, and H8 are supported. Table 7 presents the β-values of the independent variables SQ, PU, and IQ on US. The analysis exhibits a stronger effect of SQ (β=0.310) and IQ (β=0.355) on US than on IU. However, PU has a stronger influence on intention to use (β=0.288) than on US (β=0.109). That is, the greater the usefulness, the higher the intention to use the system rather than the satisfaction with it, while the higher the system and information quality, the more satisfied respondents tend to be with the system. This suggests that OPGRS enhances respondents' overall satisfaction by providing accurate, reliable, and up-to-date information, among other things, in the bureaucratic government system of a country like India. All three hypotheses H3, H6, and H7 were found positive and significant for user satisfaction. The same set of independent constructs (i.e. SQ, PU, and IQ) explains 43% (adjusted R²) of the variance in respondents' satisfaction with the system. The overall model was found significant (F=106.098, p=0.000), and the significance of the individual independent variables was further verified: both SQ and IQ were significant for US at the 0.001 level, whereas PU was significant at the 0.05 level.
Table 8 summarizes the results of the hypotheses with perceived usefulness as the dependent variable. This model explains 42.6% of the variance in respondents' perceived usefulness of OPGRS. Again, the overall model was found significant (F=154.538, p=0.000), with the individual independent variables IQ and SQ both significant determinants of respondents' perceived usefulness at the 0.001 level. Hence, hypotheses H1 and H4 are supported. This time, SQ exhibits a stronger effect (β=0.447) on perceived usefulness than IQ (β=0.293), indicating that the higher the system quality, the higher its usefulness as perceived by its users. The hypothesis testing results of the linear regression analysis, with the coefficient values (i.e. β-values), p-values, and R²-values, are presented along the research model in Fig. 2.
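As a companion to the SPSS output reported in Tables 6-8, the following minimal Python sketch shows how the same three regressions could be reproduced from item-level data. It is illustrative only: the data file name, the column names, and the averaging of items into construct scores are assumptions rather than a description of the original analysis.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data file with one column per survey item (see Appendix A).
df = pd.read_csv("opgrs_survey.csv")

# Construct scores as the mean of their items (a common, but assumed, scoring rule).
constructs = {
    "SQ": ["SQ1", "SQ2", "SQ3"],
    "IQ": ["IQ1", "IQ2", "IQ3", "IQ4"],
    "PU": ["PU1", "PU2", "PU3", "PU4", "PU5", "PU6"],
    "IU": ["IU1", "IU2", "IU3"],
    "US": ["US1", "US2", "US3"],
}
scores = pd.DataFrame({name: df[items].mean(axis=1) for name, items in constructs.items()})

def regress(dv, ivs):
    """Ordinary least squares regression of one dependent variable on several predictors."""
    X = sm.add_constant(scores[ivs])
    model = sm.OLS(scores[dv], X).fit()
    print(f"\n{dv} ~ {' + '.join(ivs)}")
    print(model.summary().tables[1])            # unstandardized B, t and p values
    print("Adjusted R^2:", round(model.rsquared_adj, 3))

regress("IU", ["SQ", "PU", "IQ"])   # corresponds to Table 6
regress("US", ["SQ", "PU", "IQ"])   # corresponds to Table 7
regress("PU", ["IQ", "SQ"])         # corresponds to Table 8
```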
Discussion
The hypothesis testing results indicated strong links between the five constructs, supporting the hypotheses. The regression coefficients indicated that system and information quality are significant positive determinants of perceived usefulness, and their effect on user satisfaction was even stronger. In contrast, while the effect of perceived usefulness on user satisfaction was significant, its influence on intention to use was even stronger as far as OPGRS is concerned.
Finally, the findings also revealed that system and information quality are the stronger predictors of perceived usefulness.
It is evident from the above analysis that the perceived usefulness of the system leads respondents more toward the intention to use it than toward being satisfied with it. Although perceived usefulness is found to be an effective determinant of both intention to use and user satisfaction, it is more significant for intention to use than for satisfaction. This may be because, on the basis of the system's usefulness, users tend toward using the system more than toward a corresponding level of satisfaction. The argument that the intention to use a system is based on its perceived usefulness has already been well established by [START_REF] Davis | Perceived usefulness, perceived ease of use, and user acceptance of information technology[END_REF] through the TAM model and supported by a number of studies using this model. Moreover, a significant impact of perceived usefulness on user satisfaction (established by [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF]) has also been supported by [START_REF] Colesca | Adoption and use of e-government services: The case of Romania[END_REF] in their study of e-government adoption in the Romanian context.
As OPGRS is a relatively new system and is not yet being used by the sample population for lodging their complaints and grievances, a stronger significant relationship between perceived usefulness and satisfaction can be expected only once the system has been used to a certain extent. On the other hand, predictors such as system and information quality make the respondents more satisfied than inclined to use the system, although all of the relationships were found significant. This may be because the respondents believed that the system would benefit them without much hassle. Another significant reason for respondents being more satisfied with this system is that it does not involve any transactions and therefore removes their fear of losing anything monetarily. All of these relationships were found significant, as described in the DeLone and McLean (2003) IS success model. Finally, the constructs system and information quality strongly determine the perceived usefulness of the system, for the simple reason that it is the flexibility, conciseness, ease of use, faster response time, user-friendly interface, accuracy, completeness, and significance of the system that can make it more useful. This strong empirical evidence also supports the view of the [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] IS success model.
Conclusion
This research is a response to a call for the continuous challenge and validation of IS success models in different contexts (DeLone and McLean, 2003; Rai et al., 2002). The purpose of this study is to examine the success of OPGRS through an e-government based IS success model, developed using DeLone and McLean's and Seddon's IS success models. We therefore integrated the constructs of these two models to form a model that could explain the success of OPGRS as perceived by the sample of respondents in the context of India. All eight hypotheses performed significantly, in line with the expectations of the IS success models. It is therefore evident from the empirical findings that the implementation of OPGRS appears to be quite successful even though it is not a very old system. However, it was sensed that the government should take more initiatives to enhance the information quality of the system to attract more positive responses from citizens toward their inclination to use the system. Moreover, there should be an emphasis on highlighting the usefulness of the system as a whole to make citizens aware, engaged, and satisfied.
Limitations and Future Research Directions
Even though a thorough process has allowed us to develop and validate the e-government based system success model, this study has a number of limitations that can be addressed in future research. Firstly, the exploration of the IS success model in the context of e-government systems is relatively new to e-government researchers. Caution therefore needs to be taken when generalizing the findings to other categories of users (i.e. in G2B and G2G contexts), as well as when applying this model in other developing countries, even in a G2C context. Secondly, this model does not measure net benefits as defined in the IS success model [START_REF] Delone | The DeLone and McLean model of information systems success: A ten-year update[END_REF][START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF]. Measuring net benefits from the citizens' point of view could therefore reveal further facts about the system. However, future researchers need to clearly and carefully define the stakeholders and situations under which the net benefits are to be examined [START_REF] Delone | The DeLone and McLean model of information systems success: A ten-year update[END_REF].
Thirdly, the study has not validated this system for specific cultural and geographical contexts; future research can explore these aspects further. Finally, this study has performed an empirical investigation of e-government system success based on a snapshot view of the sample. A longitudinal view of the sample data would allow researchers to better explore the actual use of the system and its after-effects.
Implications for Theory and Practice
The first theoretical implication of this research is that the success of this system has been tested for the first time. Secondly, we have integrated the [START_REF] Delone | The DeLone and McLean model of information systems success: A ten-year update[END_REF] and [START_REF] Seddon | A respecification and extension of the DeLone and McLean model of IS success[END_REF] IS success models to provide a better understanding of OPGRS's success. The model presented here can be tested further using longitudinal data gathered from the same sample after they have used the system for some time, or using data collected from respondents who are already using the system. The empirical testing of the hypotheses linked to the model can help researchers toward a better understanding of citizens' intention to use and satisfaction with the system. The results will allow e-government practitioners to recognize the factors that deserve more attention in order to increase citizens' satisfaction and intention to use the system. The current link between information quality and intention to use the system, although significant, is not strong; system designers should pay more attention to enhancing the standard of information quality of the system to strengthen people's intention to use it. Practitioners should also reinforce the perceived usefulness of the system in such a way that they can ensure users' satisfaction to a larger extent.
Fig. 1. The proposed research model
Fig. 2. Validated research model to measure intention to use and user satisfaction
Table 3. Demographic characteristics of respondents
Characteristics Frequency %
Age
20-24 Years 228 54.4
25-29 Years 70 16.7
30-34 Years 52 12.4
35-39 Years 27 6.4
40-44 Years 11 2.6
45-49 Years 13 3.1
50-54 Years 7 1.7
55-59 Years 1 0.2
>= 60 Years 10 2.4
Gender
Male 284 67.8
Female 135 32.2
Education
Non-Matriculation 7 1.7
Matriculation 13 3.1
10+2/Intermediate 55 13.1
Graduate 161 38.4
Post-Graduate 169 40.3
Post-Graduate Research 14 3.3
Occupation
Table 4. Cronbach's alpha (α) of constructs
Construct Cronbach's Alpha (α)
System Quality 0.548
Information Quality 0.810
Perceived Usefulness 0.800
Intention to Use 0.796
User Satisfaction 0.881
Table 5. Descriptive statistics of the constructs and their items
Measure Item Mean S.D.
System Quality (SQ) 5.19 0.97
SQ1 5.17 1.31
SQ2 5.33 1.35
SQ3 5.06 1.37
Information Quality (IQ) 5.02 1.08
IQ1 5.11 1.28
IQ2 4.92 1.38
IQ3 5.05 1.40
IQ4 4.98 1.35
Perceived Usefulness (PU) 5.29 0.96
PU1 5.51 1.35
PU2 5.06 1.41
PU3 4.97 1.51
PU4 5.05 1.36
PU5 5.58 1.23
PU6 5.55 1.23
Intention to Use (IU) 5.26 1.23
IU1 5.31 1.50
IU2 5.20 1.46
IU3 5.27 1.40
User Satisfaction (US) 4.15 0.95
US1 5.08 1.45
US2 5.12 1.45
US3 5.21 1.33
US4 5.35 1.30
Table 6. Effect of system quality, perceived usefulness, and information quality on intention to use
Model B S.E. Beta t Sig. Result (B, S.E. = unstandardized coefficients; Beta = standardized coefficients)
(Constant) 1.409 0.326 4.319 0.000
SQ 0.245 0.071 0.194** 3.469 0.001 Supported
PU 0.369 0.072 0.288*** 5.147 0.000 Supported
IQ 0.126 0.060 0.111* 2.098 0.036 Supported
Model R 2 0.257
Adjusted R 2 0.251
F 47.811
Significance 0.000
[Note: *: p<0.05, **: p<0.01; ***: p<0.001][Legend: S.E. = Standard Error, Sig. = Significance]
Table 7. Effect of system quality, perceived usefulness, and information quality on user satisfaction
Model B S.E. Beta t Sig. Result (B, S.E. = unstandardized coefficients; Beta = standardized coefficients)
(Constant) 0.437 0.221 1.982 0.048
SQ 0.303 0.048 0.310*** 6.343 0.000 Supported
PU 0.109 0.049 0.109* 2.246 0.025 Supported
IQ 0.312 0.040 0.355*** 7.723 0.000 Supported
Model R 2 0.434
Adjusted R 2 0.430
F 106.098
Significance 0.000
[Note: *: p<0.05, **: p<0.01; ***: p<0.001][Legend: S.E. = Standard Error, Sig. = Significance]
Table 8. Effect of information quality and system quality on perceived usefulness
Model B Std. Error Beta t Sig. Result (B, Std. Error = unstandardized coefficients; Beta = standardized coefficients)
(Constant) 1.706 0.207 8.249 0.000
IQ 0.259 0.039 0.293*** 6.672 0.000 Supported
SQ 0.440 0.043 0.447*** 10.178 0.000 Supported
Model R 2 0.426
Adjusted R 2 0.424
F 154.538
Significance 0.000
[Note: *: p<0.05, **: p<0.01; ***: p<0.001]
Appendix A. Survey items used in this study
Information Quality
IQ1 The public grievance redressal system would provide sufficient information
IQ2 Through public grievance redressal system, I would get the information I need in time
IQ3 Information provided by public grievance redressal system would be up-to-date
IQ4 Information provided by public grievance redressal system would be reliable
System Quality
SQ1 The public grievance redressal system would be user friendly
SQ2 I would find the public grievance redressal system easy to use
SQ3 I would find it easy to get the public grievance redressal system to do what I would like it to do
Perceived Usefulness
PU1 Using the public grievance redressal system would enable me to accomplish lodging complaint more quickly
PU2 Using the public grievance redressal system would improve my overall performance
PU3 Using the public grievance redressal system would increase my productivity
PU4 Using the public grievance redressal system would enhance my effectiveness
PU5 Using the public grievance redressal system would make it easier to lodge my complaint
PU6 I would find the public grievance redressal system useful in lodging and monitoring complaint
User Satisfaction
US1 I feel that public grievance redressal system would adequately meet my needs of interacting with government agency
US2 Public grievance redressal system would efficiently fulfill my needs of interacting with government agency
US3 Overall, I would be satisfied with the public grievance redressal system
Intention to Use
IU1 I intend to use the public grievance redressal system IU2 I predict that I would use the public grievance redressal system IU3 I plan to use the public grievance redressal system in the near future | 53,912 | [
"999539",
"999540",
"999508"
] | [
"322817",
"322817",
"322817"
] |
01467784 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467784/file/978-3-642-38862-0_17_Chapter.pdf | Richard Baskerville
email: [email protected]
Jan Pries-Heje
Discursive Co-development of Agile Systems and Agile Methods
Keywords: Agile Methods, Discourse, Adaptation of Methods, Technological Rules, Scrum
Introduction
Agile development approaches assume continuous method evolution, but sometimes offer little guidance how to go about managing this evolution. There is considerable work on the construction of information systems development methods, but this work largely has an engineering orientation of most value to plan-based development. There is a need for more work specifically on the alternatives to such engineering approaches for developing and redeveloping agile methods on the fly. Agile methods involve frequent, fast, and continuous change. On-the-fly redevelopment of agile methods also involves frequent, fast, and continuous change. The research reported here indicates that such evolution of agile methods involves the engagement of a discursive mode of method development.
Our research question regards "How do system developers over time engage in the evolution of both agile systems and agile methods in practice?" The research reported below indicates the presence of a co-development discourse within which both the methodologies and the systems that the methodologies create are evolving. This methodological evolution progresses through a process of fragmentation, articulation, and re-articulation continuously throughout a software development project.
Agile development methods
Agile information systems development has been adopted successfully by practitioners partly or completely replacing more linear or plan-driven approaches. Agile is a success in systems development, and acknowledged as such by both the practitioner and researcher community [START_REF] Dybå | Empirical studies of agile software development: A systematic review[END_REF]. Agile IS development can be defined either as an approach focusing on delivering something useful at high speed [START_REF] Ågerfalk | State of the Art and Research Challenges[END_REF] or as an approach that can adapt to a continuously changing target or requirements [START_REF] Conboy | Agility from First Principles: Reconstructing the Concept of Agility in Information Systems Development[END_REF]. Agile development is organized as iterations, repeating the same activities in short cycles [START_REF] Austin | Research Commentary--Weighing the Benefits and Costs of Flexibility in Making Software: Toward a Contingency Theory of the Determinants of Development Process Design[END_REF]. In 2001 the agile manifesto provided a statement of values and principles (agilemanifesto.org) that has spawned many agile methods. Conboy [START_REF] Conboy | Agility from First Principles: Reconstructing the Concept of Agility in Information Systems Development[END_REF] defined agility as the intersection of speed, change, learning, and customer value. He used these characteristics to propose a taxonomy and an instrument for assessing the agility of a particular practice. This assessment operated across agile, plan-driven, or in-house methods. The agility of a methodology is: "the continual readiness of an ISD method to rapidly or inherently create change, proactively or reactively embrace change, and learn from change while contributing to perceived customer value (economy, quality, and simplicity), through its collective components and relationships with its environment" [3, p. 340].
Scrum
Our cases below use Scrum, a frequently cited example of an agile methodology. Scrum originated as The Rugby Approach [START_REF] Takeuchi | The New New Product Development Game[END_REF]. This approach used small cross-functional teams to produce better results. The Rugby theme arose from concepts like "Scrum" and "Sprint", which referred to the game in order to describe how work was carried out by a team in this approach. We have read and categorized the literature on Scrum [START_REF] Sutherland | The Scrum Papers: Nut, Bolts, and Origins of an Agile Framework[END_REF]. As a result we identified four objects (abbreviated "OB"), three types of organization (abbreviated "OR"), and five types of process (abbreviated "PR").
Scrum is iterative, as are other agile approaches to IS development. The iteration in Scrum is called a Sprint. Work is organized in short iterations no more than 2-4 weeks long (PR-1). The team is a self-organizing team of equals (OR-1). The user or customer is deeply integrated in the development and is represented through the Product Owner role (OR-2). This role is defined as a user or customer with the power and ability to make decisions. The Product Owner defines the desired functionality of a new IT system and originates it as a User Story (OB-1). The Product Owner then prioritizes User Stories in a Product Backlog (OB-2). The functionality from the User Story with the highest priority is broken down into tasks in a Sprint Planning Meeting (PR-2) on the first day of a Sprint. This breakdown of User Stories into tasks is a collaborative effort involving the project team as well as the Product Owner. Planning Poker (PR2.1) is a mechanism for ensuring the right level of breakdown. All participants estimate the tasks using a number of pre-defined estimates. The allowable estimates are "Less than one day for one developer", "One day", "Two days", "Three days", and "Too big". If the group agrees a task is "Too big", then the task needs to be broken down further. If the group agrees on "Less than one day for one developer", then the task needs to be merged with another task.
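To make the task-sizing rule concrete, the following minimal Python sketch encodes the Planning Poker decision logic described above. It is purely illustrative and is not drawn from the case organizations' tooling.

```python
from collections import Counter

# Pre-defined Planning Poker estimates used in the Sprint Planning Meeting (PR-2).
ESTIMATES = ["Less than one day for one developer", "One day",
             "Two days", "Three days", "Too big"]

def poker_decision(votes):
    """Return the action implied by one round of Planning Poker votes on a task."""
    for vote in votes:
        if vote not in ESTIMATES:
            raise ValueError(f"Unknown estimate: {vote}")
    if all(v == "Too big" for v in votes):
        return "break the task down further"
    if all(v == "Less than one day for one developer" for v in votes):
        return "merge the task with another task"
    if len(set(votes)) == 1:
        return f"accept the estimate: {votes[0]}"
    # No consensus: in practice the team discusses and votes again.
    most_common, _ = Counter(votes).most_common(1)[0]
    return f"discuss and re-vote (current majority: {most_common})"

print(poker_decision(["Too big", "Too big", "Too big"]))
print(poker_decision(["One day", "Two days", "One day"]))
```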
The Sprint continues after the planning meeting. Each day the project team meets in a Daily Stand-up Meeting (PR-3) that takes place in front of a Scrum Board (OB-3). A Scrum Master (OR-3) is responsible for the meeting, which should take no more than 15 minutes. In the meeting, each team member chooses one or more of the tasks. During the meeting every team member answers three standardised questions. ( 1) What did you do yesterday? (2) What are you doing today? (3) What problems did you encounter?
The Scrum Board (OB-3) is usually a whiteboard with four columns, titled "Ready", "In-progress", "Done", and "Done-Done". Ready details the task breakdown and Planning Poker estimates. In-Progress indicates that a team member has chosen the task and work is underway. Upon task completion, this team member moves the task to Done. After this, a different team member may quality-assure the task, after which the task moves to Done-Done. The task is then registered on a Burn-Down Chart (OB-4) that depicts "expected" versus "realized" production.
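The two objects OB-3 and OB-4 can be thought of as a simple data structure plus a derived chart. The sketch below is a minimal, illustrative Python model of a Scrum Board and the burn-down values computed from it; the field names and sample tasks are our own assumptions and do not come from the case material.

```python
from dataclasses import dataclass

COLUMNS = ["Ready", "In-progress", "Done", "Done-Done"]

@dataclass
class Task:
    name: str
    estimate_days: float      # Planning Poker estimate
    column: str = "Ready"     # current Scrum Board column (OB-3)
    owner: str = ""           # name written on the card as a self-commitment

def burn_down(tasks, sprint_days, day):
    """Expected vs. realized remaining work (OB-4) at the end of a given sprint day."""
    total = sum(t.estimate_days for t in tasks)
    expected = total * (1 - day / sprint_days)                        # ideal straight line
    realized = sum(t.estimate_days for t in tasks if t.column != "Done-Done")
    return expected, realized

tasks = [Task("Login form", 2, "Done-Done", "Anna"),
         Task("Password reset", 1, "In-progress", "Ravi"),
         Task("Audit log", 3)]
print(burn_down(tasks, sprint_days=10, day=3))
```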
During the Daily Stand-up Meeting (PR-3), team members record their answers to the first two of the three standardised questions by moving tasks from column to column on the Scrum Board. For example, the answer to "What did you do yesterday?" could move a task from In-Progress to Done, and the answer to "What are you doing today?" can move a task from Ready to In-Progress. Team members usually write their names on the card associated with the task, a self-commitment made apparent through the Scrum Board.
After the Sprint iteration completes, the product is demonstrated to the Product Owner in a Sprint Review Meeting (PR-4). First, the Product Owner reflects on the value of the deliverable. Second, the team conducts a learning Retrospective (PR-5). In this retrospective, the team considers what succeeded and failed in the Sprint. Finally, the team determines one or two changes to implement or at least pursue in the next Sprint.
Research Method
The research reported in this paper operated from a staged research design with two stages. The first stage involved a multiple case study with six cases. Originally the cases were chosen to study the diffusion and adoption of Scrum, but an early finding of the studies was that all the case companies had evolved their agile methods. Hence the focus in this paper is to develop a framework that helps explain how developers evolve agile methods during projects. We chose six cases in order to build a replication logic for contrasting results across the cases [START_REF] Yin | Case Study Research: Design and Methods 4th ed2008[END_REF]. The specific agile methodology was held constant (Scrum) to reduce the ontological noise that would arise from differing perspectives of alternative agile methods. The different cases were selected to provide diversity in organizations using a common agile approach. Thereby the replication logic captures shifts in a more cohesive agility viewpoint across diverse settings. Table 1 provides details of the six organizations represented in our cases. The second stage created an evaluation engagement in which 25 practicing software developers received an orientation to the framework and were given the opportunity to evaluate its use in appraising the agile method adaptation processes in their own organizations.
Table 1 (excerpt): data collection included interviews with all members of a Scrum team and with three product owners.
InterFin: International financial organization with IT development in Europe and India. Scrum team followed for a full project (one year in length); Scrum in Europe and India.
In the six stage-one-cases we collected data using techniques appropriate for each setting. For example, the Interfin case involved interviews and observation, both in India and Europe, over the course of the project. In general, analytic induction (inductive reasoning about a social phenomenon) was used for data analysis [START_REF] Ragin | Constructing Social Research: The Unity and Diversity of Method[END_REF]. This approach involved inducing concepts, so-called "social laws", from a deep analysis of empirically isolated instances [START_REF] Znaniecki | The method of sociology1934[END_REF]. The research question drove this analytic inductive reasoning.
It was an analytical process whereby data was coded when found in the six cases and when it was related to the diffusion and evolution of agile methods [START_REF] Miles | Qualitative Data Analysis: A Sourcebook of New Methods[END_REF]. The analysis revealed that the phenomena in the case could be classified as either objects or organization. An alternative classification also surfaced in which the phenomena could be classified as either static concepts/processes, or as elements that were dynamic and ever changing. There were 12 concepts in this classification scheme that were labeled method fragments (a term adopted from the method engineering literature). In concert with the analytic induction, we followed an interrogatory data analysis process described by Pascale [START_REF] Pascale | Cartographies of Knowledge: Exploring Qualitative Epistemologies[END_REF]. In our implementation, the interrogatives included, "Under what contexts does this adaptation of Scrum arise?" "Under what circumstances may we find exceptions to this general adaptation rule?" The results were coded as technological rules [START_REF] Van Aken | Management Research as a Design Science: Articulating the Research Products of Mode 2 Knowledge Production in Management[END_REF].
Fragmentation and articulation
Agile development meets the need for systems development where a setting is subjected to continual change. Planned methodologies (which invoke mechanistic organization) are less suitable in such settings. But agile approaches (which involve organic organization) fit such settings well [START_REF] Burns | The Management of Innovation[END_REF]. Agile methodologies, like other kinds of methodologies share certain general characteristics. Methodologies are organized collections of concepts, beliefs, values, and normative principles that are supported by material resources [START_REF] Lyytinen | A Taxanomic Perspective of Information Systems Development. Theoretical Constructs and Recommendations[END_REF]. Methodologies often adopt a particular perspective intended for a particular application domain involving particular prerequisites and activities.
Because methodologies are rarely used in the exact way described [START_REF] Bansler | A reappraisal of structured analysis: Design in an organizational context[END_REF][START_REF] Truex | Amethodical Systems Development: The Deferred Meaning of Systems Development Methods[END_REF], method fragments become cobbled together with novel elements that comprise a situated methodology unique to its setting. For example, agile development often embodies an ad-hoc mix of fragments intuitively assembled from Scrum, XP, and other methods [START_REF] Esfahani | Situational Evaluation of Method Fragments: An Evidence-Based Goal-Oriented Approach[END_REF].
A method fragment is a concept, notation, tool, technique, etc. that has been coherently separated from the overall methodology. It is lifted from its original methodological framework and used in a different one. The use of "method fragments" or "method chunks" is a central principle in method engineering [START_REF] Rolland | A proposal for context-specific method engineering[END_REF]. These method fragments can have varying levels of abstraction and granularity [START_REF] Tan | Teaching Information Systems Development via Process Variants[END_REF]. Method fragments are a concept best known within the frameworks and processes of method engineering. Such frameworks engineer IS development methods by assembling them from an inventory of components: methodologies, method fragments, and innovations [START_REF] Brinkkemper | Method Engineering[END_REF]. Computer-based method engineering uses formal models to enable the rapid development of computer-aided systems analysis and software engineering (CASA/CASE) tools that provide a unique methodology for each unique development setting [START_REF] Odell | Method Engineering: Principles of method construction and tool support[END_REF].
This discourse involving organizations, settings, developers, and their method fragments should not be mistaken for the several variations of agile engineering and agile method engineering. Such variations include obtaining feedback from users of the subject methods, or method engineering of agile methods [START_REF] Schapiro | Engineering agile systems through architectural modularity[END_REF], or meta-metamethod engineering or the representation of agile ways to semantically capture the domain knowledge [START_REF] Berki | Formal metamodelling and agile method engineering in metaCASE and CAME tool environments[END_REF]. A key discovery found in our data is the participation of the fragments themselves (along with developers, settings, etc.) in the process. Because of its discursive character, articulation engenders an emergent form of methodology evolution that subsumes its complexity into a holistic and reflective social discourse. It does not deal with complexity through reduction, as an engineering approach might, but rather approaches it as a conversation between people, their problems, and their tools.
Method engineering provides a strategy for method adaptation. Situated method engineering is a strategy for continual integration of fragments such that methods can adapt as developers learn about their changing environment [START_REF] Van Slooten | Situated method engineering[END_REF]. This work has also been used as an approach to software process improvement for agile methods and for object-oriented methods [START_REF] Henderson-Sellers | Creating a Dual-Agility Method: The Value of Method Engineering[END_REF]. One objective has been the rational transformation of unique methodologies using fragment assemblies [START_REF] Karlsson | Combining method engineering with activity theory: theoretical grounding of the method component concept[END_REF]. The approach provides a form of meta-method for configuring off-the-shelf methods componentization [START_REF] Karlsson | Method configuration: adapting to situational characteristics while creating reusable assets[END_REF].
Our agile cases seemed to choose a mode of fragmentation and articulation that differed from the method engineering mode. They used a discursive perspective instead of an analytical design perspective. These were evolving agile methodologies as a discourse shared between organizations, settings, developers, and their method fragments. Fragmentation is retained because it is common beyond engineering. Method fragmentation is found in business process management [START_REF] Ravesteyn | A Context Dependent Implementation Method for Business Process Management Systems[END_REF], security method adaptation [START_REF] Low | Using a Situational Method Engineering Approach to Identify Reusable Method Fragments from the Secure TROPOS Methodology[END_REF], self-organizing systems [START_REF] Puviani | A Method Fragments Approach to Methodologies for Engineering Self-Organizing Systems[END_REF], requirements traceability [START_REF] Domges | Adapting traceability environments to project-specific needs[END_REF], and for requirements elicitation [START_REF] Haumer | Requirements elicitation and validation with real world scenes[END_REF]. Articulation has a discursive character that subsumes complexity into a holistic and reflective social discourse. Unlike an engineering approach that uses reduction to eliminate complexity, it approaches adaptation as a conversation between people, their problems, and their tools.
Co-developing systems and methods
The process of co-development in the cases above can be expressed in terms of method fragments, technological rules, and articulation. We analyzed the major method fragments in the Scrum cases seeking an understanding of how these fragments were adopted or adapted in the cases. This analysis revealed two central dimensions that characterized the fragments.
Dimensions of agile method fragmentation
As discussed above, the analysis provided two central dimensions in the fragmentation and articulation of evolving agile methodology. One dimension distinguishes static fragmentation from dynamic fragmentation as a characteristic of the fragments. This was more of a criterion for articulation or re-articulation of the method than a factor in the criteria for fragment use [START_REF] Aydin | On the Adaptation of an Agile Information Systems Development Method[END_REF]. Static fragments are often used or reused without changing the fragment internally. Dynamic fragments are used or reused only after internal modifications or adaptations; they were often re-articulated in innovative ways, requiring internal changes before the next re-use in the method.
The second dimension distinguished actor fragments versus artifact fragments. In actor fragments, human autonomy was somehow featured in the fragment. Actor fragments tended to be loosely articulated; imbued with a permissive spirit giving people the latitude to re-arrange their behaviors during the development project. In contrast, artifact fragments suppressed human autonomy; imbued with a restrictive spirit that limited changes in individual behavior during the project. Together, these two dimensions define four classes of fragments: Process, Objects, Organization, and Articulation. See Figure 1. Each of these classes is described below.
Fig. 1. Dimensions of agile fragmentation and reassembly.
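Read as a classification scheme, the two dimensions jointly determine which of the four classes a fragment falls into. The sketch below is illustrative only: the dimension labels and class names come from the framework above, while the Python encoding itself (the enum and helper names) is simply a compact restatement, not part of the original coding scheme.

```python
# Minimal sketch: the two-dimensional fragmentation scheme as a lookup.
# Dimension values and class names come from the framework above; the
# identifiers below are illustrative shorthand only.
from enum import Enum

class Mutability(Enum):
    STATIC = "static"      # reused without internal change
    DYNAMIC = "dynamic"    # re-articulated internally between uses

class Focus(Enum):
    ARTIFACT = "artifact"  # suppresses human autonomy
    ACTOR = "actor"        # privileges human autonomy

FRAGMENT_CLASSES = {
    (Mutability.STATIC, Focus.ARTIFACT): "Objects",
    (Mutability.STATIC, Focus.ACTOR): "Organization",
    (Mutability.DYNAMIC, Focus.ARTIFACT): "Process",
    (Mutability.DYNAMIC, Focus.ACTOR): "Articulation",
}

def classify(mutability: Mutability, focus: Focus) -> str:
    """Return the fragment class for a given pair of dimension values."""
    return FRAGMENT_CLASSES[(mutability, focus)]

# Example: the Sprint fragment is adapted over time but constrains individual
# behaviour, so it falls in the Process class.
assert classify(Mutability.DYNAMIC, Focus.ARTIFACT) == "Process"
```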
Scrum Fragment Technological Rules
For the purpose of expressing the method fragments in Scrum, we adopt the notion of technological rules. Van Aken's design rules operate in the following manner. In a management vision of design science, there are two possible outputs: artefacts or interventions, and there may be three kinds of design in a professional episode: the object-design defines the artefact or intervention; the realization-design is the plan for implementing the artefact or intervention; and the process-design is the plan for the design process itself. In this sense designing is similar to developing prescriptive knowledge.
Van Aken suggests expressing a design in the form of technological rules: 'A technological rule follows the logic of "if you want to achieve Y in situation Z, then perform action X". The core of the rule is this X, a general solution concept for a type of field problem' [12, p. 23]. Van Aken emphasises that technological rules need grounding [12, p. 25]: 'Without grounding, the use of technological rules degenerates to mere "instrumentalism", that is, to a working with theoretically ungrounded rules of thumb'.
The rules need to be grounded in a way acceptable from a social science perspective.
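For readers who prefer a compact notation, the rule format (goal Y, situation Z, action X, plus grounding) can be thought of as a simple record structure. The sketch below is only an illustration of that format; the field names are an assumption of ours, and the example merely paraphrases rule PR-1 reported in the Process section.

```python
# Illustrative encoding of Van Aken's technological-rule format:
# "if you want to achieve Y in situation Z, then perform action X",
# grounded in case evidence. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class TechnologicalRule:
    rule_id: str     # e.g. "PR-1"
    goal: str        # Y: what you want to achieve
    situation: str   # Z: the context in which the rule applies
    action: str      # X: the general solution concept
    grounding: str   # empirical support from the six cases

# Example, paraphrasing rule PR-1 from the Process section below.
pr1 = TechnologicalRule(
    rule_id="PR-1",
    goal="organize work in small iterations to deliver something of value fast",
    situation="considering the use of agile methods, specifically Scrum",
    action="use Sprints",
    grounding="all six case companies did this (iterations of one to eight weeks)",
)
print(f"If you want to {pr1.goal}, in a situation where you are "
      f"{pr1.situation}, then {pr1.action}.")
```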
Based on our analysis of the six cases we were able to formulate the following technological rules for the specific parts of Scrum. In each rule, we note the way in which each rule is empirically grounded in the data collected and the analysis and coding of data from the six cases. Note that the version presented below is the refined version that resulted from an evaluation described later in this paper.
Objects
Object fragments are Static Artifacts. This means they are frequently used for (re)assembly in new variants of situated Scrum without changing (re-articulating) the fragment. These fragments marginalize human autonomy in the sense that these involve structures that do not provide much variance according to the individual actors in the setting. These are listed here along with examples of the technological rules that inhabit each fragment.
Organization
These fragments are Static Actors. This means they are frequently used for (re)assembly without necessarily rearticulating the fragment itself. Roles such as Scrum Master and Product Owner are common, and seldom change. Beyond these roles, however, the fragments do privilege human autonomy in the sense that the actors have much latitude in how they enact each role. These organization fragments are listed along with the technological rules that inhabit each fragment.
Organization: Self-organizing team of equals
OR-1: If you have a team of experienced professionals, with more or less the same level of competence, in a culture where hierarchy is not desired -In a situation where you are considering the use of agile methods; specifically Scrum --then consider organizing the team as a self-organizing team of equals without a project manager to assign tasks
Grounding: InterFin showed that this may be hard in a culture with high power-distance (in the Hofstede sense)

OR1.1: If you want to use Scrum in a team where the team members have different (or very uneven) competences -In a situation where you have decided to use self-organizing teams of equals --then you may need to assign specialist roles to different team members; you need to adapt the process elements (PR-1 to PR-5) to allow for non-equals; and you may consider having a traditional project manager
Grounding: InterFin had a test specialist in a Scrum team. SuperSystem also used specialist roles as part of their Scrum adaptation

OR1.2: If you have a team larger than 8-10 people -In a situation where you have decided to use self-organizing teams of equals --then you may organize a number of Scrum teams, each with a Scrum Master, and the Scrum Masters can meet (daily) in a Scrum-of-Scrums
Grounding: We saw this done in both DareYou and InterFin with good results

Organization: Product Owner role
OR-2: If you need decision-making ability in relation to all user- or functionality-oriented issues (e.g. to make firm decisions on what functionality is included and excluded) -In a situation where you are considering the use of agile methods; specifically Scrum --then you should have a highly decisive customer take on the role of Product Owner
Grounding: DareYou, for example, had a manager from the customer site in the Product Owner role

OR2.1: If you cannot assign the role of product owner to one person but have many stakeholders that want to be heard and to be part of the decision making -In a situation where you need the decision-making ability of the product owner --then organize a product owner forum and name a chief product owner who can make the final decision when disputing views arise among stakeholders
Grounding: Exactly the solution chosen in PubliContract, where they see it as very beneficial and a way to preserve the effectiveness of the product owner role

OR2.2: If you want to have a product owner -In a situation where you do not have access to customers (e.g. because you are doing product development) --then find a person with a good market understanding to fill the role of product owner
Grounding: SuperSystem, GlobeRiver, InterFin and ShipSoft were all doing this

Organization: Scrum Master role
OR-3: If you want to have a person specifically responsible for ensuring that the agile process is followed -In a situation where you are considering the use of agile methods; specifically Scrum, and you have decided to have a self-organizing team of equals (OR-1) --then have one person in each team take on the role of Scrum Master -Alternatively, just use the existing Project Manager role
Grounding: Found as described in SuperSystem

OR3.1: If you want to maintain both a Project Manager and a Scrum Master role, not enacted by the same person -In a situation where you have decided to have the Scrum Master role enacted --then you need to negotiate responsibilities for the two roles and the interface between them
Grounding: PubliContract did exactly this. In DareYou the customer was also the Project Manager. In InterFin the Project Manager was placed above two Scrum Masters that each had a team of their own
Process
These fragments are Dynamic Artifacts. This means they are more often modified, adapted, and rearticulated as the method evolves. Nevertheless, these fragments do not afford much latitude to the individual actor in changing their behavior within the process. They are listed here along with examples of the technological rules that inhabit each fragment.

Process: Organize work in short iterations
PR-1: If you want to organize work in small iterations to deliver something of value fast -In a situation where you are considering the use of agile methods; specifically Scrum --then use Sprints
Grounding: All six companies did this. The shortest iterations we saw were one week (ShipSoft); the longest was eight weeks

Process: Sprint Planning Meeting
PR-2: If you want to start the iteration with a planning meeting where work breakdown and estimation takes place -In a situation where you are considering the use of agile methods; specifically Scrum --then have a one-day Sprint Planning meeting on the first day of the iteration with the development team and the product owner present
Grounding: All six companies did this. In a few instances the product owner was not present, which caused delays and indecisiveness due to the lack of needed information on what the user actually wanted

PR2.1: If you need estimates for tasks fast -In a situation where you have decided to use agile methods and Sprint Planning meetings --then use Planning Poker to come up with estimates -Alternatively, you can use any other estimation technique
Grounding: Several companies used Planning Poker, i.e. DareYou and ShipSoft

Process: Daily Stand-up Meeting
PR-3: If you have meetings in teams that take up too much time and you want to have shorter and more effective meetings without long discussions of the agenda and/or specific problems -In a situation where you are considering the use of agile methods; specifically Scrum --then use a daily stand-up meeting lasting no longer than 15 minutes and with a standard agenda: (1) What have you been doing? (2) What are you doing now? (3) Problems encountered?
Grounding: Five out of six companies did this. In one ShipSoft project they were even standing in front of a PC screen when doing the daily meeting between Denmark and India. In one project in InterFin they were not standing up for their daily meeting because they were in an open office environment where it bothered other projects when they were standing. And in DareYou the daily meeting was conducted on the phone with the same standardised agenda, but sitting down

PR3.1: If you want to use short stand-up meetings -In a situation where you do not have full-time resources (people) --then organise the stand-up meeting weekly, bi-weekly, every 2nd day or the like
Grounding: In GlobeRiver they did not have meetings every day due to part-time resources. Instead they had a weekly meeting between the people working in India and the people from Europe (a project manager and a facilitator)

Process: Demo of value at the end
PR-4: If the functionality that is developed in a sprint can be put into production immediately and you want customers or end users to see what they are getting out of each sprint (e.g. because you know that is likely to increase their satisfaction with the development) -In a situation where you are considering the use of agile methods; specifically Scrum --then demonstrate that you have developed something of value at the end of a sprint
Grounding: PubliContract and DareYou did this

PR4.1: If you want to adapt to an existing release schedule -In a situation where you use agile methods in combination with a more traditional schedule --then demonstrate value but do not release
Grounding: InterFin did this to fit Scrum with a traditional mainframe-oriented release plan every 3-4 months

Process: Retrospective at the end
PR-5: If you want to capture learning and put lessons learned into use quickly -In a situation where you are considering the use of agile methods; specifically Scrum --then carry out a retrospective at the end of a sprint (iteration)
Grounding: All six companies had adopted this practice
Articulation
The articulation group is too poorly structured for expression using technological rules.
The articulation of fragments is itself a fragment because it is the on-the-fly, discursive process where developers assemble the fragments into a working methodology. While similar to the method engineering notions of design rationale or design model, it was not a "meta" process or a "meta" design. In agile development, the discourse about the adaptation and evolution of the methodology is part of the normal development conversation. Articulation fragments were distinctly dynamic actors. Fragments in this group often specified criteria for articulation or re-articulation of the method. It was dynamic because the articulation fragments changed internally on each use in the method. Articulation fragments are actor fragments that privilege human autonomy. They allow people to adjust their future behaviors as the development project evolved. A more complete framing of this method articulation is the theory nexus [START_REF] Pries-Heje | The design theory nexus[END_REF][START_REF] Carroll | Artifact as theory-nexus: hermeneutics meets theory-based design[END_REF]. A theory nexus encompasses the interaction between theories and designed artifacts. It helps frame the process that results when multiple theories overdetermine the design for an object that, in fact, represents a setting where the design outcome is at least partly the result of the use of the object. In our cases, the theories are embodied in the technological rules. Agile methodologies are designed and re-designed on-the-fly, in concert with the use of the methodology, creating a theory nexus as technological rules and fragments-in-use are combined, separated, and recombined. Within the theory nexus, a discourse is present. This discourse articulates and rearticulates the dynamic objects in the presence of the static objects. In the six cases, such discourse episodes were embodied by each Sprint. Only experience with the methodology can determine the exact effects of an evolving methodology in relation to underlying technological rules.
The nexus is a discourse, a complex conversation that extends across (1) a deductive view of the relationship between fragmentation and methodology, (2) a reciprocal relationship between the articulation and re-articulation of technological rules; and (3) the evolutionary iterations of methodological framing [START_REF] Carroll | Artifact as theory-nexus: hermeneutics meets theory-based design[END_REF]. A nexus binds method fragments with realities and shapes a momentary version of a methodology. This moment immediately initiates a new episode in the discourse (a Sprint in the cases). The fragments, the participants, the methodology, the setting, and the problem are engaged in this discourse. Each re-articulation of the methodology results in a new momentary version that necessarily precipitates a new episode (the next Sprint). These re-articulation episodes within the nexus persist throughout the life of an agile development project. There is not a methodology, but a succession of different methodologies that momentarily provide structure through regularities that are present only for unique episodes.
Evaluating the co-development framework
To evaluate the framework, we engaged with 25 software practitioners from mainly engineering-oriented companies, each of whom represented a different software development company and each with extensive experience from companies that had adapted Scrum or were considering doing so. After having been introduced to the framework and the technological rules they were asked to fill out a questionnaire with the purpose of improving the framework. Thirteen of the 25 participants decided voluntarily to participate. Six of the 13 could immediately use the framework to evaluate the Method-System Co-Development activities in their own organizations, whereas the seven others reflected on past situations in which they had adapted Scrum or imagined such a future situation. The results indicate that the framework was easy to learn and very helpful in their own practice. It was clear from the responses that there was substantial interest in further development of the framework for future use. Further, the evaluation pointed to four things that have been changed in the version of the framework presented in this paper. First, many of the technological rules were formulated too briefly (e.g., using a phrase such as "if you want to …"). For version 2 (presented above) we considerably elaborated the technological rules with a focus on the benefits to be achieved from adapting the object, organization or process.
Second, it was stated in the evaluation that the framework was "…not detailed enough for implementation. An example was the relationship between Scrum Master and Project Manager". For version 2 (above), we have added more details and we have made it clear that there are relationships between some of the fragments by adding a numbering system for easy reference and by adding references from fragments to other fragments. See for example "OR1.1" where the technological rule now includes a reference to "PR-1 to PR-5".
Third, it was pointed out that the technological rules on the product owner role were too rudimentary. It didn't include the "hard things" such as "product owner availability" and "what to do if the product owner was only interested in final results and not in partial results after each sprint?". To cope with this comment we added several statements to the technological rule on product owner (see OR-2 above).
Fourth, one evaluator pointed out that he did not believe in the "supermarket approach" that we were using and that he believed there was a minimum level of Scrum elements necessary to regard the approach as Scrum. Nevertheless, the six cases we originally analyzed clearly show that companies in practice do use a supermarket approach, taking some Scrum fragments into use and eschewing others. But our study of the six cases also showed that at least six-seven fragments of the 12 identified were taken into use in all six cases. This effect suggests that there is indeed a minimum of at least 50% of the fragments that are intuitively taken into use.
Finally, the evaluation confirmed that none of the six companies had adapted Scrum as a one-shot event. They were all continuously adapting Scrum in a discursive process, as we have presented in this paper.
Conclusion
Among the limitations in the work above, the approach used cases representing one instance of agile methodology (Scrum). While observationally consistent, it limits the confidence that the study findings will generalize across all agile methodologies. We also studied instances of Scrum projects, which limited observations of any longitudinal evolution from project-to-project. The adaptation of agile methods is a special case of an adoption process where users purposefully adopt parts of the methodology and discard other parts. In this sense our study has implications for future studies of adoption in which a technology (such as a tool, method, or process) grows more adoptable by promoting its re-articulation through discursive usage.
The main contribution of this paper is the framework explaining how Agile methods, and in particular Scrum, are constantly articulated and re-articulated when diffused in practice. This framework includes a two-dimensional grouping that yields three classes of fragments: Objects, Organization, and Process. The fourth class involves a discursive articulation that occurs on the same logical plane as the fragments. Unlike method engineering, the discourse is an inseparable part of the methodology itself, not a separate "meta" method. Agile method adaptation is a functional part of routine development practice.
There is practical value in the nexus and the technological rules. They have a demonstrated prescriptive design value useful for many development managers employing agile methods.
"983581",
"983582"
] | [
"459409",
"459422"
] |
01467785 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467785/file/978-3-642-38862-0_18_Chapter.pdf | Rosa Michaelson
email: [email protected]
Is Agile the answer? the case of UK Universal Credit
Keywords: failure analysis, sociotechnical framework, Agile methods, large-scale systems
In 2010 the UK government responded to a catalogue of failing large-scale IT projects by cancelling most of them. In 2011 they announced the Universal Credit (UC) project, described as "the biggest single change to the system of benefits and tax credits since 1945, affecting some 6 million households and 19 million people". UC will integrate a number of legacy databases with the Real Time Information (RTI) system, administered by Her Majesty's Revenue and Customs (HMRC) and due to complete by October 2013. The coupling of these two large-scale IT projects will affect millions of UK citizens; it is crucial that both complete successfully and on time. Government has responded to criticisms by stating that the use of Agile methods will solve the failures of the past. This paper critically assesses the adoption of Agile methods for software development, project management and procurement in the case of Universal Credit.
Introduction
Whilst investigating wide-reaching UK Taxation problems which came to light in 2010 and 2011, research into the limitations of the UK Pay-As-You-Earn system was overtaken by events. Firstly, despite the problems of under- and over-payment of tax associated with the delays in this large-scale public sector project, an announcement was made that instead of continuing with year-end reconciliations of employee data, HMRC (Her Majesty's Revenue and Customs) was adopting a new system which would receive real-time information from employers throughout the year, on a monthly basis, so that tax collection was more frequent and employee tax status was kept up-to-date. What is more, this new system, known as RTI, was to be delivered in 18 months, a relatively short time-scale compared with the time for procurement and implementation associated with previous large-scale public sector IT projects. Within 6 months of this announcement, the basis and rationale for RTI was further expanded; the RTI system was now the enabling technology for a complete overhaul of the UK welfare system. The delivery of the simplified payment system of benefits called Universal Credit (UC) requires the integration of 6 separate legacy systems administered by the Department for Work and Pensions, and features data sharing with tax systems (administered by the separate government division of HMRC) via RTI. The deadline for this ambitious change was originally planned within the timescale for RTI completion, with the stated aim of both RTI and UC being live in October 2013.
The Universal Credit project is a messy one, as are many of the large-scale public sector projects of the last 20 years -messy in the sense that Law applies to social science case studies [START_REF] Law | System upgrade? The first year of the Government's ICT strategy[END_REF]. Law uses a number of sociotechnical case studies which deploy ethnographic methods to illustrate a range of ways to deal with "what happens when social science tries to describe things that are complex, diffuse and messy" [START_REF] Law | System upgrade? The first year of the Government's ICT strategy[END_REF]. Here, I use a socio-technical framework in order to make sense of the many factors and viewpoints that influence a real world problem, that of the histories of inter-linked large-scale public sector IT projects. In this paper, I present what Geertz called "a thick description" of policy and practise with respect to the Universal Credit project, where the process of amassing data and writing the case study forms part of the analysis [START_REF] Geertz | The interpretation of cultures[END_REF]. This is similar to Beynon-Davies' 'web-based analysis' of failure which focuses on the social and political contexts of software adoption as a way of improving our understanding of such examples [START_REF] Beynon-Davies | Human error and information systems failure: the case of the London ambulance service computer-aided despatch system project[END_REF].
In 2010, the UK Coalition government responded to the legacy of disastrous public IT projects by cancelling most of them [START_REF] Doh | The future of the National Programme for IT[END_REF] (NAO, 2011a) [START_REF] Tarr | Implementing Universal Credit: will the reforms improve the service for users?[END_REF]. Given this response, what makes government sure that the ambitious Universal Credit project will succeed? Their answer is a fundamental change in ICT development by the public sector, abandoning the overly bureaucratic processes of the past by adopting Agile methods for software development, project management and procurement (CO, 2011;2012). Agile methods, and the challenges of adopting them, have been widely discussed in the software engineering literature [START_REF] Nerur | Challenges of migrating to agile methodologies[END_REF][START_REF] Denning | Evolutionary system development[END_REF]. In fact, the roots of Agile can be found in earlier software engineering methods of the 1960s, which deploy iteration and incremental processes and eschew overly bureaucratic documentation and project management techniques (Larman, 2003). Similar development techniques of more recent times include Prototyping, Rapid Application Development and Extreme Programming, amongst others (Larman, 2003).
In a footnote in the Cabinet Office report entitled One Year On: Implementing the Government ICT Strategy, we find this definition for agile:
Agile is an iterative development process where deliverables are submitted in stages allowing projects to respond to changing business requirements and releasing benefits earlier. (CO, 2012)

However, the way Agile is discussed in the government documents detailed below shows a difference between the Agile Manifesto objectives and how Agile is understood by managers and civil servants. For example, Agile is frequently presented as the opposite of the 'waterfall' method, but the definition above is not the opposite of a classic waterfall approach; rather, it echoes part of the waterfall methodology, that is, the iterative nature of development.
Changes in UK Government ICT Development
Failure in software development and adoption has been analysed in several different ways over the last 30 to 40 years. There are those who differentiate between development failure and system failure, and classify project abandonment in terms of total, substantial and partial, as noted in [START_REF] Beynon-Davies | Human error and information systems failure: the case of the London ambulance service computer-aided despatch system project[END_REF]. In 2002 Wilson and Howcroft revisited and reconceptualised ideas of failure classification by noting that even after successful adoption software could be regarded as having failed at a later date, given the contextual changes and inherent uncertainties of system use over time [START_REF] Wilson | Reconceptualising Failure: Social Shaping Meets IS Research[END_REF]. Fincham goes so far as to suggest that we cannot use the labels of failure or success as those concerned modify their views as the project unfolds [START_REF] Fincham | Narratives of Success and Failure in Systems[END_REF]. Goldfinch takes the pessimistic view that large-scale projects of this nature are so inherently driven by politics that they should not be attempted in the first place [START_REF] Goldfinch | Pessimism, Computer Failure, and Information Systems Development[END_REF]. However, though it is difficult to distinguish between software projects that have a strong possibility of success and those that may succeed but deliver incomplete solutions, it is self-evident that some software does actually fail to work, and much is not fit for purpose when implemented.
The UK has a history of large-scale public-sector IT failures which include (i) the e-Borders project; (ii) the UK ID central database project; (iii) the NHS National Programme for IT in England and Wales; (iv) 5 shared services projects for central government departments; (v) the integrated Fire Services Management project; and (vi) the Rural Payments project [START_REF] Anderson | Database State: A Report Commissioned by the Joseph Rowntree Reform Trust Ltd[END_REF] (NAO, 2011a). Several factors are regarded as contributing to each of these failures, namely size, spiralling costs and over-runs, the software acquisition process, political contexts, and so on (PAC, 2011). Major reviews of these failures have resulted in a change in government views about development and procurement of ICT. These are documented in several reports, namely (i) UK Government changes in ICT strategy (CO, 2011;2012); and (ii) responses from the National Audit Office to this change of strategy (NAO, 2011b;[START_REF] Nao | Implementing the Government ICT Strategy: six-month review of progress[END_REF]).
In particular, the idea that Agile methods might be preferable to previous overly-bureaucratic processes emerged in an important Public Administration Committee report in 2011 (PAC, 2011). Here, having recommended that departments look at alternative development methods, the report summary notes the tension between the need for in-house expertise in Agile methods, and the major outsourcing of such expertise because there is little or no in-house civil service knowledge of such matters. Others also suggested that Agile methods might solve the problems of failure (Magee, 2012); this idea was then taken up and expanded in the new government ICT strategy that emerged after the 2010 election: "Applying lean and agile methodologies that will reduce waste, be more responsive to changing requirements and reduce the risk of project failure and moving away from large projects that are slow to implement" (NAO, 2011c). This strategy included the aim that Agile would be used in 50% of all government IT projects by April 2013 (NAO, 2011b). In these and subsequent statements, Agile is presented as not only the answer to project failure and cost over-runs, but also a means to challenge the oligopoly of large software companies in large-scale IT procurement, allowing more SME (Small to Medium-sized Enterprise) involvement in public sector projects: "Government is consulting on new frameworks that will enable more agile procurement, and open the market to more SMEs" (NAO, 2011c).
3 The Real Time Information Project
PAYE
Pay-as-you-earn (PAYE) was introduced in 1944 as a means of maximising revenue during the inter-war and post-war period (HMRC, 2011). PAYE is a method of collecting tax at source via employee payments, so that taxpayers pay their tax as they work. The employee pays in advance each month (or payment period) towards an estimated amount of tax which is then checked against actual employment and allowances at each year end. An employee is given a tax code by HMRC which consists of items which show the allowances for an individual. The employer is responsible for sending on the total tax owed by the employees of the organisation to HMRC in an annual payment. However, given the need for annual reconciliations of employee status, often the employer is in arrears for the overall tax burden to the state.
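The cumulative logic of PAYE, and the way estimation errors only surface at reconciliation, can be illustrated with a deliberately simplified calculation. The sketch below is purely illustrative: the flat rate and figures are invented and do not reflect actual HMRC tax codes, rates or rules.

```python
# Deliberately simplified illustration of PAYE-style withholding followed by a
# year-end reconciliation. The flat rate and figures are invented and do NOT
# reflect actual HMRC rules; the point is only the mechanism.

def monthly_withholding(estimated_annual_income: float, flat_rate: float) -> float:
    """Tax deducted at source each month, based on an estimate of annual income."""
    return (estimated_annual_income * flat_rate) / 12

def year_end_reconciliation(actual_annual_income: float,
                            estimated_annual_income: float,
                            flat_rate: float) -> float:
    """Positive result: underpayment owed to HMRC; negative: refund due."""
    tax_due = actual_annual_income * flat_rate
    tax_paid = 12 * monthly_withholding(estimated_annual_income, flat_rate)
    return tax_due - tax_paid

# An employee changes job mid-year and earns more than the coding assumed,
# so an underpayment only becomes visible at the year-end check.
balance = year_end_reconciliation(actual_annual_income=30_000.0,
                                  estimated_annual_income=24_000.0,
                                  flat_rate=0.20)
print(f"Balance at year end: {balance:+.2f}")  # +1200.00 owed to HMRC
```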
In September 2010, HMRC announced that due to errors in PAYE coding a total of 6M people in the UK had paid the wrong amount of tax in previous periods. It was estimated that 4.3M had paid too much and were due a refund, and 1.4M had underpaid (on average each had underpaid by £1,428.) A Treasury report into the administration and effectiveness of HMRC noted that "these over-and underpayments had arisen because the amounts of tax collected from individual taxpayer under the PAYE system in 2008-09 and 2009-10 had not been checked" (Treasury, 2011). Soon after, in October, more errors in payments of tax (of up to £24,000 in some cases) were announced by HMRC. Further details emerged at this point; in particular, it was said that the 2009 introduction of a computer system had produced the large number of anomalies in tax payment.
Prior to the roll-out of 2009, the previous PAYE system was based on manual as well as computer-based processes. There were 12 different regional databases across the UK, and taxpayers' details were often cross-checked manually. HMRC stated that the old system led to mistakes which were then flushed out by the new system. With the previous system, HMRC manually reconciled between 16-17M cases at the year-end; once the 2009 system was fully embedded it was hoped that this would fall to around 3M cases, but this proved to be over optimistic [START_REF] Cit | PAYE Reconciliation -questions and answers[END_REF].
The problems appeared to date to 2008, but were, in fact, an accumulated set of PAYE coding errors which had a far longer history. In 2010, it was estimated that 15 million taxpayers had not had their tax affairs settled since 2004.
Organisational Context
The organisational context for RTI involves changes to the structure and role of HMRC over the last decade (Treasury, 2011). In 2005/6 the UK Tax and Inland Revenue departments were merged. In 2007, the department had to reduce its annual operational costs by five per cent each year over the period 2007/08 to 2010/11 [START_REF] Kablenet | HMRC extends Aspire deal to 2017: Cost cutting drive[END_REF]. Between 2006 and 2011, it is estimated that 25,000 jobs were lost, many as a result of the merger in 2005/6, leaving HMRC with 74,000 posts in 2011. By 2009, it was claimed that staff morale at HMRC was at an all-time low. In 2010, the closure of 130 local offices was announced, along with planned reductions in staffing levels involving the loss of what managers at HMRC claimed would be 3,000 jobs over 5 years. However, it is estimated that the Comprehensive Spending Review of 2010 in reality threatens a further 10,000 jobs over the next 4 years (ICAEW, 2010).
A recent external report on employment issues at HMRC stated: "At the heart of the engagement challenge in HMRC is a disconnect between employees and the overall organisation. Many employees feel that the organisation as a whole neither values, listens to, nor respects them" [START_REF] Clarke | People Engagement in HMRC: A report to ExCom and the HMRC trade unions[END_REF]. Low levels of morale in HMRC have been reported over the last 10 years [START_REF] Brookes | Morale among HMRC workers falls to new low[END_REF]. This is due to constant re-structuring, changes to conditions of services, job reductions and lack of resources [START_REF] Clarke | People Engagement in HMRC: A report to ExCom and the HMRC trade unions[END_REF]. Several internal surveys have shown that employees do not feel trusted by management and that there is a "strong blame culture". This low morale is reflected in the number of strikes that have occurred in recent years. Carter et al. argue that the impact of Lean management processes within HMRC has had "a detrimental effect on employees, their working lives, and the service that is provided to the public" [START_REF] Carter | Lean and mean in the civil service: the case of processing in HMRC[END_REF].
2010 saw a major government review of HMRC in light of complaints from clients, and with regard to the loss of crucial tax income due to mistakes and errors in data handling, as noted above in 3.1. Perhaps it is no coincidence that over this period there have been issues with the suppliers of computing systems, and problems with the resulting processes and tax collections.
Technologies
The main commercial supplier for the Inland Revenue's Tax and National Insurance system from 1994 to 2004 was EDS, then the second largest software company in the world (BBC, 2003). In 2003, the launch of a new tax credit system led to over-payments of £2B to over two million people and the contract with EDS was scrapped. After eight years, EDS paid £71.25M in compensation for this debacle [START_REF] Oates | EDS pays for tax failure: Final divorce settlement[END_REF].
In 2004, the computing systems contract was awarded to Capgemini with Fujitsu and BT as minor partners [START_REF] Oates | EDS pays for tax failure: Final divorce settlement[END_REF]. This contract, which was originally to run until 2014, was one of the biggest ever IT outsourcing contracts, with a value of £2.6B, and a lifetime value of £8.5B (Computer Weekly, 2009). Aspire (Acquiring Strategic Partners for the Inland Revenue) was set up to replace the contracts Inland Revenue had with EDS and Accenture for IT services respectively [START_REF] Kablenet | HMRC extends Aspire deal to 2017: Cost cutting drive[END_REF].
In 2009, HMRC revised the contract for the Aspire system with Capgemini and extended it until 2017 (Computer Weekly, 2009). This locking in of IT services for such a long period has been heavily criticised, and illustrates the issues associated with assessing the claims made for cost-savings as a result of Aspire:
The Aspire contract between HMRC and Capgemini covers a 13 year period and was originally valued at £2.8 billion. This contract is a case study of what is wrong with the present procurement culture. Such a large contract is too complex to manage. The assessment of costs and benefits is opaque and it commits too much power and money to a single supplier. (PAC, 2011)

Unfortunately the new computer systems implementation overran, and software problems, which were estimated to cost £395M, delayed the processing of the 2008-2009 PAYE details for at least a year.
RTI
In 2010, further changes were proposed to the HMRC PAYE computing system in order to avoid the annual reconciliation process, which was producing large numbers of anomalies in tax-codes, and to obtain real-time data from organisations with regard to employee status. The argument was that in comparison to the early period of PAYE when an individual had the same job for many years, nowadays there are rapidly changing patterns of employment, which lead to increases in the under or overpayment of tax each year. To allow for these rapid changes in employment, an individual's details will be held in a large database known as the Real-Time Information (RTI) system (a data warehouse). The new system requires high levels of data quality (presumably because there is no plan for recovery from the errors that are present in the un-reconciled tax payments based on estimates), and all organisations have been briefed on the need for the provision of clean data. However, the current status of the quality of data held by HMRC is in doubt. As noted by the 2011 Treasury report: "Data quality has been a key weakness in the PAYE system to date. The success of both the National Insurance and PAYE Service and Real-time Information will depend to a large extent on how effectively HMRC can 'cleanse' the data it receives and holds" (Treasury, 2011).
In 2010, a one-off payment of £100M was given to HMRC to help fund RTI which was to be rolled out over a period of 18 months, starting with a pilot in spring 2011. The timetable slipped somewhat, and the pilot started in April 2012 using a selected number of partners, and then proceeding to a full engagement from all employers by October 2013, despite opposition from several interested parties (such as major employers, and professional tax bodies). By July 2012 the pilot involved 500 employers, with approximately 1.7m employees [START_REF] Fuller | Gauke rejects postponement of RTI[END_REF]. The pilot was widened further in November 2012.
RTI requires all employers to change their reporting systems to be compliant with HMRC software. Few of the software suppliers who support businesses in PAYE returns had developed appropriate packages in the early part of the project. The original specification also required employers to make payments to HMRC using BACS (Bankers' Automated Clearing Services) instead of EDI (Electronic Data Interchange), competing standards for electronic cash transfers. Since the majority of SMEs were using EDI there were objections from interested parties, and other problems arose with moving from EDI to BACS as a means of communicating employee records. By September 2011, HMRC had agreed to continue to use several channels for payment, including both EDI and BACS [START_REF] Say | RTI: Inside the massive IT project at the heart of universal credit, Guardian professional[END_REF].
This is but one of several changes made to specifications for RTI during the development phase. Other objectives that were changed due to stakeholder responses include the following examples. Firstly, 'end of work period' P45 forms were to be abolished. A P45 is the document which allows for continuity in the tax process as an individual changes work. However, this decision was reversed after extensive complaints from employers and accountants [START_REF] Woods | HMRC performs u-turn on plans to axe P45[END_REF]. Secondly, the pilot was originally planned to take under 12 months. The idea that employers should then use a system that had not been fully tested for a complete tax year in pilot form caused some concern:
We welcome the move to introduce Real-time Information (RTI). We agree with the professional bodies that the system must be tested thoroughly before full implementation, with full consultation with users and close co-operation with the Department for Work and Pensions at all stages. We note that large employers will be required to use the new system in January 2013, which is before the system has been tested through one complete tax year. (Treasury, 2011)

As a result of growing external pressure, in October 2012 it was announced that the pilot would continue until April 2013, at which point all employers would use RTI, though larger firms were given between April and October to join the new system.
This responsiveness might be seen as a positive way of developing a new system, if it were not for the impression that the scoping of requirements had been rushed and did not take account of stakeholder views. If anything, these reverses added to the mistrust of HMRC technologies.
The HMRC web-site is predicting that the complete RTI system will be in place for October 2013, or rather that this final date for completion cannot be changed due to the pressure from Ministers who are driving welfare reform. Given the history of over-run on previous HMRC projects, as noted above, and UK public sector IT projects in general, this is a worrying stance. The Treasury has warned of the dangers of such inflexibility in the roll-out of RTI:
HMRC has committed to an ambitious timescale to deliver Real-time Information, driven in part by the importance of the project in delivering the Universal Credit. The history of large IT projects subject to policy-driven timescales has been littered with failure. The timetable is made more ambitious by the fact HMRC will still be resolving the legacy of open cases and stabilising the National Insurance and PAYE Service during the project's early stages.
Introducing Real-time Information before HMRC and the Government can be sure it will work correctly would run unacceptable risks for the reputation of the Department and the tax system. (Treasury, 2011)

As of March 2013, though the pilot has been described as a success, the number of small businesses ready for the April change is as low as 32% (PAC, 2012). Not only does HMRC have no contingency plans in place if RTI fails, but all remaining 281 tax advice centres will close during 2013-2014, the period of the roll-out of Universal Credit [START_REF] Wade | A Taxing Time as PAYE Regulations Change[END_REF][START_REF] Bbc | HMRC to close all of its 281 Enquiry Centres[END_REF].
Universal Credit
Universal Credit is a new benefit for people of working age which will be introduced over a four year period from 2013 to 2017. It will replace existing means-tested benefits and tax credits (including income-based Jobseeker's Allowance and Employment and Support Allowance; Income Support; Child Tax Credits; Working Tax Credits; and Housing Benefit). It is the Government's key reform to simplify the benefits systems and to promote work and personal responsibility. (CSC, 2012)

Mark Hoban, who became a minister at the Department for Work and Pensions (DWP) in September 2012, produced figures for the estimated cost of the UC project of £638M, which included IT development, associated integration with other systems and infrastructure requirements. The cost of design, development and software was estimated at £492M, with the remainder for changes to dependent systems and infrastructure (Work & Pensions, 2012). Accenture was awarded the £500M 7-year contract to manage the IT systems that support UC; IBM was awarded a £525M contract, which runs until 2018, to provide computing systems across 60 services and will also be involved in the UC integration project.
They are providing a customer information system, resource management, and fraud referral and intervention management, some of which will be used in the delivery of Universal Credit. The DWP also signed a £100 million deal with HP for delivery of software covering the core benefits system and department application support [START_REF] King | DWP signs fifth large deal with HP[END_REF]. In addition DWP signed a contract with Capgemini for 7 years, of between £5M and £10M per year, for the provision and maintenance of business applications [START_REF] Hall | IBM signs £525m DWP contract to provide Universal Credit systems[END_REF].
The effects of the roll-out of the Universal Credit project can be gauged by reading the transcripts of Select Committee reviews and the reports of various government bodies such as the National Audit Office and the Cabinet Office. In addition there are the reports and briefings which each public department makes about the management of IT and special projects, including the metrics and key performance indicators. All of these are in the public domain. There are over 70 organisations directly affected by changes to benefits which have responded to consultation exercises. The Local Government Association (LGA) is one of these and it has made several representations to the DWP and other committees asking that the Agile methodology for UC be revised as it is 'not grounded in reality' [START_REF] Hitchcock | LGA: revise 'agile' approach to universal credit[END_REF]. In addition, UK Local Authorities have an interest in the change to Universal Credit, since there are various benefits which affect the holder's status with regard to local taxes, such as council tax, and the subsequent need to reconcile monies between central and local government. Local authorities also provide the face-to-face contact for those who receive many benefits. The locally managed Council Tax is not part of the overhaul of Universal Credit, but is part of the total benefit assessment. There are differences as to how benefit payments are made across the UK which directly affect the integration process [START_REF] Tarr | Implementing Universal Credit: will the reforms improve the service for users?[END_REF]. Benefit payments have weekly cycles; RTI operates on a monthly cycle. As a result UC has also been planned as a monthly cycle of payment, thereby creating problems for the disadvantaged who are used to budgeting on a weekly basis. Benefits are to be calculated per household, not for an individual, which may lessen income for women.
Other sources of information concerning policy and practise are the publications of interested bodies including (i) public sector unions, such as Unison; (ii) Charities, such as the Joseph Rowntree Foundation; and (iii) representatives of professions, including accountants and tax professionals, such as the Institute of Chartered Accountants and the Chartered Institute of Taxation, amongst others. For example, the Chartered Institute of Taxation has a group called the Low Income Tax Reform Group (LITR) which is very worried by the possible effects of Universal Credit. They have warned of problems with the accumulation and transition of tax credit debt which may result as a result of linking benefits payments with taxation in this way, stating "HMRC and DWP need a clear and well thought-out strategy to ensure that the start of UC is not blighted by inheriting the £6.5 billion debt that may have accumulated in tax credits by 2014/15" (LITR, 2012).
Under the new system all claimants will be 'digital by default'. The plan is to have all communications from an individual about benefits on-line, and for payments to be made into an online account. 'Digital by Default' has been criticised for affecting the most vulnerable, who are those who are most likely to receive many of the benefits [START_REF] Tarr | Implementing Universal Credit: will the reforms improve the service for users?[END_REF]. Many of those claiming benefits do not have bank accounts, and budget with cash set aside for specific items such as rent, debts and food. Community charity Citizens Advice warns that the Universal Credit system "risks causing difficulties to the 8.5 million people who have never used the internet and a further 14.5 million who have virtually no ICT skills" (WPC, 2012).
Unfortunately the DWP has not published the fuller details of the technology that will integrate the existing systems (See Figure 1 for a data-flow diagram of real-time payments for Universal Credit proposed in 2010 by DWP). The October 2012 Joseph Rowntree Foundation report discussed the problem of lack of details about the IT as follows:
However, there is still very limited publically available information on how the IT will operate and what will happen if things go wrong; DWP should address this and provide reassurance on how the system will operate, what training staff will undergo to understand it, and what processes will be in place in case IT systems fail. [START_REF] Tarr | Implementing Universal Credit: will the reforms improve the service for users?[END_REF]

Other information that is not in the public domain includes how Agile methods are actually being deployed in UC development [START_REF] Slater | Universal Credit Programme: Freedom of Information request[END_REF]. Is Agile regarded as a software development method, a project management process, at odds with the 'waterfall' method, or scalable? This is the main focus of the next section in which the efficacy of the adoption of Agile methods is discussed.
Is Agile the Answer?
In September 2012, in response to criticisms of timescale, the DWP claimed that "The IT is already mostly built. It is not a single IT system, but is being built part-by-part on an agile basis as well as bringing in existing systems. It is built and tested, on-time and on-budget" [START_REF] Hall | IBM signs £525m DWP contract to provide Universal Credit systems[END_REF]. Iain Duncan Smith, the government's Work and Pensions Secretary, told the Commons Select Committee when it took evidence on Universal Credit in September 2012:
The thing about the agile process which I find frustrating at times, because we cannot quite get across to people, is that agile is about change. It is about allowing you to get to a certain point in the process, check it out, make sure it works, come up with something you can rectify, and make it more efficient. So you are constantly rolling forward, proving, and making more efficient. (CSC, 2012)
The Cabinet Office Major Projects Authority (MPA) issued the Starting Gate Review of Universal Credit in September 2011. This report notes some concerns about the take-up of agile methods as follows: "Overall, the use of an agile methodology remains unproven at this scale and within UK Government however, the challenging timescale does present DWP with few choices for delivery of such a radical programme." Thus, Agile was chosen not because it was a tried and tested methodology, but because of the short timescale [START_REF] Collins | Universal Credit review now published[END_REF]. This report also shows that there are doubts about the scalability of the Agile process, stating further that the programme is using conventional, multi-million pound contracts with large suppliers to deliver the system, with RTI being developed simultaneously using a conventional waterfall methodology [START_REF] Ballard | Universal Credit deadline forced DWP to use "unproven" agile development[END_REF][START_REF] Collins | Universal Credit review now published[END_REF].
There is a lack of understanding in the government and civil service that adopting agile processes will require changes to organisational structures [START_REF] Nerur | Challenges of migrating to agile methodologies[END_REF]. Commenting that government IT is not just a cost to be managed, Sir Ian Magee, who co-edited the 'System Upgrade?' report, questioned the ability of senior civil servants as follows:
Agile requires real changes in departmental procurement, policy development and operational management processes, and it is not clear that government IT leaders feel sufficiently confident or supported to challenge departmental board leaders and ministers to do things differently. Meanwhile, in my experience, many top level civil servants express discomfort about challenging IT leaders to deliver better, more responsive services, in part due to a shortage of knowledge but also due to a distinct shortage of information on chief information officer (CIO) performance. (Magee, 2012)
In the case of Universal Credit, Agile has become an answer to critics of the scale and speed of the process, a means by which SMEs can become part of the software solution, and is also regarded as a project management and procurement process, rather than a software development method.
Many questions are as yet unanswered with respect to this example of interlinked large-scale public sector IT projects. Will the use of Agile design in government departments mean that the 'IT rip-offs' of the past are no longer going to happen? How can Agile be used successfully if civil servants and managers appear to have little or no technical understanding in the first place? To what extent can such grandiose schemes be de-coupled from the political contexts in which they are conceived and driven? There is growing evidence that the use of Agile methods is not compatible with large-scale projects or organisations that are bureaucratic [START_REF] Nerur | Challenges of migrating to agile methodologies[END_REF]. In 2011, the US Department of Defense had to impose an emergency reform of IT projects using Agile methods, after 11 major computer systems went $6bn over budget and were estimated to be 31 years behind schedule [START_REF] Ballard | Soldiers nail data for agile offensive on $6bn cock-up[END_REF]. As noted in section 2.1 above, Agile requires a daily commitment from clients to meet with developers. In large-scale projects it can be hard to identify appropriate clients for sub-projects, and for those clients to be available for daily meetings with small development teams. It is also difficult to imagine how the needs of public accountability can be met without some level of administrative control and documentation.
The DWP states in the 21st Century Welfare paper: "In planning the transition to the new system, we would be guided by our principles of simplicity, fairness and affordability" (DWP, 2010). What is evident in the analysis presented above is that the addition of major IT change adds complexity to the process of simplification. Whether this grand project is affordable will only be determined at some point in the future. However, it seems likely that there will be losers in this roll-out: those most dependent on benefits and welfare. This is hardly fair.
It is unlikely that we can identify one factor in particular that is the main cause of systems failure, but given the complex nature of large-scale public sector projects it is also unlikely that relying on one factor alone, such as a change in software methodology, will guarantee success. The government's rationale for embarking on new large-scale, complex IT projects after the previous disasters discussed above was the use of Agile methods. In the cases of RTI and UC outlined above, I would argue that Agile has become a rhetorical device rather than the answer to large-scale public sector IT failure. Ian Watmore, who was Permanent Secretary of the Cabinet Office in 2011, at the time of the major review, identified three main reasons for large-scale IT failure: "policy problems, business change problems or big-bang implementation" (PAC, 2011). The Universal Credit project has all of these features. It is unlikely that the adoption of a new method of software design and procurement will affect such major structural processes.
Glossary
Fig. 1. Proposed Real-Time Payment System (Source: DWP, 2010, p. 35)
This paper assesses to what extent Agile is the solution to the historical problems of large-scale public sector IT failure by examining the case of Universal Credit. The structure is as follows: firstly, Agile methods are briefly discussed, and recent changes in government attitudes to software project design, development and acquisition are noted; secondly, the story of the HMRC RTI project is presented, with a description of PAYE, the organisational context, the technologies and history; thirdly, I describe the case of Universal Credit with responses from several different interested parties. Finally, I discuss to what extent Agile methods can or will ensure the successful outcome the government expects.
2 Agile Methods and UK Government ICT Development
2.1 Agile Methods
Agile methods were originally defined by programmers and software developers in the Agile Manifesto of 2001. The focus of Agile development is on (i) individuals and interactions rather than processes and tools; (ii) the production of working software rather than extensive documentation; (iii) collaboration with the customer not contract negotiation; and (iv) responding to change rather than following a plan (Agile Manifesto, 2001). Further, the method requires developers and clients to work together on a daily basis and insists that face-to-face interactions are vital. Thus Agile development applies to projects which are modular, iterative, responsive to change, and in which the users' needs are central to the delivery. Such projects are characterised by not having a fixed or detailed knowledge of the final solution at initiation, but must have clear business objectives. Agile has been of recent interest to many in the Computer Science and Information Systems fields, with caveats concerning what type of organisation and scale of IT project are suitable for such methods.
Previously, in December 2009, it was suggested that 7M people had mispaid in 2008/09 as a result of coding anomalies. The problem of tackling these mis-payments was further compounded by additional disruption in January 2010, when HMRC issued nearly 26M new tax codes to taxpayers, almost twice as many as expected on an annual basis. It appeared that the annual reconciliation of PAYE codes based on the returns of data from employers compared with those held by HMRC had not happened for several years. In 2011 the Treasury noted that HMRC had committed to clearing the backlog of open cases, which stood at 17.9M in late 2010, by 2012 (Treasury, 2011). | 39,498 | [
"1001526"
] | [
"300845"
] |
01467786 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467786/file/978-3-642-38862-0_19_Chapter.pdf | John Stouby Persson
email: [email protected]
The Cross-Cultural Knowledge Sharing Challenge: An Investigation of the Collocation Strategy in Software Development Offshoring
Keywords: cross-cultural software projects, offshore outsourcing, knowledge sharing, communities-of-practice, longitudinal case study
Cross-cultural offshoring in software development challenges effective knowledge sharing. While research has suggested temporarily collocating participants to address this challenge, few studies are available on what knowledge sharing practices emerge over time when collocating cross-cultural software developers. This paper presents a longitudinal case study of an offshoring project with collocation of Indian and Danish software developers for 10½ months. A community-of-practice (CoP) analysis is offered of what knowledge sharing practices emerge over time and how these were facilitated. The study supports previous studies' suggestion of collocation in offshoring for helping cross-cultural knowledge sharing. However, the short initial period of collocation suggested in these studies was insufficient for achieving knowledge sharing practices indicating a CoP. In conjunction with a longer period of collocation, four facilitators of cross-cultural knowledge sharing were: shared office, shared responsibility for tasks and problems, shared prioritization of team spirit, and a champion of social integration.
Introduction
The substantial research knowledge base on information technology and systems accumulated over decades does not prevent the persistent failures in both public and private enterprises. One explanation is the challenge of sharing knowledge such that it becomes embedded in the working practices of the involved practitioners. The knowledge sharing challenge is present throughout most aspects of information technology and systems development in and across organizations. The challenge can even be further exacerbated by cultural diversity when crossing not only organizational but also national boundaries. Offshoring in software development is a setting where the cross-cultural knowledge sharing challenge has a very persistent presence. With a history of numerous failures, many research efforts have investigated risks particular to offshoring and distribution [START_REF] Iacovou | A Risk Profile of Offshore-Outsourced Development Projects[END_REF][START_REF] Lamersdorf | A rule-based Model for Customized Risk Identification and Evaluation of Task Assignment Alternatives in Distributed Software Development Projects[END_REF][START_REF] Persson | A Process for Managing Risks in Distributed Teams[END_REF][START_REF] Singh | Risks Identification in an Offshore-Onshore Model Based IT Engagement[END_REF]. Knowledge sharing is one of the key challenges in software development with offshoring that is further exacerbated by different national cultures [START_REF] Boden | Knowledge Sharing Practices and the Impact of Cultural Factors: Reflections on Two Case Studies of Offshoring in SME[END_REF][START_REF] Dibbern | Explaining Variations in Client Extra Costs between Software Projects Offshored to India[END_REF][START_REF] Nakatsu | A Comparative Study of Important Risk Factors Involved in Offshore and Domestic Outsourcing of Software Development Projects: A Two-Panel Delphi Study[END_REF][START_REF] Persson | Managing Risks in Distributed Software Projects: An Integrative Framework[END_REF]. A general suggestion to alleviate risks related to offshoring in software development is collocation of developers. Previous research suggests that liaisons between sites address the risks related to knowledge management and cultural diversity in distributed software development projects [START_REF] Persson | Managing Risks in Distributed Software Projects: An Integrative Framework[END_REF]. A recent study of knowledge sharing practices and the impact of cultural factors found that spending time at the other site in software development offshoring is very good [START_REF] Boden | Knowledge Sharing Practices and the Impact of Cultural Factors: Reflections on Two Case Studies of Offshoring in SME[END_REF]. In global software development, face-to-face meetings, temporal collocation, and exchange visits are best practices with benefits such as trust, cohesiveness, and effective teamwork, but they are also constrained by extra costs (Šmite et al., 2010). While these suggestions have also been made for virtual work in general, other research has found that teams composed of distributed members can perform effectively without ever meeting face-to-face [START_REF] Watson-Manheim | Perceived Discontinuities and Constructed Continuities in Virtual Work[END_REF]. However, collocation of project participants is still an often-recurring suggestion for addressing the knowledge sharing challenge in offshore software development with cross-cultural relations.
The extent to which this suggestion of collocation should be taken has been given little attention in the above literature. Thus, a longitudinal case study has been conducted to investigate the following research question: When collocating cross-cultural software developers, what knowledge sharing practices may emerge over time and how can such practices be facilitated?
The research question was investigated through a case where a financial company engaged a large number of software developers from an Indian outsourcing provider and collocated them with its own developers in Denmark.
Based on a Communities-of-Practice (CoP) perspective on knowledge sharing, an analysis is conducted of what practices emerged over time and how they were facilitated. The following sections present the literature on cultural diversity and offshoring (2) and CoP (3), followed by the case study research approach (4) and findings (5) of knowledge sharing practices in a collocated cross-cultural software project. The contribution and implications of these findings are discussed (6), followed by the conclusion (7).
Cultural diversity and offshoring in software development
Collaboration among culturally and geographically distributed collaborators in software development projects or organizations has been conceptualized in different ways. Three common conceptualizations are 1) virtual teams [START_REF] Bergiel | Nature of Virtual Teams: A Summary of their Advantages and Disadvantages[END_REF][START_REF] Gibson | Unpacking the Concept of Virtuality: The Effects of Geographic Dispersion, Electronic Dependence, Dynamic Structure, and National Diversity on Team Innovation[END_REF], 2) global software development [START_REF] Damian | Guest Editors' Introduction: Global Software Development: How Far have we Come[END_REF][START_REF] Mishra | A Review of Non-Technical Issues in Global Software Development[END_REF], and 3) offshore outsourcing [START_REF] Doh | Offshore Outsourcing: Implications for International Business and Strategic Management Theory and Practice[END_REF][START_REF] Nakatsu | A Comparative Study of Important Risk Factors Involved in Offshore and Domestic Outsourcing of Software Development Projects: A Two-Panel Delphi Study[END_REF]. As indicated by several literature studies in different research fields, virtual teams are a widespread and frequently used conceptualization [START_REF] Persson | Managing distributed software projects[END_REF]. The majority of studies see virtual teams as functioning teams that rely on technology-mediated communication while crossing several different boundaries [START_REF] Martins | Virtual Teams: What do we Know and Where do we Go from here[END_REF]. Commonly noted boundaries are geographic, time, and organizational dispersion, while additional characteristics are electronic dependence, structural dynamism, and national diversity [START_REF] Gibson | Unpacking the Concept of Virtuality: The Effects of Geographic Dispersion, Electronic Dependence, Dynamic Structure, and National Diversity on Team Innovation[END_REF][START_REF] Martins | Virtual Teams: What do we Know and Where do we Go from here[END_REF][START_REF] Powell | Virtual Teams: A Review of Current Literature and Directions for Future Research[END_REF]. The term "team" suggests groups displaying high levels of interdependency and integration [START_REF] Powell | Virtual Teams: A Review of Current Literature and Directions for Future Research[END_REF]. However, virtual teams are often assembled from different organizations via outsourcing, or through joint ventures crossing organizational boundaries [START_REF] Martins | Virtual Teams: What do we Know and Where do we Go from here[END_REF][START_REF] Zigurs | Leadership in Virtual Teams:-Oxymoron Or Opportunity?[END_REF]. A virtual team perspective on collaboration has also been adopted for software development with offshore outsourcing [START_REF] Persson | Managing distributed software projects[END_REF][START_REF] Siakas | The Need for Trust Relationships to Enable Successful Virtual Team Collaboration in Software Outsourcing[END_REF]. Offshore outsourcing involves cross-organizational transactions by the use of external agents to perform one or more organizational activities [START_REF] Dibbern | Information Systems Outsourcing: A Survey and Analysis of the Literature[END_REF], crossing national borders. This can apply to everything from the use of contract programmers to third-party facilities management.
Offshore outsourcing arrangements can include a virtual team setting, pursuing high levels of interdependency and integration, while other arrangements go in opposite directions pursuing high levels of independence [START_REF] Dibbern | Information Systems Outsourcing: A Survey and Analysis of the Literature[END_REF][START_REF] Kaiser | Evolution of Offshore Software Development: From Outsourcing to Cosourcing[END_REF][START_REF] Siakas | The Need for Trust Relationships to Enable Successful Virtual Team Collaboration in Software Outsourcing[END_REF].
The participants in offshore software development may not share language, traditions, or organizational culture, which makes knowledge sharing very difficult. Language barriers are typically present in cross-national projects when sites and participants do not share a common native language or norms of communication, resulting in misinterpretations and un-conveyed information [START_REF] Krishna | Managing Cross-Cultural Issues in Global Software Outsourcing[END_REF][START_REF] Sarker | Implications of Space and Time for Distributed Work: An Interpretive Study of US-Norwegian Systems Development Teams[END_REF]. Overall, it takes more time and effort to communicate effectively in offshore projects [START_REF] Iacovou | A Risk Profile of Offshore-Outsourced Development Projects[END_REF]. However, studies have also shown successful knowledge sharing and collaboration among geographically and culturally distributed software developers through information and communication technologies [START_REF] Persson | Agile Distributed Software Development: Enacting Control through Media and Context[END_REF][START_REF] Yalaho | Key Success Factors for Managing Offshore Outsourcing of Software Production using the ICT-Supported Unified Process Model: A Case Experience from Finland, India, Nepal and Russia[END_REF]. Differences in work culture, team behavior, or organizational culture may also lead to difficulties [START_REF] Connaughton | Multinational and Multicultural Distributed Teams A Review and Future Agenda[END_REF][START_REF] Nakatsu | A Comparative Study of Important Risk Factors Involved in Offshore and Domestic Outsourcing of Software Development Projects: A Two-Panel Delphi Study[END_REF][START_REF] Persson | Managing distributed software projects[END_REF] that can be caused by divergence between sites in balancing collectivism and individualism, perception of authority and hierarchy, and planning and punctuality [START_REF] Herbsleb | Global Software Development[END_REF][START_REF] Krishna | Managing Cross-Cultural Issues in Global Software Outsourcing[END_REF]. This may lead to decreased conflict-handling and lower efficiency, or even paralyze the software project. In general, when projects are distributed across time, space, and culture, it is difficult to obtain the same level of group cohesion and knowledge sharing expected in collocated teams [START_REF] Sakthivel | Virtual Workgroups in Offshore Systems Development[END_REF]. One suggestion for addressing the risks in offshore software development is working face-to-face [START_REF] Sakthivel | Managing Risk in Offshore Systems Development[END_REF]. Working face-to-face for limited periods of time in global or offshore software development has been suggested by numerous studies [START_REF] Boden | Knowledge Sharing Practices and the Impact of Cultural Factors: Reflections on Two Case Studies of Offshoring in SME[END_REF][START_REF] Kotlarsky | Social Ties, Knowledge Sharing and Successful Collaboration in Globally Distributed System Development Projects[END_REF][START_REF] Krishna | Managing Cross-Cultural Issues in Global Software Outsourcing[END_REF][START_REF] Šmite | Empirical Evidence in Global Software Engineering: A Systematic Review[END_REF].
This may include the use of 'cultural bridging' staff with people rooted in both cultures or locals as on-site workers at the supplier [START_REF] Krishna | Managing Cross-Cultural Issues in Global Software Outsourcing[END_REF], exchange visits (Šmite et al., 2010), or the use of liaisons between sites to address the risks related to knowledge management and cultural diversity [START_REF] Persson | Managing Risks in Distributed Software Projects: An Integrative Framework[END_REF][START_REF] Persson | A Process for Managing Risks in Distributed Teams[END_REF]. Boden et al. [START_REF] Boden | Knowledge Sharing Practices and the Impact of Cultural Factors: Reflections on Two Case Studies of Offshoring in SME[END_REF] found in their study of knowledge sharing practices and the impact of cultural factors in offshore software development that spending time at the other site is very good. While the suggested best practices of face-to-face meetings, temporal collocation, and exchange visits can give rise to benefits in trust, cohesiveness, and effective teamwork, they are constrained by the extra costs (Šmite et al., 2010). In general, it has been argued that, in order to avoid project failures, the onshore and offshore teams from the vendor and client sides should work as an integrated project team [START_REF] Philip | Exploring Failures at the Team Level in Offshore-Outsourced Software Development Projects[END_REF].
While numerous studies have suggested collocation for alleviating knowledge sharing difficulties in cross-cultural and offshored software development, there is an apparent need for in-depth studies of how knowledge sharing practices can emerge with collocation. This may help managers make more informed decisions on how and to what extent collocation can be used for alleviating the risks associated with cross-cultural knowledge sharing in offshore software development. The following section presents the CoP framework for understanding knowledge sharing in practice.
Communities of Practice
Software development in and across organizations requires extensive knowledge sharing, which can be conceptualized as collective learning.
… collective learning results in practices that reflect both the pursuit of our enterprises and the attendant social relations. These practices are thus the property of a kind of community created over time by the sustained pursuit of a shared enterprise. It makes sense, therefore, to call these kinds of communities communities of practice. (Wenger, 1998 p.45)
The CoP conceptualization has been used extensively for explaining or cultivating knowledge sharing in distributed settings [START_REF] Hildreth | Communities of Practice in the Distributed International Environment[END_REF][START_REF] Kimble | Dualities, Distributed Communities of Practice and Knowledge Management[END_REF][START_REF] Wenger | Cultivating communities of practice: A guide to managing knowledge[END_REF], also called virtual communities of practice [START_REF] Ardichvili | Learning and Knowledge Sharing in Virtual Communities of Practice: Motivators, Barriers, and Enablers[END_REF][START_REF] Dubé | Towards a Typology of Virtual Communities of Practice[END_REF]. However, the influential works introducing CoP [START_REF] Brown | Organizational Learning and Communities-of-Practice: Toward a Unified View of Working, Learning, and Innovation[END_REF][START_REF] Lave | Situated learning: Legitimate peripheral participation[END_REF][START_REF] Wenger | Communities of practice: Learning, meanings, and identity[END_REF][START_REF] Wenger | Cultivating communities of practice: A guide to managing knowledge[END_REF] conceptualize it differently [START_REF] Cox | What are Communities of Practice? A Comparative Review of Four Seminal Works[END_REF]. These works differ markedly in their conceptualizations of community, learning, power and change, diversity, and informality; for instance, the concept of community is presented in the following ways [START_REF] Cox | What are Communities of Practice? A Comparative Review of Four Seminal Works[END_REF]:
- A group of people involved in a coherent craft or practice, e.g. butchers, or not a neat group at all [START_REF] Lave | Situated learning: Legitimate peripheral participation[END_REF].
- An informal group of workers doing the same or similar jobs [START_REF] Brown | Organizational Learning and Communities-of-Practice: Toward a Unified View of Working, Learning, and Innovation[END_REF].
- A set of social relations and meanings that grow up around a work process when it is appropriated by participants [START_REF] Wenger | Communities of practice: Learning, meanings, and identity[END_REF].
- An informal club or Special Interest Group inside an organization, set up explicitly to allow collective learning and cultivated by management action [START_REF] Wenger | Cultivating communities of practice: A guide to managing knowledge[END_REF].
This study investigates what knowledge sharing practices emerge over time when collocating a project's cross-cultural software developers. Thus, Wenger's [START_REF] Wenger | Communities of practice: Learning, meanings, and identity[END_REF] focus in the third bullet above on social relations and meanings (knowledge sharing) that grow up around a work process (software development project) when it is appropriated by participants (Indian and Danish software developers) is adopted instead of his more recent work in the fourth bullet above [START_REF] Wenger | Cultivating communities of practice: A guide to managing knowledge[END_REF]. Cox summarizes Wenger's (1998) definition of CoP as "a group that coheres through 'mutual engagement' on an 'indigenous' (or appropriated) enterprise, and creating a common repertoire" [START_REF] Cox | What are Communities of Practice? A Comparative Review of Four Seminal Works[END_REF]. Wenger (1998) associates community with practice that is the source of coherence for a community. He proposes three dimensions of the relation, including mutual engagement, joint enterprise, and shared repertoire (Fig. 1) (Wenger, 1998 p.73). Wenger [START_REF] Wenger | Communities of practice: Learning, meanings, and identity[END_REF] presents 14 indicators of CoP (Table 1), which show an emphasis on close relations created by sustained mutual engagement, as opposed to the less tight-knit community relations in his following work [START_REF] Wenger | Cultivating communities of practice: A guide to managing knowledge[END_REF]. While these indicators can be a strong aid in clarifying the nature of CoP, they have not been widely referenced by subsequent researchers [START_REF] Cox | What are Communities of Practice? A Comparative Review of Four Seminal Works[END_REF]. These indicators serve as the analytical framework for identifying knowledge sharing practices emerging over time in the investigated cross-cultural software project.
1) Sustained mutual relationships - harmonious or conflictual
2) Shared ways of engaging in doing things together
3) The rapid flow of information and propagation of innovation
4) Absence of introductory preambles, as if conversations and interactions were merely the continuation of an ongoing process
5) Very quick setup of a problem to be discussed
6) Substantial overlap in participants' descriptions of who belongs
7) Knowing what others know, what they can do, and how they can contribute to an enterprise
8) Mutually defining identities
9) The ability to assess the appropriateness of actions and products
10) Specific tools, representations, and other artifacts
11) Local lore, shared stories, inside jokes, knowing laughter
12) Jargon and shortcuts to communication as well as the ease of producing new ones
13) Certain styles recognized as displaying membership
14) A shared discourse reflecting a certain perspective on the world
Table 1. Indicators of CoP (Wenger, 1998 p.125-126)
The following section presents the investigated case, the data collection, and how the content of these data was analyzed for any supportive or opposing findings in relation to each of the 14 indicators of CoP.
4 Research approach
The research question was investigated through a longitudinal case study, exploiting the insight that "knowledge sharing practices need to be studied in context and longitudinally" when dealing with knowledge sharing and cultural diversity in software development with offshoring [START_REF] Boden | Knowledge Sharing Practices and the Impact of Cultural Factors: Reflections on Two Case Studies of Offshoring in SME[END_REF]. The adopted case study approach was, in the terms of [START_REF] Cavaye | Case Study Research: A multi-faceted Research Approach for IS[END_REF], a single case with the use of qualitative data for discovery, based on an interpretive epistemology. Interpretive research allows investigation of knowledge sharing and CoP in its organizational and cross-cultural context as socially constructed and thus open to several interpretations by organizational actors as well as the researcher [START_REF] Klein | A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems[END_REF][START_REF] Walsham | Doing Interpretive Research[END_REF].
The Case
The case was a software development project in a large financial company in northern Europe with a history of national mergers and acquiring companies in neighboring countries. Each acquisition requires a significant effort from the company's IT division, implementing the standard IT platform as quickly as possible in all new branches to achieve economies of scale. The responsibility for the IT platform resides at the company's headquarters. However, some acquired companies have their own IT departments that became engaged in making the shared IT platform adhere to specific financial software system requirements in their respective countries. The company's most recent acquisition is different from previous acquisitions. It is significantly larger, has a sophisticated IT platform, and is located in a country with a different language tradition from the dominant language within the company. Previous acquisitions were smaller, had an inferior IT platform, and involved a language tradition similar to or easily understandable to the employees of the company. This implementation project of the company's standard IT platform had more than 500 participants and a strict one-year deadline. The project required a large number of software developers and the company engaged an Indian software outsourcing provider. The company had limited experience with offshore outsourcing but had engaged an Indian outsourcing provider experienced in outsourcing relations with financial companies. The large integration project consisted of numerous subprojects associated with different departments of the company's IT division.
This case study was initiated through contact with a department manager who would supply the project managers and developers for the subprojects related to his department. Participants from the company's internal consultancy organization, responsible for both locally recruited employees and the consultants from the Indian outsourcing provider, would also populate these subprojects. The Indian consultants available for the department's subprojects were placed in a single subproject and collocated with the Danish participants at the department offices in Denmark. This subproject with the Indian participants was the focus of this case study. The subproject manager had two rather different tasks, one related to the company's telephone system and the other to the system managing payment agreements. Thus, in practice, the subproject consisted of two subprojects working on these tasks. Eleven people were involved in this subproject, including a project manager, a business developer, a test coordinator who left the company and was replaced by one of the developers, and eight developers. Three of the developers were Indian consultants, while the developer who replaced the test coordinator was a newly hired employee from the consultancy organization within the company.
The subproject delivered on the two tasks on time, without a high level of last-minute pressure and with only one reported error, which was easily amendable. Following delivery, the subproject participants spent most of their time on documentation and helping other subprojects. Other subprojects of the integration project also had extended collocation of Indian developers, but some of these projects experienced limited success in making them valuable contributing members. The overall integration project was implemented at the acquired company on the initially set date. However, the implementation was followed by numerous errors and a large amount of negative attention from the news media in the country of the acquired company. Within the company, the integration project was initially perceived as successful. However, the negative press eventually influenced this view. Over time, the news media attention has become more positive towards the company and the integration project. However, the integration project cannot easily be labeled as one of the grand successes or failures in IT, even though different stakeholders have attempted to label the project as either a big success or failure.
Data collection and analysis
The data collection spanned 1 year and 6 months (Fig. 2) and included various documents and audio-recorded meetings, observations, and interviews for understanding the context of the subproject with collocated Indian developers.
The subproject was investigated through six rounds of semi-structured interviews with all available participants (Fig. 2) resulting in 56 audio-recorded interviews.
Fig. 2. Timeline of project (timeline labels: project initiated, initial interviews & observations, Indians join project, interview rounds 1-6, the big release, project dissolved)
The author conducted the six rounds of interviews with an interview guide based on Wenger's (1998) conceptualization of CoP. The interview guide included questions about the activities of an ordinary day, use of tools, collaborators, collegial relations, professional inspiration, current challenges, and changes since the last interview. The interviews were planned to take half an hour per person, but some were longer while others were shorter. The first round of interviews was conducted two weeks after the first Indian developer arrived, while the Indian developers went back to India a few days after the fifth round of interviews, 10 months later. The knowledge sharing practices revealed through analysis of the first (9 interviews lasting 4h7m) and fifth (9 interviews lasting 4h4m) rounds (Fig. 2) are the primary focus of this paper.
The author conducted a content analysis of the audio-recordings of the first and fifth rounds of interviews using NVivo 8. A content analysis involves observing repeating themes and categorizing them using a coding system elicited in a grounded way (built up from the data) or from some external source (in this case the indicators of CoP presented in the theory section, Table 1). The content analysis was qualitative as the indicators of CoP were studied in their location in the source audio-recording, where the addition of context can help to identify additional relevant factors. The analysis software NVivo 8 was used for the code-based analysis, distinguishing between theoretical constructs (Wenger's (1998) indicators of CoP) and descriptive codes based on the language of the interviewees [START_REF] Fielding | Computer analysis and qualitative research[END_REF].
Findings
The analysis of the interviews two weeks after the Indians' arrival and two days before their departure resulted in 208 coded indications of CoP, with 86 opposing and 122 supportive findings. The arrival interviews had 99 CoP indicators, while the departure interviews had 109 CoP indicators (Table 2).
Indicators of CoP
1) Sustained mutual relationships - harmonious or conflictual: V 1 X 1 ! 5/2 ! 1/1 V 4 ! 7/7
2) Shared ways of engaging in doing things together: X 3 X 5 X 6 V 2 V 5 V 7
3) The rapid flow of information and propagation of innovation: X 3 X 7 V 2 V 1
4) Absence of introductory preambles, as if conversations and interactions were merely the continuation of an ongoing process: X 1 X 1 V 1
5) Very quick setup of a problem to be discussed: X 2 X 5 X 2 V 2 V 1 V 3
6) Substantial overlap in participants' descriptions of who belongs: V 1 V 2 ! 7/2 V 1 ! 6/3
7) Knowing what others know, what they can do, and how they can contribute to an enterprise: X 1 X 1 V 2 V 6
8) Mutually defining identities: X 1 X 1 X 5 V 2 V 3 V 7
9) The ability to assess the appropriateness of actions and products: X 2 X 1 V 2 V 3
10) Specific tools, representations, and other artifacts: X 4 X 3 V 1 V 1
11) Local lore, shared stories, inside jokes, knowing laughter: X 2 V 1 V 4 V 7
12) Jargon and shortcuts to communication as well as the ease of producing new ones: X 3 X 4 V 1 V 1
13) Certain styles recognized as displaying membership: X 1 X 2 X 3 V 1 V 4
14) A shared discourse reflecting a certain perspective on the world: X 1 X 1 X 2 V 2 V 3 V 6
Table 2. Indications of CoP from the project manager (PM), Indian Developers (IDs), and Danish Developers (DDs) (V: Supportive, X: Opposing, !: Mixed, "empty field": No findings)
Table 2 summarizes the supportive and opposing findings indicating CoP from interviews with the project manager, Indian developers, and Danish developers. An example of this coding, with the descriptive code "Indians are colleagues, but the language is difficult", is part of a Danish developer's response to the question of whether he considers the Indians his colleagues:
" I share an office with one… I think we try to include him (the Indian developer) and he tries to be included. There are off course some language issues. Sometimes we want to discuss something with the person next to you in Danish because it is difficult to do in English. Then he is excluded because we talk Danish in some situations. But, when we have had a discussion, we sometimes follow up with a summary to him-saying we have just discussed this and this, what do you think of that?" (DD, arrival)
The interpretation of this quote was that the Danish developer described the Indian as belonging (CoP indicator 6) and that they have a sustained mutual relationship (CoP indicator 1). However, the quote also showed opposition to rapid flow of information (CoP indicator 3), a very quick setup of problems to be discussed (CoP indicator 5), and shortcuts to communication (CoP indicator 12). Thus, this quote was coded with two supportive and three opposing indicators of CoP. Some of the 14 CoP indicators had neither supportive nor opposing findings, while two of the indicators had both supportive and opposing findings in the interview or interview grouping. Most of the statements reflecting CoP involved more than one of the 14 indicators presented in Table 2. In total, 73 statements reflecting CoP were coded with a description based on the language of the interviewees. Of the 73 statements, 25 had only opposing indicators, 36 had only supportive indicators, while 12 had both opposing and supportive indicators (as in the example presented above).
Two weeks after the Indians' arrival
The nine interviews after the Indians' arrival included the project manager, two Indian developers, and six Danish developers and a test coordinator. The analysis revealed 99 indicators of CoP, of which 75 were opposing and 24 were supportive (Table 2). The second CoP indicator, "shared ways of engaging in doing things together", was the most frequently identified, with 14 opposing findings from all but one Danish developer. The project manager shows an example of differences in ways of engagement between the Danish and Indian developers in this quote:
"… I have introduced project meetings every two weeks; I call them "buzz meetings"… I have not invited the Indians because people sometimes need to talk to me in Danish…" (PM, arrival)
In this way, the project manager established distinct engagement practices for the Indian and Danish developers. One Indian also mentioned that he usually eats lunch with the 10-15 other Indians participating in different projects at different departments and rarely with the other project participants. In addition, his description of a regular workday differs from that of the Danes by involving more individualized work on tasks from a schedule defined by the Danish project participants. These distinctive practices between the Indians and Danes also appeared in the use of software development tools (CoP indicator 10):
" … at present in the specification phase I am using RSM [IBM Rational Software Modeler]… it's a customized version for *the company*… *name of the other Indian project participant* is also making use of it, the others are also supposed to use it, but since it is a new tool they need some training… they do the work they are comfortable with, and I do the work I am comfortable with…" (ID, arrival)
The quote shows that the Indian developers bring knowledge to the project not held by the other participants. The Indians are not sharing this knowledge through mutual engagement with the Danish developers; instead, it is used for a division of labor. However, the Danes valued that the Indians held this knowledge, and they used it later to present the Indian developers as valuable project participants. The Danes also revealed resistance to the inclusion of the Indian developers:
"… I think that we all would prefer to avoid having the Indians because it takes time and nobody has time…" (DD, arrival)
Not all the project participants shared this resistance towards the Indian developers. At the time of their arrival, most of the participants described the Indians as belonging, and the Indians also described themselves as belonging (CoP indicator 6). The project manager and some of the Danish developers also emphasized the current and future mutual relationship between them and the Indians (CoP indicator 1).
Two days before the Indians' departure
The nine interviews two days before the Indians' departure, after 10½ months of collocation, included the project manager, three Indian developers, and five Danish developers and a test coordinator. The analysis revealed 109 indicators of CoP, of which 98 were supportive and 11 were opposing (Table 2). In the interviews close to the Indians' departure, the second CoP indicator, "shared ways of engaging in doing things together", was also one of the most frequently identified, but now with 14 supportive findings. This time, the project manager shows an example of a shared way of engaging both the Danish and Indian developers in the project planning with small responsive meetings:
" … we did not make a plan, saying this is the plan for the next three months, you continuously relate to it and if it is drifting, then we have a small meeting in payment agreement [subproject] to deal with it…" (PM, departure)
In the above quote, the project manager is inclusive of the Indians in the term 'we' while mainly distinguishing between project participants based on which subproject they belong to. Thus, they have a joint enterprise but also high mutual engagement between the two nationalities, as stated by an Indian developer:
"… They feel we are one among them, not separated by you being from India. Their treatment brought us close. Our thoughts are similar, that is the thing, that made us more close. We used to make fun in our rooms when we are working and we used to laugh together and that gives a better relation…" (ID, departure)
The quote shows that their mutual engagement has a shared repertoire in laughing together (CoP indicator 11) and an inclusive treatment of the Indian developers as members (CoP indicator 13) with similar ways of thinking (CoP indicator 14). The Danish developers mentioned elements of a mutual engagement with a shared repertoire in their joint enterprise, such as a good team spirit, which was also mentioned by the Indian developers:
"… There is a good team spirit… Off course, we know the Indians are going home now, so we need to get something out of them before they disappear. We have been involved in most of the things they have been doing, but some of the details are unknown to us…" (DD, departure)
While the good team spirit shows that the Indians were seen as belonging to the project (CoP indicator 6), this Danish developer also knows what they have contributed to their joint enterprise (CoP indicator 7). Knowing this, there was a concern related to the end of the collocated sustained mutual relationship with the Indian developers (CoP indicator 1).
Change in knowledge sharing practices
The 10 months of collocated software development, in addition to the two initial weeks, resulted in considerably changed knowledge sharing practices when comparing CoP indicators (Table 2). All of the 14 knowledge sharing practices indicating CoP had emerged over the 10 months of collocation in the cross-cultural software project. The following four knowledge sharing facilitators were synthesized from the descriptive codes attached to the indicators of CoP in the interviews two days before the Indians' departure. The Danish project participants viewed these facilitators as distinguishing their successful integration of the Indians compared to the other subprojects. These other subprojects had limited success in making the Indians valuable contributing members, despite similar extended collocation. The four knowledge sharing facilitators are exemplified with a quote from the interviews:
Shared office "… they have been sitting close together, that is what everybody says, it would not have been the same, if they have been sitting in an entirely different office…" (PM, departure)
Shared responsibility for tasks and problems "… once *name of Danish developer* felt stressed and asked for help, the others working on lower priority tasks immediately offered their help… it helps a lot that I am not alone with a task, it is a group assignment we are doing…" (DD, departure)
Shared prioritization of team spirit "I think we all wanted to create a good working relationship and we all know the importance of team spirit." (DD, departure)
A champion of social integration "… One of the reasons for the high integration of our Indians in our project is *name of ID* who arrived first… it is rare to see Indians eating lunch with Danes, but he often did that from day one, and influenced the other Indian project participants in that way… he tries to learn Danish and he is good at being extrovert… he did not come just to sit with the other Indians …" (DD, departure)
Discussion
The cross-cultural knowledge sharing challenge and the collocation strategy were investigated in software development offshoring with the research question "When collocating cross-cultural software developers, what knowledge sharing practices may emerge over time and how can such practices be facilitated?". This study shows that with collocation of a project's cross-cultural software developers, knowledge sharing practices covering all of Wenger's (1998) 14 indicators of CoP can emerge, but not after only two weeks of collocation. According to the project participants, these knowledge sharing practices were facilitated differently from projects with less successful use of collocation by: 1) shared office, 2) shared responsibility for tasks and problems, 3) shared prioritization of team spirit, and 4) a champion of social integration.
The study contributes to our understanding of the cross-cultural knowledge sharing challenge in the context of software development offshoring [START_REF] Dibbern | Explaining Variations in Client Extra Costs between Software Projects Offshored to India[END_REF][START_REF] Nakatsu | A Comparative Study of Important Risk Factors Involved in Offshore and Domestic Outsourcing of Software Development Projects: A Two-Panel Delphi Study[END_REF][START_REF] Persson | Managing Risks in Distributed Software Projects: An Integrative Framework[END_REF], investigating the suggestion of working face-to-face proposed by numerous studies of global and offshore software development [START_REF] Boden | Knowledge Sharing Practices and the Impact of Cultural Factors: Reflections on Two Case Studies of Offshoring in SME[END_REF][START_REF] Kotlarsky | Social Ties, Knowledge Sharing and Successful Collaboration in Globally Distributed System Development Projects[END_REF][START_REF] Krishna | Managing Cross-Cultural Issues in Global Software Outsourcing[END_REF][START_REF] Šmite | Empirical Evidence in Global Software Engineering: A Systematic Review[END_REF]. The study supports the potential value of collocation for risk alleviation [START_REF] Persson | A Process for Managing Risks in Distributed Teams[END_REF][START_REF] Sakthivel | Managing Risk in Offshore Systems Development[END_REF], but also extends these studies by showing how longer periods of collocation can support alleviation of the knowledge sharing challenge in cross-cultural offshoring. This investigation supports the [START_REF] Boden | Knowledge Sharing Practices and the Impact of Cultural Factors: Reflections on Two Case Studies of Offshoring in SME[END_REF] study of knowledge sharing practices and the impact of cultural factors in offshore software development, in their finding that spending time at the other site is very good. However, this study adds that an extended period of collocation may benefit cross-cultural knowledge sharing more substantially. This is based on the finding that knowledge sharing practices reflected in Wenger's (1998) indicators of CoP were not achieved in the first two weeks of collocation. Thus, this study extends the research suggesting face-to-face work for supporting cross-cultural knowledge sharing [START_REF] Boden | Knowledge Sharing Practices and the Impact of Cultural Factors: Reflections on Two Case Studies of Offshoring in SME[END_REF][START_REF] Kotlarsky | Social Ties, Knowledge Sharing and Successful Collaboration in Globally Distributed System Development Projects[END_REF][START_REF] Krishna | Managing Cross-Cultural Issues in Global Software Outsourcing[END_REF][START_REF] Šmite | Empirical Evidence in Global Software Engineering: A Systematic Review[END_REF] by showing that shorter periods of collocation may only have limited effect on achieving knowledge sharing practices that indicate a CoP. Yet the achievement of a CoP may be critical to the success of software development offshoring. This is supported by [START_REF] Philip | Exploring Failures at the Team Level in Offshore-Outsourced Software Development Projects[END_REF] claiming that in order to avoid project failures, the onshore and offshore teams from the vendor and client sides should work as an integrated project team.
The CoP framework (Wenger, 1998) provides a sophisticated theoretical explanation of working as such an integrated project team, without being a team in the traditional sense [START_REF] Powell | Virtual Teams: A Review of Current Literature and Directions for Future Research[END_REF]. However, this study found that collocation without facilitation, even for extended periods, might not result in successful cross-cultural knowledge sharing. Thus, four ways to facilitate cross-cultural knowledge sharing were proposed when adopting the extended period of collocation strategy in software development offshoring. These suggestions may contribute to frameworks for guiding the collocation strategy in software development offshoring or supplement other studies' suggestions [START_REF] Krishna | Managing Cross-Cultural Issues in Global Software Outsourcing[END_REF][START_REF] Persson | Managing Risks in Distributed Software Projects: An Integrative Framework[END_REF] for managing cross-cultural knowledge sharing in distributed settings without collocation.
The study has implications for managers in software development with offshoring, who may consider extended collocation as a potentially costly but effective mitigation strategy for projects with high risk-exposure related to cross-cultural knowledge sharing. They should, however, also take a critical stance towards the effect of initial short collocation on cross-cultural knowledge sharing practices at the level of ambition reflected in Wenger's [START_REF] Wenger | Communities of practice: Learning, meanings, and identity[END_REF] indicators of CoP.
Choosing an extended collocation period strategy, managers should carefully monitor and facilitate the emergence of knowledge sharing practices over time. Future research is, however, needed on what knowledge sharing practices can emerge over shorter periods of time when collocating a project's cross-cultural software developers, exploring the possibility of reducing the cost constraints of this strategy (Šmite et al., 2010). More research is also needed on why the four facilitators of cross-cultural knowledge sharing in collocated settings may be successful and how they should be implemented. Furthermore, future research is needed on the possibility of bringing CoP knowledge sharing practices from a collocated to a distributed setting. However, such research should consider the limitation of this study in the adopted conceptualization of CoP based on Wenger's [START_REF] Wenger | Communities of practice: Learning, meanings, and identity[END_REF] indicators, emphasizing close relations created by sustained mutual engagement. The findings based on this conceptualization of CoP may not be directly transferable to other conceptualizations of CoP [START_REF] Cox | What are Communities of Practice? A Comparative Review of Four Seminal Works[END_REF] more suitable for exploring less tight-knit community relations, as in Wenger's later work [START_REF] Wenger | Cultivating communities of practice: A guide to managing knowledge[END_REF].
Conclusion
This paper presents an investigation of the cross-cultural knowledge sharing challenge addressed by the collocation strategy in software development offshoring. A longitudinal case study of collocated Indian and Danish software developers revealed a positive change on 14 indicators of CoP over 10 months, while almost none of the 14 indicators of CoP had emerged after two weeks of collocation. The participants' contrasting of their project with the less successful use of collocated Indian developers in other projects was synthesized into four distinctive facilitators of cross-cultural knowledge sharing: 1) shared office, 2) shared responsibility for tasks and problems, 3) shared prioritization of team spirit, and 4) a champion of social integration. This study helps understand the potential of the collocation strategy for mitigating the cross-cultural knowledge sharing challenge in software development offshoring, but also presents a critical stance towards the effect of shorter periods of collocation on cross-cultural knowledge sharing at project initiation.
Fig. 1. Dimensions of practice as the property of a community (Wenger, 1998, p. 73): mutual engagement, joint enterprise, and shared repertoire.
"1001527"
] | [
"300821"
] |
01467787 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467787/file/978-3-642-38862-0_1_Chapter.pdf | Karlheinz Kautz
email: [email protected]
Dubravka Cecez-Kecmanovic
email: [email protected]
Sociomateriality and Information Systems Success and Failure
Keywords: IS success, IS failure, IS development, IS assessment, sociomateriality, actor-network-theory (ANT)
The aim of this essay is to put forward a performative, sociomaterial perspective on Information Systems (IS) success and failure in organisations by focusing intently upon the discursive-material nature of IS development and use in practice. Through the application of Actor Network Theory (ANT) to the case of an IS that transacts insurance products we demonstrate the contribution of such a perspective to the understanding of how IS success and failure occur in practice. The manuscript puts our argument forward by first critiquing the existing perspectives on IS success and failure in the literature for their inadequate consideration of the materiality of IS, of its underlying technologies and of the entanglement of the social and material aspects of IS development and use. From a sociomaterial perspective, IS are not seen as objects that impact organisations one way or another, but instead as relational effects continually enacted in practice. As enactments in practice IS development and use produce realities of IS success and failure.
Introduction
IS success and failure has been a prominent research topic since the very inception of the field. The whole Information Technology (IT) industry, as [START_REF] Fincham | Narratives of Success and Failure in Systems Development[END_REF] notes, loudly trumpets its successes and failures and in particular "seems perversely captivated by its own failures" (p. 1). Some examples of high-profile IS project failures include the disastrous development of 'Socrate' by the French Railways [START_REF] Mitev | More than a failure? The computerized reservation systems at French Railways[END_REF], the dramatic failure of Taurus at the London Stock Exchange [START_REF] Currie | Computerizing the Stock Exchange: A Comparison of Two Information Systems[END_REF], the failed patient administration system at NSW Health in Australia [START_REF] Sauer | Fit, Failure and the House of Horrors: Toward a Configurational theory of IS Project Failure[END_REF], and the Internal Revenue Service's development of a new US Tax Modernisation System [START_REF] Nelson | Understanding the Causes of IT Project Failures in Government Agencies[END_REF]. High failure rates of IS projects and our inability to understand and explain, let alone prevent, the failures suggest that perhaps existing assumptions and approaches to IS research have not served us too well.
In 2001, the Australian subsidiary of a large multinational insurance company dealing primarily in business and life insurance, which we call Olympia, undertook to become the first insurance provider in Australia of web-based e-business services to their brokers. In 2006, the web-based information system (IS), named 'Olympia-online', emerged as a sophisticated IS, eagerly adopted and highly praised by brokers. Olympia-online's success in the broker community created a competitive advantage for the company leading to an increase in their profit margins. However, being over time and over budget, and not delivering expected internal functionality, the system was perceived as a big failure by the top business managers. That Olympia-online was considered simultaneously as a success and a failure, with both views firmly supported by evidence, is extraordinary and challenges the established understandings of IS success and failure. From a rationalist perspective IS success or failure are defined as discrete, objectively measured and definite states and outcomes contingent upon simple causation, e.g. certain technical characteristics and social factors or causally linked variables [START_REF] Delone | Information System Success: The Quest for the Dependent Variable[END_REF]. The socio-technical and process oriented perspective assumes that there is no "objectively correct account of failure" (Sauer, 1993, p. 24) or success, but it too assumes an objectified view resulting from politically and socially determined flaws and processes. Neither the rationalist nor the socio-technical process approach can explain the persisting co-existence of Olympia-online success and failure. From a social constructivist perspective [START_REF] Fincham | Narratives of Success and Failure in Systems Development[END_REF], these co-existing perceptions can be explained by conflicting subjective interpretations and discourses of relevant social groups [START_REF] Bijker | Do Not Despair: There Is Life after Constructivism[END_REF][START_REF] Bartis | A Multiple Narrative Approach to Information System Failure: A Successful System that Failed[END_REF]. Assuming 'interpretive flexibility' of IS, this perspective helps understand how different social groups attribute different meanings and construct different assessments of an existing IS. This perspective has been critiqued for black-boxing IS and for putting too much emphasis on the interpretation and signification of IS while overlooking the ways in which IS' materiality is always already implicated in its social constructions [START_REF] Orlikowski | Sociomaterial Practices: Exploring Technology at Work[END_REF].
In response we propose a sociomaterial perspective on IS success and failure informed by the works of [START_REF] Orlikowski | Sociomateriality: Challenging the Separation of Technology, Work and Organization[END_REF], [START_REF] Latour | Reassembling the Social: An Introduction to Actor-Network-Theory[END_REF], and [START_REF] Law | Notes on the Theory of the Actor-Network: Ordering, Strategy and Heterogeneity[END_REF][START_REF] Law | After Method: Mess in Social Science Research[END_REF] among others. While social construction focuses on performativity of language and discourses, the sociomaterial perspective focuses on IS enactment in practice that implies performativity of both discourses and technologies. We suggest that an IS assessment is not only an interpretation or social construction, but a result of IS enactments in practice that produce realities [START_REF] Law | After Method: Mess in Social Science Research[END_REF]. If an IS can be differently enacted in different practices we should expect a possibility that such enactments can produce different realities. It is this reality-making capacity of IS enactment in different practices that we draw on to understand the co-existing realities of IS success and failure. When different IS enactments create multiple realities, contradicting realities of IS assessments may emerge and coexist. To substantiate our claim we draw from sociology of science and technology studies and specifically actor-network theory (ANT) [START_REF] Callon | Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fisherman of St. Brieuc Bay", in Power, Action and Belief: A New Sociology of Knowledge?[END_REF][START_REF] Latour | Reassembling the Social: An Introduction to Actor-Network-Theory[END_REF][START_REF] Law | Notes on the Theory of the Actor-Network: Ordering, Strategy and Heterogeneity[END_REF][START_REF] Law | After Method: Mess in Social Science Research[END_REF] as one prominent way of dealing with sociomateriality of IS [START_REF] Orlikowski | Sociomaterial Practices: Exploring Technology at Work[END_REF]. An ANT account of Olympia-online grounds and illustrates the sociomaterial perspective on IS success and failure.
We first review different approaches to IS success and failure, then we introduce the key assumptions of the sociomaterial perspective. This is followed by a description of ANT in the research methodology section, which leads to the ANT account of the Olympia-online development and use. The ensuing discussion focuses on the ways Olympia-online was enacted in the practices of the brokers, the developers, and the business managers, and how these different enactments created multiple, only partially overlapping realities. Finally, we summarize the contributions of the sociomaterial perspective to understanding the entangled, discursive-material production of IS success and failure.
Literature Review and Theoretical Background
We base our literature review on [START_REF] Fincham | Narratives of Success and Failure in Systems Development[END_REF] who distinguishes three perspectives on IS success and failure. The rationalist view explains success and failure as brought about by factors which primarily represent managerial and organizational features in system development and which are related through simple causation. DeLone and McLean's (1992) model is a prominent example of this view, and later studies claim that as much as 90% of IS failures are attributed to these factors. Underlying the rationalist view is an assumption that IS success and failure are discrete states that can be identified and predicted by the presence/absence of certain factors. Although the lists of factors do not provide coherent explanations of why and how success and failure occur and rarely explain the phenomenon across different organizations, this is still the dominant view in the literature [START_REF] Delone | The DeLone and McLean Model of Information System Success: A Ten Year Update[END_REF].
The process perspective addresses these shortcomings. It emphasizes organizational and social-political processes and explains success and failure as the result of a socio-technical interaction of different stakeholders with IS. [START_REF] Kautz | Introducing Structured Methods: An Undelivered Promise? A Case Study[END_REF] provide one example for this view; but [START_REF] Sauer | Why Information Systems Fail: A Case Study Approach[END_REF] model of IS failure is the most comprehensive framework utilizing this perspective. Although it focuses on organizational and socio-technical processes, these are still seen to cause failure/success as discrete outcomes [START_REF] Fincham | Narratives of Success and Failure in Systems Development[END_REF]. The perspective remains anchored to some rational assumptions such as failure having a clear-cut impact and being objectivised as irreversible. Thus it does not cater for ambiguity in socio-political processes nor does it allow reflections on the relationship between success and failure, such as why success and failure are simultaneously attributed to the same IT artefact or how and why success is so often created out of failure.
Alternatively, [START_REF] Fincham | Narratives of Success and Failure in Systems Development[END_REF] puts forward a social constructivist, narrative perspective where the organizational and socio-political processes and the actions and stories accompanying them are based on sense-making and interpretation and where IS success and failure are explained as a social construction. The social constructivist perspective draws attention to different viewpoints and interpretations of IS by relevant social groups thus resulting in interpretive flexibility of IT artefacts [START_REF] Bijker | Do Not Despair: There Is Life after Constructivism[END_REF][START_REF] Wilson | Power, Politics and Persuasion in IS Evaluation: A Focus on 'Relevant Social Groups[END_REF]. Extending the social constructivist perspective with organizational power and culture, Bartis and Mitev (2008) explain how the dominant narrative of a more powerful relevant social group prevailed and disguised an IS failure as success. [START_REF] Mitev | Are Social Constructivist Approaches Critical? The Case of IS Failure[END_REF] also proposes extending the social constructivist perspective by using ANT. Like McMaster et al. (1997) and [START_REF] Mcmaster | Success and Failure Revisited in the Implementation of New Technology: Some Reflections on the Capella Project[END_REF], she utilizes ANT with its concepts of human and non-human actants interrelated in actor-networks going beyond simple explanations of technological determinism to explain success and failure. However, these accounts neglect a view of IT artefacts as more than social constructions, but less than reified physical entities [START_REF] Quattrone | What Is IT? SAP, Accounting, and Visibility in a Multinational Organisation[END_REF].
This view emphasises the sociomaterial nature of an IT artefact: its agency resides neither in a technology nor in a human actor, but in a chain of relations between human and technological actants. With its rich theoretical background it inspired us to propose a fourth, sociomaterial perspective on IS success and failure which goes beyond social construction and the representational perspective where IS success and failure are represented either by objective measures or by subjective perceptions of social actors assuming that representations and the objects they represent are independently existing entities. The sociomaterial perspective assumes inherently inseparable sociality and materiality of IS [START_REF] Orlikowski | Sociomaterial Practices: Exploring Technology at Work[END_REF][START_REF] Orlikowski | Sociomateriality: Challenging the Separation of Technology, Work and Organization[END_REF]. It introduces a way of seeing IS development and use, and its assessment, as not only discursively constructed but also materially produced and enacted in practice. Exemplified by ANT, the sociomaterial perspective assumes a relational ontology involving human and non-human actors that take their form and acquire their attributes as a result of their mutual relations in actor-networks. Its relationality "means that major ontological categories (for instance, 'technology' and 'society', 'human' and 'nonhuman') are treated as effects or outcomes, rather than as explanatory resources" (Law, 2004, p. 157); IS development and use can thus be seen as relational effects performed within actor-networks. We focus on the performed relations and the phenomena of IS development, use and assessment as the primary units of analysis and not on a given object or entity. In following [START_REF] Barad | Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter[END_REF] we understand phenomena as ontologically primitive relations without pre-existing relata, which exist only within phenomena; they consist of ontologically inseparable, agentially intra-acting components. The notion of intra-action constitutes an alteration of the traditional notion of causality. Intra-actions within a phenomenon enact local agential separability and agential cuts which effect and allow for local separation within a phenomenon. Hence, within inseparable phenomena agential separation is possible. Performativity then is understood as the iterative intra-activity within a phenomenon [START_REF] Barad | Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter[END_REF]. This perspective allows us to identify and to better understand IS-related phenomena by investigating them in their inseparability as well as in their local separability, intra-action and agency through agential cuts, both in the context of utilization and the development of IS. Table 1 summarizes the four perspectives on IS success and failure. The sociomaterial perspective helps us turn the epistemological question (how can we find out and predict whether an IS is a 'true' success or failure?) into ontological ones: How does IS success or failure come about? How is an IS enacted in practice? How do these enactments produce different realities, including the coexisting assessments of IS success and failure? To answer these questions we do not take the social factors or processes or technology as given.
Instead we investigate the actors and actants as they enrol and perform in heterogeneous actor-networks; we follow the emergence and reconfiguration of actor-networks and the ways IS enactments in practice are negotiated and realities are created. Within the sociomaterial perspective IS are seen as sociotechnical relational actants or actor-networks that come into being through enactment in practice. This enactment involves mutually intertwined discursive and material production. Different perceptions and interpretive flexibility of an IS, as advocated by social constructivists, reveal only one side of a coin: the discursive production of IS. The sociomaterial perspective broadens our gaze by attending to the ways in which IS are enacted and performed simultaneously and inseparably socially, discursively and materially, technologically in relations in practice. We acknowledge that, being performed and enacted in different practices, IS create multiple realities, and there are various possible reasons why an IS enactment creates one kind of reality rather than another.
Research Methodology
ANT embodies several key aspects of sociomateriality relevant to our examination of IS success and failure. ANT does not make a priori assumptions about the nature of actors or the ways they act to make up their worlds. Any human or non-human actor can be involved in relations, form alliances and pursue common interests in an actor-network. An IS development and implementation as well as utilization endeavour can be seen as an emergent, entangled, sociomaterial actor-network created by aligning interests of developers, users, documents, methodology and technologies. The alignment of interests within an actor-network is achieved through the enrolment of allies and translation of their interests in order to be congruent with those of the network [START_REF] Walsham | Actor-Network Theory and IS Research: Current Status and Future Prospects[END_REF]. The actors enrolled in a network have "their own strategic preferences [and] the problem for the enroller therefore is to ensure that participants adhere to the enroller's interests rather than their own" (McLean and Hassard 2004, p. 495).
Translation can be achieved through scripts, which influence actors to act in a particular way so as to achieve an actor-network's goals. An actor-network does not imply existence of its constituting actors, but rather sees them constituted by the relations they are involved in. It is the morphology of relations which tells us what actors are and what they do [START_REF] Callon | Actor-Network Theory: the Market Test[END_REF]. The network changes through enrolment of new actors, creation of new alliances and changing relations among its actors. With an increasing alignment of interests and strengthening of relations an actor-network becomes more stabilized. This is what actor-networks strive for. However, they do not necessarily succeed; they may get weaker, break up and disappear. How they strengthen and stabilize or break up and dissolve is an interesting theoretical question with serious practical implications.
In line with other IS researchers who adopted ANT to provide robust accounts of the production and reconfiguration of relations in the development and implementation of IS seen as actor-networks (see e.g. [START_REF] Mitev | More than a failure? The computerized reservation systems at French Railways[END_REF]Vidgen and McMasters 1997;[START_REF] Walsham | GIS for District-Level Administration in India: Problems and Opportunities[END_REF][START_REF] Holmström | Understanding IT's organizational consequences: An actor network theory approach[END_REF] we develop an ANT account of the Olympia-online project. Without many prescriptions in the literature about how to do that we focused on practices of IS development and use and adopted the general advice 'follow the actors'. One member of the research team spent 6 months as staff on the Olympia-online development team. This was useful for gaining knowledge of the company, its management and IS development processes and for subsequent examination of actors and actor-networks. However the actual ANT study of the project started after her contract in the company had been concluded.
We initially focused on the development team, but then expanded our view as we traced enrolments, actors and their associations. The tracing of associations and identification and exploration of the creation and emergence of actor-networks led us to new human and non-human actors in the project team and beyond: to managers and brokers, the e-business platform, the insurance industry, etc. At some point following the actors and tracing further associations had to stop. We had to learn to recognize when and where to 'cut the network', in the words of [START_REF] Barad | Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter[END_REF] identifying or making agential cuts. "The trick is", says Miller (1996, p. 363), "to select the path you wish to follow, and those which you wish to ignore, and do so according to the assemblage you wish to chart".
During our study we encountered 46 human actors, engaged with technologies, important documents and other non-human actors, at different stages and locations. We had informal conversations with 21 human actors, traced an additional 13 that played a role in the past but had left, and formally interviewed 12: 2 architects, 2 application developers, a data migration developer, a senior business analyst, a business project manager, a test team leader, a business expert for underwriting, a business expert liaising with brokers, a senior IS executive and a senior General Insurance (GI) business manager. Documents that played an important role included a business plan, a business case and scope document, business information requirements, change requests, test plans, test cases, and project reports; important technology actants included the web-based e-business platform, a rule-based software engine, mainframe resources, application programs, interface designs and programs, and IT architecture.
The empirical data helped us reveal and reconstruct the trajectories of the actor-networks. It often exposed tensions and the political nature of the issues discussed. In the interviews as well as in the informal discussions we let the actors make sense of the project, their experiences and various events. Following the actors and their relations emerging in the project as well as executing, identifying and analysing agential cuts helped us map the creation and reconfiguration of several actor-networks. This was an iterative process that involved describing, analysing and revising these actor-networks, using ANT inquiry to reveal the inner workings of various actors and their networks, e.g. enrolling actors into a network and ensuring that members of a network align with the enroller's interests; using delegates such as technology or documents to exert power and influence others. In this way we exposed different enactments of Olympia-online in practice and the ways in which they produced multiple realities including the coexisting and controversial assessments of its success and failure.
The Company Olympia and its Olympia-Online Project
Olympia-online was an industry-first e-business system in the Australian insurance market that transacted the company's insurance products directly to brokers over the web. Knowledge about building such systems was scarce in both the insurance and the IT industry. The final system was highly innovative in the way it represented the company's insurance products and enabled on-line engagement and interaction with brokers, who as intermediaries sell these products to customers. The withdrawal of the top management support for further Olympia-online development created a worrying situation for the company. Olympia-online was vital to the company since all its business was mediated through brokers; unlike other insurance companies, it had no direct contact with individual customers. System developers in particular were acutely aware that the lack of top managers' commitment to continue funding its further development would put the company at risk and seriously threaten its future competitiveness.
Prior to the development of Olympia-online, Olympia was not seen as a major competitor in the Australian general insurance market. All e-business in the Australian Insurance Industry was conducted via 'BrokerLine', an outdated mainframe-based electronic platform, run by Telcom, an Australian telecommunications company. Early in 2001 Telcom announced that they were ceasing operation of BrokerLine and that all companies were required to move their business operations to a new web-based platform, 'Horizon'.
Most insurance companies transacted their business both directly with individual businesses and via brokers, being reciprocally aligned with both; thus Olympia was particularly vulnerable to the platform change. Fearing loss of their business and simultaneously recognising opportunities of a new web-based platform, Olympia's GI Business Division and the Strategy & Planning Division went about putting together a business case for the development of a new web-based IS, Olympia-online. They inscribed Olympia's interest and its new strategy into the Olympia-online Business Plan, which became an effective instrument for enrolling the Information Services Department (ISD) into the new IS development actor-network. This inscription was strong enough to motivate ISD to attempt alignment with Horizon and the brokers. As a key actor in the new emerging actor-network ISD was charged with the responsibility to develop a concrete solution: a new IS that enabled transacting with brokers via Horizon's web-based platform as described in the Business Case documentation. With a prospect of becoming the only channel through which Olympia would interact with brokers to sell its products, Olympia-online development became a strategic IS project in the company.
Olympia-online was a new type of IS in the insurance industry. Without in-house experience or skills and resources, Olympia searched for a supplier with the capabilities to develop Olympia-online thus attempting to enrol an actor to ensure Olympia's alignment with Horizon. Based on the scripts expressed in the Business Case documentation, two companies bid for a contract with Olympia thus attempting to forge an alignment with the company. This process was mediated through Olympia's Senior ISD Architect. The company HighTech was successful as it promised delivery within the desired timeframe and a fixed-price contract. Developers from HighTech succeeded in demonstrating that Emperor, a proprietary rules engine of which HighTech was the sole reseller in Australia, was an appropriate technology upon which the new system could be built. By successfully aligning themselves with Olympia's strategy inscribed in the Business Plan, HighTech and Emperor became enrolled into the Olympia-online development actor-network. The signed contract marked the beginning of Phase 1 of the Olympia-online development. Phase 1 development began with initial requirements gathering sessions by the HighTech team.
Phase 1 Olympia-online Development
The Hightech team had to understand the insurance business, the data and rules in insurance products, as they had no experience in insurance applications. Once development was underway, several problems emerged. It became clear to ISD staff that HighTech's developers had not grasped the breadth and depth of the problem, resulting in the project running seven months over schedule. In retrospect, the Senior ISD Architect involved in commissioning HighTech noted that the HighTech team "didn't understand the problem at hand" and underestimated its complexity, costs and the required development time. During this time, ISD staff realized that Emperor "was not the right engine for Olympia-online's purpose". When used to model insurance products and their complex business rules, Emperor exhibited severe limitations and rigidity. As a result the design of the application software was cumbersome and complex, requiring the development of extra software components to compensate for its insufficiency. Instead of working with a rule engine that had a "natural fit" with insurance products as HighTech developers initially claimed, the ISD team discovered a "dramatic misfit". Emperor was misaligned with Olympia-online's objectives.
In the initial Olympia-online development actor-network there were several attempts of alignment and many translations going on. The HighTech Project Manager attempted alignment with the Senior ISD Architect and at the same time exerted power over the Olympia team's work by making design decisions regarding the use of Emperor in the system's development based on his architect's advice. While he never fully disclosed Emperor's limitations for modelling insurance products, he successfully negotiated and established its key role in Olympia-online.
As the relations between the software components built on Emperor and those on the mainframe grew tighter the implications for the functionality and efficient operations of Olympia-online became more evident. The relations between the HighTech team and the Olympia team, Emperor and the mainframe system were highly contentious, yet critical for the development of Olympia-online. By insisting on Emperor as a platform for the application software the HighTech Project Manager by way of his Architect ensured that his company's interests were inscribed in the software. The more this software became dependent on Emperor the more this actor-network became irreversible 2 .
During Phase 1 the development actor-network was continually reconfigured through a series of translation processes that strengthened some alignments but failed others, and thus prevented its stabilization. The two actors overseeing this work, the HighTech Project Manager and the Olympia Head Architect, had trouble ensuring the delivery of the system with the specified functionality on time. At the beginning of 2002, as Phase 1 was significantly delayed, the GI Business Division was anxious to announce to the brokers that the new system was ready for use. They publicly promised that full functionality would be available by mid 2002 which upset ISD staff. When finally delivered to the brokers, despite nine months delay, Olympia-online was a great success: brokers were delighted with the new technology. The web-based specification of insurance products enabled brokers' flexible interaction with Olympia while selling its products to customers. Able to focus on customer needs and tailor products to meet these needs brokers gradually changed their work processes. Their enactment of Olympia-online produced different practices in transacting business with Olympia and its customers.
2 Phase 1 was further complicated by yet another translation process going on in the development of the interface between Olympia-online and Horizon. This was carried out by another third party, that is, another actor-network which we did not dig further into as this was not relevant for answering our research question.
This first implementation of Olympia-online however exposed numerous technical problems, slow performance, frequent crashes and defects. As a result ISD staff had huge difficulties in maintaining it. Furthermore, its design was not modular and hence the system lacked the ability to be scaled to Olympia's future needs. ISD staff and their Senior Architect in particular, believed that Olympia-online's technical failures were caused primarily by the use of Emperor that "could not easily model complex insurance products". The Emperor's rule engine, they found out too late, was originally developed for specification of physical products such as machinery and had never been used before for products as complex as insurance. The enrolment of HighTech and Emperor into the Olympia-online development was, in their view, a wrong decision. In the meantime, the broker community strengthened their relation with Olympia and communicated its satisfaction with the system to the GI Business Division. Being first-to-market Olympia-online attracted new brokers and boosted business so GI revenue for business insurance grew significantly. Through their contacts with the HighTech Project Manager GI Business Managers believed that Emperor was the key contributor to the success. They were not aware of the problems experienced in the development nor did they realise the full extent of the system's technical failures and instability in operations. They thought the Olympia-online system was an unqualified success.
Based on this market success, GI Business Managers, in discussions with HighTech, made the decision to purchase $1 million worth of Emperor Licenses such that the existing system could be extended and more systems and products could be developed in the future. This decision was made without consulting ISD staff, as their mutual relations had deteriorated by that time. In the meantime, ISD staff were busy struggling to maintain an unstable system and respond to numerous defects. When Olympia-online became so unstable that its maintenance and use could no longer be sustained, ISD proposed Phase 2 of the Olympia-online development. Since GI managers had already spent $1 million on licensing it ISD had no other option but to continue the further development of Olympia-online with Emperor.
Phase 2 Olympia-online Development
Phase 2 started in mid-2003 and the system went live in April 2005. One major goal was to bring the Olympia-online development and knowledge in house, since "it was the key to Olympia's overall strategy", and to prevent expertise from leaving the company. This goal was not easily achievable since Olympia continued to be reliant on HighTech as the only resource provider for Emperor in Australia. The other goal was the delivery of the system on time and budget. Consequently Olympia-online was now developed under a stringent project governance and management regime. The emerging situation resulted in three partially overlapping actor-networks: a Steering Committee, a development, and a Brokers actor-network.
A Steering Committee consisting of stakeholders from the GI Business Division, the Strategy and Planning Division, as well as ISD, was created. The Steering Committee was financially responsible for the project and thus primarily concerned with timeframes and costs. According to a team member, Phase 2 "focused disproportionately on short-term issues and cost considerations, at the expense of long-term quality and functionality". This, in his view, stemmed from the Steering Committee via the Business Project Manager and the IS Project Manager, two new roles, who were responsible for short-term goals, the system's delivery on time and on budget, but "seemed not concerned with the system's objectives in the long term".
The emerging Steering Committee actor-network grew more aligned with the commitment to impose tighter control over the Olympia-online project, keen not to repeat the mistakes from Phase 1. At the same time its relations with the development network deteriorated, leaving few options for interaction. In the meantime the development actor-network continuously reconfigured. Continued problems with Emperor and inadequate resources, as requests for additional resources were rejected by the Steering Committee, increased tensions and prevented its stabilization. In Phase 2 the Olympia development team again did not have enough time and resources to design a modular architecture based on which all subsystems, including future system expansion, would be developed. The tight budget control and insistence on the planned timeline by the IS Project Manager increased tensions and did little to resolve the key problems in the development of the system.
A Brokers' actor-network emerged and grew strong throughout Phase 2. Major efforts and resources were allocated to redesigning Olympia-online to serve brokers due to the Business Experts' continued parallel engagement in the development actor-network and with the broker community. They successfully translated the project objectives to be aligned with their own and the broker community's interests. Their involvement and influence ensured that the system was not implemented until a sufficient level of functionality and quality required by brokers had been delivered. While this caused tensions with the IS Project Manager "who was constantly pushing for fast delivery", the engagement of the Business Experts resulted in the inscription of the brokers' views and the translation of their interests in Olympia-online leading to a strong alignment between the system and the broker community and to a network stabilization. The quality of the resulting Olympia-online was, as the Business Expert liaising with brokers confirmed, exceptional. This ensured that Olympia-online continued and enhanced its market success. This was acknowledged by a Senior GI Business Manager. However, he also said that the project overran, cost too much, and didn't deliver the expected internal functionality.
This view prevailed in the Steering Committee actor-network despite attempts at alignment by the two Business Experts to convince the Steering Committee about the system's market success. While they were initially enthusiastic about the Olympia-online development, the GI business managers did not engage with the Phase 2 development, as their enrolment in that actor-network became weaker rather than stronger. The reports to the Steering Committee by the IS Project Manager, providing the key relations with the development network, did not indicate early enough that Phase 2 would be delayed and over budget, nor did they indicate that the internal functionality would not be delivered. The managers' view was that they had pretty regular requirements for the system's core internal functionality, management and operational reporting, that "any IS would normally deliver". Their requests were not translated into the system. When the system at the end of Phase 2 did not deliver the requested functionality and when it became evident that it was again over time and over budget, there was no doubt in the Steering Committee network that the Phase 2 Olympia-online was an "obvious failure". The GI Business Division, ultimately responsible for funding, withdrew their support and the Steering Committee did not approve plans for building Olympia-online further. This decision might jeopardise Olympia's market position and lead to a loss of competitive advantage.
How Olympia-online became both a Success and a Failure
The conflicting assessments of Olympia-online cannot be explained within existing perspectives. Taking the rationalist perspective (DeLone and McLean 1992, 2003) (see Table 1) one would expect that 'senior management support', evident during most of the project, is a good predictor of Olympia-online success. However this factor was probably more directly linked to system failure: GI managers' support led to the decision to purchase the license that played a major role in the production of system failure. Another expected success factor, 'strict management control', evident in Phase 2, can also be associated more with the failure than the success. Circumstances and dynamics of organisational processes in any non-trivial IS development are so complex that a simple explanation of causally linked factors does not make much sense.
The process perspective would reveal organizational and socio-technical processes that led to successful innovation in selling insurance products and the brokers' interaction with the company, therefore leading to success. It would also reveal many technical flaws in designing Olympia-online that led to failure. The perspective however cannot deal with ambiguous and changing assessments nor with contradicting outcomes -Olympia-online is neither abandoned nor supported for future development. While the process perspective does not attempt to provide an objective account of IS success and failure it still sees them as discrete and irreversible 'outcomes', resulting from certain organisational and sociotechnical processes.
The social constructivist perspective would explain how both the success and failure of Olympia-online were socially constructed. From this perspective the stories and narratives of relevant social groups, GI managers, developers, and brokers, can be seen as producing the discourses of failure as well as the discourses of success. The Olympia-online system is perceived and interpreted differently by these relevant social groups, thereby implying its interpretive flexibility. Pluralist views and different Olympia-online assessments thus are perspectival in nature. The problem with social constructivism is that excessive power is attributed to representations and words and discourses of IS success and failure without recognising their material foundation.
Reconceptualizing the success and failure of IS
The sociomaterial perspective considers IS success and failure as relational effects that do not exist by themselves but are endlessly generated in actor-networks [START_REF] Law | After Method: Mess in Social Science Research[END_REF]. It directs attention to the different practices enacted by Olympia-online development and use and to the ways in which such enactments produced multiple realities of system success and failure. The sociomaterial perspective is premised on a conception of technology and IS as non-human actors which are constitutively entangled with human actors in webs of relations in situated practices [START_REF] Orlikowski | Sociomaterial Practices: Exploring Technology at Work[END_REF]. Instead of investigating how one impacts on the other, we experience these actors' worlds. This enables us to see how IS success and failure are produced by sociomaterial dynamics in actor-networks. Actor-networks are not clearly distinguishable entities; they are (parts of) sociomaterial entanglements and become visible through agential cuts.
The production of Olympia-online success in the brokers' actor-network can be traced to the Business Experts' engagement to translate the brokers' needs and interests into the development of the Olympia-online software. Their engagement became even more prominent in Phase 2 as they forged close interaction and further alignment between brokers, the development team, the Olympia-online software and Horizon. Emerging through these processes was the brokers' network which had an evident overlap with the development network; they shared actors and relations that assisted their mutual alignment. Being heavily engaged in the system testing, the brokers expressed their high appreciation for the system quality. The new reality of Olympia-online enacted in the brokers' actor-network resulted from transformation of their work practices and their innovative ways of customizing products for customers and transacting business with Olympia. The wide adoption of these practices created market success and tangible benefits for Olympia. The production of success was not only the result of the brokers' attribution of meanings and discursive construction of Olympia-online in their practices. It was also material and technological; the development actor-network and the overlapping brokers' actor-network jointly created such a sociomaterial constellation that the development of Olympia-online and its enactment in the brokers' practices became closely connected, mutually triggering changes in each other. The brokers were part of and contributors to the sociomaterial constellation as they innovated their practices through the appropriation of Olympia-online and based on this experience suggested changes in the system.
A sociomaterial entanglement is a network arrangement, a mangle of practice, which implies inseparability of the social, discursive and the material, technological that are "mutually and emergently productive of one another" (Pickering 1993, p. 567). They are inseparable in the overall reconstruction of organisational reality, but become locally separable through agential cuts. It is in fact very difficult, sometimes impossible, to separate the social, discursive from the material, technological production of the Olympia-online reality in the brokers' practices. They are intimately fused in the brokers' situated experiences and their enactment of new and innovative practices in the sociomaterial constellation emerging in the brokers' actor-network. The sociomaterial constellation that fused together multiple meanings and material technologies of Olympia-online development, its software proper, and the brokers' practices was created simultaneously in both networks. The two networks were porous enough to co-create such a sociomaterial constellation that in the brokers' network produced the reality of the Olympia-online success in the market.
The trajectory and dynamics of the Olympia-online development actor-network are even more complicated. The complexity of this network arose due to enrolment of numerous actants, complex translation processes and continuous building and reconfiguring of relations during both phases. The key enrolment in Phase 1 of the development network was that of the rule-engine Emperor. Due to Emperor's central role in modelling insurance products and related business rules the Olympia-online software became intimately dependent on it. With the purchase of the Emperor License, the actor-network further increased its dependence on Emperor with significant ramifications.
When the Olympia team engaged with Emperor's rule-based engine (while designing and testing the software's structure, processes, user interface, security procedures, etc.) they experienced severe limitations in modelling insurance products. The software built was therefore cumbersome and complex. The sociomaterial constellation that emerged in the relations between the Olympia team, the HighTech team, Emperor, the mainframe, and the business experts only allowed for a limited and ineffective translation of insurance products and business rules into the Olympia-online software. Emperor's agency in this translation process was not only a social construction. It was relationally and materially enacted through the project managers', the architects' and the developers' practices within this sociomaterial entanglement which did not leave them with many design alternatives, allowing only particular design practices, and leaving traces in the designed structures, application programs, and processes. This is congruent with [START_REF] Quattrone | What Is IT? SAP, Accounting, and Visibility in a Multinational Organisation[END_REF] findings that agency of technology extends beyond human responses to it and that it resides in the chain of relations between the actants.
While, after Phase 1, the general consensus amongst Olympia-online development team members was that Emperor should be abandoned, this became politically unfeasible due to the money spent on Emperor licensing. Furthermore, the more code the development team developed based on Emperor the less likely they were to abandon it. Dependency on Emperor in the development network thus became more and more irreversible, making it almost impossible to "go back to a point where alternative possibilities exist[ed]" (Walsham and Sahay 1999, p. 42). The increasing irreversibility of the development actor-network made its sociomaterial entanglement increasingly more critical and consequential for the final system.
Another important dynamic arose through this network's relations with the Steering Committee network. During Phase 2 the Olympia team requested further resources arguing that the complexity of Olympia-online and the problems with Emperor necessitated much more than initially planned. However, no additional resources were approved by the Steering Committee. The very objective of creating this committee and the two new management roles in Phase 2 was to enforce strict budget control and the delivery deadline. This objective was firmly held by the committee and the network formed around it. Without additional resources the team was not able to deliver full functionality. Seeing the functionality for the brokers as a priority the team, to some extent influenced by the Business Experts, allocated all their resources to develop this functionality first. It however meant delaying the development of functionality required by the GI managers. This was not known outside this network and was first reported by the IS Project Manager to the Steering Committee just before the end of Phase 2.
Although the Olympia-online development was not completed and its actor-network did not stabilize, Phase 2 was concluded as the project was already over time and budget. At this point in time the developers were fully aware of the technical deficiency of the Olympia-online design. Highly limited resources and the complexity of the design had prevented a radical change of the architecture in Phase 2. It was a series of sociomaterial entanglements that we traced during this network reconfiguration in Phases 1 and 2, which produced the final Olympia-online system. But this did not happen in isolation. Relations emerging in the other two networks, partially overlapping with the development network, played their role as well.
The Steering Committee actor-network had a direct influence on the IS Project Manager and the Business Project Manager that were charged with the responsibility to impose stringent project and budget control. This was a major relation between the Steering Committee actor-network and the development network. By purchasing the Emperor license and thereby effectively enrolling it in the development actor-network the Steering Committee showed their commitment and support for the project. However, the relations between the two networks became weaker. Attempts by the Business Experts to strengthen the ties with the Steering Committee and align its network's objectives with that of the development network had not been doing well. In Phase 2 the two networks emerged less connected and less aligned than before. The market success of Olympia-online was acknowledged by the GI managers but this only confirmed their appreciation for Emperor. The sociomaterial entanglement within the Steering Committee actor-network was enacted by the managers' preoccupation with budget control and deadlines, sporadic relations with the HighTech team and reports by the IS Project Manager, including the final one informing them that the expected functionality requested by GI managers was not going to be delivered, and that the development was over time and budget. There were no relations with the development team or the application software. The resulting failure verdict seemed an inevitable outcome.
This analysis suggests the relevance of the emergence and reconfiguration of actor-networks understood as sociomaterial entanglements for the comprehension of different enactments and assessments of IS. The success and the failure of Olympia-online were more than different perceptions and social constructions by relevant social groups. Due to the assembling and reconfiguring of the actor-networks multiple sociomaterial constellations emerged. The sociomaterial entanglements involved inseparable and mutually constituting discursive and material constructions which we turned visible through agential cuts. The analysis shows that it is not just humans who construct the success or failure discursively, through the communication of knowledge or the lack thereof, or through timely or untimely reporting. Nor is it only material resources, technology and material components of an IS, such as internal and external functions, modular structures or platforms, that exert influence on human actors and thus cause the success or failure. Rather, it is the emergence of the(ir) sociomaterial relations, within which they encounter each other and through which the discursive and the material, technological are entangled and performed, for example through the purchase of licenses or resource constraints, that constructs a success or a failure. Agential cuts turn these entanglements visible and render them locally separable. The success and failure of IS are made in and by multiple actor-networks.
Multiple IS realities and IS success and failure
Olympia-online was continuously re-enacted in practices of different actor-networks, which produced multiple, alternative realities. The recognition of multiplicity of IS realities in practice is conceptually different from plurality implied by social constructivism [START_REF] Law | After Method: Mess in Social Science Research[END_REF][START_REF] Mol | Ontological Politics: A Word and some Questions[END_REF]. Plurality assumes a single reality that is observed, perceived and interpreted differently by different social groups, hence plurality of views and assessments (Bartis and Mitev 2008;[START_REF] Wilson | Power, Politics and Persuasion in IS Evaluation: A Focus on 'Relevant Social Groups[END_REF]. Multiplicity implies multiple realities that are "done and enacted rather than observed. Rather than being seen by a diversity of watching eyes while itself remaining untouched in the centre, reality is manipulated by means of various tools in the course of a diversity of practices" (Mol 1999, p. 77, emphasis in the original). Enactments of the Olympia-online development and use in different practices within the three different actor-networks produced multiple realities. The resulting co-existence of multiple Olympia-online realities created a problem Olympia was incapable of resolving: it was stuck with contradicting assessments; unable to reconcile this multiplicity. The decision regarding investment into Olympia-online's further development was stalled.
Beyond the project's fate, Olympia's relationship with brokers, its market position, and ultimately the company's future are at stake. Such a situation raises the question: what could be done? As conditions of creation and emergence of actor-networks are not given but created and re-created, realities might be done in other ways; different sociomaterial entanglements of an IS development "might make it possible to enact realities in different ways" (Law 2004, p. 66). Enacting an IS and performing a reality one way or another can thus be open for debate. Understanding the Olympia-online case may help both practitioners and researchers gain deeper insights into the production of the IS realities of success and failure, help undo some deeds, and perhaps prevent failure. The trajectories of the actor-networks in Phase 1 and 2 reveal the conditions for possible options at any point in time, with some actions and reconfigurations playing a more significant role than others.
The reality of Olympia-online's success in the market was produced by the brokers' network; key were the actions of the Business Experts who actively engaged in translating the brokers' needs into the IS and attracted brokers to engage in testing and who contributed to the resource allocation for testing Olympia-online's usability. This strengthened the relations between development and brokers networks thus producing success. However it hid the resource allocation to meet the brokers' needs. While it ensured high quality functionality for the brokers it withdrew resources from development of internal functionality relevant for the GI managers. On reflection, this could have been different, choosing perhaps a more balanced resource allocation.
The reality of Olympia-online enacted by the Steering Committee actor-network resulted in the assessment of failure. Neither the GI Managers nor other members of the committee ever questioned or revised the initial prediction of the project resources and duration for Phase 2 despite new evidence about the increasing complexity of the project and a need for larger resources to complete the project, which was contained in reports submitted to the Steering Committee. During Phase 2 the committee actor-network became more stabilized and at the same time more disconnected from the development actor-network. Consequently this network was narrowed and steadfast, leaving no options for alternative considerations. Despite the evidence of market success, Olympia-online for them was a failure. The failure verdict was natural and obvious: it was seen as "based on hard facts" as "the system was over time and over budget" and "its internal functionality was not delivered". However the use of these particular measures of project success/failure was never explicated. Options to discuss different assessment criteria and to question and revise initial estimates of required resources and time were not considered. A possibility of questioning assumptions regarding a stable and robust infrastructure had not been entertained. This might have led to opportunities for enactment of a different reality and for taking different action regarding Olympia-online's future development.
Finally, for the development actor-network the HighTech enrollment was highly consequential with Emperor playing a key role in the Olympia-online development. It was plagued by the developers' battle with Emperor and its integration with the mainframe. It engaged unexpectedly large resources thus contributing to prolonged delivery and missing functionality of the final system. Choosing an option to reject working with Emperor, even after the license was purchased, would have changed the network's trajectory; other options included contract termination with HighTech and the enrollment of other partner companies, fulfillment of the initial objectives to develop a modular architecture, and a more balanced resource allocation to deliver full system functionality.
The discussion reveals multiple and largely incoherent realities within the identified actor-networks. It reveals some possibilities and options to make different choices at particular points in time and enact realities in different ways. Some of these options still existed but were not seen by the actors when the decisions concerning Olympia-online's future were made. To see them actors needed to reflect on and understand the ways in which multiple Olympia-online realities were enacted in different practices. The more all stakeholder succeed in understanding the making of these multiple realities the more open they might become for re-negotiation and reconciliation of multiple assessments in the light of strategic objectives and market implications.
CONCLUSION
This essay proposes a sociomaterial perspective on IS success and failure. The investigation of the Olympia-online case, resulting in concurrently contradicting and unreconciled assessments, provided an opportunity to demonstrate a distinct theoretical and practical contribution of the sociomaterial perspective to the understanding of IS success and failure. The sociomaterial perspective focuses on IS enactments in practices that are not only performed discursively but also and inevitably materially through sociomaterial relations involving material encounters which would be only partially understood by social constructivism. The sociomaterial perspective reveals how the success and the failure of IS are produced as relational effects in and by actor-networks. It draws attention to the contingently enacted realities of IS within emergent actor-networks of IS development and use in practice. Through the analysis of reconfigurations of the actor-network, we illustrate how multiple realities of system success and failure have been produced concurrently. The lessons from the case teach us that there are options to make different choices along the way and to re-enact realities differently [START_REF] Law | After Method: Mess in Social Science Research[END_REF].
In addition, we examined how the networks could have been reconfigured differently. An actor-network "is not a network of connecting entities which are already there but a network which configures ontologies." (Callon 1999, pp.185). What an enrolment would do or change in an actor-network, is rarely known or well understood when it happens. This suggests further research. There are many open questions: How can multiple and contradicting realities be reconciled and and thus failure be prevented? How can premature stabilization of an actor-network be avoided and how can greater congruence between relevant actor-networks be enabled? How can alternative options for enacting IS reality differently be identified?
Finally, conducting and presenting an ANT study poses many challenges. Following the actors and investigating the relations within actor-networks by making agential cuts reveal complexities that resist clear and simplified presentations and a linear story typically expected of academic writing. An ANT study emphasises flow and change as key to understanding the being and doing of actors as well as the emergence and reconfiguration of actor-networks. As this is not easily communicated we produced snapshots and momentary outside views of actors' worlds at different points in the project timeline. These are inherently limited by the nature of the printed medium and a potential domain of future research might explore alternatives genres and new electronic media in presenting actor-networks and achieved research results.
model of success and Lyttinen and Hirschheim's (1987) classification of the IS failure concept are well-known examples of this perspective. Numerous other studies found that social/organisational factors, rather than technical, had been dominant contributors to failure (e.g. Luna-Reyes et al. 2005; Lee and Xia 2005). Luna-Reyes et al. (
Table 1 :
1 IS success and failure Perspectives (extended from[START_REF] Fincham | Narratives of Success and Failure in Systems Development[END_REF]
Perspective Form of organiza- Methodological IS success and
tional behavior and focus failure seen as
action
Rationalist Managerial and Simple cause and Objective and po-
organizational effect larized states -
structures and goals outcomes of tech-
nological and so-
cial factors
Process Organizational and Socio-technical Socially and politi-
socio-political pro- interaction cally defined -
cesses outcomes of organ-
izational processes
and flaws
Constructivist, Organizational and Interpretation and Social construc-
narrative socio-political pro- sense-making of tions, implying
cesses; symbolic relevant social interpretive flexi-
action, themes, groups; narrative, bility of IS
plots, stories rhetoric and per-
suasion
Sociomaterial, A relational view of Emergence and IS enactment and
such as ANT organizations and IS reconfiguration of production of mul-
as sociomaterial IS development tiple realities in
arrangements of and use actor- practice
human and non- networks; enact-
human actors ments of IS in
practice
The argument presented in this keynote essay has subsequently been further developed in Cecez-Kecmanovic, D., Kautz, K. and Abrahall, R. "Reframing Success and Failure of Information Systems: A Performative Perspective", to appear in MIS Quarterly
, 2013.
Acknowledgements. We like to recognize Rebecca Abrahall who as a research student collected the data which build the empirical basis for our analysis. | 63,482 | [
"983556"
] | [
"313498",
"74661"
] |
01467791 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467791/file/978-3-642-38862-0_23_Chapter.pdf | Akhlaque Haque
email: [email protected]
Ph.D Kamna L Mantode
GOVERNANCE IN THE TECHNOLOGY ERA: IMPLICATIONS OF ACTOR NETWORK THEORY FOR SOCIAL EMPOWERMENT IN SOUTH ASIA
Keywords: Social entrepreneurship, governance, information utilization, actor network theory, public administration
Information and communication technologies (ICT) have proven their value in delivering time-sensitive and relevant information to targeted communities. Information has been the key resource to social development. Social entrepreneurs have leveraged ICT to reach out to people who are marginalized from public discourse. Despite successes however, some ICT initiatives have failed due to underestimating the social requirements of technology and to relying more on information systems than on the information the system transports. How information is produced and applied to a social context to create meaning is more important than the means by which it is represented through portable monitors and mobile devices. The paper argues in order to take advantage of today's ICT, it is critical that we understand how technology and society mediate within a socio-technical framework. Using the Actor Network Theory, the paper explains the process of mediation to highlight that the journey to technology-based solutions is not smooth. The Village Knowledge Center (VKC) project in India and the Access to Information (A2I) project in Bangladesh provide sound evidence of how ICT-led social development can be effective in the short run but meaningful long term changes will depend on the collaboration of social entrepreneurs and public administrators.
INTRODUCTION
The success of social entrepreneurship has regenerated interest in partnerships between government and civil society organizations (CSO) to solve the world's most pressing problems including, among others, dealing with demands for democratic rights, coping with climate change, and giving access to healthy living and social justice for marginalized communities. With the hopes of mobilizing citizens to become productive partners in economic revival, the international development agencies including The World Bank, United Nations Development Program (UNDP), the DFID (Britain), and GIZ (Germany) have invested in sustainable social development projects through collaborating with social entrepreneurs.1 Social entrepreneurship is arguably a mobilization tool used by catalytic entrepreneurs who leverage the social capital in helpless communities to develop sustainable partnership as they empower and transform the human condition (Waddock and Post, 1991;Waddock, 1991).
Information plays a critical role in motivating citizens by identifying and contextualizing information towards a purposeful goal. Indeed, democracy is strengthened by an informed citizenry as citizens take ownership of their situation to become empowered and take charge of their destiny. In this regard, information and communication technologies (ICT) have proven advantage in delivering time-sensitive and relevant information to targeted communities. However, evidence suggests there are more failures than successes using ICT for social empowerment because of over reliance on the information systems rather than on the information it transports within a given social context. A systematic surveillance of the social context is a precondition to applying technology for social benefit. The paper uses actor network theory to show how linkages between human actors and the new technology can be established to form the social basis of technology deployment. Social entrepreneurs have been important catalyst in introducing new ideas through technology for social transformation. Social entrepreneurs' use of ICT for social development provides sound evidence of social mobilization using ICT.
The purpose of the proposed research agenda is to evaluate the process by which social entrepreneurs as leaders, in conjunction with public administrators, utilize information technology to activate and mobilize citizens to reach a sustainable and socially desirable outcome. For empowerment initiated through technology, the outcome depends on a complex social process independent of the technological supremacy. The growing literature on Actor Network and ethnomethodology will support discussion of the implications of action oriented information for empowerment in two independent civil society led projects in Bangladesh and India. The case studies highlight how new information that becomes available through ICT can mediate within society to build social relationships. Despite similarities in the mediation process, the approach taken by social entrepreneurs will ultimately determine the sustainability of ICT-based developments.
The paper comprises three broad sections. The first section delves into the discussions of the sociology of association as it affects our understanding of the role of technology in the larger scheme of human and non-human interaction. This section introduces Actor Network Theory (ANT). The second section connects the theoretical discussion of ANT to the case study on the Village Knowledge Center project in Pondicherry, India, and the Access to Information (A2I) project in Bangladesh. In conclusion, we discuss the implication of social entrepreneurship in public administration.
SOCIAL ENTREPRENEURSHIP AND ICT
The roots of social entrepreneurship can be traced in the works by scholars engaged in civic and community empowerment, social responsibility and social justice [START_REF] Harmon | Public Administration's Final Exam : A Pragmatist Restructuring of the Profession and the Discipline: University of Alabama Press[END_REF][START_REF] Frederickson | The spirit of public administration[END_REF][START_REF] King | Government is us : public administration in an anti-government era[END_REF]; however, the role of information and information systems in the process of achieving the same goals needs further investigation. As opposed to business entrepreneurs who take risk for making new opportunities to profits, the social entrepreneurs are interested in making mission-related social impact [START_REF] Martin | Social entrepreneurship: the case for definition[END_REF][START_REF] Yunus | Building social business : the new kind of capitalism that serves humanity's most pressing needs[END_REF]. Although a growing literature on social entrepreneurship is emerging, the creative process of leveraging resources towards social mobilization is not well understood [START_REF] Dacin | Social Entrepreneurship: Why We Don't Need a New Theory and How We Move Forward From Here[END_REF]. This becomes particularly of interest in developing countries where public agencies play a critical role in realizing the social entrepreneurial goals.
The role of ICT for social empowerment is unclear, due in part to the fact that far more ICT dependent projects fail than succeed [START_REF] Goldfinch | Pessimism, Computer Failure, and Information Systems Development in the Public Sector[END_REF][START_REF] Heeks | Understanding success and failure in information age reform[END_REF][START_REF] Korac-Boisvert | Transcending soft-core IT disasters in public sector organizations[END_REF]. Institutional impediments and failures to mobilize government support for action have often confounded ICT's role in the process [START_REF] Heeks | ICTs and the MDGs: On the Wrong Track? Information for Development, III (3). Retrieved from ICTs and MDGs in Wrong Track website[END_REF]De Rahul and Ratan, 2009). Understanding the impact of technology on social empowerment requires a deeper understanding of ICT, beyond institutional receptivity [START_REF] Fountain | Building the virtual state: Information technology and institutional change[END_REF] and into comprehension of social institutions including cultural norms and standardization of routine work [START_REF] Northrop | Payoffs from Computerization: Lessons over Time[END_REF]. These can have direct impact on the livelihood of the population in question.
Social entrepreneurs are unelected bodies who need to be competent in what they do. Competency provides one of the bases by which policy decisions are deemed legitimate [START_REF] Dahl | After the revolution; authority in a good society[END_REF]. Therefore, how to mobilize information and knowledge authoritatively in the society is a fundamental task of the social entrepreneurs. Whereas elected politicians can make value judgments about policy decisions, they have a disadvantage when it comes to gathering empirically sound, unbiased information to validate their judgments and make them acceptable to the public. The public may question the neutrality of elected officials. In addition, the qualities that helped someone win election may not always include competency in data gathering and validation (Vibert, 2007, p. 49). When it comes to policy issues, respectable social entrepreneurs and independent international development organizations can offer the skills to gather empirical evidence about what works and what does not. They can apply the technical knowledge and leverage resources specific to the mission of the development projects. However, the normative judgments about what is best for the society are reserved by the politicians; they ultimately decide what ought to be the public interest. Whereas the technocratic function (i.e. information gathering and resource mobilization) can be performed by independent social entrepreneurs, political value judgments are made by the politicians, be they liberal or conservative, pro-business vs. pro-liberation, or otherwise. Therefore, the social development formula in a democracy has a technocratic component for developing techniques and a political component to justify implementing projects seen as critical in maintaining a stable democracy. Social entrepreneurs can bring innovative ideas and technical knowledge to reach specific social goals. In places where development challenges have been an uphill battle due to political and/or socioeconomic situations, ICT has become an expedient tool for social connectivity and access to information hastening social mobilization and empowerment. But the process is not always clear as to how ICT can be effective in social mobilization. Actor network theory provides a framework that can be helpful to link ICT to society in general.
ACTOR NETWORK THEORY (ANT)
Social problems are complex and require comprehensive understanding of the relationships of the social networks and each actor's relationship to technology and the artifacts that define the socio-technical network. In other words, the society is technologically shaped as we tie ourselves to routines that are built around a network of relations to humans as well as non-human actors. Actor-Network Theory (ANT) describes how society is an assemblage of actors, each linked to create meaningful relationships. The seminal works of Bruno [START_REF] Latour | Science in action : How to follow scientists and engineers through society[END_REF][START_REF] Latour | Reassembling the social : an introduction to actor-network-theory[END_REF], John [START_REF] Law | Notes on the Theory of the Actor-Network: Ordering, Strategy and Heterogeneity[END_REF]Michael Callon (1986, 1992) are recognized as foundations of ANT. The subsequent work and related research within Science and Technology Studies (STS) provides further basis for understanding the evolution of ANT as a multidisciplinary study (See for example, [START_REF] Bijker | The Social Construction of Technological Systems[END_REF][START_REF] Mackenzie | The social shaping of technology[END_REF][START_REF] Feenberg | Critical Theory of Technology[END_REF] The theory asserts that the role of technology in society depends on the interpretation of the actors who use their social lenses to arrive at a mutually recognizable usage of the technology towards a given routine, while at the same time balancing their social network relationships. Technology therefore is a social construct whereby technical artifacts in society become meaningful as reliance on them becomes part of the society's routine. By way of becoming part of a societal routine, the technology is stabilized to affect social roles and relations, political arrangements, organizational structures and even cultural beliefs. Figure 1 is an attempt to describe the ANT process. The ANT defines the non-linear negotiation among differing actors as they interpret the role of other actors' (including non-human actants) which culminates to a shared mode of thinking about the normative role of technology within existing social and organizational relationship. The figure describes, in general, the process of change within a socio-technical framework where an individual initiator of change (the actor) creates his or her own vision of the future based on understanding of the societal motives, socio-cultural and political biases, and assuming that morality, technology, science and economy will evolve in particular ways. A large part of the work of the actors involved in the initial phase of deploying an artifact is that of "inscribing" the vision or prediction about the world in the technical context of the new idea. In other words, the individual vision is combined with the technical world to meet the purpose. Until the individual idea crystalizes as an organized action, the negotiation of "idea" and "reality" continues in translation. Translation is the phase in which existing social settings including agents and institutions are aligned to meet the demands of the new idea. The evolutionary process is fraught with failures and improvisations at each stage of translation by differing actors in the process. The formal world of institutions and technology is used to translate the message (of the change) and to standardize the process so the desired change can emerge specific to the people and their context. 
The negotiation is said to have been resolved or standardized when one form of the initial idea appears to be acceptable by others given the human and non-human contingencies. Once the idea reaches a standard interpretation, it provides the stability and continuity required to replicate and translate it to the masses. What has been eventually "created" is the result of collective interpretation of the actors, what Heideggar (1977) calls "revealing" through "enframing" of the human mind. The essence of technology therefore, according to Heideggar, is nothing technological; it is the collective realization of transforming (revealing) idea into an art (or technè as defined by Plato; see Heideggar, 1977, p. 34) as if it is pleasing to see it from different perspectives. Therefore, technology reveals itself by meshing with the given societal norms.
The ANT's emphasis on giving equal weight to non-human actors (technology) and human actors in social development is to differentiate between situated information and objective information. Whereas, objective information is imposed on the existing social setting, situated information is applied and improvised to match the existing social norms. Translation varies with people, time and context, yet once it is stabilized it becomes part of the routine. For example, social entrepreneurs can utilize technology as a supporting element in an effort to shape the environment to favor a desired effect on a community; yet the social entrepreneurs do not have full control of the outcome given the inherent limitations within the translation phase. The outcome could be affected by, for example, differing understandings of the technology's role within existing routine. The flexibility of the translation process is directly associated with how technology may be used.
Society Vision
Regardless of the technological sophistication, moral reasoning as to "why do it?" must take precedence above technical rationality in the translation process. The greater the reliance on technology, not the human and cultural beliefs, to standardize the desired result, the more difficult it becomes to translate idea into action. The question of "why do it?" is answered when others' interests becomes one's own. Social entrepreneurs and civil society organizations provide that moral basis for initiating change. For successful implementation, however, the moral basis must also be congruent with the social values and the political judgments of the elected officials. In the translation process, the instrumental knowledge required to make the change is critical but secondary to social and political knowledge. In other words, the global knowledge must be in line with the local knowledge. This is also described as the micro-macro problem or the local-global problem in the translation process discussed in the next section [START_REF] Misa | Modernity and Technology[END_REF]. Local and global perception must be synchronized in order to sustain a stable network.
ACCOUNTABILITY OF INFORMATION TECHNOLOGY
The concept of ANT allows us to focus specifically on the accountability of technology to society. Just as individuals are accountable for their role in society as responsible citizens, technology must also account for its role in shaping society. That accountability can be measured by the value ICT generates through to its users. If the shared information that is gathered and disseminated among members using a particular ICT raise conflicts with the cherished values of the society, the given ICT will have a harder time situating itself in the social group. Thus, "information has an inalienable ethical dimension," noted information scientist Joseph Goguen (1997, p. 47). If technologies, such as surveillance tools, are used to compromise citizens' rights, such technologies will be incompatible with democratic values. Whether it is the right kind of information for social advancement will depend how well the ICT is able to integrate itself into the normal and acceptable routine of the social group. If the technology demands a significant shift from normal routine, adaptation will be slower and the failure rate will increase to the point where the user critical mass will not be sufficient to have significant impact in social behavior. It becomes incumbent that the local actors embrace ICT for their social advantage, and for that, global actors should pay adequate attention to the social-self of the local actors. Otherwise, the local-global conflict will destabilize a negotiated network.
How fast the local actors embrace a particular technology has to do with the type of technology introduced in the early phases of technology deployment. Elected representatives often fail to address the social value of ICT, particularly if they are deployed in large scale. In the absence of actions from elected representatives, social entrepreneurs can easily fill the void by bringing pertinent ideas of social mobilization using technological means. As Waddock (1991) carefully noted: "Social entrepreneurs generate followers' commitment to the project by framing it in terms of important social values, rather than purely economic terms, which results in a sense of collective purpose [Burns, 1978] among the social entrepreneur and those who join the effort" (p. 394).
Social entrepreneurs capitalize on a local network to earn trust as they focus on a target population to address pertinent social concerns. They leverage social capital to articulate larger, complex social problems within the task environment (concept coined by Thompson [1967]). Aiding the process of translation are intermediaries, technical resources employed to mobilize the actors. Examples of intermediaries can be maps, policy documents, mobile apps and even financial resources which symbolize a social order and power in the network. The intermediaries aid in the translation by standardizing the message across time and place. Inscriptions like "reports, texts and documents refer to the way technical artifacts embody pattern of use" [START_REF] Rhodes | Using Actor-Network Theory to Trace an ICT (Telecenter) Implementation Trajectory in an African Women's Micro-Enterprise Development Organization[END_REF]. Government can play a key role in facilitating development of intermediaries. Modern day internet and ICT in general are powerful intermediaries, providing standardized platforms to expedite e-government services (paying taxes, getting licenses, online procurement, etc.) and other useful functions. Intermediaries are passive when it comes to transforming the social order to address larger socioeconomic concerns such as poverty, social equity and social justice issues.
Unlike intermediaries, mediators transform the message as opposed to just transport it without distortion or addition.2 With respect to mediators, when a message is being transported, it customizes it based on local context. Therefore mediators impede standardization of the message as it finds ways to channel the message through improvisation to address the needs of the day.
While clearly the internet and ICT in general are effective intermediaries; they can also be powerful mediators when used to disseminate situated information for social transformation. Intermediaries are the primary vehicles for creating "black-boxes" or closed systems where the input leads to a given output and the interlocking of the coordination between input and output is not clearly identified [START_REF] Kaghan | Out of machine age?: complexity, sociotechnical systems and actor network theory[END_REF]. When a network or part of the network is successfully black-boxed, it can be treated as a simple input/output device that is expected to perform a routine operation with precision and without creating any disturbances within the larger system. Since black boxes work with near certainty, they can be transferred from one black box to another set or subset of black boxes. They can be effectively used to mobilize an individual or group to mediate in addressing larger socioeconomic concerns such as poverty, social equity and social justice issues. For example, social media tools (targeted apps, twitter, Facebook, etc.) can be powerful intermediaries to mobilize a large mass for a certain cause. The masses can then become mediators to make changes on the ground or even utilize the same social media tools to startoff another series, effectively mobilizing another group of masses for some other cause. Look for example at "the Arab Spring" of 2012 and subsequent movements in Libya, Syria and other parts of the world. What intermediary is to technique is what mediator is to act on that technique for specific solutions. The two case studies refer in this paper highlight how information technology can become an active "social tool" to change the human condition as local actors utilize the tools of their daily social routine for expanded purpose. With the help of global actors such as social entrepreneurs and governmental agencies who provide technical or political or moral support, technology can be an effective tool to mobilize social empowerment.
INFORMATION TECHNOLOGY AND SOCIETY: CASE STUDIES
Village Knowledge Center Project (VKC), India
The Village Knowledge Center project started as a pilot initiative in 1998 in Pondicherry, India, a rural region that was once a French colony in the southern state of Tamil Nadu. The project was initiated by the M. S. Swaminathan Research Foundation (MSSRF), a rural development nonprofit organization founded by Professor M. S. Swaminathan in 1988. Among many different projects undertaken by MSSRF, the village center project is of particular interest to this study because it systematically blends technology with social context for social development. The materials for the case study are gathered from the work of [START_REF] Swindell | The Information Villages of Pondicherry: a case study in capacity building for sustainable development[END_REF], 2007), Bhatnagar et.al., (2006), published reports by the MSSRF (Senthikumaran S. and Arunachalam, 2002;[START_REF] Nanda | Reaching the Unreached[END_REF] and the archived reports from the official website of MSSRF http://www.mssrf.org.
The vision of the MSSRF projects is to increase the capacity of the marginalized communities in the rural areas through community-demand driven technology. At the initial phase of the VKC project, a need assessment survey was conducted in the target area of Pondicherry to find what the local people already knew about the resources available to them and what they needed to know to improve their livelihood. Rather than creating a new technology-driven system, technology was brought in to improve upon the existing sociotechnical design. There was clear methodology about how to create Village Knowledge Centers, methodology which was refined over more than 10 years. A large group of volunteers have been trained to maintain and operate the VKCs. A detailed handbook titled Toolkit for setting up Rural Knowledge Centers (RKC) is widely circulated to standardize the process. Once established, VKCs worked as the information hub for a several communities. For example, the farmers had incomplete or, in some cases, no information about market prices for their crops. Fishermen had no scientific way to forecast the weather or their prospects for good fishing the next day. Technology helped deliver such information through very high frequency radio wave broadcast within a 12 kilometer radius. Technology also enabled voice data transfer to be converted to text and then to fax out which then was uploaded and displayed on a computer screen.
The village centers were initially housed in private residences with limited access to all farmers, particularly those belonging to the lower caste, poor communities. Inability to ensure equitable access to all the farmers was seen a major obstacle for mobilizing the local network. Once MSSRF recognized the drawback, they had to revisit the strategy. They closed the village information centers after a few months. When they reintroduced the project with a revised action plan, they also established VKCs for an additional 12 villages.
Participation in VKCs was contingent on an expressed request of the village community. Also the village community was required to provide premises in a public building and to ensure the center was accessible to everyone in the village. In most cases these centers were located in public places like temples, government offices, noon meal program centers and panchayat (village assembly) office. The village community was responsible for the upkeep of the rooms and utility bills. Finally, every village was also required to provide local volunteers who were trained by the foundation staff and placed in the centers to function as information facilitators, computer maintenance experts and local information gatherers. In order to establish and stabilize the new information gathering method, a great number of volunteers, especially women, were trained in basic PC operations, use of data cum voice networks, maintenance of user log register, management of queries and handling data requests. Involving the village community from the beginning and encouraging local people to take ownership of the VKC was critical to the longevity of the project. Involvement was essential to drive the amount of social impact necessary to empower people in the communities through information and knowledge sharing. In terms of power, the global network represented by MSSRF was scaling back its control when it allowed the villages to control the location and provide the volunteers. By providing autonomy to the local network, MSSRF was able to mobilize and strengthen stakeholders in the implementation of the project. Transferring ownership was critical for building trust among global and local networks. VKCs clearly focused on situational information as opposed to objective information, and this helped the citizens to incorporate technology into their daily routine. The new technology was able to earn trust which in turn speeded deployment to larger groups. The special value of this project is the manner in which local knowledge was given importance. In particular, for example, all information including databases was translated into the local Tamil language. A variety of visual multimedia resources were also used to standardize the message.
The VKCs vision and commitment to learning and engaging the poor attracted the support of the Indian government in the form of a monetary grant of 100 core rupees and technical support from Indian Space Research Organization (ISRO) for launching a separate satellite for the program. Under the National Virtual Academy (NVA) program women empowerment groups are being trained in organic farming, herbal healing. Self-help groups also hold regular video conferences with rural communities and experts, manufacturers, government officials and experts. Fishermen are being offered training in the use of GPS and fish finding equipment. NVA launched a program called "Knowledge on Wheels" in 2007, in partnership with the Sankara Nethralaya Medical Research Foundation. The purpose of the project is to provide eye care information and eye care facilities to the rural poor. In collaboration with Hindustan Petroleum and ISRO, NVA has plans to use a mobile soil testing van that will help to detect the chemical composition of soil, including its pH and availability of various nutrients. This mobile equipment can propagate knowledge about crop cultivation, livestock management and harvesting technologies to locations not yet connected to permanent centers. NVA also plans to help educate villagers on methods of agro packaging. In collaboration with Bosch, a machine is made available to NVA for them to use for demonstrations across the villages to spread knowledge about hygienic packaging.
In 2004, MSSRF created a multiple stakeholder ICT partnership labeled as "Mission 2007: Every Village a Knowledge Centre". The target for this partnership was to connect 600 thousand villages via internet and radio communication by the year 2007. In 2007, global partners of MSSRF Microsoft and Telecenter.org (a joint effort of Microsoft, IDRC, Canadian and Swiss development agencies constituted a rural innovation fund (RIF). The sole purpose of this fund is to provide resources for development of technologies customized to fit the needs of rural population and their development. In particular, the mission of the fund is to encourage technology entrepreneurs. The response to RIF was encouraging; of the 1400 applications received, 9 software programs have been developed by the project. The software applications range from an e-commerce web portal, to animal husbandry, to account maintenance for self-help groups. As of 2009, MSSRF had Village Knowledge Centers in 5 states of India: Tamil Nadu, Kerala, Orissa, Maharashtra and Pondicherry, in total about 101 village knowledge centers and 15 village resource hubs.
Access to Information (A2I) Project -Bangladesh
A2I is the one of the largest technology-driven initiatives undertaken by the United Nations Development Program to expand e-Services capacity for the Government of Bangladesh (GoB). The A2I initiative used a grassroots approach to training and educating a critical mass of government officials, individual entrepreneurs and volunteers in ICT to create ICT-driven services (e-Services) at the door steps of citizens. Initially launched in 2009, the overall goal of the project is to create an e-Service environment to provide access to information and services that can reach the most vulnerable population in society. Unlike the Village Knowledge Centers discussed earlier, the A2I partnered with the government from the beginning of the project. This approach not only mobilized resources quickly but also placed the large governmental apparatus at the disposal of the A2I initiative. The primary information discussed in this paper about A2I is gathered from reports published by UNDP (2012) and reports published on the official A2I website by the Government of Bangladesh (http://a2i.pmo.gov.bd/index.php).
The A21 project aimed to utilize situated information to build capacity for local actors and give them ownership for sustainable e-Services throughout the country. The GoB took the A2I as one of their own projects as it was synonymously identified with the "Digital Bangladesh" goal within the national development agenda declared by the current government. The project attracted a large critical mass through its Quick Wins (QW) e-Services projects. Quick Wins is referred to e-Services that could be quickly developed to facilitate citizen government interaction at the grass root level to work at the district, Upazila (regional jurisdiction) and village levels to create accessibility infrastructure. In the first two years of the project, 53 QW e-service projects encompassing 9,000 independent entrepreneurs trained to run and manage over 4,500 Union Information and Service Centers (UISC), were begun, covering the whole country. Currently, there are 700 QW projects in the pipeline.
A notable outcome of the project has been the development of multimedia classrooms in some 500 schools. This is expected to be scaled up to 15,000 secondary schools within two years. The following table highlights some of the signature QW projects that are significant due to their social impact on ordinary citizens of the country. Over 200,000 sugar cane farmers benefitting from more transparent system where they are informed of when to deliver sugar (in the past they sometimes never received the paper "chalan", or had to pay rent seekers a fee or travelled to the mill in vain) and when they will be paid; mills are benefitting from more efficient delivery Source: UNDP, 2011 Over 200,000 sugar cane farmers benefit from a more transparent system through which they are informed of the best time to deliver sugar and when they will be paid. In the past the farmers sometimes failed to receive the paper (chalan) or they were obliged to pay a fee for information to unscrupulous petty officials. Sometimes the farmers traveled to the mill in vain. The new system to disseminate accurate information in a timely manner minimizes such problems. Mills too receive benefits from more efficient delivery One of the unique features of the project was to enroll top level bureaucrats, senior government officials at the ministerial/federal level, public representatives and local entrepreneurs in ways that made them aware and informed about the developing services. Therefore they felt less threatened by the new means of governing-from-a-distance (e-Services). Awareness was followed by ownership which was fundamental in the translation phase to mobilize citizens towards using any particular e-Service activity. For example, all Ministries had to come up with their own Quick Win projects that were tied to the existing infrastructure of ongoing A2I projects. Although ministries varied in terms of their competency and commitment for such projects, there was a sense of pride among peers when a particular e-Service was launched and citizens embraced those services. The A2I project has interested many businesses and international donor organizations including the World Bank, Intel, International Rice Research Institute, D.Net, Asian Development Bank, UNESCO and UNICEF.
Policy Implications of VKC and A2I
We can glean very important insights from these two projects. First, the application of technology must directly address the fundamental matter of improving the quality of life of the local actors even if the new activity appears to be trivial or mundane in the eyes of the global actors, i.e., social entrepreneurs or the government. The technology adaptation can be smoother when the normal routine within the social association remains undisturbed. This is critical because during the translation phase when the new technology is introduced the actors can easily negotiate common definitions and meanings of the new way of doing things. Second, rather than introducing a big change through a big project, a gradual and incremental approach can have a wider and more meaningful impact in the society. This is because by keeping things simple, the standard definition can be easily and quickly replicated to serve greater numbers of people in a greater variety of small ways. For example, thorough the Quick Win projects for A2I in Bangladesh, the farmers were not learning anything new about farming, but they were getting valuable information quickly at insignificant cost. This enabled the farmers to focus on increasing production and diversifying their greater earnings to invest in other productive uses, perhaps for their children's education or for beginning a small handicraft business.
The VKC project in India started as a small scale, pilot investment in private homes. Within the first few months of operation the problems were revealed within the existing infrastructure regarding lack of access by potential beneficiaries within the lower caste population. We note that technology must be adjusted and in some case improvised in order to meet the demands of the existing socio-cultural circumstances. Whereas technical adjustment can be easier, especially when undertaken in smaller scale, value adjustments take time and may be difficult without political support. When technocratic functions are imposed without regard to both political and sociocultural context, a high failure rate is inevitable, at least when measured in terms of usage and mobilization. Whether the global actors are NGOs or governments, the values of the local actors involved must take precedence to the values or demands of the global actors. As argued earlier, the greater the supremacy of technology in the design of a project over the values that justify the project, the more difficult it becomes to translate the idea into action. Technology cannot address questions of values. Therefore, the ideals of democracy, freedom and justice must be addressed through avenues that deal with empowerment and awareness of the citizens' limitations. Information technology has proven that it can mobilize and empower the citizenry.
The deep rooted caste system that pervades rural India provided the impetus for the global actors (social entrepreneurs) to intervene to override the societal bias via information technology tools. Values drove the strategy and design of the Village Knowledge Centers to introduce technological tools for the benefit of all citizens. VKCs placed technology in the role of a mediator, less as an intermediary, and that effective policy enabled the VKCs to address social bias and help to empower masses that had been marginalized.
Similarly, in Bangladesh where political turmoil and corruption impede social development, social entrepreneurs intervened and played a dominant role in transforming the way the central government and its local counterparts ran their business. By partnering with international NGOs, the social entrepreneurs were able to break through political barriers to reach out to citizens via Quick WIN projects. The applications (apps) are more often intermediaries than mediators. Whereas in India technology was able to mediate deep into the social prejudices and the culture; Bangladesh was able to solve a problem quickly and to replicate the simple model exponentially to deliver accessible benefits widely throughout the country. The extent of the cultural shift which was very apparent in Pondicherry may not be as apparent in Bangladesh, at least in the short run, but both case studies reveal Actor Network Theory succeeding to improve the quality of life through applications of technology. In both cases the values and realities of the citizen beneficiaries informed the design and implementation of the technological tools.
The balance of societal values with functional abilities of technology is a promising formula for success, yet all projects are vulnerable. Many of the gains brought about by the social entrepreneurs, NGOs, or any global actors can easily be undermined unless vigilant and engaged public administrators act on behalf of the local citizens. For example, in India the elite class may find it to their advantage to reinforce historical norms of social discrimination and devise means to incapacitate the VKCs. The sustainability of the VKCs will depend on how effectively local citizens take ownership of the centers' mission and services. Should they be blinded to the advantages or doubtful of their need to be involved, they may withdraw their support and see the demise of the project. In Bangladesh, diligent oversight by village citizens and the public managers who represent their interests may well be necessary to protect the large scale, widely uniform ICT projects from abuse by ruling political parties who can misuse the projects for their own political gain. To compound the risks, successful projects can and do attract the attention of national or even international interests with no regard to the quality of life of the participating local actors. Projects are vulnerable to sophisticated hi-jacking orchestrated by distant powers who can exploit the local citizens. Anecdotal evidence suggests that both VKCs and A21 projects are becoming vulnerable to some or all of these intrinsic risk factors. Public administrators have the responsibility to monitor and maintain the innovative projects that have direct social and economic implications for their societies.
CONCLUSION
Information technology plays a critical role in balancing our life and work in society today. Applications of technology are instrumental in shaping our values as we develop a deeper understanding of the roles they can take in all aspects of our lives. The modern era has seen a sudden shift towards ICT-based policy developments, a shift with wide ranging implications in our social and economic life. Being in the midst of the transition, the millennia generation may take for granted the changes without questioning how the social and economic values have shifted in response to ICTs role in society.
Information technology enthusiasts have long argued that ICT is an empowerment tool and liberator for the marginalized. They argue, by introducing ICT into the governing process (i.e., automation of service delivery through E-Government) government can be accessible and convenient for citizens. Indeed today government is much closer to citizens through electronic means and is probably more transparent as far as service delivery is concerned. Even so, whether the citizens are empowered in the sense of taking control of their own livelihood is debatable. Societal empowerment demands sustainable social and economic development for all people including the most vulnerable populations. Technology can be the mediator for connecting citizens, but it cannot be the translator for action. Action requires the support of global network visionaries who help to mobilize the local citizenry network.
In the information age, implications of this study for public managers must not be underestimated. Public administrators, as non-elected representatives, occupy the desks where citizens come to ask for what they need their government to do; yet public administrators are bounded by procedures that are often antithetical to empowerment of the citizens who stand before them. Restricted by limitations of their ability to reach out to citizens, public managers can use ICT as the mediator to deliver an essential resource, information, to the doorsteps of citizens who will use it. Unlike food that will almost certainly be consumed when provided to the hungry, information may not be readily consumed. Potential beneficiaries require strategic direction about where and how to use the information. They need to comprehend the benefits of using the new information. In other words they ask, "What's in it for me?" When the "fundamental purpose of social entrepreneurship is creating social value for the public good," (Christie and Honig, 2006, p.3) it is only fitting for public administrators to answer that question and align with such a cause that brings social value to the public.
As our study alludes, social entrepreneurs provide the vision for information resources to be utilized for individual advantage. In the absence of visionaries within the local elected representatives, public administrators can partner with social entrepreneurs and civil society organizations.
In the U.S., organizations such as Imagine Chicago (http://www.imaginechicago.org) and Everyday Democracy (http://www.everyday-democracy.org) have provided exemplary social entrepreneurial leadership within their communities. [START_REF] Zukin | A new engagement? Political participation, civic life, and the changing American citizen[END_REF] points out, "citizens need to be able to engage in the institutions and process of government and of civil society, since both are authoritative determiners of how goods, services, and values are allocated in a society" (p. 207). Today civic participation is an integral part of democracy, but it is open to question whether awareness of government and of political issues and participation in government services are constructive within the society. Leadership from public administrators dedicated to represent the citizens is crucial. Public administrators and public managers will best succeed in their efforts to deliver service when they accurately assess the local situationthe abilities, impediments, cultural mores and values --and devise strategies to serve the citizens through technologies designed with the local situation in mind. Indeed, what is needed is an intention and desire to change the nature of the relationships amongst and between citizens and government. Some initial relationships may have to come from active citizens who will mobilize the resources towards a sustainable, beneficial impact in our communities.
Figure 1 Actor Network
Table 1
1 Impact of Popular A2I-Quick Win Initiatives
Initiative Impact
UISC (Union Three million users have access to growing e-service portfolio;
Information saves citizens time & money through reduction in travel
Service Center) 3M grassroots people/month generating $150K/month
DESC (District Significant reductions in delay (time for certified document
E-service center) reduced by half); 50% more requests processed per day; more
transparent 5,000 applicants/month
Multimedi Students interest in lessons increased 50%
a Classroom
E-Purjee (Digital
Cane
Procurement
System)
Investment in social entrepreneurship in the developed world is also noteworthy. For example, the Obama administration, through its newly created Office of Social Innovation and Civic Participation (OSICP) has allocated 1.1 billion dollars. The newly created Social Investment Fund (SIF) has given to some of America's most successful non-profit organizations to expand their work and encourage investment in health care, vocational training and direct assistance to bring people out of poverty.
A good discussion about intermediaries and mediator can be found in[START_REF] Latour | Reassembling the social : an introduction to actor-network-theory[END_REF], pp.
37-42. | 47,565 | [
"1001531",
"1001532"
] | [
"153548",
"485121"
] |
01467797 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467797/file/978-3-642-38862-0_29_Chapter.pdf | Michel Thomsen
email: [email protected]
Maria Åkesson
email: [email protected]
Understanding ISD and Innovation through the Lens of Fragmentation
Keywords: ISD, Innovation, Process ambiguity, Knowledge fragmentation
Information systems development (ISD) and innovation is a complex and challenging endeavor. In this paper we inquire into the process of ISD and innovation to shed light on the ambiguous nature of such processes. This was done in an interpretive study of 10 governmental ISD projects where 14 interviews with key persons were conducted. In addition 11 interviews with senior IT consultants were conducted. Based on this study we propose an analytical lens to understand ISD and innovation. This lens is based on a metaphor grounded in the empirical material. This metaphor, fragmentation, mediates a deeper understanding of ISD and innovation regarding three aspects of complexity: knowledge, culture and discourse, and time and space.
Introduction
Ever since IT artifacts became strategic resources in organizational practices, information systems development (ISD) and innovation has been a challenging endeavor. As a consequence, there are seemingly never ending reports of projects not delivering on time, to budget or scope. Given this erratic practice, there are reasons to challenge our assumptions and theoretical understanding of ISD and innovation.
Digital technology has progressed to support many aspects of human life including social activities [START_REF] Yoo | Computing in everyday life: A call for research on experiential computing[END_REF]. Our field is widening to untraditional settings, where ISD can make a difference that matters [START_REF] Walsham | Are we making a better world with ICTs? Reflections on a future agenda for the IS field[END_REF]. For example, information technology and IS research has an important role for environmental sustainability (see e.g. [START_REF] Elliot | Transdisciplinary perspectives on environmental sustainability: a resource base and framework for it-enabled business transformation[END_REF][START_REF] Watson | Information Systems and environmentally sustainable development: Energy Informatics and new directions for the IS community[END_REF]. Widened and global settings, bringing with it societal complexity gives rise to new challenges for ISD -ISD is increasingly engaging in emergent design domains and emergent design solutions.
In this paper, we refer to ISD coping with emergent design domains and emergent design solutions as ISD and innovation. Innovation is a term that widely refers to an outcome perceived as new, whether it is an idea, object, or process, as well as to the process of creating this newness [START_REF] Slappendel | Perspectives on Innovation in Organizations[END_REF]. Digital innovation refers to innovations enabled by digital technology [START_REF] Yoo | Innovation in the Digital Era: Digitization and Four Classes of Innovation Networks[END_REF]. The newness may also be a recombination of old solutions changing established domains in such a way that it is new to the people involved [START_REF] Van De Ven | Central Problems in the Management of Innovation[END_REF].
We conclude that ISD and innovation is an increasingly complex and challenging endeavor. The question we address in this paper is: How can the ambiguity of ISD and innovation be understood and gestalted? We inquire into this and propose an analytical lens to understand the nature of ISD and innovation processes. This lens is based on a metaphor grounded in our analysis of 10 public sector projects and interviews with senior IT consultants. Our aim is to contribute to the understanding of success and failure of ISD and innovation.
The paper proceeds as follows. First we present literature on ISD and innovation followed by a description of the research design. Thereafter we present the empirical accounts and the metaphor of fragmentation. The paper is concluded with a discussion on the findings.
For decades we have been guided by a variety of design ideals when developing IT artifacts. These ideals are reflected in different ISD paradigms. [START_REF] Hirschheim | Information Systems Development and Data Modeling: Conceptual and Philosophical[END_REF] identify seven generations of traditional approaches ranging from formal and structured approaches, through socio-technical, to emancipatory approaches. ISD is an ever-changing practice. Over time new models, methods, techniques, and tools have been introduced to support development processes. For example, agile development has emerged as an evolutionary approach [START_REF] Abrahamsson | Agile Software Development Methods: Review and Analysis[END_REF][START_REF] Highsmith | Agile software developmentthe business of innovation[END_REF]. This evolution leads to new generations of approaches. These generations have been classified according to inherent structures and their paradigmatically rooted assumptions (Iivari et al., 1999).
In structured and formal approaches design domains and design solutions are considered as well established. One underlying assumption or logic underpinning traditional approaches is that systems development is a coherent and rational process in established domains, and thus can be managed to have quality, be predictable and productive (see e.g. [START_REF] Paulk | Capability Maturity Model for Software v. 1.1[END_REF][START_REF] Herbsleb | Software quality and the capability maturity model[END_REF][START_REF] Aaen | A conceptual MAP of software process improvement[END_REF][START_REF] Mathiassen | A contingency model for requirements development[END_REF]. A dominant theoretical foundation of traditional approaches is grounded in an ideal, prescribing solutions to be well organized, efficient, reliable, and esthetically pleasing.
In traditional approaches, system requirements are regarded fundamental for successful ISD. Requirements engineering is a well established line of research in IS, reaching back to when ISD became an academic subject. Much focus was put on meeting complex requirements [START_REF] Larman | Iterative and incremental development: A brief history[END_REF], which among other things resulted in numerous techniques for requirement documentation. Later, from the 70´s, experimental techniques and iterative approaches were introduced. Ever since then, new techniques to identify, specify, prioritize, etc. have been suggested, as well as new approaches to risk and complexity management in ISD (see e.g. [START_REF] Mathiassen | A contingency model for requirements development[END_REF][START_REF] Taylor | Information Technology Projekt Risk Management: Bridging the Gap Between Research and Practice[END_REF].
There is extensive literature in IS recognizing that ISD and innovation has emergent properties. [START_REF] Orlikowski | Improvising Organizational Transformation Over Time: A Situated Change Perspective[END_REF] propose an alternative perspective of organizational change related to ISD as emergent from situated actions. This perspective acknowledges that actions can be intentional, initiated improvisations to meet contextual circumstances, or inadvertent slippage. [START_REF] Truex | Deep Structure or Emergence Theory: Contrasting Theoretical Foundations for Information Systems Development[END_REF] outline a theory of emergence in ISD. They suggest that there are underlying structures which, if uncovered, can be of guidance in ISD, and they recognize change, flexibility, and requirements as emergent. [START_REF] Truex | Growing Systems in Emergent Organizations[END_REF] propose that organizations can be regarded as emergent rather than stable. In emergent organizations, social features (e.g. culture, meaning, social relationships, decision processes) are continuously emerging, not following any predefined pattern. These features are a result of constant negotiations, redefining the organization. Given this emergent nature of design domains and design outcomes, [START_REF] Truex | Growing Systems in Emergent Organizations[END_REF] suggest a continuous redevelopment perspective on ISD.
The emergent nature of design domains is accompanied with emergent properties of ISD and innovation [START_REF] Aydin | On the Adaptation of An Agile Information Systems Development Method[END_REF]. There are unpredictable factors that have implications for ISD that need to be continuously managed. [START_REF] Truex | Amethodical systems development: the deferred meaning of systems development methods[END_REF] describe ISD as a result of a myriad of activities that emerge more or less independently. The ISD and innovation process is described as disconnected and fragmented, and as a sequence of activities determined by emergent events. ISD and innovation has also been described as culturally differentiated, temporal and spatial, having restricting implications for knowledge sharing [START_REF] Bresnen | Social practices and the management of knowledge in project environments[END_REF]. These ambiguities of ISD have also been highlighted in research on project risk management (see e.g. [START_REF] Taylor | Information Technology Projekt Risk Management: Bridging the Gap Between Research and Practice[END_REF]. In this research ISD projects are described as having a number of dimensions linked to project risks such as uncertainty, incomplete information and complexity. [START_REF] Taylor | Information Technology Projekt Risk Management: Bridging the Gap Between Research and Practice[END_REF] conclude that research findings can only be utilized in practice if they are transformed to cope with these dimensions.
In recent literature on digital innovation, processes are recognized as networked, spanning organizational boundaries and traditional industry boundaries [START_REF] Yoo | Organizing for Innovation in the Digitized World[END_REF]. Process coordination and control is distributed, and the nature of knowledge resources is heterogeneous [START_REF] Yoo | Innovation in the Digital Era: Digitization and Four Classes of Innovation Networks[END_REF]. Adding to the complexity are the conflicting goals of co-opetition in digital innovation [START_REF] Vanhaverbeke | Open Innovation in Value Networks[END_REF]. This results in highly dynamic processes, characterized not only by technical complexity, but also by complex social processes within the associated networks ([START_REF] Van De Ven | The innovation journey[END_REF]). Yet, the ambiguity of ISD and innovation has been recognized to be of value. In a study on digital innovation projects, [START_REF] Austin | Supporting Valuable Unpredictability in the Creative Process Organization[END_REF] found that accidents such as breakage, malfunction, movement outside of intention, and departures from an accepted chain of logic can be of value to achieve novel design outcomes.
Drawing on the literature review, we suggest that ISD and innovation can be regarded as ambiguous (see fig. 1). Design solutions can be established and thereby possible to prescribe with requirements, such as a replicated standard solution. However, design solutions can also be emergent, and thereby not possible to foresee or plan for in advance. This is the case with, for example, a not yet existing novel solution. Design domains can be established with well-known needs, for example a standardized governmental practice. Design domains can also be emergent, meaning that problems and needs are unknown or rather evolving, and thereby impossible or even undesirable to pre-define. This could for example be the case in novel practices or organizational environments under rapid and radical change. ISD of established design solutions, in established design domains, we argue is less ambiguous, while ISD of emergent solutions in emergent design domains is highly ambiguous. This argument has implications for how ISD project success and failure can be understood. With an understanding of the process as stable, predictable and thereby manageable it seems reasonable to deem projects as failures when not delivering according to budget, schedule, and scope expectations (see e.g. [START_REF] Sauser | Why projects fail? How contingency theory can provide new insights[END_REF]. This would be the case for ISD of established solutions in established design domains. However, understanding the process as highly ambiguous, with emergent properties not predictable and definable, the conventional assessments of delivering according to budget, schedule, and scope expectations can be inequitable.
Research design
In this research we studied 10 cases of ISD projects in the Swedish public sector, and IT consultants' experiences from public sector projects.
In the 10 public sector projects the main goal was to develop novel, tailored, or fully developed (IEEE 1062 1998) systems. The systems were all realized with the aid of external IT consultants, and aimed at supporting governmental core operations. The projects lasted a minimum of 6 months, and the budgets ranged from half a million to several hundred million SEK. They were initiated around the millennium and the last project was finalized by 2011. Eight of the 10 projects were finalized, one was abandoned, and one was absorbed by a following ISD initiative. The majority of the projects can be regarded as failures in terms of not delivering on budget, on schedule, or to scope expectations. In more than one case, systems lacked functions that were added post-implementation. Documentation and testing were in some cases not completed. However, as mentioned above, 8 governmental organizations got their new and tailored systems implemented, even though the projects exceeded budgets or schedules.
The projects were studied through interviews with key project members employed by the governmental organizations and in leading positions such as project leaders, department managers, and senior consultant. Interviewees were selected with initial assistance of each governmental organization, suggesting individuals with deep insight into the projects. In sum, 14 key project members were interviewed. The interviews were semi-structured and lasted approximately 1.5 hours. Topics guiding the interviews were background information about the interviewee, project details, interviewee´s role in the project and cooperation, project organization, scope and outcome. The interviews also inquired into the interviewees' project experiences, accounts for problems and challenges as well as lessons learned in the projects.
In order to get external consultants' perspective on and experience from governmental ISD projects, interviews with 11 IT consultants were conducted. The consultants were selected with the aid of management in three major IT consultant firms. Management was asked to selectively choose senior consultants with relevant experiences. Nine of the consultants were interviewed in groups of three, and two were interviewed individually. The group interviews lasted 1.5 to 2 hours, and the individual interviews 1 to 1.5 hours. The interviews were guided by the following topics: consultant roles in governmental projects, cooperation, common and recurring problems in ISD projects, experiences and important learnings. In total, 25 people were interviewed. All interviews were recorded with informed consent, transcribed, summarized and sent back to the interviewees for confirmation.

This research was guided by [START_REF] Klein | A set of principles for conducting and evaluating interpretive field studies in information systems[END_REF] principles for hermeneutic research. We initiated the analysis by organizing the empirical material according to the themes informing the inquiry into the process of ISD and innovation. The material was then interpreted to understand the ambiguity of ISD and innovation. The interpretation was done iteratively to seek patterns arising from the material as a whole. This resulted in three conceptualized themes [START_REF] Walsham | Doing interpretative research[END_REF]: knowledge, culture and discourse, and time and space. These conceptualized themes opened for a reinterpretation of the empirical material that we chose to represent in a metaphor, fragmentation. Metaphors are important tools for how we interpret, understand and act upon reality (Morgan, 1998; [START_REF] Morgan | Paradigms, metaphors and puzzle solving in organization theory[END_REF]), and are powerful to help us understand complex phenomena (Lakoff and Johnson, 1980). In research, metaphors are also useful in exploration, analysis and interpretation of empirical material and for theory construction [START_REF] Inns | Metaphor in organization theory: Following in the footsteps of the poet? I[END_REF]. [START_REF] Walsham | Reading the organization: metaphors and information Management[END_REF] uses the metaphors culture and political system in order to widen perspectives on development and implementation of information systems.
Empirical accounts
In this section we establish the rationale for the proposed metaphor. The quotes illustrate common patterns in the empirical material: accounts of knowledge fragmentation related to culture and discourse, and to time and space.
Knowledge
ISD and innovation is knowledge intensive. On the subject of knowledge the interviewees described the challenges of satisfying the needs for knowledge on different subject areas. They also described that knowledge and competence is spread on several persons, and on different organizational levels within and outside of the government. When identifying knowledge and competence needs in a project, a holistic view was emphasized. One interviewee described it as follows:
One has to consider the whole process. It is not enough to have a good business lawyer, it is not enough to have a good requirement specification, it is not enough to have a committed project leader, it is not enough to have the technical competence, and it is not enough to hire a competent consultant.
Further, the knowledge need concerning the specific governmental application area was emphasized. One interviewee put it like this:
Apart from knowing what you want to develop, which data to store, which output and the effects for the customer, you must also master the rules for the governmental operation…. You also need to know the organization if you, as we did, also change the organization.
Another theme that appeared in the interviews was the need to match knowledge and competence to decide if a suggested design solution was feasible or not, as illustrated by this quote:
As a manager of operations you are not able to decide whether a suggested design solution is feasible or not. You know the conditions related to your field of competence and for example clarify what rules that might change during the process and how that affects the coming system. … That in turn requires that you have sufficient dialogue with the IT people so that they can help one understand and explain what the consultants are speaking about.
Even if knowledge needs seemed to be clear in the beginning of projects, they changed when projects were confronted with problems. One interviewee described this as:
It was very vulnerable because they were programming continuously. We also exchanged consultants a few times, with more adequate programming skills. The consultants thought they were doing the right things, and we thought they were on track. However, after a while when we tested it became apparent that we all were wrong about the correctness of calculations.
One of the experienced consultants described how the complexity of knowledge needs in ISD is increasing:
25 years ago you knew everything after 2-3 years in the IT business. Today, you only manage to keep up and stay competent in a niched area, so you have to cooperate with a lot of other people to be able to accomplish things. Reality is so much more complex today…. There are hundreds of things you need to know, and if you don't have support for that you will miss out on things and make mistakes, that's just the way it is.
Regarding different types of knowledge, the interviewees described the need for a blend between practical, theoretical, and social skills and knowledge. Interviewees made statements relating to explicit knowledge (regulations, rules etc.), individual knowledge (e.g. personal insights), conditional knowledge (e.g. knowing when to make a decision), relational knowledge (e.g. knowing who can help), and procedural knowledge (e.g. on project management).
As illustrated, quotes reflect the need for knowledge in different areas, and the need for different types of knowledge. One consultant claimed that:
As a customer you cannot know everything, but you have to know where to get help and what is needed, and be able to take it in.
For example, social or relational knowledge is needed for the identification and involvement of people with essential knowledge on different subject areas:
Anita, the project leader, managed to recruit skilled people working in different areas. Some from our head office with competence on general operations, some from our regional offices, the end users, and our own specialists holding specific knowledge required for the system.
Interviewees stressed the need for social skills and ability to coordinate. One governmental official described this as:
In most projects there are relationships to two and maybe three stakeholders that need to be coordinated. They can be for example an external consultant, an internal IT department, and governmental officials. Somebody needs to have the competence to manage and coordinate these different stakeholders. Such a person has to be flexible and responsive, must really know something about each area, and must also be a skilled negotiator between the involved stakeholders.
Another example is externalizing tacit knowledge on governmental operations important to design solutions according to expectations and needs. One consultant described it as:
The problem is that it is very difficult to describe what is in the head of the managers, and then design it. It is really all that is in their head that they actually want.
A final example of the need for different types of knowledge is the need for explicit knowledge related to the specific design domain. One governmental official gave the following example:
I have to know the European regulations for this government to be able to sort and clear out what to include in the system. I must be able to organize that in order to show the consultants what rules that could be of value and can be built into the system.
The quotes illustrate that ISD and innovation span several different knowledge areas, for example, knowledge on requirement specifications and competence to communicate operational needs and to describe operations and existing IT systems. Some other examples are knowledge to formulate project goals, the ability to express specific domain knowledge, knowledge on project management, on legal requirements, and on being a competent negotiator.
Culture and discourse
Regarding culture and discourse, interviewees described that lack of understanding of different cultures and differing vocabularies caused problems in the projects. One example of a cultural collision that one interviewee described was between seniors from the internal IT department and external consultants in one of the projects:
Young people between 25 and 30 were supposed to discuss and solve problems together with the COBOL-people in their 50s and 60s. None of them understood each other, and there I was in the middle trying to bridge the gap.
Another example of a cultural collision leading to miscommunication was between department staff and consultants as described by one interviewee:
It was quite an irritated situation here for about 6 months before we got things to work. The governmental department staff reacted to the consultant being too technically oriented in the dialogues. I think so too, but after a while I understood that it was his way of communicating. To understand that we speak different languages was an important insight for me. However, at the department they thought he was difficult to deal with. For example, when something went wrong in the system, he started to draw tables and stuff to explain, but the department staff was totally uninterested, they did not understand.
Differing vocabularies between stakeholders and the ability to deal with that in ISD was another distinct theme in the interviews. One interviewee described how there is a risk of misconceptions if this is not dealt with.
It is always an advantage in the relation between people, clients and consultants, if you have the same vocabulary.
When we sit and talk there might be words that mean one thing for us, and another for somebody else, that is, we use the same word for different things. Since I have a broad education I realize this. I sometimes think that this guy is probably talking about this and not that. He is not talking about the same thing as I, and then you realize hey, now we are talking about different things, even though we use the same words.
Incomprehensible vocabulary was portrayed by one governmental official in terms of:
There is a certain jargon. Sometimes I had to stress, hey now you are talking to me you know… It was Greek to me when they were talking about their packages, tables and whatever they talk about. The system models are really designed for someone who knows a lot about systems development.
These quotes illustrate that interviewees talked about cultural gaps and clashes, language barriers, disruptive jargon, lack of mutual understanding, misconceptions, unfamiliar work practices, etc. To sum it up, the majority of interviewees expressed that differences in culture and discourse is problematic.
Time and space
Regarding time and space, a common pattern in the interviews was that the coordination of time between the project and everyday business was challenging. One interviewee said that this was the most pressing aspect of participating in the project. The interviewee gave the following advice on the subject:
One advice is to make sure your closest manager realizes how much time your engagement in the project will take, and make sure that you are given that time to do it. That is what I think has been most pressing. I have spent hundreds of hours on this project during these years, but these hours have never been part of any plan.
Another interviewee described the dilemma as follows:
This was one of the first really large projects we started and we really did not have the time required to do it. It is always difficult to allocate time when you are in the middle of every day operations.
During the time a project is ongoing, unexpected things happen. This has implications for how the project proceeds. One interviewee told a story about how key people left the project, and the consequences it had for the project.
We started with a small project organization where I was project leader with four department staff. After a while they quit, one by one. In the end, there was only me and the consultant left, and then we had come so far in the project that it was no use trying to recruit someone else. So it was very heavy. I was in a situation where I had to manage business as usual full time as well as to run this project. There was no one with IT skills in those days, and thus no understanding for this taking time.
Time pressure was also a source for communication problems in the projects. One interviewee expressed the difficulty of finding time to communicate with people not engaged in the process.
The time pressure sometimes made it difficult to find time to inform about project progression, and to find time to discuss with other colleagues not directly engaged in the project.
The majority of interviewees stated that they wished more time for dialogue and interaction, and that the governmental organizations would have allocated more time for those engaged in the projects.
Another distinct theme in the interview material was the distribution of knowledge and competence between locations. This was considered to have consequences for the availability of resources when needed, as illustrated by this quote:
The consultancy firm we hired was taken over by a competitor, and consultants were exchanged, and that really did not make things better you know.
Another example related to time and space is that experiences from one project, had implications for the project at hand. This was exemplified as follows by one of the interviewees:
We had difficulties cooperating because the consultant had worked with the tax authorities, and I think he made many parallels from that experience. Sometimes I had to remind him that now you are in this government and you cannot transfer things just like that.
One of the consultants shared the following reflection on IT projects from a general point of view:
IT projects are like everything else in society. It is not like you can state in the beginning how things are going to be and decide how it is going to be. It is not like you will never make mistakes along the way. It is constantly under change. We work in a changing world, we learn, we learn to be attentive to problems. We work with people, some people structure their thoughts in certain ways, others in other ways. You design something and some think the design is good. A new person starts and says it is not possible to use.
In sum, the quotes illustrate how people described aspects of time and space. Interviewees described how people were occupied with other things, spread on different locations, moved around, were exchanged, not having access to competence, being under time pressure etc. Experiences developed over time, in other spaces, was also a theme that interviewees talked about having implications for the ISD and innovation process.
The metaphor of fragmentation
In gestalting our interpretations we propose the metaphor of fragmentation. Fragmentation embraces three conceptual themes identified in the empirical material: knowledge, culture and discourse, and time and space. We acknowledge that knowledge, culture and discourse, and time and space are interdependent phenomena, but for the sake of clarity of our interpretations we treat them separately.
The analysis of the empirical material led us to three notable interpretations concerning knowledge:
• the need for knowledge and competence on different subject areas, and the need for different types of knowledge
• the incapacity of clients and consultants, respectively, to span all relevant knowledge domains
• knowledge and competence specialized and inherent in one or a few key persons, split between several persons, inherent in people in different locations, and on different organizational levels within as well as outside the organizations

To sum it up, we identified design domains and situations dependent on a blend of skills, capabilities, and explicit knowledge, including tacit knowing. The latter is created by and inherent in the individual, in contrast to (social) knowledge created by and inherent in ISD project members' collective actions (see e.g. [START_REF] Alavi | Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues[END_REF]).
The aim here is not to penetrate or to discuss categorization or labeling of knowledge; it is merely to illustrate that ISD and innovation requires complex and multifaceted knowledge and competence that challenges interorganizational and organizational systems.
The analysis of the empirical material led us to three notable interpretations concerning culture and discourse, and three interpretations concerning time and space.
Our interpretations concerning culture and discourse point out:
• cultural clashes and discursive gaps,
• communication breakdowns,
• asymmetries of knowledge and diverging expectations.
Our interpretations concerning time and space reflect:
• project members having a hard time keeping up with project tasks and their everyday work
• project members, consultants and competent colleagues that were too busy to aid when problems emerged, or competent people that were moved to solve non-project-specific problems in other locations
• knowledge spread over time and space, and thereby not activated in projects' problem solving.
Our study portrays ISD and innovation as temporal, spatial and culturally differentiated. The study also points out that the knowledge required in ISD and innovation is multifaceted, heterogeneous and too complex for an individual, possibly even for an organization, to hold (compare e.g. [START_REF] Bresnen | Social practices and the management of knowledge in project environments[END_REF][START_REF] Yoo | Innovation in the Digital Era: Digitization and Four Classes of Innovation Networks[END_REF]). The empirically grounded metaphor, fragmentation, captures critical characteristics of ISD and innovation. Moreover, the metaphor mediates an understanding of three sources of complexity and ambiguity: knowledge, culture and discourse, and time and space. We believe fragmentation to be useful as an analytical lens to understand the nature of ISD and innovation.
Discussion and conclusions
The overall aim of this paper is to contribute to the understanding of success and failure of ISD and innovation. In particular, this research contributes to our understanding of the nature of such processes. While many studies have focused on for example software process improvement, requirement engineering, critical success factors, heuristics or best practices, limited research has been devoted to frameworks (see e.g. [START_REF] Sauser | Why projects fail? How contingency theory can provide new insights[END_REF]) that provide us with deeper understanding of ISD and innovation. In this paper we propose the metaphor of fragmentation as an analytical lens to understand the nature of ISD and innovation processes. The metaphor provides a tool to interpret, understand and act upon process ambiguity.
In this paper we argue that processes in emergent design domains aiming at emergent design solutions are highly ambiguous. This implies that conventional assessments of delivering according to budget, schedule, and scope expectations can be misleading. If we recognize ISD and innovation processes as fragmented, we need to explore other dimensions of assessment; assessment dimensions that are not based on the dominant logic that processes and design outcomes are predictable, rational and standardizable between design domains. Alternative or complementary dimensions of assessment could for example be more focused on the innovativeness of design solutions and their implications in the design domain. This being said, we of course recognize that there are limits to how many resources organizations are willing to risk and spend on ISD and innovation.
If we accept the gestalt of ISD and innovation as fragmented by nature, it has implications for ISD practice. Firstly, it is of significance for how we design and evaluate ISD models, methods, techniques and tools. Secondly, it is of significance for how we contextualize these to emergent design domains. It seems reasonable that we need approaches that on the one hand can reduce destructive fragmentation, and on the other hand enhance valuable fragmentation to reach emergent design outcomes (see e.g. [START_REF] Austin | Supporting Valuable Unpredictability in the Creative Process Organization[END_REF]). The latter is important to reflect on, given a future where ISD and innovation is widening into emergent design domains and emergent design solutions. In this paper we have argued that this direction is accompanied by increased process ambiguity. In our future research, we aim to investigate the explanatory capacity of the metaphor on novel digital innovation initiatives in the public transport and health sectors.
Fig. 1. ISD and innovation ambiguity
"990476",
"990475"
] | [
"459411",
"459411"
] |
01467798 | en | [
"shs",
"info"
] | 2024/03/04 23:41:44 | 2013 | https://inria.hal.science/hal-01467798/file/978-3-642-38862-0_31_Chapter.pdf | Tor J Larsen
email: [email protected]
Linda Levine
Learning from Failure: Myths and Misguided Assumptions About IS Disciplinary Knowledge
Keywords: failure, scientific disciplines, disciplinary knowledge, reference disciplines, information systems, sociology of scientific knowledge

Prologue
The division of men's intellectual lives and activities into distinct disciplines is easy to recognize as fact. But it is less easy to explain... How, for instance, are such disciplines
Introduction
IS researchers hold wide ranging views on the makeup of IS and other fields. Some have observed that IS research is cited in many scientific fields (see, for example, [START_REF] Backhouse | On the Discipline of Information Systems[END_REF][START_REF] Truex | Deep Structure or Emergence Theory: Contrasting Theoretical Foundations for Information Systems Development[END_REF][START_REF] Davis | Information Systems Conceptual Foundations: Looking Backward and Forward[END_REF]. Under the category of reference discipline (for IS), [START_REF] Vessey | Research in Information Systems: An Empirical Study of Diversity in the Discipline and Its Journals[END_REF] include: cognitive psychology, social and behavioral science, computer science, economics, information systems, management, management science, and others. [START_REF] Holsapple | Business Computing Research Journals: A Normalized Citation Analysis[END_REF] distinguish between academic journals and practitioner publications in the area of business computing systems. They further divide academic journals into "business-managerial orientation, computer science-engineering orientation, and general-social sciences orientation" (p. 74). [START_REF] Pfeffers | Identifying and Evaluating the Universe of Outlets for Information Systems Research: Ranking the Journals[END_REF] distinguish between journals in IS and in allied disciplines. These subdisciplines are seen as making up IS or serving as reference disciplines for IS. [START_REF] Harzing | JOURNAL QUALITY LIST[END_REF] identifies sixteen subject areas that make up the business domain, of which MIS-KM is one. Her comprehensive list of journals in the business school domain numbers 933. The Association of Business Schools (2011) defines 22 subject fields and lists 821 journals in the field of business. Finally, [START_REF] Taylor | Focus and Diversity in Information Systems Research: Meeting the Dual Demands of a Healthy Applied Discipline[END_REF] argue that IS is composed of six sub fields, including: inter-business systems, IS strategy, Internet applications miscellany, IS thematic miscellany, qualitative methods thematic miscellany, and group work & decision support. Baskerville and Myers (2002) claim that IS has become a reference discipline for other fields. They base this on a three stage model where (1) initially IS imports theories, methods and results from other fields, (2) IS builds content internally, and (3) IS exports its theories, methods and results to other fields. In stage three, IS has become a reference discipline.
To investigate claims that (a) IS has matured and is now a reference discipline for other fields (Baskerville and Myers 2002) and (b) in the tradition of the sociology of science 2 , IS is borrowing from and lending to other fields, we tried to define and operationalize the construct of "referencing disciplines." In other words, how are the processes of maturation--of borrowing, building, and exporting ideas--made visible and how can this phenomenon be systematically verified? Thus, we were interested in developing a robust way to talk about the disciplines that referenced IS and how IS was coming to be referenced by these other fields. We believed that the range of other disciplines and their diversity were important. To explore this, we created a framework based on the concept of family of fields and we used this framework as a tool for a deductive analysis of journals from IS and other fields. Hence our research questions are:
What are the relationships among fields relative to IS? Does the family of fields concept elucidate these relationships and can scientific journals be mapped to a family of fields?
The paper proceeds with our conceptual model and research design, method, discussion, and conclusion.
2 How is a body of scientific knowledge constituted and matured? Some researchers have referred to this transformation as the sociology of scientific knowledge [START_REF] Crane | Invisible Colleges: Diffusion of Knowledge in Scientific Communities[END_REF][START_REF] Lodahl | The Structure of Scientific Fields and the Functioning of University Graduate Departments[END_REF][START_REF] Toulmin | Human Understanding: the Collective Use and Evolution of Concepts[END_REF][START_REF] Ben-David | Sociology of Science[END_REF][START_REF] Cole | The Hierarchy of Sciences?[END_REF][START_REF] Pfeffer | Barriers to the Advance of Organizational Science: Paradigm Development as a Dependent Variable[END_REF]. This is the investigation of science as a social activity, particularly dealing "with the social conditions and effects of science and with the social structures and processes of scientific activity" (Ben-David and Sullivan 1975, p. 203).
Conceptual Model and Research Design
Our formulation of the concept of family of fields was made up of four parts. First, recall that the term reference disciplines (Baskerville and Myers 2002) is used in a general manner; thus, we reviewed uses of the term. Second, we explored the relationships among various scientific disciplines, recognizing that some appeared closer to IS than others. We thought that the metaphor of the family could effectively illustrate close and distant relationships. For example, siblings are closer than cousins, and first cousins are closer than 2nd cousins. This metaphor offered a vehicle to grapple with IS and its reference disciplines.
We considered a broad distinction between IS and all other scientific disciplines lumped together under the heading of Supporting Fields. This corresponds to [START_REF] Pfeffers | Identifying and Evaluating the Universe of Outlets for Information Systems Research: Ranking the Journals[END_REF] distinction between journals in IS and in allied disciplines. However, we felt that this differentiation was too crude. For example, most would agree that a field such as software engineering is closer to IS than marketing. This line of thinking led us to distinguish between (1) IS, (2) Related Fields, e.g. software engineering, and (3) Supporting Fields, e.g. marketing. But how sharp is the distinction between IS and Related Fields? To illustrate, we realize that many (U.S.) universities organize decision sciences (DS) and IS in a single department. DS journals publish IS articles and IS journals publish DS findings. Despite the overlap, some maintain that DS is separate from IS. In the family of fields framework, DS and IS are close relatives.
Third, we speculated that Supporting Fields might include disciplines (e.g., marketing) that were connected to IS but more distant than Related Fields. Fourth, and finally, we defined the umbrella term of Wider Fields for those disciplines outside the business school domain, e.g., civil engineering, agriculture, and psychology. Obviously, these differentiations between IS and DS, Related Fields, Supporting Fields, and Wider Fields are imperfect but provide a starting point. Figure 1, below, depicts these categories of referencing fields and their proximity to IS.
Fig. 1. Categories of referencing fields and their proximity to IS: IS, decision sciences, related fields, supporting fields, and wider fields.
Method
The exploration of family of fields is part of our larger study of IS, focusing on citation analysis and citation patterns of exemplar articles in IS and other scientific fields [START_REF] Larsen | Citation Patterns in MIS: An Analysis of Exemplar Articles[END_REF]. Our investigations employ a set of exemplar IS articles since, according to [START_REF] Ritzer | Sociology: A Multiple Paradigm Science[END_REF], an exemplar is one of the primary components of a paradigm. [START_REF] Kuhn | The Structure of Scientific Revolutions[END_REF] defines exemplars as "concrete problem-solutions" that can be found in a range of sources including laboratories, examinations, texts, and periodical literature (p. 187). Exemplars are illustrative of important contributions in the field of IS. Consequently, in creating our dataset, we employed three steps: (1) we defined a portfolio of exemplar IS articles, (2) we identified any articles which cited to these exemplars, (3) we coded journals (containing articles citing to the exemplars) into the family of fields scheme. These steps are described below.
Step 1: Defining a Portfolio of Exemplar IS Articles
We employed two approaches for compiling our list of exemplar IS articles: (1) award-winning articles, and (2) evaluation by peers. First, our sample of award-winning articles was drawn from MIS Quarterly "articles of the year" and Society for Information Management (SIM) competition-winner articles. For the period 1993-1999, MISQ named eight articles of the year. For the period 1994-2000, five SIM competition articles were named. Henceforth, these are referred to as "award articles." Second, we reflected that peers might have their personal IS research article favorites. We identified 17 peers who were well known in the community. These were senior scholars, professors from across the globe, who were recognized for their achievements. At the time, an AIS World senior scholars' list (or a basket of journals) did not exist (ICIS 2010 program guide, p. 26). Our 17 peers were contacted by email and asked to nominate their "top four" classic, seminal, or influential articles in the field of IS. After one email reminder, 15 had responded, providing us with 23 "peer-nominated articles" (see Appendix A for details).
None of the award articles were peer nominated. We refer to the grand total of 36 articles as "exemplar articles", consisting of the two categories of award articles (13) and peer-nominated articles (23).
Step 2: Locating the Journals Citing our 36 IS Exemplar Articles
We looked at the 36 exemplar articles and where they were cited in other (articles in) journals. The social sciences citation index in the Thomson Reuters ISI Web of Knowledge was used for this purpose; it is the dominant, authoritative source for scientific research. In all, 418 journals were identified as having articles citing one or more of the 36 exemplar articles.
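As a purely illustrative aside (our addition, not the authors' actual procedure or tooling; the record layout and journal names below are assumed), the journal-level unit of analysis in this step can be sketched as deduplicating the journal field of exported citation records:

```python
# Hypothetical rows exported from a citation index:
# (citing article, journal of the citing article, cited exemplar article)
citation_records = [
    ("Citing article A", "Journal Alpha", "Exemplar 1"),
    ("Citing article B", "Journal Beta", "Exemplar 1"),
    ("Citing article C", "Journal Alpha", "Exemplar 7"),
]

# Collect every journal containing at least one article that cites an exemplar;
# in the study this yielded 418 distinct journals.
citing_journals = sorted({journal for _, journal, _ in citation_records})
print(len(citing_journals), citing_journals)
```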
Step 3: Coding Journals into Families of Fields
The final step in our data preparation was the allocation of each of the 418 journals (citing the 36 exemplars) into one of the five Families of IS, DS, Related Fields, Supporting Fields, or Wider Fields. We performed separate coding and then joint reliability checks. The process of coding journals required three meetings. In coding, we recognized the need to account for combinations, i.e., IS and DS, etc. We allocated each of the 418 journals to one of the five Families of Fields (see Figure 1) or one of the ten combinations. Meetings were scheduled at least a week apart to allow for reflection.
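To make the coding scheme concrete, the sketch below is our illustration only (not part of the original study; the journal names and codings are invented): it enumerates the five families and their ten pairwise combinations, and computes a simple percent-agreement figure as a stand-in for the joint reliability checks described above.

```python
from itertools import combinations

# The five families of fields used as coding categories (see Figure 1).
FAMILIES = ["IS", "DS", "Related", "Supporting", "Wider"]

# The ten combinations are the unordered pairs of families, e.g. a journal
# judged to belong to both IS and Related Fields.
COMBINATION_CODES = [frozenset(pair) for pair in combinations(FAMILIES, 2)]
assert len(COMBINATION_CODES) == 10  # C(5, 2) = 10

# Invented, independent codings of three journals by two coders.
coder_a = {"Journal X": frozenset({"IS"}),
           "Journal Y": frozenset({"IS", "Related"}),
           "Journal Z": frozenset({"Supporting"})}
coder_b = {"Journal X": frozenset({"IS"}),
           "Journal Y": frozenset({"IS", "Supporting"}),
           "Journal Z": frozenset({"Supporting"})}

# Simple percent agreement as a rough indicator before joint reconciliation.
shared = coder_a.keys() & coder_b.keys()
agreement = sum(coder_a[j] == coder_b[j] for j in shared) / len(shared)
print(f"Percent agreement: {agreement:.0%}")  # 67% for this toy data
```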
Discussion
In our analysis, we reached a large degree of agreement on journals classified as "Pure IS journals" as in Walstrom and Hardgrave (2001) or "IS research journals" as in [START_REF] Pfeffers | Identifying and Evaluating the Universe of Outlets for Information Systems Research: Ranking the Journals[END_REF]. However, other journals were not so easily classified in their relationship to IS. Examples of journals we view as being a combination of two families are Communications of the ACM and Management Science--as we see it, belonging to both IS and Related Fields (and within Related Fields, to the sub-fields of computer science and operations research, respectively). We interpret International Journal of Electronic Commerce as belonging to IS and Supporting Fields (marketing). Clearly, this analysis involves interpretation. In several cases, we were unfamiliar with a particular journal and struggled with journal names that were ambiguous. The creation and use of coding schemes like ours involve judgment calls, which are open to debate.
We looked for representations of knowledge networks to assist with our coding of journals and their relationships to IS. Baskerville and Myers' (2002) conceptual model of knowledge networks shows IS as a disciplinary node (see figure 2 below).
They do not define what makes these nodes recognizable, but refer in passing to key people, events attended, and core journals. But questions remain: how can we talk about the IS community and others? Is there a "them" and an "us," or is this distinction a red herring? They succeed with an impressionistic representation of IS and the surrounding "other" disciplines. But the clouds they draw around these entities are indeed cloud-like--simply assuming that disciplinary borders exist, without providing any sharp distinctions or definitions. Our exploration of family of fields and combinations proved equally problematic. Most journals were coded outside of IS and into multiple categories. This resulted in blurred distinctions. Appendix B illustrates the messiness that we tried to contend with in the development of our coding scheme. We were also unable to make claims about specific familial relationships--disciplines were associated rather than connected in a precise manner. Thus, we were unable to come any closer than Baskerville and Myers (2002) in their characterization of IS as a "reference discipline in a discourse with other reference disciplines" (figure 2, p. 8). Due to our increasing concerns about blurred interpretation and unwieldy complexity, we concluded that the family of fields concept was not viable to pursue. Among the preconditions for a family of fields concept is a degree of agreement on subject areas and journal lists. This does not exist; see, for example, Association of Business Schools (2011) and [START_REF] Harzing | JOURNAL QUALITY LIST[END_REF]. Nonetheless, we remain convinced that some fields have a closer relationship to IS than others. A deep understanding of proximity among fields also requires further investigation of detailed content, as proximity most likely derives from (a) similar topics or topics under the same umbrella, (b) domain, (c) shared theory, (d) common methods, and (e) common underlying technology.
How, otherwise, might the landscape be depicted? Diagrams differ in their granularity and composition, as well as their underlying theory. If we focus on the sociology of scientific knowledge, specifically the hierarchy of the sciences, we can illustrate the disciplines in closest proximity to IS. [START_REF] Cole | The Hierarchy of Sciences?[END_REF] employed Auguste Comte's hypothesis of the hierarchy of the sciences, which maintains "that the sciences progress through ordained stages of development at quite different rates…. The hierarchy of the sciences described not only the complexity of the phenomena studied by the different sciences but also their stage of intellectual development" (p. 112). Cole refined the hierarchy and developed six salient characteristics: theory development, quantification, cognitive consensus, predictability, rate of obsolescence, and rate of growth. This hierarchy distinguishes between the physical and social sciences, and the in/ability to make verifiable predictions. Figure 3, below, illustrates these two dimensions in our representation of IS and related (sibling) fields.
Fig. 3. Information Systems in proximity to sibling fields (adapted from [START_REF] Cole | The Hierarchy of Sciences?[END_REF]). This view shows four sibling fields which include their own distinct theories, foci, and research areas. Yet they also exhibit a high degree of overlap and no sharp borders. Similarly, [START_REF] Polites | Using Social Network Analysis to Analyze Relationships Among IS Journals[END_REF] find overlap among the areas of computer science, information systems, management--professional, operations research, and multiple/unclassified (see their Figure 2, p. 607). Additionally, we acknowledge the physical sciences' concern with predictability and the social sciences' concern with human activity in organizational settings (description and interpretation). This representation depicts the transactions and exchanges among disciplines, which is also in line with Toulmin's thinking on human understanding and intellectual authority. He states: "By its very nature, the problem of human understanding--the problem of recognizing the basis of intellectual authority--cannot be encompassed within any single technique or discipline. For the very boundaries between different academic disciplines are themselves a consequence of the current divisions of intellectual authority, and the justice of those divisions is itself one of the chief questions to be faced afresh" (Toulmin 1972, p. 7).
Few IS researchers have tackled the topic of disciplinary knowledge, but aspects of the evolution of the field of IS have been touched upon [START_REF] King | Reach and Grasp[END_REF][START_REF] Lyytinen | Nothing at the Center?: Academic Legitimacy in the Information Systems Field[END_REF][START_REF] Gregor | The Nature of Theory in Information Systems[END_REF][START_REF] Taylor | Focus and Diversity in Information Systems Research: Meeting the Dual Demands of a Healthy Applied Discipline[END_REF][START_REF] Hirscheim | A Glorious and Not-So-Short History of the Information Systems Field[END_REF]. [START_REF] Davis | Information Systems Conceptual Foundations: Looking Backward and Forward[END_REF] comments on the history of the field, noting that two views have predominated: (1) on observed systems and organizational functions, and (2) on underlying concepts and deep-structure information phenomena. For the most part, discussion about the field of IS is lively but preoccupied with local issues such as the nature of a core and the role of diversity, and the matter of rigor versus relevance.
Conclusions
Our investigation into the relationships among IS and reference disciplines in the form of a family of fields turned out to be a failure. The approach was not viable but yielded four important lessons. We present these lessons followed by our concluding remarks.
First, fields are not so easily defined, and display a great deal of overlap with fuzzy borders as is evident in Appendix C. We discovered that IS is being used in almost any field imaginable, because information systems are in use everywhere and can be the subject of research in any domain. This makes IS and its underlying information communication technology (ICT) a broad area of study with a vast number of opinions and options. Additionally, the discipline of IS often blurs with the issues of ICT in context (e.g., agriculture, medicine, geography, etc). This can also confuse the goals of university education and vocational training. Second, the borrowing of theories and ideas across disciplines is complicated and not linear. The conventional understanding of the maturation of a field describes processes of importing, developing, and exporting ideas. Maturation, including borrowing, is not a clear sequence but rather one that is iterative, reciprocal and networked. Idea development and refinement is messy and unpredictable. Third, the concept of a reference discipline is obscured and even exploded. The term is commonly used and a convenient one but on examination it proved simplistic and not very meaningful. Any discipline can be a reference discipline for IS. Consequently, the term has no specific or special meaning. If we are going to continue to use the term "reference discipline" we would be wise to reexamine it closely and define it more usefully. Fourth, we need to ask: what are the requirements for a discipline, one among the social sciences? Rather than focusing continuously on the content and core of IS, we should pause to define and discuss the criteria for constituting a discipline.
More broadly, some aspects of the diversity and uncertainty that we perceive in IS may be functions of a larger loss of order and unity, emanating from aging, brittle models of academic institutions. These eroding forms govern our current understanding of disciplines and the university itself--the house of learning for bodies of knowledge.
Reinventing the university and disciplinary knowledge challenges us to look at organizations as ecosystems, rather than as edifices. By doing so, we open the door to seeing the university institution, not as a massive file cabinet or catalogue of content, but through alternative metaphors for networks and systems of systems.
IS is not alone in reexamining its identity and value as a field, within the university, and in relation to industry practice. In addition to fractures in the discipline of Sociology, similar concerns have been expressed by researchers in Organizational Communication ([START_REF] Corman | Perspectives on Organizational Communication: Finding Common Ground[END_REF] and Poole 2000, as noted in Ashcraft 2001), Organization Science [START_REF] Rynes | Across the Great Divide: Knowledge Creation and Transfer Between Practitioners and Academics[END_REF] and Information Science [START_REF] Monarch | Information Science and Information Systems: Converging or Diverging?[END_REF][START_REF] Ellis | Information Science and Information Systems: Conjunct Subjects Disjunct Disciplines[END_REF]. Other signs of disciplines under stress, such as competing for funding and recognition in the university environment, translate into a plethora of applied R&D institutes, including inter-disciplinary centers of an overlapping nature.
We introduced this study by acknowledging our early observations some of which we now see as myths or faulty assumptions. This change of heart occurred over time as we conducted a number of studies on the identity and dynamics of the field of IS, including the present one on family of fields. To make progress in understanding our field, we believe it is necessary to investigate the workings of other disciplines and the sociology of scientific knowledge. We must do this keeping in mind that the maturation of the discipline is clouded by an innovation bias for ICT. IS is a hybrid field built upon technology breakthroughs, "silver bullets", enduring knowledge, and capabilities from the social, engineering, and physical sciences. The first step in making progress is reckoning with this complexity and the challenge it poses.
Epilogue
The end of the research on family of fields is not the end of the story. Taking time for reflection, we were left with standard citation material--our exemplars and who cites to them. We persisted in asking: what kind of distinctions among scientific fields could be made? How can you identify an IS journal? And, could we reframe our research so that it contributed to an understanding of how knowledge evolves in interaction between IS and other fields? We stepped back to pose the most basic and fundamental question: what constitutes a field? This reassessment opened the door to bodies of literature well outside of IS--to field theory and philosophy of science. We are still grappling with this question. Given the complexity of categorizing fields, perhaps the only workable distinction that can be sustained is a coarse one between IS, related, and other fields.
The research tradition on the sociology of scientific knowledge holds promise to enrich our theorizing about IS in a holistic manner, rather than in isolation. The IS exemplar articles and citation analysis can be used as a lens and method for an operational investigation into the sociology of scientific knowledge, as applied to IS (Larsen and Levine, forthcoming). Theorizing from this vantage point allows comparisons with the workings of other disciplines and potential insight into how changes in theory and method unfold over time.
Figure 2 from Richard L. Baskerville and Michael D. Myers, "Information Systems as a Reference Discipline," MIS Quarterly (26:1), 2002, p. 8. Copyright © 2002, Regents of the University of Minnesota. Reprinted by permission.
Appendix B: Coding Key

S = Supporting field and of general nature
S-A = Supporting field, various application domains
S-B = Supporting field, international or global business
S-C = Supporting field, communication
S-F = Supporting field, finance, accounting or similar
S-G = Supporting field, group related issues
S-H = Supporting field, health care, hospital, etc.
S-I = Supporting field, individual level, for example psychology
S-L = Supporting field, managerial and leadership issues
S-M = Supporting field, marketing
S-P = Supporting field, manufacturing
S-S = Supporting field, systems thinking or other modeling approaches
S-U = Supporting field but unclear what type
S-W = Supporting field, wider environment, society, etc.
H = Hybrid journal, in the sense that its primary dedication is not IS per se but it relatively frequently allows IS-type publications. Examples are in particular Management Science and Organization Science.
H-P = Hybrid, but probably of a practitioner type
H-T = Hybrid, but mixed with computer science
Notes: I = Field of Information Systems, R = Related Fields, S = Supporting Fields, W = Wider Fields, H = Hybrid Field.
Appendix C: Random Sample of Journals in Supporting and Wider Fields and their ISI Subject Categories
Key: Subject category (SC) | 25,721 | [
"1001544"
] | [
"468786",
"336222"
] |