Q: Removing stuck water pump - GM Corsa B I have a GM Corsa 1.0 MPFI (Brazilian version of the Opel Corsa B) and today I started replacing some parts, including the water pump. I had heard from other owners and mechanics that this water pump sometimes gets stuck because of rust and is really hard to remove, requiring you to "hammer" it out. It was no different for me: the water pump won't move when I pull it by hand. Below is an image of the working area, not from my car but probably the same model. The water pump is right below the alternator: Any tips on how to remove it? Maybe other cars have similar issues with rusted water pumps. Even if I have to use brute force, how and where should I hit the pump? I even thought about making something similar to a pulley remover and trying to pull it out from the small gear, but I don't think it would handle the force. A: If you have removed the three bolts surrounding the water pump, and the ultimate goal is to replace the water pump with a new one, just use a hammer and hit the water pump pulley. It should pop right off. You are not worried about the pump itself, because it is getting replaced. There is most likely gasket residue holding the pump in place; you just need to break the pump free of this. Besides the gasket, it should be flat on flat, meaning there is nothing there to get damaged. Ensure you clean the block side off so you'll have a clean mating surface for the new gasket with the new pump.
Q: NTP client Undisciplined Local Clock We have a VM that uses an NTP client for syncing time. Following is my config. My question is: do I need the server 127.127.1.0 # local clock line in a client ntp.conf file? If yes, then why?

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# Permit all access over the loopback interface.
restrict 127.0.0.1
restrict -6 ::1
server ntp1.example.com
server ntp2.example.com
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift

A: NTP Recommendations Note: VMware recommends using NTP instead of VMware Tools periodic time synchronization. NTP is an industry standard and ensures accurate timekeeping in your guest. You may have to open the firewall (UDP 123) to allow NTP traffic. This is a sample /etc/ntp.conf:

tinker panic 0
restrict 127.0.0.1
restrict default kod nomodify notrap
server 0.vmware.pool.ntp.org
server 1.vmware.pool.ntp.org
server 2.vmware.pool.ntp.org
driftfile /var/lib/ntp/drift

This is a sample (RedHat specific) /etc/ntp/step-tickers:

0.vmware.pool.ntp.org
1.vmware.pool.ntp.org

The configuration directive tinker panic 0 instructs NTP not to give up if it sees a large jump in time. This is important for coping with large time drifts and also for resuming virtual machines from their suspended state. Note: The directive tinker panic 0 must be at the top of the ntp.conf file. It is also important not to use the local clock as a time source, often referred to as the Undisciplined Local Clock. NTP has a tendency to fall back to this in preference to the remote servers when there is a large amount of time drift. An example of such a configuration is:

server 127.127.1.0
fudge 127.127.1.0 stratum 10

Comment out both lines. After making changes to the NTP configuration, the NTP daemon must be restarted. Refer to your operating system vendor's documentation. Source: VMware Knowledge Base
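So, applied to the configuration in the question: no, a client VM does not need the local clock entry. A sketch of the same ntp.conf with the two Undisciplined Local Clock lines commented out (everything else unchanged):

restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server ntp1.example.com
server ntp2.example.com
# Undisciplined Local Clock: not needed on a client, and NTP may fall
# back to it in preference to the remote servers under large drift.
# server 127.127.1.0 # local clock
# fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift

Then restart ntpd (e.g., service ntpd restart on RedHat-family systems) for the change to take effect.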
The latest Super Bowl victory is barely a month old, and already you can hear the panic from Patriots fans about the offseason plan. The Patriots had an opportunity to keep McCourty off the market, and they declined. And they haven’t made much headway on redoing Darrelle Revis’s deal, either. This time, it was over the use of the franchise tag. Most of the NFL world expected it to go to impending free agent safety Devin McCourty. Instead on Monday, it went to kicker Stephen Gostkowski, who now will be locked in for 2015 at a $4.59 million salary — more than every kicker in the league last year other than Dallas’s Dan Bailey, but still a modest number in light of the $143 million salary cap that was announced Monday. The Patriots’ front office did it again. Everyone thought the Patriots would zig, and then they zagged. In a week, McCourty and Revis will become unrestricted free agents (assuming the Patriots decline Revis’s $20 million option). There’s genuine worry that the Patriots will lose at least one member from the NFL’s best secondary in 2014. I guess it’s possible that the Patriots could go cheap this offseason after winning their fourth Super Bowl ring last month. They don’t necessarily have the same urgency that they did the previous nine championship-less offseasons. Both Revis and McCourty are going to get paid, one way or another. This isn’t like last year, when Julian Edelman was allowed to test the market, and then returned to the Patriots. Revis and McCourty are the unquestioned top players at their positions in free agency. Letting them hit the open market would be risky, to say the least. But this is where that phrase “In Bill We Trust” gets invoked. Bill Belichick knocked pretty much every single free agent decision out of the park last year en route to a championship, and we’ll give him the benefit of the doubt here. He knows what he’s doing in regards to his elite cornerback and elite free safety. With Revis, there’s still a whole week to work something out. Revis’s option kicks in on the first day of the new league year, which is March 10 at 4 p.m. A week is plenty of time for the Patriots and Revis to pound out a new deal. And there really is no Plan B if Revis moves on. The top free agent cornerbacks are Byron Maxwell, Brandon Flowers, Tramon Williams, and Antonio Cromartie — no one close to Revis’s skill level. If the Patriots are serious about winning a fifth championship next year, Revis will be back. With McCourty, it just didn’t really make much sense to use the franchise tag, despite the many assumptions that he was a prime candidate for it (including from this writer). The franchise tag for a safety — the average of the top five salaries at the position — is $9.6 million for this year. Using the handy resources at OverTheCap.com, we see that that’s more cash than any safety is currently scheduled to make this year (Jairus Byrd is the current leader at $8.1 million). In 2014, only Earl Thomas, Donte Whitner, and Dashon Goldson made more than $9.6 million in cash, and that was because they signed new deals with large signing bonuses. It’s why McCourty was borderline giddy last week at the prospect of receiving the franchise tag. No, the tag wouldn’t have given him long-term contract security, but it would have paid him a heck of a lot of money — certainly a lot more than the $3.92 million he made last year. “The franchise tag is player-friendly now. It’s a good number,” he told reporters. So the Patriots correctly held off. 
But that doesn’t mean he’s out of the plans, either. McCourty wants to return to New England but is smartly keeping his options open. His twin brother, Jason, is going to recruit him hard to Tennessee. “Sitting here looking at the clock, waiting for 4 p.m. to come and putting my recruiting packet together,” Jason McCourty jokingly tweeted on Monday. Devin McCourty will have plenty of teams gunning for him. He’ll be 28 in August, the prime of his athletic career. ProFootballFocus.com rated him as the eighth-best safety in the NFL out of 87 last season. On top of it, he’s incredibly smart (football-wise and book-wise), a great leader in the locker room, and a willing participant in community endeavors. He’s everything you want in an NFL player. And, as with Revis, there is no Plan B if the Patriots lose McCourty. Duron Harmon, Tavon Wilson, and Patrick Chung are nice players, but they can’t play center field and line up the defense like McCourty can. The top free agent safeties — Rahim Moore, Tyvon Branch, and Antrel Rolle — are not suitable replacements, either. Which is why Monday’s non-announcement doesn’t leave me panicked. The Patriots love having McCourty, and he has loved being a Patriot. There hasn’t been much communication between the sides, but they will speak this week in advance of next Tuesday’s deadline. The Patriots are playing hardball with McCourty and Revis now, but there’s still an entire week to hammer out these deals. The Patriots know how important these players are, and how tough they will be to replace. And they need to meet spending requirements as set forth in the new collective bargaining agreement. They’ve spent only 82 percent of the salary cap the last two years, and need to get that number up to 89 percent by 2016. And if the Patriots have to spend money, they might as well spend it on two of the key pieces of last season’s Super Bowl run, who also happen to be the best players at their positions. So don’t be too upset that McCourty and Revis appear headed to free agency. There’s still a week left to figure it all out. The Patriots are just playing the game and keeping everyone guessing. Related coverage: • Christopher L. Gasper: Don’t be surprised if Patriots lose Revis, McCourty • On football: Patriots still have time to do deals with Revis, McCourty • Patriots place franchise tag on kicker Stephen Gostkowski Ben Volin can be reached at [email protected]. Follow him on Twitter @BenVolin
Liquid crystal display devices have found a wide range of applications in the display technology field. Typically, liquid crystal display devices include an array substrate and an opposing substrate (e.g., a color filter substrate) packaged together. The array substrate and the opposing substrate are fabricated separately, then assembled to produce a display panel.
Q: Removing first 2 years of real-time software testing experience I finished my education in May 2007 and below is my software tester (real-time) job experience:

Company A - March 2008 to Feb 2010 (2 years)
Company B - March 2010 to Nov 2011 (1 year 6 months)
Company C - Nov 2011 to date (5 years)

Right now I do not have any automation experience, and I have done a course on Selenium WebDriver to get a job. I thought of removing the first 2 years of experience at Company A. Can someone suggest whether it is better to do that? If I am trying to apply to Company D, HR from Company D could call my previous companies and learn my details. Note: I am not worried about my salary.

A: Are you proposing simply leaving a two-year gap on your resume? Future employers will assume one of the following:

- you did nothing worthwhile in that time
- you have something to hide about what you did in that time

Both of these are far worse than the simple fact that you did a job that isn't particularly relevant to the one you hope to have. Don't do it.

A: So you had a job as a software tester at Companies A, B, and C, but you want to remove Company A from your CV? It seems an odd proposition; the more experience you have on the CV, the better it is for you. Is there some concern about HR contacting Company A? Unless you have something to hide from working at Company A, I would leave it on. Even if you did have something to hide, they will ask about gaps in your CV and you should answer honestly, so they will most likely discover that you did work for Company A. In the end, removing it from your CV does you more harm than good, and it should be left on.

"If I am trying to apply to Company D, HR from Company D could call my previous companies and learn my details."

It's always better to have experience on a CV than to remove it. Companies don't always follow up with previous employers; however, it shows you've held a job for a while and gives you a nice talking point during an interview.
Introduction {#Sec1}
============

Tactile suppression is a well-known phenomenon characterized by a decrement in tactile sensitivity, typically occurring on our upper limbs in relation to movements that we perform. Also known as tactile attenuation, or simply as gating, tactile suppression has been found in a multitude of motor tasks, utilizing a wide array of tactile sensitivity measurements (see^[@CR1]^, for a review). This study focuses on the sensory suppression known to occur in goal-directed reach-to-grasp movements. Our aim is to test whether and how vision modulates the manifestation of tactile gating. Tactile suppression is closely intertwined with movement, with the *timing of tactile stimulation* being the first determining factor of tactile suppression^[@CR2]^. For example, in an earlier study, participants were asked to make repeated reach-to-grasp movements for an object placed in front of them, in line with a series of auditory tones. A discrimination task was used to measure tactile sensitivity. Specifically, participants decided which one of two stimuli delivered to their resting left hand and their moving right hand was stronger, with stimulation delivered at various times during the movement, from preparation, through execution, and post-movement phases. Results indicated tactile suppression, that is, higher thresholds (or poorer performance) during movement execution, as compared to both preparation and post-movement phases, with no significant difference in sensitivity between these two^[@CR3]^; see also^[@CR4],[@CR5]^, for a replication. Tactile suppression typically makes an appearance during goal-directed movement and it has comparable profiles whether the right or the left hand is moving. The next factor determining gating in the tactile domain is *context-dependence*. Contextual influences on suppression are approached differently by the different labs working on tactile suppression: suppression has been shown to be highly dependent on whether or not the exact body part is involved in the movement (i.e., relevance in tactile suppression^[@CR6]--[@CR8]^). Further, tactile suppression has been shown to be highly affected by the motor task at hand (e.g., active versus passive reaches, exploratory movements versus reaches^[@CR9]^; see also pantomimed movements^[@CR5]^; as well as precision reaching^[@CR10]^). Lastly, and perhaps the factor with the largest influence, is the exact *type of tactile task*, or the specific dependent measure used to assess tactile suppression in relation to movement. Most likely owing to the tradition in visual science, the majority of tactile suppression studies have focused on measuring tactile thresholds to assess suppression. Extensive psychophysics is fundamental for understanding the tactile suppression phenomenon, but this approach comes at the cost of threshold measures that are hard to compare directly across labs (e.g., how does one *easily* compare thresholds provided in milliamperes to those in decibels) and, most importantly, thresholding alone cannot account for criterion changes in the data. Yet, criterion shifts appear to consistently contribute to tactile suppression (i.e., not only do participants feel less when they move, they are also less inclined to report the presence of a tactile stimulus); therefore, tactile suppression needs to *always* be assessed with appropriate measures of response bias^[@CR1]^.
Here, we focus on the relevance aspect of tactile suppression, by delivering touches at the index finger involved in the grasp at different timings during movement. Our starting point is the crucial finding that tactile suppression manifests differently at each digit involved in the process of reaching and grasping an object. Colino and his colleagues were the first to demonstrate that the index finger involved in a grasping action experiences less suppression, as compared to the little finger not participating in the grasp, or the completely unrelated forearm of the resting hand^[@CR7],[@CR11]^. Further studies have attempted to replicate and extend this finding; however, their methods violated the first rule of the timing of tactile suppression, by delivering stimulation either too early (i.e., at movement initiation, when suppression is maximal^[@CR12]^) or too late (i.e., once the movement has terminated^[@CR5]^). Having convincingly established the relevance of the motor effector when assessing tactile suppression, the authors next investigated whether the tactile suppression effect is affected by the availability of visual information during movement. For this, they had their participants perform reach-to-grasp movements under conditions of full vision or of limited visual availability, with only a short period of fixation at the beginning of the movement, and the rest of the movement performed with vision occluded. Their results indicated that the availability of visual information contributes to decreasing the overall magnitude of tactile suppression experienced during movement^[@CR6]^. To assess the temporal profile of vision's contribution to tactile suppression, here we consider the tactile stimulation delivery timing, the effector relevance, and the requirements for measuring tactile perception during movement. For this, we define timing based on real-time spatial coordinates of the hand, as opposed to timing stimulation delivery relative to the imperative cue, as in previous studies of relevance in tactile suppression^[@CR5],[@CR7],[@CR11]^. Our participants reached for and grasped an object placed in front of them, under conditions of full visual information or limited visual information. Because we were interested in the timing of contact with the object (i.e., to investigate tactile facilitation given by any feedback from the tactile receptors involved in the grasp), we defined the different timings *spatially*. That is, we utilized the traditional timings of preparation and execution, but also added two timings for tactile stimulus delivery: *(1)* the 'just before grasp' timing, where the index and thumb are within less than half a centimetre from landing on the goal object, and *(2)* the 'while lifting' timing, when the digits have landed on the goal object and are now immobile, but nevertheless engaged in holding it and lifting it off the table surface. Tactile stimulation could be delivered, with equal probability, to either the moving or the resting hand. To assess criterion change, 50% of trials had no tactile stimulus delivered; thus, all the behavioural results reported are based on signal detection theory measures, namely sensitivity (*d'*) and the relative criterion location, denoted as *c'*^[@CR13],[@CR14]^.
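For concreteness, these two measures can be obtained from hit and false-alarm proportions in a few lines of MATLAB (the trial and response counts below are made-up illustrative values; norminv is a Statistics and Machine Learning Toolbox function):

```matlab
% Minimal sketch of the signal detection theory measures used in the paper.
% Illustrative counts: 32 stimulus-present and 32 stimulus-absent trials.
nSignal = 32;  nNoise = 32;
pHit = 30 / nSignal;          % proportion of YES responses on stimulus trials
pFA  =  1 / nNoise;           % proportion of YES responses on no-stimulus trials
dPrime = norminv(pHit) - norminv(pFA);           % sensitivity d'
c      = -(norminv(pHit) + norminv(pFA)) / 2;    % criterion location c
cPrime = c / dPrime;          % relative criterion c': c scaled by sensitivity
```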
We hypothesized that any (sensory feedback-driven) contribution to tactile sensitivity just before grasping the object and/or lifting it should be evident in a significantly improved tactile performance measured at the moving hand, as opposed to performance at the resting hand. Additionally, if visual information were responsible for what is felt as the hand lands on an object of interest (i.e., in connection to the well-researched visual preference for the index finger in reach-to-grasp tasks^[@CR15],[@CR16]^), then we expect significantly better moving hand sensitivity in those conditions where vision is available during the reach, as opposed to reaches performed under limited visual information.

Results {#Sec2}
=======

Behaviour {#Sec3}
---------

Mean tactile detection thresholds derived at rest are presented in Fig. [1](#Fig1){ref-type="fig"}. No significant difference was recorded between participants' left hand and right hand detection thresholds at rest \[*t*(14) = 0.22, *p* = 0.832, *r* = 0.452\]. Importantly, no false alarms were detected in the thresholding procedure for our sample of 15 participants. Scatter plots of individual sensitivity data together with their corresponding means are presented in Fig. [2](#Fig2){ref-type="fig"}. Means with standard error for the two dependent measures collected are presented in Table [1](#Tab1){ref-type="table"}.

Figure 1: Scatter plots of individual threshold data recorded at rest (in blue) together with their mean (in black), plotted both in mA (left panel) and as a ratio (dB, right panel). Vertical error bars represent the standard error of the mean.

Figure 2: Scatter plots of individual sensitivity (*d'*) data (upper row) and relative criterion *c'* data (lower row) together with means and their corresponding standard error.

Table 1: Mean behavioural data (*d'* and *c'*, with SE in parentheses) for all conditions tested.

| Timing | Vision | *d'* Rest | *d'* Move | *c'* Rest | *c'* Move |
|---|---|---|---|---|---|
| Preparation | Full | 3.9 (0.6) | 3.5 (0.3) | 0.17 (0.02) | 0.38 (0.10) |
| Preparation | Limited | 3.9 (0.5) | 3.8 (0.2) | 0.12 (0.01) | 0.17 (0.04) |
| Execution | Full | 4.1 (0.4) | 3.9 (0.2) | 0.13 (0.01) | 0.19 (0.03) |
| Execution | Limited | 3.9 (0.5) | 3.9 (0.1) | 0.12 (0.01) | 0.12 (0.01) |
| Before grasp | Full | 4.0 (0.7) | 3.7 (0.2) | 0.17 (0.03) | 0.20 (0.03) |
| Before grasp | Limited | 3.7 (0.8) | 3.1 (0.3) | 0.19 (0.05) | 0.41 (0.13) |
| While lifting | Full | 4.0 (0.6) | 3.0 (0.3) | 0.15 (0.02) | 0.45 (0.09) |
| While lifting | Limited | 3.7 (0.7) | 3.0 (0.3) | 0.17 (0.03) | 0.48 (0.14) |

Sensitivity (d') {#Sec4}
----------------

The existence of sensory suppression was clearly indicated by a significant main effect of TIMING \[*F*(3,42) = 8.84, *p* \< 0.001, *η*^2^~*p*~ = 0.387\]. That is, tactile sensitivity was significantly lower while lifting the object \[*M* = 3.45, *SE* = 0.20\] as compared to while preparing the movement \[*M* = 3.79, *SE* = 0.16, *F*(1,14) = 9.02, *p* = 0.009, *η*^2^~*p*~ = 0.392\]. Similarly, significant perceptual decrements were evident for the preparatory phase \[*F*(1,14) = 6.82, *p* = 0.021, *η*^2^~*p*~ = 0.327\], the just before grasp phase \[*M* = 3.63, *SE* = 0.19, *F*(1,14) = 9.62, *p* = 0.008, *η*^2^~*p*~ = 0.407\], and the lifting phase \[*F*(1,14) = 26.57, *p* \< 0.001, *η*^2^~*p*~ = 0.655\], relative to the execution phase \[*M* = 3.96, *SE* = 0.12\]. A significant main effect of HAND \[*F*(1,14) = 14.35, *p* = 0.002, *η*^2^~*p*~ = 0.506\] indicated that resting hand sensitivity \[*M* = 3.93, *SE* = 0.14\] was, as expected, significantly higher than that of the moving hand \[*M* = 3.49, *SE* = 0.19\].
A significant two-way interaction between TIMING and VISION AVAILABILITY \[*F*(3,42) = 5.76, *p* = 0.002, *η*^2^~*p*~ = 0.292\] was found; post hoc tests indicated that this was given by participants being significantly more sensitive to tactile stimulation in the before grasp timing under conditions of full vision \[*M* = 3.82, *SE* = 0.16\], as compared to the same timing when no vision was available \[*M* = 3.44, *SE* = 0.23, *t*(14) = 3.06, *p* = 0.008, *r* = 0.861\]. Furthermore, a significant interaction between TIMING and HAND was also found on the *d'* data \[*F*(3,42) = 7.85, *p* \< 0.001, *η*^2^~*p*~ = 0.359\]. Participants were significantly more sensitive in detecting the tactile stimulus at the resting hand for both the before grasp \[*M* = 3.85, *SE* = 0.19\] and the while lifting conditions \[*M* = 3.89, *SE* = 0.17\], as compared to their moving hand performance for the same timings of the movement \[before grasp: *M* = 3.41, *SE* = 0.20, *t*(14) = 3.52, *p* = 0.003, *r* = 0.795; while lifting: *M* = 3.00, *SE* = 0.27, *t*(14) = 4.24, *p* \< 0.001, *r* = 0.628\]. Lastly, a three-way interaction between TIMING, VISION AVAILABILITY, and HAND proved to be significant \[*F*(3,42) = 3.20, *p* = 0.033, *η*^2^~*p*~ = 0.186\]. In accordance with our hypothesis, we examined the two-way interactions separately for the resting and the moving hand. For the resting hand, the main effects of TIMING \[*F*(3,42) = 1.59, *p* = 0.207, *η*^2^~*p*~ = 0.102\] and VISION AVAILABILITY \[*F*(1,14) = 3.71, *p* = 0.075, *η*^2^~*p*~ = 0.209\] failed to reach statistical significance. The interaction between the two factors, at the limit of significance \[*F*(3,42) = 2.82, *p* = 0.050, *η*^2^~*p*~ = 0.168\], was given by participants' sensitivity being higher for the full vision condition \[*M* = 4.04, *SE* = 0.16\], as compared to the limited vision condition \[*M* = 3.75, *SE* = 0.19\], only when participants were lifting the object \[*t*(14) = 2.91, *p* = 0.011, *r* = 0.858\]. As regards the moving hand, no main effect of VISION AVAILABILITY was found \[*F*(1,14) = 0.622, *p* = 0.443, *η*^2^~*p*~ = 0.043\], but there was a significant main effect of TIMING \[*F*(3,42) = 10.21, *p* \< 0.001, *η*^2^~*p*~ = 0.422, *ε* = 0.706\]. Planned comparisons indicated a significant performance drop while lifting the object \[*M* = 3.00, *SE* = 0.27\] as compared to both preparing the movement \[*M* = 3.64, *SE* = 0.23, *F*(1,14) = 11.02, *p* = 0.005, *η*^2^~*p*~ = 0.440\] and before grasping the object \[*M* = 3.41, *SE* = 0.20, *F*(1,14) = 5.28, *p* = 0.038, *η*^2^~*p*~ = 0.274\]. Further, participants' sensitivity was significantly lower in the preparation \[*F*(1,14) = 5.24, *p* = 0.038, *η*^2^~*p*~ = 0.272\], before grasp \[*F*(1,14) = 15.55, *p* = 0.001, *η*^2^~*p*~ = 0.526\], and while lifting periods \[*F*(1,14) = 28.67, *p* \< 0.001, *η*^2^~*p*~ = 0.672\], as compared to the execution period \[*M* = 3.90, *SE* = 0.15\]. Lastly, a significant two-way interaction between TIMING and VISION AVAILABILITY \[*F*(3,42) = 5.04, *p* = 0.004, *η*^2^~*p*~ = 0.265\] was evident in the moving hand *d'* data. Post hoc tests indicated that this effect stemmed from the moving right hand sensitivity in the full vision condition \[*M* = 3.67, *SE* = 0.17\] being significantly higher than in the limited vision condition \[*M* = 3.14, *SE* = 0.27\], specifically in the timing of just before grasping the goal object \[*t*(14) = 2.78, *p* = 0.014, *r* = 0.703\].
Relative criterion c' {#Sec5}
---------------------

We concentrate our discussion of the criterion results strictly on the relative criterion *c'*, i.e., the criterion location *c* scaled by sensitivity. It is advised that, for studies where *d'* differs between experimental conditions (as is the case in the present report), sensitivity be taken into account when considering and discussing response bias^[@CR14]^. The analysis indicated a significant main effect of HAND \[*F*(1,14) = 7.17, *p* = 0.018, *η*^2^~*p*~ = 0.339\], with participants more likely to say no tactile stimulus was present when stimulation was delivered at their moving hand \[*M* = 0.30, *SE* = 0.05\], as compared to when stimulation was delivered to their resting hand \[*M* = 0.15, *SE* = 0.02\]. In addition, a significant main effect of TIMING was found \[*F*(3,42) = 3.39, *p* = 0.027, *η*^2^~*p*~ = 0.195, *ε* = 0.655\]. Planned comparisons indicated that this was given by participants' criterion in the lifting timing of the movement being significantly more conservative \[*M* = 0.31, *SE* = 0.05\], as compared to both the preparation \[*M* = 0.21, *SE* = 0.04, *F*(1,14) = 6.73, *p* = 0.021, *η*^2^~*p*~ = 0.325\] and execution periods of the movement \[*M* = 0.14, *SE* = 0.01, *F*(1,14) = 12.88, *p* = 0.003, *η*^2^~*p*~ = 0.479\]. Furthermore, a significant two-way interaction between TIMING and HAND was identified for the relative criterion *c'* data \[*F*(3,42) = 3.72, *p* = 0.046, *η*^2^~*p*~ = 0.210, *ε* = 0.566\]. Post hoc tests indicated that participants were clearly more inclined to report that no stimulus was presented in the before grasp period when stimulation was delivered at the moving hand \[*M* = 0.31, *SE* = 0.07\], as compared to the resting hand \[*M* = 0.18, *SE* = 0.04, *t*(14) = 3.19, *p* = 0.006, *r* = 0.841\]. Similarly, participants were significantly more conservative in reporting moving hand stimuli once the reach was concluded and they were lifting the object \[*M* = 0.46, *SE* = 0.10\], as compared to stimuli delivered to the resting hand for the same lifting timing \[*M* = 0.16, *SE* = 0.02, *t*(14) = 2.87, *p* = 0.012, *r* = −0.224\]; see Fig. [3](#Fig3){ref-type="fig"}.

Figure 3: Depiction of the timing by hand interaction on the average relative criterion *c'* data. With 0 taken to reflect a point of no bias, positive values of relative criterion *c'* indicate a general inclination to respond 'NO'. Vertical error bars represent the standard error of the mean.

Movement kinematics {#Sec6}
-------------------

Means together with their standard error for all the dependent measures considered for analysis are presented in Table [2](#Tab2){ref-type="table"}. Due to the extensive amount of data analysed, we only report those main effects and interactions that were found to be significant in the present study.
Table 2: Mean kinematic data together with SEs (in parentheses). RTs: reaction times; MT: total movement time; PGA: peak grip aperture; TPGA: time to peak grip aperture; PV: peak velocity; TPV: time to peak velocity; PA: peak acceleration; TPA: time to peak acceleration; PD: peak deceleration; TPD: time to peak deceleration.

| Timing | Vision | Hand | RTs, ms | MT, ms | PGA, mm | TPGA, ms | PV, m/s | TPV, ms | PA, m/s^2^ | TPA, ms | PD, m/s^2^ | TPD, ms |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Preparation | Full | Rest | 721 (29) | 875 (39) | 162 (4) | 695 (11) | 1.4 (0.06) | 405 (17) | 6.5 (0.4) | 206 (19) | 5.5 (0.4) | 576 (23) |
| Preparation | Full | Move | 713 (33) | 877 (40) | 161 (4) | 697 (11) | 1.4 (0.06) | 401 (19) | 6.5 (0.4) | 207 (21) | 5.3 (0.4) | 580 (24) |
| Preparation | Limited | Rest | 601 (23) | 926 (29) | 188 (5) | 769 (9) | 1.3 (0.06) | 396 (15) | 6.2 (0.4) | 205 (20) | 5.1 (0.4) | 555 (19) |
| Preparation | Limited | Move | 610 (30) | 926 (34) | 189 (5) | 767 (8) | 1.3 (0.05) | 396 (17) | 6.3 (0.4) | 201 (17) | 5.0 (0.3) | 558 (20) |
| Execution | Full | Rest | 766 (34) | 882 (42) | 161 (4) | 695 (12) | 1.4 (0.06) | 403 (18) | 6.2 (0.4) | 205 (19) | 5.3 (0.4) | 576 (24) |
| Execution | Full | Move | 774 (35) | 881 (40) | 160 (4) | 690 (11) | 1.4 (0.06) | 406 (18) | 6.3 (0.4) | 208 (18) | 5.3 (0.4) | 574 (21) |
| Execution | Limited | Rest | 670 (32) | 927 (35) | 187 (5) | 770 (9) | 1.3 (0.06) | 406 (17) | 6.0 (0.4) | 208 (18) | 4.9 (0.3) | 564 (20) |
| Execution | Limited | Move | 673 (33) | 950 (35) | 187 (5) | 776 (10) | 1.3 (0.06) | 409 (18) | 6.0 (0.4) | 211 (20) | 5.0 (0.4) | 566 (24) |
| Before grasp | Full | Rest | 787 (35) | 891 (42) | 162 (4) | 700 (11) | 1.4 (0.06) | 412 (20) | 6.2 (0.4) | 217 (20) | 5.3 (0.4) | 589 (25) |
| Before grasp | Full | Move | 810 (40) | 877 (38) | 161 (4) | 702 (12) | 1.4 (0.06) | 400 (18) | 6.3 (0.3) | 209 (19) | 5.3 (0.4) | 576 (24) |
| Before grasp | Limited | Rest | 666 (29) | 943 (38) | 189 (5) | 772 (8) | 1.3 (0.06) | 412 (19) | 5.9 (0.4) | 213 (21) | 4.9 (0.4) | 575 (22) |
| Before grasp | Limited | Move | 701 (26) | 928 (34) | 188 (5) | 771 (12) | 1.3 (0.06) | 398 (16) | 6.1 (0.4) | 210 (15) | 5.0 (0.4) | 558 (21) |
| While lifting | Full | Rest | 773 (34) | 879 (39) | 161 (4) | 686 (10) | 1.4 (0.06) | 396 (17) | 6.3 (0.4) | 205 (19) | 5.4 (0.4) | 579 (26) |
| While lifting | Full | Move | 783 (38) | 886 (42) | 161 (5) | 694 (10) | 1.4 (0.06) | 407 (21) | 6.3 (0.4) | 209 (19) | 5.4 (0.5) | 578 (27) |
| While lifting | Limited | Rest | 707 (41) | 941 (43) | 187 (5) | 784 (9) | 1.3 (0.06) | 410 (20) | 6.0 (0.4) | 215 (18) | 4.8 (0.4) | 582 (27) |
| While lifting | Limited | Move | 672 (27) | 931 (39) | 188 (4) | 763 (10) | 1.3 (0.06) | 405 (20) | 6.1 (0.4) | 205 (18) | 5.0 (0.4) | 566 (24) |

Timing of tactile stimulation {#Sec7}
-----------------------------

Reaction times differed as a function of the TIMING of tactile stimulation delivery \[*F*(1,13) = 25.47, *p* \< 0.001, *η*^2^~*p*~ = 0.662\]. Specifically, participants' RTs were significantly faster in the preparation period \[*M* = 660.97 ms, *SE* = 25.83 ms\], as compared to the execution \[*M* = 720.55 ms, *SE* = 30.90 ms, *F*(1,13) = 37.17, *p* \< 0.001, *η*^2^~*p*~ = 0.741\], before grasp \[*M* = 740.68 ms, *SE* = 30.46 ms, *F*(1,13) = 46.04, *p* \< 0.001, *η*^2^~*p*~ = 0.780\], and while lifting periods \[*M* = 733.83 ms, *SE* = 30.81 ms, *F*(1,13) = 21.87, *p* \< 0.001, *η*^2^~*p*~ = 0.627\], with the execution period RTs significantly faster than the before grasp period \[*F*(1,13) = 7.15, *p* = 0.019, *η*^2^~*p*~ = 0.355\]. Further, a main effect of the TIMING of tactile stimulation delivery was found on the mean peak grip aperture \[PGA, *F*(3,39) = 3.59, *p* = 0.022, *η*^2^~*p*~ = 0.216\]. Planned comparisons indicated that the average PGA was significantly smaller when stimulation was delivered during the execution period of the movement \[*M* = 173.68 mm, *SE* = 4.03 mm\], as compared to both the preparatory phase \[*M* = 175.06 mm, *SE* = 4.32 mm, *F*(1,13) = 5.15, *p* = 0.041, *η*^2^~*p*~ = 0.284\] and the just before grasp phase \[*M* = 175.05 mm, *SE* = 4.34 mm, *F*(1,13) = 6.57, *p* = 0.024, *η*^2^~*p*~ = 0.336\]. Similarly, the mean peak acceleration (PA) was also influenced by the TIMING of tactile stimulation delivery \[*F*(3,39) = 3.68, *p* = 0.020, *η*^2^~*p*~ = 0.220\].
This effect was given by participants' reaches exhibiting, on average, a significantly elevated PA when the tactile stimulation was delivered during the preparatory phase of the movement \[*M* = 6.36 m/s^2^, *SE* = 0.34 m/s^2^\], as compared to the execution period \[*M* = 6.14 m/s^2^, *SE* = 0.39 m/s^2^, *F*(1,13) = 9.16, *p* = 0.010, *η*^2^~*p*~ = 0.413\], the just before grasp period \[*M* = 6.12 m/s^2^, *SE* = 0.37 m/s^2^, *F*(1,13) = 5.33, *p* = 0.038, *η*^2^~*p*~ = 0.295\], as well as the lifting period of the movement \[*M* = 6.18 m/s^2^, *SE* = 0.40 m/s^2^, *F*(1,13) = 8.95, *p* = 0.010, *η*^2^~*p*~ = 0.408\]. The higher PA for stimulation delivered in the preparatory phase, together with the faster reaction times found for the same period, could signal the typical arousal effect found for reaction times, which are faster when a tactile stimulus is delivered in connection to another sensory stimulus, in our case the go signal to initiate the movement^[@CR17],[@CR18]^.

Vision availability {#Sec8}
-------------------

Availability of vision affected most kinematic measures throughout the duration of the reach-to-grasp movement; see Fig. [4](#Fig4){ref-type="fig"}. That is, participants were significantly slower to initiate the movement under conditions of full vision \[*M* = 766 ms, *SE* = 33 ms\] as compared to the limited vision condition \[*M* = 662 ms, *SE* = 26 ms, *F*(1,13) = 47.15, *p* \< 0.001, *η*^2^~*p*~ = 0.784\]. Their total movement time was significantly longer under conditions of limited vision \[*M* = 934 ms, *SE* = 35 ms\] as compared to the full vision condition \[*M* = 881 ms, *SE* = 40 ms, *F*(1,13) = 43.62, *p* \< 0.001, *η*^2^~*p*~ = 0.770\]. The lack of vision affected the peak grip aperture as well, with participants exhibiting a significantly larger PGA when no vision was available \[*M* = 188.08 mm, *SE* = 5 mm\] relative to the full vision condition \[*M* = 161.06 mm, *SE* = 4 mm, *F*(1,13) = 125.07, *p* \< 0.001, *η*^2^~*p*~ = 0.906\]. Relatedly, participants on average achieved their PGA significantly later when no vision was available \[*M* = 772 ms, *SE* = 7 ms\], as compared to those times when they were allowed full vision during movement \[*M* = 695 ms, *SE* = 10 ms, *F*(1,13) = 107.22, *p* \< 0.001, *η*^2^~*p*~ = 0.892\]. Lastly, as expected, the transport component of the grasp was clearly affected when no visual information was available during the reach-to-grasp movement, with significant decrements recorded for mean peak velocity \[*M* = 1.32 m/s, *SE* = 0.06 m/s\], mean peak acceleration \[*M* = 6.08 m/s^2^, *SE* = 0.04 m/s^2^\], and mean peak deceleration \[*M* = 4.97 m/s^2^, *SE* = 0.04 m/s^2^\], as compared to those transport measures recorded under conditions of full vision \[PV: *M* = 1.38 m/s, *SE* = 0.06 m/s, *F*(1,13) = 14.05, *p* = 0.002, *η*^2^~*p*~ = 0.519; PA: *M* = 6.32 m/s^2^, *SE* = 0.04 m/s^2^, *F*(1,13) = 13.44, *p* = 0.003, *η*^2^~*p*~ = 0.508; PD: *M* = 5.35 m/s^2^, *SE* = 0.04 m/s^2^, *F*(1,13) = 8.38, *p* = 0.013, *η*^2^~*p*~ = 0.392\].

Figure 4: Depiction of the vision availability main effect for various kinematic markers tested (from left to right: total movement time, peak grip aperture, time to peak grip aperture, peak velocity, and peak acceleration). Vertical error bars represent the standard error of the mean.
Timing by Hand interaction {#Sec9}
--------------------------

A significant interaction between the TIMING of tactile stimulation delivery and the HAND executing the movement was found on the total movement time data \[*F*(3,39) = 2.91, *p* = 0.047, *η*^2^~*p*~ = 0.183\]; however, none of the post hoc tests conducted survived the correction for multiple comparisons. The same interaction between the TIMING of tactile stimulation delivery and the HAND executing the movement was also found on the time to peak velocity \[*F*(3,39) = 3.62, *p* = 0.021, *η*^2^~*p*~ = 0.218\]. Post hoc tests indicated that this result was given by a faster time to peak velocity recorded for stimulation delivered in the period before grasp at the moving hand \[*M* = 398.88 ms, *SE* = 17 ms\], as compared to stimulation delivered to the resting hand \[*M* = 411.91 ms, *SE* = 19 ms; *t*(13) = 3.65, *p* = 0.003, *r* = 0.992\].

Discussion {#Sec10}
==========

This study investigated the time course of the contribution of visual information to tactile suppression during the execution of a goal-directed reach-to-grasp movement. We focused on the stimulation delivery timings of before grasping an object, as well as when lifting said object, with the purpose of elucidating the specific timing of the previously reported reduction in tactile suppression when vision is available^[@CR6]^. Our participants reached for, grasped, and lifted an object placed centrally on the table in front of them. We expected tactile suppression for the entire time the hand was in motion. Our results indicate clear tactile suppression for the moving hand, as compared to the resting hand. As expected, tactile suppression magnitude differs among the stimulus delivery timings^[@CR3],[@CR4],[@CR7]^, with the worst performance for the moving hand observed the moment before grasping the goal object. A similar pattern was reported for reaches^[@CR10]^, and a significant deterioration in movement accuracy was reported following proprioceptive tendon vibration in the later stages of a goal-directed movement^[@CR19]^. Even though performance deteriorates at the moving hand for both the preparatory and execution phases of the movement, the recorded average sensitivity is very good and comparable to previous reports^[@CR5],[@CR7],[@CR11]^. This may reflect an almost-ceiling effect given by the utilisation of the 90% detection threshold; future studies need to test a significantly lower threshold (e.g., uniform suppression was described throughout movement for discrimination thresholds tested at 79.4% correct responses^[@CR3],[@CR20]^). Having such a high threshold for detection likely facilitates the "pop-out" of those tactile features known to be easily detected during movement. For example, when participants perform speeded detection tasks, tactile response times tend to be faster specifically for the movement execution period, as compared to movement preparation^[@CR21]^. In a similar fashion, enhanced brain responses have been documented over the execution period of the movement in response to uninformative tactile probes delivered to a moving hand, with the authors suggesting that the processing of incoming tactual information is prioritized with the potential purpose of adjusting the ongoing motor plan should an unexpected event occur^[@CR22]^. Importantly, suppression was maximal for *the moving hand* specifically at those timings of interest of just before grasping the object and while lifting the object.
The availability of visual information clearly influenced participants' tactile sensitivity: their sensitivity to detect a tactile stimulus delivered to their moving hand just before grasping the object was significantly higher when they performed the movement under full vision conditions. A likely contributing factor to this tactile enhancement from vision, as found here, is the well-demonstrated fact that we reliably tend to fixate near the index finger's future contact points on the object^[@CR15],[@CR16],[@CR23]^. Additionally, this enhancement of what is felt just before grasping an object raises the question of what specific type of visual modulation is at play. Specifically, is sensory enhancement at grasp driven by visual attention? Furthermore, is the observed sensory enhancement a direct result of the specific type of visual information available during the reach? If the answer to the latter question is affirmative, then which visual cues contribute to improved tactile sensitivity just before grasping an object: vision of the index finger or, rather, vision of the hand and/or object in general? Recent studies indicate that the specific visual information made available differentially affects the movement profile of the hand^[@CR24]^. An additional explanation for the enhanced sensitivity found in the full vision condition is that the timing of contact between hand and object could be predicted very reliably when vision was available, as compared to the limited vision condition. This improved temporal prediction could be the trigger for the better tactile detection performance, a result supported by the shorter total movement times recorded when vision was available. That is, vision allows the (external) tactile stimulation to be reliably distinguished from any tactile feedback expected or encountered when making contact with the object. The specific visual contribution needs to be ascertained, especially because once the grasp has taken place and the participants are engaged in lifting the object off the table surface, our results further highlight a significantly improved tactile performance at *the resting hand*. For this reason, an additional explanation could be that this visually-triggered enhancement at grasp, and/or lift, simply results from the limb being seen, an explanation in line with the classical tactile spatial attention modulations demonstrated in a resting state of the body^[@CR25],[@CR26]^. It is important to note that our behavioural results demonstrated a clear movement effect on tactual sensation, and this effect was accompanied by a criterion shift. Specifically, participants were more likely to report a lack of tactile stimulation when this was delivered at their moving right hand, as compared to stimulation delivered to their resting left hand. These results are in line with previous reports of a significant conservative criterion shift once a goal-directed movement is initiated^[@CR7],[@CR11],[@CR27],[@CR28]^. Crucially, the availability of visual information did not affect the relative criterion data, suggesting that the conservative criterion shift is a purely tactually-driven effect, most likely reflecting the perceptual uncertainty given by the ongoing movement (see^[@CR1]^, for further discussion). A further point of discussion must acknowledge *how* vision availability affects the movement profile of the hand.
As expected^[@CR29]--[@CR31]^, the movement profiles displayed significantly fewer features indicative of closed-loop control when vision was removed. Specifically, movements became longer, with significantly later-occurring and larger peak grip apertures, as well as significant decrements in peak velocity, peak acceleration, and peak deceleration. Moreover, the removal of visual cues led to significantly faster reaction times to initiate the movement. While this result might seem counterintuitive at first, faster reaction times in the dark likely reflect the exact timing of vision removal, e.g., over the preparatory phase of the movement in the case of our study. Participants are faster to initiate movement so as to act before their representation of the movement space degrades over time^[@CR31],[@CR32]^. Additionally, in line with the finding that our eyes land at the goal location at around the same time as the hand achieves peak acceleration^[@CR33]^, our results indicate that participants achieve peak velocity faster when tactile stimulation is delivered to the moving hand, as compared to stimulation delivered to the resting hand, specifically just before grasping the goal object. Taken together, we further confirm the existence of tactile suppression throughout the entire duration of a goal-directed movement. Furthermore, our data indicate that the visual system is at work to counteract this perceptual decrement and acts to enhance what is felt at key grasp timings, such that what we feel at our moving hand is enhanced just before our digits land on the object. Additionally, the resting hand's tactile sensitivity seems to also benefit from visual enhancement once the grasp has been completed and the moving hand is actively making use of the sensory feedback available to perform the lift of the object. Visual availability therefore does not prove beneficial for the lifting phase at the moving hand, but rather seems to work in favour of enhancing what is felt at the resting hand. This would allow our eyes to monitor the next points of interest once the object has been grasped and its lifting is ongoing. Future studies need to investigate the exact contribution of visual information availability at the moving/resting effectors for different action goals.

Methods {#Sec11}
=======

Participants {#Sec12}
------------

Twenty participants took part in this study; however, we excluded data from five participants due to technical problems experienced during data collection. The remaining 15 participants (6 male) had a mean age of 26.06 years (SD = 7.76). All participants reported normal or corrected-to-normal vision and no known impairment in their sense of touch. The experiment took approximately 120 minutes to complete and participants were remunerated 15 EUR for taking part. The study received ethical clearance (CPP SUD EST II) and written informed consent was obtained from all participants before beginning the experiment. All participants were debriefed with respect to the study purpose at the end of the experiment. All research was performed in accordance with relevant guidelines (i.e., Public Health Code, Title II of the first book on biomedical research) and regulations (i.e., authorized by AFSSAPS, Agence Française de Sécurité Sanitaire des Produits de Santé, the French Agency for Sanitary Security of Health Products). This study conforms to the Declaration of Helsinki and to all subsequent amendments (Declaration of Helsinki, 1964, 2013).
Apparatus {#Sec13}
---------

The experiments were conducted in a dark room with illumination provided by a table-top lamp. Participants reached for and grasped a custom-made rectangular object (two-thirds wood and one-third styrofoam; 10 cm tall, 3.8 cm wide, 68 g mass) placed on the table in front of them. See Fig. [5a](#Fig5){ref-type="fig"} for a depiction of the object utilized in the study.

Figure 5: (**a**) *Experimental set-up*. Participants start each trial by pinch-grasping the start IRED marker. Tactile stimulation could be delivered while preparing the movement, during movement execution (mid-way from start position to goal object, as represented by the dotted line), shortly before the grasp (gray bars indicate spatial landing positions, i.e., 0.5 cm before landing on the object, irrespective of elevation, for both index and thumb), and while lifting the object. (**b**) *Trial timeline*. The auditory go signal is depicted with a thicker bar, movement initiation in yellow, grasp in blue. (**c**) *Experimental design*. P is Preparation, E is Execution, B is Before grasp, and L is While lifting.

Participants wore a pair of liquid crystal display goggles (PLATO goggles, Translucent Technologies, Toronto, ON, Canada) and headphones (ATH-PRO5MK3, Audio-Technica, Tokyo, Japan). Tactile stimulation was delivered by means of two isolated bipolar constant current stimulators (Digitimer DS5, Digitimer Ltd, Welwyn Garden City, UK), driven through an NI data acquisition device (NI USB-6001, National Instruments, Austin, TX, US). Participants had one electrode attached to the ventral part of the fingertip and the ground attached to the middle phalanx of both index fingers (Neuroline Surface Electrodes 70015-K/12, Ambu AS, Ballerup, Denmark). Movement of participants' right hand was tracked with an Optotrak Certus (NDI, Waterloo, ON, Canada), positioned 2.3 m to the left of participants' start position. Participants wore three infra-red emitting diodes (IREDs) positioned on the index, thumb, and wrist. Extra IRED markers were attached to the table at the start position, to the top of the object, as well as just underneath the object. The experiment was conducted in Matlab (Matlab 2013a, MathWorks, Natick, MA, US), utilizing custom-written scripts in connection with functions from several available toolboxes, such as the Psychophysics Toolbox v3^[@CR34],[@CR35]^, the Optotrak Toolbox (V. H. Franz, <http://www.ecogsci.cs.uni-tuebingen.de/OptotrakToolbox>), and the Data Acquisition Toolbox.

Procedure {#Sec14}
---------

The experiment consisted of two phases: a thresholding procedure performed at rest and the experimental phase involving goal-directed movements of the right hand. The tactile thresholding procedure itself comprised two phases. In the first phase, aimed at finding the preliminary detection threshold, the experimenter instructed participants to sit with their eyes closed and both forearms pronated on the table top. For each hand, we used two intermixed limits staircases^[@CR36]--[@CR38]^, with a lower staircase starting at 0 mA (i.e., no stimulation) and a higher one starting at 2.2 mA; that is, four staircases were open in parallel at the beginning of the procedure. In each trial we delivered a 2 ms square wave pulse stimulus followed, 500 ms later, by an auditory beep (450 Hz, 100 ms) requesting a response from the participants.
Participants made a foot-pedal response (stimulus present or absent), irrespective of the hand where this stimulus could be delivered. The inter-trial interval was set to 2 s. The descending staircases' step was set at 0.05 mA and the step was doubled for the ascending staircases. Tactile stimulation for the ascending staircases increased one step after each NO response, while it kept the same value following a YES response. Tactile stimulation for the descending staircases decreased one step following a YES response and kept the same value following a NO response. The procedure terminated after four consecutive YES responses for the ascending staircases. The values at the time of termination were taken as the preliminary 90% detection threshold. In a second phase, to further test the stability of the detection threshold for each hand, we took the preliminary 90% detection threshold values and the corresponding values of the descending staircases at the time of termination, and derived 6 more values (by adding, and then also subtracting, the step value, the doubled step value, or the tripled step value to/from the detection threshold and the corresponding descending-staircase value, respectively). Altogether we thus computed, for each hand, 8 individual stimulation values. In a separate procedure, for each hand, we administered these 8 values 10 times each, together with 40 trials without stimulation, all randomly intermixed, giving a total of 200 trials per participant. Our particular aim with this extra procedure was to test for false alarms, something that the classical adaptive psychophysical measures do not allow. At the end of this procedure, the final 90% detection threshold was chosen by the experimenter as the final value of 90% detection stimulation or, if more than one was available, the highest 90% detection value. At the beginning of each trial in the experimental phase, participants pinch-grasped the IRED located at the start location (see Fig. [5a](#Fig5){ref-type="fig"}). The object was shown for one second. Depending on the trial type (either full vision or limited vision), participants further viewed (or not) the object for a randomly chosen duration between 1 and 1.5 seconds (the randomized foreperiod). This foreperiod was followed by the delivery of the auditory go signal (a beep, 450 Hz, 100 ms). Participants were instructed to reach forward and grasp the object following the delivery of the go signal, briefly lift it off the table, place it back, and return to the start position. They were instructed to initiate movement only upon hearing the go signal and to execute an accurate movement at a comfortable speed. Once they returned to the start position, they responded as to whether they had felt the tactile stimulus or not, by means of two foot pedals placed under the table. Response assignments to the left and right pedal (by the ipsilateral foot) were counterbalanced across participants.
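The staircase logic of the thresholding procedure described above can be sketched in a few lines of MATLAB (illustrative only: a simulated, deterministic observer stands in for the real DS5 stimulation and foot-pedal responses, and trueThreshold is a made-up value):

```matlab
% Minimal sketch of one ascending limits staircase (illustrative).
stepDesc       = 0.05;            % descending-staircase step, mA
stepAsc        = 2 * stepDesc;    % ascending step is doubled
intensity      = 0;               % ascending staircase starts at 0 mA
trueThreshold  = 0.8;             % hypothetical observer threshold, mA
consecutiveYes = 0;
while consecutiveYes < 4          % terminate after 4 consecutive YES responses
    saidYes = intensity > trueThreshold;       % stand-in for a real response
    if saidYes
        consecutiveYes = consecutiveYes + 1;   % intensity held after YES
    else
        consecutiveYes = 0;
        intensity = intensity + stepAsc;       % ascending: step up after NO
    end
end
preliminaryThreshold = intensity;  % taken as the preliminary 90% threshold
```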
The tactile stimulus (a 2 ms square wave, its amplitude established during the thresholding procedure) could be delivered at four different timings: *(1)* during *movement preparation* (following the initial one-second object preview, the stimulus was delivered halfway into the randomized foreperiod); *(2)* during *movement execution* (delivered once the hand had travelled more than 15 cm from the start position, that is, half of the total distance); *(3)* *just before the fingers contacted the object* (when the hand was still in motion and both the index and the thumb were detected within less than 0.5 cm from landing on the object); and *(4)* *while lifting the object* (when the hand was in motion lifting the object, and the IRED marker positioned underneath the object became visible). See Fig. [5b](#Fig5){ref-type="fig"} for a depiction of the trial timeline.

Design {#Sec15}
------

The experimental phase consisted of 4 blocks of 64 trials each, for a total of 256 trials. Half of the trials were stimulus-present trials, whereas in the other half no stimulation was delivered. Half of the total number of trials were conducted under full vision for the entire duration of the trial, whereas for the remaining half participants were given only 1 second of visual information at the beginning of the trial, with the reach-and-grasp movement being performed with closed goggles. Further, if tactile stimulation was present, in half of the trials it was delivered at the resting left hand, and in the other half at the moving right hand. Lastly, for each type of vision availability and for each hand, stimulation could be delivered during either the motor preparation period, during execution, just before the grasp, or while lifting the goal object. See Fig. [5c](#Fig5){ref-type="fig"} for the experimental design.

Data collection and reduction {#Sec16}
-----------------------------

Data from the six IREDs were sampled at 250 Hz for a total time of 4 s. For each trial, the displacement data were filtered offline with a second-order dual-pass Butterworth filter, employing a low-pass cut-off frequency of 10 Hz. The analysis program derived velocities by differentiating the displacement data with a three-point central finite difference algorithm, and differentiated further to obtain acceleration. The kinematic analysis program defined movement initiation by determining the first sample after which the velocity of the IRED attached to participants' wrist attained and maintained a value of 50 mm/s for ten consecutive frames (i.e., 40 ms at 250 Hz). Conversely, movement offset was defined as the point at which the wrist IRED velocity fell below 50 mm/s and remained below this criterion for ten consecutive frames. If visibility of any of the three IREDs attached to the participants' hand was lost for the duration of the trial, the trial was repeated at the end of the experiment.

Statistical analysis {#Sec17}
--------------------

Statistical analysis was performed on both the behavioural response data and the kinematic movement data recorded. The data that support the findings of this study are available from the corresponding author, \[GJ\], upon request.

Behavioural data analysis {#Sec18}
-------------------------

For each participant and for each of the conditions (see Fig. [5c](#Fig5){ref-type="fig"}), the hit rate (i.e., YES responses when a tactile stimulus was delivered) as well as the false alarm rate (i.e., YES responses when no tactile stimulus was present) were calculated.
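Before turning to the behavioural measures, the kinematic reduction just described can be illustrated with a minimal MATLAB sketch (the synthetic wrist trace and its parameters are our stand-ins for real IRED data; butter and filtfilt are Signal Processing Toolbox functions):

```matlab
% Sketch of the kinematic data reduction (illustrative).
fs = 250;                                  % sampling rate, Hz
t  = (0:999)'/fs;                          % 4 s of samples
wristPos = 300 ./ (1 + exp(-8*(t - 2)));   % synthetic 300 mm reach profile
[b, a]   = butter(2, 10/(fs/2));           % 2nd-order Butterworth, 10 Hz low-pass
smoothed = filtfilt(b, a, wristPos);       % dual-pass filtering (zero phase lag)
velocity = gradient(smoothed, 1/fs);       % three-point central difference, mm/s
above    = velocity > 50;                  % 50 mm/s movement criterion
% Movement onset: first sample at which the criterion is attained and then
% maintained for ten consecutive frames.
onset = find(movsum(double(above), [0 9]) == 10, 1, 'first');
fprintf('Movement onset at %.0f ms\n', 1000 * t(onset));
```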
Experimental conditions were split considering the manipulated experimental variables of TIMING of stimulation (preparation versus execution versus before grasp versus while lifting), VISION AVAILABILITY (full vision versus limited vision), and HAND receiving the stimulation (resting hand versus moving hand). These proportions were normalized, and sensitivity (*d'*) and the relative criterion *c'* were derived according to signal detection theory (SDT^[@CR13],[@CR14]^; see also^[@CR27],[@CR28]^ for similar methods). Whenever accuracy was perfect for a given condition (i.e., participants always detected the tactile stimulus), or no false alarms were recorded, the proportions of 1 and 0 were adjusted to 1 − 1/(2*N*) and 1/(2*N*), respectively, where *N* is the number of trials for a given condition on which the proportion was calculated^[@CR39]^. For each of the derived SDT measures (*d'* and *c'*) we performed repeated-measures analyses of variance (ANOVAs) with the factors TIMING of tactile stimulation (preparation versus execution versus before grasp versus while lifting), VISION AVAILABILITY (full vision versus limited vision), and HAND receiving the stimulation (resting hand versus moving hand). Mauchly's test of sphericity was used to identify violations of the sphericity assumption. If the assumption was violated, then the Greenhouse-Geisser correction was applied to correct the degrees of freedom; corrected *p* values are reported throughout. Hypothesis-driven analyses of variance followed any three-way interaction found in the data. Sidak-corrected paired-samples *t*-tests followed two-way interactions found in the data. Partial *η*^2^ is reported as an effect size estimate for the ANOVA results; the correlation coefficient *r* is used as the effect size for the *t*-tests. For all the analyses, only the significant main effects and interactions found in the data are reported.

Kinematic data analysis {#Sec19}
-----------------------

The kinematic dependent measures considered were: reaction time (RT), total movement time (MT), peak grip aperture (PGA), peak velocity (PV), peak acceleration (PA), and peak deceleration (PD), together with their latencies, that is, the time needed to reach each of the PGA, PV, PA, and PD. For each of these kinematic measures, separate repeated-measures ANOVAs were conducted with the same factors as for the behavioural statistical analysis. One participant was excluded from the kinematic analysis as IRED marker visibility was consistently lost during movement.

**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Conceived and designed the study: G.J., G.B., A.F. Performed the study: X.M., G.J. Analysed the data: G.J. Contributed analysis tools: G.J., F.L.C., G.B. Wrote the paper: G.J., F.L.C., G.B., A.F.

Competing Interests {#FPar1}
===================

The authors declare no competing interests.
66,076,458
Q: IF statement using output of SPROC SQL

I am creating stored procedures (sprocs) to perform operations on the tables in my database. I have a sproc SubjectExists that returns 1 if the subject name entered is in the Subject table and 0 if it does not exist.

CREATE PROCEDURE SubjectExists
@SubjName varchar(20)
AS
SELECT CASE WHEN EXISTS(
    SELECT *
    FROM Subject
    WHERE Subject_Name = @SubjName
)
THEN CAST (1 AS BIT)
ELSE CAST (0 AS BIT) END

I am now making another sproc that deletes a subject from the table. I want to make this sproc use SubjectExists: if the output is 1 (i.e., the subject does exist) it deletes the subject, and if the output is 0 it does nothing. How would I go about doing this? I have tried experimenting with the below, but no luck so far.

CREATE PROCEDURE DeleteSubject
@SubjName varchar(20)
AS
IF (EXEC StudentExists Bob)
    DELETE FROM Subject WHERE Subject_Name = @SubjName;

Can anyone please guide me as to how I would do this? Thanks

A: First, your stored procedure should return a value:

CREATE PROCEDURE SubjectExists
@SubjName varchar(20)
AS
BEGIN
    DECLARE @ReturnValue int

    SELECT @ReturnValue = CASE WHEN EXISTS(
        SELECT *
        FROM Subject
        WHERE Subject_Name = @SubjName
    )
    THEN CAST (1 AS BIT)
    ELSE CAST (0 AS BIT) END

    RETURN @ReturnValue
END

Then, inside DeleteSubject, you can capture that return value in a variable and delete only when the subject exists:

CREATE PROCEDURE DeleteSubject
@SubjName varchar(20)
AS
BEGIN
    DECLARE @FooValue int;

    EXEC @FooValue = SubjectExists @SubjName

    IF @FooValue = 1
    BEGIN
        DELETE FROM Subject WHERE Subject_Name = @SubjName;
    END
END
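Note that you can also perform the check inline without a helper procedure, using IF EXISTS (SELECT 1 FROM Subject WHERE Subject_Name = @SubjName) directly before the DELETE. And since a DELETE whose WHERE clause matches no rows simply does nothing, the existence check is only needed if you want to branch on the result elsewhere.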
66,076,578
No effect of kinins on DNA synthesis in LNCaP prostate cancer cells. 1. The prostate has kininogenase activity and expresses members of the tissue kallikrein gene family. The present study examined the effect of exogenous and endogenous kinins on the growth of LNCaP prostate adenocarcinoma cells. 2. The rate of DNA synthesis was measured by incorporation over 4 h of [3H]-thymidine into a TCA-insoluble fraction of LNCaP cells that had been cultured for 24 h. 3. Increased [3H]-thymidine incorporation was seen in response to 10 nmol/L testosterone (+103 +/- 5 s.e.%), dihydrotestosterone (+113 +/- 14%) and R1881 (+64 +/- 10%) (P < or = 0.001; n = 4). 4. In contrast, 0.05, 5 and 1000 nmol/L lysyl-bradykinin had no effect (15 +/- 4, 10 +/- 9 and 5 +/- 3 s.e.%, respectively; n = 7). Des-Arg9[Leu8]-bradykinin (a B1 receptor antagonist) and/or D-Arg-[Hyp3,Thi5,8,D-Phe7]-bradykinin (a B2 receptor antagonist), 1 nmol/L, and indomethacin, 5 µmol/L, also had little or no effect. 5. In conclusion, kallidin and endogenous kinins and prostaglandins have little or no effect on DNA synthesis, and therefore on the growth of LNCaP cells, in comparison with the two-fold stimulation produced by androgens.
66,076,615
Q: Windows API editbox reflecting SendMessage

I am trying to learn the Windows API. I have managed to create a window with a button and an edit box. I wanted to try to change the text in the edit box when I click the button. Here is the main loop:

while(GetMessage(&Msg, NULL, 0, 0) > 0)
{
    TranslateMessage(&Msg);
    DispatchMessage(&Msg);
}

Here is the window callback:

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch(msg)
    {
        case WM_CREATE:
        {
            HWND hWndEdit = CreateWindowEx(WS_EX_CLIENTEDGE, "EDIT", "",
                WS_CHILD|WS_VISIBLE|ES_MULTILINE|ES_AUTOVSCROLL|ES_AUTOHSCROLL,
                50, 100, 200, 100, hwnd, (HMENU) IDC_EDITBOX, GetModuleHandle(NULL), NULL);
            HWND hWndButton = CreateWindowEx(NULL, "BUTTON", "OK",
                WS_TABSTOP|WS_VISIBLE|WS_CHILD|BS_DEFPUSHBUTTON,
                50, 220, 100, 24, hwnd, (HMENU) IDC_BUTTON, GetModuleHandle(NULL), NULL);
        }
        break;
        case WM_COMMAND:
            switch(LOWORD(wParam))
            {
                case IDC_BUTTON:
                {
                    SendMessage(hWndEdit, WM_SETTEXT, NULL, (LPARAM)"BUTTON");
                }
                break;
                case IDC_EDITBOX:
                {
                    MessageBox(NULL, "EDIT", "editbox", MB_ICONINFORMATION|MB_OK);
                }
                break;
                default:
                    MessageBox(NULL, "default", "Command", MB_ICONINFORMATION|MB_OK);
                    break;
            }
            break;
        case WM_SETTEXT:
        {
            MessageBox(NULL, "SetTEXT", "BOX", MB_ICONINFORMATION|MB_OK);
        }
        break;
        case WM_CLOSE:
            DestroyWindow(hwnd);
            break;
        case WM_DESTROY:
            PostQuitMessage(0);
            break;
        default:
            return DefWindowProc(hwnd, msg, wParam, lParam);
    }
    return 0;
}

When I click the button, I call SendMessage(...), so shouldn't that be picked up in my main loop and sent to WndProc()? If so, then why aren't my switch cases catching it? If not, how do I go about setting up callback functions for this edit box?

EDIT: Full code

#include <windows.h>

#define IDC_BUTTON 101
#define IDC_EDITBOX 102

HWND hWndEdit;

const char g_szClassName[] = "myWindowClass";

//Step 4: the Window Proc
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch(msg)
    {
        case WM_CREATE:
        {
            HWND hWndEdit = CreateWindowEx(WS_EX_CLIENTEDGE, "EDIT", "",
                WS_CHILD|WS_VISIBLE|ES_MULTILINE|ES_AUTOVSCROLL|ES_AUTOHSCROLL,
                50, 100, 200, 100, hwnd, (HMENU) IDC_EDITBOX, GetModuleHandle(NULL), NULL);
            HWND hWndButton = CreateWindowEx(NULL, "BUTTON", "OK",
                WS_TABSTOP|WS_VISIBLE|WS_CHILD|BS_DEFPUSHBUTTON,
                50, 220, 100, 24, hwnd, (HMENU) IDC_BUTTON, GetModuleHandle(NULL), NULL);
        }
        break;
        case WM_COMMAND:
            switch(LOWORD(wParam))
            {
                case IDC_BUTTON:
                {
                    //MessageBox(NULL,"EDIT","editbox", MB_ICONINFORMATION|MB_OK);
                    SendMessage(hWndEdit, WM_SETTEXT, NULL, (LPARAM)"BUTTON");
                }
                break;
                case IDC_EDITBOX:
                {
                    MessageBox(NULL, "EDIT", "editbox", MB_ICONINFORMATION|MB_OK);
                }
                break;
                default:
                    MessageBox(NULL, "default", "Command", MB_ICONINFORMATION|MB_OK);
                    break;
            }
            break;
        case WM_SETTEXT:
        {
            MessageBox(NULL, "SetTEXT", "BOX", MB_ICONINFORMATION|MB_OK);
        }
        break;
        case WM_CLOSE:
            DestroyWindow(hwnd);
            break;
        case WM_DESTROY:
            PostQuitMessage(0);
            break;
        default:
            return DefWindowProc(hwnd, msg, wParam, lParam);
    }
    return 0;
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    WNDCLASSEX wc;
    HWND hwnd;
    MSG Msg;

    //Registering the Window Class
    wc.cbSize = sizeof(WNDCLASSEX);
    wc.style = CS_HREDRAW|CS_VREDRAW;
    wc.lpfnWndProc = WndProc;
    wc.cbClsExtra = 0;
    wc.cbWndExtra = 0;
    wc.hInstance = hInstance;
    wc.hIcon = LoadIcon(NULL, IDI_APPLICATION);
    wc.hCursor = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH) (COLOR_WINDOW);
    wc.lpszMenuName = NULL;
    wc.lpszClassName = g_szClassName;
    wc.hIconSm = LoadIcon(NULL, IDI_APPLICATION);

    if(!RegisterClassEx(&wc))
    {
        MessageBox(NULL, "Window Registration Failed!", "Error!",
            MB_ICONEXCLAMATION | MB_OK);
        return 0;
    }

    //Creating the Window
    hwnd = CreateWindowEx(
        0, //WS_EX_CLIENTEDGE,
        g_szClassName,
        "Inventory",
        WS_OVERLAPPEDWINDOW,
        CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
        NULL, NULL, hInstance, NULL);

    if(hwnd == NULL)
    {
        MessageBox(NULL, "Window Creation Failed!", "Error!",
            MB_ICONEXCLAMATION | MB_OK);
        return 0;
    }

    ShowWindow(hwnd, nCmdShow);
    UpdateWindow(hwnd);

    // Step 3: The Message Loop
    while(GetMessage(&Msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&Msg);
        DispatchMessage(&Msg);
    }
    return Msg.wParam;
}

A: Your calls to CreateWindowEx() are assigning the returned HWNDs to local variables that go out of scope when WM_CREATE is done being processed. Your WM_COMMAND handler is using the hWndEdit variable from a different scope (the one declared at file scope), but that variable is never initialized with the HWND of the edit control. That is why your text never appears.

Regarding WM_SETTEXT, your main window will not receive that message. It is sent directly to the edit control, which does not have a custom WndProc() assigned to it; messages sent to the edit control are handled by the system-provided EDIT class window procedure and never pass through your switch statement. You can send WM_SETTEXT to the edit control, and it will be updated (by the EDIT class's window procedure) as expected, but your MessageBox() will not appear. An edit control does send WM_COMMAND messages to its parent window for various EN_... notifications, like EN_CHANGE, so your main window's WndProc() will call MessageBox() for WM_COMMAND messages related to IDC_EDITBOX.
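To spell out the fix implied by the first paragraph: in WM_CREATE, change "HWND hWndEdit = CreateWindowEx(...)" to "hWndEdit = CreateWindowEx(...)" (dropping the local declaration), so that the handle is stored in the file-scope hWndEdit variable. The SendMessage() call in the IDC_BUTTON handler will then reach the edit control and update its text.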
66,076,637
The present methods relate to screening for possible solid forms of a sample and include solidifying the sample in at least one receptacle defining a capillary space, such as a capillary tube or a well plate. The present methods also relate to screening a sample according to its solid forms and include solidifying the sample in a plurality of receptacles, at least one of which defines a capillary space. The solid form of the sample refers to its arrangement at the molecular or atomic level in the solid. The solid forms generated by the solidification step are analyzed and classified, such as by their x-ray diffraction patterns. The present methods increase the likelihood of generating all or a high percentage of possible solid forms.

In the chemical field, the unpredictability and variability of compounds, mixtures, and processes are well established. Certain chemical compounds or mixtures may have utility for numerous different applications, including vital biological applications, yet a slight change in those compounds or mixtures, even with respect to a single atom, may reduce or eliminate their utility for their beneficial purpose. Similarly, certain chemical processes may have significantly better or worse performance based upon seemingly minor differences. In the pharmaceutical field, a great deal of time, effort and expense is spent on the identification of particular compounds and mixtures that will have a beneficial effect. Furthermore, exhaustive research must be done as to whether such compounds and mixtures will have harmful effects. Once again, even slight differences in chemical composition or structure may yield significant differences in biological activity. Thus, researchers frequently test many different compounds and mixtures for biological activity and other effects, as well as testing different processes and conditions for the preparation of such chemical compounds and mixtures.

The process of thorough analysis of different chemical compounds, elements, mixtures, processes, or structures is commonly referred to as screening. Screening may be a function of time and effort, with the quality or results of screening being a function of the number of samples prepared and/or analyzed as well as the quality of preparation and/or analysis underlying those samples. Screening plays a vital role in the pharmaceutical field, as the most advantageous formulation of a biologically active compound or mixture is frequently found through successful screening processes. However, screening processes can require significant amounts of time, effort and resources. There is a continuous need for improved screening processes having increased reliability and efficiency.

Processes have been used for screening chemical compounds according to their form. When a compound has different solid or crystalline forms, the different forms are frequently referred to as polymorphs of the compound. A "polymorphic" compound as used herein means a compound having more than one solid form. For example, a polymorphic compound may have different forms of its crystalline structure, or different forms based upon hydration, or it may have a crystalline form and an amorphous form. In the past, screening processes have not identified with sufficient consistency and reliability a high percentage of possible solid and semisolid forms. The form of a compound or mixture may have an impact on biological activity.
The same chemical compound may exhibit different properties depending upon which form (such as amorphous or crystalline or semisolid) that compound is in. A "semisolid" form is used herein to indicate materials like waxes, suspensions, gels, creams, and ointments. The term "solid form" herein includes semisolid forms. Furthermore, a chemical compound may exist in different solid forms, and those different solid forms may also exhibit different properties. As a result, different solid forms, including different crystalline forms, of a chemical compound may have greater or lesser efficacy for a particular application. The identification of an optimal solid form is important in the pharmaceutical field, as well as in other fields including nutraceuticals, agricultural chemicals, dyes, explosives, polymer additives, lubricant additives, photographic chemicals, and structural and electronic materials. The new methods described herein may be useful in any of these fields as well as others where solid materials are used.

A chemical compound or mixture may be amorphous, meaning that it is not characterized by a regular arrangement of molecules. Alternatively (or even to a limited extent within a mostly amorphous form), a compound or mixture may be arranged in a crystalline state, where the molecules exist in fixed conformations and are arranged in a regular way. The same compound or mixture may exhibit different properties depending upon which solid form that compound or mixture is in. It is important in the pharmaceutical field as well as other fields to find the form of a chemical compound that exhibits appropriate physical and chemical properties. One form may be more stable or have other properties that make it preferable over other forms. One form of a chemical composition may have better bioavailability, solubility, or adsorption characteristics, or in other ways be more suitable for delivery of therapeutic doses than other forms. As part of a screening method, it may be advisable to evaluate different salts of a chemical compound (or more precisely, different salt compounds of a given biologically active ion).

It is frequently desirable within a screening process to generate, or at least search for, a high percentage of the possible solid forms of a compound or mixture. Past attempts to generate a variety of solid forms involved flash evaporations, cooling under different conditions and/or the addition of seeds of solid material. However, some materials strongly resist the generation of new solid forms. One or more solid forms may be generated by crystallization of the sample. Among the phenomena in crystallization are nucleation and growth. Crystal nucleation is the formation of an ordered solid phase from liquids, supersaturated solutions, saturated vapors, or amorphous phases. Nucleation may be achieved by homogeneous or heterogeneous mechanisms. In heterogeneous mechanisms, some solid particle is present to provide a catalytic effect and reduce the energy barrier to formation of a new phase. Crystals may originate on a minute trace of a foreign substance (either impurities or container walls) acting as a nucleation site. Since nucleation may set the character of the crystallization process, the identity of the foreign substance is an important parameter. The presence of "seeds" of other crystalline compounds in a crystallization environment can be beneficial, detrimental, or both, but in any event, must be considered.
Growth is the enlargement of crystals caused by deposition of molecules on an existing surface. In homogeneous mechanisms, it has been theorized by others that nucleation is achieved spontaneously in a solution comprising the solute to be crystallized in a solvent, typically by evaporation, temperature reduction, or addition of antisolvent. Typically, a solid to be crystallized is present in a solution at, above, or below its saturation point at a given temperature. Crystallization is initiated or facilitated by removing solvent, changing temperature, and/or adding an antisolvent. The solvent may be removed by evaporation or other means. Eventually the solution reaches a point where crystals will grow.

A specific chemical substance may crystallize into different forms or transition from one polymorph form, pseudopolymorph form, or amorphous form to another form. This crystallization into a different form or transition into a different form may be accompanied by other physical or chemical changes. For example, novobiocin has at least two different forms: an amorphous form and a crystalline form. Dog plasma levels of novobiocin vary depending on which form of novobiocin is administered. In one study, two hours after the amorphous form of the drug was administered, the concentration of novobiocin was 29.3 mg/mL. In contrast, when crystalline novobiocin was administered, there was no drug detectable in the dog plasma two hours after the drug was administered. In another example, furosemide has two different crystalline forms, and furosemide solubility in aqueous buffer at pH 3.2 varied depending on which polymorph was studied. After three hours, Form I and Form II had solubilities of approximately 0.025 mg/mL. Under the same conditions and dissolution time, the DMF and dioxane solvates of furosemide had solubilities of approximately 0.035 mg/mL, and Form III had a solubility of approximately 0.045 mg/mL.

It is known to generate crystalline samples in capillary tubes. For example, U.S. Pat. No. 5,997,636 discusses a method for growing crystals within a capillary tube. As another example, D. Amaro-González et al., "Gas Antisolvent Crystallization Of Organic Salts From Aqueous Solution", Journal Of Supercritical Fluids, 17 (2000) 249-258, discloses results of crystallization of lobenzarit, including crystallizations in capillaries. Lobenzarit is an anti-arthritic agent. Amaro-González et al. state that particle size and agglomeration varied depending on the size of the capillary, that the size distribution and particle shape can be controlled using different capillary diameters, and that it is possible to obtain individual crystals without agglomeration. Neither reference discloses that different forms (meaning different arrangements on the molecular or atomic level) were produced, nor does either reference suggest a new method for searching for possible forms or screening a sample according to its form. A different particle size or shape does not necessarily mean there is a different crystal form, since a solid form can crystallize into many different shapes. For example, snowflakes may comprise a single crystal form having many different crystal shapes.

It is also known to subject samples within capillary tubes to various spectroscopic analyses, including diffraction analysis such as x-ray diffraction analysis.
However, in such instances, it has been the common practice to prepare a solid sample outside the capillary tube before it is placed in the capillary tube for analysis. There are several factors that discourage the use of capillary tubes for solidifying compounds or mixtures. One factor is that capillary tubes are more difficult to work with than other containers. Another factor is that there has been no general recognition that the use of capillary spaces may affect reactions or lead to compositional or chemical differences. Thus, since it was believed that the same forms and reactions could be produced in other containers, it is believed that capillary tubes have not been used as an integral part of a screening process or to search for and generate solid and semisolid forms. There is a need for improved screening methods that identify all or a high percentage of possible forms of a compound or mixture. There is a need for improved methods of searching for the possible forms of a sample.

As one aspect, an improved method of searching for possible forms of a sample is provided. The method comprises the steps of disposing the sample on one or more receptacles, where at least one of the receptacles defines a capillary space, and the sample is disposed within the capillary space. The method next comprises solidifying the sample in or on the receptacles to generate at least one form, wherein the generated form(s) is a solid or semisolid. The form(s) is then analyzed and classified, such as according to which form it is.

As another aspect, an improved method of screening a sample according to its form is provided. This method is especially useful for screening a sample comprising a compound or a mixture having biological activity in at least one form of the compound or mixture. The screening method comprises the steps of disposing the sample on a plurality of receptacles, where at least one of the receptacles defines a capillary space, and the sample is disposed within the capillary space. The method next comprises solidifying the sample in or on the receptacles to generate at least one form, wherein at least one form is a solid or semisolid. The method further comprises analyzing at least one form in a manner wherein the analytical result is indicative of the generated form(s), and classifying the generated form(s), such as by form type or according to analytical result. The screening method may be particularly useful where the compound or mixture has at least one form having biological application and it is desirable to determine if other forms are possible.

The present methods may comprise generating at least one other form of the compound or mixture. The sample may comprise a known polymorphic compound or comprise at least one material that is not recognized as a polymorphic compound. The sample may consist essentially of a solution of one compound, or may comprise a mixture of compounds. Preferably, the present methods include disposing the sample on a plurality of receptacles, including at least two different types of receptacles. For example, one portion of a sample may be disposed in a capillary tube that defines a capillary space and another portion of the sample may be disposed on a glass slide that does not define a capillary space. The sample may be prepared in a single batch or in multiple batches. After the portions have solidified, the form disposed in the capillary tube and the form disposed on the slide may be analyzed, classified and compared.
A preferred receptacle defining a capillary space is a capillary tube, and others include a well plate, a block and a sheet with holes or pores of appropriate size and shape. The present methods may further comprise the step of comparing the generated form to a known form. In many cases, the generating step may produce at least one different form of the sample. At least some of the receptacles may be subjected to substantially constant motion during the generating step. For example, a capillary tube may be rotated along its longitudinal axis during the generating step or subjected to centrifuging during the generating step. Centrifuging can be sufficient to concentrate the solid or semisolid at one end of a capillary tube and to facilitate in-situ analysis of the generated forms. Also, variations in centrifuging may provide environmental variation, which is desired in a screening method. Centrifuging may move the sample to the bottom of the receptacle when one end of the receptacle is closed. Centrifuging may be performed at a pressure lower than ambient pressure, or under vacuum. In the present methods, the sample may comprise a compound comprising a biologically active ion or one or more different salts of the compound. A second analyzing step may be performed on generated forms, where the second analyzing step provides data indicative of biological activity or bioavailability. In the present methods, the generated forms may be analyzed by any suitable means, such as methods selected from the group consisting of visual analysis, microscopic analysis, thermal analysis, diffraction analysis, and spectroscopic analysis. Preferred methods of analysis include Raman spectroscopic analysis and x-ray diffraction analysis, more preferably using synchrotron radiation as the radiation source for the analysis. The analysis may determine differences in arrangement of molecules in the solid or determine one or more other characteristics that directly or indirectly reflect the form. In the present methods, the step of analyzing the generated form may comprise analyzing the form without removing it from the receptacle in which it was generated. Thus, the present methods are useful for in situ analysis of generated forms. The use of capillary tubes as receptacles can facilitate such in situ analysis. It may be advantageous to place the sample in at least five receptacles defining capillary spaces, alternatively at least 100 receptacles defining capillary spaces. In some embodiments, a sample is placed in several sets of numerous capillary tubes (for example, from 5 to 2000 capillary tubes, alternatively 5 to 100 capillary tubes), and the different sets are subjected to different methods or conditions of solidification. The solidifying step may comprise crystallizing the sample, or may be selected from the group consisting of solvent evaporation, cooling, anti-solvent addition, gel diffusion, and thin-layer deposition. A supersaturated solution of the sample can be formed before, during, or after the sample is disposed on the receptacle(s). The generating step preferably comprises crystallizing the sample, or alternatively is selected from the group of methods consisting of solvent evaporation, cooling, anti-solvent addition, gel diffusion, and thin-layer deposition (with or without subsequent measures to quickly remove residual solvent, including air of various temperatures forced through the capillaries). 
The receptacle that defines a capillary space can be a capillary tube or an appropriately sized multi-well plate. Alternatively, the receptacle that defines a capillary space may be a block or a sheet made of polymer, glass, or other material, which has holes or pores of a suitable shape and dimensions. Alternatively, some receptacles need not define a capillary space; indeed, it is considered preferable to employ different kinds of receptacles for generating solid and/or semisolid forms of a given sample. Additional receptacles may include a glass slide or a conveyer surface in addition to the receptacle(s) defining capillary spaces. The use of receptacles that define capillary spaces is an improvement over more labor-intensive methods of generating solid forms and enables one to obtain a high percentage of possible solid and semisolid forms. Another advantage of such receptacles is that smaller amounts of the compound or mixture are used.

A compound is a substance composed of atoms or ions in chemical combination. A compound usually is composed of two or more elements, though as used in accordance with the present methods, a compound may be composed of one element. A "polymorph" as used herein means a compound or mixture having more than one solid or semisolid form. The "form" of a compound or mixture refers to the arrangement of molecules in the solid. A "semisolid" form is used herein to indicate materials like waxes, suspensions, gels, creams, and ointments. The term "solid form" herein includes semisolid forms. "Capillary space" is defined herein to mean a space having walls separated by from about 0.1 mm to about 30 mm, preferably from about 0.5 mm to about 5 mm, more preferably from about 0.5 mm to about 2.5 mm, in at least one dimension. A capillary tube having an inner diameter from about 0.5 mm to about 2.5 mm is a preferred receptacle that defines a capillary space in the interior of the capillary tube. It is preferred that the capillary tubes are circular in their interior shapes.

As used herein, the generation of solid and semisolid forms includes any suitable technique for solidification, including but not limited to crystallization. Indeed, the forms which may be sought or generated may include amorphous forms, mixtures of amorphous forms, eutectic mixtures, mixed crystal forms, solid solutions, co-crystals, and other forms.

In certain embodiments of the present methods, solid samples are generated in receptacles through a suitable means of solidification. Typically, a solution containing a compound or mixture to be solidified and a solvent is placed in a receptacle defining a capillary space, such as a capillary tube. The compound or mixture can be present in a solution below, at or above its saturation point at a given temperature at the time it is placed in a capillary tube. Through evaporation, the use of an antisolvent, temperature variation, and/or other suitable means, the system reaches a point where solidification begins. After a suitable amount of time, when solid or semisolid appears, the resulting sample is ready for analysis.

Any suitable crystallization technique may be employed for obtaining crystals. For example, crystals may be obtained through cooling, heating, evaporation, addition of an antisolvent, reactive crystallization, and using supercritical fluids as solvents. Additionally, melt crystallization techniques may be used to generate a solid form.
Through such techniques, the use of a solvent can be avoided. In such techniques, formation of crystalline material is from a melt of the crystallizing species rather than a solution. Additionally, the crystallization process may be done through sublimation techniques. Crystallization may be performed as a seeded operation or an unseeded operation. In a seeded operation, a selected quantity of seed crystals is included in the system. The characteristics of the seed crystals typically influence the characteristics of the crystals generated from the system. Crystallization may be performed by heterogeneous or homogeneous mechanisms.

In other embodiments of the present methods, the form is generated other than by crystallization. The sample may be in the form of a melt that is then added to the capillary tube and allowed to solidify in an amorphous form. Alternatively, the mechanism by which solidification is accomplished may include gel diffusion methods, thin-layer deposition methods, or other suitable methods. Other thermodynamic and kinetic conditions may be employed to solidify the compound or mixture. Cooling of a saturated solution is a typical thermodynamic condition. An addition of a solution of the compound or mixture to an excess of cold anti-solvent is a typical kinetic condition.

Any material capable of forming a solid or semisolid may be used in the present methods. In particular, the present methods are especially suited for materials characterized by molecules which are associated by non-bonded interactions (e.g. van der Waals forces, hydrogen bonding, and Coulombic interaction). The present methods may be advantageously used with small organic drug molecules having a solubility of at least 1 mg/mL in ethanol at ambient conditions. The present methods are also contemplated for use with large organic molecules and inorganic molecules. Examples of compounds having more than one solid form include 5-methyl-2-[(2-nitrophenyl)amino]-3-thiophenecarbonitrile and 4-methyl-2-nitroacetanilide, each of which may be different colors in connection with different forms, and novobiocin and furosemide, which are discussed above. This list cannot be exhaustive, as the present methods may provide significant benefits for novel compounds and mixtures whose identities, or at least whose possible forms, are not yet identified.

The generation of a variety of forms is an important object of screening. A sufficient number of diverse processes and conditions should be employed to maximize the likelihood that a high percentage of possible solid forms of a chemical compound is generated. Samples should be generated under various thermodynamic and kinetic conditions. It is preferable that the generation of solid and/or semisolid forms within the receptacles is carried out under a wide variety of conditions. For example, solids should be generated in the presence and absence of various solvents, as the solvent may play a role in the formation of certain forms. As another example, it is also preferable to prepare samples under different conditions of temperature and pressure, as different solid forms may be favored by different conditions.
The various forms generated may be identified by any suitable method, including but not limited to visual analysis (such as when different forms exhibit different colors), microscopic analysis including electron microscopy, thermal analysis such as determining melting points, diffraction analysis (such as x-ray diffraction analysis, electron diffraction analysis, neutron diffraction analysis, as well as others), infrared spectroscopic analysis, or other spectroscopic analysis. Any appropriate analytical technique that is used to differentiate structural, energetic, or performance characteristics may be used in connection with the present methods. The classifying step may comprise classifying the generated form(s) according to any of the analytical results, such as appearance, solubility, or x-ray diffraction pattern.

In a preferred embodiment, a synchrotron may be used as the source of radiation for conducting diffraction analyses. A synchrotron is a type of particle accelerator that emits high-energy, focused radiation. Synchrotron radiation is the byproduct of circulating electrons or positrons at speeds very close to the speed of light. Synchrotron radiation contains all the wavelengths of the electromagnetic spectrum and comprises the most intense source of wavelengths available in the x-ray and ultraviolet region. Synchrotron radiation allows analysis of smaller quantities of sample than would be practical using other sources of x-ray radiation. One location for research using synchrotron radiation is the Stanford Synchrotron Radiation Laboratory (SSRL), which is funded by the Department of Energy as a national user facility. Another location is Argonne National Laboratory, which is available to outside users on a fee basis. Synchrotron radiation may be used to study structural details of solid samples with a resolution not practically attainable using traditional x-ray instrumentation. This may enable differentiation between different polymorphic forms or compounds that is not attainable with other x-ray radiation sources.

Preferably, the present methods comprise generating more than one form such that a distribution of forms is obtained. Moreover, by generating solid forms in receptacles defining capillary spaces, one may favor the formation of a variety of solid forms and increase the likelihood of generating all or a high percentage of possible forms. The present methods can significantly assist in the identification of the form of a compound or a mixture that is most stable or has other properties that make it preferable over other forms. For example, the present methods can be used as part of a screening method and can improve the likelihood of identifying a form having biological activity such as better bioavailability, solubility, or adsorption characteristics. In some cases, an identified form may have better activity as an active agent.

After the sample is placed in a receptacle, the receptacle may be centrifuged. Centrifugation may be employed for a variety of reasons. First, centrifuging may assist evaporation or concentrate solid or semisolid material at one end of a capillary space. This has advantages in connection with in-situ analysis, in that the generated form will be located at a consistent place in the receptacle. Also or alternatively, centrifuging may be used to provide additional environmental variation, which is desirable in a screening method.
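As a concrete illustration of the classification step described above (a hypothetical sketch, not part of the patent disclosure; the correlation-distance metric and the clustering threshold are assumptions chosen purely for illustration), diffraction patterns collected in situ from the receptacles could be grouped by similarity as follows:

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def classify_forms(patterns, threshold=0.05):
    # patterns: (n_samples, n_angles) array of background-subtracted
    # diffraction intensities measured on a common two-theta grid
    distances = pdist(np.asarray(patterns, dtype=float), metric="correlation")
    tree = linkage(distances, method="average")
    # samples whose patterns differ by less than `threshold` are grouped
    # together and provisionally treated as the same solid form
    return fcluster(tree, t=threshold, criterion="distance")

Samples assigned to the same cluster would then be treated as candidates for the same solid form, pending confirmation by the other analytical methods mentioned above.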
66,076,685
Got open space? Add a boulder! Great for both indoor & outdoor use. Request a quote.

Custom Free-Standing Playground Boulders

Great for Parks, Churches & Playgrounds

Children of all ages are naturally drawn toward outdoor adventures such as rock climbing. Boulders are a great and convenient way to bring the fun of outdoor adventure directly to your facility, playground or backyard! Climbing boulders can help challenge kids to develop coordination skills, balance, and core strength. Outdoor Escape has a wide selection of boulder sizes suitable for everyone from toddlers and pre-school aged children all the way to teens and adults.

Free-Standing Boulders Specifications

The Speedbump Boulder is the smallest of all our playground boulders. This playground boulder is designed specifically for toddlers and pre-school aged children to help them develop strong muscles and increase hand-eye coordination. Additionally, climbing boulders helps toddlers increase their overall confidence when dealing with new obstacles. These playground boulders are also ideal as a landscape feature that visitors can sit on and climb, so as well as being beautiful they are also functional pieces of playground equipment. This boulder meets all requirements to be approved as playground equipment, according to the NPPS (National Program for Playground Safety) and the CPSC (Consumer Product Safety Commission). Our playground boulders can be manufactured for either indoor or outdoor use and can be either 100% natural or have placements for artificial holds added.

The Dreamtime Boulder – Small Playground Boulders (standardized)
Dimensions: 5’ Wide x 6’ Long x 4.6’ Tall

The Dreamtime Boulder is our small-sized standard playground boulder. It is ideal for pre-school aged children up through grade-school aged children. It features large positive handholds and foot placements to help developing children begin to learn how to climb safely. This boulder meets all requirements to be approved as playground equipment, according to the NPPS (National Program for Playground Safety) and the CPSC (Consumer Product Safety Commission). These playground boulders can be manufactured for both indoor and outdoor use and can be either 100% natural or have placements for artificial holds added.

The Esperanza Boulder – Medium Playground Boulders (standardized)
Dimensions: 6’ Wide x 8’ Long x 6’ Tall

The Esperanza Boulder is our medium-sized playground boulder. It is ideal for grade-school aged children and offers a substantial climbing challenge while still having easier routes up and down the boulder. This boulder meets all requirements to be approved as playground equipment, according to the NPPS (National Program for Playground Safety) and the CPSC (Consumer Product Safety Commission). Our playground boulders can be manufactured for either indoor or outdoor use and can be either 100% natural or have placements for artificial holds added.

The Mandala Boulder – Large Playground Boulders (standardized)
Dimensions: 8’ Wide x 13’ Long x 8’ Tall

The Mandala Boulder is the largest and most advanced of our standardized playground boulders. It offers both grade-school aged children and adults challenging climbing routes while still maintaining the safety guidelines required by the NPPS (National Program for Playground Safety) and the CPSC (Consumer Product Safety Commission). This boulder features overhanging terrain and increased difficulty.
These playground boulders can be manufactured for both indoor and outdoor use and can be either 100% natural or have placements for artificial holds added.

These boulders are the pinnacle of artificial climbing systems, designed to replicate Mother Nature with a multitude of naturally incorporated climbing routes ranging in difficulty from simple down-climbs to extreme overhanging crimpers! Our playground boulders can be manufactured for either indoor or outdoor use and can be either 100% natural or have placements for artificial holds added. The boulders are designed using your ideas and inspirations to create truly unique and specialized climbing systems perfectly suited to whatever application you intend them for.

A WORD FROM OUR CLIENTS:

After interviewing several wall builders we decided that Outdoor Escape was the company we needed to work with. At the initial meetings Steve and Rees answered our questions and offered suggestions. Steve helped us contact suppliers and was there for us every step of the way. They were patient when we called and asked for modifications to our layout, and sent us countless photos of the progress. The finished product proves that we placed our trust in the correct people. I would wholeheartedly recommend Outdoor Escape to anyone wanting to build climbing walls.
66,076,781
Q: Is an integral basis for $\mathbb{R}^n$ a basis for $\mathbb{Q}^n$? Let $a_1,a_2,\cdots,a_n$ be an integral basis for $\mathbb{R}^n$, meaning that $a_1,\cdots,a_n$ form a basis and the components of each of them are integers. Let $b$ be an integral vector. As we know, $b$ is a linear combination of $a_1,\cdots,a_n$ over $\mathbb{R}$. But can we write $b$ as a linear combination of $a_1,\cdots,a_n$ with rational coefficients? A: Yes, this is true. We know that $\{a_1,\dots,a_n\}\subset\mathbb{Q}^n$, and they are linearly independent over $\mathbb{R}$, hence over $\mathbb{Q}$ as well (any dependence relation with rational coefficients would in particular be a dependence relation with real coefficients). But every linearly independent set of $n$ elements in an $n$-dimensional vector space is a basis, so $\{a_1,\dots,a_n\}$ must be a basis for $\mathbb{Q}^n$.
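One can also see this constructively (an added remark, not part of the original answer): let $A$ be the $n\times n$ integer matrix whose columns are $a_1,\dots,a_n$, and solve $Ax=b$. By Cramer's rule, $$x_i=\frac{\det A_i}{\det A},$$ where $A_i$ denotes $A$ with its $i$-th column replaced by $b$. Since $A$ and each $A_i$ have integer entries, both determinants are integers, and $\det A\neq 0$ because the $a_i$ form a basis; hence every coefficient $x_i$ is rational.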
66,076,804
[Ultrasonic tissue characterization of parotid tumors--analysis with the acoustic microscope]. The purpose of this study was to investigate the acoustic properties of parotid tumors in detail by using a mechanically scanned acoustic microscope. The frequency of the ultrasound was fixed at 200 MHz. Amplitude images and phase-contrast images were recorded. Mean attenuation, mean velocity, microscopic variation of the attenuation and microscopic variation of the sound velocity were also measured and analyzed. The results are summarized as follows: 1) The acoustic properties of the tissue were optically displayed in the amplitude image and the phase-contrast image. 2) The mean attenuation of malignant tumors was greater than that of benign ones. 3) Microscopic variation of the attenuation was greater for malignant tumors than for benign ones. 4) No difference in velocity was found when malignant tumors and benign ones were compared as a whole, although the mean velocities are known to vary depending on the histological type. 5) Microscopic variation of the velocity was greater for malignant tumors than for benign ones. 6) Values of mean attenuation and mean velocity were strongly affected by the amount of collagenous fiber. Microscopic variations of the attenuation and of the velocity were affected by the arrangement of the tissue materials. 7) It is concluded that mean attenuation, mean velocity, and the microscopic variations of attenuation and velocity are useful parameters for investigating the acoustic properties of tissue.
66,076,890
Wednesday, March 13, 2013

Jupiter IRBM

Even as the Cold War was heating up between the superpowers during the mid-1950s, another conflict was brewing in the United States: a bureaucratic battle between different branches of the nation's armed forces. Beset with post-Korea budget cutbacks, the USAF, Army, and Navy were all trying to ensure their future by hopping on the nuclear bandwagon. Nuclear arms promised devastating firepower without the manpower cost of conventional forces, but the rush to nuclearization would lead to a serious flare-up of the roles and missions controversy between the Army and Air Force. The USAF attempted to claim hegemony over ballistic missiles with the assertion that the new weapons were aerospace vehicles, and as such, evolutionary extensions of the strategic aircraft operated by the Air Force. On the other hand, the Army was of the opinion that it had greater experience with guided missiles, and in any case, missiles were merely very long range artillery.

The Army Ballistic Missile Agency (ABMA), headquartered at Redstone Arsenal and staffed with many of the German A-4/V-2 personnel who had helped develop the Redstone, was established to develop an IRBM with a range of 1,500 miles. By mid-1956 this had coalesced as the Jupiter missile, with Chrysler being contracted to develop the new weapon. Powered by an S-3D engine, the single-stage Jupiter had similar performance to the Air Force's Thor, although a slightly higher-yield warhead would be used.

A key technical innovation of Jupiter would be its use of an RV coated with ablative material, which would disperse reentry heat by gradually melting. This would allow weight savings when compared to heat sink-type RVs, but there was little experience with ablative materials and construction. ABMA tested the concept using the Jupiter-C vehicles, which were in fact Redstones that had been uprated with a longer first stage and two solid-propellant upper stages to loft subscale ablative test vehicles on profiles that would encounter heating conditions similar to those of operational Jupiters. A successful recovery of a Jupiter-C RV and the article's subsequent examination helped to confirm that the ablative design was feasible. The Jupiter-A was another Redstone-derived testbed, in this case for proving the guidance system. Flight testing of this version began on March 14, 1956, and a total of twenty-five were launched. Flight testing of actual Jupiters began on March 1, 1957, and by March of the following year the ablative RV design had been conclusively demonstrated by the launch and recovery of a full-sized test article. Unlike the Thor, Jupiter was housed vertically, sitting on a pedestal.

Army hopes of operating the Jupiter had been dashed in November 1956 when the Secretary of Defense issued an edict that forbade the Army from operating SSMs with ranges beyond 200 miles. Jupiter would have to be operated by the USAF, which was already developing the comparable Thor, putting the Redstone-developed missile's future in jeopardy. Secretary of Defense Wilson appointed a small committee that was charged with settling the IRBM debate, and many thought that the program would be cut altogether. Combining the technologies of both missiles into a new type was also considered, but both weapons had many similarities in the first place, and starting anew would of course have entailed major delays.
The decision kept being pushed back, and in the meantime the Soviets orbited the first artificial satellite, Sputnik 1, in October 1957. Overnight, US missile programs were almost guaranteed a future, and in November of that year it was announced that both IRBM programs were to be continued. Jupiter's intermediate range dictated basing near the peripheries of the USSR; deployment to several locations in the Pacific was looked at, but in the end, Jupiters were only to be emplaced in NATO countries. In 1958, France turned down a proposal to host the Jupiter system, but in the following year, Italy and Turkey agreed to deploy the missiles. The Jupiter deployment on NATO's southern flank was one of the factors that provoked the Soviets into deploying missiles to Cuba, and as part of the settlement of the Cuban Missile Crisis, the US agreed to withdraw the Jupiters. This withdrawal was not as significant as it might have seemed to some, as by late 1962 increasing numbers of Polaris A1-armed SSBNs could be assigned to hit targets formerly assigned to the Jupiters, and could do so with far less vulnerability.
66,076,933
---
author:
- 'Valentin A. Zagrebnov'
title: |
    Trotter-Kato product formulae in Dixmier ideal\
    *On the occasion of the 100th birthday of Tosio Kato*
---

Preliminaries. Symmetrically-normed ideals {#S0}
===========================================

Let $\mathcal{H}$ be a separable Hilbert space. The Trotter-Kato product formulae in the [Dixmier ideal]{} $\mathcal{C}_{1, {\infty}}(\mathcal{H})$ were first briefly discussed in the conclusion of the paper [@NeiZag1999]. That remark outlined a program for extending results that had been known for the von Neumann-Schatten ideals $\mathcal{C}_{p}(\mathcal{H})$, $p \geq 1$, since [@Zag1988], [@NeiZag1990]. A subtle point of this program is the question of the rate of convergence in the corresponding topology. Since the limit of the Trotter-Kato product formula is a strongly continuous semigroup, for the von Neumann-Schatten ideals this topology is defined by the trace norm $\|\cdot\|_{1}$ on the trace-class ideal $\mathcal{C}_{1}(\mathcal{H})$. In this case the limit is a Gibbs semigroup [@Zag2003].

For self-adjoint Gibbs semigroups the rate of convergence was estimated for the first time in [@DoIchTam1998] and [@IchTam1998]. The authors considered the case of the Gibbs-Schrödinger semigroups and scrutinised in these papers the dependence of the rate of convergence for the (exponential) Trotter formula on the smoothness of the potential in the Schrödinger generator.

The first abstract result in this direction was due to [@NeiZag1999]. In this paper a general scheme of *lifting* the operator-norm rate of convergence for the Trotter-Kato product formulae was proposed and advocated for estimating the rate of the trace-norm convergence. This scheme was then improved and extended in [@CacZag2001] to the case of *nonself-adjoint* Gibbs semigroups.

The aim of the present note is to elucidate the existence of proper two-sided ideals $\mathfrak{I}(\mathcal{H})$ of $\mathcal{L}(\mathcal{H})$ other than the von Neumann-Schatten ideals, and then to prove the (non-exponential) Trotter-Kato product formula in the topology of these ideals, together with an estimate of the corresponding rate of convergence. Here the particular case of the Dixmier ideal $\mathcal{C}_{1, {\infty}}(\mathcal{H})$ [@Dix1981], [@Con1994] is considered. To specify this ideal we recall in Section \[S1\] the notion of a *singular* trace and then, in Section \[S2\], that of the Dixmier trace [@Dix1966], [@CarSuk2006]. Main results about the Trotter-Kato product formulae in the Dixmier ideal $\mathcal{C}_{1, {\infty}}(\mathcal{H})$ are collected in Section \[S3\]. There the arguments based on the lifting scheme of [@NeiZag1999] (Theorem 5.1) are refined to prove convergence of the Trotter-Kato product formulae in the $\|\cdot\|_{1, \infty}$-topology, with a rate that is *inherited* from the operator-norm convergence.

To this end, in the rest of the present section we recall an important auxiliary tool: the concept of *symmetrically-normed* ideals, see e.g. [@GohKre1969], [@Sim2005].

Let $c_0 \subset l^{\infty}(\mathbb{N})$ be the subspace of bounded sequences $\xi = \{\xi_j\}^\infty_{j=1} \in l^{\infty}(\mathbb{N})$ of real numbers that tend to *zero*. We denote by $c_f$ the subspace of $c_0$ consisting of all sequences with a *finite* number of non-zero terms (*finite sequences*).
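For example, the sequence $(1, \tfrac{1}{2}, \tfrac{1}{3}, 0, 0, \ldots)$ belongs to $c_f$, whereas $\{1/j\}_{j=1}^{\infty}$ belongs to $c_0$ but not to $c_f$.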
\[def6-2.1\] A real-valued function $\phi: \xi \mapsto \phi(\xi)$ defined on $c_f$ is called a [*norming function*]{} if it has the following properties: $$\begin{aligned} & & \phi(\xi) > 0, \qquad \forall \xi \in c_f, \quad \xi \not= 0, \label{6-2.1}\\ & & \phi(\alpha\xi) = |\alpha|\phi(\xi), \qquad \forall \xi \in c_f, \quad \forall \alpha \in \mathbb{R}, \label{6-2.2}\\ & & \phi(\xi + \eta) \le \phi(\xi) + \phi(\eta), \qquad \forall \xi,\eta \in c_f, \label{6-2.3}\\ & & \phi(1,0,\ldots) = 1. \label{6-2.4}\end{aligned}$$ A norming function $\phi$ is called [*symmetric*]{} if it has the additional property $$\label{6-2.5} \phi(\xi_1,\xi_2,...,\xi_n,0,0,\ldots) = \phi(|\xi_{j_1}|,|\xi_{j_2}|,...,|\xi_{j_n}|,0,0,\ldots)$$ for any $\xi \in c_f$ and any permutation $j_1,j_2,\ldots , j_n$ of the integers $1,2,\ldots , n$.

It turns out that for any [*symmetric norming*]{} function $\phi$ and for any elements $\xi,\eta \in c_f$ from the positive *cone* $c^+$ of non-negative, non-increasing sequences, i.e. $\xi_1 \ge \xi_2 \ge \ldots \ge 0$ and $\eta_1 \ge \eta_2 \ge \ldots \ge 0$, such that $$\label{6-2.5-1} \sum^n_{j=1} \xi_j \le \sum^n_{j=1} \eta_j, \qquad n = 1,2,\ldots \ ,$$ one gets the Ky Fan [inequality]{} [@GohKre1969] (Sec.3, §3): $$\label{6-2.5-2} \phi(\xi) \le \phi(\eta) \ .$$ Moreover, (\[6-2.5-2\]) together with the properties (\[6-2.1\]), (\[6-2.2\]) and (\[6-2.4\]) yields the inequalities $$\label{6-2.5-3} \xi_1 \le \phi(\xi) \le \sum^\infty_{j=1} \xi_j, \qquad \xi \in c_{f}^{+}:= c_{f}\cap c^{+}.$$ Note that the left- and right-hand sides of (\[6-2.5-3\]) are the simplest examples of symmetric norming functions on the domain $c_{f}^{+}$: $$\label{6-2.5-4} \phi_{\infty}(\xi):= \xi_1 \ \ \ {\rm{and}} \ \ \ \phi_{1}(\xi):= \sum^\infty_{j=1} \xi_j \ .$$ By Definition \[def6-2.1\] the observations (\[6-2.5-3\]) and (\[6-2.5-4\]) yield $$\begin{aligned} \label{6-2.5-5} &&\phi_{\infty}(\xi):= \max_{j \geq 1}|\xi_j| \ , \ \ \phi_{1}(\xi):= \sum^\infty_{j=1} |\xi_j| \ \ , \ \\ && \phi_{\infty}(\xi)\leq \phi(\xi) \leq \phi_{1}(\xi) \ , \ \ {\rm{for \ all}} \ \ \ \xi \in c_{f} \ . \nonumber\end{aligned}$$

We denote by $\xi^* := \{\xi^*_1,\xi^*_2,\ldots \ \}$ the decreasing *rearrangement*: $\xi^*_1 = \sup_{j\geq 1}|\xi_j|\ $, $\xi^*_1 + \xi^*_2 = \sup_{i\not=j}\{|\xi_i| + |\xi_j|\}, \ldots \ $, of the sequence of absolute values $\{|\xi_n|\}_{n\geq 1}$, i.e., $\xi^*_1 \ge \xi^*_2 \ge \ldots \ $. Then $\xi \in c_f$ implies $\xi^* \in c_f$ and by (\[6-2.5\]) one also obtains that $$\label{6-2.10} \phi(\xi) = \phi(\xi^*), \qquad \xi \in c_f \ .$$ Therefore, any symmetric norming function $\phi$ is uniquely defined by its values on the positive cone $c^+$.

Now, let $\xi = \{\xi_1,\xi_2,\ldots\} \in c_0$. We define $$\label{6-2.6} \xi^{(n)} := \{\xi_1,\xi_2,\ldots,\xi_n,0,0,\ldots \ \} \in c_{f} \ .$$ Then if $\phi$ is a symmetric norming function, we define $$\label{6-2.7} c_\phi := \{\xi \in c_0: \sup_n\phi(\xi^{(n)}) < + \infty\}.$$ Therefore, one gets $$\label{6-2.8} c_f \subseteq c_\phi \subseteq c_0 \subset l^{\infty}.$$ Note that by (\[6-2.5\])-(\[6-2.5-2\]) and (\[6-2.7\]) one gets $$\label{6-2.8-1} \phi(\xi^{(n)})\leq \phi(\xi^{(n+1)})\leq \sup_n\phi(\xi^{(n)})\ , \ {\rm{for \ any}} \ \xi \in c_\phi \ .$$ Then the limit $$\label{6-2.9} \phi(\xi) := \lim_{n\to\infty} \phi(\xi^{(n)})\ , \qquad \xi \in c_\phi,$$ exists and $\phi(\xi) = \sup_n\phi(\xi^{(n)})$, i.e.
the symmetric norming function $\phi$ is a *normal* functional on the set $c_\phi$ (\[6-2.7\]), which is a linear space over $\mathbb{R}$. By virtue of (\[6-2.3\]) and (\[6-2.5-5\]) one also gets that any symmetric norming function is *continuous* on $c_f$: $$\label{6-2.9-1} |\phi(\xi) - \phi(\eta)| \leq \phi(\xi - \eta) \leq \phi_{1}(\xi - \eta) \ , \ \forall \xi,\eta \in c_f \ .$$

Suppose that $X$ is a compact operator, i.e. $X \in \mathcal{C}_{\infty}(\mathcal{H})$. Then we denote by $$\label{6-2.11} s(X) := \{s_1(X),s_2(X),\ldots \ \} ,$$ the sequence of *singular values* of $X$ counting multiplicities. We always assume that $$\label{6-2.12} s_1(X) \ge s_2(X) \ge \ldots \ge s_n(X) \ge \ldots \ .$$ To define *symmetrically-normed* ideals of the compact operators $\mathcal{C}_{\infty}(\mathcal{H})$ we introduce the notion of a symmetric norm.

\[def6-2.1-2\] Let $\mathfrak{I}$ be a two-sided ideal of $\mathcal{C}_{\infty}(\mathcal{H})$. A functional $\|\cdot\|_{sym}: \mathfrak{I} \rightarrow \mathbb{R}^{+}_{0}$ is called a *symmetric norm* if, besides the usual properties of the operator norm $\|\cdot\|$: $$\begin{aligned} & & \|X\|_{sym}>0, \qquad \forall X \in \mathfrak{I} , \quad X \not= 0, \label{6-2.12-1}\\ & & \|\alpha X\|_{sym} = |\alpha|\|X\|_{sym} \ , \qquad \forall X \in \mathfrak{I} , \quad \forall \alpha \in \mathbb{C}, \label{6-2.12-2}\\ & & \|X + Y\|_{sym} \le \|X\|_{sym} + \|Y\|_{sym}\ , \ \forall X, Y \in \mathfrak{I} , \label{6-2.12-3}\end{aligned}$$ it verifies the following additional properties: $$\begin{aligned} & & \|A X B\|_{sym} \leq \|A\| \| X\|_{sym} \| B\|, \ X \in \mathfrak{I} , \ A,B \in \mathcal{L}(\mathcal{H}), \label{6-2.12-4}\\ & & \|\alpha X\|_{sym} = |\alpha|\|X\| = |\alpha| \ s_1(X) , \ {\rm{for \ any \ rank-one \ operator}} \ X \in \mathfrak{I}. \label{6-2.12-5} $$ If the condition (\[6-2.12-4\]) is replaced by $$\begin{aligned} \label{6-2.12-4U} && \|U X\|_{sym} = \|X U\|_{sym} = \| X\|_{sym} \ , \ X \in \mathfrak{I} \ , \\ && \hspace{1cm} {\rm{for \ any \ unitary \ operator}} \ U \ {\rm{on}} \ \mathcal{H} \ , \nonumber\end{aligned}$$ then, instead of the symmetric norm, one gets the definition of an *invariant norm* $\|\cdot\|_{inv}$.

First, we note that the ordinary operator norm $\|\cdot\|$ on any ideal $\mathfrak{I}\subseteq \mathcal{C}_{\infty}(\mathcal{H})$ is evidently a symmetric norm as well as an invariant norm. Second, we observe that, in fact, every symmetric norm is invariant. Indeed, for any unitary operators $U$ and $V$ one gets by (\[6-2.12-4\]) that $$\label{6-2.12-51} \|U X V\|_{sym} \leq \| X\|_{sym} \ , \ X \in \mathfrak{I} \ .$$ Since $X = U^{-1}U X V V^{-1}$, we also get $\| X\|_{sym} \leq \|U X V\|_{sym}$, which together with (\[6-2.12-51\]) yields (\[6-2.12-4U\]). Third, we claim that $ \| X\|_{sym} = \| X^\ast\|_{sym}$. Let $X = U |X|$ be the polar representation of the operator $X \in \mathfrak{I}$. Since $U^\ast X = |X|$, then by (\[6-2.12-4U\]) we obtain $\| X\|_{sym}=\||X|\|_{sym}$. The same line of reasoning applied to the adjoint operator $X^\ast = |X| U^\ast$ yields $\| X^\ast\|_{sym}=\||X|\|_{sym}$, which proves the claim.

Now we can apply the concept of symmetric norming functions to describe the symmetrically-normed ideals of the unital algebra of bounded operators $\mathcal{L}(\mathcal{H})$, or in general, the symmetrically-normed ideals generated by symmetric norming functions.
Recall that any *proper* two-sided ideal $\mathfrak{I}(\mathcal{H})$ of $\mathcal{L}(\mathcal{H})$ is contained in the compact operators $\mathcal{C}_{\infty}(\mathcal{H})$ and contains the set $\mathcal{K}(\mathcal{H})$ of finite-rank operators, see e.g. [@Piet2014], [@Sim2005]: $$\label{6-2.12-5-1} \mathcal{K}(\mathcal{H}) \subseteq \mathfrak{I}(\mathcal{H}) \subseteq \mathcal{C}_{\infty}(\mathcal{H}) \ .$$ To clarify the relation between symmetric norming functions and the symmetrically-normed ideals, we mention that there is an obvious one-to-one correspondence between functions $\phi$ (Definition \[def6-2.1\]) on the cone $c^{+}$ and the symmetric norms $\|\cdot\|_{sym}$ on $\mathcal{K}(\mathcal{H})$. To proceed with the general setting one needs the following definition.

\[def6-2.3\] Let $c_\phi$ be the set of vectors (\[6-2.7\]) generated by a symmetric norming function $\phi$. We associate with $c_\phi$ a subset of compact operators $$\label{6-2.13} \mathcal{C}_{\phi}(\mathcal{H}) := \{X \in \mathcal{C}_{\infty}(\mathcal{H}): s(X) \in c_\phi\} \ .$$

This definition implies that the set $\mathcal{C}_{\phi}(\mathcal{H})$ is a proper two-sided ideal of the algebra $\mathcal{L}(\mathcal{H})$ of all bounded operators on $\mathcal{H}$. Setting, see (\[6-2.9\]), $$\label{6-2.14} \|X\|_\phi := \phi(s(X))\ , \qquad X \in \mathcal{C}_{\phi}(\mathcal{H}) \ ,$$ one obtains a *symmetric norm* $\|\cdot\|_{sym} = \|\cdot\|_\phi$ on the ideal $\mathfrak{I} = \mathcal{C}_{\phi}(\mathcal{H})$ (Definition \[def6-2.1-2\]) such that this symmetrically-normed ideal becomes a Banach space. Then, in accordance with (\[6-2.12-5-1\]) and (\[6-2.13\]), we obtain by (\[6-2.5-5\]) that $$\label{6-2.14-1} \mathcal{K}(\mathcal{H}) \subseteq \mathcal{C}_{1}(\mathcal{H}) \subseteq \mathcal{C}_{\phi}(\mathcal{H}) \subseteq \mathcal{C}_{\infty}(\mathcal{H})\ .$$ Here $\mathcal{C}_{1}(\mathcal{H}) := \mathcal{C}_{\phi_1}(\mathcal{H})$ denotes the trace-class operators, where the symmetric norming function $\phi_1$ is defined in (\[6-2.5-4\]), and one has $$\label{6-2.14-2} \|X\|_\phi \le \|X\|_1 \ , \qquad X \in \mathcal{C}_{1}(\mathcal{H})\ .$$

\[rem6-2.1-0\] By virtue of inequality (\[6-2.5-2\]) and by the definition of the symmetric norm (\[6-2.14\]), the so-called [*dominance*]{} property holds: if $X \in \mathcal{C}_{\phi}(\mathcal{H})$, $Y \in \mathcal{C}_{\infty}(\mathcal{H})$ and $$\label{6-2.14-3} \sum^n_{j=1}s_j(Y) \le \sum^n_{j=1}s_j(X)\ , \qquad n =1,2,\ldots \ ,$$ then $Y \in \mathcal{C}_{\phi}(\mathcal{H})$ and $\|Y\|_\phi \le \|X\|_\phi$.

\[rem6-2.1\] To distinguish in (\[6-2.14-1\]) the *nontrivial* ideals $\mathcal{C}_{\phi}$ one needs criteria based on the properties of $\phi$ or of the norm $\|\cdot\|_\phi$. For example, any symmetric norming function (\[6-2.10\]) defined by $$\label{6-2.14-4} \phi^{(r)}(\xi) := \sum_{j=1}^{r} \ \xi^*_j \ , \qquad \xi \in c_f \ ,$$ generates, for an arbitrary fixed $r\in \mathbb{N}$, a symmetrically-normed ideal which is trivial in the sense that $\mathcal{C}_{\phi^{(r)}}(\mathcal{H}) = \mathcal{C}_{\infty}(\mathcal{H})$. A criterion for an operator $A$ to belong to a nontrivial ideal $\mathcal{C}_{\phi}$ is $$\label{6-2.14-5} M = \sup_{m\geq 1} \|P_m A P_m\|_\phi < \infty \ ,$$ where $\{P_m\}_{m\geq 1}$ is a monotonically increasing sequence of finite-dimensional orthogonal projections on $\mathcal{H}$ strongly convergent to the identity operator [@GohKre1969]. Note that for $A \in \mathcal{C}_{\infty}$ the condition (\[6-2.14-5\]) is trivial.
We consider now a couple of examples to elucidate the concept of the symmetrically-normed ideals $\mathcal{C}_{\phi}(\mathcal{H})$ generated by symmetric norming functions $\phi$ and the rôle of the functional *trace* on these ideals.

\[ex6-2.1\] The von Neumann-Schatten ideals $\mathcal{C}_{p}(\mathcal{H})$ [@Schat1970]. These ideals of $\mathcal{C}_{\infty}(\mathcal{H})$ are generated by the symmetric norming functions $\phi(\xi) := \|\xi\|_{p}$, where $$\label{6-2.15} \|\xi\|_{p} = \left(\sum^\infty_{j=1} |\xi_j|^p\right)^{1/p}, \qquad \xi \in c_f,$$ for $1 \le p < +\infty$, and by $$\label{6-2.16} \|\xi\|_{\infty} = \sup_j|\xi_j|, \qquad \xi \in c_f,$$ for $p = +\infty$. Indeed, if we put $\{\xi_{j}^\ast := s_j(X)\}_{j\geq 1}$, for $X\in \mathcal{C}_{\infty}(\mathcal{H})$, then the symmetric norm $\|X\|_{\phi} = \|s(X)\|_{p}$ coincides with $\|X\|_{p}$ and the corresponding symmetrically-normed ideal $\mathcal{C}_{\phi}(\mathcal{H})$ is identical to the von Neumann-Schatten class $\mathcal{C}_{p}(\mathcal{H})$. By definition, for any $X \in \mathcal{C}_{p}(\mathcal{H})$ the trace of the modulus is given by ${\rm{Tr}} |X| = \sum_{j\geq 1} s_j(X) \geq 0$. The trace norm $\|X\|_{1} = {\rm{Tr}} |X|$ is finite on the trace-class operators $\mathcal{C}_{1}(\mathcal{H})$ and it is *infinite* for $X \in \mathcal{C}_{p}(\mathcal{H})\setminus\mathcal{C}_{1}(\mathcal{H})$, $p>1$. We say that for $p>1$ the von Neumann-Schatten ideals admit *no* trace, whereas for $p=1$ the map $X \mapsto {\rm{Tr}}\, X$ exists and is continuous in the $\|\cdot\|_{1}$-topology. Note that by virtue of the linearity of ${\rm{Tr}}$, the trace norm $\mathcal{C}_{1, +}(\mathcal{H})\ni X \mapsto \|X\|_{1}$ is *linear* on the positive cone $\mathcal{C}_{1, +}(\mathcal{H})$ of the trace-class operators.

\[ex6-2.2\] Now we consider the symmetrically-normed ideals $\mathcal{C}_{\Pi}(\mathcal{H})$. To this aim let $\Pi = \{\pi_j\}^\infty_{j=1} \in c^+$ be a non-increasing sequence of positive numbers with $\pi_1 = 1$. We associate with $\Pi$ the function $$\label{6-2.17} \phi_\Pi(\xi) = \sup_n\left\{\frac{1}{\sum^n_{j=1}\pi_j}\sum^n_{j=1}\xi^*_j\right\}, \qquad \xi \in c_f.$$ It turns out that $\phi_\Pi$ is a symmetric norming function. Then the set $c_{\phi_\Pi}$ corresponding to (\[6-2.7\]) is defined by $$\label{6-2.18} c_{\phi_\Pi} := \left\{\xi \in c_0: \sup_n\frac{1}{\sum^n_{j=1}\pi_j}\sum^n_{j=1}\xi^*_j < +\infty \right\} \ .$$ Hence, the two-sided symmetrically-normed ideal $\mathcal{C}_{\Pi}(\mathcal{H}):= \mathcal{C}_{\phi_\Pi}(\mathcal{H})$ generated by the symmetric norming function (\[6-2.17\]) consists of all those compact operators $X$ such that $$\label{6-2.19} \|X\|_{\phi_\Pi} := \sup_n\frac{1}{\sum^n_{j=1}\pi_j}\sum^n_{j=1} s_j(X) < +\infty \ .$$ This equation defines a symmetric norm $\|X\|_{sym}=\|X\|_{\phi_\Pi}$ on the ideal $\mathcal{C}_{\Pi}(\mathcal{H})$, see Definition \[def6-2.1-2\]. Now let $\Pi = \{\pi_j\}^\infty_{j=1}$, with $\pi_1 =1$, satisfy $$\label{6-2.20} \sum^\infty_{j=1} \pi_j = +\infty \qquad \mbox{and} \qquad \lim_{j\to\infty} \pi_j = 0 \ .$$ Then the ideal $\mathcal{C}_{\Pi}(\mathcal{H})$ is *nontrivial*: $\mathcal{C}_{\Pi}(\mathcal{H}) \not= \mathcal{C}_{\infty}(\mathcal{H})$ and $\mathcal{C}_{\Pi}(\mathcal{H}) \not= \mathcal{C}_{1}(\mathcal{H})$, see Remark \[rem6-2.1\], and one has $$\label{6-2.20-1} \mathcal{C}_{1}(\mathcal{H}) \subset \mathcal{C}_{\Pi}(\mathcal{H}) \subset \mathcal{C}_{\infty}(\mathcal{H}) \ .$$ If in addition to (\[6-2.20\]) the sequence $\Pi = \{\pi_j\}^\infty_{j=1}$ is *regular*, i.e.
it obeys $$\label{6-2.21} \sum^n_{j=1} \pi_j = O(n\pi_n) \ , \qquad n \rightarrow \infty \ ,$$ then $X \in \mathcal{C}_{\Pi}(\mathcal{H})$ [if and only if]{} $$\label{6-2.22} s_n(X) = O(\pi_n) \ , \qquad n \rightarrow \infty \ ,$$ cf. condition (\[6-2.14-5\]). On the other hand, the asymptotics $$\label{6-2.22-1} s_n(X) = o(\pi_n) \ , \qquad n \rightarrow \infty \ ,$$ implies that $X$ belongs to $$\mathcal{C}_{\Pi}^{0}(\mathcal{H}):= \{X \in \mathcal{C}_{\Pi}(\mathcal{H}): \lim_{n \rightarrow \infty}\frac{1}{\sum^n_{j=1}\pi_j}\sum^n_{j=1} s_j(X) = 0 \},$$ such that $\mathcal{C}_{1}(\mathcal{H})\subset \mathcal{C}_{\Pi}^{0}(\mathcal{H}) \subset \mathcal{C}_{\Pi}(\mathcal{H})$.

\[rem6-2.2\] A natural choice of a sequence $\{\pi_j\}^\infty_{j=1}$ that satisfies (\[6-2.20\]) is $\pi_j = j^{-\alpha}$, $0 < \alpha \le 1$. Note that if $0 < \alpha < 1$, then the sequence $\Pi = \{\pi_j\}^\infty_{j=1}$ satisfies (\[6-2.21\]), i.e. it is regular. Therefore, the two-sided symmetrically-normed ideal $\mathcal{C}_{\Pi}(\mathcal{H})$ generated by the symmetric norming function (\[6-2.17\]) consists of all those compact operators $X$ whose singular values obey (\[6-2.22\]): $$\label{6-2.23} s_n(X) = O(n^{-\alpha}), \quad 0 < \alpha < 1, \quad n \rightarrow \infty \ .$$ Let $\alpha = 1/p \, , \ p>1$. Then the symmetrically-normed ideal corresponding to (\[6-2.23\]), defined by $$\label{6-2.23-1} \mathcal{C}_{p,\infty}(\mathcal{H}) := \{X \in \mathcal{C}_{\infty}(\mathcal{H}): s_n(X) = O(n^{-1/p}), \ p>1 \} \ ,$$ is known as the *weak*-$\mathcal{C}_{p}$ ideal [@Piet2014], [@Sim2005]. Whilst by virtue of (\[6-2.23\]) the weak-$\mathcal{C}_{p}$ ideal admits *no* trace, definition (\[6-2.19\]) implies that for the regular case $p > 1$ a symmetric norm on $\mathcal{C}_{p,\infty}(\mathcal{H})$ is equivalent to $$\label{6-2.24} \|X\|_{p,\infty} = \sup_n \frac{1}{n^{1- 1/p}}\sum^n_{j=1}s_j(X) \ ,$$ and it is obvious that $\mathcal{C}_{1}(\mathcal{H}) \subset \mathcal{C}_{p,\infty}(\mathcal{H}) \subset \mathcal{C}_{\infty}(\mathcal{H})$. Taking into account the Hölder inequality one can refine these inclusions for $1\leq q \leq p$ as follows: $\mathcal{C}_{1}(\mathcal{H})\subseteq \mathcal{C}_{q}(\mathcal{H}) \subseteq \mathcal{C}_{p,\infty}(\mathcal{H}) \subset \mathcal{C}_{\infty}(\mathcal{H})$.
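As a quick check of the regularity condition (\[6-2.21\]) for $\pi_j = j^{-\alpha}$, note that $$\sum^n_{j=1} j^{-\alpha} \sim \int_1^n x^{-\alpha}\, dx \sim \frac{n^{1-\alpha}}{1-\alpha} = \frac{1}{1-\alpha}\, n\, \pi_n = O(n\pi_n) \ , \qquad 0 < \alpha < 1 \ ,$$ whereas for $\alpha = 1$ one gets $\sum^n_{j=1} j^{-1} \sim \ln(n)$, which is *not* $O(n\pi_n) = O(1)$, so the sequence $\Pi = \{j^{-1}\}^\infty_{j=1}$ is not regular; this failure reappears in Remark \[rem6-2.3\](a) below.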
Singular traces {#S1}
================

Note that (\[6-2.24\]) implies: $\mathcal{C}_{1}(\mathcal{H})\ni A \mapsto \|A\|_{p,\infty} < \infty$, but any linear, positive, and unitarily invariant functional (*trace*) related to the ideal $\mathcal{C}_{p,\infty}(\mathcal{H})$ either vanishes on the set of finite-rank operators $\mathcal{K}(\mathcal{H})$, or is trivial. We recall that these non-*normal* traces: $$\label{6-2.24-1} {\rm{Tr}} {_{\omega}}(X):= \omega (\{n^{-1 + 1/p}\sum^{n}_{j=1}\ s_j(X)\}_{n=1}^{\infty}) \ ,$$ are called *singular*, [@Dix1966], [@LoSuZa2013]. Here $\omega$ is an *appropriate* linear positive normalised functional (*state*) on the Banach space $l^{\infty}(\mathbb{N})$ of bounded sequences. Recall that the set of states $\mathcal{S}(l^{\infty}(\mathbb{N}))$ is a subset of $(l^{\infty}(\mathbb{N}))^*$, the dual of the Banach space $l^{\infty}(\mathbb{N})$. The singular trace (\[6-2.24-1\]) is continuous in the topology defined by the norm (\[6-2.24\]).

\[rem6-2.3\] (a) The *weak*-$\mathcal{C}_{p}$ ideal, which is defined for $p=1$ by $$\label{6-2.25} \mathcal{C}_{1,\infty}(\mathcal{H}) := \{X \in \mathcal{C}_{\infty}(\mathcal{H}): \sum^n_{j=1}s_j(X) = O(\ln(n)), \ n \rightarrow \infty\} \ ,$$ is of special interest. Note that since $\Pi = \{j^{-1}\}^\infty_{j=1}$ does [not]{} satisfy (\[6-2.21\]), the characterisation $s_n(X) = O(n^{-1})$ is *not* valid, see (\[6-2.22\]), (\[6-2.23\]). In this case an equivalent norm on the ideal (\[6-2.25\]) can be defined as $$\label{6-2.26} \|X\|_{1,\infty} := \sup_{n \in \mathbb{N}}\frac{1}{1 +\ln(n)}\sum^n_{j=1}s_j(X) \ .$$ By virtue of (\[6-2.20-1\]) and Remark \[rem6-2.2\] one gets that $\mathcal{C}_{1}(\mathcal{H}) \subset \mathcal{C}_{1,\infty}(\mathcal{H})$ and that $\mathcal{C}_{1}(\mathcal{H})\ni A \mapsto \|A\|_{1,\infty} < \infty$.

\(b) In contrast to the linearity of the trace norm $\|\cdot\|_{1}$ on the positive cone $\mathcal{C}_{1, +}(\mathcal{H})$, see Example \[ex6-2.1\], the map $X \mapsto \|X\|_{1,\infty}$ on the positive cone $\mathcal{C}_{1,\infty, +}(\mathcal{H})$ is *not* linear. Although this map is homogeneous: $\alpha A \mapsto \alpha \|A\|_{1,\infty}$, $\alpha \geq 0$, for $A,B \in \mathcal{C}_{1,\infty, +}(\mathcal{H})$ one gets that in general $\|A + B\|_{1,\infty} \neq \|A\|_{1,\infty} + \|B\|_{1,\infty}$. But it is known that on the space $l^{\infty}(\mathbb{N})$ there exists a state $\omega \in \mathcal{S}(l^{\infty}(\mathbb{N}))$ such that the map $$\label{6-2.26-1} X \mapsto {\rm{Tr}} {_{\omega}}(X):= \omega (\{(1 +\ln(n))^{-1}\sum^{n}_{j=1}\ s_j(X)\}_{n=1}^{\infty} ) \ ,$$ is *linear* and verifies the properties of a (singular) *trace* for any $X\in \mathcal{C}_{1,\infty}(\mathcal{H})$. We construct $\omega$ in Section \[S2\]. This particular choice of the state $\omega$ defines the [*Dixmier trace*]{} on the space $\mathcal{C}_{1,\infty}(\mathcal{H})$, which is called, in turn, the *Dixmier ideal*, see e.g. [@CarSuk2006], [@Con1994]. The Dixmier trace (\[6-2.26-1\]) is obviously continuous in the topology defined by the norm (\[6-2.26\]). This last property is basic for the discussion in Section \[S3\] of the Trotter-Kato product formula in the $\|\cdot\|_{p,\infty}$-topology, for $p \geq 1$.
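A simple example shows how strict the inclusion $\mathcal{C}_{1}(\mathcal{H}) \subset \mathcal{C}_{1,\infty}(\mathcal{H})$ is. Let $X \geq 0$ be a compact operator with singular values $s_n(X) = 1/n$. Then $\|X\|_{1} = \sum_{n\geq 1} 1/n = \infty$, i.e. $X \notin \mathcal{C}_{1}(\mathcal{H})$, whereas the elementary bound $\sum^n_{j=1} 1/j \leq 1 + \ln(n)$ yields $$\|X\|_{1,\infty} = \sup_{n \in \mathbb{N}}\frac{1}{1 +\ln(n)}\sum^n_{j=1}\frac{1}{j} \ \leq \ 1 \ < \ \infty \ ,$$ i.e. $X \in \mathcal{C}_{1,\infty}(\mathcal{H})$.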
\[ex6-2.3\] With a non-increasing sequence of positive numbers $\pi = \{\pi_j\}^\infty_{j=1}$, $\pi_1 =1$, one can associate the symmetric norming function $\phi_\pi$ given by $$\label{6-2.27} \phi_\pi(\xi) := \sum^\infty_{j=1}\pi_j\xi^*_j \ , \qquad \xi \in c_f \ .$$ We denote the corresponding symmetrically-normed ideal by $\mathcal{C}_{\pi}(\mathcal{H}):= \mathcal{C}_{\phi_\pi}(\mathcal{H})$. If the sequence $\pi$ satisfies (\[6-2.20\]), then the ideal $\mathcal{C}_{\pi}(\mathcal{H})$ coincides neither with $\mathcal{C}_{\infty}(\mathcal{H})$ nor with $\mathcal{C}_{1}(\mathcal{H})$. If, in particular, $\pi_j = j^{-\alpha}$, $j = 1,2,\ldots \ $, for $0 < \alpha \le 1$, then the corresponding ideal is denoted by $\mathcal{C}_{\infty,p}(\mathcal{H})$, $p = 1/\alpha$. The norm on this ideal is given by $$\label{6-2.28} \|X\|_{\infty,p} := \sum^\infty_{j=1} j^{-1/p}\ s_j(X) \ , \ \ \ \ p\in [1, \infty) \ .$$ The symmetrically-normed ideal $\mathcal{C}_{\infty,1}(\mathcal{H})$ is called the [*Macaev ideal*]{} [@GohKre1969]. It turns out that the Dixmier ideal $\mathcal{C}_{1, \infty}(\mathcal{H})$ is the dual of the Macaev ideal: $\mathcal{C}_{1, \infty}(\mathcal{H}) = \mathcal{C}_{\infty,1}(\mathcal{H})^* $.

\[pro7-1.1\] The space $\mathcal{C}_{1,\infty}(\mathcal{H})$ endowed with the norm $\|\cdot\|_{1,\infty}$ is a Banach space. The proof is quite standard, although tedious and long; we refer the reader to the corresponding references, e.g. [@GohKre1969].

\[pro7-1.2\] The space $\mathcal{C}_{1,\infty}(\mathcal{H})$ endowed with the norm $\|\cdot\|_{1,\infty}$ is a Banach ideal in the algebra of bounded operators $\mathcal{L}(\mathcal{H})$. To this end it is sufficient to prove that if $A$ and $C$ are bounded operators, then $B \in\mathcal{C}_{1,\infty}(\mathcal{H})$ implies $A B C \in \mathcal{C}_{1,\infty}(\mathcal{H})$. Recall that the singular values of the operator $A B C$ verify the estimate $s_j(A B C)\leq \|A\| \|C\| s_j(B)$. By (\[6-2.26\]) this yields $$\begin{aligned} \label{7-1.1-0} && \|ABC\|_{1,\infty} = \sup_{n \in \mathbb{N}}\frac{1}{1 +\ln(n)}\sum^n_{j=1}s_j(A B C) \leq \\ && \|A\| \|C\| \sup_{n \in \mathbb{N}}\frac{1}{1 +\ln(n)}\sum^n_{j=1}s_j(B) = \|A\| \|C\| \|B\|_{1,\infty} \ , \nonumber\end{aligned}$$ which proves the assertion. $\square$

Recall that for any $A \in \mathcal{L}(\mathcal{H})$ and all $B \in\mathcal{C}_{1}(\mathcal{H})$ one can define a linear functional on $\mathcal{C}_{1}(\mathcal{H})$ given by ${{\rm{Tr}}}_{\mathcal{H}} (A B)$. The set of these functionals $\{{{\rm{Tr}}}_{\mathcal{H}} (A \cdot)\}_{A \in \mathcal{L}(\mathcal{H})}$ is just the *dual* space $\mathcal{C}_{1}(\mathcal{H})^*$ of $\mathcal{C}_{1}(\mathcal{H})$. In other words, $\mathcal{L}(\mathcal{H})=\mathcal{C}_{1}(\mathcal{H})^* $, in the sense that the map $A \mapsto {{\rm{Tr}}}_{\mathcal{H}} (A \cdot)$ is an isometric isomorphism of $\mathcal{L}(\mathcal{H})$ onto $\mathcal{C}_{1}(\mathcal{H})^*$. With the help of the *duality* relation $$\label{7-1.1} \langle A| B \rangle : ={{\rm{Tr}}}_{\mathcal{H}} (A B) \ ,$$ one can also describe the space $\mathcal{C}_{1}(\mathcal{H})_{*}$, which is a *predual* of $\mathcal{C}_{1}(\mathcal{H})$, i.e., its dual satisfies $(\mathcal{C}_{1}(\mathcal{H})_{*})^* =\mathcal{C}_{1}(\mathcal{H})$. To this aim, for each *fixed* $B \in\mathcal{C}_{1}(\mathcal{H})$ we consider the functionals $A \mapsto {{\rm{Tr}}}_{\mathcal{H}} (A B)$ on $\mathcal{L}(\mathcal{H})$. It is known that these do not exhaust *all* continuous linear functionals on the bounded operators $\mathcal{L}(\mathcal{H})$, i.e., $\mathcal{C}_{1}(\mathcal{H}) \subset \mathcal{L}(\mathcal{H})^*$, but they yield the entire dual of the compact operators, i.e., $\mathcal{C}_{1}(\mathcal{H})=\mathcal{C}_{\infty}(\mathcal{H})^* $. Hence, $\mathcal{C}_{1}(\mathcal{H})_{*} = \mathcal{C}_{\infty}(\mathcal{H})$. Now we note that under the duality relation (\[7-1.1\]) the Dixmier ideal $\mathcal{C}_{1,\infty}(\mathcal{H})$ is the dual of the Macaev ideal: $\mathcal{C}_{1,\infty}(\mathcal{H}) = \mathcal{C}_{\infty, 1}(\mathcal{H})^*$, where $$\label{7-1.2} \mathcal{C}_{\infty, 1} (\mathcal{H})= \{A \in \mathcal{C}_{\infty}(\mathcal{H}): \sum_{n\geq1} \frac{1}{n} \ s_{n}(A) < \infty \} \ ,$$ see Example \[ex6-2.3\].
By the same duality relation and by similar calculations one also obtains that the *predual* of $\mathcal{C}_{\infty, 1}(\mathcal{H})$ is the ideal $\mathcal{C}_{\infty, 1}(\mathcal{H})_* = \mathcal{C}_{1, \infty}^{(0)}(\mathcal{H})$, defined by $$\label{7-1.3} \mathcal{C}_{1, \infty}^{(0)} (\mathcal{H}): = \{A \in \mathcal{C}_{\infty}(\mathcal{H}): \sum_{j=1}^{n} \ s_{j}(A) = o(\ln (n)), \ n \rightarrow \infty\} \ .$$ By virtue of (\[6-2.25\]) (see Remark \[rem6-2.3\]) the ideal (\[7-1.3\]) is not self-dual since $$\mathcal{C}_{1, \infty}^{(0)}(\mathcal{H})^{**} = \mathcal{C}_{1,\infty}(\mathcal{H})\supset \mathcal{C}_{1, \infty}^{(0)}(\mathcal{H}).$$

The problem which motivated the construction of the Dixmier trace in [@Dix1966] was related to the question of a general definition of the *trace*, i.e. a linear, positive, and unitarily invariant functional on a *proper* Banach ideal $\mathfrak{I}(\mathcal{H})$ of the unital algebra of bounded operators $\mathcal{L}(\mathcal{H})$. Since any [proper]{} two-sided ideal $\mathfrak{I}(\mathcal{H})$ of $\mathcal{L}(\mathcal{H})$ is contained in the compact operators $\mathcal{C}_{\infty}(\mathcal{H})$ and contains the set $\mathcal{K}(\mathcal{H})$ of finite-rank operators ((\[6-2.12-5-1\]), Section \[S0\]), the *domain* of definition of the [trace]{} has to coincide with the ideal $\mathfrak{I}(\mathcal{H})$.

\[rem6-2.4\] The *canonical* trace ${\rm{Tr}}_{\mathcal{H}}(\cdot)$ is nontrivial only on the domain given by the trace-class ideal $\mathcal{C}_{1}(\mathcal{H})$, see Example \[ex6-2.1\]. We recall that it is characterised by the property of *normality*: ${\rm{Tr}}_{\mathcal{H}}(\sup_{\alpha} B_{\alpha}) = \sup_{\alpha}{\rm{Tr}}_{\mathcal{H}}(B_{\alpha})$, for every directed increasing bounded family $\{B_{\alpha}\}_{\alpha \in \Delta}$ of positive operators from $\mathcal{C}_{1, +}(\mathcal{H})$. Note that every nontrivial *normal* trace on $\mathcal{L}(\mathcal{H})$ is proportional to the canonical trace ${\rm{Tr}}_{\mathcal{H}}(\cdot)$, see e.g. [@Dix1981], [@Piet2014]. Therefore, the Dixmier trace (\[6-2.26-1\]): $\mathcal{C}_{1, \infty} \ni X \mapsto {\rm{Tr}} {_{\omega}}(X)$, is *not* normal.

\[def7-1.1\] A *trace* on a proper Banach ideal $\mathfrak{I}(\mathcal{H})\subset \mathcal{L}(\mathcal{H})$ is called *singular* if it vanishes on the set $\mathcal{K}(\mathcal{H})$. Since a singular trace is defined up to the trace-class operators $\mathcal{C}_{1}(\mathcal{H})$, by Remark \[rem6-2.4\] it is obviously *not* normal.

Dixmier trace {#S2}
=============

Recall that only the ideal of trace-class operators has the property that on its [positive cone]{} $\mathcal{C}_{1,+}(\mathcal{H}):= \{A \in \mathcal{C}_{1}(\mathcal{H}): A \geq 0\}$ the trace norm is *linear*, since $\|A + B\|_{1} = {\rm{Tr}} \, (A + B) = {\rm{Tr}} \, (A) + {\rm{Tr}} \, (B)=\|A\|_{1} + \|B\|_{1}$ for $A,B \in \mathcal{C}_{1,+}(\mathcal{H})$, see Example \[ex6-2.1\]. This linearity allows one to extend the trace to the whole linear space $\mathcal{C}_{1}(\mathcal{H})$. Imitation of this idea *fails* for other symmetrically-normed ideals. This problem motivates the Dixmier trace construction as a certain limiting procedure involving the $\|\cdot\|_{1,\infty}$-norm. Let $\mathcal{C}_{1,\infty,+}(\mathcal{H})$ be the [positive cone]{} of the Dixmier ideal.
One can try to construct on $\mathcal{C}_{1,\infty,+}(\mathcal{H})$ a *linear*, *positive*, and *unitarily* invariant functional (called the *trace* $\mathcal{T}$) via an *extension* of the limit (called Lim) of the sequence of properly normalised finite sums of the singular values of the operator $X$: $$\label{7-2.2} \mathcal{T}(X) := {\rm{Lim}}_{n \rightarrow \infty} \ \frac{1}{1 +\ln(n)}\sum^n_{j=1} \ s_j(X) \ , \ X \in \mathcal{C}_{1,\infty,+}(\mathcal{H}) \ .$$

First we note that since for any unitary $U: \mathcal{H} \rightarrow \mathcal{H}$ the singular values of $X \in \mathcal{C}_{\infty}(\mathcal{H})$ are invariant: $s_j(X) = s_j(U \, X \, U^*)$, the same is true for the sequence $$\label{7-2.3} \sigma_{n}(X) := \sum^n_{j=1} \ s_j(X) \ , \ n \in \mathbb{N} \ .$$ Then the Lim in (\[7-2.2\]) (if it exists) inherits the property of *unitary invariance*. Next we note that positivity, $X \geq 0$, implies positivity of the eigenvalues $\{\lambda_{j}(X)\}_{j\geq 1}$ and consequently $\lambda_{j}(X)= s_{j}(X)$. Therefore, $\sigma_{n}(X) \geq 0$ and the Lim in (\[7-2.2\]) is a *positive* mapping.

The next problem with the formula for $\mathcal{T}(X)$ is its *linearity*. To proceed we recall that if $P: \mathcal{H} \rightarrow P(\mathcal{H})$ is an orthogonal projection onto a finite-dimensional subspace with $\dim P(\mathcal{H}) = n$, then for any bounded operator $X \geq 0$ equation (\[7-2.3\]) gives $$\label{7-2.4} \sigma_{n}(X) = \sup_{P}\ \{{\rm{Tr}} {_{\mathcal{H}}} \, (X P): \dim P(\mathcal{H}) = n\} \ .$$ As a corollary of (\[7-2.4\]) one obtains the Horn-Ky Fan inequality $$\label{7-2.5} \sigma_{n}(X + Y) \leq \sigma_{n}(X) + \sigma_{n}(Y) \ , \ n \in \mathbb{N} ,$$ valid in particular for any pair of bounded *positive* compact operators $X$ and $Y$. For $\dim P (\mathcal{H}) \leq 2 n$ one similarly gets from (\[7-2.4\]) that $$\label{7-2.7} \sigma_{2n}(X + Y) \geq \sigma_{n}(X) + \sigma_{n}(Y) \ , \ n \in \mathbb{N} \ .$$
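Both inequalities are already visible for $2\times 2$ matrices. For instance, take $X = \mathrm{diag}(2,0) \geq 0$ and $Y = \mathrm{diag}(0,2) \geq 0$, so that $X + Y = 2\,I$ and $\sigma_1(X) = \sigma_1(Y) = 2$. Then $$\sigma_{1}(X + Y) = 2 < 4 = \sigma_{1}(X) + \sigma_{1}(Y) \ , \qquad \sigma_{2}(X + Y) = 4 = \sigma_{1}(X) + \sigma_{1}(Y) \ ,$$ i.e. (\[7-2.5\]) may be strict, whereas (\[7-2.7\]) (here with $n = 1$) may turn into an equality.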
Motivated by (\[7-2.2\]) we now introduce $$\label{7-2.8} \mathcal{T}_{n}(X) := \frac{1}{1 +\ln(n)}\sigma_{n}(X) \ , \ X \in \mathcal{C}_{1,\infty,+}(\mathcal{H}) \ ,$$ and denote by ${\rm{Lim}}\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}} := {\rm{Lim}}_{n \rightarrow \infty} \mathcal{T}_{n}(X)$ the right-hand side of the functional in (\[7-2.2\]). Note that by (\[7-2.8\]) the inequalities (\[7-2.5\]) and (\[7-2.7\]) yield for $n \in \mathbb{N}$ $$\begin{aligned} \mathcal{T}_{n}(X + Y) \leq \mathcal{T}_{n}(X) + \mathcal{T}_{n}(Y) \ \ , \ \ \frac{1 +\ln(2n)}{1 +\ln(n)} \ \mathcal{T}_{2n}(X + Y)\geq \mathcal{T}_{n}(X) + \mathcal{T}_{n}(Y) \ . \label{7-2.10}\end{aligned}$$ Since the functional Lim includes the limit $n \rightarrow \infty$, the inequalities (\[7-2.10\]) *would* give the desired linearity of the [trace]{} $\mathcal{T}$: $$\label{7-2.11} \mathcal{T}(X + Y) = \mathcal{T}(X) + \mathcal{T}(Y) \ ,$$ *if* one proves that the ${\rm{Lim}}_{n \rightarrow \infty}$ in (\[7-2.2\]) [exists]{} and is [finite]{} for $X,Y$ as well as for $X+Y$. To this end we note that if the right-hand side of (\[7-2.2\]) exists, then one obtains (\[7-2.11\]). Hence ${\rm{Lim}}\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}}$ is a positive linear map ${\rm{Lim}}: l^{\infty}(\mathbb{N}) \rightarrow \mathbb{R}$, which defines a *state* $\omega \in \mathcal{S}(l^{\infty}(\mathbb{N}))$ on the Banach space of sequences $\{\mathcal{T}_{n}(\cdot)\}_{n\in \mathbb{N}} \in l^{\infty}(\mathbb{N})$, such that $\mathcal{T}(X) = \omega (\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}})$.

\[rem6-3.0\] Scrutinising the description of $\omega(\cdot)$, we infer that its values ${\rm{Lim}}\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}}$ are completely determined by the "tail" behaviour of the sequences $\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}}$, as suggested by the notation ${\rm{Lim}}_{n \rightarrow \infty} \mathcal{T}_{n}(X)$. For example, one concludes that $\omega (\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}}) = 0$ for the whole set $c_0$ of sequences $\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}} \in c_0$ which tend to zero. The same is also plausible for non-zero convergent limits. To make this description more precise we impose on the state $\omega$ the following conditions: $$\begin{aligned} &&(a) \ \ \omega (\eta)\geq 0 \ , \ {\rm{for}} \ \ \forall \eta = \{\eta_n \geq 0\}_{n\in \mathbb{N}} \ , \\ &&(b) \ \ \omega (\eta) = {\rm{Lim}}\{\eta_n \}_{n\in \mathbb{N}} = \lim_{n \rightarrow \infty} \eta_n \ , \ {\rm{if}} \ \{\eta_n \geq 0\}_{n\in \mathbb{N}} \ {\rm{is \ convergent}} \ .\end{aligned}$$ By virtue of (a) and (b) the definitions (\[7-2.2\]) and (\[7-2.8\]) imply that for $X,Y \in \mathcal{C}_{1,\infty,+}(\mathcal{H})$ one gets $$\begin{aligned} \label{7-2.12} && \mathcal{T}(X) = \omega (\{ \mathcal{T}_{n}(X)\}_{n\in \mathbb{N}}) = \lim_{n \rightarrow \infty} \mathcal{T}_{n}(X) \ , \\ &&\mathcal{T}(Y) = \omega (\{ \mathcal{T}_{n}(Y)\}_{n\in \mathbb{N}}) = \lim_{n \rightarrow \infty} \mathcal{T}_{n}(Y) \ , \label{7-2.13} \\ &&\mathcal{T}(X + Y) = \omega (\{ \mathcal{T}_{n}(X+Y)\}_{n\in \mathbb{N}}) = \lim_{n \rightarrow \infty} \mathcal{T}_{n}(X+Y) \ , \label{7-2.13-1}\end{aligned}$$ if the limits on the right-hand sides of (\[7-2.12\])-(\[7-2.13-1\]) exist.

Now, to ensure (\[7-2.11\]) one has to select $\omega$ in such a way that it allows one to restore the equality in (\[7-2.10\]) when $n \rightarrow \infty$. To this aim we impose on the state $\omega$ the condition of *dilation* $\mathfrak{D}_2$-[invariance]{}. Let $\mathfrak{D}_2 : l^{\infty}(\mathbb{N}) \rightarrow l^{\infty}(\mathbb{N})$ be the [dilation]{} mapping $\eta \mapsto \mathfrak{D}_2(\eta)$: $$\label{7-2.14} \mathfrak{D}_2: (\eta_1, \eta_2, \ldots \eta_k, \ldots) \rightarrow (\eta_1, \eta_1, \eta_2, \eta_2, \ldots \eta_k,\eta_k, \ldots)\ , \ \forall \eta \in l^{\infty}(\mathbb{N}) \ .$$ We say that $\omega$ is dilation $\mathfrak{D}_2$-*invariant* if for any $\eta \in l^{\infty}(\mathbb{N})$ it verifies the property $$\label{7-2.15} (c) \hspace{1cm} \omega(\eta) = \omega(\mathfrak{D}_2(\eta)) \ . \hspace{3cm}$$ We shall discuss the question of the *existence* of dilation-$\mathfrak{D}_2$-invariant states (the *invariant means*) on the Banach space $l^{\infty}(\mathbb{N})$ in Remark \[rem6-3.1\].

Let $X,Y \in \mathcal{C}_{1,\infty,+}(\mathcal{H})$. Then applying property (c) to the sequence $\eta = \{\xi_{2n}:=\mathcal{T}_{2n}(X+Y)\}_{n=1}^{\infty}$, we obtain $$\label{7-2.16} \omega(\eta) = \omega(\mathfrak{D}_2(\eta)) = \omega(\xi_2, \xi_2, \xi_4, \xi_4, \xi_6, \xi_6, \ldots)\ .$$ Note that if $\xi = \{\xi_{n} =\mathcal{T}_{n}(X+Y)\}_{n=1}^{\infty}$, then the difference of the sequences $$\label{7-2.16-1} \mathfrak{D}_2(\eta) - \xi = (\xi_2, \xi_2, \xi_4, \xi_4, \xi_6, \xi_6, \ldots) - (\xi_1, \xi_2, \xi_3, \xi_4, \xi_5, \xi_6, \ldots) \ ,$$ converges to *zero* if $\xi_{2n} - \xi_{2n-1} \rightarrow 0$ as $n\rightarrow \infty$.
Then by virtue of (\[7-2.13-1\]) and (\[7-2.16\]) this would imply $$\label{7-2.17} \omega (\{ \mathcal{T}_{2n}(X+Y)\}_{n\in \mathbb{N}})= \omega (\mathfrak{D}_2(\{ \mathcal{T}_{2n}(X+Y)\}_{n\in \mathbb{N}})) = \omega (\{ \mathcal{T}_{n}(X+Y)\}_{n\in \mathbb{N}})\ ,$$ or by (\[7-2.13-1\]): $\lim_{n \rightarrow \infty}\mathcal{T}_{2n}(X+Y) = \lim_{n \rightarrow \infty}\mathcal{T}_{n}(X+Y)$, which by the estimates (\[7-2.10\]) would also yield $$\label{7-2.17-1} \lim_{n \rightarrow \infty}\mathcal{T}_{n}(X+Y) = \lim_{n \rightarrow \infty}\mathcal{T}_{n}(X) + \lim_{n \rightarrow \infty}\mathcal{T}_{n}(Y) \ .$$ Now, summarising (\[7-2.12\]), (\[7-2.13\]), (\[7-2.13-1\]) and (\[7-2.17-1\]), we obtain the linearity (\[7-2.11\]) of the limiting functional $\mathcal{T}$ on the positive cone $\mathcal{C}_{1,\infty,+}(\mathcal{H})$ if it is defined by the corresponding $\mathfrak{D}_2$-[invariant]{} state $\omega$, or dilation-invariant mean.

Therefore, to finish the proof of linearity it remains only to check that $\lim_{n\rightarrow\infty} (\xi_{2n} - \xi_{2n-1}) = 0$. To this end we note that by the definitions (\[7-2.3\]) and (\[7-2.8\]) one gets $$\begin{aligned} \xi_{2n} - \xi_{2n-1} &=& \left[\frac{1}{1+\ln(2n)} - \frac{1}{1+\ln(2n-1)}\right]\sigma_{2n-1}(X+Y) \nonumber \\ &+& \frac{1}{1+\ln(2n)} s_{2n}(X+Y) \ . \label{7-2.18}\end{aligned}$$ Since $X,Y \in \mathcal{C}_{1,\infty,+}(\mathcal{H})$, we obtain that $\lim_{n\rightarrow \infty }s_{2n}(X+Y) = 0$ and that $\sigma_{2n-1}(X+Y) = O(\ln(2n-1))$. Then taking into account that $({1}/{(1+\ln(2n))} - {1}/{(1+\ln(2n-1))}) = o ({1}/{(1+\ln(2n-1))})$, one gets that for $n\rightarrow \infty$ the right-hand side of (\[7-2.18\]) converges to zero.

To conclude our construction of the trace $\mathcal{T}(\cdot)$, we note that by linearity (\[7-2.11\]) one can uniquely extend this functional from the positive cone $\mathcal{C}_{1,\infty,+}(\mathcal{H})$ to the real subspace of the Banach space $\mathcal{C}_{1,\infty}(\mathcal{H})$, and finally to the entire ideal $\mathcal{C}_{1,\infty}(\mathcal{H})$.

\[def7-2.1\] The *Dixmier trace* ${\rm{Tr}}_{\omega}(X)$ of the operator $X\in \mathcal{C}_{1,\infty,+}(\mathcal{H})$ is the value of the linear functional (\[7-2.2\]): $$\label{7-2.19} {{\rm{Tr}}}_{\omega} (X): = {\rm{Lim}}_{n \rightarrow \infty} \ \frac{\sigma_{n}(X)}{1 +\ln(n)} = \omega (\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}}) \ ,$$ where ${\rm{Lim}}_{n \rightarrow \infty}$ is defined by a dilation-invariant state $\omega \in \mathcal{S}(l^{\infty}(\mathbb{N}))$ on $l^{\infty}(\mathbb{N})$ which satisfies the properties (a), (b), and (c).

Since any self-adjoint operator $X\in \mathcal{C}_{1,\infty}(\mathcal{H})$ has the representation $X = X_+ - X_-$, where $X_{\pm} \in \mathcal{C}_{1,\infty,+}(\mathcal{H})$, one gets ${{\rm{Tr}}}_{\omega} (X) = {{\rm{Tr}}}_{\omega} (X_+) - {{\rm{Tr}}}_{\omega} (X_-)$. Then for an arbitrary $Z\in \mathcal{C}_{1,\infty}(\mathcal{H})$ the Dixmier trace is ${\rm{Tr}}_{\omega}(Z) = {\rm{Tr}}_{\omega} ({\rm{Re}}Z) + i {\rm{Tr}}_{\omega}({\rm{Im}}Z)$.
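For a simple illustration of Definition \[def7-2.1\], consider again a positive compact operator $X$ with $s_n(X) = 1/n$, so that $X \in \mathcal{C}_{1,\infty,+}(\mathcal{H})$. Then $$\mathcal{T}_{n}(X) = \frac{1}{1 +\ln(n)}\sum^n_{j=1}\frac{1}{j} = \frac{\ln(n) + \gamma + o(1)}{1 +\ln(n)} \ \longrightarrow \ 1 \ , \qquad n \rightarrow \infty \ ,$$ where $\gamma$ is the Euler constant. Since the sequence $\{\mathcal{T}_{n}(X)\}_{n\in \mathbb{N}}$ converges, property (b) yields ${\rm{Tr}}_{\omega}(X) = 1$ for *every* admissible state $\omega$, i.e. for this operator the value of the Dixmier trace is independent of the choice of $\omega$.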
Note that if $X\in \mathcal{C}_{1,\infty,+}(\mathcal{H})$, then definition (\[7-2.19\]) of ${\rm{Tr}}_{\omega}(\cdot)$ together with the definition of the norm $\|\cdot\|_{1,\infty}$ in (\[6-2.26\]) readily implies the estimate ${{\rm{Tr}}}_{\omega}(X) \leq \|X \|_{1,\infty}$, which in turn yields the inequality for an arbitrary $Z$ from the Dixmier ideal $\mathcal{C}_{1,\infty}(\mathcal{H})$: $$\label{7-2.19-1} |{{\rm{Tr}}}_{\omega}(Z)| \leq \|Z \|_{1,\infty} \ .$$

\[rem6-3.1\] Decisive for the construction of the Dixmier trace ${\rm{Tr}}_{\omega}(\cdot)$ is the *existence* of the invariant mean $\omega \in \mathcal{S}(l^{\infty}(\mathbb{N})) \subset (l^{\infty}(\mathbb{N}))^*$. Here the space $(l^{\infty}(\mathbb{N}))^*$ is the *dual* of the Banach space of bounded sequences. Then by the Banach-Alaoglu theorem the convex set of states $\mathcal{S}(l^{\infty}(\mathbb{N}))$ is *compact* in $(l^{\infty}(\mathbb{N}))^*$ in the weak\*-topology. Now, for any $\phi \in \mathcal{S}(l^{\infty}(\mathbb{N}))$ the relation $\phi(\mathfrak{D}_2(\cdot)) =: (\mathfrak{D}_{2}^* \phi)(\cdot)$ defines the dual $\mathfrak{D}_{2}^*$-dilation on the set of states. By definition (\[7-2.14\]) this map is such that $\mathfrak{D}_{2}^*: \mathcal{S}(l^{\infty}(\mathbb{N})) \rightarrow \mathcal{S}(l^{\infty}(\mathbb{N}))$, and it is continuous and affine (in fact linear). Then by the Markov-Kakutani theorem the dilation $\mathfrak{D}_{2}^*$ has a fixed point $\omega \in \mathcal{S}(l^{\infty}(\mathbb{N})): \mathfrak{D}_{2}^* \omega = \omega$. This observation justifies the existence of the *invariant mean* (c) for the $\mathfrak{D}_{2}$-dilation.

Note that Remark \[rem6-3.1\] has a straightforward extension to any $\mathfrak{D}_{k}$-dilation for $k>2$, defined similarly to (\[7-2.14\]). Since the dilations for different $k \geq 2$ *commute*, the extension of the Markov-Kakutani theorem yields that the commutative family $\mathcal{F} = \{\mathfrak{D}_{k}^* \}_{k\geq 2}$ has in $\mathcal{S}(l^{\infty}(\mathbb{N}))$ a common fixed point $\omega = \mathfrak{D}_{2}^* \omega$. Therefore, Definition \[def7-2.1\] of the Dixmier trace does not depend on the degree $k\geq 2$ of the dilation $\mathfrak{D}_{k}$. For more details about different constructions of *invariant means* and the corresponding Dixmier traces on $\mathcal{C}_{1,\infty}(\mathcal{H})$, see, e.g., [@CarSuk2006], [@LoSuZa2013].

\[pro7-2.1\] The Dixmier trace has the following properties:\
*(a)* For any bounded operator $B\in \mathcal{L}(\mathcal{H})$ and $Z\in \mathcal{C}_{1,\infty}(\mathcal{H})$ one has ${{\rm{Tr}}}_{\omega}(Z B) = {{\rm{Tr}}}_{\omega}(B Z)$.\
*(b)* ${{\rm{Tr}}}_{\omega}(C) = 0$ for any operator $C\in \mathcal{C}_{1}(\mathcal{H})$ from the trace-class ideal, which is the closure of the finite-rank operators $\mathcal{K}(\mathcal{H})$ in the $\|\cdot\|_{1}$-norm.\
*(c)* The Dixmier trace ${{\rm{Tr}}}_{\omega}: \mathcal{C}_{1,\infty}(\mathcal{H}) \rightarrow \mathbb{C}$ is continuous in the $\|\cdot\|_{1,\infty}$-norm.

\(a) Since every operator $B\in \mathcal{L}(\mathcal{H})$ is a linear combination of four unitary operators, it is sufficient to prove the equality ${{\rm{Tr}}}_{\omega}(Z U) = {{\rm{Tr}}}_{\omega}(U Z)$ for a unitary operator $U$, and moreover only for $Z\in \mathcal{C}_{1,\infty,+}(\mathcal{H})$.
Then the corresponding equality follows from the unitary invariance of the singular values of the positive operator $Z$: $s_{j}(Z) = s_{j}(Z U)= s_{j}(U Z) = s_{j}(U Z U^*)$ for all $j \geq 1$.\
(b) Since $C\in \mathcal{C}_{1}(\mathcal{H})$ yields $\|C\|_{1} < \infty$, definition (\[7-2.3\]) implies $\sigma_{n}(C) \leq \|C\|_{1}$ for any $n \geq 1$. Then by Definition \[def7-2.1\] one gets ${{\rm{Tr}}}_{\omega}(C) = 0$. The proof of the last part of the statement is standard.\
(c) Since the ideal $\mathcal{C}_{1,\infty}(\mathcal{H})$ is a Banach space and ${\rm{Tr}}_{\omega}: \mathcal{C}_{1,\infty}(\mathcal{H}) \rightarrow \mathbb{C}$ is a linear functional, it is sufficient to consider continuity at $X =0$. So let the sequence $\{X_k\}_{k \geq 1} \subset \mathcal{C}_{1,\infty}(\mathcal{H})$ converge to $X =0$ in the $\|\cdot\|_{1,\infty}$-topology, i.e. by (\[6-2.26\]) $$\label{7-2.20} \lim_{k \rightarrow \infty} \|X_k\|_{1,\infty} = \lim_{k \rightarrow \infty} \ \sup_{n \in \mathbb{N}} \, \frac{1}{1 +\ln(n)}\sigma_{n}(X_k) = 0 \ .$$ Since (\[7-2.19-1\]) implies $|{{\rm{Tr}}}_{\omega}(X_k)| \leq \|X_k\|_{1,\infty} \ $, the assertion follows from (\[7-2.20\]). $\square$

Therefore, the Dixmier construction gives an example of a *singular* trace in the sense of Definition \[def7-1.1\].

Trotter-Kato product formulae in the Dixmier ideal {#S3}
==================================================

Let $A\ge 0$ and $B\ge 0$ be two non-negative self-adjoint operators in a separable Hilbert space $\mathcal{H}$ and let $\mathcal{H}_0 := \overline{\operatorname{dom}(A^{1/2}) \cap \operatorname{dom}(B^{1/2})}$. It may happen that $\operatorname{dom}(A) \cap \operatorname{dom}(B)= \{0\}$, but the form-sum of these operators, $H = A \stackrel{.}{+} B$, is well-defined on the subspace $\mathcal{H}_0 \subseteq \mathcal{H}$. T. Kato proved in [@Kat1978] that under these conditions the *Trotter product formula* $$s-\lim_{n\to\infty}\left(e^{-tA/n}e^{-tB/n}\right)^n = e^{-tH}P_0, \qquad t > 0, \label{6-1.2-1}$$ converges in the *strong* operator topology *away from zero* (i.e., for $t \in \mathbb{R}^{+}$), and *locally uniformly* in $t \in \mathbb{R}^{+}$ (i.e. uniformly in $t \in[\varepsilon,T]$, for $0 < \varepsilon < T < +\infty \ $), to a *degenerate* semigroup $\{e^{-tH}P_0\}_{t > 0}$. Here $P_0$ denotes the orthogonal projection from $\mathcal{H}$ onto $\mathcal{H}_0$.

Moreover, in [@Kat1978] it was also shown that the product formula holds not only for the *exponential* function $e^{-x}$, $x \ge 0$, but for a whole class of Borel measurable functions $f(\cdot)$ and $g(\cdot)$, which are defined on $\mathbb{R}^{+}_{0}:=[0,\infty)$ and satisfy the conditions: $$\begin{aligned} & & 0 \le f(x) \le 1, \qquad f(0) = 1, \qquad f'(+0) = -1, \label{6-1.3}\\ & & 0 \le g(x) \le 1, \qquad g(0) = 1, \qquad g'(+0) = -1. \label{6-1.4}\end{aligned}$$ Namely, the main result of [@Kat1978] says that besides (\[6-1.2-1\]) one also gets the convergence $$\label{6-1.6} \tau -\lim_{n\to\infty}\left(f(tA/n)g(tB/n)\right)^n = e^{-tH}P_0, \qquad t > 0,$$ locally uniformly away from zero, for the topology $\tau = s$. Product formulae of the type (\[6-1.6\]) are called the [*Trotter-Kato product formulae*]{} for the functions (\[6-1.3\]), (\[6-1.4\]), which are called the *Kato functions* $\mathcal{K}$. Note that $\mathcal{K}$ is closed with respect to *products* of Kato functions. For some particular classes of [Kato functions]{} we refer to [@NeiZag1998], [@Zag2003].
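For instance, one checks directly that $f(x) = (1+x)^{-1}$ satisfies (\[6-1.3\]): obviously $0 \le f(x) \le 1$ on $\mathbb{R}^{+}_{0}$ and $f(0) = 1$, while $$f'(+0) = - (1+x)^{-2}\big|_{x = 0} = -1 \ ;$$ the same computation with $g = f$ verifies (\[6-1.4\]).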
In the following it is useful to consider, instead of $f(x)g(x)$, two Kato functions, $g(x/2) f(x) g(x/2)$ and $f(x/2) g(x) f(x/2)$, that produce the self-adjoint operator families $$\label{6-1.16-1} F(t) := g(tB/2)f(tA)g(tB/2) \ {\rm{and}} \ T(t) := f(tA/2)g(tB)f(tA/2), \ \ t \ge 0.$$ Since [@NeiZag1990] it has been known that the *lifting* of the topology of convergence in (\[6-1.6\]) to the *operator norm* $\tau = \|\cdot\|$ needs more conditions on the operators $A$ and $B$, as well as on the key Kato functions $f,g \in \mathcal{K}$. One finds a discussion and more references on this subject in [@Zag2003]. Here we quote a result that will be used below for the Trotter-Kato product formulae in the Dixmier ideal $\mathcal{C}_{1,\infty}(\mathcal{H})$.

Consider the class $\mathcal{K}_{\beta}$ of Kato functions, which is defined in [@IchTam2001], [@IchTamTamZag2001] as follows:\
(i) Measurable functions $0 \leq h \leq 1 $ on $\mathbb{R}^{+}_{0}$, such that $h(0) = 1$ and $h'(+0) = -1$.\
(ii) For $\varepsilon > 0$ there exists $\delta = \delta(\varepsilon) < 1$, such that $h(s) \leq 1 - \delta(\varepsilon)$ for $s \geq \varepsilon$, and $$[h]_{\beta} := \sup_{s>0} \frac{\left|h(s) -1+s\right|}{s^{\beta}} < \infty \ , \ \ {\rm{for}} \ \ 1 < \beta \leq 2 \ .$$ The standard examples are $h(s) = e^{-s}$ and $h(s)= (1 + a^{-1}s)^{-a}\ , \ a>0$.
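For example, one can verify directly that $h(s) = e^{-s}$ belongs to $\mathcal{K}_{2}$: condition (i) is obvious, for (ii) one can take $\delta(\varepsilon) = 1 - e^{-\varepsilon}$, and $$0 \ \le \ e^{-s} - 1 + s \ = \ \int_0^s (1 - e^{-u})\, du \ \le \ \int_0^s u \, du \ = \ \frac{s^2}{2} \ , \qquad s \ge 0 \ ,$$ so that $[h]_{2} \le 1/2 < \infty$.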
Below we consider the class $\mathcal{K}_{\beta}$ and a particular case of generators $A$ and $B$, such that for the Trotter-Kato product formulae the estimate of the convergence rate is *optimal*.

\[IV.6-6\][@IchTamTamZag2001] Let $f,g \in {\mathcal{K}_{\beta}}$ with $\beta = 2$, and let $A$, $B$ be non-negative self-adjoint operators in $\mathcal{H}$ such that the operator sum $C := A + B $ is *self-adjoint* on the domain $dom(C):=dom(A)\cap dom(B)$. Then the Trotter-Kato product formulae converge for $n\to \infty$ in the operator norm: $$\begin{aligned} && \|[f(tA/n)g(tB/n)]^n - e^{-tC}\| = O(n^{-1}) \ , \ \|[g(tB/n)f(tA/n)]^n - e^{-tC}\| = O(n^{-1}) \ , \\ && \|F(t/n)^n - e^{-tC}\| = O(n^{-1}) \ \ , \ \ \ \ \|T(t/n)^n - e^{-tC}\| = O(n^{-1}) \ .\end{aligned}$$ Note that the error bounds $O(n^{-1})$ corresponding to each formula coincide up to coefficients $\{\Gamma_j > 0\}_{j=1}^4$, and that each rate of convergence $\Gamma_j \ \varepsilon(n)= O(n^{-1})$, $j = 1, \ldots, 4$, is optimal.

The first *lifting* lemma yields sufficient conditions that allow one to strengthen the *strong* operator convergence to $\|\cdot\|_\phi$-norm convergence in the symmetrically-normed ideal $\mathcal{C}_{\phi}(\mathcal{H})$.

\[lem6-2.7\] Let self-adjoint operators $X \in \mathcal{C}_{\phi}(\mathcal{H})$, $Y \in \mathcal{C}_{\infty}(\mathcal{H})$ and $Z \in \mathcal{L}(\mathcal{H})$ be given. If $\{Z(t)\}_{t \ge 0}$ is a family of self-adjoint bounded operators such that $$\label{6-2.49} s-\lim_{t \to +0}Z(t) = Z \ ,$$ then $$\label{6-2.50} \lim_{r\to\infty}\sup_{t \in [0,\tau]}\|(Z(t/r) - Z)YX\|_\phi = \lim_{r\to\infty}\sup_{t \in [0,\tau]}\|XY(Z(t/r) - Z)\|_\phi = 0 \ ,$$ for any $\tau \in (0,\infty)$.

Note that (\[6-2.49\]) yields the strong operator convergence $s-\lim_{r\to\infty}Z(t/r) = Z$, uniformly in $t \in [0,\tau]$. Since $Y \in \mathcal{C}_{\infty}(\mathcal{H})$, this implies $$\label{6-2.52} \lim_{r\to\infty}\sup_{t \in [0,\tau]}\|(Z(t/r) - Z)Y\| = 0 \ .$$ Since $\mathcal{C}_{\phi}(\mathcal{H})$ is a Banach space with a symmetric norm (\[6-2.12-4\]) that verifies $\|Z X\|_\phi \leq \|Z\|\|X\|_\phi$, one gets the estimate $$\label{6-2.53} \|(Z(t/r) - Z)YX\|_\phi \le \|(Z(t/r) - Z)Y\|\|X\|_\phi \ ,$$ which together with (\[6-2.52\]) proves (\[6-2.50\]). $\Box$

The second *lifting* lemma allows one to estimate the rate of convergence of the Trotter-Kato product formula in the norm (\[6-2.14\]) of the symmetrically-normed ideal $\mathcal{C}_{\phi}(\mathcal{H})$ via the [error bound]{} $\varepsilon(n)$ in the operator norm due to Proposition \[IV.6-6\].

\[thm6-5.0\] Let $A$ and $B$ be non-negative self-adjoint operators on the separable Hilbert space $\mathcal{H}$ that satisfy the conditions of Proposition \[IV.6-6\]. Let $f,g \in {\mathcal{K}_{2}}$ be such that $F(t_0) \in \mathcal{C}_{\phi}(\mathcal{H})$ for some $t_0 > 0$. If $\Gamma_{t_0} \varepsilon(n)$, $n \in \mathbb{N}$, is the operator-norm error bound away from $t_0 > 0$ of the Trotter-Kato product formula for $\{f(tA)g(tB)\}_{t \ge 0}$, then for some $\Gamma_{2t_0}^{\phi} > 0$ the function $\varepsilon_\phi(n) := \{\varepsilon([n/2]) + \varepsilon([(n+1)/2])\}$, $n \in \mathbb{N}$, defines the error bound away from $2t_0$ of the Trotter-Kato product formula in the ideal $\mathcal{C}_{\phi}(\mathcal{H})$: $$\label{6-1.16-2} \|[f(tA/n)g(tB/n)]^n - e^{-tC}\|_{\phi} \le \Gamma_{2t_0}^{\phi}\varepsilon_\phi(n) \ , \qquad t \ge 2t_0 \ ,$$ for sufficiently large $n$. Here $[x] := \max\{l \in \mathbb{N}_0: l \le x\}$, for $x \in \mathbb{R}^{+}_{0}$.

To prove the assertion for the family $\{f(tA)g(tB)\}_{t \ge 0}$ we use the decompositions $n = k + m$, $k \in \mathbb{N}$ and $m = 2,3,\ldots \ $, $n \ge 3$, for the representation $$\begin{aligned} \label{6-5.0-14} \lefteqn{\hspace{0.0cm} (f(tA/n)g(tB/n))^n - e^{-tC} = }\\ & & \left((f(tA/n)g(tB/n))^k - e^{-ktC/n}\right)(f(tA/n)g(tB/n))^m \nonumber \\ & & + \ e^{-ktC/n}\left((f(tA/n)g(tB/n))^m - e^{-mtC/n}\right)\ .\nonumber\end{aligned}$$ Since by the conditions of the lemma $F(t_0) \in \mathcal{C}_{\phi}(\mathcal{H})$, definition (\[6-1.16-1\]) and the representation $(f(tA/n)g(tB/n))^m = f(tA/n)g(tB/n)^{1/2}F(t/n)^{m-1}g(tB/n)^{1/2}$ yield $$\label{6-5.0-16} \|(f(tA/n)g(tB/n))^m\|_\phi \le \|F(t_0)\|_\phi \ ,$$ for $t$ such that $t_0 \le {(m-1)}t /{n} \le (m-1)t_0$ and $m-1 \ge 1$. Note that for the self-adjoint operators $e^{-tC}$ and $F(t)$, by Araki's log-order inequality for compact operators [@Ara1990], one gets for $kt/n \ge t_0$ the bound on $e^{-ktC/n}$ in the $\|\cdot\|_\phi$-norm: $$\label{6-5.0-17} \|e^{-ktC/n}\|_\phi \le \|F(t_0)\|_\phi \ .$$ Since by Definitions \[def6-2.1-2\] and \[def6-2.3\] the ideal $\mathcal{C}_{\phi}(\mathcal{H})$ is a Banach space, from (\[6-5.0-14\])-(\[6-5.0-17\]) we obtain the estimate $$\begin{aligned} \label{6-5.0-18} &&\|(f(tA/n)g(tB/n))^n - e^{-tC}\|_\phi \le \\ &&\|F(t_0)\|_\phi \ \|(f(tA/n)g(tB/n))^k - e^{-ktC/n}\| \nonumber \\ &&+ \|F(t_0)\|_\phi \ \|(f(tA/n)g(tB/n))^m - e^{-mtC/n}\| \ , \nonumber\end{aligned}$$ for $t$ such that $(1 + {(k+1)}/{(m-1)})t_0 \le t \le nt_0$, $m \ge 2$ and $t \ge (1 + {m}/{k})t_0$. Now, by the conditions of the lemma, $\Gamma_{t_0} \varepsilon(\cdot)$ is the operator-norm error bound away from $t_0$, for any interval $[a,b] \subseteq (t_0,+\infty)$.
Then there exists $n_0 \in \mathbb{N}$ such that $$\label{6-5.0-19} \|(f(tA/n)g(tB/n))^k - e^{-ktC/n}\| \le \Gamma_{t_0} \varepsilon(k) \ ,$$ for ${k}t/{n} \in [a,b] \Leftrightarrow t \in [(1 + {m}/{k})a,(1+ {m}/{k})b]$ and $$\label{6-5.0-20} \|(f(tA/n)g(tB/n))^m - e^{-mtC/n}\| \le \Gamma_{t_0} \varepsilon(m) \ ,$$ for ${m}t /{n} \in [a,b] \Leftrightarrow t \in [(1 + {k}/{m})a,(1 + {k}/{m})b]$, for all $n > n_0 $. Setting $m := [(n+1)/2]$ and $k = [n/2]$, $n \ge 3$, we satisfy $n = k + m$ and $m \ge 2$, as well as $\lim_{n\to\infty} {(k+1)}/{(m-1)} = 1$, $\lim_{n\to\infty} {m}/{k} = 1$ and $\lim_{n\to\infty} {k}/{m} = 1$. Hence, for any interval $[\tau_0,\tau] \subseteq (2t_0,+\infty)$ we find that $[\tau_0,\tau] \subseteq [(1 + {(k+1)}/{(m-1)})t_0, nt_0]$ for sufficiently large $n$. Moreover, choosing $[\tau_0/2,\tau/2] \subseteq (a,b) \subseteq (t_0,+\infty)$ we satisfy $[\tau_0,\tau] \subseteq [(1 + {m}/{k})a,(1 + {m}/{k})b]$ and $[\tau_0,\tau] \subseteq [(1 + {k}/{m})a, (1 + {k}/{m})b]$, again for sufficiently large $n$. Thus, for any interval $[\tau_0,\tau] \subseteq (2t_0,+\infty)$ there is $n_0 \in \mathbb{N}$ such that (\[6-5.0-18\]), (\[6-5.0-19\]) and (\[6-5.0-20\]) hold for $t \in [\tau_0,\tau]$ and $n \ge n_0$. Therefore, (\[6-5.0-18\]) yields the estimate $$\begin{aligned} \label{6-5.0-21} &&\|(f(tA/n)g(tB/n))^n - e^{-tC}\|_\phi \le \\ &&\hspace{1cm} \Gamma_{t_0} \ \|F(t_0)\|_\phi \{\varepsilon([n/2]) + \varepsilon([(n+1)/2])\} \ , \nonumber\end{aligned}$$ for $t \in [\tau_0,\tau] \subseteq(2t_0,+\infty)$ and $n \ge n_0$. Hence, $\Gamma_{2t_0}^{\phi} := \Gamma_{t_0} \ \|F(t_0)\|_\phi$ and $\Gamma_{2t_0}^{\phi} \varepsilon_\phi(\cdot)$ is an error bound in the Trotter-Kato product formula (\[6-1.16-2\]) away from $2t_0$ in $\mathcal{C}_{\phi}(\mathcal{H})$ for the family $\{f(tA)g(tB)\}_{t \ge 0}$. The lifting Lemma \[lem6-2.7\] allows one to extend the proof to the other approximants: $\{g(tB)f(tA)\}_{t \ge 0}$, $\{F(t)\}_{t \ge 0}$ and $\{T(t)\}_{t \ge 0}$. $\Box$

Now we apply Lemma \[thm6-5.0\] to the Dixmier ideal $\mathcal{C}_{\phi}(\mathcal{H}) = \mathcal{C}_{1, \infty}(\mathcal{H})$. This concerns the norm convergence (\[6-1.16-2\]), but also the estimate of the convergence rate for the Dixmier traces: $$\label{7-3.6} |{\rm{Tr}}{_{\omega}}(e^{-tC}) - {\rm{Tr}}{_{\omega}}(F(t/n)^n)| \leq \Gamma^{\omega} \varepsilon_{\omega}(n) \ .$$ In fact, it is the same (up to $\Gamma^{\omega}$) for all Trotter-Kato approximants: $\{T(t)\}_{t\geq 0}$, $\{f(t)g(t)\}_{t\geq 0}$, and $\{g(t)f(t)\}_{t\geq 0}$. Indeed, since by inequality (\[7-2.19-1\]) and Lemma \[thm6-5.0\] for $t \in [\tau_0,\tau]$ and $n \ge n_0$ one has $$\begin{aligned} |{\rm{Tr}}{_{\omega}}(e^{-tC}) - {\rm{Tr}}{_{\omega}}(F(t/n)^n)| \leq \|e^{-tC} - F(t/n)^n\|_{1, \infty} \le \Gamma_{2t_0}^{\phi} \ \varepsilon_{1, \infty}(n) \ , \label{7-3.7}\end{aligned}$$ we obtain for the rate in (\[7-3.6\]): $\varepsilon_{\omega}(\cdot) = \varepsilon_{1, \infty}(\cdot)$. Therefore, the estimates of the convergence rate for the Dixmier traces (\[7-3.6\]) and for the $\|\cdot\|_{1, \infty}$-convergence in (\[7-3.7\]) are *entirely* defined by the operator-norm error bound $\varepsilon(\cdot)$ from Lemma \[thm6-5.0\] and have the form: $$\label{7-3.8} \varepsilon_{1, \infty}(n) := \{\varepsilon([n/2]) + \varepsilon([(n+1)/2])\} \ , \ n \in \mathbb{N} \ .$$
Note that in the particular case of Proposition \[IV.6-6\] these arguments yield in (\[6-5.0-21\]) the explicit convergence rate $O(n^{-1})$ for the Trotter-Kato formulae and, consequently, the same asymptotics for the convergence rates of the Trotter-Kato formulae for the Dixmier trace (\[7-3.6\]), (\[7-3.7\]). Therefore, we have proved in the Dixmier ideal $\mathcal{C}_{1, \infty}(\mathcal{H})$ the following assertion.

\[pro7-1.3\] Let $f,g \in {\mathcal{K}_{\beta}}$ with $\beta = 2$, and let $A$, $B$ be non-negative self-adjoint operators in $\mathcal{H}$ such that the operator sum $C := A + B $ is *self-adjoint* on the domain $dom(C):=dom(A)\cap dom(B)$. If $F(t_0) \in \mathcal{C}_{1, \infty}(\mathcal{H})$ for some $t_0 > 0$, then the Trotter-Kato product formulae converge for $n\to \infty$ in the $\|\cdot\|_{1, \infty}$-norm: $$\begin{aligned} && \|[f(tA/n)g(tB/n)]^n - e^{-tC}\|_{1, \infty} = O(n^{-1}) \ , \ \|[g(tB/n)f(tA/n)]^n - e^{-tC}\|_{1, \infty} = O(n^{-1}) \ , \\ && \|F(t/n)^n - e^{-tC}\|_{1, \infty} = O(n^{-1}) \ \ , \ \ \ \ \|T(t/n)^n - e^{-tC}\|_{1, \infty} = O(n^{-1}) \ ,\end{aligned}$$ away from $2t_0$. The rate $O(n^{-1})$ of convergence is optimal in the sense of [@IchTamTamZag2001]. By virtue of (\[7-3.7\]) the same asymptotics $O(n^{-1})$ of the convergence rate are valid for the convergence of the Trotter-Kato formulae for the Dixmier trace: $$\begin{aligned} && |{\rm{Tr}}{_{\omega}}([f(tA/n)g(tB/n)]^n) - {\rm{Tr}}{_{\omega}}(e^{-tC})| = O(n^{-1}) \ , \\ && |{\rm{Tr}}{_{\omega}}([g(tB/n)f(tA/n)]^n) - {\rm{Tr}}{_{\omega}}(e^{-tC})| = O(n^{-1}) \ , \\ && |{\rm{Tr}}{_{\omega}}(F(t/n)^n) - {\rm{Tr}}{_{\omega}}(e^{-tC})| = O(n^{-1}) \ \ , \ \ \ |{\rm{Tr}}{_{\omega}}(T(t/n)^n) - {\rm{Tr}}{_{\omega}}(e^{-tC})| = O(n^{-1}) \ ,\end{aligned}$$ away from $2t_0$.

The optimality of the estimates in Theorem \[pro7-1.3\] is inherited from the optimality in Proposition \[IV.6-6\]. Recall that in particular this means that, in contrast to the Lie product formula for *bounded* generators $A$ and $B$, the *symmetrisation* of the approximants $\{f(t)g(t)\}_{t\geq 0}$ and $\{g(t)f(t)\}_{t\geq 0}$ by $\{F(t)\}_{t\geq 0}$ and $\{T(t)\}_{t\geq 0}$ does not (in general) yield an improvement of the convergence rate, see [@IchTamTamZag2001] and the discussion in [@Zag2005].

We conclude that the *lifting* Lemmata \[lem6-2.7\] and \[thm6-5.0\] provide a general method to study the convergence in symmetrically-normed ideals $\mathcal{C}_{\phi}(\mathcal{H})$ as soon as it is established in $\mathcal{L}(\mathcal{H})$ in the operator-norm topology. The crucial point is to check that for one of the *key* Kato functions (e.g. for $\{F(t)\}_{t\geq 0}$) there exists $t_0 > 0$ such that $F(t)|_{t\geq t_0} \in \mathcal{C}_{\phi}(\mathcal{H})$. Sufficient conditions for this can be found in [@NeiZag1999a]-[@NeiZag1999c], or in [@Zag2003].

[**Acknowledgments.**]{} I am thankful to the referee for useful remarks and suggestions.

H. Araki, *On an inequality of Lieb and Thirring*, Lett. Math. Phys. **19** (1990), 167–170.

V. Cachia and V. A. Zagrebnov, *Trotter product formula for nonself-adjoint Gibbs semigroups*, J. London Math. Soc. **64** (2001), 436–444.

A. L. Carey and F. A. Sukachev, *Dixmier traces and some applications in non-commutative geometry*, Russian Math. Surveys **61**:6 (2006), 1039–1099.

A. Connes, *Noncommutative Geometry*, Academic Press, London, 1994.

J. Dixmier, *Existence des traces non normales*, C. R. Acad. Sci. Paris, Sér. A **262** (1966), 1107–1108.
J. Dixmier, *Von Neumann Algebras*, North Holland, Amsterdam, 1981.

A. Doumeki, T. Ichinose, and Hideo Tamura, *Error bounds on exponential product formulas for Schrödinger operators*, J. Math. Soc. Japan **50** (1998), 359–377.

I. C. Gohberg and M. G. Kreǐn, *Introduction to the theory of linear nonselfadjoint operators in Hilbert space*, (Translated by A. Feinstein from the Russian Edition: "Nauka", Moscow, 1965) Transl. Math. Monogr., vol. 18, Am. Math. Soc., Providence, R. I., 1969.

T. Ichinose and Hideo Tamura, *Error bound in trace norm for Trotter-Kato product formula of Gibbs semigroups*, Asymptotic Anal. **17** (1998), 239–266.

T. Ichinose and Hideo Tamura, *The norm convergence of the Trotter-Kato product formula with error bound*, Commun. Math. Phys. **217** (2001), 489–502.

T. Ichinose, Hideo Tamura, Hiroshi Tamura, and V. A. Zagrebnov, *Note on the paper "The norm convergence of the Trotter-Kato product formula with error bound" by Ichinose and Tamura*, Commun. Math. Phys. **221** (2001), 499–510.

T. Kato, *Trotter's product formula for an arbitrary pair of self-adjoint contraction semigroups*, in Topics in Funct. Anal., Adv. Math. Suppl. Studies, Vol. 3, pp. 185–195 (I. Gohberg and M. Kac, eds.), Acad. Press, New York, 1978.

S. Lord, F. Sukochev, and D. Zanin, *Singular Traces. Theory and Applications*, Series: De Gruyter Studies in Mathematics 46, W. de Gruyter GmbH, Berlin, 2013.

H. Neidhardt and V. A. Zagrebnov, *The Trotter product formula for Gibbs semigroups*, Commun. Math. Phys. **131** (1990), 333–346.

H. Neidhardt and V. A. Zagrebnov, *On error estimates for the Trotter-Kato product formula*, Lett. Math. Phys. **44** (1998), 169–186.

H. Neidhardt and V. A. Zagrebnov, *Fractional powers of self-adjoint operators and Trotter-Kato product formula*, Integral Equations Oper. Theory **35** (1999), 209–231.

H. Neidhardt and V. A. Zagrebnov, *Trotter-Kato product formula and operator-norm convergence*, Commun. Math. Phys. **205** (1999), 129–159.

H. Neidhardt and V. A. Zagrebnov, *On the operator-norm convergence of the Trotter-Kato product formula*, Oper. Theory Adv. Appl. **108** (1999), 323–334.

H. Neidhardt and V. A. Zagrebnov, *Trotter-Kato product formula and symmetrically normed ideals*, J. Funct. Anal. **167** (1999), 113–167.

A. Pietsch, *Traces of operators and their history*, Acta et Comm. Univ. Tartuensis de Math. **18** (2014), 51–64.

R. Schatten, *Norm ideals of completely continuous operators*, Springer-Verlag, Berlin, 1970.

B. Simon, *Trace ideals and their applications*, Second edition, Math. Surveys and Monographs, Vol. 120, AMS, Providence, R. I., 2005.

Hiroshi Tamura, *A remark on operator-norm convergence of Trotter-Kato product formula*, Integral Equations Oper. Theory **37** (2000), 350–356.

V. A. Zagrebnov, *The Trotter-Lie product formula for Gibbs semigroups*, J. Math. Phys. **29** (1988), 888–891.

V. A. Zagrebnov, *Topics in the Theory of Gibbs semigroups*, Leuven Notes in Mathematical and Theoretical Physics, vol. 10 (Series A: Mathematical Physics), Leuven University Press, 2003.

V. A. Zagrebnov, *Trotter-Kato product formula: some recent results*, Proceedings of the XIVth International Congress on Mathematical Physics, Lisbon (July 28 – August 02, 2003), World Scientific, Singapore, 2005, pp. 634–641.
# MongoDB Advanced Data Processing

----------------------

# Objectives

By the end of this module you'll know how to:

- Load large CSV data sets into MongoDB
- Use the basics of Map Reduce and Aggregations
- Process data with Map/Reduce tasks in MongoDB against a large collection
- Process data with MongoDB Aggregation tasks

# Introduction

MongoDB provides support for large data processing tasks such as [Map Reduce](http://docs.mongodb.org/manual/core/map-reduce/) and [Aggregation](http://docs.mongodb.org/manual/core/aggregation-pipeline/).

Map Reduce processes your entire database in 2 steps: **Map** and **Reduce**. The Map step maps every document in your database to a category or key. Then, in the Reduce step, every key reduces all of its mapped values by aggregating them with some type of algorithm.

MongoDB aggregation tasks allow operations similar to Map Reduce but work as a pipeline rather than a 2-step process. Aggregations allow you to take a collection and transform it any number of times until you get the collection you desire. The advantage of Aggregations over Map Reduce is that they allow you to process only the parts of your database you need and to exclude the parts you don't. Map Reduce requires that your *entire* database be processed.

# The Example Data Set

The example data set we will be dealing with is City of Chicago crime police report data from 2001 to 'present', which at the time of writing is November 2014. The data comes in a large .csv file and we have uploaded a zipped version [here](https://mongdbmva.blob.core.windows.net/csv/crimedata.csv.zip). You can also find the original unzipped download link from the City of Chicago [here](https://data.cityofchicago.org/api/views/ijzp-q8t2/rows.csv?accessType=DOWNLOAD).

After downloading the CSV file, we can import this data into our database using the ```mongoimport``` utility we used previously to import the test bank_data json:

```
mongoimport Crimes_-_2001_to_present.csv --type csv --headerline --collection crimes
```

The ```--type``` parameter specifies that it's a csv file, ```--headerline``` indicates that the first line of the csv file has the field names, and ```--collection``` specifies the collection to insert the new documents into.

This command may take a while; run it on a fairly fast machine, or else working with data this large may hang up your machine. Here's what your output will look like:

```
connected to: 127.0.0.1
2014-11-17T21:36:07.025-0800    Progress: 2756872/1330450848    0%
2014-11-17T21:36:07.026-0800    11700   3900/second
2014-11-17T21:36:10.004-0800    Progress: 5634999/1330450848    0%
2014-11-17T21:36:10.004-0800    23900   3983/second
2014-11-17T21:36:13.013-0800    Progress: 8629859/1330450848    0%
2014-11-17T21:36:13.013-0800    36600   4066/second
2014-11-17T21:36:16.003-0800    Progress: 11551450/1330450848   0%
2014-11-17T21:36:16.003-0800    49000   4083/second
```

Afterwards you should have about 5.6 million documents uploaded, each representing a police report incident.
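A quick way to sanity-check the import is to count the documents from the mongo shell. The count shown below is illustrative only; yours will depend on when you downloaded the data:

```
> db.crimes.count()
5603240
```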
Here's a sample of what your documents will look like:

```
{ "_id" : ObjectId("5462725476ecd357dbbc721e"), "ID" : 9844675, "Case Number" : "HX494115", "Date" : "11/03/2014 11:51:00 PM", "Block" : "056XX S MORGAN ST", "IUCR" : 486, "Primary Type" : "BATTERY", "Description" : "DOMESTIC BATTERY SIMPLE", "Location Description" : "ALLEY", "Arrest" : "false", "Domestic" : "true", "Beat" : 712, "District" : 7, "Ward" : 16, "Community Area" : 68, "FBI Code" : "08B", "X Coordinate" : 1170654, "Y Coordinate" : 1867165, "Year" : 2014, "Updated On" : "11/10/2014 12:43:02 PM", "Latitude" : 41.790980835, "Longitude" : -87.649786614, "Location" : "(41.790980835, -87.649786614)" }
{ "_id" : ObjectId("5462725476ecd357dbbc721f"), "ID" : 9844669, "Case Number" : "HX494159", "Date" : "11/03/2014 11:50:00 PM", "Block" : "027XX S HOMAN AVE", "IUCR" : 820, "Primary Type" : "THEFT", "Description" : "$500 AND UNDER", "Location Description" : "RESIDENTIAL YARD (FRONT/BACK)", "Arrest" : "false", "Domestic" : "false", "Beat" : 1032, "District" : 10, "Ward" : 22, "Community Area" : 30, "FBI Code" : 6, "X Coordinate" : 1154188, "Y Coordinate" : 1885408, "Year" : 2014, "Updated On" : "11/10/2014 12:43:02 PM", "Latitude" : 41.841385453, "Longitude" : -87.709678617, "Location" : "(41.841385453, -87.709678617)" }
{ "_id" : ObjectId("5462725476ecd357dbbc7220"), "ID" : 9846437, "Case Number" : "HX494607", "Date" : "11/03/2014 11:49:00 PM", "Block" : "008XX N MILWAUKEE AVE", "IUCR" : 4386, "Primary Type" : "OTHER OFFENSE", "Description" : "VIOLATION OF CIVIL NO CONTACT ORDER", "Location Description" : "RESIDENCE", "Arrest" : "false", "Domestic" : "true", "Beat" : 1213, "District" : 12, "Ward" : 27, "Community Area" : 24, "FBI Code" : 26, "X Coordinate" : 1168403, "Y Coordinate" : 1905809, "Year" : 2014, "Updated On" : "11/10/2014 12:43:02 PM", "Latitude" : 41.897072334, "Longitude" : -87.656924505, "Location" : "(41.897072334, -87.656924505)" }
{ "_id" : ObjectId("5462725476ecd357dbbc7221"), "ID" : 9844605, "Case Number" : "HX494099", "Date" : "11/03/2014 11:47:00 PM", "Block" : "025XX W 51ST ST", "IUCR" : 1310, "Primary Type" : "CRIMINAL DAMAGE", "Description" : "TO PROPERTY", "Location Description" : "RESIDENCE", "Arrest" : "true", "Domestic" : "false", "Beat" : 923, "District" : 9, "Ward" : 14, "Community Area" : 63, "FBI Code" : 14, "X Coordinate" : 1159952, "Y Coordinate" : 1870801, "Year" : 2014, "Updated On" : "11/10/2014 12:43:02 PM", "Latitude" : 41.801185293, "Longitude" : -87.688928625, "Location" : "(41.801185293, -87.688928625)" }
```

# Map Reduce Explained

The best way to explain map reduce is to attempt to answer a question about the massive amount of data we have. Let's try this one out for size: **What day of the week has the most crime incidents recorded in Chicago?**

[Map Reduce](http://en.wikipedia.org/wiki/MapReduce) breaks this problem down into 2 steps, **Map** and **Reduce**.

## Mapping

The question essentially asks us to break down the number of crimes that have occurred in Chicago since 2001 by day of the week. Notice that each crime document has a ```Date``` field which indicates the exact date of the incident. We can use this data to **map** the crime document to a specific day of the week.
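To sanity-check that idea, you can paste a `Date` value from the first sample document into a Node.js console. This is just an illustrative snippet, not part of the module's code; note that parsing non-ISO date strings is technically implementation-dependent, though it works in V8, the engine Node.js uses:

```js
var milis = Date.parse("11/03/2014 11:51:00 PM"); // from the first sample document
var date = new Date(milis);
date.getDay(); // 1 -- getDay() returns 0-6 starting at Sunday, so 1 means Monday
```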
Here's a graphical view of what we will be doing in this step:

![](ScreenShots/mongodbss1.png)

Each of the 5.6 million police reports will be processed by MongoDB, and we will specify that we want each document to be mapped to one of the 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', or 'Sunday' keys depending on the date field of the document.

We do this by calling the **mapReduce** function on the collection. **mapReduce** takes two functions as parameters, a **map** function and a **reduce** function. The MongoDB driver (and interactive shell) allows us to specify a **map** function which defines how MongoDB will map the documents to their respective keys. To emit a mapping, the **emit** function can be called within the map function to let MongoDB know that you have a mapping for this document. Here's what this looks like in code using the Node.js driver (which is very similar to the interactive shell):

```js
//Question: What day of the week do most crimes occur in Chicago from September 2001 to present?
crimes.mapReduce(function(){
    var milis = Date.parse(this.Date);
    var date = new Date(milis);

    //use a JavaScript array to create key mappings between 0-6 and Sunday - Saturday
    var daysOfWeek = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];

    //emit the mapping between this document and the day which it corresponds to
    emit(daysOfWeek[date.getDay()], 1);
}
```

Notice that we parse the date field using the JavaScript Date class, which lets us easily acquire the day of the week. We call getDay() to retrieve the day as a number between 0 and 6, and then use a JavaScript array to map those values to a day of the week, since every JavaScript object is already a key/value dictionary.

We can't execute the code snippet above yet, since it's not complete without our **reduce** function.

## Reducing

The second step in this process is to reduce these mappings into a resulting data set which *summarizes* the mappings we've created. The original question asks us to *summarize* which day of the week crimes happened most frequently in Chicago. The Map step has already mapped 5.6 million crime documents onto 7 different keys. If we summarize the data by summing the total number of crime documents for each key (the day of the week), we can quickly answer the question at hand.

Here's a visual representation of what the Reduce step does:

![](ScreenShots/mongodbss3.png)

The **reduce** function is the second parameter to the **mapReduce** function, and is where we do this summarization. Reduce is called with 2 parameters, **key** and **values**:

```js
function(key, values){
    //reduce the set of values to a single sum;
    //since map emitted a 1 per document, the sum is the count for this day (key)
    return Array.sum(values);
},
```

It turns out that the reduce function for this question is quite easy: we just sum the 1s emitted for the key (the day of the week). One subtlety worth knowing is that MongoDB may call reduce several times with partial results for the same key, so the reduce output must have the same form as the emitted values; summing satisfies this, which is why we sum the values rather than count them.

The next parameter for **mapReduce** specifies the output collection for the results. This is the collection where the result for each key will be placed. We pass a simple JavaScript object that specifies this as an **out** collection:

```js
{
    out: "crime_day_frequencies"
},
```

It's optional, but let's add a callback function which will output the results upon finishing the map/reduce job.
Remember, Node.js is asynchronous, so the mapReduce function won't block the application while the task completes:

```js
function(err, results, stats){
    console.log('completed!');
    if(err){
        return console.error(err);
    }
    var outCollection = db.collection('crime_day_frequencies');
    outCollection.find().toArray(function(err, docs){
        if(err){
            return console.error(err);
        }
        console.log('Number of crimes based on each day of the week');
        for(var i in docs){
            console.log(docs[i]);
        }
        return;
    });
});
```

Putting the entire mapReduce call together for Node.js looks like:

```js
//Question: What day did most crimes occur in Chicago from September 2001 to present?
crimes.mapReduce(function(){
    var milis = Date.parse(this.Date);
    var date = new Date(milis);
    var daysOfWeek = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];

    //emit the mapping between this document and the day which it corresponds to
    emit(daysOfWeek[date.getDay()], 1);
},
function(key, values){
    //sum the 1s emitted in map; this stays correct even if reduce runs on partial results
    return Array.sum(values);
},
{
    out: "crime_day_frequencies"
},
function(err, results, stats){
    if(err){
        return console.error(err);
    }
    var outCollection = db.collection('crime_day_frequencies');
    outCollection.find({}).toArray(function(err, docs){
        if(err){
            return console.error(err);
        }
        console.log('Number of crimes based on each day of the week');
        for(var i in docs){
            console.log(docs[i]);
        }
        return;
    });
});
```

Finally, the output of this map/reduce job gives us the answer we were looking for in the form of a MongoDB collection:

```
Number of crimes based on each day of the week
{ _id: 'Friday', value: 854604 }
{ _id: 'Monday', value: 798881 }
{ _id: 'Saturday', value: 807961 }
{ _id: 'Sunday', value: 760800 }
{ _id: 'Thursday', value: 810790 }
{ _id: 'Tuesday', value: 814967 }
{ _id: 'Wednesday', value: 820651 }
```

From the results above, barring any actual statistical science, it appears that the most common day for crime in Chicago from 2001 to present has been Friday. This could be a very interesting insight!

# Data Aggregations

Another nifty data processing tool that MongoDB offers is [Aggregations](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set/). Aggregations work on a concept of processing [pipelines](http://docs.mongodb.org/manual/core/aggregation-pipeline/#id1) which consist of a number of stages. At each stage, the collection of documents is transformed.

As before, the best way to learn this concept is by asking a question about the data and using Aggregations to answer it. Let's go with the question: **What is the most common type of crime committed in Chicago between 2001 and November 2014?**

It's possible to answer this with Map/Reduce, but let's use Aggregations instead. We start with the original crimes collection of 5.6 million documents, group those documents by the ```Primary Type``` field, keep track of the count of documents in each Primary Type group, and then finally sort those groups by the generated count values. Visually, we are processing the crimes collection through a series of pipeline steps:

![](ScreenShots/mongodbss2.png)

Stage 0 is the initial collection of crime documents, which has approximately 5.6 million documents.
We then use a [**$group**](http://docs.mongodb.org/manual/reference/operator/aggregation/group/) operator to group all those documents by ```Primary Type```, which is essentially the type of crime that was committed. This leaves us with a new collection of documents, one for each group, and we add a **count** field to each group to record how many documents are in that group.

Finally, the next pipeline step sorts the collection that came out of the $group operator, using the **$sort** operator. We sort descending by the count field on the grouped collection.

Here's what this looks like in Node.js code:

```js
crimes.aggregate(
    { $group: { _id: "$Primary Type", count: { $sum: 1 } } },
    { $sort: { count: -1 } },
    function(err, docs){
        if(err){
            return console.error(err);
        }
        console.log('Crime data by type');
        for(var i in docs){
            console.log(docs[i]);
        }
    }
);
```

For the provided dataset, here are the results, which also yield some interesting insights:

```
{ _id: 'THEFT', count: 1168715 }
{ _id: 'BATTERY', count: 1033129 }
{ _id: 'CRIMINAL DAMAGE', count: 654607 }
{ _id: 'NARCOTICS', count: 646714 }
{ _id: 'OTHER OFFENSE', count: 349060 }
{ _id: 'ASSAULT', count: 343095 }
{ _id: 'BURGLARY', count: 334888 }
{ _id: 'MOTOR VEHICLE THEFT', count: 271217 }
{ _id: 'ROBBERY', count: 212066 }
{ _id: 'DECEPTIVE PRACTICE', count: 186657 }
{ _id: 'CRIMINAL TRESPASS', count: 166819 }
{ _id: 'PROSTITUTION', count: 64586 }
{ _id: 'WEAPONS VIOLATION', count: 53954 }
{ _id: 'PUBLIC PEACE VIOLATION', count: 40761 }
{ _id: 'OFFENSE INVOLVING CHILDREN', count: 35031 }
{ _id: 'SEX OFFENSE', count: 20696 }
{ _id: 'CRIM SEXUAL ASSAULT', count: 20406 }
{ _id: 'GAMBLING', count: 13522 }
{ _id: 'LIQUOR LAW VIOLATION', count: 13063 }
{ _id: 'ARSON', count: 9363 }
{ _id: 'HOMICIDE', count: 6869 }
{ _id: 'INTERFERENCE WITH PUBLIC OFFICER', count: 6738 }
{ _id: 'KIDNAPPING', count: 5936 }
{ _id: 'INTERFERE WITH PUBLIC OFFICER', count: 3760 }
{ _id: 'INTIMIDATION', count: 3379 }
{ _id: 'STALKING', count: 2653 }
{ _id: 'OFFENSES INVOLVING CHILDREN', count: 382 }
{ _id: 'OBSCENITY', count: 294 }
{ _id: 'PUBLIC INDECENCY', count: 114 }
{ _id: 'OTHER NARCOTIC VIOLATION', count: 102 }
{ _id: 'NON-CRIMINAL', count: 24 }
{ _id: 'RITUALISM', count: 23 }
{ _id: 'CONCEALED CARRY LICENSE VIOLATION', count: 14 }
{ _id: 'NON - CRIMINAL', count: 13 }
{ _id: 'NON-CRIMINAL (SUBJECT SPECIFIED)', count: 3 }
{ _id: 'DOMESTIC VIOLENCE', count: 1 }
```

From our results, it would appear that Theft and Battery have by far been the most common types of crime in Chicago over the past 13 years.

# Aggregates vs Map/Reduce

Map Reduce and Aggregations are very powerful tool sets for gaining insights into your MongoDB database, so it makes sense to compare and contrast the two.

Map Reduce is fundamentally designed to process ALL of your data: every single document is looked at at least once in a Map/Reduce task. In exchange, Map Reduce allows for more complex logic in mapping documents to keys, and you can implement more complex behavior for reduction as well.

Aggregations came to MongoDB after version 2.0, partially out of a need to avoid processing the entire database when it isn't needed. Because aggregations use a pipeline model, you can chain as many aggregation operations as you'd like, and each subsequent operation can perform better given that the operation before it reduced the number of documents.
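To make that last point concrete, here is a small sketch of an early filtering stage; the `$match` stage and the 2014 filter are illustrative additions of mine, not part of the module above:

```js
// Only 2014 reports flow into $group, so the later stages touch far fewer documents.
crimes.aggregate(
    { $match: { Year: 2014 } },
    { $group: { _id: "$Primary Type", count: { $sum: 1 } } },
    { $sort: { count: -1 } },
    function(err, docs){
        if(err){ return console.error(err); }
        docs.forEach(function(doc){ console.log(doc); });
    }
);
```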
In short, if the aggregation operators suit your needs and you don't need to process your entire database for each aggregation, then Aggregations are the better value. If you routinely process the entire database, or need more complex Map or Reduce logic, Map/Reduce may be the more attractive option.

# Conclusion

Whether you use Map Reduce or Aggregations, you will notice that these tasks take quite some time. Because of this, these operations are not designed for real-time requests, and should really be run in the background of your application. Think hourly, daily, or weekly runs where you aggregate some result and do something interesting with that data.

With Map/Reduce and Aggregations you can find fun and interesting insights without having to write very much code at all.
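One way to act on that advice is to rerun an aggregation on a timer and cache the result for request handlers to read. This is purely a sketch: the daily interval, the cache variable, and the assumption of an already-connected `crimes` collection are mine, not the module's:

```js
// Re-run the crime-by-type aggregation once a day and keep the latest result in memory.
var DAY_MS = 24 * 60 * 60 * 1000;
var latestCrimeStats = [];

function refreshCrimeStats(){
    crimes.aggregate(
        { $group: { _id: "$Primary Type", count: { $sum: 1 } } },
        { $sort: { count: -1 } },
        function(err, docs){
            if(err){ return console.error(err); }
            latestCrimeStats = docs; // request handlers read this cache instead of aggregating
        }
    );
}

refreshCrimeStats();
setInterval(refreshCrimeStats, DAY_MS);
```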
“A good-looking kid — great athlete, great student, great friend,’’ said Mattapan’s Harry Benzan, recalling the Martin Richard he coached on the soccer field for one season. “He was Eddie Haskell meets Larry Bird and Tom Brady — the type of kid who made me feel like anything I said as his coach meant the world to him.’’

They’re racing Monday from Hopkinton to Boston with Martin Richard’s name and number on their back. They call themselves “Team MR8,” 102 in total, and many of them will carry the memories of the young, spirited boy they saw grow up day-to-day in and around Dorchester.

Eight-year-old Martin Richard, ever-smiling, eager to please, and delightfully mischievous at times, was the youngest of the three people killed in the two explosions near the Boston Marathon’s finish line last April.

Members of Team MR8, assembled in recent months by the charitable foundation the Richard family formed in his name, recently crossed the $1 million threshold in donations. The funds will be directed toward education, athletics, and community — and it is the close-knit community of friends around the Richard home in Dorchester that gives Team MR8 much of its identity.

Benzan, 48, will be running Boston for the seventh time (including three bandit appearances), and has privately pledged to knock off 26 pushups (one per mile) when he crosses the finish line. Martin was the kind of kid who appreciated extra effort.

“I can still hear him now,’’ said Benzan. “Whatever I asked of him, he’d volunteer to do more, like, ‘OK, Coach Harry, run up the hill? We’ll do it twice.’ Just that big smile, all the time. ‘Hi, Coach Harry, how you doing? What are we doing today, Coach Harry?’ ’’

When the inevitable pain courses through his legs around Heartbreak Hill, it will be Martin’s voice that will carry Benzan up, over, and on toward Boston.

“I’ll be drinking a lot of water, that’s for sure,’’ said Benzan. “Because I know I’m going to be crying through the whole thing. People are going to be saying, ‘Hey, who’s that crybaby?’ ’’

Team MR8 runner Lisa Jackson, age 47 and Holy Cross Class of 1989, lives less than a mile from the Richard home with her husband and four children. With seven kids perpetually dashing back and forth between the two households, she saw Martin nearly every day, including the weekend leading up to last year’s race.

“Oh my God, a little boy through and through,’’ said Jackson, recalling the Friday last April when Martin and her son Joe knocked on the Jacksons’ back door. “Both of them standing there, their faces totally covered in mud. Martin’s idea, totally. All you could see through the mud was Martin’s eyes lighting up. He was the kid. An angel. They loved him at school. We all loved him.’’

Jackson ran track in high school and college. This will be only her second marathon, hitting the streets again for the first time since running in the race’s 100th anniversary.

“I’ve had two ACL reconstructions, so never in my mind did I think I’d be doing this,’’ said Jackson, emphasizing the strength she has drawn from the Richard family. “It’s important to me that people remember Martin. And it’s something I figured, if I am physically able to do this, then how could I not do it?

“It’s an honor. I’m proud to be wearing his number 8.’’

Martin, whatever his sport, always raised his hand for No. 8.

“I’ll finish, no doubt about it,’’ said Jackson, who completed a 21-mile tuneup March 29 with a bunch of MR8 teammates. “I’ll be counting on his little legs to carry me over Heartbreak Hill.
“I can’t tell you how often I cry, thinking about him when I run. And I think of all the Richards — Denise, Bill, Jane, Henry — I don’t know how they get through the day. But to see them so strong, so inspiring . . . they’re just an unbelievable family.’’

Jose Calderon coached Martin in soccer for two years. He and wife Amy and their three boys live on Melville Avenue, but a mile from the Richards. Their children went to the Neighborhood House Charter School, same as the Richard kids, the same school where Denise Richard is a librarian.

“Martin was a bundle of energy, always smiling, fun to be around,’’ recalled Calderon. “Win, lose, or tie, he always had that big toothy smile. You’ve seen the pictures.’’

Calderon, 42, an aerospace engineer for General Electric, will be running his fourth marathon, his second Boston. When he learned that the Boston Athletic Association offered the Richard family 100 slots for Monday’s race, he was among the first to fill out a Team MR8 application. Less than a third of some 350 applicants made the cut.

“I jumped all over it,’’ said Calderon, who grew up in Puerto Rico and came here nearly a quarter-century ago to attend Boston University. “I’ve known the family a long time, and I saw this as a perfect opportunity to help them keep Martin’s memory alive — and help them in some small way deal with their unimaginable grief.’’

Dorchester neighbor Pat Doherty, 46, ran Boston for the first time in 1993 and was in the race last year, only to see his run cut short near the Mass. Ave. bridge after the bombs exploded on Boylston Street.

“The emotion of it all sort of didn’t hit me until I got home to Dorchester,’’ said Doherty. “I mean, God . . . I don’t know what to say about that day.’’

Doherty will have Martin in mind, and like some other MR8 runners, including Benzan, he’ll be running with some anger.

“Just because of how [the bombers] tore that family apart,’’ said Doherty. “The way I see it, they had every opportunity to succeed in this country, between getting an education and everything else. I’m lost for words to describe what they did.”

The Richards, through their strength and courage and perseverance, said Doherty, should be viewed as heroes. “They make this country great,’’ he said.

Rachel Moo, 38, grew up in Toronto, earned her teaching degree at Syracuse, and was Martin’s second-grade teacher at Neighborhood House. She’ll be running her first marathon.

“You’d better believe I’ll finish,’’ said Moo, who has since left teaching to pursue a master’s degree in sports leadership at Northeastern. “I’ll be thinking about what a privilege it is to run in Martin’s memory — and to be part of taking back the day, the city.’’

It was for a segment on peace studies and conflict resolution Moo designed for her class that Martin wrote what has become his signature poster, “No More Hurting People.’’

“Totally him,’’ she said. “He came up with that on his own — his message to the world.’’

Still resonating with Moo is the time when Martin asked her how old he had to be to run a marathon. They researched it, said Moo, and he said he would be back to her at age 18.

“ ‘OK,’ he said to me,’’ recalled Moo, “ ‘so when I’m old enough, we will run it together.’ That’s what prompted me.’’
To compare the effects of a placebo patch versus an active nicotine patch, each administered for 2 weeks, on sleep and mood in depressed patients who wish to stop smoking.
Graduate Education

The Graduate Education Program leads to the degree of Master of Arts in Education (MA). The general purpose of the University is to provide its students with a liberal education: an education laying emphasis upon the training of the critical faculty, cultivation of a wide breadth of learning, and development of the student's wholeness as an individual possessed of both body and soul.

Villanova University offers three options for individuals interested in earning an MA in Education, including:

- Graduate Certificate in Education: a fifteen-credit program for post-baccalaureate students planning careers in education or related fields (special note: this program does not lead to initial teacher certification).
- A fifteen-credit program for post-baccalaureate and post-master's students who wish to develop their leadership ability for use in the classroom or in roles such as lead teacher, curriculum developer, department chair, new teacher mentor, or special project leader.

Please contact us to make an appointment to visit our department at your convenience. You can also email us at [email protected]

About Villanova

Villanova University was founded in 1842 by the Order of St. Augustine. To this day, Villanova’s Augustinian Catholic intellectual tradition is the cornerstone of an academic community in which students learn to think critically, act compassionately and succeed while serving others. There are more than 10,000 undergraduate, graduate and law students in the University’s six colleges.
ALENA BITTER ORANGE SATIN STAMPED DRESS

A contemporary take on the classic tea dress, our Alena dress features poetic puffed sleeves that dramatically drape at the shoulder and cinch at the wrist, offsetting the flattering bias cut silhouette. This midi length dress in a saturated orange printed devore features a high collar and an open back that ties with a ribbon, and a fluted hem. Pair with colour pop heels for a contrasting look.
Your Source for Las Campanas Real Estate

Las Campanas Homes Real Estate

Our team of experts represents the best and brightest in the industry, and we’re always striving to lead the field in research, innovation, and consumer education. Today’s buyers and sellers need a trusted resource that can guide them through the complex world of real estate. With our extensive knowledge and commitment to providing only the best and most timely information to our clients, we are your go-to source for real estate industry insight and advice.
Santa program provides assistance to seniors during holiday season

November 28, 2011

Senior gift requests are expected to be up again this holiday season amid worries about the threat of declining benefits and the economy.

Be a Santa to a Senior, the popular campaign that has delivered 1.5 million gifts to needy seniors throughout North America during the past seven years, is gearing up again this holiday, according to the Home Instead Senior Care network, the world's largest provider of non-medical, in-home care services for seniors.

The program relaunches at a time when already-nervous seniors faced the threat of Social Security payment delays as part of the debt-ceiling debate earlier this year. These older adults have lost nearly one-third (32 percent) of their buying power since 2000, according to the Annual Survey of Senior Costs from The Senior Citizens League, a senior advocacy organization.

The area office of the Home Instead Senior Care network has partnered with GraceWorks Unlimited and several businesses to provide gifts and companionship to seniors who otherwise might not receive either this holiday season.

In North America, the program has attracted upwards of 65,000 volunteers during the past seven years, distributing gifts to deserving seniors. Since introducing the Be a Santa to a Senior program, the Home Instead Senior Care network has helped provide gifts to more than 750,000 seniors.

"Older adults continue to struggle in a down economy, particularly those who live alone with no family nearby," said Sue Bidwell, owner of the Home Instead Senior Care offices serving Southwest Florida.

Here's how the program works: Before the holiday season, the participating local nonprofit organizations identify needy and isolated seniors in the community and provide those names to the local Home Instead Senior Care office for this community service program. Christmas trees, which will be up in several area businesses from Nov. 1 through Dec. 15 (see complete list with addresses below), will feature ornaments with the first names only of the seniors and their gift requests. Holiday shoppers can pick up an ornament, buy items on the list and return them unwrapped to the store, along with the ornament attached. Home Instead Senior Care then enlists the volunteer help of its staff, senior-care business associates, nonprofit workers and others to collect, wrap and distribute the gifts.

"Be a Santa to a Senior is a way to show our gratitude to those older adults who have contributed so much to our community," Bidwell said. "We hope to reach out to many with this gesture of holiday cheer and goodwill. We know holiday shoppers will open their hearts to those seniors who have given so much to make our community a better place," she added.

If you or someone you know is interested in volunteering to help with the community gift-wrapping event, contact Laura Gillian at 941-505-0450. Businesses are encouraged to contact the local Home Instead Senior Care office about adopting groups of seniors.

For tree locations, or for more information about the program, visit www.beasantatoasenior.com.
Ecological preserves losing biodiversity, study finds

Many of the ecological preserves created to protect sensitive species are losing biodiversity, according to a vast study published this month in the journal Nature, which provided a “health check” of preserves around the world.

Spearheaded by William Laurance, a conservation biologist at James Cook University in Cairns, Australia, the study surveyed field biologists and environmental scientists, including Erin Riley, a professor of anthropology at San Diego State who has researched the macaque monkey on the Indonesian island of Sulawesi.

Laurance and his team conducted 262 interviews with the scientists, and asked them to complete 10-page questionnaires on their findings from 60 protected areas across the world’s major tropical regions of Africa, America and Asia. The questions focused on changes in 31 animal and plant species, including primates, freshwater fish and exotic plants, at 60 preserves in 36 countries.

“The team found that around half of the reserves are experiencing a severe loss of biodiversity,” Nature reported.

Riley’s study of macaques in Lore Lindu National Park found that the animals interacted with farmers outside the boundaries of the preserve, often raiding cacao plantations to gobble the nutritious, creamy liquid in the pods. The intelligent monkeys elude farmers’ efforts to keep them out of the crops, Riley said, but local residents respect the animals, which feature prominently in local folklore. That tolerance allows the monkeys to thrive on the interface between protected and cultivated land. But not all species fare so well in proximity to humans, she said.

Both her own work and the larger study show the importance of considering land uses around preserves, Riley said, and challenge “the idea of protected areas being this saving ark,” isolated from outside influences. It’s important, she said, “to have realistic expectations about what protected areas can and cannot do, and a renewed conversation that while protected areas are crucial, we should be thinking about primate conservation out of protected areas.”
Q: adding keylistener returns null pointer exception

I have been learning to program 2D games in Java for quite a while. In my latest game I tried to create a private class that handles the key events from within the Player class. I did it like this:

```java
package game;

import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;

public class Player {

    final private int MOVEMENTSPEED = 4;
    final private int BOOST = 8;
    final private EventHandler HANDLER = new EventHandler();

    private int x, y;
    private int speedX, speedY;

    public Player(int x, int y){
        this.x = x;
        this.y = y;
    }

    public void update(){
        x += speedX;
        y += speedY;
    }

    public int getSpeedX() {
        return speedX;
    }

    public int getSpeedY() {
        return speedY;
    }

    public int getX() {
        return x;
    }

    public int getY() {
        return y;
    }

    public EventHandler getHandler(){
        return HANDLER;
    }

    private class EventHandler implements KeyListener {

        @Override
        public void keyPressed(KeyEvent e) {
            switch(e.getKeyCode()){
            case KeyEvent.VK_W:
                speedY = -MOVEMENTSPEED;
                break;
            case KeyEvent.VK_S:
                speedY = MOVEMENTSPEED;
                break;
            case KeyEvent.VK_A:
                speedX = -MOVEMENTSPEED;
                break;
            case KeyEvent.VK_D:
                speedX = MOVEMENTSPEED;
                break;
            }
        }

        @Override
        public void keyReleased(KeyEvent e) {
            switch(e.getKeyCode()){
            case KeyEvent.VK_W:
            case KeyEvent.VK_S:
                speedY = 0;
                break;
            case KeyEvent.VK_A:
            case KeyEvent.VK_D:
                speedX = 0;
                break;
            }
        }

        @Override
        public void keyTyped(KeyEvent e) {
            // TODO Auto-generated method stub
        }
    }
}
```

and here is where I try to add the EventHandler to the class that deals with the game loop, painting and such:

```java
public class FrameWork extends Applet implements Runnable {

    private URL base;
    private Graphics second;
    private Image image;
    public static Player p;

    @Override
    public void start() {
        p = new Player(400, 400);
        Thread thread = new Thread(this);
        thread.start();
    }

    @Override
    public void init() {
        setSize(1000, 600);
        setFocusable(true);
        Frame frame = (Frame) this.getParent().getParent();
        frame.setTitle("Assassin");
        frame.setResizable(false);
        this.addKeyListener(p.getHandler()); // <-- the line that throws
    }
```

I have already worked with KeyListener, but did it either without an extra class (implementing the methods within the same class as the game loop) or by creating a completely different class and using its instance. Anyway, I tried to do it differently because it seemed more comfortable, but it always throws a java.lang.NullPointerException and I don't understand why. Thanks for help.

A: The lifecycle of an Applet starts with init, and start is called later. You initialize the player in start, which runs after init; therefore p is still null inside init. Create the player in the init method. See Applet lifecycle: what's the practical difference between init() & start(), and destroy() & stop()?
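A minimal sketch of the fix the answer describes, moving the player's construction into init so the listener registration sees a non-null reference (the surrounding fields are assumed from the question's code):

```java
@Override
public void init() {
    setSize(1000, 600);
    setFocusable(true);

    // Construct the player before anything dereferences it.
    p = new Player(400, 400);
    this.addKeyListener(p.getHandler()); // p is guaranteed non-null here

    Frame frame = (Frame) this.getParent().getParent();
    frame.setTitle("Assassin");
    frame.setResizable(false);
}

@Override
public void start() {
    // start() now only launches the game loop thread.
    Thread thread = new Thread(this);
    thread.start();
}
```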
Just because the WGLNA Gold League kicked off last week doesn't mean that they're the only game in town. In fact, the WGLNA Silver League is full of fresh, up-and-coming WoT teams, and the tier X action will be streamed live by community casters Inchon and Eural. This week's battles begin Monday night (December 7) at 18:30 PT, as Ze Foxey Baddies take on the Spaghetti Slappers, with another battle (match-up TBD) at the same time on Wednesday, December 9!

While the Silver League's rules mirror the Gold League's (7/68 Attack/Defense), Silver League players can win Gold and tanks, including 60,000 Gold and a Type 59 for first place; 45,000 Gold and a KV-5 for second; or 30,000 Gold and a Löwe for third!

The top teams include HU3 and the aforementioned Spaghetti Slappers, who finished fourth and fifth last season, and both teams played particularly well in the Gold League qualifiers against some stiff competition. Another team to watch is Danger Close -- these upstarts are looking to contend after adding SpANiSH_ and hol598 from Rival Gaming and GeeForcer from eLevate, three top Gold League players who should add some serious firepower to an already great team.

Get ready for the action; we'll see you there!
Activists demand settlers leave Palestinian home in Hebron following court order

Oct. 22, 2017 6:30 P.M. (Updated: Oct. 23, 2017 5:10 P.M.)

HEBRON (Ma'an) -- A Palestinian committee in the city of Hebron in the southern occupied West Bank has reportedly received an order by the Israeli Supreme Court to evacuate a group of Israeli settlers illegally occupying a Palestinian home in the area.

The home, which belongs to the Abu Rajab family, has been embroiled in a legal battle with Israeli settlers who claim that they purchased the rights to the home, though the Palestinian homeowners and the Israeli state have maintained that the settlers forged the documents.

The state of Israel ordered the 15 settler families living in the Abu Rajab house to evacuate last month; however, the Israeli Supreme Court ordered in September that the evacuation be delayed, based on an appeal submitted by the settlers.

According to Palestinian activists, who held a sit-in protest in front of the home on Friday, the Hebron Reconstruction Committee “was able to issue an order by the Israeli Supreme Court to evacuate Israeli settlers of the Abu Rajab building as it was proven that their entrance to the building was not legal.”

During the sit-in, Israeli forces detained the coordinator of the Youth against Settlement group, Issa Amro. An Israeli army spokesperson was not immediately available for comment.

Located in the center of Hebron -- one of the largest cities in the occupied West Bank -- the Old City was divided into Palestinian and Israeli-controlled areas, H1 and H2, following the Ibrahimi Mosque massacre. The Abu Rajab home is located near this mosque.

Some 800 notoriously aggressive Israeli settlers now live under the protection of the Israeli military in the Old City, surrounded by more than 30,000 Palestinians. Palestinian residents of the Old City face a large Israeli military presence on a daily basis, with at least 20 checkpoints set up at the entrances of many streets, as well as the entrance of the Ibrahimi Mosque itself.
1. Field of the Invention

The invention relates in general to an apparatus for converting a digital signal to a corresponding analog signal and a method thereof, and more particularly to an apparatus for converting a digital pixel signal to a corresponding analog voltage signal for a liquid crystal display and a method thereof.

2. Description of the Related Art

Featuring the favorable advantages of thinness, lightness, and low radiation, liquid crystal displays (LCDs) have been widely used. The LCD panel includes a number of pixels, and the light transmittance of each pixel is determined by the voltage difference between the upper plate voltage and the lower plate voltage. The light transmittance of every pixel is typically non-linear with respect to the voltage applied across the pixel. Thus, gamma correction is performed to reduce color distortion by adjusting the lightness or darkness of pixels of the LCD panel.

FIG. 1 shows the gamma relation between the gamma voltage applied to a pixel and the luminance of the pixel. The X-axis represents the gamma voltage applied to the pixel, that is, the voltage difference between the upper plate and the lower plate voltages, and the Y-axis represents the light transmittance of the corresponding pixel (T). When the magnitude of the upper plate voltage is fixed at a value, for example, Vcom, the voltage difference between the upper plate voltage and lower plate voltage is determined by the magnitude of the lower plate voltage. The corresponding relation between the lower plate voltage and the light transmittance of the pixel is nonlinear, as shown by the gamma curve in FIG. 1. In addition, the gamma curve is symmetric with respect to the voltage Vcom, because the light transmittance of the pixel relates to the voltage across the pixel and is independent of the polarities of the voltages applied to the pixel. If two gamma voltages with the same magnitude but opposite polarities, for example, a positive gamma voltage Va and a negative gamma voltage Vb, are individually applied to the pixel, the light transmittance of the pixel is identical (T0). In other words, if the upper plates of two pixels are supplied with the voltage Vcom, and the lower plate of one pixel is supplied with the voltage Va while the lower plate of the other pixel is supplied with the voltage Vb, the luminance of the two pixels will be identical.

The liquid crystal molecules may deteriorate if a pixel of the LCD panel is continually supplied with voltages of the same polarity. Hence, the liquid crystal molecules can be protected by alternately applying voltages of opposite polarity across the upper and lower plates of each pixel. In other words, when a pixel has to emit at a constant luminance, voltages of opposite polarities can be applied across the upper and lower plates alternately, by switching between two different voltages across the plates of the pixel. In this way, deterioration of the pixel can be avoided.

FIG. 2 shows a block diagram of a nonlinear digital-to-analog converter (D/A converter) 202. The driving circuit of the liquid crystal display includes a nonlinear digital-to-analog converter 202 for converting the digital pixel signal (DATA) to the corresponding analog gamma voltage signal (OUT). Since the relation between the luminance of the pixel and the gamma voltage is not linear, the corresponding relation between the digital pixel signal (DATA) and the analog gamma voltage signal (OUT) is determined according to the gamma curve.
This process is called gamma correction. The corresponding relation between the digital pixel signal (DATA) and the luminance of the pixel is then approximated as linear by executing the gamma correction using the nonlinear digital-to-analog converter 202.

FIG. 3 shows a gamma curve, which is for use in the nonlinear digital-to-analog converter to perform gamma correction. The X-axis represents the data value of the digital pixel signal and the Y-axis represents the gamma voltage signal. The gamma curve shown in FIG. 3 includes a positive polarity gamma curve 404 and a negative polarity gamma curve 402. Each digital pixel signal corresponds to a positive polarity gamma voltage signal on the positive polarity gamma curve 404 or a negative polarity gamma voltage signal on the negative polarity gamma curve 402. The points A, B, C, D and E chosen from the positive polarity gamma curve 404 and the points A′, B′, C′, D′ and E′ chosen from the negative polarity gamma curve 402 are specific reference points. According to the gamma curve shown in FIG. 3, each reference point corresponds to a reference gamma voltage signal (GMV) and a reference digital pixel signal. When performing the gamma correction, the nonlinear digital-to-analog converter 202 converts each digital pixel signal to the corresponding gamma voltage signal by interpolation, according to the relationship between the reference gamma voltage signals (GMV) and the corresponding reference digital pixel signals.

FIG. 4 shows a conventional apparatus for outputting the gamma voltage signals according to the reference gamma voltage signals, wherein the conventional apparatus includes two strings of resistors. Each resistor string includes 255 resistors (R0~R254), five input nodes (V0~V4 and V5~V9, respectively) for receiving the reference gamma voltage signals, and 256 output nodes for outputting the gray level voltage signals. When the gamma correction is executed, the gamma output voltage signal corresponding to the digital pixel signal can be determined according to the gray level voltage signals.

FIG. 5 shows the diagram of the pixel P(N,M). The driving circuit of the pixel P(N,M) includes a thin film transistor T(N,M) and a pixel capacitor C(N,M). The gate electrode of the transistor T(N,M) is coupled to the scan line SN; the source electrode of the transistor T(N,M) is coupled to the data line DM; and the drain electrode of the transistor T(N,M) is coupled to the pixel capacitor C(N,M). When the transistor T(N,M) is turned ON through enabling the scan line SN, the gamma voltage output signal is delivered to the pixel capacitor C(N,M) through the data line DM and the transistor T(N,M). The luminance of the pixel P(N,M) is determined by the value of the gamma voltage output signal.

In a color LCD, a picture frame is displayed based on a pixel element, called a color pixel or simply a pixel, including three sub-pixels for displaying the primary colors red, green, and blue. The three sub-pixels of a color pixel are supplied with separate gamma voltage signals outputted by the driving circuit of the color LCD after gamma correction. The pixel can thus display different colors by changing the brightness of the three sub-pixels individually. FIG. 6 shows three different gamma curves, marked “R”, “G”, and “B”, for the primary colors red, green, and blue, respectively.
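The interpolation step can be summarized with a short formula (the notation here is illustrative, not taken from the patent): if a digital pixel value $D$ falls between two reference digital values $D_k$ and $D_{k+1}$, whose reference gamma voltages are $V_k$ and $V_{k+1}$, a piecewise-linear converter outputs approximately

$$V_{OUT} = V_k + \frac{D - D_k}{D_{k+1} - D_k}\,\left(V_{k+1} - V_k\right),$$

which the resistor string of FIG. 4 realizes in hardware by tapping equally spaced voltages between adjacent reference input nodes.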
According to the “R”, “G”, and “B” gamma curves, the gamma voltages corresponding to the maximum luminance of the sub-pixels are VRM, VBM, and VGM for red, blue, and green, respectively. The magnitude of VBM is smaller than that of VGM, and VGM is smaller than VRM (VBM<VGM<VRM). The nonlinear digital-to-analog converter conventionally predetermines the maximum magnitude of the gamma voltage signal to be VBM for gamma correction. Based on this magnitude of VBM, all other gamma voltage signals corresponding to the digital pixel signals are determined. Therefore, the relation between digital pixel signals and the corresponding gamma voltage signals is fixed and independent of the display color of the pixel corresponding to the digital pixel signal.

Unfortunately, this conventional gamma correction method disadvantageously prevents the luminance of a pixel from reaching its maximum value when the display color of the pixel is red or green, because the maximum magnitude of the gamma voltage signal is set to VBM, while VBM is smaller than VGM and VGM is smaller than VRM (VBM<VGM<VRM). In this way, optimum display quality of the LCD panel becomes unachievable and the display performance is degraded.
The influence of UGT2B7 genotype on valproic acid pharmacokinetics in Chinese epilepsy patients.

The aim of this study was to investigate the distribution and frequency of genetic polymorphisms in uridine diphosphate glucuronosyltransferase-2B7 (UGT2B7) in epilepsy patients and to evaluate the effect of these on the metabolism of valproic acid (VPA). Single nucleotide polymorphisms in UGT2B7 were investigated in 102 epilepsy patients using DNA sequencing and polymerase chain reaction-restriction fragment length polymorphism analysis. The steady-state plasma concentrations of VPA were determined in these patients, who had received VPA (approx. 500-1000 mg/day) for at least 2 weeks. Fourteen patients had the CC genotype at UGT2B7 C802T, 46 carried CT, and 42 carried the TT genotype. At UGT2B7 G211T, 78 patients had the GG genotype, 23 carried GT, and one individual had the TT genotype. The standardized trough plasma concentration of VPA was much lower in those patients with a T allele at UGT2B7 C802T than in those with the CC genotype (TT, 2.11 ± 1.26; CT, 2.31 ± 1.25; CC, 3.02 ± 1.32 μg kg mL⁻¹ mg⁻¹, p < 0.01). However, UGT2B7 G211T polymorphisms had no influence on the plasma concentration of VPA (GG, 2.28 ± 1.32; GT, 2.303 ± 1.38 μg kg mL⁻¹ mg⁻¹). These results suggested that UGT2B7 C802T may be an important determinant of individual variability in the pharmacokinetics of VPA and that it may be necessary to increase the VPA dose for individuals with a T allele in order to achieve the therapeutic range of 50-100 μg/mL.
YouTube Marketing Tips You Must Learn!

Youtube.com is a great place to drive free traffic to your websites. If you are new online, you must learn the Youtube marketing tips I'm going to share with you today!

I don't know how new you are to the internet marketing world, but just to make sure I've got you covered, I'm going to answer common questions most newbies have about Youtube marketing real quick. If these questions are too basic for you, just skip them! lol .. I'm just looking out for the brand new newbies out there! 😀

– How can someone make money on Youtube?

The answer is simple. You upload valuable videos, implement a little bit of search engine optimization, promote the heck out of them, start ranking to get free organic traffic, and finally monetize that traffic. You make money from the revenue you generate with your own videos on Youtube. You can promote your own products and/or services, affiliate products, a network marketing opportunity, or simply earn from the Google Adsense program.

– Can you make money on Youtube by posting videos very often?

No, you can't make money by just publishing videos on Youtube.com often. There is a process to make it a profitable strategy. If you just upload videos on Youtube and expect them to get traffic and make you money on their own, you are going to wait forever. Learn the right process to make money from your own Youtube videos, and take consistent action with it.

– Do you get paid for uploading videos on Youtube?

No, you do not get paid for just uploading videos on Youtube.com. You have to do a lot more than just upload videos. Don't worry, you are going to learn the entire process today. I know this is a simple question, and probably a "pointless" question to ask, but hey … people actually ask this question a lot! .. trust me …

– Does Youtube pay you after a certain amount of views?

No. You do not get paid by Youtube.com. You get paid from third-party programs and companies, or by selling your own products and services. A popular way to make money from your Youtube videos' revenue is with Google Adsense. Google Adsense is an advertising program that pays you for the revenue you generate for them. I guess we could say you get paid by Youtube, in a way, since Youtube is now part of Google.com. LOL

The more views you get on your videos, the better. You would need thousands of views to make some decent money from Google Adsense, though.

– Can I make money on Youtube without Adsense?

Yes you can. You can make a lot more money by promoting your own stuff than with Google Adsense. As I mentioned above, you would need thousands of views to start making some decent money from Adsense. You can promote affiliate products, network marketing opportunities, or your own products and services, to profit a lot more from your videos' revenue.

Alright, these questions and answers should help a newbie online understand very well how to make money from Youtube.com. 🙂

Let's get down to these Youtube Marketing Tips now!

– Optimize your video for the Search Engines

First, you need to do a little bit of keyword research. Personally, I like to keep the keyword research simple, so I can get moving fast. This is what I do to find long tail keywords to rank for on Youtube: I take a look at what people are actually typing and looking for on Youtube. Just start typing keywords relevant to your niche and target market, and Youtube will start giving you suggestions. Use the suggested keywords!
You can do the same thing with the Google Search Engine. These suggestions are what people are looking for in the search engines. There is a reason why Youtube.com and Google.com are suggesting these keywords to you. Capitalize on these long tail keywords, and start using them on your videos!

When it comes to choosing the right TAGS for your videos, one thing you can do is check which TAGS the videos ranking high on Youtube are using, and use them on your own videos. There is one cool tool that helps you learn more about a Youtube video: a Google Chrome browser extension called VideoIQ. This little tool gives you a lot of data about a Youtube video. Use this data to your advantage!!

Let me give you one important SEO tip for your Youtube videos. If you are going to record yourself to make a video, you should say your keyword very clearly within the first 30 seconds, and throughout the video. The Youtube search engine is only getting smarter. When it looks for factors to rank your video, it will also try to read the transcript of your video. This is a valuable SEO tip for Youtube videos I have learned from one of the best Youtube marketers around, Jon Penberthy.

Once you have published a search engine optimized video on Youtube, the next step is to drive as much traffic as possible to the video. Having a lot of video views will help the video get better rankings in the Youtube search engine.

If you want better rankings in the Youtube search engine, you need to learn what Youtube looks for in a video to rank it higher. Here are some of the things Youtube looks for when ranking a video:

- Youtube Channel Subscribers
- Video Views
- Video Comments
- Video Likes (thumbs up)
- Backlinks
- Social Shares

The higher the numbers here, the better rankings you can get for the videos you upload. 🙂

Now, let me share with you more information on editing your video the best way possible, to get more out of it. I have learned a lot about editing videos from a successful blogger named Ileane Smith. Her blog is full of valuable blogging tips. You should check out her blog: BasicBlogTips.com

I looked around her blog and found many valuable blog posts on Youtube Marketing. Let me share with you a few valuable blog posts on Youtube Marketing:

Learn these internet marketing strategies, and take massive action with them!!

Let me give you a few tips to help you record good videos for Youtube.

– Imagine You Are Talking to a Friend

When you are recording your video, imagine you are talking to a friend, explaining the information you want to share. You want to make the viewer feel as if you were talking specifically to them. Instead of saying "hey guys!", say "hey you!". Talk in the video as if you were talking to one person. This will help the viewer feel more comfortable watching you, and feel a lot more connected to you. One great tip I have learned from successful internet marketers online is that if you try to talk to everyone, you will talk to no one. You need to talk to a specific person: the person you want watching your videos.

– Lead with Value

You need to give people what they are looking for. I'm sure you know the simple law of supply and demand. If you want people sharing your video, and taking action with your video, you need to give them what they need. Find a demand in your niche, and fulfill it with your videos!
Record videos solving a problem in your niche. Make them interesting, and try your best to capture your viewers' attention as soon as the video starts. Keep the viewers interested, and show them that you have the information they are looking for online.

– Encourage Engagement with Your Viewers

One thing I didn't do when I first started recording videos for Youtube is tell the viewer to leave a comment and give me a thumbs up. You want to ask your viewers questions, and ask for opinions. You need to interact with the viewer as much as possible. This will help your video get more comments, and that can also help the video get more exposure online.

– Never Forget Calls to Action

Always tell people what you want them to do after watching your video. Most people like to be told what to do. It is the truth. And if you don't tell them what you want them to do, then they will not take any action after watching your video. You need to have a call to action at the end of your videos. Tell the viewer to go to a website, leave you a comment, and give you a thumbs up. Tell the viewer anything you want them to do. This is very important, and I see a lot of people not having a call to action in their videos. This is why I'm talking about it here. You must have a call to action in your videos!!

– Leave Your Links in the Description

This is an obvious tip: you should always have the links you want people to click on in the description of the video on Youtube. I'm mentioning this obvious tip because I still see many videos from newbies on Youtube without any links in the description. You have to make sure you take good care of your Youtube video description!

Please help spread the love by sharing this blog post. Thank you so much!

I used to be a miserable construction worker. A high school dropout who felt like a failure in life. When I learned anybody could make money blogging online, I started to dream again. My blogging journey began in 2011. After years of trial and error, I've been able to get the results I was looking for. I have a passion for helping others. Always did. And now I have the expertise and knowledge to help you make money blogging. If I can do it, you can do it too, and better. Thank you for coming to my blog. Don't forget to subscribe for blog updates. I really appreciate your visit and your support.

11 Comments

Great value blog post. This is definitely great for internet marketing newbies. YouTube is one of the largest and definitely most engaging platforms that people should take advantage of. Thanks for sharing such great tips. How do you feel about transcribing your video and uploading a transcript file to YouTube? Do you feel that it will help with search engine ranking, etc.? Thanks again.

Hi there, thank you for an excellent post. But what do you honestly think about the statement that end customers are not using Youtube for purchases? Would love to implement Youtube into my web approach, but not sure if I will again lose a lot of time – like with FB. Regards, Matija, Slovenia

There are many people making sales from Youtube videos! … you just need a lot of targeted traffic and a good capture page. Most of the time you are not going to make the sale right away – this is why you need to CAPTURE the potential customer's email. Then, follow up with them and make the sale that way. Youtube traffic is great for building an email list. Most of the sales will come from the follow up you do with these people.
But you can still make sales directly from Youtube traffic. You just have to learn about targeting your offers.

It is the same thing with Facebook. I have made good money with just Facebook marketing alone, and by implementing free strategies. It is never a "waste of time" if you take the time to learn how everything works, and needs to work, for you to earn the money! 😉

Well! I am already developing my videos on digital marketing, and I am sure that this will help me. I have also found that there is a relation between SEO and SSL certificates: installing an SSL certificate can improve your search engine ranking.
Q: Why there is no neurovascular drug-eluting stent?

I'm researching neurovascular stents and I'm wondering why there is not much in the literature about drug-eluting neurovascular flow diverters. I read in an article that it's because of the complex shape of cerebral arteries in comparison to the cardiovascular system, but that doesn't look satisfactory to me. Any suggestion or idea is appreciated.

A: Despite some superficial similarities, the designs of coronary stents and neurovascular stents are quite different.

Coronary stents are hard metal, made out of stainless steel. They are quite solid, and if you crush one, it will remain crushed. They are deployed by expanding a balloon at potentially very high pressures (10-20 atm). Coronary stents are relatively short, and are deployed in vessels that are relatively straight given the length of the stent. Drug-eluting variants of coronary stents are additionally coated in a polymer (i.e., plastic) matrix that contains the active ingredient and releases ('elutes') it over time. The common commercially available drug-eluting stents release anti-proliferative agents to prevent neointimal growth.

The purpose is also somewhat different: you mention flow diverters, so you are likely thinking about the context of aneurysms, where they may be used alongside coils to seal off the aneurysm from the normal flow. They are not providing structural scaffolding to prevent vessel occlusion from atherosclerosis.

Neurovascular stents, including flow diverters, are often made of self-expanding metals like nitinol. They are more flexible, but try to keep their nominal shape. These stents are deployed by retracting a sheath, after which they pop out to their nominal diameter. Because of this different deployment strategy, they can also sometimes be retracted and repositioned. The way these stents deploy is a lot more gentle, and they are also typically longer than the coronary varieties. They also may be placed in vessels that can be accessed superficially, such as in the neck. A stainless steel stent in the neck would be a disaster: if you put pressure on that part of the neck, you could permanently collapse the stent. A self-expanding stent will bounce right back open.

Back to your actual question... The actual purposes of coronary vs flow-diverting stents are different; the purpose of drug-eluting coronary stents, preventing neointimal hyperplasia, is not as relevant in the neurovascular context. Additionally, there are lots of good reasons to use self-expanding metals in the neurovasculature because of tortuous anatomy and superficial vessels in the case of the carotids. The hard plastics that are coated onto steel stents can't easily be placed onto self-expanding stents, so other techniques would need to be developed. The article you read talking about the complex shape of cerebral arteries is in large part correct.
66,079,151
Phenolic and carotenoid profiles of papaya fruit (Carica papaya L.) and their contents under low temperature storage. Tropical fruits are rich in phenolic and carotenoid compounds, whose levels are associated with cultivar and with pre- and postharvest handling factors. The aim of this work was to identify the major phenolics and carotenoids in 'Maradol' papaya fruit and to investigate their response to storage temperature. Ferulic acid, caffeic acid and rutin were identified in 'Maradol' papaya fruit exocarp as the most abundant phenolic compounds, and lycopene, β-cryptoxanthin and β-carotene were identified in mesocarp as the major carotenoids. Contents of ferulic acid (1.33-1.62 g kg(-1) dry weight), caffeic acid (0.46-0.68 g kg(-1) dw) and rutin (0.10-0.16 g kg(-1) dw) were found in papaya fruit, and these tended to decrease during ripening at 25 °C. Lycopene (0.0015 to 0.012 g kg(-1) fresh weight) and β-cryptoxanthin (0.0031 to 0.0080 g kg(-1) fw) were found in fruits stored at 25 °C, and these tended to increase during ripening. No significant differences in β-carotene or rutin contents were observed in relation to storage temperature. Phenolics and carotenoids of 'Maradol' papaya were influenced by postharvest storage temperature, with the exception of β-carotene and rutin. Ripe papaya stored at 25 °C had more carotenoids than papaya stored at 1 °C. Low (chilling) temperature (1 °C) negatively affected the content of major carotenoids, except β-carotene, but preserved or increased ferulic and caffeic acid levels, as compared to the high (safe) temperature (25 °C).
66,079,201
Molecular cloning and functional characterization of zebrafish ATM. Ataxia-telangiectasia mutated (ATM) is the gene product mutated in ataxia-telangiectasia (A-T), an autosomal recessive disorder with symptoms including neurodegeneration, cancer predisposition and premature aging. ATM is thought to play a pivotal role in signal transduction in response to genotoxic DNA damage. To study the physiological and developmental functions of ATM using the zebrafish model system, we cloned the cDNA of the zebrafish homolog of human ATM (hATM), zebrafish ATM (zATM), analyzed the expression pattern of zATM during early development, and further developed a system to study loss of zATM function in zebrafish embryos. Employing information available from the zebrafish genomic database, we utilized a PCR-based approach to isolate zATM cDNA clones. Sequence analysis of zATM showed a high level of homology in the functional domains of hATM. The putative FAT, phosphoinositide 3-kinase-like, and FATC domains of zATM, which regulate ATM kinase activity and functions, were the most highly conserved regions, exhibiting 64-94% amino acid identity to the corresponding domains in hATM, while exhibiting approximately 50% amino acid identity outside these domains. The zATM gene is expected to consist of 62 coding exons, and we have identified at least 55 exons encompassing more than 100 kb of nucleotide sequence, which encodes about 9 kb of cDNA. By in situ hybridization, zATM mRNA was detected ubiquitously, with a dramatic increase at the 18-somite stage, and then more specifically in the eye, brain, trunk, and tail at later stages. To inhibit zATM expression and function, we designed and synthesized splice-blocking antisense morpholino oligonucleotides targeting the phosphoinositide 3-kinase-like domain. We demonstrated that this knockdown of zATM caused abnormal development upon ionizing radiation-induced DNA damage. Our data suggest that the ATM gene is structurally and functionally conserved in vertebrates from zebrafish to human.
66,079,249
Q: How to select a specific value and join it to the first result set

I have a select statement that returns a few columns from a table. I have a "daySince" column (the 3rd), which is actually a date diff. I need to select one more column (as numberOfRecord) which would represent the number of rows having the same "daySince" value. I just appended the number 7 here to show how the structure would look, even though the correct values for that column would be:

1 for rows having daySince val between 10 and 14
8 for rows having daySince val 15
2 for rows having daySince val 16

I hope what I'm asking makes sense. I tried to run some random left, right, and full outer joins with awful results. Can anyone point me the right way? Here's the example query

SELECT username, id, DATEDIFF( creation, '2018/02/28') as daySince, 7 as numberOfRecord
FROM MyTable
ORDER BY daySince ASC

username  id      daySince  numberOfRecord
rob       2D8836  11        7
rob       2D8836  12        7
rob       2D8836  13        7
rob       2D8836  14        7
rob       2D8836  15        7
rob       2D8836  15        7
rob       2D8836  15        7
rob       2D8836  15        7
rob       2D8836  15        7
rob       2D8836  15        7
rob       2D8836  15        7
rob       2D8836  15        7
rob       2D8836  16        7
rob       2D8836  16        7

A: Check this. Note that the non-aggregated columns need to appear in the GROUP BY for the query to be valid and deterministic:

SELECT username, id, DATEDIFF( creation, '2018/02/28') as daySince,
       Count(DATEDIFF( creation, '2018/02/28')) as numberOfRecord
FROM MyTable
Group By username, id, DATEDIFF( creation, '2018/02/28')
ORDER BY daySince ASC
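If every underlying row should be kept, as in the sample output in the question, one option is to join the grouped counts back onto the table. This is a sketch against the same hypothetical MyTable/creation columns, assuming MySQL:

SELECT t.username,
       t.id,
       DATEDIFF(t.creation, '2018/02/28') AS daySince,
       c.numberOfRecord
FROM MyTable t
JOIN (
      SELECT DATEDIFF(creation, '2018/02/28') AS daySince,
             COUNT(*) AS numberOfRecord
      FROM MyTable
      GROUP BY DATEDIFF(creation, '2018/02/28')
     ) c ON c.daySince = DATEDIFF(t.creation, '2018/02/28')
ORDER BY daySince ASC;

On MySQL 8.0 or later, a window function gives the same per-row count without the derived table: COUNT(*) OVER (PARTITION BY DATEDIFF(creation, '2018/02/28')).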
66,079,438
Background {#Sec1}
==========

Radiation induced lung injury {#Sec2}
-----------------------------

Lung cancer is the most common cause of cancer death in men and women worldwide \[[@CR1]\]. The large majority of lung cancer patients present with non-small cell lung cancer (NSCLC), and of these, approximately 30% present with locally advanced (stage III) disease. The current standard of care for locally advanced unresectable NSCLC is concurrent chemoradiotherapy (CRT) with curative intent \[[@CR2], [@CR3]\]. Survival improvements of concurrent CRT over sequential CRT have been well-defined after multiple randomized trials, with concurrent CRT conferring a 10% overall survival benefit at two years \[[@CR4], [@CR5]\]; however, such treatment is associated with an increased risk of radiation-induced lung injury (RILI), including radiation pneumonitis (RP). Clinically symptomatic RP occurs in 30-40% of patients after concurrent CRT and can have a major impact on quality of life, sometimes resulting in oxygen dependence; in severe cases it is fatal \[[@CR6], [@CR7]\].

Several factors are currently used to attempt to predict RP and to mitigate risk. Most of these predictive factors are metrics of the radiation dose delivered to normal lung, such as the volume of lung receiving ≥20 Gy of radiation, the mean lung dose and the dose per fraction of radiation. For example, a recent meta-analysis found that the volume of lung receiving at least 20 Gy (V20) is the best individual predictor of RP risk; a V20 \> 40% is associated with a 35% risk of symptomatic RP, and a \>3% risk of fatal RP \[[@CR7]\], supporting several previous single-institution studies \[[@CR8]\] and a systematic review \[[@CR6]\].

The risk of RP limits the radiotherapy dose that can be safely delivered. Although numerous modelling studies have indicated that higher doses of radiotherapy should be associated with improved oncologic outcomes, randomized data have shown that dose escalation leads to excess lung toxicity. The recent landmark RTOG 0617 randomized trial compared standard vs. high dose radiotherapy (60 Gy vs. 74 Gy), with concurrent chemotherapy, for locally advanced NSCLC. Overall survival at 18 months was 66.9% in the 60-Gy arm and 53.9% in the 74-Gy arm (p \< 0.001), indicating inferior survival with dose escalation \[[@CR9]\]. Toxicity outcomes from RTOG 0617, as scored by the health-care providers, did not initially appear to explain the inferior survival in the high-dose arm. Although there were more deaths due to radiation pneumonitis in the high-dose arm (5% vs. 1%), this did not meet statistical significance and only accounted for a small proportion of the overall survival difference between the two arms. However, *patient-reported outcomes* indicated a different toxicity profile; respiratory toxicity was common and was often not detected by the health-care providers. In the high-dose arm, 49% of patients exhibited a clinically meaningful decline in pulmonary quality of life (QOL) at 3 months, compared to 31% of patients in the low-dose arm (p = 0.024). Pulmonary QOL was also an important survival metric overall: baseline QOL predicted overall survival (OS) in multivariable analysis, more so than stage, performance status and other conventional prognostic factors \[[@CR9]\].

In summary, for patients treated with standard concurrent CRT for locally advanced lung cancer, RP is a major source of morbidity, impairs quality of life, and can result in treatment-related death.
RP also limits the dose of radiotherapy that can be safely delivered, and currently precludes radiotherapy dose escalation. RP is not well ascertained by healthcare providers; in contrast, patient-reported QOL outcomes appear to be a powerful tool to capture pulmonary toxicity outcomes \[[@CR9]\]. Clearly, better methods are needed to reduce pulmonary toxicity for patients undergoing concurrent CRT for lung cancer.

Functional lung avoidance {#Sec3}
-------------------------

At present, radiation treatment planning for advanced lung cancer is based upon minimizing radiation dose to the total lung, regardless of the degree of function at any particular point within that lung. This approach does not account for the fact that lung tissue can be heterogeneous, especially in smokers, whose lungs are frequently characterized by large regions of unventilated parenchyma such as bullae. Ideally, radiotherapy treatment planning should be able to exploit these regional differences in lung function by minimising dose to the more highly functional lung while favouring radiation deposition in areas of less highly-functioning or non-functioning lung.

Over the last decade, functional measurements and maps obtained from thoracic imaging have been evaluated for use in lung cancer radiation therapy planning with single photon emission computed tomography (SPECT) \[[@CR10], [@CR11]\], high resolution four-dimensional x-ray computed tomography (4DCT) \[[@CR12], [@CR13]\], and hyperpolarized noble gas magnetic resonance imaging (MRI) \[[@CR14], [@CR15]\]. All of these techniques potentially facilitate the delineation of regional pulmonary function for lung cancer radiation treatment planning, resulting in reduced radiation dose to well-functioning lung without dose decreases to the treatment target volume \[[@CR11], [@CR15], [@CR16]\]. However, it is not clear which of these is optimal, as each has its own merits and drawbacks. For example, one of the most widely studied techniques is SPECT. Although the incorporation of SPECT for lung cancer radiation therapy planning has been promising, there are some inherent limitations that may preclude its routine clinical use, mainly related to image artefacts stemming from radiolabelled tracers depositing in the major airways \[[@CR17]\]; these artefacts require significant post-processing to remove and sometimes distort the underlying ventilation signal.

Hyperpolarized noble gas MRI {#Sec4}
----------------------------

Hyperpolarized ^3^He MRI provides an alternative to ventilation SPECT \[[@CR15], [@CR18]\]. ^3^He MRI provides relatively high spatial and temporal resolution of respiratory function, can be used safely in a wide variety of respiratory patients and does not use ionizing radiation \[[@CR19]\]. Although ^3^He MRI has several inherent advantages, it will not likely achieve widespread clinical use due to cost and a limited global supply of ^3^He gas for research purposes. Several alternative imaging techniques appear promising and are expected to be available for widespread clinical use in the future, including ^129^Xe MRI, which is currently less well-developed than ^3^He MRI \[[@CR20]\], ^1^H Fourier decomposition methods \[[@CR21]\], and 4DCT-based ventilation mapping \[[@CR22]\]. If the benefits of functional lung avoidance can be demonstrated now using ^3^He MRI, then other, more easily accessible ventilation imaging modalities (e.g.
4DCT and ^129^Xe MRI) may allow for more widespread implementation of functional lung avoidance radiotherapy in future.

Methods/design {#Sec5}
==============

Objectives {#Sec6}
----------

### General objective {#Sec7}

To determine if functional lung avoidance based on ^3^He MRI improves quality of life outcomes for patients with NSCLC undergoing concurrent CRT.

### Primary endpoint {#Sec8}

- Pulmonary QOL 3-months post-treatment
  ○ Measured using the Functional Assessment of Cancer Therapy---Lung Cancer Subscale (FACT-LCS)

### Secondary endpoints {#Sec9}

- Pulmonary QOL at other time-points
  ○ Measured using the FACT-LCS
- Other QOL scores
  ○ FACT---Trial Outcomes Index (FACT-TOI)
  ○ FACT---Lung (FACT-L) and subscales
- Provider-reported toxicity (including RP and esophagitis)
  ○ Assessed by the National Cancer Institute Common Toxicity Criteria (NCI-CTC) version 4
- Overall Survival
  ○ Defined as time from randomization to death from any cause
- Progression-free survival
  ○ Time from randomization to disease progression at any site or death
  ○ Progression defined according to RECIST 1.1
- Quality-Adjusted Survival (based on EQ-5D)

Study design {#Sec10}
------------

This study is a double-blinded randomized controlled trial (Figure [1](#Fig1){ref-type="fig"}).

Figure 1 **Study design: patients will be randomized in a 1:1 ratio between Arm 1 (standard radiotherapy) and Arm 2 (functional lung avoidance radiotherapy).**

Patient selection {#Sec11}
-----------------

### Inclusion criteria {#Sec12}

- Age 18 or older
- Willing to provide informed consent
- ECOG performance status 0-2
- Histologically confirmed non-small cell lung carcinoma
- Locally advanced Stage IIIA or IIIB lung carcinoma according to AJCC 7^th^ edition
- History of at least 10 pack-years of smoking
- Ambulatory and able to perform the Six Minute Walk Test (6MWT)
- FEV~1~ ≥ 750 ml or ≥30% predicted
- Not undergoing surgical resection
- Assessment by medical oncologist and radiation oncologist, with adequate bone marrow, hepatic and renal function for administration of platinum-based chemotherapy, as determined by the treating physicians

### Exclusion criteria {#Sec13}

- Subject has an implanted mechanically, electrically or magnetically activated device or any metal in their body which cannot be removed, including but not limited to pacemakers, neurostimulators, biostimulators, implanted insulin pumps, aneurysm clips, bioprosthesis, artificial limb, metallic fragment or foreign body, shunt, surgical staples (including clips or metallic sutures and/or ear implants)
- In the investigator's opinion, subject suffers from any physical, psychological or other condition(s) that might prevent performance of the MRI, such as severe claustrophobia
- Serious medical comorbidities (such as unstable angina, sepsis) or other contraindications to radiotherapy or chemotherapy
- Prior history of lung cancer within 5 years
- Prior thoracic radiation at any time
- Metastatic disease.
  Patients who present with oligometastatic disease where all metastases have been ablated (with surgery or radiotherapy) are candidates if they are receiving concurrent CRT to the thoracic disease with curative intent
- Inability to attend full course of radiotherapy or follow-up visits
- Pregnant or lactating women

Pre-treatment evaluation {#Sec14}
------------------------

- History and physical examination by a radiation oncologist and medical oncologist within 12 weeks prior to enrolment onto study
- Histological confirmation of non-small cell carcinoma
- Standard staging within 12 weeks prior to initiation of chemotherapy including:
  ○ CT chest and upper abdomen
  ○ Whole body FDG-PET-CT scan (currently funded for stage III NSCLC in Ontario)
  ○ CT head or MRI head
- Pulmonary function tests within 12 weeks of initiation of radiotherapy showing adequate FEV~1~: the best value obtained pre- or post-bronchodilator must be ≥750 ml or ≥30% predicted
- Bloodwork: CBC with differential, Hemoglobin, AST, ALT, bilirubin, creatinine should be done before the 1^st^ cycle of chemotherapy. If any tests are missed they must be done prior to start of radiation.
- Pregnancy test for women of child-bearing age

Study visits {#Sec15}
------------

Subjects will visit the research center three times: pre-treatment, three months post-treatment, and 12 months post-treatment. ^3^He MRI and non-contrast chest CT will be performed on the first visit only. Subjects will undergo pulmonary function tests, Forced Oscillation Technique, 6MWT, and QOL questionnaires at each visit.

### Pulmonary function tests {#Sec16}

Full pulmonary function tests including spirometry, plethysmography and diffusing capacity of carbon monoxide (DL~CO~) will be performed according to the joint American Thoracic Society/European Respiratory Society (ATS/ERS) guidelines \[[@CR23]--[@CR27]\] using the MedGraphics (Elite Series, MedGraphics Corporation, St. Paul, MN USA) whole-body plethysmograph and/or ndd EasyOne Spirometer (ndd Medical Technologies Inc., Andover, MA USA). Airwave oscillometry will be performed using the TremoFlo™ (THORASYS Thoracic Medical Systems, Halifax, NS). Airwave oscillometry measures the mechanics of the respiratory system and evaluates lung function without patient effort by superimposing a gentle multi-frequency airwave onto the patient's respiratory airflow. Patients breathe normally throughout the measurement sequence for less than a minute via a disposable mouthpiece.

### Six minute walk test {#Sec17}

Subjects will perform the 6MWT according to ATS guidelines \[[@CR28]\]. Subjects will rate their dyspnea and overall fatigue at baseline and at the end of the exercise using the Borg Scale \[[@CR29]\].

### CT {#Sec18}

Low dose, thoracic multi-detector row computed tomography will be performed with the same breath-hold volume and maneuver used for MRI. CT imaging will be performed using a 64-slice (General Electric Health Care, Milwaukee) scanner. In order to match CT and MRI breath-hold volumes and anatomy, subjects will be scanned in the supine position during inspiration breath-hold from functional residual capacity (FRC) after inhaling one litre of N~2~ gas as previously described \[[@CR30]\].

### MRI {#Sec19}

MR imaging will be performed using a 3.0 T MR750 system (GE Health Care, Milwaukee, Wisconsin) using a whole-body gradient amplitude of 1.94 G/cm and a single-channel, rigid elliptical transmit/receive chest coil (Rapid Biomedical GmbH, Wuerzburg, Germany).
For ^1^H and ^3^He MRI, subjects will be instructed to inhale from FRC a gas mixture from a one-litre Tedlar bag (Jensen Inert Products, Coral Springs, FL). Image acquisition will be performed during a 16-second breath hold. Coronal (anatomical) ^1^H MRI will be performed using the whole-body radiofrequency coil and a ^1^H fast-spoiled, gradient-recalled echo sequence using a partial echo (16 s total data acquisition, repetition time \[TR\] = 4.7 ms, echo time \[TE\] = 1.2 ms, flip angle = 30°, field of view = 40 cm, bandwidth = 24.4 kHz, matrix = 128 × 80, 15-17 slices, 15 mm slice thickness). ^3^He MRI static ventilation images will be acquired using a fast-gradient echo method using a partial echo (14 s total data acquisition, TR/TE/flip angle = 4.3 ms/1.4 ms/7°, field of view = 40 cm, bandwidth = 48.8 kHz, matrix = 128 × 80, 15-17 slices, 15 mm slice thickness) \[[@CR30]\].

A pulse oximeter lead will be attached to all subjects to monitor their heart rate and oxygen saturation. All subjects will have supplemental oxygen provided via nasal cannula at a flow rate of two litres per minute during the scanning process. Adverse events and pulse oximetry measurements during MRI will be recorded. If oxygen saturation falls to \<80% continuously for ≥15 seconds, scanning will be discontinued and the patient will be provided supplemental oxygen, as necessary, until oxygen saturation recovers to the patient's baseline value. The patient will then be discontinued from the study. Oxygen desaturation below 88% during a ^3^He/^129^Xe breath-hold will be considered an adverse event.

Radiotherapy {#Sec20}
------------

### Technique {#Sec21}

Patients will be treated with intensity modulated radiotherapy (IMRT). Volumetric modulated arc therapy (VMAT) is preferred, but IMRT can be delivered using static-beam techniques or other rotational techniques (*e.g.* Tomotherapy™). For rotational techniques, care must be taken to minimize dose to the contralateral lung. For each patient, both plans (standard and functional-lung-avoidance) must be planned using the same delivery technique. Respiratory gating is allowed for tumours with \>7 mm of respiratory motion.

### Immobilization and localization {#Sec22}

All patients will be positioned with arms over their heads, chin extended and immobilized according to institutional standards. Patients will undergo a planning 4DCT simulation encompassing the entire lung volume, typically extending from the level of C5 to L1 (below the diaphragm), with 3 mm slice thickness. Intravenous contrast may be used to improve delineation of target volumes when the target is centrally located, at the discretion of the treating radiation oncologist. The planning CT may be fused with other available standard diagnostic imaging (MRI, CT or PET).

### Functional lung delineation on planning CT {#Sec23}

Pulmonary segmentation of ventilatory patterns will be performed using semi-automated methods, as previously described \[[@CR22]\]. The ^3^He MRI containing the delineated areas of functional lung will be fused to the breath-hold CT and the planning CT. Due to differences in tidal volume between the two scans, deformable registration will be required for most cases. To ensure accuracy, the fusion will be inspected by a physicist and the treating physician.

### Radiotherapy volume definitions {#Sec24}

The radiotherapy planning and delivery parameters used in this study are based on current consensus guidelines for treatment of locally advanced lung cancer.
The gross tumour volume (GTV) is defined as the visible tumour and involved lymph nodes based on CT or PET imaging (nodes must be 1 cm or more in short axis or necrotic on CT, or PET positive, or biopsy-proven to contain carcinoma). Elective nodal irradiation will not be used. Nodes that are \<1 cm and PET-negative may not be included in the GTV unless they are necrotic-appearing.

Radiotherapy may be delivered using either a free-breathing or a gated technique. For the free-breathing treatment, a GTV will be delineated at end-inspiration and end-expiration. At the discretion of the radiation oncologist, the GTV may also be delineated on other phases of the breathing cycle (e.g. in cases involving significant hysteresis). All contoured GTVs shall then be fused to create the internal GTV (IGTV). For patients treated with respiratory gating, a subset average CT will be created by averaging several phases around end-expiration such that tumour motion is minimized while maintaining a clinically-acceptable gating window (typically, the 40-60% phases will be used for the subset average). The GTV will be contoured on the subset average scan and treatment will be delivered within the defined gating window. The GTV may also be contoured on the end-inspiration phase to aid in image-guidance using free-breathing cone-beam CT.

For all patients, a 5 mm margin will be added for microscopic disease to create an Internal Target Volume (ITV). This margin may be decreased at natural boundaries to microscopic extension (e.g. bone), or increased up to 8 mm in areas of uncertainty. For the planning target volume (PTV), a further 5 mm expansion will be added to the ITV in all directions.

For the purposes of radiotherapy planning, two ventilation regions will be created representing different levels of lung ventilation. The structure [lung-vent]{.smallcaps} will represent lung with any measurable ventilation. The structure [lung-avoid]{.smallcaps} will represent lung tissue that has normal ventilation according to previously published methodology \[[@CR23]\]. The structure [lung-avoid]{.smallcaps} will be preferentially spared. Due to registration errors and various imaging artefacts, portions of the lung ventilation maps are expected to be outside the anatomical lung boundary of the planning CT. For this reason, lung ventilation structures will be cropped so that they are contained within the anatomically-defined lungs.

### Prescription and dose constraints {#Sec25}

The prescription dose will be 60 Gy in 30 fractions, with 95% of the PTV to receive 95% of the prescribed dose. Target dose constraints were adapted from the RTOG 0617 study protocol (Table [1](#Tab1){ref-type="table"}) \[[@CR31]\]. Normal structures including spinal cord, right lung, left lung, oesophagus and heart should be contoured on each CT slice of the planning CT. The lung volume at risk is defined as the total lung minus IGTV. The oesophagus should be contoured from the caudal aspect of the cricoid to the gastroesophageal junction. The heart contours should extend from the beginning of the ascending aorta down to the apex of the heart.

Table 1 **Normal tissue dose constraints for radiotherapy planning**

| Structure    | Dose constraints for 60 Gy in 30 fractions        |
|--------------|---------------------------------------------------|
| Spinal cord  | Max dose \<50 Gy                                  |
| Lungs        | V20 \<37%; V05 \<90%; mean dose \<21 Gy           |
| Oesophagus   | Mean dose \<34 Gy; minimize V60                   |
| Heart        | 60 Gy to \<1/3; 45 Gy to \<2/3; 40 Gy to \<3/3    |

The spinal cord dose constraint cannot be exceeded for any reason.
It is strongly recommended that the other dose constraints not be exceeded. If any dose constraint needs to be exceeded in order to achieve adequate coverage of the PTV, approval by the treating physician is required.

### Planning workflow and blinding {#Sec26}

Before randomization, each patient will require two clinically-approved treatment plans meeting the constraints defined above. One plan will be designed without reference to the functional status of the lung (termed the *standard plan*). The second plan (termed the *avoidance plan*) will be optimized such that dose to functional lung is as low as reasonably achievable, with an aim to minimize the V5, V20, and mean dose within the functional lung. For a given patient, both standard and avoidance plans must use the same treatment technique (i.e. VMAT, static-gantry IMRT or Tomotherapy™). While this protocol does allow for static-beam IMRT, a rotational technique is preferred. The standard plan will always be completed first, followed by the avoidance plan.

Avoidance plans will, in general, be more heterogeneous than standard plans. While homogeneous plans may be more aesthetically pleasing, there is no evidence to suggest that they are superior. Thus, PTV homogeneity constraints and conformity constraints will be relaxed for avoidance plans. Nonetheless, hotspots of 105% or greater will be avoided outside the PTV. The structure [lung-avoid]{.smallcaps} (representing normally-ventilated lung) will be preferentially avoided in order to devise the avoidance plan. In general, the anatomically-defined lungs should not be used during optimization of the avoidance plan; rather, the structure [lung-vent]{.smallcaps} should be used in its place. If necessary, a maximum V20 constraint for the anatomical lungs may be used which is the smaller of: 37%, or 3% greater than the V20 generated for the anatomical lungs in the standard plan.

Compared with the standard plan, the goals for the avoidance plan will be as follows (a schematic check of these goals is sketched below):

- a 3% reduction in the V20 for [lung-avoid]{.smallcaps}, and/or a 1.5% reduction in V20 for [lung-vent]{.smallcaps}
- a 1.5 Gy drop in the mean dose to [lung-avoid]{.smallcaps}, and/or a 1 Gy reduction in mean dose to [lung-vent]{.smallcaps}
- V20 and mean dose to the anatomical lungs should be as similar as possible between the two plans

If these goals cannot be achieved, a decision will be made by the physicist and the treating physician as to whether the patient should be excluded from the trial. Once both plans are deemed acceptable, the radiation oncologist, dosimetrist, and physicist will review and approve both plans. When the radiation oncologist reviews the two plans, machine parameters will be hidden; these parameters will be viewable by the radiation oncologist only after the randomization step (see below), and only for the clinically-selected plan. To ensure consistency, the two plans will have a V20 for the anatomically-defined lungs (lung minus IGTV) within 3% of one another. While there are no other explicit constraints for the two plans relative to one another, in the best clinical judgment of the treating physician, neither plan should be clearly superior in terms of either target coverage or organ-at-risk (OAR) doses (excluding the anatomically-defined lungs). The doses to other OARs must be considered clinically equivalent. Both plans will be printed and signed. The patient will then be randomized (see randomization below).
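For illustration only, the goal check above can be expressed as a short script. This is a sketch, not part of the protocol's planning software; the function, the dict-based inputs and the decision to require both dose-reduction goals are assumptions, and the dose metrics (V20 in percent, mean dose in Gy) would come from the treatment planning system.

```python
# Illustrative check of the avoidance-plan goals (hypothetical helper).
# Inputs: {structure: {"v20": percent, "mean": Gy}} for each plan, using
# the structures lung-avoid, lung-vent and lungs (lung minus IGTV).

def avoidance_goals_met(standard, avoidance):
    v20_goal = (standard["lung-avoid"]["v20"] - avoidance["lung-avoid"]["v20"] >= 3.0
                or standard["lung-vent"]["v20"] - avoidance["lung-vent"]["v20"] >= 1.5)
    mean_goal = (standard["lung-avoid"]["mean"] - avoidance["lung-avoid"]["mean"] >= 1.5
                 or standard["lung-vent"]["mean"] - avoidance["lung-vent"]["mean"] >= 1.0)
    # Consistency requirement: anatomical-lung V20 within 3% between plans.
    consistent = abs(standard["lungs"]["v20"] - avoidance["lungs"]["v20"]) <= 3.0
    # Whether both dose-reduction goals must hold is a clinical judgment;
    # this sketch requires both.
    return v20_goal and mean_goal and consistent
```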
After randomization, one physicist (termed the 'unblinded physicist') will be responsible for receiving the randomization results for all patients in this trial. This unblinded physicist will choose the applicable plan for clinical use. This will be labelled as "Clinical Plan" and placed in the treatment system for standard quality assurance, and functional lung contours will be removed from that plan to maintain blinding. The previous two plans (*standard* and *avoidance*) will be archived under password protection. The standard treatment binder will include a printout of the clinical plan but will not contain information that would allow for unblinding.

It is generally required that an unblinding procedure be available in case of emergency. However, it is unlikely that unblinding would be required under any circumstance, even in the event of a radiation complication, as the doses delivered to normal structures will always be available in the treatment binder, wherein the treatment arm would not be apparent. In the unlikely event of an unforeseen emergency where unblinding is required, the unblinded physicist would be contacted and would confirm with the radiation oncologist that unblinding is required. That physicist would then access the password-protected plan to determine which plan was selected. The unblinded physicist will not be involved in patient care or ascertainment of outcomes.

In the event that a patient should require a repeat CT simulation for an unforeseen reason (e.g. pulmonary re-expansion, new atelectasis, rapid reduction in tumour bulk), the planning procedure above will be repeated, with both a standard plan and an avoidance plan being created. The unblinded physicist will not be involved in any of the re-planning steps up until the randomization step. Both re-plans (*standard* and *avoidance*) will need to be approved by the radiation oncologist. At this point, the unblinded physicist will select the appropriate plan according to the previous randomization. The post-randomization procedure will then continue as described below.

Chemotherapy and treatment sequencing {#Sec27}
-------------------------------------

Platinum-based chemotherapy will be delivered by the medical oncologist according to local standards. Radiotherapy will usually start with cycle 2 of chemotherapy. However, at the discretion of the treating physician, radiotherapy may be started with cycle 1 (urgent cases), or after more than 2 cycles (e.g. where tumour downsizing is required). Concurrent chemotherapy cannot contain taxanes or gemcitabine.

Follow-up and assessment of efficacy {#Sec28}
------------------------------------

### Quality of life (QOL) and health utilities {#Sec29}

QOL and health-utility data will be collected using the FACT-L scale and the EQ-5D, respectively. The primary QOL endpoint will be based on the FACT-LCS (a subset of FACT-L), which measures pulmonary QOL specifically. As a secondary endpoint, the FACT-TOI incorporates the LCS with two additional domains from FACT-L: the Physical Well-Being (PWB) and Functional Well-Being (FWB) scores. Although only subsets of FACT-L are used for these two endpoints, it is generally recommended that the full FACT-L be completed during clinical trials for more robust assessment of other QOL domains. The EQ-5D measures health utility, and is used for calculating quality-adjusted survival and cost-utility.
### Follow-up {#Sec30}

Patients will be assessed by the radiation oncologist at 3 months, 6 months and 1 year, then every 6 months until two years, then annually until 5 years (Table [2](#Tab2){ref-type="table"}). A detailed history, physical examination and CT chest and upper abdomen will be performed with each assessment. Other investigations are as follows:

Table 2 **Follow-up schedule**

| | Before entry | Last week of radiotherapy | Month 3 | Month 6 | Biannually for 2 years then annually until year 5 |
|---|---|---|---|---|---|
| History and Physical | X | X | X | X | X |
| Staging Imaging (see section 5) | X | | | | |
| Pulmonary function tests and 6MWT at Robarts Research Institute | X | | X | | X (month 12 only) |
| Baseline Bloodwork (see section 5) | X | | | | |
| Pregnancy test for women of child-bearing age | X | | | | |
| Toxicity and QOL Scoring (PAR, FACT-L, EQ-5D) | X | X | X | X | X (month 12 only) |
| Follow-up CT chest/upper abdomen | | | X | X | X |

- Toxicity scoring and QOL scoring (FACT-L and EQ-5D): during the last week of radiotherapy (i.e. at the last patient review clinical visit), and at 3, 6 and 12 months post-treatment.
- Pulmonary function tests and 6MWT: 3 months and 12 months post-treatment.

Statistics and sample size calculation {#Sec31}
--------------------------------------

### Sample size {#Sec32}

The primary endpoint is the QOL score on the FACT-LCS measured 3 months post-treatment. A change in the LCS score of 2-3 points is considered clinically relevant \[[@CR32]\]. Following concurrent CRT, it is assumed that in the arm receiving standard treatment, the mean post-treatment LCS score will be 20 \[[@CR33]\]. The study will use a two-sided, independent-sample t-test with an alpha level of 0.05 and power of 80%, and will assume a 3-month QOL non-completion rate of 10%. The standard deviation of the LCS score is estimated to be 4. In order to detect a 3-point improvement in QOL in the experimental arm (Arm 2), a total of 64 patients will be required (32 in each arm): with these assumptions, the standard two-sample formula gives approximately 28 evaluable patients per arm, which is inflated to 32 per arm by the 10% non-completion allowance.

### Randomization {#Sec33}

Randomization will occur in a 1:1 ratio between Arm 1 and Arm 2 using permuted blocks. Randomization results will be communicated by the statistician to the unblinded physicist by telephone.

### Analysis plan {#Sec34}

Patients will be analysed in the groups to which they are assigned (intention-to-treat). An independent-sample t-test will be used to compare QOL scores at 3 months. The percentage of patients in each arm who experience a clinically significant QOL decline (3 points) will also be reported. Survival will be calculated from the date of randomization using the Kaplan-Meier method, with differences compared using the log-rank test. A Cox multivariable regression analysis will be used to determine baseline factors predictive of survival. For the secondary endpoints involving QOL scales, linear mixed effects models will be used.

### Data safety monitoring committee {#Sec35}

The data safety monitoring committee (DSMC) will consist of a statistician, an independent investigator, and a content expert. The DSMC will review toxicity outcomes on a semi-annual basis. If any grade 3-5 toxicity is reported, the patient case will be reviewed to determine if such toxicity is related to treatment. The DSMC may recommend modification or cessation of the trial if radiotherapy toxicity rates are deemed excessive (e.g. \>10% grade 5 toxicity).

### Interim analysis {#Sec36}

The DSMC will conduct one interim analysis once 32 patients have been accrued and followed for 3 months. For this analysis, the DSMC will be blinded to the identity of each treatment arm, but QOL and OS data will be presented for each arm.
The DSMC will recommend stopping the trial if there is an OS difference that is statistically significant at a threshold of p \< 0.001 using the log-rank test, based on the Haybittle-Peto stopping rule. This retains an overall alpha of 0.05. At the interim analysis, the DSMC will also check the validity of the sample size calculation assumptions. The DSMC will be provided with the standard deviation of the FACT-LCS scores in each arm (while remaining blinded to the identity of each arm), and the rate of completion of the 3-month QOL forms. If these values are substantially different from those estimated in the sample size calculation, the DSMC can recommend increasing or decreasing the target accrual in order to maintain statistical power.

Institutional research ethics board (REB) approval {#Sec37}
--------------------------------------------------

Western University REB Number: 104834.

Discussion {#Sec38}
==========

The goal of this randomized, double-blind trial is to comprehensively evaluate the effect of functional lung avoidance using pulmonary functional imaging on both pulmonary toxicity and QOL, specifically for patients receiving concurrent CRT for locally advanced NSCLC. To date, SPECT and 4DCT have been used in radiation treatment planning to provide functional lung information, and these investigations have demonstrated reductions in dose to healthy lung tissue \[[@CR11], [@CR16]\]. However, to our knowledge, they have not assessed patient QOL as one of their primary endpoints, which has been demonstrated to be a powerful tool to capture pulmonary toxicity outcomes \[[@CR9]\]. Hyperpolarized ^3^He MRI provides an alternative to SPECT and 4DCT and offers high spatial and temporal resolution of respiratory function \[[@CR15]\]. Although ^3^He has several inherent advantages, it will not likely achieve widespread clinical use, due to cost and a limited global supply of helium. However, several alternatives appear promising and are expected to be available for widespread clinical use in the future, including ^129^Xe MRI and ^1^H Fourier decomposition, which are currently less well-developed than ^3^He MRI and 4DCT. By establishing the benefits of ^3^He MRI functional lung avoidance and validating less-developed methods such as ^129^Xe and ^1^H Fourier decomposition MRI against it, these latter methods may come to allow widespread implementation of functional lung avoidance radiotherapy. In summary, this study will determine if ^3^He MRI-based functional lung avoidance methods will improve QOL and pulmonary toxicity in subjects with unresectable NSCLC.

**Competing interests**

The authors declare that they have no competing interests.

**Authors' contributions**

Study conception: DAH, DAP, BY. Initial study design: DAH, DAP, BY, GP, GBR. Revision of study design and protocol: all authors. Drafting and approval of final protocol and manuscript: all authors.

We wish to thank Andrew Wheatley, BSc, and Sandra Blamires, CCRC, for clinical coordination, pulmonary function tests, dispensing of gas doses and data archival. We are also grateful to Trevor Szekeres, RMRT, for MRI of research subjects. This study is funded by a Grant-in-Aid from the Ontario Thoracic Society/Canadian Lung Association (D.H., B.Y.), along with funding from the Ontario Institute for Cancer Research through funding provided by the Government of Ontario (D.A.P.). The granting bodies are not involved in data collection or analysis.
66,079,613
/*
 * @format
 */
import { GeoJSON, Format, Dp, Polygon } from '.';
import Vector3d from './vector3d';
import Dms from './dms';

type PathBrngEnd = LatLonNvectorSpherical | number;

declare class LatLonNvectorSpherical {
  constructor(lat: number, lon: number);

  get lat(): number;
  set lat(lat: number);
  get latitude(): number;
  set latitude(lat: number);

  get lon(): number;
  set lon(lon: number);
  get lng(): number;
  set lng(lon: number);
  get longitude(): number;
  set longitude(lon: number);

  static get metresToKm(): number;
  static get metresToMiles(): number;
  static get metresToNauticalMiles(): number;

  toNvector(): NvectorSpherical;
  greatCircle(bearing: number): Vector3d;

  distanceTo(point: LatLonNvectorSpherical, radius?: number): number;
  initialBearingTo(point: LatLonNvectorSpherical): number;
  finalBearingTo(point: LatLonNvectorSpherical): number;
  midpointTo(point: LatLonNvectorSpherical): LatLonNvectorSpherical;
  intermediatePointTo(point: LatLonNvectorSpherical, fraction: number): LatLonNvectorSpherical;
  destinationPoint(distance: number, bearing: number, radius?: number): LatLonNvectorSpherical;

  static intersection(
    path1start: LatLonNvectorSpherical,
    path1brngEnd: PathBrngEnd,
    path2start: LatLonNvectorSpherical,
    path2brngEnd: PathBrngEnd,
  ): LatLonNvectorSpherical;

  crossTrackDistanceTo(pathStart: LatLonNvectorSpherical, pathBrngEnd: PathBrngEnd, radius?: number): number;
  alongTrackDistanceTo(pathStart: LatLonNvectorSpherical, pathBrngEnd: PathBrngEnd, radius?: number): number;
  nearestPointOnSegment(point1: LatLonNvectorSpherical, point2: LatLonNvectorSpherical): LatLonNvectorSpherical;
  isWithinExtent(point1: LatLonNvectorSpherical, point2: LatLonNvectorSpherical): boolean;

  static triangulate(
    point1: LatLonNvectorSpherical,
    bearing1: number,
    point2: LatLonNvectorSpherical,
    bearing2: number,
  ): LatLonNvectorSpherical;

  static trilaterate(
    point1: LatLonNvectorSpherical,
    distance1: number,
    point2: LatLonNvectorSpherical,
    distance2: number,
    point3: LatLonNvectorSpherical,
    distance3: number,
    radius?: number,
  ): LatLonNvectorSpherical;

  isEnclosedBy(polygon: Polygon<LatLonNvectorSpherical>): boolean;
  static areaOf(polygon: Polygon<LatLonNvectorSpherical>, radius?: number): number;
  static meanOf(points: Polygon<LatLonNvectorSpherical>): LatLonNvectorSpherical;

  equals(point: LatLonNvectorSpherical): boolean;
  toGeoJSON(): GeoJSON;
  toString(format?: Format, dp?: Dp): string;
}

declare class NvectorSpherical extends Vector3d {
  constructor(x: number, y: number, z: number);
  toLatLon(): LatLonNvectorSpherical;
  greatCircle(bearing: number): Vector3d;
  toString(dp?: number): string;
}

export { LatLonNvectorSpherical as default, NvectorSpherical as Nvector, Dms };
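// ---------------------------------------------------------------------------
// Usage sketch (not part of the original declaration file). It relies only on
// the signatures declared above; the import path is an assumption -- adjust it
// to wherever this module actually resolves in your project.

import LatLon from './latlon-nvector-spherical';

const cambridge = new LatLon(52.205, 0.119);
const paris = new LatLon(48.857, 2.351);

// Great-circle distance, presumably in metres (cf. the metresToKm helper).
const d = cambridge.distanceTo(paris);

// Initial bearing in degrees, and the midpoint of the track.
const brng = cambridge.initialBearingTo(paris);
const mid = cambridge.midpointTo(paris);

// Destination point 100 km along a bearing of 90 degrees.
const dest = cambridge.destinationPoint(100_000, 90);

console.log(d.toFixed(0), brng.toFixed(1), mid.toString(), dest.toString());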
66,079,688
Howling on Halloween at Rotary Dog Park Come join the fun on Halloween 2015 at Rotary Park for the Howling on Halloween fundraiser, from 8:00 AM to 12:00 noon. The fundraiser benefits Bullhead City Animal Care and Welfare, so be sure to come out!
66,079,756
Ion chemistry of anti-o,o'-dibenzene. The ion chemistry of anti-o,o'-dibenzene (1) was examined in the gaseous and the condensed phase. From a series of comparative ion cyclotron resonance (ICR) mass spectrometry experiments which involved the interaction of Cu+ with 1, benzene, or mixtures of both, it was demonstrated that 1 can be brought into the gas phase as an intact molecule under the experimental conditions employed. The molecular ions, formally 1•+ and 1•−, were investigated with a four-sector mass spectrometer in metastable-ion decay, collisional activation, charge reversal, and neutralization-reionization experiments. Surprisingly, the expected retrocyclization to yield two benzene molecules was not dominant for the long-lived molecular ions; rather, other fragmentations, such as methyl and hydrogen losses, prevailed. In contrast, matrix ionization of 1 in freon (77 K) by gamma-radiation or in argon (12 K) by X-irradiation leads to quantitative retrocyclization to the cationic dimer of benzene, 2•+. Theoretical modeling of the potential-energy surface for the retrocyclization shows that only a small activation barrier, if any, is to be expected for this process. In another series of experiments, metal complexes of 1 were investigated. 1/Cr+ was formed in the ion source and examined by metastable-ion decay and collisional activation experiments, which revealed predominant losses of neutral benzene. Nevertheless, comparison with the bis-ligated [(C6H6)2Cr]+ complex provided evidence for the existence of an intact 1/Cr+ under these experimental conditions. No evidence for the existence of 1/Fe+ was obtained, which suggests that iron mediates the rapid retrocyclization of 1/Fe+ into the bis-ligated benzene complex [(C6H6)2Fe]+.
66,079,788
Ryan McGarvey: Redefined Review

In 2007, Ryan McGarvey released his debut album, Forward In Reverse, which was recorded when he was only 19 years of age. The album was as strong a debut as you can imagine from a young guitarist. Now 25, McGarvey is back with his highly anticipated sophomore album, Redefined. It's a fitting title, as McGarvey's sound has evolved into more of a straight-up rock style, though still blues based. Redefined begins with "All The Little Things," a nice rocker that does a good job of setting the tone for the album. The upbeat "Never Seem To Learn" and euphoric "My Sweet Angel" are next, and if there was any justice they would receive airplay on mainstream rock radio. Other highlights include "Prove Myself," a groovy jam that really puts McGarvey's guitar mastery on display, and "So Close To Heaven," which should become a McGarvey staple for years to come. Many young guitarists struggle to separate their sound from the rest of the pack, but McGarvey has come up with a signature sound that is distinctively his own. Redefined will go down as one of the best albums of the year. Names like Bonamassa, Trucks, and Sayce are often mentioned as modern-day guitar heroes, and it won't be long until Ryan McGarvey enters the equation.

I think your sound is great; it has more of a country flair than a contemporary blues sound. This is my opinion only…. I am sure if you send something to JB, he will respond and probably jam with you. Can't find someone more humble than he is.
66,080,108
Wednesday, 31 March 2010

I thought a nice way to start would be to give an intro to what I'll be writing about here. For a start, this isn't going to be serious; it's just a nice way to pass time and feel like I'm actually doing something. It's going to be about anything and everything that I'm interested in: student life, a fair chunk of sport, maybe a bit of politics seeing as there's an election coming, and anything else that pops into my head. I'll post a few more bits and pieces to let you know about me before I get into the meat of it.

Disclaimer

No-one mentioned on this blog is real, even me maybe, unless they have given permission to be included. All patients, illnesses and places are mere figments of my imagination or have been altered enough so as not to be accurate to any real person. It might seem a bit high-brow to have this, but I've been informed by the powers that be that the blog has to have one because of the patient contact on my course.
66,080,158
Singular Extensions: Example

In the seminal paper on singular extensions published by Anantharaman, Campbell, and Hsu in 1988, a study based on 300 test positions (Fred Reinfeld's Win at Chess collection) was conducted that showed how this search extension strategy greatly improved the tactical capabilities of their chess computer Deep Thought. As a particularly intriguing example, the performance on position #213 was discussed in detail, showing that singular extensions enabled detection of a mate in 18 on this relatively complex middlegame position in 65 seconds, whereas the very same system with singular extensions deactivated failed to find the mate in reasonable time.

This test has been repeated for Fischerle 0.9.70 SE 64Bit. Employing its standard settings for singular extensions (no singular extensions at Cut nodes; only moves suggested by the transposition table whose value is stored there as exact or as a lower bound can become singularity candidates) and using 256 MB of hashtable space, Fischerle finds the mate in 18 in just 11 seconds, processing only 3,874K nodes at a nominal search depth of 14.

Further tests showed that even with singular extensions deactivated, this version of Fischerle finds the mate in only 571 seconds (considering 220,825K nodes at nominal depth 20), which is still impressive. This gives evidence that singular extensions might be less important in more modern state-of-the-art engines that already employ a carefully optimized blend of other variable search depth strategies (extensions and reductions). Thus, while there still seems to be a considerable number of positions in which singular extensions nicely improve the tactical capabilities, this might partly explain why recent research on singular extensions no longer confirms the highly positive results that the inventors of this strategy described back in the year 1988. Another reason seems to be that the original research focused on a restricted number of test positions (in particular, Reinfeld's Win at Chess collection) and on a quite limited number of computer-vs-human games, while the comprehensive evaluation as performed nowadays, which is typically based on thousands of engine-vs-engine matches, is presumably quite a different story.
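To make the strategy concrete, here is a schematic sketch of a singularity test of the kind described above. This is illustrative pseudocode in Python, not Fischerle's or Deep Thought's actual implementation; the names, the margin and the depth threshold are assumptions:

# Schematic singular-extension test (illustrative only).
# Idea: if the transposition-table move is the only move that does not
# fail low against a lowered target, it is "singular" and its search
# depth is extended by one ply.

SINGULAR_MARGIN = 50       # assumed margin in centipawns
MIN_SINGULAR_DEPTH = 8     # assumed minimum depth for running the test

def singular_extension(node, tt_entry, depth, search):
    # Mirror the settings quoted above: skip the test at Cut nodes, and
    # only accept TT moves whose stored score is exact or a lower bound.
    if node.is_cut_node or depth < MIN_SINGULAR_DEPTH:
        return 0
    if tt_entry is None or tt_entry.bound not in ("exact", "lower"):
        return 0
    target = tt_entry.score - SINGULAR_MARGIN
    # Reduced-depth null-window search of every alternative move.
    for move in node.legal_moves():
        if move == tt_entry.move:
            continue
        score = -search(node.play(move), depth // 2, -target, -target + 1)
        if score >= target:
            return 0   # a rival move comes close: the TT move is not singular
    return 1           # all alternatives fail low: extend the TT move one ply

In real engines this is usually implemented as a single reduced-depth search of the node with the TT move excluded, rather than a per-move loop, but the logic is the same.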
66,080,288
Cities Encouraged to Take Action on 2013 Annual Conference Resolutions

August 2, 2013

This year, the League's Annual Conference will be held Sept. 18 – 20 in Sacramento. As part of the conference, member participants will consider action on two resolutions that have been submitted regarding environmental quality and public safety policies. The League strongly encourages each city council to consider the resolutions and determine its position. Each city council should designate a voting delegate and two alternates to represent their city during the resolution vote at the Annual Business Meeting. To designate your voting delegate, please fill out this form and return it to the League offices by Sept. 13. The 2013 Annual Conference Resolutions Packet is available online. In addition, a hard copy of the resolutions packet will be mailed out starting next week, Aug. 5. For more information regarding the conference, please visit the Annual Conference website.
66,080,400
Cho Seung-Hui shot dead 32 people at Virginia Tech in 2007

Virginia Tech killer Cho Seung-Hui repeatedly told counsellors he did not harbour suicidal or homicidal thoughts, newly released health files indicate. The documents include those that were found last month, two years after Cho killed 32 people at the US university. The records indicate the therapists found Cho depressed and anxious but saw no evidence he would commit violence. The medical treatment of 23-year-old Cho, who committed suicide, has been a major issue in the investigation.

The records released on Wednesday cover Cho's dealings with the counselling centre at Virginia Tech - two phone conversations and a subsequent face-to-face session. They include files discovered last month in the possession of a former employee of the Cook Counseling Center by legal teams of some of the victims' families. In the records, the counsellors noted that Cho denied thinking about suicide or murder. Cho met counsellor Sherry Lynch in December 2005 after he was detained in a mental hospital overnight because he had expressed thoughts of suicide. "He denies suicidal and/or homicidal thoughts. Said the comment he made was a joke. Says he has not reason to harm self and would never do it," Ms Conrad wrote in her evaluation. She urged him to return for counselling the following term. He was not seen again by the centre.

Treatment

"The absence and belated discovery of these missing files have caused pain, further grief and anxiety for families of the 16 April victims and survivors," a statement by Virginia Tech said. "With release of these records, Virginia Tech seeks to provide those deeply affected by the horrible events of April 2007 with as much as is known about Cho's interactions with the mental health system 15-16 months prior to the tragedy."

Cho, a South Korean student, targeted students and staff during his rampage at the college in Blacksburg, Virginia. As police moved in, he committed suicide. Much of the investigation has centred on the events of the day, and how the police, and the staff at Virginia Tech, reacted to the unfolding events. But some survivors, and families of the victims, say they are more concerned about the treatment Cho received at the counselling centre. While most of the survivors and relatives of the victims accepted an $11m (£6.6m) settlement from the state in April 2008, two families earlier this year took out a civil suit against the state, the school and its counselling centre.
66,080,461
Integrated circuits, such as field programmable gate arrays (FPGAs), may include circuitry to perform various mathematical operations. For example, a deep learning neural network may be implemented in one or more integrated circuit devices for machine learning applications. The integrated circuit devices may perform several operations to output results for the neural network. This Discussion of the Background section is for background information only. The statements in this Discussion of the Background are not an admission that the subject matter disclosed in this section constitutes a prior art to the present disclosure, and no part of this section may be used as an admission that any part of this application, including this Discussion of the Background section, constitutes prior art to the present disclosure.
66,080,550
Short Bytes: Two researchers have uncovered the dark secrets of North Korea's Red Star Linux distribution. The OS fulfills North Korea's aim to control all the information exchange in the country. These researchers called the spying features implemented in Red Star "a wet dream of a surveillance state dictator".

The Linux kernel is one of the biggest open source projects on the planet. To suit one's purpose, anyone can customize it and use it. So, it shouldn't be surprising that the North Korean dictatorship chose Linux to build its own operating system that would spy on its citizens. Red Star OS, North Korea's own Linux distribution, allows the users to see only what their government wants them to see. However, two researchers have presented an in-depth analysis of a leaked version of Red Star OS 3. "We found that the features implemented in Red Star OS are the wet dream of a surveillance state dictator," said Florian Grunow and Niklaus Schiess.

The Red Star Linux OS comes with a plethora of surveillance tools. All documents and multimedia files are watermarked to make their tracking easier. Its inbuilt antivirus software and web browser point to internal government servers.

Also read: North Korea Has Hydrogen Bomb, Claims Kim Jong-Un

According to the researchers, even though the OS is built on top of the Linux kernel, it comes with the charming looks of Mac OS X. It has multiple safeguarding methods to protect its system files, including a sudden reboot if the system detects any changes. "Angae means "Fog" in Korean. The term is widely used in parts of custom code used by the Red Star OS. We will lift the fog on the internals of North Korea's operating system," the researchers write.

The researchers believe that the OS is made to keep North Koreans isolated. With its Red Star OS, North Korea works to abuse the principles of free software and uses it to suppress free speech. And to do this, they are using software that is supposed to support free speech. Well, it's irony at its best.

Do you have something to add about North Korea's oppressive Linux distro? Share with the fossBytes community through comments.

Also read: The Internet Room of North Korea's New Airport Doesn't Have Any Internet
66,080,596
Modern diesel fuels are typically formulated with low sulphur levels, often 10 ppmw or less, in order to reduce the pollution caused by their combustion. However, the processes used to remove sulphur-containing components also typically reduce fuel lubricity. It is therefore generally necessary to incorporate lubricity enhancing additives in diesel fuels, in particular to reduce wear on the fuel pumps through which the fuels are conveyed. It is also necessary, both in the interests of the environment and to comply with increasingly stringent regulatory demands, to increase the amount of biofuels used in automotive diesel fuels. Biofuels are combustible fuels, typically derived from biological sources, which result in a reduction in “well-to-wheels” (ie from source to combustion) greenhouse gas emissions. For use in diesel engines, fatty acid methyl esters (FAMEs) such as rapeseed methyl ester, soybean methyl ester and palm oil methyl ester are the biofuels most commonly blended with conventional diesel fuel components. However, FAMEs and their oxidation products tend to accumulate in engine oil, which has typically limited their use to 10% v/v or less in fuels burned in many diesel engines. At higher concentrations they can also cause fouling of fuel injectors. Moreover, due to the incomplete esterification of oils (triglycerides) during their manufacture, FAMEs can contain trace amounts of glycerides which on cooling can crystallise out before the FAMEs themselves, causing fuel filter blockages and compromising the cold weather operability of fuel formulations containing FAMEs. It would be desirable to provide new biofuel-containing diesel fuel formulations which could overcome or at least mitigate the above problems, and which ideally could help to overcome lubricity issues in diesel fuels.
66,081,151
The invention relates to a method of determining an instantaneously optimum pressure of the brakes of a trailer or semitrailer connected to a tractor and, more particularly, to such a method for the purpose of adjusting the coupling force occurring between the tractor and the trailer or semitrailer to the conditions instantaneously present. DE 3,901,270 AS discloses a braking device in which, during the first braking, the actual pressure value at the trailer is varied until the measured drawbar force vanishes. A braking pressure correction value is derived from the difference between the prescribed desired pressure value and the actual pressure value. No information or suggestion is provided, however, as to how the braking pressure correction value is to be derived. Furthermore, DT 2,164,352 B2 discloses a method in accordance with which the medium braking pressure on the wheels of a motor vehicle trailer is regulated as a function of the coupling force between the tractor and the trailer. In this case, the medium braking pressure is regulated such that the coupling force vanishes or becomes as small as possible. It is taken into account whether the braking process is stable or unstable, i.e. whether, in the case of the occurrence of a positive coupling force, a greater retardation of the trailer is produced and thus the coupling force becomes smaller given an increase in the medium braking pressure, or whether the wheels of the trailer lock, and thus the coupling force increases once again, given an increase in the medium braking pressure. It could be regarded as disadvantageous in the previously known methods that the braking pressure is not set to an adjusted value until the occurrence of a coupling force, as a result of which, under certain circumstances, driving comfort could possibly be impaired during the braking process. It is an object of the present invention to improve upon known methods such that during a braking process as much driving comfort as possible is guaranteed simultaneously with driving safety that is as high as possible during the braking process. In a general method of determining an instantaneous optimum braking pressure for a trailer or semi-trailer connected to a tractor, this object has been achieved according to the present invention by providing that, during at least one braking process in which the coupling force has been set equal to its desired value, the assignment of the pressure of the brakes of the tractor to the pressure of the brakes of the trailer is determined, taking account of certain parameters. A conclusion is drawn from the determined assignments in the range of small pressure values about the assignments in the range of larger pressure values, and during subsequent braking processes, the target value of the pressure of the brakes of the trailer is determined at least indirectly from the assignment of the pressure of the brakes of the tractor to the pressure of the brakes of the trailer. By contrast with known methods, advantages of the present invention include that, when the method according to the invention is applied, the desired value of the coupling force can be achieved very quickly.
In the method according to the present invention for determining an instantaneously optimum braking pressure for a trailer or semitrailer connected to a tractor, a first value is derived for an instantaneously optimum braking pressure by assigning to the instantaneous position of the braking value sensor (brake pedal) a value, derived from earlier stationary braking processes, of the braking pressure of the trailer or semitrailer as a target value of the braking pressure of the trailer or semitrailer. It is possible in this way to take account of differences arising in the individual brake systems, which can be based on different designs of the brake systems of the tractor and trailer or semitrailer or on different conditions with regard to ageing. A stationary braking process is derived from the condition that over a relatively long period of time the coupling force is equal to zero or equal to the desired value of the coupling force. Deviations within a prescribed threshold value, which arise, for example, from measurement inaccuracies, are likewise recognized as a stationary braking process. In this case, this relatively long period of time can, in particular, be on the order of magnitude of approximately 0.5 s. A desired value of the coupling force that is not equal to 0 can advantageously be used when the trailer is a central axis trailer. In the case of such a road train, a component of the braking force for the trailer, which corresponds to the supporting force on the tractor, must be accepted by the tractor. In the same way, the method can also be used for semitrailer trains. Only the term "trailer" will be generally used in the following description, although all other possibilities of composing a road train that are indicated here are understood to be encompassed by the present invention. Both changes in the composition (that is to say, a change in the trailer attached to the tractor) of the tractor-trailer combination and changes in the loading of the tractor-trailer combination are advantageously taken into account by resetting the stored assignments in the case of changes in the composition and/or changes in the loading. Changes in the composition of the tractor-trailer combination, as well as changes in the loading, can be derived in this case, for example, from a relatively long stationary period or an engine standstill. The duration of this relatively long standstill can be fixed in this case at, say, 2 minutes. After the braking pressure of the trailer assigned to the instantaneous position of the braking value sensor has been set, this braking pressure can then additionally be regulated so that the coupling force present between the tractor and the trailer reaches its desired value. The assignment of the position of the braking value sensor to the braking pressure of the trailer can then be used as the assignment for a target braking pressure for a specific position of the braking value sensor if this regulation were to lead in turn to the occurrence of a stationary braking process. Various indices will be used below for the pressures in the braking system. The pressure designation P_ALB designates the pressure upstream of the automatically load-dependent braking force valve (ALB valve) of the tractor, and the pressure designation P_KKB designates the pressure at the coupling head brake between the tractor and the trailer, and thus designates the braking pressure of the trailer.
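To make the learning scheme concrete, the following is a minimal sketch in C, not taken from the patent itself: the bin count, force tolerance and timing constants are illustrative assumptions. It records the trailer pressure P_KKB observed during a recognized stationary braking process as the target for the instantaneous brake-pedal position, and resets the stored assignments after a long standstill.

    #include <math.h>

    #define N_BINS 16           /* discretized brake-value-sensor (pedal) positions */
    #define STATIONARY_S 0.5    /* coupling force must stay settled this long */
    #define FORCE_TOL_N 100.0   /* tolerance for "force equal to its desired value" */

    static double assignment[N_BINS]; /* learned trailer pressure per pedal bin */
    static int valid[N_BINS];
    static double settled_s;          /* time the force has matched its desired value */

    static int bin_of(double pedal) /* pedal position normalized to [0, 1] */
    {
        int b = (int)(pedal * (N_BINS - 1) + 0.5);
        return b < 0 ? 0 : (b >= N_BINS ? N_BINS - 1 : b);
    }

    /* Call once per control cycle of length dt_s. Whenever a stationary
       braking process is recognized (coupling force at its desired value
       for at least STATIONARY_S), the current trailer pressure is stored
       as the target for the current pedal position. */
    void learn(double pedal, double p_trailer,
               double f_coupling, double f_desired, double dt_s)
    {
        if (fabs(f_coupling - f_desired) < FORCE_TOL_N)
            settled_s += dt_s;
        else
            settled_s = 0.0;
        if (settled_s >= STATIONARY_S) {
            int b = bin_of(pedal);
            assignment[b] = p_trailer;
            valid[b] = 1;
        }
    }

    /* During subsequent braking processes: look up the stored target
       pressure; returns a negative value if nothing has been learned yet. */
    double target_pressure(double pedal)
    {
        int b = bin_of(pedal);
        return valid[b] ? assignment[b] : -1.0;
    }

    /* After a long standstill (e.g. roughly 2 minutes, suggesting a change
       of trailer or loading), the stored assignments are reset. */
    void reset_assignments(void)
    {
        for (int i = 0; i < N_BINS; i++)
            valid[i] = 0;
        settled_s = 0.0;
    }

In the spirit of the claim about drawing conclusions from the range of small pressure values, bins learned at low pressures could additionally be extrapolated (for example linearly) to the bins for larger pressures that have not yet been observed.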
66,081,208
Expression of cytokeratins, adhesion and activation molecules in oral ulcers of Behçet's disease. Behçet's disease (BD) is a multisystemic inflammatory disorder of which oral aphthous ulceration is a major feature. AIMS/HYPOTHESIS: This study sought to determine the role of cytokeratins, differentiation and proliferation markers, gammadelta T-cell adhesion and activation molecules, and apoptotic markers in oral ulcers of this disease. Expression patterns for cytokeratins (K1, K6, K14, K15, K16), integrins (beta1 and alpha6), CD3 T-cell and gammadelta T-cell adhesion and activation markers [CD40, CD44, CD54, ICAM-1, CD58, leucocyte function-associated antigen (LFA)-3, vascular cell adhesion molecule-1 (VCAM-1), CD86], cellular proliferation and differentiation markers (Ki67 and involucrin), and apoptotic markers (CD95 and Bcl-2) in oral ulcers of nine patients with BD and four healthy controls were analysed by immunohistochemistry. K14, K15 and involucrin expression were unchanged, whereas Ki67, the proliferation marker, was reduced by around 50%. K1, K6, K16, beta1 integrin and the apoptotic marker CD95 were upregulated, whereas alpha6 integrin and Bcl-2 were downregulated in BD samples. CD3 and gammadelta T-cell expression and other adhesion molecules including CD44, CD86, CD58 (LFA-3), VCAM-1 and intercellular adhesion molecule-1 (CD54) were upregulated, whereas CD40 showed little change. Our data demonstrate changes in cell-cell and cell-extracellular matrix interactions that affect cell homeostasis and may participate in the formation of oral ulcers in BD.
66,081,262
Q: Nil'ing weak pointers in Objective-C ARC? Slow like hell? In the following question, it was asked how the nil'ing of weak pointers works in Objective-C: "How is the ARC's zeroing weak pointer behavior implemented?" The answer pointed to this document, which seems to include the answer: http://mikeash.com/pyblog/friday-qa-2010-07-16-zeroing-weak-references-in-objective-c.html The answer is to keep a dictionary/hash table from each object to the set of its weak references. But isn't the consequence that each deallocation must then involve a hash table lookup? Isn't this quite a heavy performance penalty, especially in the case of many short-lived objects? A: A hash table lookup is usually fast, but as you correctly state, the performance penalty will increase in the case of lots of short-lived objects. This, however, must be balanced against the convenience of the hash table guaranteeing that a weak reference will either be valid or nil, never dangling.
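For illustration only, here is a toy sketch in C of such a registry. This is an assumption-laden simplification, not the actual Objective-C runtime implementation: it scans a small fixed-size array instead of a hash table, which makes the per-deallocation lookup cost discussed above explicit.

    #include <stddef.h>

    #define MAX_WEAK_REFS 128

    /* One registered weak reference: the referent and the address where
       a weak pointer to it is stored. */
    struct weak_entry {
        void *object;
        void **location;
    };

    static struct weak_entry table[MAX_WEAK_REFS];

    /* Store a weak reference and remember where it lives. */
    void weak_store(void **location, void *object)
    {
        *location = object;
        for (int i = 0; i < MAX_WEAK_REFS; i++) {
            if (table[i].location == NULL) {
                table[i].object = object;
                table[i].location = location;
                return;
            }
        }
        /* A real runtime would grow the table instead of dropping entries. */
    }

    /* Called on deallocation: every registered weak pointer to the dying
       object is set to NULL, so readers see nil rather than a dangling
       pointer. This scan is the per-dealloc cost the question asks about. */
    void weak_clear(void *object)
    {
        for (int i = 0; i < MAX_WEAK_REFS; i++) {
            if (table[i].object == object) {
                *table[i].location = NULL;
                table[i].object = NULL;
                table[i].location = NULL;
            }
        }
    }

With a hash table keyed on the object address, the clear step becomes roughly constant time, and a runtime can avoid the lookup entirely for objects that were never weakly referenced, for example by keeping a flag in the object header, so short-lived objects that nobody weakly references need not pay for it.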
66,081,514
While pounding out seven miles on my treadmill yesterday I listened to C.J. Mahaney’s message from the recent T4G conference, Sustaining a Pastor’s Soul. It was the least dramatic message I’ve listened to by Mahaney (albeit out of only a dozen or so from Resolved, Shepherds’ Conference, and various mp3 downloads) but it had/is having appreciable effect on me. The central point of his message was that God is best served by glad pastors. He asserted that it is simply not sufficient for a pastor to serve faithfully; he must also serve joyfully. I’ve heard that before, but God graciously opened the eyes of my heart anew. The entire sermon challenged the soul by considering the apostle Paul’s joyful ministry in the midst of demanding responsibilities, hard sufferings, and even imprisonment for sake of the gospel as seen in Philippians 1:3-8. As I’ve heard him do previously, Mahaney urged each pastor to ask those closest to him–wife, ministry team, personal assistant–a series of simple questions about whether he lives and serves joyfully or irritably, with happiness or moodiness, gladness or discouragement. This time I took his advice. I didn’t have to ask Mo for her answers. I just went ahead and asked her forgiveness immediately after I finished my run yesterday. But earlier today I arranged for some of the guys who work with me on a daily basis to listen to Mahaney’s message with me and then invited them to share their observations about my life and ministry. I warned them in advance that a quiz would follow and when it was over I printed the questions and even initialed the disclaimer at the bottom so they could hold me to it. Here’s the quiz. Click on it to see it full size. I won’t go into specific successes or failures, but suffice it to say the process was less painful than it would have been three or four years ago. One thing they all agreed on is that my attitude is “ridiculously influential” for better or worse and that I should wield that influence with great care. Since the door’s already open I suppose you are welcome to participate as well. The condition, however, is that you’ll need to email me so that the comments don’t get carried away in either direction. I’m not looking for praise or pettifogging criticism, but for signs of grace and areas needing growth. Of course God is the ultimate and only inerrant judge as well as the only One who can see my heart. Even so, my progress is supposed to be evident to all. Some very important things depend on my paying close attention and maybe you can help.

In case my last post left you feeling a little down, let your heart cheer you in the days of your youth. Now is perhaps the best time ever to be a Christian college student, especially if you’re in Lynchburg, VA because Chuck Norris is your graduation speaker.

I love Dr. John MacArthur. Much of my spiritual and pastoral growth can be attributed directly to him as the human instrument. When I packed my Ford Probe and moved to Los Angeles in 1997 for seminary it was because I wanted to be a student fully trained with him as the teacher. There is no one else I would rather listen to preach. And thanks to Phil Johnson and other editors his body of published material is without modern-day equal. He is one of God’s strongest and clearest messengers and I sit up straight when he speaks. More than likely, most of the people who read this blog know and love Dr. MacArthur as well. So it’s no new revelation to say he’s the preacher who never met a passage that wasn’t his favorite.
Each week, every next verse he unleashes is the most “rich” one. I really do admire his endless positivity, especially in light of everything I imagine he’s seen and heard. His sanguine perspective also spills over into a proclivity for hyping whatever he’s thinking/talking about in the now. I’m amazed how excited he is, or at least sounds, about anything he’s announcing.1 It’s more than admirable, it’s endearing. That said, I don’t always believe everything he says. Sometimes sweet things are too good to be true and you get to a point where you can’t handle any more honey.

A good case in point would be the introduction of his recent chapel message, The Responsibility of a Christian College, in which he claimed that being at a Christian college is the most intense spiritual experience a believer could have.2 As president of The Master’s College I understand he’s obligated, and I think in his case genuinely excited, to promote the school. But this characterization of life at a Christian college is more like a caricature, and the exaggeration gives a dangerous and unhelpful impression.

For the record, I don’t have a problem with Christian colleges, or The Master’s College in particular. Just the opposite is true. For the last seven years I’ve promoted, organized, and driven students thousands of miles for Preview Weekends at TMC. Some of my favorite people are TMC students or graduates and I wish I could have gone there myself. Furthermore, I attended three different Christian colleges before finally graduating from one of them. So I agree, as MacArthur opened his message, that Christian colleges should produce “distinctive Christians,” defined by him as those “whose sanctification is evident.” That’s good if not self-evident. But I couldn’t believe what he said just a little under two minutes in:

Being a Christian in a Christian college should be the most formidable, the most aggressive, the most progressive, the most intense time of sanctification that a believer could ever know. …to be a true Christian, and to be put in this setting, with its level of spiritual intensity, biblical understanding, biblical literacy, theological clarity, ministry opportunity, is a level of intensity in spiritual experience that has no parallel. No youth pastor can produce this level of discipleship. No family can produce the breadth, height, length and depth of this level of discipleship coming from so many different directions, all singularly focused, all founded on the same convictions, all pursuing the same objective.3

For the sake of full disclosure, it’s true that Pastor of Student Ministries is the title on my business card. Maybe I’m howling because one of the rocks he threw hit my head, but I don’t think that’s the only reason. If he had said something like, “Sadly it’s true for many Christians that the most intense time of spiritual growth is in college” I’d have no complaint. Certainly that is possible. My objection is that his statement makes it sound ideal. But if Christian college is truly the ultimate place for spiritual growth and sanctification and discipleship then that’s awful for the majority of past and present believers who never went to a Christian college for whatever reason. We should make everyone enroll immediately for every semester and attend classes at some Christian college as long as they’re alive.4 Every family, and church, should organize themselves around the college schedule. Apparently the rest of us are really missing out. Obviously that’s not biblical.
God ordained the church and the family as His institutions for instruction, discipleship, worship, ministry, and personal obedience. Christian college may be a small brick in the wall, but to say it is “a level of intensity in spiritual experience that has no parallel” discourages almost everyone but donors and undermines MacArthur’s message and ministry for over 40 years. Professors cannot take the place of preachers/pastors and parents. They’re not supposed to. College is also patently NOT discipleship, unless discipleship is defined in terms of classes and chapels, which I’ve argued against elsewhere. In addition, roommates and RAs cannot provide what older, and younger, and otherwise different parts of the Body can.

When I look back on my own Christian college experience, sanctification was indeed formidable. But I always attributed that more to the fact of living in close quarters with a thousand other selfish sinners. And we certainly had some spiritually intense discussions, but most of the intensity was due to our youthful arrogance, not our theological acumen. I’m very thankful for everything I learned and wouldn’t trade it for anything. I’m even more thankful that it’s done. In fact, trying to be a loving husband, a diligent dad, and a faithful shepherd has no parallel in terms of intensity.

My point is that no Christian college can provide the breadth, height, length and depth of spiritual experience. I don’t believe it. Dr. MacArthur himself has preached and published otherwise for a long time. I don’t think he believes it either.

For instance, though he’s greeted new visitors thousands of times I’ve never heard his welcome sound stale, rote, or disinterested. ↩ He certainly meant being at a “good” Christian college, and presumably he would consider a “good” one to be like The Master’s College where Scripture is the authority and personal holiness is prized. ↩ I transcribed this quote from the podcast of Chapel @ TMC. At the moment there is no other way to access the message than by subscribing to the entire podcast. Note to the media department at TMC: if you’re going to make the audio available at all (for which we are very thankful), why not create a more inviting way to access the chapel messages than only through iTunes? Anyway, I suspect this quote is exactly the kind of material that might be published in a future edition of The Master’s Current, though I hope it gets buried or lost on the editor’s desk. ↩ We’re also going to have to do something about the hefty $120,000 price tag for this level of sanctification. ↩

Comments (copied) Hey Hig, Thanks for your leadership and truth declaring on our behalf. Thanks for your willingness to disagree, when necessary, with someone you love. Our family has been through some “intense sanctification training” lately, and you’re exactly right, it’s the Body that brings all those things so sweetly together. Thanks again for your work at TMC and seminary to move you along in the process, but thank you even more for your continued growth in sanctification over the past years you have been involved in our ministry. You, and Mo, have grown significantly and so have our 128 staff and students. God is Good. Chuck

Hmmm, I don’t think I care for his dogmatic declaration either.
When I first considered colleges, there was almost nothing I wouldn’t have given to be able to attend that college as well – a few years removed from that decision and looking at the growth and intense discipleship I have received through the ministry here in One28 – I KNOW I have received more ministry opportunity and personalized exhortation from the Godly men in this church than I would ever have received from a teacher that is forced into the formality of the academic arena. My deepest gratitude to your “second best” efforts, Sean!

@clyde – i agree with the disagreement too. and i agree that money talks. but i did want to jump in with a tiny (maybe weak) defense of the book you linked to. from what i understand, many of the “spin off” books are handled by the publisher and macarthur (or phil) has very little, if anything, to do with them. he obviously has very much to do with his message at chapel, though, so there is no getting out of that. and yes, the next question is why you would let your publisher have that much control and influence…i did say it might be a weak defense.

Although I am about to graduate from said college in a little over a week and I did sit through that very chapel message, I do agree that he definitely exaggerated the point of the Christian college experience being unique. Honestly, as I listened to him, although what he was saying was superlative and declarative, it didn’t surprise me. The day he gave that message, it was a view weekend, so there were over a hundred prospective students and their parents visiting. We (TMC students) all know that view weekends are when the visitors see the full face of the college. It isn’t that they see anything fake or false, but all Master’s is and represents and stands for is showcased for the visitors. With that said, MacArthur is stepping into the pulpit in front of prospective customers and with enrollment down, it didn’t surprise any of us that he was saying the things that he was. But that is no excuse for undermining the authority and priority of the local church. I could give plenty of testimony to how God has shaped my life here, but there is no chapter and verse on the necessity to attend Christ’s college. When I have heard leaders around here say that the college has something that the church wishes it had, I have understood that to mean we live alongside the very people who are discipling us and who we are discipling. We don’t just meet two times a week, but day in and day out we see each other live and confront sin and forgive and love and encourage. Although I see those benefits, living around people that are primarily your own age is not exactly what Titus 2 speaks of. By the way, you can access the podcast by just visiting the feed through the web browser, where you can then download individual files without subscribing. So, just plug this feed into a browser: http://www.masters.edu/podcast/chapel/chapel.xml

GP said April 30, 2008 at 3:10 pm: This makes me embarrassed to say where I went to school. I agree with footnote # 4. What you said about husband, dad and shepherd was right.

Dave Crawford said May 1, 2008 at 5:11 pm: “In fact, trying to be a loving husband, a diligent dad, and a faithful shepherd has no parallel in terms of intensity.” No kidding. I truly don’t understand, however, how the book The Extraordinary Mother, which was linked above in the comments, typifies the expression “money talks.” The phrase means a compromise on essential principles for the sake of money.
Since the content is biblical, I don’t see how a spin-off book does this, regardless of whether it was initiated by the author or the publisher.

SKH said May 2, 2008 at 11:02 am: Alright, first of all and to all who have commented thus far, thanks for the feedback and encouragement. Second, I don’t want to jump into The Extraordinary Mother discussion too deep except to say, Dave, it’s okay, we promise not to tell Jen what’s coming for her Mother’s Day gift.

Looking back on my schooling experience, I wish I could have mixed private and public university. I certainly would have loved the depth of Biblical insight and knowledge provided by a school like TMC, but the fight to be “in the world and not of it,” the opportunity to meet many diverse Non-Christians, what it means to battle for truth in a hostile environment, refining critical thinking with Biblical truth – that ‘training’ through the UW and UVa was also excellent. However, Andy and I have often discussed that, if I (or he) had grown up at a church like Grace, we would have been far more equipped for a place like the UW. I didn’t fully understand discipleship, God’s sovereignty – so many things! And I know a good church would have been a more effective equipper than time at a private college. Biblically, is the ideal mode of sanctification through something like a Church environment or a School environment? I agree with others, and SKH, that this is more a charge to bolster the depth and life-on-life elements of the Church than start investing in the Stock Market in the hopes of being able to send all our kids to TMC…

To recap: the discipler instructs his disciple in doctrine, illustrates truth in daily practice, involves the disciple in the work of the ministry, helps the disciple improve his effectiveness, and inspires the disciple when he’s discouraged. These five stages of development span the Biblical Discipleship Bulls-eye from evangelism to edification to equipping. Disciplers labor to help new converts grow in Christ and train them to make disciples in fulfillment of the Great Commission. Maturity and multiplication are beautiful things. As we wrap up this series, here are some final thoughts on disciple-making.

We learn about discipleship from Jesus! Jesus already walked the road ahead of us and all we need to do is follow Him. As I mentioned before, The Master Plan of Evangelism by Robert Coleman traces Jesus’ steps and is must-read material. Jesus called disciples, lived and associated with them, taught them, modeled for them, partnered with them, delegated assignments, did follow-up, and then He left. We are here, not because Jesus filled stadiums with hundreds of thousands of people and preached great messages, but because He focused on twelve ordinary men1. Apparently making disciples like Jesus is effective (not to mention biblical).

Though not complex, discipleship is not easy. In fact, discipleship may be the toughest thing we’ll ever do. It’s so easy to focus on other things. It isn’t always pleasant having other people look into our lives and it’s often messy when we get involved in theirs. But no matter how difficult, making disciples is our Lord’s commission.

Discipleship is all about the people and not about the program. The best curriculum cannot guarantee growth. There are no checklists to complete or shortcuts to maturity. Some structure (like organized small groups) may be helpful, but the best program with the wrong people won’t make disciples.
On the other hand, the right people with the worst program–or even no program at all–will move forward. You are missing out if you just partake and don’t participate. I changed the person of the pronoun on purpose. If you come and soak and don’t give, you won’t grow like you should. Your joy will be half of what it could be if you’re not using your spiritual giftedness and pouring back out into someone else’s life.2 There is always someone who knows less than you. You can encourage someone. You’re ready. Every believer has a responsibility to reach out to someone else and make a disciple. What stage are you in? What will it take to get you to the next level? My prayer is that God would give us all a passion for discipleship, that not just the pastor or the youth staff or parents, but that all of the saints would take ownership. May He give us a vision and burden for others and keep us from sitting on the sidelines. Let us commit to make disciples of all the nations until everyone is complete in Christ.

Yes, this is a shameless reference to John MacArthur’s book by the same name. ↩ Besides, it would probably help you stop whining about your own life. ↩

This is the final stage in the practical discipleship plan of attack. In Stage Five the disciple exits the process as a discipler. The disciple has been taught. He’s watched how it’s done. He’s rolled up his sleeves in the work of the ministry alongside his discipler. He’s received constructive criticism to help him get better. By now the bulk of his training is complete and he’s ready to be on his own. So the fifth TASK of the disciple-maker is to inspire. This is probably my least favorite word, but it fits (for more than just alliteration). The PURPOSE is encouragement. Making disciples is hard work. Difficulties and heart heaviness are regular occurrences. Sometimes disciples need a shot in the arm. The ROLE of the discipler becomes that of a resource. The need for constant interaction diminishes, but the disciple turned discipler may run into something he hasn’t encountered before. Maybe an unusual circumstance or knotty theological question surfaces. Maybe he needs seasoned counsel, wisdom from experience, or just someone to pray for him. But he has access to advice whenever he asks. Therefore the discipler utilizes the MOTTO of “Keep it up.” and is always available for assistance.

The PRINCIPLE is spiritual reproduction, much like the proper goal of parenting. Good parenting isn’t about providing or doing everything for the children. It aims to train kids how to be adults; how to accept and fulfill responsibilities. That doesn’t happen if dad always builds the Soap Box Derby car or never lets his son make a decision. Mom hinders growth by always being the one to braid her daughter’s hair or by constantly defending her. Yes, kids need more care at the beginning and it may be a slow train to maturity. But parents find out whether they were successful when their young person leaves the house, not by them living at home forever. Even then, however, they provide a different kind of attention when the kids are grown and have families of their own. So a discipler knows he’s succeeded when he sees and serves spiritual grandchildren.

Our goal is to see every person complete in Christ. Another way to say it is, we work to see each person independently dependent on Christ. An independent person is one who looks for things that need doing and does them without someone else constantly looking over their shoulder.
A mature disciple doesn’t need constant supervision though every disciple remains dependent on Christ. So a discipleship purpose statement might look something like this: We labor to help every person establish godly habits, motivated by love for Christ, that will cause them to be independently dependent on Christ for the rest of their lives, while helping others do the same.1 The relationship between a disciple and his discipler purposefully changes over time if discipleship is effective. But whether disciples move on to minister near or far, disciplers are always ready resources.

We don’t expect to complete this objective in student ministries, even by the time a senior graduates. But we do aim to equip students as much as possible in the six years we have them and hope they enter the next stage of life more like Christ in character and service than when we got them. ↩

There are always more ways for a disciple to grow no matter how well instructed they are or how many examples they’ve observed or even if they’re heavily involved in the process. That’s what Stage Four is for. By this point in the process the disciple should be busy reaching out to others. He’s been pushed out of the comfort of the nest and is learning to fly on his own. If he’s normal he will suffer through at least a few crashes. So the fourth TASK of a disciple-maker is to help the disciple improve, not only in personal obedience but in ministry. The PURPOSE is to increase their effectiveness. Though no technique exists that guarantees spiritual success, the discipler can give guidance and encouragement even when it appears the disciple flopped. As the disciple ventures out on his own the discipler takes the ROLE of a constructive critic. This evaluation isn’t for the sake of discouragement but for betterment. Maybe an evangelism exchange could have been more accurate or a counseling conversation could have been more gentle. But mistakes and failures are not the doom of discipleship, instead they provide platforms for development. In this stage the MOTTO is “I watch you.” and then help make it better. Again, the Master lived with His disciples, taught them, trained them, modeled for them, sent them out, and then debriefed them. For example, in Mark 6 He sent them out with partners and gave them all the instruction they needed for their short-term assignment. Later they returned to Jesus and told Him all that they had done and taught. This retreat was for rest and no doubt they also discussed their successes, setbacks, and what they could do better next time. The PRINCIPLE is supervision; follow-up for the sake of adjustment, correction, and encouragement. In order to make progress disciples need to make decisions and do the work without always having their hand held. But diligent and regular review will realign and reinforce where necessary. Maturing disciples don’t always need their discipler present. But they do need faithful follow-up in order to move forward with only one more stage to go.

The practical plan of discipleship starts with instruction and includes living illustration. In Stage Three the disciple develops even further toward becoming a discipler. Teaching biblical doctrine and demonstrating how to follow Christ is fundamental to making disciples. But that’s not all we can do. Since we also want our disciple to make disciples of his or her own we must bring them into the process.
So the third TASK of a disciple-maker is to involve the disciple in service and ministry for the PURPOSE of giving them experience. Explaining Scripture and being a Christian example isn’t necessarily the same thing as discipling. It is possible (though not as valuable) to watch someone from a distance and listen to good teaching on the radio. I assume there are probably people watching me who have little to no relationship with me. That’s okay because I can still model obedience for people I don’t know. And I can certainly instruct people without ever talking to them individually. But disciplers get involved. They open the hood, take the engine apart (or put it back together), and get four hands dirty, not just two. The ROLE is more than teacher or example; it is partner. The MOTTO is “We do together.” The discipler says, “I’ve told you about it, you’ve seen me do it, now we’re both going to do it.”

Jesus lived with His disciples for three years. As they matured He increased their responsibilities. Jesus wanted His disciples to work side by side with Him. He assigned them to pass out the loaves and fishes. They listened to Him, watched Him, and worked alongside of Him. The Master’s plan followed the PRINCIPLE of delegation. No doubt there were discipleship purposes, not just logistical advantages, when Paul took young men along on his missionary journeys.1

Practically speaking, Stage Three requires a focus on the few to reach the many. No one has enough time to be involved and be partners with everyone. Jesus Himself didn’t do that. He had 12 key disciples and three of them were even closer than the rest. We cannot experience growth and ministry with everyone. Besides, will we have greater influence by spending 60 minutes with one person or one minute with 60 people? How will we maximize our investment? The more time and energy we pour into a small number of disciples (maybe only one at the beginning), the earlier they’ll be ready to pour into others, multiplying our ministry.

Working shoulder to shoulder exposes not only the disciples’ weaknesses and shortcomings, but ours too. Sometimes we can hide certain elements of our example. But we can’t work together very long before our partner realizes what we’re good at and what we’re not good at. It takes humility to involve someone else in our lives and in our ministry, but it is a necessary part of the development process. And it’s good for them to see our deficiencies because it isn’t about our perfection, it’s about participation.

Discipleship Evangelism utilizes the same procedure. At the start, verses and the evangelism outline must be memorized. Then there are visits where the trainer does all the talking as an example. At a certain stage, the trainer involves the trainee in the discussion. Eventually the trainee is expected to do all the talking and the trainer is just a resource. But that’s an upcoming stage. ↩

Making disciples requires instruction, but verbal communication isn’t the end of the process. Now we come to Stage Two. Teaching others the truth is crucial. So is practicing it in front of them. Therefore our second TASK is to illustrate; to put instruction on display. The PURPOSE is exposure to the difficulties and delights of being a disciple. Our Lord left us an example in order for us to follow in His steps. Likewise, we are to live as examples for our disciples to watch. A master trains his apprentice both by telling him what to do and by showing him how to do it. We take the same hands-on, eyes-on approach.
Therefore in Stage Two the ROLE of the disciple-maker is that of a model. Our MOTTO is “You watch me.” At least two benefits come from disciples seeing their discipler’s personal obedience. First, they see how it’s done. But second, the teacher establishes credibility and underscores the believability of the truth. Expecting others to do what we won’t or don’t do undermines integrity. On the other hand, living out the truth corroborates our knowledge and love of the truth. People pay attention when we practice what we preach.

This presumes the “life on life” precept. We cannot make disciples remotely; it requires a relationship. We cannot effectively model–or watch for that matter–from far away. Living rooms and waiting rooms supplement classrooms. Yes, truth can be taught in a living room. Yes, some life on life occurs in a classroom. But this component of training looks at a discipler’s lifestyle at work and play. We must spend a quantity of quality time or else our disciples will be ill-prepared. We’re all busy, but Stage Two must be intentionally included at every opportunity. Dinner time isn’t sufficient for diligent parenting. Kids need car rides and late night conversations. Part-time shepherds put the sheep at risk. So discipleship is the product of many moments, but it is never momentary.1

While Christ’s substitutionary atonement is the primary purpose of the incarnation, His life on life discipleship was part of the reason as well. God could have dropped a copy of His Word from the sky instead of sending His Son to earth for so long. Jesus called His disciples to follow Him and to be with Him. They watched Him in public and in private. They saw Him spend nights in prayer, respond to religious authorities, care for little children, teach the masses, heal the sick, and do all sorts of miracles. They observed Him when He was tired, hungry, interrupted, angry, and sorrowful. As the time of His crucifixion came closer He focused more personal attention on His disciples, not less. This stage of discipleship is hardly flashy, not easily evaluated, and often unappreciated. But it is relevant, effective, and as we’ve seen, it was the Master’s plan.

For those who want to grow, listen to good teaching and find a good follower of Christ. Get in their back pocket. Make yourself available to serve them and hang out with them as much as possible. Watch how they respond to everything. Don’t isolate yourself from those who are further down the discipleship road than you. Christ is life, not class.3 Examples without teaching are useless, because no one knows what the example is for. Of course, instruction without personal illustration won’t have the same influence. Truth must be proclaimed, believed, and practiced to make disciples.

Herein is the reason for every retreat we run, why we drive 20 hours to and from the Shepherds’ Conference and Preview Weekend at The Master’s College, why we have small groups, and why we work to schedule life “path crossings” like running errands, drinking coffee, or scraping gum off the gym floor: to be together. ↩ I know some people are uncomfortable with the arrogance of asking another person to imitate us. Instead, they say, we should tell everyone just to follow Jesus. That’s fine as far as it goes, but exposing our lives and letting others see we’re sinners gives us an opportunity to repent and show how that works too. ↩ By this I do not mean the same thing as those who insist “Christ is life, not doctrine.” That’s bologna.
I went out of my way to say discipleship depends on doctrine in Stage One. I simply mean that formal, corporate learning is only one slice of the discipleship pie, not the whole. ↩

Each stage in our practical plan of attack includes the Task, the Purpose, the Role, the Motto, and the Principle (as the table below shows). In Stage One we insert a disciple into the very beginning of the process. To make disciples we start by proclaiming good news, specifically the gospel of Christ as revealed in Scripture. Our first TASK is to instruct and our PURPOSE to educate. Faith comes by hearing and hearing by the word of Christ. Therefore, Christianity requires communicated truth and discipleship depends on properly understanding doctrines of theology rooted in God’s Word. We received a message from our Lord. Our responsibility is to pass that message on to another person and the next group and the following generation so that they will do the same. Disciples aren’t made if the baton of truth is dropped anywhere along the way. The apostle Paul explained that all believers–those who are no longer slaves of sin–have been committed to the standard of teaching. Disciples are delivered into a form of truth, into principles and teaching that mold their lives. Christians are those shaped more by doctrine than by sin. Jesus modeled this better than anyone. He regularly preached in front of large crowds and instructed His disciples in private. Whether by sermons or conversations, teaching was at the heart of our Lord’s disciple-making plan. And every Christian can follow His example. The teacher typically knows more than his student. Most of the time the educator is also the elder, that is, they are older. Titus 2 describes a pattern of the older teaching the younger and more maturity brings more responsibility to disciple. But anyone who knows more truth than someone else can and should participate. You can always find someone who knows (at least a little) less than you do. Just because you’re learning from someone doesn’t mean you can’t also be passing that on to someone else. This Stage incorporates a few PRINCIPLES from The Master Plan of Evangelism such as Selection (of faithful men just as Jesus chose His disciples), Association (being with people just as Jesus appointed disciples to be with Him), and Impartation (giving what has been received to others). Disciples never move beyond the need for instruction. Though Stage One could be done independently of the others (resulting in delayed growth and therefore a defective plan), the other stages depend on teaching for effectiveness.

Two years ago today my dad died. We had less and less in common after I answered the call to pastoral ministry but I still miss talking to him. There were so many things over the last year I wanted to share with him. I think that’s because for all I learned from him and everything I prayed for him, most of all I really liked him. More than a few things have kept him on my mind recently, most of which relate to Calvin. One of my greatest disappointments is that my father never met my son. They lived together on the planet for almost four and a half months, but were separated by three time zones, dad was too sick for travel, and our scheduled visit in June wasn’t soon enough. Just like my son, though, I never met my dad’s father. There’s no doubt my dad would not have entirely appreciated Calvin’s thundering (“shake the gates of hell” kind of) ambition, yet there is much he would have liked.
They could have watched ball together all day. The specific sport doesn’t matter so long as a ball’s involved: baseball, football, basketball, golf. All three of us love the game like our fathers. The sons love the yard like their dads too. My dad got me started as early as four months. Calvin already has his own John Deere. There’s also the injuries. When I was 14 I wrecked my bike pretty bad. When my dad saw the wounds he told me he didn’t remember being a “human scab” when he was a teenager and that if I wanted to see 15 I should probably slow down some. My son can’t even ride someone else’s knee without getting black eyes and big scabs. I guess like father, like son. And the other night at dinner I realized both my father and son are fascinated with belly-buttons. My dad enjoyed looking at his, keeping it free from lint, and talking to other people about theirs. So far Calvin follows his grandfather’s preoccupation; however, this trait apparently skipped a generation. My dad wanted better for his son; so do I, though my hopes concern spiritual things more than earthly ones. My dad was too often cranky or even angry; so are his son and grandson. But for all the similarities (and differences), and though in God’s providence it didn’t work out, it would have been nice to get together. I know we would have liked each other.
66,081,592
Farmworkers fight back. Bernie Sanders announces bid and more… Today on Flashpoints: Farmworkers fight back against the deadly dangers of pesticides in our foods and on the workers who pick them. Also, Bernie Sanders formally announces his bid for the presidency. We’ll have a special report on his alternative vision for labor. And our education series, The Battle for Public Education In the 21st Century, continues. Today’s episode: “Breaking Through The Poverty Of Imagination”
66,081,787
Transocean Deepwater Inc., an oil drilling company, formally pleaded guilty on Thursday to a misdemeanor charge and will pay $400 million in criminal penalties, the latest action in the 2010 Gulf oil spill. U.S. District Judge Jane Triche Milazzo in New Orleans accepted the guilty plea to violating the Clean Water Act and imposed sentence, the Justice Department announced Thursday. Transocean agreed last month to plead guilty to the misdemeanor charge and to pay $1 billion in civil penalties along with the criminal penalty. Another judge will decide whether to accept the civil penalty portion. The penalties totaling $1.4 billion represent the second-largest recovery in an environmental case, following the $4-billion criminal sentence imposed on BP Exploration and Production Inc. in connection with the same oil spill, the Justice Department said. Most of the $1.4 billion will fund environmental-restoration projects and spill-prevention research and training. “Transocean’s guilty plea and sentencing are the latest steps in the department’s ongoing efforts to seek justice on behalf of the victims of the Deepwater Horizon disaster,” said Atty. Gen. Eric Holder in a prepared statement. “Most of the $400-million criminal recovery -- one of the largest for an environmental crime in U.S. history -- will go toward protecting, restoring and rebuilding the Gulf Coast region.” Transocean owned the drilling rig, Deepwater Horizon, which exploded on April 20, 2010, and sank over BP’s Macondo well. Eleven workers were killed. The explosion also led to the nation’s worst environmental disaster. The well spilled an estimated 4.9 million barrels in the Gulf before it was capped July 15, 2010. The well was declared sealed two months after that. “The Deepwater Horizon explosion was a senseless tragedy that could have been avoided,” said Assistant Atty. Gen. Lanny A. Breuer of the Justice Department’s Criminal Division. “Eleven men died, and the Gulf’s waters, shorelines, communities and economies suffered enormous damage. With today’s guilty plea, BP and Transocean have now both been held criminally accountable for their roles in this disaster.” Milazzo said she had received no letters objecting to the Transocean settlement. Last November, BP and the Justice Department agreed to a settlement in which BP will pay a record $4 billion in criminal penalties. The company also entered a guilty plea to manslaughter and other criminal charges related to the spill. That agreement was approved in court last month. Still pending is the civil phase against BP. ALSO: 53 shark attacks recorded in U.S. last year, most in a decade SEAL who killed Bin Laden met with lawmakers to talk veteran care New York Mayor Michael Bloomberg calls for Styrofoam container ban
66,081,900
2 wounded in fatal Cincinnati shooting are back home Ohio News Sep 13, 2018 By DAN SEWELL, Associated Press CINCINNATI (AP) — A woman who survived at least 12 gunshots in a downtown Cincinnati attack was back home Wednesday and helped get her two children ready for school, kissing them as she sent them off, a spokeswoman said. Whitney Austin, 37, was discharged Tuesday evening from UC Medical Center, five days after a man opened fire inside the Fifth Third Bancorp headquarters, killing three people and wounding two. Austin, a Fifth Third vice president, got home to Louisville, Kentucky, in time to put her children to bed, according to Fifth Third spokeswoman Laura Trujillo, who quoted Austin as saying she was grateful to be home. “I got to see my motivation for living,” Austin said in a statement. “I’m thankful to be alive, for all the good wishes for everyone who helped.” She said she has been reading and learning more about the three people killed and urged people to help their families any way they can. She was initially in critical condition and faces what her husband calls “a long road” in recovering physically and mentally. Brian Sarver, 45, a Fifth Third contractor, was released Monday. The Lebanon, Ohio, resident offered his thanks to God, prayers for other victims and families, and thanks for all the expressions of support in a statement Tuesday. “I look forward to getting back on my feet and return to work as soon as practical,” Sarver said. “Thank you, again, to everyone who has treated me and my family with such kindness and affection.” Police are still trying to determine why 29-year-old Omar Enrique Santa Perez opened fire Sept. 6 inside the bank building. Officers responded quickly and killed him in a hail of gunfire. ___ Follow Dan Sewell at http://www.twitter.com/dansewell
66,081,937
Blur Unveils New Song “There Are Too Many of Us” From New Album Blur is thankfully back in vision and will soon snap The Magic Whip, their first new album in 12 years. The band previously released the rocker Go Out from the album, but today they’ve unveiled the preferable There Are Too Many of Us. Check out the relatively straightforward over-population song below. In a word: propulsion. Great sounds from one of the great bands.
66,082,075
Homocysteine-lowering exercise effect is greater in hyperhomocysteinemic people living with HIV: a randomized clinical trial. Elevated concentration of homocysteine has been identified as an independent risk factor for the development of cardiovascular disease and is frequently associated with oxidative stress. Moreover, studies have shown that people living with human immunodeficiency virus (PLHIV) present elevated concentration of homocysteine and oxidative stress compared with people without HIV. Our purpose was to describe blood homocysteine and oxidative stress markers in PLHIV and those without HIV infection, and to examine the effects of a 16-week combined training exercise program (CTE) on oxidative stress and homocysteine concentrations of PLHIV. We included 49 PLHIV (21 men, 28 women) and 33 people without HIV infection (13 men, 20 women). After baseline evaluations, 30 PLHIV were randomized to either CTE (trained group, n = 18) or the control group (n = 12); CTE consisted of aerobic and strength exercise sessions during 16 weeks, 3 times a week. Plasma homocysteine, oxidative damage markers, folate, and vitamin B12 were assessed pre- and post-training and by hyperhomocysteinemia (homocysteine ≥ 15 μmol/L) status. At baseline, PLHIV had higher levels of homocysteine and malondialdehyde, as well as reduced circulating folate when compared with people without HIV infection. CTE resulted in a 32% reduction (p < 0.05) in homocysteine concentration and a reduction in lipid hydroperoxide in PLHIV with hyperhomocysteinemia, which was not observed in those without hyperhomocysteinemia. Hyperhomocysteinemic participants experienced a 5.6 ± 3.2 μmol/L reduction in homocysteine after CTE. In summary, 16 weeks of CTE was able to decrease elevated homocysteine concentration and enhance redox balance of PLHIV with hyperhomocysteinemia, which could improve their cardiovascular risk.
66,082,086
President Donald Trump praised Chinese business leaders during his state visit to China, pointing out frankly that he didn’t blame them for the massive trade deficit with the United States. “I don’t blame China,” Trump said, pointing to estimates that the trade deficit with the United States was as high as $500 billion. “After all, who can blame a country for being able to take advantage of another country for the benefit of its citizens? I give China great credit.” His remarks drew applause from the business leaders in the room. Trump blamed previous presidents of the United States for allowing the lopsided trade relationship with China. “I do blame past administrations for allowing this out-of-control trade deficit to take place and to grow,” he said. “We have to fix this because it just doesn’t work for our great American companies, and it doesn’t work for our great American workers. It is just not sustainable.” Trump said that he had a great relationship with Chinese President Xi Jinping, and would continue to negotiate on trade. “There’s a very good chemistry between the two of us, believe me,” he said. Trump repeated his remarks during a media appearance with President Xi prior to a meeting with the Chinese president, pointing out that the United States had a responsibility to change its trade policies. “[I]t’s too bad that past administrations allowed it to get so far out of kilter,” he said. “But we’ll make it fair, and it will be tremendous for both of us.” Trump, however, did not blame the United States for the Chinese theft of intellectual property, telling business leaders that it was costing the United States at least $300 billion a year. “My administration is committed to improving our trade and business relationships with China,” he said. “And this relationship is something which we are working very hard to make a fair and reciprocal one.” The White House said that they negotiated $250 billion in business deals on energy, transportation, technology, and agricultural products while in China, according to Bloomberg News.
66,082,165
Q: Assign result of `log10(2)` to a constant

I want to assign the result of log10(2) to a constant. I did

    const float f = log10(2);

And the compiler tells me that "initializer element is not a constant expression". I also defined a new function

    const float Log10(float f) { return (const float)log10(f); }

But the compiler is complaining (why wouldn't it? I'm still calling the log10 function) that "type qualifiers are ignored on function's return type". Does that mean there are no functions which can return a constant? Then how can I do what I want to?

EDIT: As some people have doubts, I included the math.h header file and linked it with -lm, but I'm using the -pedantic option in gcc, and it does not accept it.

A: Assuming that f is declared at global level: unlike C++, C does not permit runtime expressions to be used when initializing global variables. All expressions must be computable at compile time. Therefore

    const float f = log10(2);

is not valid C, while

    const float f = 0.30102999566;

is valid. From C Reference: When initializing an object of static or thread-local storage duration, every expression in the initializer must be a constant expression or string literal.

A: This will work

    #include <stdio.h>
    #include <math.h>

    int main() {
        const float f = log10(2);
        printf("%f\n", f);
    }

But this will not work

    #include <stdio.h>
    #include <math.h>

    const float f = log10(2);

    int main() {
        printf("%f\n", f);
    }

because you cannot initialise a global variable from a function return value. Note too that the compiler warns about mixing float with double. Never use float unless there are very good reasons why you cannot use double.
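A further option, sketched here as a common workaround rather than the only answer: precompute the value once and let the preprocessor supply it, since a plain floating literal is a valid initializer for an object with static storage duration, even at file scope. Using double also follows the advice in the second answer.

    #include <stdio.h>

    /* Precomputed value of log10(2); the macro expands to a literal,
       which is a valid constant initializer even at file scope. */
    #define LOG10_2 0.301029995663981195

    static const double f = LOG10_2;

    int main(void) {
        printf("%f\n", f);
        return 0;
    }

The downside is that the constant and the expression it came from can drift apart, so a comment noting the origin of the number is worth keeping.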
66,082,184
****Spoiler warning!**** Making a Murderer tells the real-life story of Steven Avery, a man who was imprisoned for 18 years for a crime he didn’t commit, only then to be accused of another more serious crime upon release - by a county sheriff’s department who were in the process of being sued by Mr Avery (for $36 million) for the previous wrongful conviction. Whether or not viewers found him guilty of the second crime, they were united in accepting that there was nothing ‘fair’ or ‘just’ about his trial. The show has gripped audiences all over the world, with its apparent tale of injustice, accusations of corrupt officials, and the devastating impact the case has had on the local community. People were outraged at the seemingly unjust conviction. Jarett Wiselman wrote in Buzzfeed that “there is a fundamental inequity at work in countless branches of [the US] legal system.” This was followed by the inevitable backlash, as online sleuths dug deeper into the case, finding evidence the documentary omitted, while figures from around the case were sought and interviewed by a hungry press hunting for a new angle. It would be impossible to produce a documentary that satisfied everyone. As Bronwen Dickey wrote in Slate (The Emotional Manipulations of Making a Murderer), when editing 700 hours of material into a 10-hour narrative, the viewer only sees 1.4% of the footage, so the task of making it truly representative is always going to be a challenge. The show may not have got across every aspect of a complicated trial, but does it matter? Regardless of what the show included or did not include, it clearly showed the importance of a fair trial in criminal proceedings. We’ve written about the different aspects of the right to a fair trial, but this case showed beyond doubt the importance of access to a lawyer, which was highlighted in more than one episode. The show illustrated the problems Mr Avery had in finding a lawyer without money, with the observation in episode 3 that ‘poor people always lose’. Questions were also raised around the lawyer of Brendan Dassey, Steven’s 16-year-old nephew. Perhaps the most shocking aspect of the series is the interrogation that Brendan underwent without either a responsible adult or a lawyer. Fair Trials has worked extensively in the EU on the protection of vulnerable suspects. The show also highlighted the presumption of innocence — and we saw how this can be impacted upon by any number of things. The press coverage of the case was extensive, and from the documentary it certainly wasn’t balanced. The filmmakers also showed a number of occasions when Steven Avery was paraded before the press, wearing handcuffs and prison clothes, which brought into question whether he was really being treated as innocent until proven guilty. How can a jury maintain a fair presumption of innocence when the press and prosecutors have already publicly denounced the suspect as guilty? A fundamental element of the Right to a Fair Trial is that every person should be presumed innocent unless and until proved guilty following a fair trial. This certainly could not be said to be the case here. The Right to a Fair Trial means that people can be sure that processes will be fair and certain. It prevents governments from abusing their powers. A Fair Trial is the best means of separating the guilty from the innocent and protecting against injustice. Without this right, the rule of law and public faith in the justice system collapse.
The Right to a Fair Trial is one of the cornerstones of a just society. Whether Steven Avery and his nephew Brendan Dassey were guilty or not, they deserved a fair trial. Fair trial rights start not when a person first steps into court, but when they are first accused. We might not have seen the complete story through the course of the show, but we saw enough to know that their defence rights were not protected, and that leaves both sides of the argument unsatisfied. For those who think them not guilty, the abuse of those rights led to their imprisonment. For those who consider them guilty, a questionable conviction brings much less comfort than it should, undermining all involved in the prosecution. Either way, the right to a fair trial was not upheld in this case, and that raises stark questions about the justice system in the US.
66,082,215
Impact of atrial fibrillation among stroke patients in a Malaysian teaching hospital. Atrial fibrillation (AF) is a well-recognised, major risk factor for ischaemic stroke. The presence of atrial fibrillation in a stroke patient translates into higher mortality rates and significant disability. There is a lack of data on the impact of atrial fibrillation on stroke patients in Malaysia. The aim of this study was to determine the prevalence of AF in a hospital setting and to determine the risk factors, clinical profile and discharge outcomes of ischaemic stroke patients with and without atrial fibrillation at a tertiary centre in Malaysia. This was a retrospective review of patients admitted consecutively to the University Malaya Medical Centre, Kuala Lumpur with a diagnosis of stroke during the first six months of 2009. The presence of AF was confirmed with a 12-lead ECG. All patients had neuroimaging with either cranial computed tomography (CT) or magnetic resonance imaging (MRI). Other variables such as clinical features, risk factors, stroke subtypes, length of acute ward stay, complications and evaluation at discharge (mortality and modified Rankin scale, mRS) were also recorded. A total of 207 patients were admitted with stroke during the study period. Twenty-two patients (10.6%) were found to have non-valvular AF. Patients with AF were older (mean age 71.0 ± 2.2 years) than those without AF (mean age 63.6 ± 0.89 years) (p<0.05). Risk factors for stroke such as diabetes mellitus and hypertension were equally common between the two groups, while the proportion of patients with ischaemic heart disease was higher among patients with AF (p<0.005). Most strokes in the cohort were of ischaemic type (n=192; 92.8%), while haemorrhagic stroke was uncommon (n=15; 7.2%). Patients with AF had a longer median hospital stay, a higher mortality rate and greater functional disability on hospital discharge compared to non-AF patients. The prevalence of AF among stroke patients at a tertiary centre in Malaysia was 10.6%. Stroke patients with AF were observed to have a higher mortality rate and disability on hospital discharge.
66,082,356
Maritime Greenwich The ensemble of buildings at Greenwich, an outlying district of London, and the park in which they are set, symbolize English artistic and scientific endeavour in the 17th and 18th centuries. The Queen's House (by Inigo Jones) was the first Palladian building in England, while the complex that was until recently the Royal Naval College was designed by Christopher Wren. The park, laid out on the basis of an original design by André Le Nôtre, contains the Old Royal Observatory, the work of Wren and the scientist Robert Hooke. Outstanding Universal Value Brief synthesis Symmetrically arranged alongside the River Thames, the ensemble of the 17th century Queen's House, part of the last Royal Palace at Greenwich, the palatial Baroque complex of the Royal Hospital for Seamen, and the Royal Observatory founded in 1675 and surrounded by the Royal Park laid out in the 1660s by André Le Nôtre, reflects two centuries of Royal patronage and represents a high point of the work of the architects Inigo Jones (1573-1652) and Christopher Wren (1632-1723), and more widely European architecture at an important stage in its evolution. It also symbolises English artistic and scientific endeavour in the 17th and 18th centuries. Greenwich town, which grew up at the gates of the Royal Palace, provides, with its villas and formal stuccoed terraces set around St Alphege's church rebuilt to Hawksmoor's designs in 1712-14, a setting and approach for the main ensemble. Inigo Jones' Queen's House was the first Palladian building in Britain, and also the direct inspiration for classical houses and villas all over the country in the two centuries after it was built. The Royal Hospital, laid out to a master plan developed by Christopher Wren in the late 17th century and built over many decades by him and other leading architects, including Nicholas Hawksmoor, is among the most outstanding group of Baroque buildings in England. The Royal Park is a masterpiece of the application of symmetrical landscape design to irregular terrain by André Le Nôtre. It is well loved and used by residents as well as visitors to the Observatory, Old Royal Naval College and the Maritime Museum. The Royal Observatory's astronomical work, particularly of the scientist Robert Hooke, and John Flamsteed, the first Astronomer Royal, permitted the accurate measurement of the earth's movement and also contributed to the development of global navigation. The Observatory is now the base-line for the world's time zone system and for the measurement of longitude around the globe. Criterion (i): The public and private buildings and the Royal Park at Greenwich form an exceptional ensemble that bears witness to human artistic and creative endeavour of the highest quality. Criterion (ii): Maritime Greenwich bears witness to European architecture at an important stage of its evolution, exemplified by the work of great architects such as Inigo Jones and Christopher Wren who, inspired by developments on the continent of Europe, each shaped the architectural development of subsequent generations, while the Park exemplifies the interaction of people and nature over two centuries. Criterion (iv): The Palace, Royal Naval College and Royal Park demonstrate the power, patronage and influence of the Crown in the 17th and 18th centuries and its illustration through the ability to plan and integrate culture and nature into a harmonious whole.
Criterion (vi): Greenwich is associated with outstanding architectural and artistic achievements as well as with scientific endeavour of the highest quality through the development of navigation and astronomy at the Royal Observatory, leading to the establishment of the Greenwich Meridian and Greenwich Mean Time as world standards. Integrity The boundary of the property encompasses the Old Royal Naval College, the Queen's House, Observatory, the Royal Park and buildings which fringe it, and the town centre buildings that form the approach to the formal ensemble. All the attributes of Outstanding Universal Value are included within the boundary of the property. The main threats facing the property are from development pressures within the town that could impact adversely on its urban grain and from tall buildings, in the setting, which may have the potential to impact adversely on its visual integrity. Authenticity The ensemble of buildings and landscapes that comprise the property preserve a remarkably high degree of authenticity. The Old Royal Naval College complex, in particular the Painted Hall and Chapel, retains well its original form, design and materials. The Royal Observatory retains its original machinery and its associations with astronomical work. The management of the Old Royal Naval College as a single entity now allows for coordinated conservation of the buildings and surrounding spaces. The Observatory, Queen's House and its associated high-quality 19th century buildings are all managed as elements of the National Maritime Museum. The landscape of the Royal Park retains its planned form and design to a degree with some ancient trees still surviving. The stuccoed slate roofed terraces of the town that form the approach to the formal buildings and the Park retain their function as a commercial and residential centre. The coherence and conservation of buildings within the town is good, although there is a need for some refurbishment and to repair the urban pattern within the property, where it was disrupted by World War II bombing and subsequent reinstatement. Protection and management requirements The UK Government protects World Heritage properties in England in two ways. Firstly, individual buildings, monuments, gardens and landscapes are designated under the Planning (Listed Buildings and Conservation Areas) Act 1990 and the 1979 Ancient Monuments and Archaeological Areas Act and secondly through the UK Spatial Planning system under the provisions of the Town and Country Planning Acts. Government guidance on protecting the Historic Environment and World Heritage is set out in the National Planning Policy Framework and Circular 07/09. Policies to protect, promote, conserve and enhance World Heritage properties, their settings and buffer zones can be found in statutory planning documents. The Mayor's London Plan provides a strategic social, economic, transport and environmental framework for London and its future development over a period of 20-25 years and is reviewed regularly. It contains policies to protect and enhance the historic environment including World Heritage properties. Further guidance is set out in London's World Heritage Sites – Guidance on Setting and The London View Management Framework Supplementary Planning Guidance which protects important designated views, some of which focus on the property.
The London Borough of Greenwich Unitary Development Plan (UDP) contains guidance to protect and promote the Maritime Greenwich World Heritage property which have been saved and will remain in place until the UDP is replaced by the emerging Local Development Framework (LDF). There are also policies to protect the setting of the World Heritage property included in the current statutory plans for the neighbouring London Boroughs of Lewisham and Tower Hamlets. The property is protected by a variety of statutory designations: the hospital, Queen's House and observatory buildings are Grade 1 listed buildings; statues, railings and other buildings are of all grades; and the surrounding residential buildings of Greenwich town centre lie within a Conservation Area. There are a number of scheduled monuments in the Park which is itself a Grade 1 registered park and garden, and elements of the park are considered important for nature conservation. The Royal Park is owned, managed and administered by The Royal Parks, a Crown agency. The Queen's House and associated 19th-century buildings and the Royal Observatory are in the custodianship of the Trustees of the National Maritime Museum. The Old Royal Naval College is in the freehold of Greenwich Hospital, which remains a Crown Naval charity. The buildings are leased to the Greenwich Foundation for the Old Royal Naval College, also a registered charity whose objectives are to conserve, maintain and interpret the buildings for the public. The Royal Courts are leased to Greenwich University and Trinity Laban Conservatoire of Music and Dance to form the Maritime Greenwich University Campus. Greenwich Foundation also retains and maintains a number of key buildings. Commercial activities in the town centre are coordinated by a town centre manager. The management of the property is guided by a Management Plan approved by all the key partners which is regularly reviewed. A World Heritage Coordinator is responsible for development and implementation of the Management Plan and overall coordination for the whole property; this post reports to a World Heritage Executive Committee made up of key owners and managers within the property. A World Heritage Site Steering Group made up of key local stakeholders and national organisations monitors implementation of the Management Plan. The history, value and significance of the property is now explained to visitors through Discover Greenwich, a recently opened state-of-the-art visitor centre which helps orientate visitors before entering the property. The Royal Park, like any designed landscape evolving over time, is vulnerable to erosion of detail and its maintenance and conservation form part of a detailed plan that sets out the design history of the Royal Park, and the rationale for its ongoing maintenance and future restoration of the historic landscape, in particular, the way in which avenues and trees are managed and re-planted. A number of high-profile annual events are held within the Royal Park, some of which have several millions of spectators worldwide. For all events, appropriate safeguards are put in place to ensure there is no adverse impact on the attributes of Outstanding Universal Value, in particular on the Royal Park trees, on underground archaeology or on the surrounding buildings. The events generate worldwide interest in, and publicity for, the World Heritage property.
Long Description Maritime Greenwich is a unique ensemble of buildings and landscape of exceptional artistic value, the work of a number of outstanding architects and designers. At the same time, it is of considerable scientific significance by virtue of its contributions to astronomy and to navigation. The public and private buildings and the Royal Park at Greenwich form an exceptional ensemble bearing witness to human artistic and scientific endeavour of the highest quality, to European architecture at an important stage of British design evolution, and to the creation of a landscape that integrates nature and culture in a harmonious whole. Prehistoric burial mounds and a large Roman villa (1st-4th centuries AD) have been discovered within the World Heritage site. In the 8th century it was owned by Ethelrada, niece of Alfred the Great. In the 15th century the estate was the property of Duke Humphrey, uncle of Henry VI. The king and his wife, Margaret of Anjou, built the Palace of Placentia, where the Tudor monarchs Henry VIII, Mary I and Elizabeth were all born. James I of England and VI of Scotland settled the palace upon his wife, Anne of Denmark, who in 1616 commissioned the building of the Queen's House from Inigo Jones, Surveyor of the King's Works. During the Interregnum, Parliament used the palace as a biscuit factory, and also kept Dutch prisoners there. Charles II commissioned André Le Nôtre to design the park, as well as a new palace from John Webb. In 1675 Christopher Wren and Robert Hooke designed and built the turreted Royal Observatory on the bluff overlooking the old palace for John Flamsteed, first English Astronomer Royal. In 1884 the Greenwich Meridian and Greenwich Mean Time were adopted as world standards for measuring space and time. Although the departure of the royal court and the rise of dockyard-related industries robbed the town of its fashionable character, it remained prosperous, favoured in particular by sea captains, naval officers, and merchants. Its earlier timber-framed houses were gradually replaced during the 18th and 19th centuries by two- and three-storeyed brick terraces. The focus of the Greenwich ensemble is the Queen's House, the work of Inigo Jones and the first true Renaissance building in Britain, a striking departure from the architectural forms that preceded it. It was inspired by Italian style, and it was in its turn to be the direct inspiration for classical houses and villas all over Britain in the two centuries that followed its construction. Since 1937 the Queen's House and its associated buildings have housed the National Maritime Museum. The Royal Naval College, the most outstanding group of Baroque buildings in Britain, is also the most complex of Christopher Wren's architectural projects. The four main components, aligned on the Queen's House, are arranged symmetrically alongside the Thames. Trafalgar Quarters, a colonnaded brick structure, was built in 1813 as living accommodation for the officers of the Royal Hospital. The complex now houses the University of Greenwich. Greenwich Royal Park is formal in plan, arranged symmetrically on either side of its main north-south axis, which is aligned on the Queen's House. The Old Royal Observatory is sited on the brow of Greenwich Hill and dominates the landscape. Above is an octagonal room which was used by the Royal Society for meetings and dinners. This is surmounted by the famous time-ball, which indicates Greenwich Mean Time daily at 13.00.
Adjacent is the former New Physical Observatory (1890-99), which is cruciform in plan and crowned by a terracotta dome. The area also includes a number of handsome private houses of the 17th-19th centuries: Vanbrugh Castle, the home of Sir John Vanbrugh, the architect of Blenheim Palace; the Ranger's House, built in 1700-20; and the Trafalgar Tavern, an elegant building in Regency style, fronting on the Thames. St Alfege's Church is one of the outstanding works of Nicholas Hawksmoor, built in 1711-14 to replace a collapsed medieval structure. There is also the Cutty Sark, a tea-clipper built in 1869 and at that time the fastest ship in the world. The vessel is berthed in a special dry-dock and maintained as a museum. Source: UNESCO/CLT/WHC Historical Description Greenwich has been favoured by humankind since the Bronze Age at least, as demonstrated by the burial mounds and the large 1st-4th century AD Roman villa that have been discovered in the modern Park. It has long associations with royalty. In the 8th century it was owned by Ethelrada, niece of Alfred the Great. In the 15th century the estate was the property of Duke Humphrey, uncle of Henry VI, and it was first developed as a royal residence when that king and his wife, Margaret of Anjou, built the Palace of Placentia, where the Tudor monarchs Henry VIII, Mary I, and Elizabeth were all born. James I of England and VI of Scotland settled the palace upon his wife, Anne of Denmark, who in 1616 commissioned the building of the Queen's House from Inigo Jones, Surveyor of the King's Works. The project was suspended when the queen's health failed the following year (she died in 1618), but Jones resumed work for Henrietta Maria, wife of Charles I, around 1630. It was completed just before the outbreak of the Civil War in 1640. During the Interregnum, Parliament used the palace as a biscuit factory, and also kept Dutch prisoners there, so it was in a sadly deteriorated condition when the monarchy was restored. Charles II commissioned André Le Nôtre to design the park (although the eventual layout probably owes more to Sir William Boreman). He also commissioned a new palace from John Webb. Part of Placentia was demolished in 1664 to make way for a wing of the new palace. With the accession of William III and Mary II as joint monarchs in 1688 the days of Greenwich as a royal residence ended, because its situation was inimical to the king's asthma. However, in 1692 the queen ordered that building of the palace should continue, but in a new form, as a hospital for retired seamen. The master plan was devised by Sir Christopher Wren, assisted by his pupil Nicholas Hawksmoor. The complex took many years to complete, and was to involve the services of other leading architects, including Colen Campbell, Thomas Ripley, James "Athenian" Stuart, and John Yenn. In 1807 the Queen's House became a school for young seamen, with the addition of long colonnades and wings, the work of Daniel Asher Alexander. During the 17th century study of the role of astronomy in navigation developed rapidly, and in 1675 Wren and the scientist Robert Hooke designed and built the turreted Royal Observatory on the bluff overlooking the old palace for John Flamsteed, the first English Astronomer Royal. Greenwich established its pre-eminence in this field and it was here that in 1884 the Greenwich Meridian and Greenwich Mean Time were adopted as world standards for the measurement of space and time.
In the 18th century the little town of Greenwich attracted aristocrats and merchants, who built villas there, some of which survive (the most important is probably the Ranger's House). Although the departure of the royal court and the rise of dockyard-related industries robbed the town of its fashionable character, it remained prosperous, favoured in particular by sea captains, naval officers, and merchants. Its earlier timber-framed houses were gradually replaced during the 18th and 19th centuries by two- and three-storeyed brick terraces. Since 1937 the Queen's House and its associated buildings have housed the National Maritime Museum. The Royal Naval College has been located in the former Royal Naval Hospital since 1873. It will be vacating the buildings during 1997; at the time of writing this evaluation the future tenants have not been decided, but there are strong indications that the buildings will be shared by the Museum and the new University of Greenwich.
66,082,418
Recently we shared the techniques we used to save more than a million dollars annually on our AWS bill. While we went into detail about the various problems and solutions, the most common question we heard was: "I know I'm spending a ton on AWS, but how do I actually break that into understandable pieces?" At face value, this sounds like a fairly straightforward problem. You can easily split your spend by AWS service per month and call it a day. Ten thousand dollars of EC2, one thousand to S3, five hundred dollars to network traffic, etc. But what's still missing is a synthesis of which products and engineering teams are dominating your costs. Then, add in the fact that you may have hundreds of instances and millions of containers that come and go. Soon, what started as a simple analysis problem has become unimaginably complex. In this follow-up post, we'd like to share details on the toolkit we used. Our hope is to offer up a few ideas to help you analyze your AWS spend, no matter whether you're running only a handful of instances or tens of thousands.

Grouping by 'product areas'

If you're operating AWS at scale, it's likely that you've hit two major problems. First, it's difficult to notice if one part of the engineering team suddenly starts spending a lot more than it used to. Our AWS bill is six figures per month, and the charges for each AWS component change rapidly. In a given week, we might deploy five new services, optimize our DynamoDB throughput, and add hundreds of customers. In this environment it's easy to overlook that a single team spent $20,000 more on EC2 this month than they did last month. Second, it can be difficult to predict how much new customers will cost. As background, Segment offers a single API which can send analytics data to any number of third-party tools, data warehouses, S3, or internal data pipelines. While customers are good at predicting how much traffic they will have and the products they'd like to use, we've historically had trouble translating this usage information into a dollar figure. Ideally we'd like to be able to say "1 million new API calls will cost us $X, so we should make sure we are charging at least $Y."

Our solution to these problems was to bucket our infrastructure into what we dubbed 'product areas'. In our case, these product areas are loosely defined as:

- integrations (the code that sends data from Segment to various analytics providers)
- API (the service that receives the data customer libraries send to Segment)
- warehouses (the pipeline that loads Segment data into a customer's data warehouse)
- website and CDN
- internal (shared support logic for the four above)

In scoping the project, we realized it would be next to impossible to measure everything. So instead, we decided to target a percentage of the costs in the bill, say 80%, and try to get that measurement working end-to-end. It's better to deliver business value analyzing 80% of the bill than to shoot for 100%, get bogged down in the collection step, and never deliver any results. Shooting for 80% completeness (being willing to say "it's good enough") saved us again and again from rabbit-holing into analysis that didn't meaningfully impact our spend.
Gather, then analyze

To break out costs by product area, we needed to gather several kinds of billing data and then join them together:

- the AWS billing CSV - the CSV generated by AWS to provide the full billing line items
- tagged AWS resources - resources whose costs could be tagged within the billing CSV
- untagged resources - services like EBS and ECS that required custom pipelines to tag usage with product areas

Once we calculated the product areas for each of these pieces of data, we could load them into Redshift for analysis.

1. The AWS Billing CSV

The place to start to understand your spend is the AWS Billing CSV. You can enable a setting in the billing portal and Amazon will write a CSV with detailed billing information to S3 every day. By detailed, I mean VERY detailed. Here is a typical billing row: That row is a charge for a whopping $0.00000001, or one one-millionth of a penny, for DynamoDB storage on a single table between 3AM and 4AM on February 7th. There are about six million rows in our billing CSV for a typical month. (Unfortunately, most cost more than a millionth of a penny.) We use Heroku's awsdetailedbilling tool to copy the billing data from S3 to Redshift. This was a good first step, but we didn't have a great way to correlate a specific AWS cost with our own product areas (e.g. whether a given instance-hour was used for the integrations or warehouses product area). What's more, about 60% of the bill is consumed by EC2. Despite being the lion's share of the cost, understanding how a given EC2 instance mapped to a product area was impossible with the data provided by the billing CSV. There's a good reason why we couldn't just use instance names to determine product areas. Instead of running a single process per host, we make heavy use of ECS (Elastic Container Service) to stack hundreds of containers on a host and achieve much higher utilization. Unfortunately, Amazon bills only for the EC2 instance costs, so we had zero visibility into the costs of the containers running on an instance: how many containers we were running at a typical time, how much of the pool we were using, and how many CPU and memory units we were using. Even worse, information about container auto-scaling isn't reflected anywhere in the billing CSV. To get this data for analysis, we had to write our own tooling to gather and then process it. I'll cover how this pipeline works in the following sections. Still, the AWS Billing CSV provides very granular usage data that becomes the basis for our analysis. We just need to associate that data with our product areas. Note: This problem isn't going away either. Billing by the instance-hour is going to be a bigger and bigger problem from a "what am I spending money on?" perspective, since more companies are running fleets of containers across a set of instances, with tools like ECS, Kubernetes and Mesos. In a slight twist of irony, Amazon has had this same problem for years - each EC2 instance is a Xen hypervisor, run on the same bare metal machine as other instances.
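Even before any tagging, the imported CSV supports useful queries. As a hedged sketch (the table and column names follow the awsdetailedbilling query shown later in this post; the month value is illustrative), here is how you might rank AWS products by spend for one month:

-- Top AWS products by unblended cost for a given month.
-- Rows with no product name are invoice totals; exclude them.
SELECT product_name, SUM(unblended_cost) AS total_cost
FROM awsbilling.line_items
WHERE statement_month = '2017-02-01'
  AND product_name IS NOT NULL
  AND product_name <> ''
GROUP BY product_name
ORDER BY total_cost DESC
LIMIT 20;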
2. Cost data from tagged AWS resources

The most important and readily available data comes from 'tagged' AWS resources. Out of the box, the AWS billing CSV doesn't include any tags in its analysis. As such, it's impossible to discern how one EC2 instance or bucket might be used vs another. However, you can enable certain tags to appear alongside your line item costs using cost allocation tags. These tags are officially supported by many AWS resources: S3 buckets, DynamoDB tables, etc. You can toggle a setting in the AWS billing console to make a cost allocation tag show up in the CSV. After a day or so, your chosen tag (we chose product_area) will start showing up as a new column next to the associated resources in the detailed billing CSV. If you are doing nothing else, start by using cost allocation tags to tag your infrastructure. It's essentially 'free' and requires zero infrastructure to run. After we enabled cost allocation tags, we had two challenges: 1) tagging all of the existing infrastructure, and 2) ensuring that any new resources would automatically have tags.

Tagging your existing infrastructure

Tagging your existing infrastructure is pretty easy: for a given AWS product, query Redshift for the resources with the highest costs, bug people in Slack until they tell you how those resources should be tagged, and stop when you've tagged 90% or more of the resources by cost. However, enforcing that new resources stay tagged requires some automation and tooling. To do this, we use Terraform. In most cases, Terraform's configuration supports adding the same cost allocation tags that you can add via the AWS console (an example configuration for an S3 bucket is sketched just below). Though Terraform provided the base configuration, we wanted to verify that every time someone wrote resource "aws_s3_bucket" into a Terraform file, they included a product_area tag. Fortunately Terraform configurations are written in HCL (Hashicorp Configuration Language), which ships with a comment-preserving configuration parser. So we wrote a checker that walks every Terraform file looking for taggable resources lacking a product_area tag. We set up continuous integration for the repo with Terraform configs, and then added these checks, so the tests will fail if anyone tries to check in a taggable resource that's not tagged with a product area. This isn't perfect - the tests are finicky, and people can still technically create untagged resources directly in the AWS console - but it's good enough for now, since the easiest way to provision new infrastructure is via Terraform.
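Here is a minimal sketch of such a tagged bucket - the bucket name and product area value are illustrative, not Segment's actual configuration:

# Any taggable resource carries a product_area cost allocation tag;
# the CI checker fails the build if the tag is missing.
resource "aws_s3_bucket" "example_logs" {
  bucket = "example-segment-logs"

  tags = {
    product_area = "integrations"
  }
}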
Rolling up cost allocation tag data

Once you've tagged resources, accounting for them is fairly simple:

- Find the product_area tags for each resource, so you have a map of resource id => product area.
- Sum the unblended costs for each resource.
- Sum those costs by product area, and write the result to a rollup table.

SELECT sum(unblended_cost) FROM awsbilling.line_items WHERE statement_month = $1 AND product_name='Amazon DynamoDB';

You might also want to break out data by AWS product - we have two separate tables, one for Segment product areas, and one for AWS products. We were able to account for about 35% of the bill using traditional cost allocation tags.

Analyzing Reserved Instances

This approach works great for tagged, on-demand instances. But in some cases, you may have paid AWS up front for a 'reservation'. Reservations guarantee a certain amount of capacity, in exchange for up-front payment at a lower fixed rate. In our case, this meant several large charges in the December 2016 billing CSV needed to be amortized across each month of the year. To properly account for these costs, we wanted to use the unblended cost that was incurred in the desired time period. The query looks like this: Subscription costs take the form "$X0000 of DynamoDB," so they are impossible to attribute to a single resource or product area. Instead, we sum the per-resource costs by product area and then amortize the subscription costs according to the percentages. If the warehouses pipeline used 60% of our EC2 compute costs, we assume it used 60% of the reservation as well. This isn't perfect. If a large percentage of your bill is reserved up front, this amortization strategy will be distorted by small changes in the on-demand costs. In that case you'll want to amortize based on the usage for each resource, which is more difficult to sum than the costs.

3. Cost data from untagged AWS resources

While tagging instances and DynamoDB tables is great, other AWS resources don't support cost allocation tags. These resources required us to build a Rube Goldberg-style workflow to get the cost data into Redshift. The two biggest untagged resource groups we had to deal with were ECS and EBS.

ECS

ECS is constantly scaling our services up and down, depending on how many containers a given service needs. It's also responsible for re-balancing and bin-packing containers across individual instances. ECS starts containers on hosts based upon "CPU and memory reservation". A given service indicates how many CPU shares it requires, and ECS will either put new containers on a host with capacity, or scale up the number of instances to add more capacity. None of these ECS actions are directly reflected in our AWS billing CSV - but ECS is still responsible for triggering the auto-scaling of our instances. Put simply, we wanted to understand what 'slice' of each machine a given container was using, but the billing CSV only gives us a 'whole unit' breakdown by instance. To determine the cost of a given service, we built our own pipeline that makes use of the following pieces:

- Set up a Cloudwatch subscription for any time an ECS task gets started or stopped.
- Push the relevant data (service name, CPU/memory usage, starting or stopping, EC2 instance ID) from the event to Kinesis Firehose (to aggregate individual events).
- Push the data from Kinesis Firehose to Redshift.

Once all of the task start/stop/size data is in Redshift, we multiply the amount of time a given ECS task ran (say, 120 seconds) by the number of CPU units it used on that machine (up to 4096 - this info is available in the task definition), to get the number of CPU-seconds for each service that ran on the instance. The total bill for the instance is then divided across services according to the number of CPU-seconds each one used. It's not a perfect method - EC2 instances aren't running at 100% capacity all the time, and the excess currently gets divided across the services running on the instance, which may or may not be the right culprits for that overhead. But (and you may recognize this as a common theme in this post), it's good enough. Additionally, we want to map the right product area to each ECS service. However we can't tag those services in AWS, because ECS doesn't support cost allocation tags. Instead we added a product_area key to the Terraform module for each ECS service. This key doesn't lead to any metadata being sent to AWS, but it does populate a script that reads the product_area keys for each service. That script then publishes the service name => base64-encoded product area mappings to DynamoDB on every new push to the master branch.
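To make the CPU-seconds attribution concrete, here is a hedged SQL sketch. The ecs_task_events and instance_costs tables and their columns are illustrative stand-ins, not our actual schema:

-- CPU-seconds consumed by each service on each instance
WITH cpu_seconds AS (
    SELECT instance_id,
           service_name,
           SUM(DATEDIFF(second, started_at, stopped_at) * cpu_units) AS cpu_sec
    FROM ecs_task_events
    GROUP BY instance_id, service_name
),
instance_totals AS (
    SELECT instance_id, SUM(cpu_sec) AS total_cpu_sec
    FROM cpu_seconds
    GROUP BY instance_id
)
-- Split each instance's bill across its services by share of CPU-seconds
SELECT c.service_name,
       SUM(i.instance_cost * c.cpu_sec / t.total_cpu_sec) AS attributed_cost
FROM cpu_seconds c
JOIN instance_totals t ON t.instance_id = c.instance_id
JOIN instance_costs i ON i.instance_id = c.instance_id
GROUP BY c.service_name;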
Our tests also validate that each new service has been tagged with a product area.

EBS

Elastic Block Storage (EBS) also makes up a significant portion of our bill. EBS volumes are typically attached to an EC2 instance, and for accounting purposes it makes sense to count the EBS volume costs together with that instance. However, the AWS billing CSV doesn't show you which EBS volume was attached to which instance. We again used Cloudwatch for this: we subscribe to any "volume attached" or "volume unattached" events, and then record the EBS => EC2 mappings in a DynamoDB table. We can then add EBS volume costs to the relevant EC2 instances before accounting for ECS costs (a sketch of this join appears at the end of this section).

Combining data across accounts

So far we've talked about all of our costs in the context of a single AWS account. However, this doesn't actually reflect our AWS setup, which is spread across several physical AWS accounts. We use an ops account not only for consolidated, cross-account billing, but to provide a single access point for engineers making changes to production. We separate staging from production to ensure that an API call which might, say, delete a DynamoDB table can be run safely with the appropriate checks. Of these accounts, prod dominates the cost - but our staging costs are still a significant percentage of the overall AWS bill. Where this gets tricky is when we need to write the data about ECS services in the stage realm to the production Redshift cluster. To write 'cross account', we needed to allow the Cloudwatch subscription handlers to assume a role in production that can write to Firehose (for ECS) or to DynamoDB (for EBS). These are tricky to set up, because you have to add the correct permissions to the right role in the staging account (sts.AssumeRole) and in the production account, and any mistake will lead to a confusing permission error. For us, this means that we don't have a staging realm for our accounting code, since the accounting code in stage writes to the production database. While it's possible to add a second service in stage that subscribes to the same data but doesn't write it, we decided that we can swallow the occasional problems with the stage accounting code.
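Here is the promised sketch of folding EBS costs into their instances. It assumes the Cloudwatch-derived mappings have been copied from DynamoDB into a Redshift table (here called ebs_attachments), and that the billing line items carry resource_id and usage_type columns; all names are illustrative:

-- EBS cost per attached EC2 instance, via the volume => instance map
SELECT a.instance_id,
       SUM(li.unblended_cost) AS ebs_cost
FROM awsbilling.line_items li
JOIN ebs_attachments a
  ON li.resource_id = a.volume_id
WHERE li.usage_type LIKE 'EBS:%'
GROUP BY a.instance_id;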
Rolling up the statistics

Finally we have all of the pieces we need to run a proper analysis:

- tagged resources in the AWS billing CSV
- data about when every ECS task started and stopped
- a mapping between ECS service names and the relevant product areas
- a mapping between EBS volumes and the instances they are attached to

To roll all of this up for the analytics team, I broke out the analysis by AWS product. For each AWS product, I totaled the costs of each Segment product area within that AWS product. The data gets rolled up into three different tables:

- Total costs for a given ECS service in a given month
- Total costs for a given product area in a given month
- Total costs for a given (AWS product, Segment product area) pair in a given month. For example, "The warehouses product area used $1000 worth of DynamoDB last month."

The total costs for a given product area look like this: And the costs for an AWS product combined with a Segment product area look like this: For each of these tables, we have a finalized table that contains the final numbers for each month, and an append-only rollup table that writes new data for a month as it updates every day. A unique identifier in the rollup table identifies a given run, so you can sum the AWS bill by finding all of the rows in a given run.

Finalized data effectively becomes our golden 'source of truth' that we use for top-level metrics and board reporting. Rollup tables are used to monitor our spend over the course of the month. Note: AWS does not "finalize" your bill until several days after the end of the month, so any logic that marks the billing record as complete when the month flips over is incorrect. You can detect when the bill becomes "final" because the invoice_id field in the billing CSV will be an integer instead of the word "Estimated".

A few last gotchas

Before closing: there are a few places where a little preparation and knowledge could have saved us a lot of time. In no particular order, they are:

- Scripts that aggregate data or copy it from one place to another are infrequently touched and often under-monitored. As an example, we had a script that copied the Amazon billing CSV from one S3 bucket to another, but it failed on the 27th-28th of each month because the Lambda handler doing the copying ran out of memory as the CSV got large. It took a while to notice this, because the Redshift database had a lot of data and right-ish numbers for each month. We've since added monitoring to the Lambda function to ensure that it runs without errors.
- Be sure these scripts are well documented, especially with information about how they are deployed and what configuration they need. Link to the source code in other places where they are referenced - for example, any place you pull data out of an S3 bucket, link to the script that puts the data in the bucket. Also consider putting a README in the S3 bucket root.
- Redshift queries can be really slow without optimization. Consult with the Redshift specialist at your company, and think about the queries you need, before creating new tables in Redshift. In our case we were missing the right sortkey on the billing CSV tables. You cannot add sortkeys after you create the table, so if you don't do it up front you have to create a second table with the right keys, send writes to that one, and then copy all the data over. Using the right sortkeys took the query portion of the rollup run from about 7 minutes to 10-30 seconds (a minimal sketch follows this list).
- Initially we planned to run the rollup scripts on a schedule: Cloudwatch would trigger an AWS Lambda function a few times a day. However the run length was variable (especially when it involved writing data to Redshift) and exceeded the maximum Lambda timeout, so we moved it to an ECS service instead. We chose Javascript for the rollup code initially because it runs on Lambda and most of the other scripts at the company were in Javascript. If I had realized I was going to need to switch it to ECS, I would have chosen a language with better support for 64-bit integer addition, parallelization, and cancellation of work.
- Any time you start writing new data to Redshift, the shape of the data in Redshift changes (say, new columns are added), or you fix integrity errors in the way the data is analyzed, add a note in the README with the date and information about what changed. This will be extremely helpful to your data analysis team.
- The blended costs are not useful for this type of analysis - stick to the unblended costs, which show what AWS actually charged you for a given resource.
- There are 8 or 9 rows in the billing CSV that don't have an Amazon product name attached. These represent the total invoice amount, but they throw off any attempt to sum the unblended costs for a given month. Be sure to exclude these before trying to sum costs.
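As referenced in the sortkey gotcha, here is a minimal sketch of declaring keys at creation time - the column list is an illustrative subset, not our full schema:

-- Redshift only lets you declare distribution and sort keys at
-- CREATE TABLE time, so choose them before loading any data.
CREATE TABLE awsbilling.line_items (
    invoice_id      VARCHAR(64),
    product_name    VARCHAR(255),
    resource_id     VARCHAR(255),
    statement_month DATE,
    unblended_cost  DECIMAL(18, 8)
)
DISTKEY (product_name)
SORTKEY (statement_month, product_name);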
The bottom line

As you might imagine, getting visibility into your AWS bill takes a large amount of work, both in terms of custom tooling and identifying expensive resources within AWS. The biggest win we've found comes from making it easy to continuously estimate your spend rather than running the occasional one-time analysis. To do that, we've automated all of the data collection, enforced tagging within Terraform and our CI, and educated the entire engineering team on how to properly tag their infrastructure. Rather than sitting in a PDF, all of our data is continuously updated within Redshift. If we want to answer new questions or generate new reports, we can instantly get results via a new SQL query. Additionally we've exported that data into an Excel model so we can estimate exactly how much a new customer will cost. And we can also see if a single service or a single product area suddenly starts costing a lot more, before that causes too much of a hit to our bottom line. While it may not exactly mirror your infrastructure, hopefully this case study will be useful for helping you get a better sense of your costs and manage them as you scale!
66,082,476
1. Field of the Invention

The present invention relates to a coating applied onto a substrate and a method for applying the coating. The invention specifically described is an improved coating for optical fibers. The coating consists of a densely packed structure of sputtered particles forming a precise, dense, and adherent layer. The coating is deposited within a cylindrical magnetron via a sputtering process that avoids damaging the optical fiber.

2. Description of the Related Art

In recent years, optical fiber technology has gained popularity in many commercial applications due to unparalleled performance advantages over existing metal-wire systems. In particular, optical fibers and related components are widely accepted in military communications, civilian telecommunications, and control systems. Optical fibers are small, strong, and lightweight. In communication applications, they provide wide bandwidth, low transmission loss, resistance to radiation damage, and immunity to electromagnetic interference. A typical optical fiber is composed of a core within a layer of cladding and thereafter one or more layers of a buffer. The core provides a pathway for light. The cladding confines light to the core. The buffer provides mechanical and environmental protection for both core and cladding. Fiber construction and materials are known within the art. For example, a typical single-mode fiber (SMF) is composed of precision extruded glass having a cladding with a diameter of 125 μm ± 2 μm and a core with a diameter of 8 μm ± 1 μm residing within the center of the cladding. A buffer is typically composed of a flexible polymer applied onto the outer surface of a cladding via known methods, yielding dimensional variations at least one order of magnitude larger than in the core and cladding. Existing deposition methods produce a coating with large dimensional variations. Consequently, state-of-the-art optical fibers are composed of a dimensionally precise core and cladding assembly within a less precise buffer and coating. Such imprecisions skew the concentricity between core and coating. As such, commercial optical fibers do not lend themselves to precision alignment. Misalignment between fibers, or between a fiber and an optical component (i.e., a photodetector), is the primary source of light energy loss. Optical fiber systems typically require a hermetic seal at fiber-fiber connections, fiber-component connections, and along the length of a fiber to prevent moisture and other contaminants from degrading the optical pathway. Commercially available coated fibers are porous and therefore fail to provide a hermetic seal sufficient to exploit component lifetime. Furthermore, porous coatings reduce adherence between coating and fiber, thereby weakening connections. Coated optical fibers are typically soldered to other components, thereby providing a continuous pathway. The pull strength of the coated fiber at such connections is critical to the integrity of the pathway. Currently, coating design and fabrication methods limit pull strength to approximately 1.6 pounds as verified by quality assurance tests known within the art. Coating methods may also further weaken the fiber by creating micro-cracks within the fiber structure. Various methods are known within the art to coat an optical fiber with a metal layer, see Kruishoop et al. (U.S. Pat. No. 4,609,437), Cholewa et al. (U.S. Pat. No. 5,100,507), Filas et al. (U.S. Pat. No. 5,380,559), and Dunn et al. (U.S. Pat. No. 5,970,194).
The related arts have sought to minimize dependence on sputtering and to develop replacement methods. Kruishoop et al., issued Sep. 2, 1986, describes a two-step method to form a metal coating onto a synthetic resin cladding along an optical fiber. A thin conductive layer is first applied by reducing a metal salt onto the cladding and thereafter forming a thin metal layer by electroplating. Kruishoop et al. explicitly excludes sputtering methods for applying the conductive layer since such methods produce thermal energy sufficient to damage the underlying structure. Cholewa et al., issued Mar. 31, 1992, describes a method for processing an optical fiber comprised of an integral lens and a metallized outer coating. Metallization is achieved via sputtering. However, Cholewa et al. does not address the thermal heating problem and damage inherent to sputtering as identified by Kruishoop et al. Filas et al., issued Jan. 10, 1995, describes an electroless method for depositing nickel and gold coatings onto optical fibers using aqueous chemistry. The Filas et al. method was developed since sputtering is not only expensive, but also produces a non-uniform coating and tends to weaken the fiber. Dunn et al., issued Oct. 19, 1999, describes a method wherein a limited mid-section of an optical fiber is metallized via sputtering or evaporation. Dunn et al. does not address the problems inherent to sputtering as identified by Kruishoop et al. and later by Filas et al. Planar sputtering methods are known within the art. Planar sputtering deposits a thin film coating onto a fiber as it rotates relative to a uni-directional coating source. Both stationary and moving fibers are coated with this technique. Planar sputtering methods are complex, inefficient, and fail to provide the uniformity and quality required for many optical fiber applications. Planar sputtering requires mechanically complicated precision rotation means to adjust the fiber with respect to the planar source. Such rotating systems cannot ensure sufficient thickness uniformity for accurate fiber alignment. Planar sputtering is inefficient in that only a small portion of the metal ejected from the target is deposited onto the fiber thereby making its use costly. Planar sputtering subjects the fiber to asymmetric overheating across the cross section of the fiber thereby promoting microcracks within the cladding and reducing the quality of the coating. Furthermore, planar sputtering yields a porous coating reducing hermeticity and adherence. Kumar describes a cylindrical magnetron for applying a circumferential coating, see U.S. Pat. Nos. 5,178,743 issued Jan. 12, 1993 and 5,317,006 issued May 31, 1994. While cylindrical magnetron inventions are disclosed, methods for depositing precise, dense, and adherent coatings without damaging an optical fiber are neither described nor claimed. It is therefore an object of the present invention to avoid the disadvantages of the related art. More particularly, it is an object of the invention to provide a coated optical fiber with minimal dimensional variability thereby facilitating rapid alignment and assembly of such fibers within an optical system. It is an object of the invention to provide a dense, low-porosity coating onto a fiber substrate. It is an object of the invention to provide improved adherence between coating and fiber substrate. It is also an object of the invention to provide a controlled method for depositing a coating onto a fiber without damaging the fiber. 
It is an object of the invention to provide a coated fiber with greater pull strength. It is an object of the invention to provide a coating method facilitating the simultaneous application of one or more coatings onto a plurality of fibers. Furthermore, it is an object of the invention to provide a coating method that facilitates the application of several independent layers within a single vacuum chamber without breaking the vacuum. The present invention provides a controlled method for the application of an improved coating onto an optical fiber while avoiding damage to the fiber structure. The improved coating is applied via sputtering within a cylindrical magnetron. The claimed deposition method includes generating a plasma cloud composed of dimensionally similar sputtered particles that adhere to an optical fiber and form a uniform, adherent, low-porosity coating, monitoring at least one environmental parameter within the vacuum chamber during deposition, and adjusting the deposition step to avoid one or more conditions that promote fiber damage. Monitoring and adjusting steps are either manually controlled or automated. Environmental conditions include such examples as temperature, pressure, and gas composition, each indicative of the onset or progression of fiber damage. An optional cleaving step is provided. In one embodiment, a portion of the fiber end is removed to expose an optically clear core. In another embodiment, the fiber mid-section is cleaved yielding two fiber ends, each having an optically clear core. The deposition method is applicable to fiber ends, fiber mid-sections, as well as along the length of the optical fiber. One or more optical fibers may be simultaneously coated in one or more cylindrical magnetrons thereby increasing production yield. The improved coating includes single and multiple layer configurations. In one embodiment, at least one layer is composed of a thermal barrier material applied directly onto an optical fiber, a metal layer, or another thermal barrier material. In yet another embodiment, the layers are composed of commercially pure metals. Both coating embodiments facilitate a stronger optical fiber in conventional pull test arrangements. Thermal barrier coatings are inherently stronger due to mechanical properties of the materials and improved adherence between such materials and fiber, as well as between such materials and other layer materials. Metal-based coatings are inherently stronger because either the coating compressively constrains the fiber or the coating closes microcracks within the fiber prior to or as a result of the sputtering process.
66,082,625
But I had a problem detecting all the articles, because the first article of each section is formatted differently from the rest. I know how to write two recipes that together would include all the articles, but I haven't figured out a way to do it in a single recipe. Here is the recipe that fetches all the articles except the first article of each section. I'd appreciate it if someone could take a look and tweak the recipe.
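Without the original recipe to hand, here is a hedged sketch of one way to handle both layouts in a single recipe: override parse_index once and match both the lead-article markup and the regular-article markup in the same pass. The class names and URL are made up; substitute the selectors from the actual site:

from calibre.web.feeds.news import BasicNewsRecipe


class CombinedSectionsRecipe(BasicNewsRecipe):
    title = 'Example Paper'

    def parse_index(self):
        soup = self.index_to_soup('http://example.com/paper')
        feeds = []
        for section in soup.findAll('div', attrs={'class': 'section'}):
            sec_title = self.tag_to_string(section.find('h2'))
            articles = []
            # One query that matches both the lead article and the rest.
            for link in section.findAll('a', attrs={'class': ['lead-article', 'article']}):
                articles.append({
                    'title': self.tag_to_string(link),
                    'url': link['href'],
                })
            if articles:
                feeds.append((sec_title, articles))
        return feeds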
66,082,805
In the Interest of FC, CC and TC, Minor Children

IN THE TENTH COURT OF APPEALS

No. 10-01-088-CV

IN THE INTEREST OF F.C., C.C., AND T.C., MINOR CHILDREN

From the 82nd District Court
Falls County, Texas
Trial Court # 33,276

O P I N I O N

Mary Cummins's parental rights were terminated by the trial court in October 2000. She raises four issues on appeal. She argues that: 1) the trial court failed to notify the Indian tribe of their right of intervention as required by the Indian Child Welfare Act; 2) the trial court failed to correctly apply the Indian Child Welfare Act standard for termination of parental rights of an Indian child; 3) the trial court erred in finding that Cummins engaged in conduct that endangered the physical or emotional well-being of the children; and 4) the trial court erred in finding that Cummins failed to comply with a court-ordered "plan of service."

Indian Child Welfare Act

In her first point, Cummins argues that the trial court erred in terminating her parental rights without notifying the tribe of their right of intervention. In her second point, she asserts that the court applied the improper standard of review for termination of an Indian child under the Indian Child Welfare Act ("ICWA").

The provisions of the ICWA must be followed in any proceedings involving termination of the parental rights over Indian children. See Indian Child Welfare Act, 25 U.S.C.A. § 1912 (1983); Doty-Jabbaar v. Dallas County Child Protective Services, 19 S.W.3d 870, 874 (Tex. App.—Dallas 2000, pet. denied). The ICWA provides that in any involuntary proceeding in state court, where the court knows or has reason to know that an Indian child is involved, the party seeking termination shall notify the parent, Indian custodian, and the Indian child's tribe. 25 U.S.C.A. § 1912. Under the ICWA, an Indian child is defined as "any unmarried person who is under age eighteen and is either (a) a member of an Indian tribe or (b) is eligible for membership in an Indian tribe and is the biological child of a member of an Indian tribe." Id. at § 1903(4).

Cummins argues that the trial court terminated her parental rights without first notifying the Indian tribe, and without applying the proper standard of review under the Act. However, Cummins presents no evidence to support her contention that the children qualify as "Indian children" under the ICWA. The sole evidence of Cummins's alleged Indian heritage was a statement in Dr. Shinder's report stating she was a "Caucasian/Native American (Cherokee descent) woman."

In order to ensure jurisdiction, this Court requested an affidavit containing the facts supporting Cummins's position that the children are "Indian children" as defined by the ICWA. See Tex. Gov't Code Ann. § 22.220(c) (Vernon 1988); Tex. R. App. P. 10.2(a); Mellon Service Co. v. Touche Ross & Co., 946 S.W.2d 862, 864 (Tex. App.—Houston [14th Dist.] 1997, no pet.). In her affidavit, Cummins stated: "I am not an enrolled member of any tribe. To the best of my knowledge, neither of my parents were members of a tribe.
I have never taken the steps necessary to enroll my children in any tribe.” Because her children are neither a) members of an Indian tribe, or b) eligible for tribe membership and the biological child of a member of an Indian tribe, the ICWA does not apply. See 25 U.S.C.A. § 1903 (4). Accordingly, we find notice to an Indian tribe as specified in the ICWA is not required, and the court was not required to apply the standard of review for termination as set forth in the ICWA. Accordingly, points one and two are overruled. Clear and Convincing Evidence       In point three, Cummins argues that the trial court erred in finding that she engaged in conduct that endangered the physical or emotional well-being of the children. On appeal, an involuntary termination of parental rights must be strictly scrutinized because termination proceedings involve the fundamental constitutional rights surrounding the parent-child relationship. See Holick v. Smith, 685 S.W.2d 18, 20 (Tex. 1985); In re D.L.N., 958 S.W.2d 934, 936 (Tex. App.—Waco 1997, pet. denied). A termination of parental rights is an irrevocable act severing the parent-child relationship for all purposes, except for the right of inheritance. Id.; Tex. Fam. Code Ann. § 161.206(b) (Vernon 1996). Because a termination involves rights of "constitutional dimension," the grounds for termination must be proved by clear and convincing evidence at trial. See id. at § 161.001 (Vernon Supp. 2001); § 161.206(a); D.L.N., 958 S.W.2d at 936 (citing Richardson v. Green, 677 S.W.2d 497, 500 (Tex. 1984)). Termination of parental rights is a two prong test. The trial court must find by clear and convincing evidence that the parent 1) engaged in one of the predicate acts listed in the Family Code, and 2) that termination was in the children's best interest. See §§ 161.001(1) & 161.001(2); In re A.P., 42 S.W.3d 248, 257 (Tex. App.—Waco 2001, no pet.). Thus, the court may order termination if the court finds by clear and convincing evidence that the parent has engaged in conduct or knowingly placed the child with persons who engaged in conduct which endangered the physical or emotional well-being of the child and termination was in the child’s best interest. Id. at §§ 161.001(1)(E) & 161.001(2).       Cummins argues that an abuse of discretion standard should apply to the court’s termination of her parental rights. We disagree. The trial court's findings of fact after a bench trial are reviewed for legal and factual sufficiency by the same standards applied in reviewing the evidence supporting a jury's answer. See Cason v. Taylor, 51 S.W.3d 397, 403 (Tex. App.—Waco 2001, no pet.); Hitzelberger v. Samedan Oil Corp., 948 S.W.2d 497, 503 (Tex. App.—Waco 1997, writ denied). No findings of fact or conclusions of law were filed in this case, but a reporter’s record was filed. When, as here, no findings of fact or conclusions of law are requested or filed, we imply all necessary findings in support of the trial court's judgment. See Holt Atherton Indus., Inc. v. Heine, 835 S.W.2d 80, 83 (Tex. 1992); Casino Magic Corp. v. King, 43 S.W.3d 14, 19 (Tex. App.—Dallas 2001, pet. denied). When a reporter's record is included in the record on appeal, the implied findings may be challenged for legal and factual sufficiency. See Roberson v. Robinson, 768 S.W.2d 280, 281 (Tex. 1989) (per curiam); King, 43 S.W.3d at 19. We review implied findings by the same standards we use in reviewing the sufficiency of the evidence to support a jury's answers or a trial court's fact findings. Id. 
Cummins fails to state in her brief whether she is challenging the legal or factual sufficiency of the evidence. Despite Cummins’s failure to articulate her specific sufficiency challenge, we will review the evidence for both legal and factual sufficiency in the interest of justice.

Legal Sufficiency

To determine whether the evidence is legally sufficient to support the court’s finding, we consider only the evidence supporting the verdict "in the light most favorable to the party in whose favor the verdict has been rendered, and every reasonable inference deducible from the evidence is to be indulged in that party's favor." We will find the evidence legally insufficient if: (1) there is a complete absence of evidence for the finding, (2) there is evidence to support the finding, but rules of law or evidence bar the court from giving any weight to the evidence, (3) there is no more than a mere scintilla of evidence to support the finding, or (4) the evidence conclusively establishes the opposite of the finding. Merrell Dow Pharms, Inc. v. Havner, 953 S.W.2d 706, 711 (Tex. 1997) (citing Robert W. Calvert, “No Evidence” and “Insufficient Evidence” Points of Error, 38 Tex. L. Rev. 361, 362-63 (1960)). "More than a scintilla of evidence exists when the evidence supporting the finding, as a whole, 'rises to a level that would enable reasonable and fair-minded people to differ in their conclusions.'" Burroughs Wellcome Co. v. Crye, 907 S.W.2d 497, 499 (Tex. 1995) (quoting Transportation Ins. Co. v. Moriel, 879 S.W.2d 10, 25 (Tex. 1994)). The "no evidence" standard is the same for findings made under the "clear and convincing standard" as for a preponderance standard. See Spangler v. Texas Dept. of Regulatory Services, 962 S.W.2d 253, 257 (Tex. App.—Waco 1998, no pet.).

Factual Sufficiency

To determine whether the evidence is factually sufficient to support a jury finding made under the "clear and convincing" standard, we consider all the evidence in the record both for and against it, and we will find the evidence factually insufficient "if the trier of fact could not reasonably find the existence of the fact to be established by clear and convincing evidence." Id. This could occur if: "(1) the evidence is factually insufficient to support a finding by clear and convincing evidence; or (2) a finding is so contrary to the weight of contradicting evidence that no trier of fact could reasonably find the evidence to be clear and convincing." Id. This intermediate standard of review is necessary to preserve the constitutionally protected interests involved in a termination of parental rights. Id.

Evidence

Cummins’s three children are F.C., T.C., and C.C. The evidence demonstrates the following:

1) Cummins struck F.C. at the grocery store on one occasion;
2) Cummins left T.C. in the Wal-Mart parking lot and drove off without her;
3) C.C. had bruises and marks on his body and neck;
4) C.C. had a cigarette burn on his chest while Cummins placed him under the care of her aunt;
5) With all the children riding along as passengers, Cummins wrecked her car while she was intoxicated and speeding;
6) Cummins admitted that her alcohol problems may have contributed to her past mistakes with the children’s care;
7) F.C. and T.C. reported incidents of sexual abuse by their brother C.C.;
8) F.C. reported that Terry Washington, Cummins’s boyfriend, touched her inappropriately;
9) Child Protective Services (“CPS”) reported that the Cummins home where the children lived was filthy, smelled foul, and was infested with roaches and spiders; and
10) The CPS visits to the Cummins home indicated that the children appeared hungry, dirty, and there was no running water.

Despite abandoning her service plan and stating that she wanted the children to live with her aunt, Cummins decided to fight the termination of her parental rights. She said she still loves her children and admitted making “mistakes” in the past. She testified that she was unaware of any sexual abuse of the children. She further testified that she no longer abuses alcohol and has obtained steady employment.

Endangerment

The evidence strongly supports the court’s finding by clear and convincing evidence that Cummins caused the children to be endangered. Although “endanger means more than a threat of metaphysical injury or the possible ill effects of a less-than-ideal family life, it is not necessary that the conduct be directed at the child or that the child actually suffers injury.” In re M.C., 917 S.W.2d 268, 269 (Tex. 1996) (quoting Dep't of Human Services v. Boyd, 727 S.W.2d 531, 533 (Tex. 1987)). The term simply means “to expose to loss or injury; to jeopardize.” Id. For example, allowing children to live in unsanitary conditions, and neglecting their physical condition, can be endangerment. See id. at 270. Here, CPS reports demonstrated that Cummins allowed the children to live in unsanitary conditions and neglected their physical condition.

Furthermore, the courts have found that a "course of conduct" by a parent that jeopardizes a child's physical or emotional well-being is evidence of endangerment as defined by Boyd. See Boyd, 727 S.W.2d at 533; D.L.N., 958 S.W.2d at 938. In the D.L.N. case, we could not point to one specific act that justified termination of parental rights. We did, however, find that the pattern established by the parent's bad temper, neglect of the child's physical and emotional well-being, inability to deal with the child's emotional needs, limited interaction with the child, and treatment of the child's siblings was legally sufficient to support an involuntary termination under section 161.001(1)(E). Id. at 938-39. Such a course of conduct applies in this case. Cummins’s pattern of substance abuse and neglect, coupled with the evidence of sexual and physical abuse of the children while under her care, shows a course of conduct that endangered the physical and emotional needs of the children.

Viewing the evidence in a light that supports the trial court’s finding, we conclude that there was more than a scintilla of evidence that Cummins engaged in conduct or knowingly placed the children with persons who engaged in conduct which endangered the physical or emotional well-being of each child. We find the evidence legally sufficient to support the court’s finding of endangerment.

Viewing all the record evidence, we cannot say that the court’s finding of endangerment was against the great weight and preponderance of the evidence. Accordingly, we find the evidence factually sufficient to support the court’s finding of endangerment. We now examine the best interest evidence for legal and factual sufficiency.
Best Interest

The evidence in the present case must also support the court’s finding by clear and convincing evidence that termination was in the children's best interest. The Texas Supreme Court identified some of the factors which might justify such a finding. See Holley v. Adams, 544 S.W.2d 367, 372 (Tex. 1976); see also In re J.O.C., 47 S.W.3d 108, 114-15 (Tex. App.—Waco 2001, no pet.). The list is not exhaustive, nor must there be evidence of all of the factors. See J.O.C., 47 S.W.3d at 115. The factors pertinent to this case are: (1) the emotional and physical needs of the child now and in the future; (2) the emotional and physical danger to the child now and in the future; (3) the acts or omissions of the parent which may indicate that the existing parent-child relationship is not a proper one; (4) parental abilities; and (5) any excuse for the acts or omissions of the parent.

The evidence is strong that termination was in the best interest of these children. The record shows that Cummins consistently failed to meet the physical and emotional needs of the children. CPS visits found the children hungry and dirty on more than one occasion, and the home was reported filthy and without running water. Cummins, while intoxicated, placed the children in physical danger and an automobile accident resulted. She also abandoned her youngest child in a store parking lot. Further, Cummins exposed the children to physical and sexual abuse while under her care. Accordingly, we find the record legally and factually sufficient to support involuntary termination of Cummins’ parental rights. Point three is overruled.

Cummins argues in point four that the trial court erred in finding that she failed to comply with a court-ordered “plan of service.” Having already found that the court’s termination was supported by sufficient evidence, we need not address this issue. Point four is overruled.

REX D. DAVIS
Chief Justice

Before Chief Justice Davis,
Justice Vance, and
Justice Gray
Affirmed
Opinion delivered and filed January 23, 2002
Do not publish
[CV06]

led to its enactment.” Stracener, 777 S.W.2d at 382. A policy provision frustrates the statute’s intended purpose by “limit[ing] the possibility that an injured insured can recover actual damages” or reducing protection “below the minimum [statutory] limits.” Id. at 383; Kidd, 997 S.W.2d at 270, 276.

Progressive’s policies state that it “will pay damages which a covered person is legally entitled to recover from the owner or operator of an uninsured motor vehicle.” A vehicle is underinsured where “its limit of liability” is insufficient to “pay the full amount the covered person is legally entitled to recover as damages” or “has been reduced by payment of claims to an amount which is not enough to pay the full amount the covered person is legally entitled to recover as damages.” The policies also contain a “Two or More Auto Policies” provision:

If this policy and any other auto insurance policy issued to you by us apply to the same accident, the maximum limit of our liability under all the policies shall not exceed the highest applicable limit of liability under one policy.

Citing Fidelity & Casualty Co. v. Gatlin, American Liberty Insurance Co. v.
Ranzau, Stracener, and Briggs, Kelley argues that this clause is an “other insurance” provision that “contravenes the purpose and intent of the UM Statute and is contrary to public policy.”[5]

In Gatlin, Margaret Gatlin was killed when the vehicle in which she was riding, owned by Mrs. James W. Talley, was struck by an uninsured motorist. See 470 S.W.2d 924, 925 (Tex. Civ. App.—Dallas 1971, no writ). Talley carried insurance with Republic Insurance Company, and Margaret’s husband carried insurance with Fidelity & Casualty Company. Id. Fidelity argued that Gatlin could not recover under its policy “because of the ‘pro rata clause of the Republic policy’ and the ‘excess insurance’ clause” in Fidelity’s policy. Id. at 926. The Dallas Court disagreed and held:

(1) that our uninsured motorist statute sets a minimum amount of coverage but it does not place a limit upon the total amount of recovery so long as that amount does not exceed the amount of actual loss; (2) that where the loss exceeds the limits of one policy, the insured may proceed under other available policies; (3) and that where uninsured motorist coverage has been provided, we cannot permit an insurer to avoid its statutorily imposed liability by its insertion into the policy of a liability limiting clause which restricts the insured from receiving the benefit of that coverage.

Id. at 928.

In Ranzau, Paula Ranzau suffered injuries when the vehicle in which she was riding, owned by Victor Raphael, was struck by an uninsured motorist. See 481 S.W.2d 793, 794-95 (Tex. 1972). Raphael carried insurance with USAA, and Paula’s father carried insurance with American Liberty. Id. at 795. USAA paid the policy limits to Paula, but American Liberty refused payment based on an “other insurance” provision in its policy. Id. The Supreme Court held that a provision may not “limit the recovery of actual damages caused by an uninsured motorist”:

The statute does not expressly or by any reasonable inference limit the recovery of actual damages to the statutory limits of required coverage for one policy in circumstances where the conditions to liability are present with respect to two policies with different insurers and insureds. This is the effect, however, of “other insurance” clauses, whether in the form of “pro-rata,” “excess insurance,” “excess-escape” or like clauses; one or the other insurer escapes liability, or both reduce their liability.

Id. at 797. The Court further held that “to permit one policy, or the other, to be reduced or rendered ineffective by a liability limiting clause would be to frustrate the insurance benefits which the statute sought to guarantee and which were purchased by the respective insureds.” Id.

In Briggs, Thomas and JoJean Briggs suffered injuries while riding in a vehicle belonging to their employer. See 514 S.W.2d at 234. The employer carried insurance with International Insurance Company and the Briggs carried insurance with American Motorists Insurance Company. Id. Both policies contained “other insurance” provisions. Id. The Briggs settled with International and won a judgment against American. Id. The Supreme Court held:

[W]henever coverage exists under the uninsured motorist endorsement, the person covered has a cause of action on the policy for his actual damages to the extent of the policy limits without regard to the existence of other insurance.
If coverage exists under two or more policies, liability on the policies is joint and several to the extent of plaintiff’s actual damages, subject to the qualification that no insurer may be required to pay in excess of its policy limits.   Id. at 236.  In Stracener, the Supreme Court considered two cases involving the stacking of underinsured benefits.  See 777 S.W.2d at 379-81.  In the first case, LaDonna Stracener was killed when the car in which she was riding was struck by a vehicle driven by Robert Lampe.  Id. at 380.  Stracener was covered by four insurance policies issued by different insurers.  Id.  All the insurers settled, except USAA.  Id.  The First Court held that the Straceners could not “combine or ‘stack’ the limits of underinsured motorist coverage under four separate insurance policies.”  Id. at 379.  In the second case, Scott Hestilow was injured when the car he was driving was struck by a vehicle driven by Alvino Casarez.  Id. at 380.  A settlement was reached with Casarez’s insurance carrier.  Id.  Scott’s parents each carried a policy with USAA.  Id.  The Fourth Court held that “coverage may be stacked,” but the “total coverage available to the beneficiary should be reduced by the limit of the tortfeasor’s liability coverage.”  Id. at 379.  Both Stracener and Hestilow involved clauses stating that the “limit of liability shall be reduced by the amount recovered or recoverable from, or on behalf of the owner or operator of an underinsured motor vehicle.”  Id. at 380.  The Supreme Court reversed Stracener and affirmed Hestilow, holding that “clauses in insurance policies which are not consistent with and do not further the purpose of article 5.06-1 [the uninsured/underinsured motorist statute] are invalid.”  Id. at 384.             Progressive attempts to distinguish Stracener, Briggs, Ranzau, and Gatlin, arguing that they do not address “two policies from the same insurer issued to the same named insured with injuries based on one accident.”  Progressive points to language in Ranzau stating that the uninsured/underinsured statute does not limit the recovery for actual damages “where the conditions to liability are present with respect to two policies with different insurers and insureds” and that “other insurance” clauses allow “one or the other insurer” to escape or “reduce their liability.”  481 S.W.2d at 797 (emphasis added).             We are not persuaded that this distinction affects our analysis.  In United Services Automobile Ass’n. v. Hestilow, one of the cases addressed in Stracener, the San Antonio Court addressed a similar issue, holding that the fact that the “insurance carrier is the same for the two separate policies” is irrelevant because: This is no different than had one of the policies been issued by a different insurance carrier. By allowing USAA to offset against each of its policies the amount paid by the underinsured motorist’s carrier, USAA would be receiving a double setoff, or a windfall. The insurance company collected premiums on statutorily required insurance coverage. To permit it to deny recovery under one of its policies would mean it profited under the statute to the insured’s detriment. Such cannot possibly have been the intention of the Legislature.   754 S.W.2d 754, 758-59 (Tex. App.—San Antonio 1988), aff'd, 777 S.W.2d 378 (Tex. 1989) (internal citations omitted).  In Travelers Indemnity Co. v. Lucas, the Lucases suffered injuries when the ambulance in which they were riding was struck by an uninsured motorist.  
See 678 S.W.2d 732, 733 (Tex. App.—Texarkana 1984, no writ).  The Lucases sought personal injury protection and uninsured motorist coverage under their two policies with Travelers.  Id.  Travelers paid the P.I.P. benefits under one policy, but refused to pay under the other policy.  Id. at 733-34.  The Texarkana Court held: An insurance company may not reduce its U.M. liability to an amount less than the policy limit by crediting to itself an amount paid under another policy. The Lucases are entitled to recover the U.M. coverage limits to the extent it does not exceed their actual damages.   Id. at 735 (internal citations omitted).             Neither are we convinced that it matters whether injuries arise out of “one accident.”  “Texas has traditionally permitted stacking of uninsured motorist coverage when different policies apply to the same accident.”  Hestilow, 754 S.W.2d at 757 (emphasis added).  Furthermore, the statute expressly defines an “underinsured motor vehicle” in the context of a single accident: The term “underinsured motor vehicle” means an insured motor vehicle on which there is valid and collectible liability insurance coverage with limits of liability for the owner or operator which were originally lower than, or have been reduced by payment of claims arising from the same accident to, an amount less than the limit of liability stated in the underinsured coverage of the insured’s policy.    Act of May 6, 1977, 65th Leg., R.S., ch. 182, 1977 Tex. Gen. Laws 370, repealed by Act of May 24, 2005, 79th Leg., R.S., ch. 727, § 18, 2005 Tex. Gen. Laws 1752, 2186-87 (current version at Tex. Ins. Code Ann. § 1952.103 (Vernon 2007)) (emphasis added).  In light of these authorities, we conclude that multiple policies may be stacked even though issued by the same insurer to the same insured for damages arising out of the same accident.  See Hestilow, 754 S.W.2d at 757-59; see also Lucas, 678 S.W.2d at 735.             Progressive next argues that the “Two or More Auto Policies” provision does not: (1) “limit an insured’s ability to recover an amount in excess of the policy limits” required by the statute; (2) “restrict the [statute’s] effect;” (3) “render Progressive an excess insurer (as in “other insurance” clauses) wherein the benefits paid to the insured are dependent upon the benefits received by the insured from a separate insurer;” or (4) give rise to the “uncertainties” of “other insurance” clauses as identified in Stracener.[6]  See 777 S.W.2d 378 at 383 (“uncertainties” include the limits of underinsured motorist coverage, “the limits of the tortfeasor’s liability insurance, the extent of damages suffered by any other persons who may have been involved in the same accident and the amount of any settlements made with the liability insurance carrier”).  According to Progressive, this clause is an anti-stacking provision that allows an insured to recover “actual damages in an amount for which the insured contracted with full disclosure of all terms of the applicable policy.”  Thus, Progressive argues that the underinsured limits available to Kelley “in the event of a single accident were disclosed at the time Mr. 
Kelley purchased the policies.”

However, the proper inquiry is whether a provision limits the insured’s ability to recover the extent of her actual damages incurred, not whether a provision limits an insured’s “actual damages in an amount for which the insured contracted.” See Stracener, 777 S.W.2d at 383; see also Briggs, 514 S.W.2d at 236; Ranzau, 481 S.W.2d at 797. Regardless of whether it qualifies as an “other insurance” clause, an anti-stacking clause, or some other type of clause, the “Two or More Auto Policies” provision may not be used to frustrate the intended purpose of the statute. See Stracener, 777 S.W.2d at 384 (“clauses in insurance policies which are not consistent with and do not further the purpose of article 5.06-1 are invalid”).

Although Texas law allows an insured to stack two or more policies to the extent of the insured’s actual damages, Progressive’s “Two or More Auto Policies” provision effectively prohibits the stacking of multiple policies. See id. at 382-83; see also Briggs, 514 S.W.2d at 236. In doing so, this clause improperly inhibits the injured insured’s ability to recover actual damages. See Ranzau, 481 S.W.2d at 797. This frustrates the very purpose of the statute. See Stracener, 777 S.W.2d at 383-84; see also Jankowiak, 201 S.W.3d at 212 (finding “limiting provisions” invalid to the extent they “provide less than the statutory minimum amount of coverage” or “limit a covered person’s recovery of actual damages”). Accordingly, we conclude that the “Two or More Auto Policies” clause is inconsistent with the uninsured/underinsured motorist statute and is invalid. See Stracener, 777 S.W.2d at 384; see also Jankowiak, 201 S.W.3d at 212. Kelley’s second issue is sustained.

Conclusion

Because Kelley has established as a matter of law that Progressive issued two separate policies of insurance and that Progressive’s “Two or More Auto Policies” provision violates public policy, we reverse the trial court’s judgment and render judgment that Kelley is entitled to recover under the second policy to the extent of her actual damages. We remand this cause to the trial court for further proceedings consistent with this opinion.

FELIPE REYNA
Justice

Before Chief Justice Gray,
Justice Vance, and
Justice Reyna
(Chief Justice Gray dissents without a separate opinion)
Reversed and rendered in part,
Reversed and remanded in part
Opinion delivered and filed December 12, 2007
[CV06]

[1] Kelley alleges that she has suffered over $1,000,000 in damages.

[2] The First Court accepted this contention for the sake of argument, but noted that the separate declaration sheets appeared to be a single document. See Monroe v. Gov’t Employees Ins. Co., 845 S.W.2d 394, 398 n.6 (Tex. App.—Houston [1st Dist.] 1992, writ denied).

[3] Progressive attached an excerpt from its “Product & Underwriting Guide” to its response to Kelley’s motion for summary judgment.

[4] The uninsured motorist statute was enacted in 1967 to provide uninsured protection and amended in 1977 to provide both uninsured and underinsured protection. See Act of May 3, 1967, 60th Leg., R.S., ch. 202, § 1, 1967 Tex. Gen. Laws 448, amended by Act of May 6, 1977, 65th Leg., R.S., ch. 182, § 1, 1977 Tex. Gen. Laws 370, repealed by Act of May 24, 2005, 79th Leg., R.S., ch. 727, § 18, 2005 Tex. Gen. Laws 1752, 2186-87 (current version at Tex. Ins. Code Ann. §§ 1952.101-1952.110 (Vernon 2007)). These provisions were repealed effective April 1, 2007. See Act of May 24, 2005, 79th Leg., R.S., ch. 727, § 18, 2005 Tex. Gen. Laws 1752, 2186-87. Because this case was filed in 2005, we apply the law in effect at that time.

[5] Kelley also relies on a Montana case. See Hardy v. Progressive Specialty Ins. Co., 67 P.3d 892 (Mont. 2003). Because Hardy addresses a Montana statute and Montana public policy, we do not consider it.

[6] Progressive points out that the “Two or More Auto Policies” provision is separate from the “other insurance” clause in the policy. An “other insurance” clause is contained elsewhere in the policy:

If there is other applicable liability insurance, we will pay only our share of the loss. Our share is the proportion that our limit of liability bears to the total of all applicable limits.

We accept that these two clauses are not one and the same.
66,082,930
Enrollment into Land of Emotions Preschool

1. Contact
To register your child in our preschool, please contact us by phone or fill out the form below.

2. Meeting
We will invite you to a meeting with us, during which we will give you a tour of the Land of Emotions and sign a contract of care and education for your child.

3. Registration fee
The registration fee can be paid in cash at the preschool or by bank transfer. Signing the agreement and paying the registration fee guarantees your reservation at the Land of Emotions.

Will your child be attending extracurricular activities? Yes / No
If yes, which: Dance / Piano Lessons / Robotics / Spanish

I consent to the processing of my personal data by Child First Sp. o.o. for the needs of the recruitment process of the child to Land of Emotions Preschool, in accordance with the Act of August 29, 1997 on personal data protection (Dz. U. of 2002, No. 101, item 926, as amended).
66,082,986
Q: jQuery mouseover/mouseout delay, multiple elements I have a series of parent divs (sharing the same class), each enclosing a child. When I mouseover a parent, I want to add a class to its child. When I mouseout, I want to remove that class with a 500ms delay. I figured setTimeout was the way to go for the delay. To prevent mouseout from triggering the removeClass after I've actually moved back over the parent, I thought of using clearTimeout. The problem is I cannot get clearTimeout to be triggered only if the newly hovered parent is the same as the previous one. The result is that if I hover from parent A to parent B in less than 500ms, removeClass is not triggered on parent A (as I would actually like it to be). I hope this makes sense. Any help greatly appreciated!

var timer;
$('.parent')
    .mouseover(function() {
        //clearTimeout(timer)
        $(this).children('.child').addClass('red');
    })
    .mouseleave(function() {
        that = this;
        timer = setTimeout(function() {
            $(that).children('.child').removeClass('red');
        }, 500);
    });

https://jsfiddle.net/andinse/1wnp82nm/

A: You should set a timeout specific to each .parent element and bind the relevant context to the setTimeout callback, e.g. using bind(), to avoid the shared-variable closure issue. Because a single timer variable is shared by all elements, hovering a second parent overwrites (or, with clearTimeout enabled, cancels) the first parent's pending timeout, so its class is never removed; storing the timer id on each element keeps them independent:

$('.parent')
    .mouseover(function() {
        // Cancel only this element's pending removal.
        clearTimeout(this.timer);
        $(this).children('.child').addClass('red');
    })
    .mouseleave(function() {
        // Store the timer id on the element itself so each .parent
        // manages its own delayed removal.
        this.timer = setTimeout(function() {
            $(this).children('.child').removeClass('red');
        }.bind(this), 500);
    });
66,083,065
Citrus and tropical flavours combine in this light & delicious Coconut Lime Cake with a sticky, zesty glaze.

Lime and coconut are two of my favourite flavours and they come together perfectly in this delightfully sticky and zesty Coconut Lime Cake. I love zingy, fresh flavours, as you can probably tell from the number of citrus recipes I have here on AVV. From deliciously sweet like my Sticky Orange Olive Oil Baked Donuts, to savoury in my Loaded Taco Fries with Lime Crema, there is something for everyone.

I can never resist a bargain and when I saw great big nets of limes at two for $5.00 I had to have them. Along with lots of gin and tonics, we have been eating plenty of this Coconut Lime Cake. The tropical flavours bring back welcome memories of summer which, after the recent storms here, feels like a lifetime ago.

The tender lime sponge is full of citrus flavour and heavily flecked with shredded coconut which lends a slightly chewy texture. Top that with an extra pop of flavour from the zesty, sticky lime glaze and you will be in tropical heaven!

Limes have a multitude of health benefits. Just one lime has around twenty-two milligrams of calcium and five milligrams of folate. They are also known to be anti-carcinogenic, help prevent kidney stones, lower cholesterol and combat aging skin. All great reasons to make and eat this Coconut Lime Cake!

Please leave your feedback below when you make this cake. I love to hear what you think! You can also share your pictures on Instagram or Twitter. I am @avirtualvegan. Use the #avirtualvegan on Instagram so I don't miss them.

Lime Coconut Cake
Melanie McDonald
Citrus and tropical flavours combine in this light & delicious Lime & Coconut Cake with a sticky, zesty glaze.
Prep Time 10 mins | Cook Time 45 mins | Total Time 55 mins
Course: Baked Goods, Dessert | Cuisine: vegan | Servings: 10 slices | Calories: 242 kcal

Ingredients

For the cake
120mls / ½ cup lime juice (3-4 large juicy limes)
3 limes, zested
1 tablespoon apple cider vinegar
190g / ¾ cup apple sauce
60mls / ¼ cup coconut oil in liquid form (unrefined will add to the coconut flavour but refined will do)
180g / 1 ¾ cups cake flour (you can sub all purpose but the cake won't be quite as light. If in the UK use sieved plain flour)
30g / ½ cup shredded coconut, unsweetened
¼ teaspoon salt
1 teaspoon baking powder (an aluminium-free one works really well)
¼ teaspoon baking soda
100g / ½ cup cane sugar (any other granulated sugar will work here)

For the icing
65g / ½ cup natural powdered sugar (I use an organic & vegan brand)
2 - 3 tablespoons lime juice
1 lime, zested

INSTRUCTIONS

Preheat your oven to 350 degrees F.
Grease and line a cake tin with parchment paper or use loaf tin liners. I used a 9 x 5 USA Pan loaf tin.
Add all the wet ingredients to a bowl or jug and mix well.
Add all the dry ingredients to a mixing bowl and stir to combine.
Add the wet mixture to the dry mixture and stir until just combined. DO NOT over mix. Just mix until you cannot see any dry bits of flour. Work quickly. You will start to see bubbles forming as you mix and the quicker you get it in the oven the better the cake will be.
Place in the centre of your oven and bake for 40 - 50 mins or until a toothpick or skewer inserted into the centre comes out clean.
Remove from oven and cool on a cooling rack.

For the icing
Put the powdered sugar in a small bowl.
Add lime juice 1 tablespoon at a time, stirring well between each addition. You won't need as much as you think you will. Stop when you have a thickish, but pourable icing. If you add too much lime juice just add a little more powdered sugar.
Pour the icing over the top of the cooled cake. It looks good when you let it run down the side in little streams.
Grate a little lime zest on the top to add a pop of colour!

NUTRITION
Serving: 1 slice | Calories: 242 kcal | Carbohydrates: 41 g | Protein: 2.4 g | Sodium: 166 mg | Fiber: 1.8 g | Sugar: 19 g | Vitamin C: 9.9 mg | Calcium: 50 mg | Iron: 2.2 mg
Nutritional information is provided for convenience & as a courtesy only. The data is a computer generated estimate so should be used as a guide only.
66,083,195
LinkStation Live

Buffalo's new LinkStation Live (HS-DHxxxGL) lineup sticks to the general HDD bump scheme, with options for 250GB, 320GB and 500GB hard drives (at the respective prices of roughly $287, $306 and $441), spices things up a bit with DLNA for media pushing, and breaks new ground with iTunes server functionality. It seems as if they are seeking to abandon the PPC/Freescale and MIPSel/IDT platforms/processors altogether in favor of the system-on-chip ARM/Marvell processor. The device is the same from a hardware standpoint as the LSPro, just as the HS is the same from a hardware standpoint as the HG.

In the UK the 500GB version is available from Staples stores at £159.99. This price has now dropped to £89.99. It dropped again in Germany to €139 in Nov 2008.

Special Features

Inside the LS Live.

Seamlessly integrates with iTunes® 7 and allows you to access your music files on the LinkStation from your iTunes software.
66,083,306
believes that eliminating religion will end all wars
gets in a fight about it
66,083,369
The invention relates generally to multiple compressor refrigeration systems, and more particularly to improvements in suction pressure controls for the compressors.

In recent years many advances have been made in the refrigeration art and especially in the commercial refrigeration field, which includes supermarket refrigeration and like installations having heavy refrigeration requirements over a wide range of temperatures from about -40°F to about 50°F. So-called central refrigeration systems of the heavy multiplexing type utilize several compressors (typically either two or four) connected for parallel operation to effect refrigerant flow to and from the evaporators of a large number of refrigerated fixtures.

Multiple compressor systems are generally controlled by pressure sensitive switches responsive to the suction pressure at the compressors' intake, so that as the suction pressure fluctuates in response to increases or decreases in system loads, the compressors will cycle on and off to maintain the common suction pressure on the system within prescribed limits as required to maintain proper temperature control of the refrigerated fixtures. Fluctuation of the suction pressure is influenced by various internal (system) factors including temperature controls, defrosting apparatus and the like, and by several external factors including product loading of refrigerated fixtures, ambient temperatures and the like, and at times sudden transient increases in suction pressure may cause one or more idle compressors to start, thereby rapidly reducing the suction pressure to the point where such compressors will cycle off again. Since these suction pressure changes are frequently transient in nature, the capacity of the operating compressors in the system would often be adequate to restore the normal suction pressure before there is any significant influence on the refrigerated compartment temperatures of the fixtures. However, if the thermal load change causing the suction pressure (temperature) rise is of long duration, then the operation of one or more additional compressors may be necessary to maintain normal refrigerated compartment temperatures. It is apparent that electric power consumption will be reduced in the overall operation of the refrigeration system if additional compressors are not started in response to transient load increases.

In the past, electric time delay relays have been used for delaying the start of additional compressors sensing an increase in suction pressure, but such electric relays are insensitive to the actual magnitude of suction pressure, whereby the compressor controlled thereby will start no matter how small the difference between actual suction pressure and the pressure switch setting of the compressor. In short, heretofore there has been no simple, positive acting, suction pressure control for effectively obviating on/off compressor cycling due to sudden temporary or transient suction pressure changes.
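As a rough illustration of what such a control should do differently from a plain time-delay relay, the sketch below (entirely hypothetical; the class name, setpoint, margin, and timing values are invented and not taken from the patent) starts an additional compressor only when the suction pressure exceeds its cut-in setting by a meaningful margin for a sustained period:

    # Hypothetical sketch of magnitude- and duration-sensitive staging logic.
    class StagingControl:
        def __init__(self, cut_in=35.0, margin=2.0, hold_s=120.0):
            self.cut_in = cut_in      # psig at which another compressor may start
            self.margin = margin      # required excess over the setpoint (psig)
            self.hold_s = hold_s      # rise must persist this long (seconds)
            self._high_since = None

        def update(self, suction_psig, t):
            """Call periodically; returns True when a compressor should start."""
            if suction_psig > self.cut_in + self.margin:
                if self._high_since is None:
                    self._high_since = t  # start timing the sustained rise
                return (t - self._high_since) >= self.hold_s
            self._high_since = None       # transient passed; reset the hold timer
            return False

A plain electric time-delay relay corresponds to dropping the margin test: the start is merely delayed but still fires however small the excess pressure, which is exactly the shortcoming the passage above describes.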
66,083,385
Drawing upon decades of experience, RAND provides research services, systematic analysis, and innovative thinking to a global clientele that includes government agencies, foundations, and private-sector firms. The Pardee RAND Graduate School (PRGS.edu) is the largest public policy Ph.D. program in the nation and the only program based at an independent public policy research organization—the RAND Corporation.

Despite increasing interest and investments in climate adaptation science, the implementation of adaptation plans through institutional policies or other actions designed to reduce health vulnerabilities has been slow. Institutionalized assumptions are an important roadblock.

The Nuclear Waste Administration Act (S. 1240) appears to strike a balance between the competing values of public accountability and insulation from political influence, write Lynn Davis and Debra Knopman.

This essay will argue that long-term emissions reduction goals currently proposed before Congress at best only highlight the magnitude of the climate change challenge, without contributing much to a solution.

Instead of setting an arbitrary Production Tax Credit value for producers of renewable energy, we could provide a tax credit based on the social value of clean electricity generation, writes Constantine Samaras.

A federal government corporation and an independent government agency are the two most promising models for a new organization to manage and dispose of spent nuclear fuel and high-level radioactive waste in the United States. RAND researchers describe the attributes of potential organizational models and the steps needed to choose the form of a new organization charged with managing and disposing of commercial and defense high-level radioactive materials.

Developing an integrated perspective on mitigation, adaptation and residual climate impacts could be aided by identifying a set of global narratives and socio-economic pathways offering scalability to different regional contexts.

Achieving the potential economic and national security benefits offered by alternative fuels requires that their domestic production must be an appreciable fraction of domestic demand for liquid fuels. Alternative fuels derived from oil shale and coal have the potential to meet that important criterion.

To combat climate change, the British government has thus far valued the cost of carbon emissions based on how much people should pay, rather than how much they are willing to pay, or the value they place on carbon emissions reduction. An analysis of a series of RAND Europe studies suggests there is an opportunity for a large consumer surplus — a social benefit — by introducing a carbon tax to pay for the damages caused by carbon emissions.

To break the impasse over how to deal with spent nuclear fuel from commercial nuclear power plants, policymakers should focus on how various waste management strategies address societal priorities related to nuclear energy.

The U.S. Environmental Protection Agency ended a voluntary national program that encouraged facilities to improve all aspects of their environmental performance. The significant environmental challenges that the U.S. faces require it to continue to seek complements to traditional regulatory approaches.

Devising policies to mitigate greenhouse gases responsible for climate change is one of the great challenges facing the U.S. Options that are effective and politically feasible must not just be cost-effective but also consider the realities of passing major federal legislation with widespread impacts on U.S. producers and consumers.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest.
66,083,396
Molecular characterization of hepatitis B virus isolates from Zimbabwean blood donors. Hepatitis B virus (HBV) is endemic in Africa, being hyperendemic in sub-Saharan Africa. Genotypes A, D, and E circulate in Africa, showing a distinct geographical distribution. The aim of the present study was to determine the HBV genotype distribution in blood donors from different geographical locations in Zimbabwe. Using a restriction fragment length polymorphism assay, sequencing of the basic core promoter/precore region and of the complete S open reading frame showed that 29 HBV isolates from geographically distinct regions belong to subgenotype A1. The complete genome of two of these Zimbabwean HBV isolates was sequenced. Forty-four percent of the Zimbabwean HBV isolates (11/23) were characterized by a G1862C missense mutation, which causes a Val to Leu amino acid substitution at position 17 of the precore region. The majority of Zimbabwean HBV isolates clustered with a number of South African HBV isolates, with which they shared characteristic amino acids in the preS1, preS2, and polymerase spacer regions. The wide distribution of subgenotype A1 in Africa, as well as the high intragroup divergence and the geographical clustering of the African and Asian subgenotype A1 HBV isolates, indicates that this subgenotype has had a long period of endemicity in these regions.
66,083,730
---
abstract: 'We theoretically study physical properties of the most promising color center candidates for the recently observed single-photon emissions in hexagonal boron nitride (h-BN) monolayers. Through our group theory analysis combined with density functional theory (DFT) calculations we provide several pieces of evidence that the electronic properties of the color centers match the characters of the experimentally observed emitters. We calculate the symmetry-adapted multi-electron wavefunctions of the defects using group theory methods and analyze the spin-orbit and spin-spin interactions in detail. We also identify the radiative and non-radiative transition channels for each color center. An advanced [*ab initio* ]{}DFT method is then used to compute energy levels of the color centers and their zero-phonon-line (ZPL) emissions. The computed ZPLs, the profile of excitation and emission dipole polarizations, and the competing relaxation processes are discussed and matched with the observed emission lines. By providing evidence for the relation between single-photon emitters and local defects in h-BN, this work provides the first steps towards harnessing quantum dynamics of these color centers.'
author:
- Mehdi Abdi
- 'Jyh-Pin Chou'
- Adam Gali
- 'Martin B. Plenio'
bibliography:
- 'origin.bib'
title: 'Color centers in hexagonal boron nitride monolayers: A group theory and *ab initio* analysis'
---

Introduction
============

Thanks to the advances in fabrication and control, two-dimensional (2D) materials are under intense investigation nowadays for their potential applications in many areas including quantum technology [@Britnell2012; @Georgiou2013; @He2015]. This ranges from quantum nanophotonics [@Xia2014; @Clark2016; @Shiue2017] to quantum sensing [@Lee2012; @Abderrahmane2014; @Li2014; @Abdi2016] and quantum information processing [@Cai2013; @Tran2016a; @Abdi2017]. Recently, there has been an increasing interest in photon emitters in 2D materials [@Srivastava2015; @Chakraborty2015; @Palacios2016], in general, and layers of hexagonal boron nitride (h-BN), in particular [@Aharonovich2016]. The recent reports on the observation of single-photon emission from a few layers of hexagonal boron nitride samples have sparked considerable interest. These experiments have provided a broad spectrum of data on the single-photon emitters from h-BN both in the visible [@Tran2016a; @Tran2016b; @Chejanovsky2016; @Jungwirth2016; @Martinez2016; @Schell2016; @Shotan2016; @Jungwirth2017; @Li2017; @Exarhos2017] and the UV [@Museur2008; @Bourrellier2016; @Vuong2016]. The origin of the emissions, however, is still under study. Nevertheless, based on the experimental observations and density functional theory (DFT) computations, there is evidence pointing to local defects, the ‘color centers’ [@Wong2015; @Tran2016b]. Several color center candidates have already been suggested. Despite the considerable research efforts made so far, detailed experimental investigations are still required to fully unveil the origin of these optical emissions [@Jimenez1997; @Museur2008; @Jin2009]. Color centers in 2D materials are envisioned to have applications in nanophotonics and, provided their electronic structure and magnetic properties are well known, they could be employed for many other purposes from quantum sensing to quantum information processing [@Abdi2017].
Since they have the privilege of sitting in a 2D material, which means natural proximity to the surface, their sensitivity to the surrounding environment is expected to be high. Their high-quality zero-phonon-line (ZPL) emission, on the other hand, makes them promising high-quantum-efficiency spin-photon interfaces. A better understanding of the observed emissions from 2D materials and their origins is required for their control, which cannot be achieved without more theoretical investigations. The pioneering theoretical works focused on computational methods to study the stability and structural properties of h-BN flakes accommodating local defects [@Orellana2001; @Mosuang2002; @Azevedo2007] as well as their electronic and magnetic properties [@Si2007; @Azevedo2009; @Topsakal2009; @Okada2009]. Further investigations clarified that some defect families exhibit deep band-gap levels with partial occupation, which indicate allowed electric dipole transitions at optical frequencies and beyond [@Attaccalite2011; @Huang2012b]. Other first-principles considerations do not exclude dislocation lines, multi-vacancy sites, and topological defects like Stone-Wales as a source of emission [@Zobelli2006; @Cretu2014; @Yin2010; @Wang2016]. However, the similarities of the h-BN optical emitters to the known color centers in other materials, e.g. the nitrogen-vacancy (NV) and silicon-vacancy (SiV) centers in diamond [@Aharonovich2016], have tentatively brought the attention of the community to substitutional and vacancy defects. In particular, it is observed that the emitters can be created by ion bombardment in a controlled fashion [@Choi2016]. More recently, several DFT computational analyses have attributed the emissions to charge-neutral native and substitutional defects with deep band-gap energy levels [@Li2017; @Tawfik2017; @Cheng2017]. These reports lay the groundwork for further research on the physics of such defects. But a group theory analysis accompanied by DFT is necessary for a deeper understanding of these color centers and can guide future experimental and theoretical works, a route that has proven beneficial for diamond color centers, e.g. the NV and SiV centers [@Doherty2011; @Maze2011; @Hepp2014]. Nonetheless, to the best of our knowledge, there has been no group theoretical investigation of these single-photon emission candidates. To provide a better understanding of their electronic and magnetic properties, here we provide an analysis based on symmetry observations for a few of the most relevant candidates, and we take advantage of the observations made in the experiments and of our [*ab initio* ]{}calculations to explain such properties of the optical emitters via group theory. By determining the symmetry-adapted total wavefunctions of the multi-electron states, we present the energy ordering of such states, aided by our advanced DFT calculations. We perform thorough analyses of the effect of spin-orbit and spin-spin interactions as well as applied electric fields, as a first, unavoidable step, at the fixed coordinates of the atoms associated with the potential energy surface minimum of the given electronic configuration. The effect of electron-phonon coupling on the results will be discussed briefly as well. These studies allow us to conclude that the defects we are investigating here correlate well with the observed emitter species in the experiments.
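For orientation (this expression is standard and not reproduced from the paper's own equations), the electronic spin-spin interaction referred to above is the magnetic dipole-dipole coupling; for two electron spins $\mathbf{s}_1$ and $\mathbf{s}_2$ separated by $\mathbf{r}$ it reads

$$H_{\rm ss} = \frac{\mu_0\gamma_e^2\hbar^2}{4\pi|\mathbf{r}|^3}\left[\mathbf{s}_1\cdot\mathbf{s}_2 - 3\,(\mathbf{s}_1\cdot\hat{\mathbf{r}})(\mathbf{s}_2\cdot\hat{\mathbf{r}})\right],$$

with $\gamma_e$ the electron gyromagnetic ratio. It is this term that, as discussed below, splits the quartet states into two two-fold degenerate pairs.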
Remarkably, their electronic and magnetic properties allow for applications in quantum control and information processing [@Cai2013]. Hexagonal boron nitride can accommodate a wide variety of local defects in its lattice structure. This includes the most stable vacancy-incorporated representatives: the boron vacancy [V$_{\rm B}$]{}, nitrogen vacancies [V$_{\rm N}$]{}, a complex anti-site which is a nitrogen vacancy next to a nitrogen anti-site [V$_{\rm N}$N$_{\rm B}$]{}, and substitutional anti-site defects like $\mathrm{V_NC_B}$ \[see Fig. \[fig:scheme\]\]. Our focus will be on the cases that can offer a platform for future quantum technological applications due to their nontrivial ground spin state. That is, those defects whose ground state is not a spin singlet. It is worth mentioning here that defects with an electronic spin-singlet ground state can still be interesting provided their optical excited state has a nonzero spin state and is in strong interaction with some neighboring spin systems, e.g. nuclear spins, though demanding more complicated control protocols. The observations on optical single-photon emitters strongly support the likelihood of defects with a vacancy. Furthermore, such defects are being created in a rather controllable way by ion irradiation [@Grosso2017]. Therefore, we place our focus on defects with one vacancy. Observation of [V$_{\rm N}$]{} and [V$_{\rm B}$]{} defects has already been reported in a TEM experimental work [@Jin2009]; however, visual evidence of the existence of [V$_{\rm N}$N$_{\rm B}$]{} and $\mathrm{V_NC_B}$ has yet to appear.

![The geometry of various possible defects in h-BN. Note that [V$_{\rm N}$N$_{\rm B}$]{} and [V$_{\rm N}$C$_{\rm B}$]{} defects have $C_{2v}$ point group symmetry with their axis of symmetry lying in the plane ($x$-axis here), while the rest of the defects have $D_{3h}$ point group symmetry with the symmetry axis pointing out of the plane ($z$-axis).[]{data-label="fig:scheme"}](Defects.pdf){width="0.7\columnwidth"}

In this paper, we provide a thorough group theoretical analysis combined with DFT calculations for the defects of our interest, namely the neutral [V$_{\rm N}$N$_{\rm B}$]{} and the negatively charged [V$_{\rm B}$]{}. The order of the many-body levels is determined by our advanced DFT calculations. Note that the positively charged [V$_{\rm N}$C$_{\rm B}$]{}, as well as the singlet-state situations resulting from the negatively and positively charged [V$_{\rm N}$N$_{\rm B}$]{} and the neutral [V$_{\rm N}$C$_{\rm B}$]{}, are not excluded from being the source of single-photon emission. Therefore, their electronic diagrams and spin properties are discussed in the supporting information.

Neutral [V$_{\rm N}$N$_{\rm B}$]{}
==================================

The geometry of this defect is shown in Fig. \[fig:scheme\]. As supported by previous [*ab initio* ]{}simulations, it has $C_{2v}$ point group symmetry [@Tran2016a]. Our hybrid DFT calculations also support this conclusion \[see Fig. \[fig:vnnb\](b)\]. Here we apply the molecular orbital technique to build up the total wavefunctions and hence our subsequent theory. The identification of defect energy levels in the band-gap as well as the localization of the corresponding orbitals allows us to analyze them via a ‘defect molecular diagram’. Because of the hexagonal structure of the lattice, the $\sigma$-dangling bonds around the vacancy are $sp^2$ orbitals, which result from hybridization of the $2s$, $2p_x$, and $2p_y$ orbitals; hence they lie mostly in the plane of the layer.
Meanwhile, the $2p_z$ orbitals perpendicular to the plane provide $\pi$-dangling bonds in the monolayer case. This also holds for a multilayer membrane, as the inter-layer bonds are of weak van der Waals nature. Therefore, the atoms in the neighboring layers do not significantly affect the defect dynamics. This has been verified by [*ab initio* ]{}computations [@Tran2016a], where the electronic structures of point defects in mono- and three-layer h-BN membranes show negligible discrepancies. The $x$-axis is the symmetry axis of the [V$_{\rm N}$N$_{\rm B}$]{} defect, pointing from the vacancy to the nitrogen atom \[Fig. \[fig:scheme\]\]. The atomic dangling bonds are named after their variety ($\sigma$ or $\pi$) and the atom of origin: $\{\sigma_N, \sigma_{B_1}, \sigma_{B_2}, \pi_N, \pi_{B_1}, \pi_{B_2}\}$. Construction of the symmetry-adapted molecular orbitals (MOs) facilitates our further analyses. They provide the basis functions that diagonalize the attractive Coulomb Hamiltonian of the defect. These MOs are linear combinations of the set of atomic orbitals listed above and form bases for the irreducible representations of the defect point group. One finds them by applying the projection method [@Cornwell1997]. The MOs in energy order from lowest to highest are [@Abdi2017; @Cheng2017]: $a_1^{(1)} = \alpha\sigma_N +\frac{\beta}{\sqrt{2}}(\sigma_{B_1} +\sigma_{B_2})$, $a_1^{(2)} = \beta\sigma_N +\frac{\alpha}{\sqrt{2}}(\sigma_{B_1} +\sigma_{B_2})$, $b_2^{(1)} = \alpha'\pi_N +\frac{\beta'}{\sqrt{2}}(\pi_{B_1} +\pi_{B_2})$, $b_2^{(2)} = \beta'\pi_N +\frac{\alpha'}{\sqrt{2}}(\pi_{B_1} +\pi_{B_2})$, $b_1 = \frac{1}{\sqrt{2}}(\sigma_{B_1} -\sigma_{B_2})$, $a_2 = \frac{1}{\sqrt{2}}(\pi_{B_1} -\pi_{B_2})$, where the coefficients $\alpha$, $\beta$, $\alpha'$, and $\beta'$ are overlap integrals with ${\left|\alpha\right|}^2 +{\left|\beta\right|}^2 = 1$ and ${\left|\alpha'\right|}^2 +{\left|\beta'\right|}^2 = 1$. We have named the single-electron orbitals after the symmetry of the irreducible representation they transform like. In the ground state, the five defect electrons fill two orbitals and half-occupy a third. The [*ab initio* ]{}calculations show that the fully occupied $a_1^{(1)}$ orbital lies deep in the valence band. Three of these orbitals, hosting three electrons, are placed well within the band gap. Meanwhile, two unoccupied orbitals are located in the conduction band. The electronic configuration is sketched in Fig. \[fig:vnnb\]. The energy spacing of the orbitals is such that the effect of those located in the valence and conduction bands can be safely neglected. We thus only consider the band-gap orbitals and electrons in our study. Hence, the ground state is $[a_1]^2[b_2]^1[b_2']^0$ and a few possible excited states are $[a_1]^1[b_2]^2[b_2']^0$, $[a_1]^2[b_2]^0[b_2']^1$, and $[a_1]^1[b_2]^1[b_2']^1$. Here, we have adopted a shorthand for the orbitals, $b_2 \equiv b_2^{(1)}$ and $b_2' \equiv b_2^{(2)}$, which will be used in the following analysis. The superscripts indicate the number of electrons occupying each orbital. In our DFT calculations the spin-polarized defect levels appear in the fundamental band gap of h-BN. The occupation \[Fig. \[fig:vnnb\](a)\] and symmetry \[Fig. \[fig:vnnb\](b)\] of these wavefunctions fully support the group theory analysis. The empty $b_2'$ spin-polarized levels lie in the gap; hence, intra-defect optical transitions are viable.
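As an illustration of the projection method used to obtain the MOs above (a sketch only; we write the $C_{2v}$ operations for the symmetry axis along $x$ and take the $B_1$ characters $(+1,-1,+1,-1)$ under $(E, C_2, \sigma_{xy}, \sigma_{xz})$, consistent with the orbital labels above), the symmetry-adapted combinations follow from the standard projector $$P^{(\Gamma)}=\frac{d_\Gamma}{|G|}\sum_{R\in C_{2v}}\chi^{(\Gamma)}(R)^{*}\,\hat R.$$ Acting with $P^{(B_1)}$ on $\sigma_{B_1}$, and using $\hat C_2\sigma_{B_1}=\hat\sigma_{xz}\sigma_{B_1}=\sigma_{B_2}$ and $\hat\sigma_{xy}\sigma_{B_1}=\sigma_{B_1}$, one finds $$P^{(B_1)}\sigma_{B_1}\propto \sigma_{B_1}-\sigma_{B_2}+\sigma_{B_1}-\sigma_{B_2}=2\left(\sigma_{B_1}-\sigma_{B_2}\right),$$ which upon normalization reproduces the $b_1$ orbital listed above.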
![image](VnNb.pdf){width="\textwidth"}

Multi-electron states
---------------------

The multi-electron excited states calculated by our [*ab initio* ]{}method are presented in Fig. \[fig:vnnb\](c). The schematic energy levels with the possible dipole-induced and non-radiative transitions are also summarized in Fig. \[fig:vnnb\](d). We combine the tensor products of the $a_1$, $b_2$, and $b_2'$ states with the total spin eigenstates to construct the basis set, from which the multi-particle states compatible with $\bar C_{2v}$ can be calculated. Here, the ground state and the first two excited states are constructed from a filled and a half-occupied orbital. Therefore, their multi-electron spin state can only be the antisymmetric doublet; all other combinations are rejected by Pauli’s exclusion principle. For the third excited state, where three electrons occupy three different orbitals, both doublet (symmetric and antisymmetric) and quartet spin states are possible. The total wavefunctions of these four electronic configurations in terms of Slater determinants are summarized in Table \[tab:vnnb\_khonsa\]. (Note that the primes in ${}^2B_2'$, ${}^2A_1'$, and ${}^2A_1''$ do not correspond to other irreducible representations and only serve to distinguish the states.) The order of the energy levels in the third excited state is estimated by Hund’s rules: the state with the highest multiplicity lies at the lowest energy. Therefore, the quartet comes first, followed by the doublet with the antisymmetric spatial wavefunction, which minimizes the Coulomb repulsion of the electrons. As we will see below, the four-fold degeneracy of the quartet states is lifted and reduced to two two-fold degeneracies by the electronic spin-spin dipole interaction.

  configuration              ${}^{2S+1}\Gamma_o$       total wavefunction                                                                                                                                     label
  -------------------------- ------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------ ---------------------------------------------
  $[a_1]^2[b_2]^1[b_2']^0$   $\prescript{2}{}{B}_2$    ${\vert{a_1\overline{a}_1b_2}\rangle}, {\vert{a_1\overline{a}_1\overline{b}_2}\rangle}$                                                                ${\mathscr{B}^{0,\rm d}_{\pm\sfrac{1}{2}}}$
  $[a_1]^1[b_2]^2[b_2']^0$   $\prescript{2}{}{A}_1$    ${\vert{a_1b_2\overline{b}_2}\rangle}, {\vert{\overline{a}_1b_2\overline{b}_2}\rangle}$                                                                ${\mathscr{A}^{1,\rm d}_{\pm\sfrac{1}{2}}}$
  $[a_1]^2[b_2]^0[b_2']^1$   $\prescript{2}{}{B}_2'$   ${\vert{a_1\overline{a}_1b_2'}\rangle}, {\vert{a_1\overline{a}_1\overline{b}_2'}\rangle}$                                                              ${\mathscr{B}^{2,\rm d}_{\pm\sfrac{1}{2}}}$
  $[a_1]^1[b_2]^1[b_2']^1$   $\prescript{4}{}{A}_1$    ${\vert{\overline{a}_1b_2b_2'}\rangle}+{\vert{a_1\overline{b}_2b_2'}\rangle}+{\vert{a_1b_2\overline{b}_2'}\rangle}$                                    $\mathscr{A}^{3,\rm q}_{+\sfrac{1}{2}}$
                                                       ${\vert{\overline{a}_1\overline{b}_2b_2'}\rangle}+{\vert{\overline{a}_1b_2\overline{b}_2'}\rangle}+{\vert{a_1\overline{b}_2\overline{b}_2'}\rangle}$   $\mathscr{A}^{3,\rm q}_{-\sfrac{1}{2}}$
                                                       ${\vert{a_1b_2b_2'}\rangle}, {\vert{\overline{a}_1\overline{b}_2\overline{b}_2'}\rangle}$                                                              ${\mathscr{A}^{3,\rm q}_{\pm\sfrac{3}{2}}}$
                             ${}^2A_1'$                ${\vert{\overline{a}_1b_2b_2'}\rangle}+{\vert{a_1\overline{b}_2b_2'}\rangle}-2{\vert{a_1b_2\overline{b}_2'}\rangle}$                                   $\mathscr{A}^{3,\rm d'}_{+\sfrac{1}{2}}$
                                                       ${\vert{\overline{a}_1\overline{b}_2b_2'}\rangle}+{\vert{\overline{a}_1b_2\overline{b}_2'}\rangle}-2{\vert{a_1\overline{b}_2\overline{b}_2'}\rangle}$  $\mathscr{A}^{3,\rm d'}_{-\sfrac{1}{2}}$
                             ${}^2A_1''$               ${\vert{a_1\overline{b}_2b_2'}\rangle}-{\vert{\overline{a}_1b_2b_2'}\rangle}$                                                                          $\mathscr{A}^{3,\rm d}_{+\sfrac{1}{2}}$
                                                       ${\vert{a_1\overline{b}_2\overline{b}_2'}\rangle}-{\vert{\overline{a}_1b_2\overline{b}_2'}\rangle}$                                                    $\mathscr{A}^{3,\rm d}_{-\sfrac{1}{2}}$

  : \[tab:vnnb\_khonsa\] The configurations and spin-orbit total wavefunctions of the neutral [V$_{\rm N}$N$_{\rm B}$]{} in terms of superpositions of Slater determinants. The prime in $\mathrm{d}'$ indicates the symmetric nature of the doublet spin wavefunction. A bar over an orbital in a Slater determinant indicates the spin-down state of the electron in that orbital.

We calculated the lowest excitation energies, which may be directly compared to experimental data and reveal the position of the quartet level with respect to the levels of the doublet excited states. Since the quartet level may play a role in intersystem crossing (ISC) processes, knowledge of the ordering of these levels is of high importance. We created the excited states in the $\Delta$SCF procedure as obtained in the group theory analysis; the spin-polarized levels and their occupations are shown in Fig. \[fig:vnnb\](c). The corresponding excitation energies are listed in Table \[tab:VN\_NB-energies\]. We demonstrate for this defect that the popular semi-local PBE DFT functional [@Perdew1996] strongly underestimates the zero-phonon-line energies, whereas HSE provides reliable energies. Indeed, the calculated first zero-phonon-line excitation energy is close to the detected one [@Tran2016a]. In previous studies, the vertical excitation energy of the defect calculated at the PBE DFT level was inaccurately compared to the measured zero-phonon-line energy. Our calculations show that the error in the ZPL calculated with the PBE functional and the neglect of the relaxation energy almost cancel each other. Overall, the calculated HSE ZPL energy of the charge-neutral [V$_{\rm N}$N$_{\rm B}$]{} indeed shows good correspondence with the signature of the single-photon emitter in h-BN. Interestingly, the calculated quartet $^{4}A_1$ level lies between the levels of the two doublet excited states. Although the calculated gaps between these levels are relatively large, ultraviolet or optical two-photon excitations may lead to ISC processes. Furthermore, it is worth mentioning that ${}^2B_2$ and ${}^2B_2'$ have the same symmetry, and thus Coulomb correlation effects can mix them. That is, the states that diagonalize the Coulomb Hamiltonian are ${\vert{{\mathscr{B}^{0,\rm d}_{\pm\sfrac{1}{2}}}}\rangle} +\kappa{\vert{{\mathscr{B}^{2,\rm d}_{\pm\sfrac{1}{2}}}}\rangle}$ and $\kappa{\vert{{\mathscr{B}^{0,\rm d}_{\pm\sfrac{1}{2}}}}\rangle} +{\vert{{\mathscr{B}^{2,\rm d}_{\pm\sfrac{1}{2}}}}\rangle}$. The large energy difference between the two states, $3.65$ eV, on the other hand, is expected to reduce the degree of mixing such that $\kappa \approx 0$; in first-order perturbation theory $\kappa$ is of the order of the Coulomb matrix element between the two configurations divided by this energy difference, so even a coupling of a few hundred meV would yield $\kappa \lesssim 0.1$. Hence, for our current study the mixing will be neglected and the multi-particle states listed in Table \[tab:vnnb\_khonsa\] are assumed to be reasonably valid. This effect, however, can slightly modify the dipole-allowed transitions of the system and thus its excitation-emission dynamics, as we will discuss below.

Spin interactions
-----------------

We now study the effect of the spin interactions on the electronic structure and dynamical properties of the defect by applying time-independent perturbation theory. As we will find out, the spin-orbit and spin-spin interactions do not lift the Kramers degeneracy of the states. However, they can provide non-radiative transitions.
In short notation, the spin-orbit interaction Hamiltonian is $H_{\rm so}=\sum_j\sum_\alpha \ell^\alpha_j s^\alpha_j$, where $j=1,2,3$ counts the particles, while $\alpha$ addresses the irreducible representations according to which the orbital and spin angular momenta transform [@Lenef1996; @Marian2001]. Here, $\bm\ell_j = (1/2m^2c^2)\bm\nabla V(\bm{r}_j)\times\bm{p}_j$, with $V(\bm{r})$ the Coulomb potential of the nuclei, and $\bm{s}_j$ are the orbital angular momentum and spin vectors of the $j$-th electron, respectively. Group theory allows us to predict the potentially nonzero matrix elements of a Hamiltonian. In general, the matrix element ${\langle{\psi}\vert} O {\vert{\phi}\rangle}$ vanishes if $\Gamma(\psi)\otimes\Gamma(O)\otimes\Gamma(\phi) \not\supset \Gamma_1$, where $\Gamma(X)$ is the irreducible representation of the operator or wavefunction $X$ and $\Gamma_1$ is the totally symmetric irreducible representation. One thus concludes from the $C_{2v}$ character table that only the $\ell_y$ component of the orbital angular momentum can give nonzero values. (The angular momenta transform like axial vectors and have no $A_1$ component.) This simplifies the spin-orbit Hamiltonian to $H_{\rm so}=\sum_j\ell_{y,j}s_{y,j}$.

                               $^{2}A_1$   $^{2}B_2'$   $^{4}A_1$
  ---------------------------- ----------- ------------ -----------
  Vertical absorption energy   2.53        4.02         3.17
  Relaxation energy            0.48        0.37         0.34
  Zero phonon line             2.05        3.65         2.83[^1]
  Zero phonon line, PBE        1.40        2.98         2.54^a^

  : \[tab:VN\_NB-energies\] Optical transition energies of the [V$_{\rm N}$N$_{\rm B}$]{} defect as calculated by the HSE $\Delta$SCF method, in units of electronvolts.

Another effect of spin is the direct magnetic dipole moment interaction among the electrons and nuclei. We neglect the interaction of the nuclear spins with the electronic spins, which introduces hyperfine structure, and focus only on the electronic system. The spin-spin interaction is $H_{\rm ss}=\frac{\mu_0}{4\pi}\gamma_e^2\hbar^2 \sum_{j>k}\frac{1}{r_{jk}^3}\big[\bm{s}_j\cdot\bm{s}_k -3(\bm{s}_j\cdot\bm{\widehat r}_{jk})(\bm{s}_k\cdot\bm{\widehat r}_{jk})\big]$, where $\bm{r}_{jk}=\bm{r}_j-\bm{r}_k$ is the displacement vector between electrons $j$ and $k$, and $\gamma_e$ is the gyromagnetic ratio of the electron. This Hamiltonian can be rewritten as $$H_{\rm ss}=\frac{\mu_0}{4\pi}\gamma_e^2\hbar^2\sum_{j > k}\sum_{\alpha} \widehat{S}_{jk}^\alpha \widehat D^\alpha_{jk}, \label{hss}$$ where the $\widehat D^{\alpha}_{jk}$ are the symmetry-adapted components of the second-rank spin-spin tensor, e.g. $\widehat D^{B_2}_{jk}=(1/2r_{jk}^5)(x_{jk}z_{jk}+z_{jk}x_{jk})$ [@Marian2001]. We have also expressed the spin vectors of the two electrons in the dyadic form $\widehat S \equiv \bm s\bm s$. Only the $\widehat D^{A_1} = \{\widehat D_{xx}, \widehat D_{yy}, \widehat D_{zz}\}$ and $\widehat D^{B_2} = (\widehat D_{xz} +\widehat D_{zx})/2$ components can give nonzero values; the rest do not contribute. The $\widehat D^{A_1}$ components simply impose energy shifts on all states. Most importantly, the interaction lifts the degeneracy of the quartet state ${}^4A_1$ such that ${\mathscr{A}^{3,\rm q}_{\pm\sfrac{3}{2}}}$ assumes a higher energy than ${\mathscr{A}^{3,\rm q}_{\pm\sfrac{1}{2}}}$. Another noticeable effect of this interaction is the mixing of states with different spin projections $m_S$, which induces non-radiative transition channels \[Fig. \[fig:vnnb\](d)\]. For the explicit form of the spin-orbit Hamiltonian alongside the spin-spin interaction we refer the reader to the supporting information.
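The quartet splitting noted above can be cast in the familiar zero-field-splitting form. This is a sketch assuming a purely axial $D$ tensor; the magnitude and sign of $D$ follow from the $\widehat D^{A_1}$ components, which we do not evaluate here: $$H_{\rm zfs}=D\left[S_z^2-\tfrac{1}{3}S(S+1)\right],$$ which for the $S=\sfrac{3}{2}$ quartet gives $E_{\pm 3/2}-E_{\pm 1/2}=2D$, so the ordering stated above (the $m_S=\pm\sfrac{3}{2}$ doublet lying higher) corresponds to $D>0$.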
Selection rules
---------------

Next we study the possible optical transitions in the system. To lowest order such transitions happen via the electric dipole interaction $H_{\rm dp}=\sum_j\sum_\alpha d^\alpha_jE^\alpha_j$, where $\bm{d}=-e(x,y,z)$ is the dipole moment of the electron and $\bm{E}$ is the electric field vector. In $C_{2v}$ the components $(d_x,d_y,d_z)$ of a polar vector (including the dipole moment) transform like $(A_1,B_1,B_2)$. In calculating the matrix elements, the spinors are mere spectator components of the wavefunctions. Hence, the orbital symmetries and spin overlaps determine the allowed transitions. Generally, since the orbitals have either $A_1$ or $B_2$ point-group symmetry, the allowed transitions are induced either by the axial in-plane component of the dipole moment, $d_x$ (between orbitals with the same symmetry), or by the out-of-plane component $d_z$. The transition rates are proportional to the quantities ${\langle{b_2}\vert}d_x{\vert{b_2'}\rangle}E_x$, ${\langle{a_1}\vert}d_z{\vert{b_2}\rangle}E_z$, and ${\langle{a_1}\vert}d_z{\vert{b_2'}\rangle}E_z$. According to the orthogonality of the orbitals and spin states, the allowed optical dipole transitions are those sketched in Fig. \[fig:vnnb\](d).

Negatively Charged [V$_{\rm B}$]{}
==================================

The geometry of the boron vacancy is shown in Fig. \[fig:scheme\]. The negatively charged [V$_{\rm B}$]{} has $D_{3h}$ point group symmetry [@Huang2012a]. Similar to the [V$_{\rm N}$N$_{\rm B}$]{} defect discussed in the previous section, there are three hybrid $sp^2$ dangling bonds and three $2p_z$ orbitals. In this case the atoms are identical, and hence so are their dangling bonds: $\{\sigma_1, \sigma_2, \sigma_3, \pi_1, \pi_2, \pi_3 \}$. The single-electron MOs can be obtained by applying the projection operators constructed from the character table of $D_{3h}$ to these dangling bonds. The equivalence representation for the structure of the [V$_{\rm B}$]{} defect suggests that orbitals with $A_1'$, $A_2''$, $E'$, and $E''$ symmetry should be present. The MOs are then obtained: $a'_1 = \frac{1}{\sqrt{3}}(\sigma_1 +\sigma_2 +\sigma_3)$, $e''_x = \frac{1}{\sqrt{6}}(2\pi_1 -\pi_2 -\pi_3)$, $e''_y = \frac{1}{\sqrt{2}}(\pi_2 -\pi_3)$, $a''_2 = \frac{1}{\sqrt{3}}(\pi_1 +\pi_2 +\pi_3)$, $e'_x = \frac{1}{\sqrt{6}}(2\sigma_1 -\sigma_2 -\sigma_3)$, $e'_y = \frac{1}{\sqrt{2}}(\sigma_2 -\sigma_3)$, where the $e_x'$ and $e_y'$ single-electron orbitals are degenerate, as are $e_x''$ and $e_y''$. In a neutral defect, every atom shares three electrons with the other ions in the defect. Hence, for the negatively charged case there is a total of ten dynamical electrons. Our HSE DFT spin-polarized calculation on the negatively charged [V$_{\rm B}$]{} shows a complicated electronic structure \[see Fig. \[fig:vb\](a)\]. The spin polarization of the defect states is large. The spin-up level of the $a_1'$ state falls in the valence band; thus only 9 electrons are visible in the fundamental gap, with 5 spin-up and 4 spin-down electrons. Taking into account the spin-up $a_1'$ level in the valence band, this results in an $S=1$ ground state, as predicted by our group theory analysis (see below). The doubly degenerate $e'$ and $e''$ levels lie very close to each other, and also to the $a_2''$ level. This implies relatively small differences in the optical excitation energies of the corresponding many-body excited states.
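The symmetry content of the MOs above follows from a standard reduction of the equivalence representation, sketched here under the assumption that every $D_{3h}$ operation permutes the three nitrogen dangling bonds among themselves, with $\sigma$ orbitals even and $\pi$ ($p_z$) orbitals odd under the horizontal mirror $\sigma_h$: $$\Gamma_\sigma = A_1'\oplus E', \qquad \Gamma_\pi = A_2''\oplus E'',$$ which is exactly the set of orbitals listed above: the symmetric combinations $a_1'$ and $a_2''$ plus the two degenerate doublets $(e_x',e_y')$ and $(e_x'',e_y'')$.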
![image](Vb_1.pdf){width="70.00000%"}

Multi-electron states
---------------------

The ‘physical’ total wavefunctions are determined by direct multiplication of the irreducible spatial and spin wavefunctions. Of course, in the end an anti-symmetrization is required, writing the wavefunctions as linear combinations of Slater determinants. Since the number of electrons contributing to the defect dynamics is rather large in this case, it is more convenient to deal with ‘holes’ (the lack of electrons) instead [@Maze2011; @Doherty2011].

### Ground state {#ground-state .unnumbered}

In the ground state, two holes reside in the degenerate $e'$ orbitals. Therefore, the spatial part of the wavefunction transforms like $E'\otimes E' = A_1'\oplus A_2'\oplus E'$, and the corresponding functions are found by the projection technique. The spin part, on the other hand, has $A_1'\oplus A_2'\oplus E''$ symmetry, corresponding to the two-fermion singlet $\chi^{A_1'}$ and triplet $\{\chi^{A_2'},\chi^{E_x''},\chi^{E_y''}\}$ states. To construct the total wavefunctions one systematically multiplies the spatial states by the spinors; the outcomes are superpositions of Slater determinants, and only the physical states survive, that is, the states that do not violate Pauli’s exclusion principle. One then finds that the ground state manifold is composed of $\{{}^3A_2',{}^1E_x',{}^1E_y',{}^1A_1'\}$. The electron-electron Coulomb repulsion energy can easily be computed here. Hund’s rules imply that the triplet state ${}^3A_2'$ lies at the lowest level and gives the ground state, while the $E'$ and $A_1'$ singlet states lie at higher energies with equal energy spacings. This, in fact, follows from the symmetry observation that the Coulomb potential is a scalar and transforms as $A_1'$; therefore, the energy expectation values of both $E'$ states must be the same. Explicit calculation then shows that the Coulomb energy difference between ${}^3A_2'$ and ${}^1E_y'$ equals the energy difference between ${}^1A_1'$ and ${}^1E_x'$, and it is exactly given by the Coulomb exchange energy.

### Excited state {#excited-state .unnumbered}

The spin symmetry and possible spin states of the defect excited states are the same as for the ground state. The first excited state, as inferred from our HSE Kohn-Sham computations presented in Fig. \[fig:vb\](a), is attained when an electron from the $a_1'$ state climbs to the $e_x'$ or $e_y'$ states. In this case, the only irreducible symmetry of the spatial wavefunction is $A_1'\otimes E' = E'$, and hence the wavefunctions are simply $\frac{1}{\sqrt 2}(a_1'e' \pm e' a_1')$. They can thus assume both triplet and singlet spin components: $\{{}^3E_x',{}^3E_y',{}^1E_x',{}^1E_y'\}$. Meanwhile, with a very small energy difference, it is also probable to have an electron transferred from $a_2''$ to the $e'$ doublet in the spin-down channel. The spatial wavefunctions then have $A_2''\otimes E' = E''$ irreducible symmetry. The explicit form of the wavefunctions, in analogy to the first excited state, is $\frac{1}{\sqrt 2}(a_2''e' \pm e'a_2'')$. This forms the manifold $\{{}^3E_{x,a}'',{}^3E_{y,a}'',{}^1E_{x,a}'',{}^1E_{y,a}''\}$. For the next possible excited state, the spatial part is composed of $E'\otimes E'' = A_1'' \oplus A_2'' \oplus E''$ symmetries. A triplet and a singlet state are assigned to every symmetric and antisymmetric function, respectively.
Their energy degeneracy is lifted by the Coulomb repulsion energies; according to Hund’s rules the spin-singlet states with the lowest orbital momenta attain the highest energies, whereas the triplet states with the largest angular momenta (here the state with $E''$ symmetry) lie at the lowest energy, forming the triplets $\{{}^3E_{x,\pi}'',{}^3E_{y,\pi}'',{}^3A_2'',{}^3A_1''\}$ and the singlets $\{{}^1E_{x,\pi}'',{}^1E_{y,\pi}'',{}^1A_2'',{}^1A_1''\}$. Our HSE Kohn-Sham levels indicate that the ${}^3E_a''$ excitation (promoting an electron from the $a_2''$ orbital to the $e'$ orbitals) is indeed lower in energy than the ${}^3E'$ excitation in the $a_1'e'$ configuration and the ${}^3\Gamma''$ excitations ($\Gamma = A_1, A_2, E_\pi$), which are constructed by promoting an electron from the $e''$ to the $e'$ state. The calculated energy differences with respect to the ground state ${}^3A_2'$ are 2.13 eV and 1.92 eV in $D_{3h}$ symmetry for the ${}^3E_a''$ and ${}^3E'$ states, respectively. The latter, as we will discuss below, has a dipole-allowed transition to the ground state. Basically, the ${}^3E_a''$ and ${}^3E'$ states are Jahn-Teller systems. In the Born-Oppenheimer approximation (which we inherently apply in our DFT simulations) we find that in the $C_{2v}$ configuration they assume lower energies than in the high-symmetry $D_{3h}$ configuration. This energy difference is the so-called Jahn-Teller energy of these states. The effect is such that the ordering of the ${}^3E_a''$ and ${}^3E'$ states is reshuffled. In experiment, we expect that the calculated ZPL energy of the $C_{2v}$ configuration can be observed for ${}^3E'$. However, the large phonon energies of the h-BN lattice can result in a large electron-phonon coupling, producing a dynamic Jahn-Teller system that shows the high symmetry in experiments (an average over the three $C_{2v}$ configurations), at least at room temperature. Thus, the selection rules worked out for the high-symmetry configuration are expected to be valid at ambient conditions. It is beyond the scope of our study to quantitatively estimate the electron-phonon coupling here and to explore the fine structure at cryogenic temperatures. Now we turn to the estimation of the other excitation energies. The ${}^3\Gamma''$ excited states in the $e'e''$ configuration are highly complicated: they are described by Slater multi-determinants that cannot be accurately calculated within the $\Delta$SCF procedure. In addition, strong electron-phonon interactions can also couple these states. On the other hand, a crude estimate of the average energy of these states can be obtained from the $\Delta$SCF procedure, i.e., by promoting an electron from the $e''$ to the $e'$ state. The calculated total energy of this state is $\approx 0.7$ eV larger than the lowest excitation energy. This result implies that the ${}^3E_a''$ and ${}^3E'$ levels lie below the levels of the ${}^3A_1''$, ${}^3A_2''$, and ${}^3E_\pi''$ states \[see Table \[tab:VB-energies\]\].

  symmetry   $^{3}E_a''$   $^{3}E'$   $^{3}E_\pi''$
  ---------- ------------- ---------- ---------------
  $D_{3h}$   2.13          1.92       2.62
  $C_{2v}$   1.71          1.77       2.47

  : \[tab:VB-energies\] Energy levels of the negatively charged [V$_{\rm B}$]{} excited states with respect to the ground state, as calculated by the HSE $\Delta$SCF method, in units of electronvolts.

### Spin interactions

The spin and orbital angular momentum components transform like $\{E''_x,E''_y,A'_2\}$.
Regarding the orbital part: since none of the holes occupy a molecular orbital with $E''$ symmetry in the ground state, the interaction can only come from the axial component of the orbital angular momentum. In the excited state, matrix elements like ${\langle{e'_x}\vert}\ell_x{\vert{e'_y}\rangle}$ could still contribute to the dynamics, as symmetry predicts them to be nonzero since they contain the $A_1'$ irreducible representation. However, the purely imaginary character of the angular momentum operator demands that such matrix elements vanish [@Lenef1996]. This reduces the spin-orbit Hamiltonian to $H_{\rm so} = \ell_{z,1}s_{z,1} +\ell_{z,2}s_{z,2}$. The most important effect of the spin-orbit interaction in the excited-state manifolds is lifting the degeneracy of the ${}^3E_a''$, ${}^3E'$, and ${}^3E_\pi''$ states, such that the states with maximum absolute spin projection, ${\left|m_S\right|} = 1$, get mixed. The energy of the states with $m_S=0$ remains unaffected by the spin-orbit interaction. Instead, these states are coupled to the singlet states, which may give rise to non-radiative transition channels. Finally, this interaction connects different electronic states; that is, it admixes the singlet and triplet spin states as well as mixing states within the triplet and singlet manifolds, thereby allowing electronically or vibronically induced transitions that do not conserve the spin projection. Such transitions might be non-radiative, too. In a nutshell, the spin-orbit interaction mixes states with the same total wavefunction symmetry and different ${\left|m_S\right|}$ values. We refer the reader to Fig. \[fig:vbtrans\](b) for a qualitative picture of such state-mixing-induced transitions. The explicit forms of the spin-orbit interaction matrix elements are too cumbersome to be reported here. The effects of the spin-spin interaction can be summarized as follows. The spin degeneracy of the triplet states is lifted between states with different $|m_S|$: the axial component $\widehat D^{A_1'}$ imposes energy differences between the $m_S=0$ and $m_S=\pm 1$ states. Another important effect is a further lifting of the orbital degeneracies in ${}^3E_a''$, ${}^3E'$, and ${}^3E_\pi''$. The energy of all singlet states is increased more than that of their triplet counterparts; this, however, cannot influence the level ordering, because the spin interactions are much weaker than the Coulomb repulsion. The electronic structure of the negatively charged [V$_{\rm B}$]{} states together with the spin-orbit- and spin-spin-induced splittings is shown in Fig. \[fig:vbtrans\](b). The spin-spin interaction of the electrons also induces electronic and spintronic mixings, which for the sake of clarity and simplicity are not reported here. ![Energy levels of the negatively charged [V$_{\rm B}$]{} defect in h-BN: (a) HSE Kohn-Sham levels for the ${}^3E'$ excited state with $D_{3h}$ and $C_{2v}$ symmetries. (b) Electronic and spin configuration of the defect. The triplet (left column) and singlet (right column) states are separated for clarity. The radiative transitions are marked by arrows in different colors for in-plane (red) and out-of-plane (orange) dipole moments. The possible spin-orbit-induced non-radiative transitions that lead to optical spin polarization of the ground state are shown by blue dashed lines.
Energy differences are not to scale.[]{data-label="fig:vbtrans"}](VbMulti.pdf){width="\columnwidth"}

Selection rules
---------------

The electric dipole moment $\bm{d}~(=-e\bm{r}$) components transform like $(E'_x,E'_y,A''_2)$ in the $D_{3h}$ point group. In the zeroth order of the spin-interaction perturbation, the spin components of the wavefunctions imply that only singlet-singlet and triplet-triplet transitions are permitted. It is noteworthy that when the first-order perturbations of the spin-orbit and spin-spin interactions are taken into account, spin-flip transitions become possible as well. From symmetry observations one already finds that the first excited state is a dark state, while the non-axial components of the dipole moment can cause excitations and emissions to and from the second excited state. The bright triplet ${}^3E'$ states can relax into the ground triplet ${}^3A_2'$ via different competing processes: radiative, ISC, and direct non-radiative. Due to the spin-orbit mixing in the ${}^3E'$ manifold, the photons emitted in transitions from these states to the ground state adopt different polarizations. For states with $m_S=\pm1$ the emitted photons have circular polarization, ${}^3A_2' \xleftrightarrow{x\pm iy} {}^3E'$, while the $m_S=0$ states are connected via linearly polarized optical photons, $^{3}A_2' \xleftrightarrow{\{x,y\}} {}^3E'$. The other possible transitions are those to the third excited state. These transitions, induced only by the axial dipole moment, are ${}^3A_2' \xleftrightarrow{z} {}^3A_1''$ in the triplet manifold. The singlet states are expected to assume much higher energies and are therefore optically inaccessible. These radiative transitions are sketched in the energy level diagram in Fig. \[fig:vbtrans\](b) alongside the possible spin-orbit-induced non-radiative transitions, which encompass optical spin polarization channels. The situation in the singlet channel is similar for the second and third excited states. However, a $d_z$-induced transition is possible between ${}^1E_\sigma'$ in the ground state manifold and ${}^1E_a''$ of the first excited state \[Fig. \[fig:vbtrans\](b)\].

Discussion
==========

Here we discuss our results in the context of the main experimental observations. The h-BN emitters are mainly divided into two classes based on their line shape and excitation/emission polarization pattern: (I) those with an asymmetric and broader line shape that have matching in-plane excitation and emission polarizations; (II) SPEs that possess a sharp and symmetric line shape with a mismatch between their excitation and emission polarization patterns. The emission from class-I SPEs is typically around $2.1$ eV, while class-II emitters on average assume energies of about $1.8$ eV [@Tran2016b; @Jungwirth2016]. Emitters of both classes exhibit several shelving states and non-radiative relaxation processes that compete with the radiative transitions. Moreover, two-photon processes have been reported in the excitation of some emitters [@Schell2016]. It is also noteworthy that the peaks in the photoluminescence spectra of mono- and multi-layer samples have significantly different widths. Due to their vulnerability to environmental effects, emitters in monolayer h-BN exhibit much broader linewidths. Strain and quantum confinement effects in h-BN flakes may also influence the optical properties of defects residing at the edges of the flakes.
Our theoretical work, instead, is performed neglecting environmental effects. Therefore, the following comparisons are made with experimental data for which such environmental effects are minimized while the material is still two-dimensional, that is, data obtained on multi-layer samples.

![Formation energy as a function of the Fermi level for the [V$_{\rm B}$]{} (solid lines) and [V$_{\rm N}$N$_{\rm B}$]{} (dashed lines) defects under N-rich (black curves) and B-rich (red curves) growth conditions. The symbols indicate the charge state of the defects.[]{data-label="fig:CTL"}](CTL.pdf){width="0.7\columnwidth"}

In particular, we consider the neutral [V$_{\rm N}$N$_{\rm B}$]{} defect and the negatively charged [V$_{\rm B}$]{} defect as qubit candidates. The [V$_{\rm B}$]{} and [V$_{\rm N}$N$_{\rm B}$]{} defects have the same composition in the compound h-BN material. The relative stability of these defects may depend on their charge state. We analyze this issue by calculating the defect formation energy $E^q_f$ in charge state $q$, defined as $E^q_f(\epsilon_F)=E^q_\text{tot}-E_\text{BN}+\mu_\text{B}+ q(\epsilon_\text{F}+E_\text{V})+E_\text{corr}$, where $E^q_\text{tot}$ is the total energy of the charged defective system, $E_\text{BN}$ is the total energy of pristine h-BN, $\mu_\text{B}$ is the chemical potential of boron, $\epsilon_F$ is the position of the Fermi level with respect to the valence band maximum $E_\text{V}$, and $E_\text{corr}$ is the charge correction energy [@Zhang1991; @Wu2017]. The boron chemical potential depends on the growth conditions. Under nitrogen-rich conditions, the nitrogen atoms in h-BN are assumed to be in equilibrium with N$_2$ gas; therefore, $\mu_\text{N}$ equals half the energy of a N$_2$ molecule ($\mu_\text{N}=\frac{1}{2}\mu_{\text{N}_2}$), and $\mu_\text{B}$ can be obtained from $\mu_\text{BN} = \mu_\text{B} + \mu_\text{N}$, where $\mu_\text{BN}$ is the energy of the h-BN primitive cell. The calculated HSE formation energies as a function of the Fermi level are plotted in Fig. \[fig:CTL\]. The ($+$) charge state is not stable for the [V$_{\rm B}$]{} defect, whereas its ($0/-$) acceptor level is at $E_\text{V}+2.39$ eV. For the [V$_{\rm N}$N$_{\rm B}$]{} defect, the ($+/0$) level is at $E_\text{V}+1.79$ eV, whereas the deep ($0/-$) acceptor level occurs only at high $\epsilon_{\rm F}$ values, namely at 0.24 eV below the conduction band edge (the HSE band gap is 5.98 eV), which is fairly consistent with a very recent result [@Wu2017]. We find that the formation energies of these defects are high even under the nitrogen-rich conditions that favor [V$_{\rm B}$]{}-like defects. On the other hand, we note that the nitrogen chemical potential may vary significantly under realistic experimental conditions of the N$_2$ gas, such as partial pressure and temperature. This, however, affects the absolute values of the calculated formation energies but not their relative values. Thus, we rather focus on the relative stability of the [V$_{\rm B}$]{} and [V$_{\rm N}$N$_{\rm B}$]{} defects. Our findings show that these defects exhibit a bistability: at low Fermi-level values (p-type conditions) the [V$_{\rm N}$N$_{\rm B}$]{} defect is stable, whereas the [V$_{\rm B}$]{} defect becomes stable only for $\epsilon_\text{F}>1.5$ eV. In the region $1.9$ eV$<\epsilon_\text{F}<2.6$ eV the neutral [V$_{\rm N}$N$_{\rm B}$]{} is almost as stable as the neutral [V$_{\rm B}$]{}, to within $\approx0.2$ eV. The stability of the negatively charged [V$_{\rm B}$]{} becomes dominant for $\epsilon_\text{F}>2.6$ eV.
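For reference, the charge-state transition levels quoted above are the Fermi-level positions at which the formation energies of two charge states cross; with the definition of $E^q_f$ above this reads (a standard relation, with $E^{q}_f(0)$ denoting the formation energy evaluated at $\epsilon_F=0$) $$\varepsilon(q/q') = \frac{E^{q}_f(0)-E^{q'}_f(0)}{q'-q},$$ so, e.g., the ($0/-$) acceptor level of [V$_{\rm B}$]{} marks the Fermi energy above which the negative charge state has the lower formation energy.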
We conclude that the neutral [V$_{\rm N}$N$_{\rm B}$]{} and the negatively charged [V$_{\rm B}$]{} defects can indeed exist in h-BN and, as discussed below, can be sources of single-photon emission. Next, we discuss the neutral [V$_{\rm N}$N$_{\rm B}$]{} defect as a quantum emitter in h-BN. The 2D nature of layered h-BN samples demands that the [V$_{\rm N}$N$_{\rm B}$]{} symmetry axis remain in the plane of the membrane. The orientation of the defect symmetry axis is then restricted to $\vartheta = 0^\circ, 120^\circ, 240^\circ$, where $\vartheta$ is the angle between the defect symmetry axis and the $x$-axis of the lab frame \[Fig. \[fig:scheme\]\]. A perpendicular optical beam irradiating an h-BN flake thus preferentially excites the in-plane dipole moment of the defect. The magnitude of the excited dipole moment is given by $d_x\cos\vartheta$ for the three possible orientations. In zeroth order, the axial moment can only induce transitions between the ground state and the second excited state, ${}^2B_2\leftrightarrow {}^2B_2'$. Given the typical $\approx 2.4$ eV excitation lasers used in the experiments, the ${}^2B_2'$ energy level is inaccessible via single-photon transitions according to our DFT simulations \[Table \[tab:VN\_NB-energies\]\]. Therefore, the charge-neutral [V$_{\rm N}$N$_{\rm B}$]{} defects do not follow (in zeroth order) the reported polarization pattern, in which the excitation and emission dipole moments are similarly oriented in the plane of the flake [@Tran2016a]. Nevertheless, one notices that the out-of-plane dipole moment $d_z$ can still give rise to excitations and emissions at energies around $2.0$ eV due to the ${}^2B_2 \leftrightarrow {}^2A_1$ coupling; see Table \[tab:VN\_NB-energies\]. Moreover, two-photon processes can excite the defect into its second excited state ${}^2B_2'$. This can either stem from a real excitation via the ${}^2A_1$ state induced by the $d_z$ dipole moment, or from a direct nonlinear two-photon excitation facilitated by the in-plane $d_x$ dipole moment [@Schell2016]. The ‘real’ two-photon excitation of emitters in h-BN may occur by climbing the energy ladder thanks to the Coulomb mixing between the ${}^2B_2$ and ${}^2B_2'$ states discussed above. A defect excited to ${}^2B_2'$ faces several competing routes for relaxing to the ground state: non-radiative and radiative shelving processes, or a mixture of them \[Fig. \[fig:vnnb\](d)\]. The existence of the shelving state ${}^4A_1$ predicted by our DFT and group theory study can be experimentally confirmed by exciting the defect in the following way. The spin-orbit mixing of the ${}^4A_1$ states with the ground state, in first-order approximation, allows for its excitation when irradiated by an intense $2.8-3.0$ eV laser. The excited system then relaxes by first undergoing a non-radiative transition to ${}^2A_1$, followed by an optical emission at $\approx 2.05$ eV with out-of-plane polarization. We should add that, due to the lack of orbital degeneracy, the effective g-factors of the electronic spin in the ground and excited states are the same. Hence, in the presence of an external magnetic field the spin-down and spin-up channels in the ground and excited states experience equal energy splittings. This means that no Zeeman splitting should be expected in the photoluminescence spectrum of [V$_{\rm N}$N$_{\rm B}$]{} [@Li2017]. Now we turn to the discussion of the negatively charged [V$_{\rm B}$]{}.
We tentatively associate the electronic dynamics of this defect with the single-photon emitters observed in the labs for which there is a mismatch between the absorption and emission polarizations (class II). The laser largely excites the in-plane dipole moment, and therefore the defect is excited into the $a_1'e'$ configuration. It then either emits a linearly polarized photon, whose polarization can be parallel or perpendicular to the absorption polarization, or, with finite probability, the color center emits a circularly polarized photon. Therefore, the absorption and emission polarizations of the photons do not necessarily coincide. This indeed stems from the spatial degeneracy of the excited state. Our theory is further supported by the [*ab initio* ]{}results, where the transition energy is expected to be about $1.77$ eV, in good agreement with the observed wavelengths of class-II emitters. This hypothesis is also corroborated by considering the processes induced by the spin interactions. The non-radiative channels induced by the spin-orbit interaction, shown in Fig. \[fig:vbtrans\](b), are in agreement with the multi-channel transitions reported in the experiments. The excited state ${}^3E'$ faces several competing relaxation processes. It could even be rapidly evacuated by a fast spin-orbit process owing to the small energy differences with the first excited state \[see Table \[tab:VB-energies\] for the energy differences between the first, second, and third excited states\]. It is noteworthy that the defect may even become excited into the $e'e''$ configuration via a photon with axial electric polarization, then decay to the first excited state through the non-radiative channels, and finally relax by emitting an in-plane-polarized photon. In this case, the defect can exhibit a singlet-singlet transition as well. Such a transition is immune to Zeeman splitting and magnetic field fluctuations, as reported in a recent observation [@Li2017]. Interestingly, our study predicts an optical spin polarization channel in the negatively charged boron vacancy in h-BN. A light beam at a frequency around $1.77$ eV with linear polarization drives the system into ${}^3E'$, which will either relax back with a photon emission or undergo a non-radiative transition to the ground states with $m_S=\pm 1$. Since the linear polarization is more efficient in evacuating the $m_S=0$ state (see the discussion above), one expects population transfer from $m_S=0$ to $m_S=\pm 1$ after a few optical cycles. To summarize, the lowest calculated ZPLs are given in Table \[tab:ZPLs\] alongside the associated optical polarization of the emitted photons. In this table we also show the results for [V$_{\rm N}$C$_{\rm B}$]{}, which are discussed in the supporting information.

  Defect                               lowest ZPL   OEP
  ------------------------------------ ------------ -----------------------
  $[$[V$_{\rm N}$N$_{\rm B}$]{}$]^0$   2.05         $\hat{z}$
  $[$[V$_{\rm B}$]{}$]^-$              1.92[^2]     $\hat{x}\pm i\hat{y}$
                                       1.77[^3]     $\hat{x},\hat{y}$
  $[$[V$_{\rm N}$C$_{\rm B}$]{}$]^+$   1.51         $\hat{z}$

  : \[tab:ZPLs\] Optical zero-phonon lines (ZPLs) and the corresponding optical emission polarization (OEP) of the charge-neutral [V$_{\rm N}$N$_{\rm B}$]{}, positively charged [V$_{\rm N}$C$_{\rm B}$]{} \[see supporting information\], and negatively charged [V$_{\rm B}$]{} defects, as calculated by the HSE $\Delta$SCF method, in units of electronvolts. The OEP notation follows the coordinates chosen in Fig. \[fig:scheme\].
The in-plane clockwise and counter-clockwise circular polarizations are denoted by $\hat x \pm i\hat y$.

Conclusion
==========

The group theoretical analysis as well as the calculated zero-phonon lines have been exploited to relate the studied defects to the experimentally reported ZPLs, their excitation and emission polarization profiles, two-photon excitation processes, radiative and non-radiative relaxation channels, and dark and bright shelving states. In particular, we have identified shelving states in the defects that can contribute to ISC processes. The ${}^4A_1$ state in [V$_{\rm N}$N$_{\rm B}$]{} is believed to play an important role in the optical dynamics of the defect, and its existence can be experimentally verified by employing an exciting laser at $\approx 3.0$ eV. The dark triplet ${}^3E_a''$ in the negatively charged [V$_{\rm B}$]{} has also been shown to induce competing non-radiative relaxations owing to its energetic proximity to the bright ${}^3E'$ state. The spin-orbit interaction study has also allowed us to distinguish the observed multi-relaxation routes. Moreover, it predicts the low-temperature electronic structure of the defects and identifies an optical spin polarization channel for [V$_{\rm N}$N$_{\rm B}$]{} and the negatively charged [V$_{\rm B}$]{}. Our work is therefore anticipated to shed more light on the road to the identification of quantum emitters in h-BN monolayers. Better knowledge of these emitters will, in turn, considerably influence nanophotonics and quantum technology.

This work was supported by the ERC Synergy grant BioQ, the EU EQUAM and DIADEMS projects, the DFG CRC TRR21, and DFG FOR 1493. Support from the Hungarian National Research Development and Innovation Office (NKFIH) in the frame of the Quantum Technology National Excellence Program (Project No. 2017-1.2.1-NKP-2017-00001) is acknowledged.

[^1]: Zeroth-order dipole transitions are forbidden.

[^2]: Room temperature; $D_{3h}$ excited-state symmetry.

[^3]: Cryogenic temperatures; $C_{2v}$ excited-state symmetry.
GUATEMALA CITY, Guatemala — It was set up just like any other Baptist meeting. Pastors greeted each other with calls of “hey brother,” a keyboard player led participants in singing hymns and praise choruses, and food was available at every break. One might mistake it for any Baptist gathering in Missouri, except it was in Spanish, and in Guatemala. Around 75 pastors, spouses and lay leaders gathered in Quetzaltenango (pronounced kay-tzal-teh-NAN-go) — or Xela (pronounced SHAY-lah), the more popular Mayan name — for a two-day pastors conference led by Baptist General Convention of Missouri representatives Gary Snowden, Bob Perry and his wife, Marilyn Nelson. Xela, which is set in the mountains, is the second largest city in Guatemala. For many attendees, the conference serves not only as the source of their theological training, but also as their only opportunity to stay in a hotel or receive three meals a day. BGCM provided funds and leadership for the training session Jan. 6-7, the fifth of its kind, through its partnership with the Guatemalan Baptist Convention. Pastors were asked to contribute the equivalent of $7 — a sacrifice for many, Snowden said. The partnership is focused on the western region of Guatemala, around Xela. “It’s a wonderful idea for Missouri Baptists to be linked with one of the outlying areas,” Perry said in an interview at the Baptist Seminary in Guatemala City. Perry serves as congregational health team leader for BGCM. “Guatemala as a whole is too big for Missouri Baptists to have a lot of impact.” The smaller area allows the partners to develop personal relationships. “These people know Gary,” he said. A core group of 50-55 church leaders have attended all five sessions. According to Roger Marquez, pastor of First Baptist Church, Xela, these meetings provided the opportunity for the pastors to get to know each other, leading to the first western region association in 50 years. “I noticed a need for fellowship with each other,” Nelson said at the seminary. The training sessions consisted of an opening session, led by Snowden, on spiritual maturity and an in-depth look at spiritual gifts, led by Perry and Nelson. Participants had the opportunity to take a spiritual gifts inventory that Nelson designed, which was translated into Spanish for the conference. Perry had questioned the effectiveness of the sessions, given the prevalence of spiritual gift materials in the U.S. “As it turned out, it was new,” he said. “Carlos (Cerna, executive secretary of the Guatemala Baptist Convention) said no one there had completed a spiritual gifts inventory. The idea was new to many.” “Baptists here, like in the States, have been slow to emphasize spiritual gifts due to what some feel is an excessive emphasis of Pentecostals.” Nelson and Perry were both impressed by the interest and participation of conference attendees. “It became obvious that they were learners — and deeply grateful learners,” Nelson said. “The Guatemalans are people who show a great respect for everyone, particularly those teaching and helping them grow in Christ. They show their respect and appreciation in tangible ways — words, hugs, eye contact.” Perry said the discussion was lively and indicated a good degree of interest and understanding. “It is always good when participants contribute and add to the value,” he said. Guatemala is a somewhat undeveloped country, Perry said, but it has great natural resources.
Its democracy has a history of being stable, and the country is seeing development of new businesses. “Baptist work seems to be growing,” he said. “I’m glad BGCM is having a hand in helping Baptists in Guatemala.” Guatemalans are a hardworking people and very diligent about their faith, he said. “Many Guatemalan Baptists have a style of worship that is very enthusiastic and energetic.” As indicated by the conference, many of the churches in the western region use music that would be familiar to those in the U.S. The conference featured Spanish-language versions of “Lord, I Lift Your Name on High” and other popular praise choruses. “Even though I don’t speak their language, I felt a real connection with the people at the conference,” Nelson said. “I hope to return to maintain and grow that connection, which is possible because of our mutual love for Christ.” The conferences are just one aspect of the partnership, Snowden said on the van ride from Xela to Panajachel, where the travelers spent the night. BGCM has provided bicycles for rural pastors who needed transportation, computers and Internet access for the Baptist Seminary in Guatemala City, video projectors for the convention and theology books for pastors. The real value of the partnership, however, is the sharing of human resources, Snowden said. BGCM is encouraging Missouri churches to get involved. For example, First Baptist Church, Lee’s Summit, will be traveling to the department (the equivalent of a U.S. state) of San Marcos to speak at public schools, distribute bags of food and share the gospel with needy families, conduct Vacation Bible School and lead adult Sunday School leadership training classes. San Marcos is a poor area, Snowden said. The Lee’s Summit congregants will be able to supply bags of staple groceries for around $10 each. First Baptist Church, Farmington, has a trip scheduled in April to work with First Baptist Church, Xela, in evangelism, VBS and multifaceted training. Snowden said construction work may be available at some point, but that has not been a part of the early partnership. “I see the partnership as vitally important,” Nelson said. The church leaders have little opportunity for training and new ideas. “The more we can help, the better for everyone.” “Consider what you’ve taken for granted in church,” she said. “Consider how it has made a real difference in your life — whether it be Bible study, learning from a minister or age group ministries. You’ll see it in a fresh light by coming and helping train leaders. You will go back energized and ready to take on some things you’ve become blind to or didn’t see as a need.”
Reference is made to related application Ser. No. 08/001,702 entitled "Base Metal Only Catalyst System for Lean Burn Engines" filed Jan. 7, 1993 by the present inventors, which application is commonly assigned with this invention. This invention is directed to a two-stage catalyst system comprising a first-stage nitric oxide removal catalyst and a second-stage carbon monoxide and hydrocarbon removal catalyst for treating the exhaust gases produced by an internal combustion engine. A number of catalysts have been suggested to convert engine exhaust gas components like carbon monoxide (CO), hydrocarbons (HC's), and nitrogen oxides (NO.sub.x) into other gases. The first two are desirably oxidized to H.sub.2 O and CO.sub.2, while the nitrogen oxides present in the exhaust gas, generally nitric oxide, are desirably reduced to N.sub.2. These so-called "three-way" catalysts achieve simultaneous efficient (conversion >80%) removal of CO, HC, and NO.sub.x when the fuel mixture of an internal combustion engine is slightly "rich" in fuel, i.e., in a narrow A/F ratio range between about 14.7 and 14.4, and the exhaust gas is slightly reducing. Such three-way catalysts are not efficient, however, in the reduction of NO.sub.x when engines are operated on the lean (reduced fuel) side, where the A/F ratio is greater than 14.7, generally 19-27, and the exhaust gas is richer in oxygen. It is desirable, however, to operate engines on the lean side to realize a benefit in fuel economy, estimated to be in the range of 6-10%. In addition to three-way catalysts, two-stage conversion systems have also been proposed for treating exhausts; these involve an initial contacting zone directed to removal of NO.sub.x and a second contacting zone directed to removal of CO and HC's. Gladden in U.S. Pat. No. 4,188,364 discloses a system wherein the nitric oxide content is reduced through a reaction with ammonia in a first catalyst bed comprising a porous inorganic oxide. The gas stream, containing oxygen, ammonia and a reduced nitric oxide content, is subsequently contacted with a second catalyst bed having an oxidation catalyst disposed on a porous inorganic oxide carrier, comprising a noble metal or other metals such as copper, zinc, or tungsten. The resultant exhaust stream is disclosed to be substantially free of nitric oxide and ammonia. Gladden's invention is not suitable for automotive application, however, because this system requires the storage of ammonia on board a vehicle. Gandhi et al. in U.S. Pat. No. 4,374,103 disclose a catalyst system useful in fuel-rich applications in which the exhaust gases initially flow over a catalyst comprising palladium and subsequently over a catalyst comprising palladium deposited on tungsten. The first catalyst bed operates slightly rich of stoichiometry. Since the engine is required to operate fuel-rich to provide reducing conditions at the inlet of the catalyst, fuel economy is adversely affected. Also, tungsten is present on the support in large amounts, generally around 50% by weight of alumina, in the second-stage catalyst. It would be desirable to have a catalyst system which would be effective in reducing nitric oxide emissions and also provide high conversions for hydrocarbons and carbon monoxide under lean-burn conditions (oxygen-rich exhaust situations). Such a system would allow for improved fuel economy.
In lean burn situations, considerable success has been achieved in the catalytic oxidation of unburned hydrocarbons and carbon monoxide, but the reduction of the nitrogen oxides has proven to be a much more difficult problem. This is because the reducing substances (such as CO or H.sub.2) tend to react more quickly with the oxygen present in the exhaust gas than with the oxygen associated with nitrogen in NO.sub.x. The present invention overcomes such problems.
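By way of illustration only (these idealized, balanced reactions are standard textbook examples and are not taken from the application), the competition described above can be written as:

2CO + O.sub.2 -> 2CO.sub.2 (oxidation by the free oxygen in the exhaust)

2CO + 2NO -> 2CO.sub.2 + N.sub.2 (the desired NO.sub.x reduction)

In an oxygen-rich (lean) exhaust the first reaction consumes the reductant before the second can proceed to a useful extent, which is why NO.sub.x conversion collapses at A/F ratios above stoichiometry.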
Effect of the addition of industrial by-products on Cu, Zn, Pb and As leachability in a mine sediment. A series of incubation and leaching experiments was performed to assess the feasibility of three industrial by-products (red gypsum (RG), sugar foam (SF) and ashes from the combustion of biomass (ACB)) for reducing the leachability of Cu, Pb, Zn and As in a sediment of the São Domingos mine (Portugal). The changes in the solid-phase speciation of the elements were also evaluated by applying a sequential extraction procedure. All amendments significantly reduced the leachability of Zn and Cu, whereas the treatment with RG+SF+ACB also decreased the mobility of As. The reduction in Cu leachability was especially remarkable. This could be due to the great affinity of carbonates (included in the SF and SF+ACB amendments) to precipitate with Cu, and to maghemite and rutile (RG amendment) acting as relevant sorbents for Cu. Pb was the least mobile element in the sediment, and none of the treatments reduced its mobility. The sequential extraction reveals that the amendments induced a significant decrease in the concentration of elements associated with the residual fraction. Cu, Pb and As are redistributed from the residual fraction to the Al, Fe, and Mn hydr(oxides) fraction, and Zn from the residual fraction to the water/acid-soluble, exchangeable and bound-to-carbonates pool.
Q: Find sequence to a target number using restricted set of primitive operations

Given an integer n, compute the minimum number of operations (+1, x2, x3) needed to obtain the number starting from the number 1. I did this using this code:

#include<iostream>
#include<algorithm>
#include<vector>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> v(n+1, 0);
    v[1] = 1;
    for(int i = 1; i < v.size(); i++) {
        if((v[i + 1] == 0) || (v[i + 1] > v[i] + 1)) {
            v[i + 1] = v[i] + 1;
        }
        if((2*i <= n) && (v[2*i] == 0 || v[2*i] > v[i] + 1)) {
            v[2*i] = v[i] + 1;
        }
        if((3*i <= n) && (v[3*i] == 0 || v[3*i] > v[i] + 1)) {
            v[3*i] = v[i] + 1;
        }
    }
    cout << v[n] - 1 << endl;
    vector<int> solution;
    while(n > 1) {
        solution.push_back(n);
        if(v[n - 1] == v[n] - 1) {
            n = n - 1;
        } else if(n%2 == 0 && v[n/2] == v[n] - 1) {
            n = n/2;
        } else if(n%3 == 0 && v[n/3] == v[n] - 1) {
            n = n/3;
        }
    }
    solution.push_back(1);
    reverse(solution.begin(), solution.end());
    for(size_t k = 0; k < solution.size(); k++) {
        cout << solution[k] << ' ';
    }
}

Input: 5
Output:
3
1 2 4 5

Do you have any optimized way to do this?

A: using namespace std;

Stop doing this. It is an easy, but sloppy, way to code; a change to the standard library that introduces a new identifier can break your code. Being explicit, and writing std::vector instead of vector everywhere, would be painful. But there is a middle ground:

#include <vector>
using std::vector;

Now you can lazily use vector, without fear that something you are not using from the standard library will suddenly become defined, colliding with your identifiers, and causing carnage.

White space

Either put white space around all binary operators, like v[i + 1], or never put white space around the binary operators, like v[i*2]. But be consistent.

cout << endl;

Don't use this; it slows your code down. The endl manipulator does two things: it adds \n to the stream AND it flushes the stream. If you don't need to flush the stream (and you rarely do), simply write

cout << '\n';

Avoid repeated calls to functions that return the same result

for(int i = 1; i < v.size(); i++)

What is the value of v.size()? Will it ever change? Can the compiler tell it won't, and optimize it out? Could you store the value in a local variable to avoid the repeated function calls? Or ... you could use the variable that already exists: n.

for(int i = 1; i <= n; i++)

Don't Repeat Yourself (DRY)

if((v[i + 1] == 0) || (v[i + 1] > v[i] + 1)) {
    v[i + 1] = v[i] + 1;
}
if((2*i <= n) && (v[2*i] == 0 || v[2*i] > v[i] + 1)) {
    v[2*i] = v[i] + 1;
}
if((3*i <= n) && (v[3*i] == 0 || v[3*i] > v[i] + 1)) {
    v[3*i] = v[i] + 1;
}

These statements look very similar.

if((target <= n) && (v[target] == 0 || v[target] > v[i] + 1)) {
    v[target] = v[i] + 1;
}

You could pull them out into a function:

inline void explore_step(vector<int> &v, int n, int i, int target) {
    if ((target <= n) && (v[target] == 0 || v[target] > v[i] + 1)) {
        v[target] = v[i] + 1;
    }
}

And then write:

explore_step(v, n, i, i+1);
explore_step(v, n, i, i*2);
explore_step(v, n, i, i*3);

Optimization

Your approach takes \$O(n)\$ time, because you explore each value from 1 to n. You do this because you don't know which values are going to be useful in reaching the target value, and you test things like v[2*i] > v[i] + 1 because you don't know which values could be reached via a faster path.
A slightly better approach:

seed 1 into a list of values to explore
for each value in the list of values to explore:
    for each of the 3 target values i+1, i*2, & i*3, if <= n:
        if v[target] == 0, then store v[target] = i
        add target to the list of values to explore
        if target == n, stop

Consider n = 10.

explore = [1], value = 1, targets = [2, -, 3]
explore = [1, 2, 3], value = 2, targets = [-, 4, 6]
explore = [1, 2, 3, 4, 6], value = 3, targets = [-, -, 9]
explore = [1, 2, 3, 4, 6, 9], value = 4, targets = [5, 8, -]
explore = [1, 2, 3, 4, 6, 9, 5, 8], value = 6, targets = [7, -, -]
explore = [1, 2, 3, 4, 6, 9, 5, 8, 7], value = 9, targets = [10, -, -]

You could use a queue for explore, but a vector of length n, and just walking forward through the items, works fine.

Notice that all values reachable after 1 step [2, 3] are processed before values reachable after 2 steps [4, 6, 9], which would be processed before those values reachable after 3 steps [5, 8, 7], and so on. Moreover, we've built up a trail of breadcrumbs for the fastest path:

v[10] = 9
v[9] = 3
v[3] = 1

So no searching is required to find the correct path. Implementation left to student; a sketch is given below.

Can we do better? What if we started with n, and explored n-1, n/2, and n/3? An odd value can't lead to an n/2 point, and a non-multiple-of-3 can't lead to an n/3 point, so you may be pruning more values out of the search, which might be slightly faster.

[28] -> [27, 14] -> [26, 9, 13, 7] -> [25, 13, 8, 3, 12, 6] -> [24, 12, 4, 2, 1!, ....]
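For completeness, here is a minimal sketch of the breadth-first version described above. This is my own illustration rather than part of the original review, so treat the names and I/O handling as placeholder choices:

#include <iostream>
#include <vector>
#include <algorithm>

int main() {
    int n;
    std::cin >> n;

    // pred[v] remembers the value we came from when v was first reached;
    // 0 means "not reached yet". Reaching a value for the first time means
    // reaching it in the fewest steps, because every operation costs one step.
    std::vector<int> pred(n + 1, 0);
    std::vector<int> explore{1};
    pred[1] = 1;

    for (std::size_t k = 0; k < explore.size() && pred[n] == 0; k++) {
        int value = explore[k];
        for (int target : {value + 1, value * 2, value * 3}) {
            if (target <= n && pred[target] == 0) {
                pred[target] = value;   // breadcrumb for path reconstruction
                explore.push_back(target);
            }
        }
    }

    // Follow the breadcrumbs back from n to 1, then print the path forward.
    std::vector<int> path;
    for (int v = n; v != 1; v = pred[v]) {
        path.push_back(v);
    }
    path.push_back(1);
    std::reverse(path.begin(), path.end());

    std::cout << path.size() - 1 << '\n';
    for (int v : path) {
        std::cout << v << ' ';
    }
    std::cout << '\n';
}

Because every operation has unit cost, the first assignment to pred[target] is already optimal, so none of the v[2*i] > v[i] + 1 comparisons from the original code are needed.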
66,084,301
Our investment style can be best described as “ACTIVE value investing, with a willingness to consider contrarian ideas.” Our philosophy follows some of the modern era’s investing greats, including Sir John Templeton, Ben Graham, Peter Lynch, Charlie Munger, Warren Buffett, and Anthony Bolton. Each of these great investors has proven over time that a value investing approach not only makes intuitive sense but can also be highly profitable.
66,084,313
Q: JList not displaying my items in String

JButton btnAdd = new JButton("add");
btnAdd.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        Main selectedValue = (Main) courseList.getSelectedValue();
        if (selectedValue != null) {
            orderList.addElement(chosenList);
        }
    }
});

I have created an add button which adds elements from one JList to another JList. However, when I run my application and click the add button, it gives me this error in my chosenList JList:

javax.swing.JList[,-2008,0,2255x182,alignmentX=0.0,alignmentY=0.0,border=,flags=50332008,maximumSize=,minimumSize=,preferredSize=,fixedCellHeight=-1,fixedCellWidth=-1,horizontalScrollIncrement=-1,selectionBackground=javax.swing.plaf.ColorUIResource[r=184,g=207,b=229],selectionForeground=sun.swing.PrintColorUIResource[r=51,g=51,b=51],visibleRowCount=8,layoutOrientation=0]

A: I believe the addElement method should be called on an instance of class DefaultListModel. (Note that your listener also adds chosenList — the JList component itself — rather than the selected value, which is why its toString() representation shows up as an item.) If you have previously added a DefaultListModel instance as the model for your orderList, you should use the following code to add the element to your orderList:

Object selectedValue = courseList.getSelectedValue();
DefaultListModel listModel = (DefaultListModel) orderList.getModel();
listModel.addElement(selectedValue);

If you haven't set an instance of a class which implements ListModel, you should initialize your orderList in this way:

DefaultListModel listModel = new DefaultListModel();
orderList = new JList(listModel);
// or
orderList.setModel(listModel);

Take a look at How to Use Lists from the Java Tutorials.
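For reference, a minimal, self-contained sketch of the pattern the answer describes. It is my own illustration, not code from the question; the list contents and layout are invented for the demo:

import java.awt.GridLayout;
import javax.swing.*;

public class ListTransferDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // Each JList gets its own DefaultListModel; elements are added
            // to the model, never to the JList component itself.
            DefaultListModel<String> courseModel = new DefaultListModel<>();
            courseModel.addElement("Maths");
            courseModel.addElement("Physics");
            DefaultListModel<String> orderModel = new DefaultListModel<>();

            JList<String> courseList = new JList<>(courseModel);
            JList<String> orderList = new JList<>(orderModel);

            JButton btnAdd = new JButton("add");
            btnAdd.addActionListener(e -> {
                // Copy the selected value across; adding the JList itself
                // would display its toString() text, as in the question.
                String selected = courseList.getSelectedValue();
                if (selected != null) {
                    orderModel.addElement(selected);
                }
            });

            JFrame frame = new JFrame("JList demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setLayout(new GridLayout(1, 3));
            frame.add(new JScrollPane(courseList));
            frame.add(btnAdd);
            frame.add(new JScrollPane(orderList));
            frame.pack();
            frame.setVisible(true);
        });
    }
}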
66,084,592
Q: What is the best way to record a double bass in a home studio environment?

What is the best way to record a double bass in a home studio environment? What type of mic? What kind of processing (compression? EQ?)?

A: I have a session coming up and have been reading a lot about this too. The strangest yet most practical and cheapest technique is an SM57 with the body and cable wrapped in a washcloth, tucked in the bridge aiming upwards. (The washcloth is so that you can fit the mic into the bridge.)

http://www.gearslutz.com/board/remote-possibilities-acoustic-music-location-recording/474365-unusual-upright-bass-micing-technique.html

Lots of ideas there. Fun :)
66,084,894
(openPR) - HTF Market Intelligence released a new 123-page research report titled 'Global (North America, Europe and Asia-Pacific, South America, Middle East and Africa) Glycine Market 2017 Forecast to 2022' with detailed analysis, forecasts and strategies. The study covers key regions, including North America, Europe and Asia-Pacific, South America, and the Middle East and Africa, and important players such as Ajinomoto, Showa Denko KK and Yuki Gosei Kogyo.

Summary: Glycine is an organic compound and is the smallest of the 20 amino acids found in proteins. It is the only amino acid that does not form an L or D optical rotation. It is a colorless, sweet-tasting crystalline solid.

Scope of the Report: This report focuses on the global Glycine market, especially in North America, Europe and Asia-Pacific, South America, and the Middle East and Africa. It categorizes the market based on manufacturers, regions, type and application.

HTF Market Report is a wholly owned brand of HTF Market Intelligence Consulting Private Limited. HTF Market Report, a global research and market intelligence consulting organization, is uniquely positioned to not only identify growth opportunities but also to empower and inspire you to create visionary growth strategies for the future, enabled by our extraordinary depth and breadth of thought leadership, research, tools, events and experience that assist you in making goals a reality. Our understanding of the interplay between industry convergence, Mega Trends, technologies and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying the “Accurate Forecast” in every industry we cover so our clients can reap the benefits of being early market entrants and can accomplish their “Goals & Objectives”.
66,085,149
'Words With Friends' adds President Trump's 'covfefe' to its dictionary

Charles Ventura | USA TODAY

Trump's 'covfefe' typo may be his most famous tweet yet: "Despite the negative press covfefe," Trump wrote, and nothing more. The tweet was deleted, but only after it had been up for several hours. It gave Twitter a lot to chew on.

Sure, Merriam-Webster can't explain President Trump's bizarre "covfefe" tweet. That isn't stopping at least one gamemaker from trying, though.

Words With Friends, a popular mobile word game, announced the decision to add covfefe to its dictionary Wednesday, allowing players to use the president's already infamous tweet against their opponents. The multiplayer game even went a step further, defining the typo word as “the amount and quality of reporting when autocorrect fails you at 3am.”

This is not the first time Words With Friends has added a trending social media word to its lexicon. Back in April, it added “ew” based on a tweet by NBC Tonight Show host Jimmy Fallon.

It was not immediately clear what Trump meant by his tweet. Hours after the original message was deleted, Trump himself tweeted Wednesday morning: "Who can figure out the true meaning of "covfefe" ??? Enjoy!"

Who can figure out the true meaning of "covfefe" ??? Enjoy! — Donald J. Trump (@realDonaldTrump) May 31, 2017
66,085,200
1. Field of the Invention

The present invention relates to an optical modulator that modulates light with light to generate an m-ary optical signal for use in, for example, long-haul high-capacity fiber-optic communication, where m is an integer greater than two.

2. Description of the Related Art

Due to the pervasive spread of the Internet, the need for long-haul large-capacity optical fiber communications has been increasing. Communication capacity is being enlarged in two ways: by using wavelength division multiplexing (WDM) to increase the number of simultaneously transmittable channels, and by increasing the transmission rate in each channel. M-ary modulation, which is already in use in mobile radio communication systems, is now attracting attention as one possible means for increasing the capacity and range of optical communication. Many researchers are currently studying its possible application to optical fiber communication systems.

The two major optical communication systems that have been put into practice or are under study are amplitude shift keying (ASK) or on-off keying (OOK) modulation, in which the signal allocated to each time slot has either a weak (‘0’) or strong (‘1’) intensity, and binary phase shift keying (BPSK) modulation, in which the signal allocated to each time slot has a phase shift of either 0 or π radians. In both of these modulation systems, only one bit (two possible values) can be transmitted at once.

A typical m-ary modulation scheme is quadrature phase shift keying (QPSK). In QPSK, the phase of the signal in a single time slot may be shifted by 0, π/2, π, or 3π/2 radians, enabling the transmission of two bits (four possible values) at once. If used in optical fiber communications, QPSK modulation would allow twice as much data to be transmitted in the same frequency band as by OOK or BPSK modulation, resulting in increased communication capacity and improved spectral utilization efficiency. Conversely, since QPSK uses only half as much bandwidth as OOK or BPSK modulation to transmit the same amount of data, when QPSK is used in a WDM system, the wavelength channel spacing can be reduced, increasing the communication capacity and again improving the spectrum utilization efficiency. The reduced bandwidth would also make the transmitted signal less vulnerable to waveform distortion due to group velocity dispersion in the optical fiber, so another advantage of QPSK would be an increased communication range.

The optical QPSK modulators now under study are typically electro-optical (E/O) systems that convert electrically modulated signals to optically modulated signals. An exemplary system of this type is described by Kawanishi et al. in ‘80 Gb/s DQPSK modulator’, Technical Digest of OFC 2007, OWH5, 2007. Gb/s is an abbreviation for gigabits per second. The abbreviations Gbps and Gbits/s are also used. The system described by Kawanishi et al. employs Mach-Zehnder (MZ) interferometric lithium niobate (LiNbO3) modulators, which exploit the Pockels effect in an LiNbO3 crystal. Two such modulators (MZA and MZB) are used to generate a pair of 40-Gb/s BPSK signals, which are then combined in an optical coupler (MZC) to generate an 80-Gb/s QPSK signal. In this and other known electro-optical QPSK modulators, the bit rate of the QPSK signal is limited by the operating speed of the component E/O modulators.
In order to obtain faster bit rates, it is necessary to increase the operating speed of the electronic devices that generate the electrically modulated signals as well as the electro-optic conversion speed of the E/O modulators themselves. The state of the art in commercially available devices is currently about 50 Gbps, limiting the QPSK signal to about 100 Gbps. To generate QPSK signals beyond the limits of electronic devices and E/O optical modulators, it would be preferable to use an all-optical modulator in which the signal light is modulated by an optical modulating signal or control signal. A preferred optical modulation method uses the optical Kerr effect in an optical fiber. The optical Kerr effect occurs when the refractive indexes of a fiber vary due to propagation of light with high intensity in the fiber. The response speed of the optical Kerr effect is on the order of a few femtoseconds. An exemplary method of fabricating an ultra high-speed optical modulator or switch by utilizing the fiber-optic Kerr effect has been described by Morioka et al. in ‘Ultrafast optical multi/demultiplexer utilising optical Kerr effect in polarisation-maintaining single-mode fibres’, Electronic Letters, Vol. 23, No. 9, pp. 453-454, 1987. This type of optical fiber has two axes, referred to as the slow axis and fast axis, in a plane orthogonal to the longitudinal axis of the fiber. Linearly polarized light propagating through the fiber experiences different effective indexes of refraction depending on whether the light is polarized parallel to the fast axis or the slow axis. The Kerr medium used by Morioka et al. includes two polarization-maintaining optical fibers spliced end-to-end with mutually orthogonal slow axes so that the birefringence of the two fibers cancels out. In the experiment described by Morioka et al., linearly polarized OOK-modulated control light pulses and unmodulated probe light pulses were coupled into this medium, respectively polarized parallel to and at a 45° angle to the fiber axes. A pulse of probe light propagating through the medium together with a pulse of control light had its polarization plane rotated by the Kerr effect, which produced a phase difference φ between the probe light components polarized parallel to and orthogonal to the control light. The intensity of the control light could be adjusted to create a phase shift φ of π radians and thus a polarization rotation of 90°. When no control light pulse was present, there was no net phase shift and the polarization plane of the probe light pulse was not rotated. This experiment demonstrates that the fiber-optic Kerr effect can transform an OOK or ASK modulation pattern into a phase modulation pattern and suggests that the fiber-optic Kerr effect could be used to realize an all-optical BPSK modulator operating at a bit rate of at least several hundred gigabits per second. It is easy to infer that a QPSK optical signal could be generated by combining two BPSK signals generated in this way in an optical coupler such as coupler MZC described by Kawanishi et al. Generating an optical QPSK signal by combining two optical BPSK signals, however, requires precise control of the phase relationship between the two optical BPSK signals. In the typical case in which the two optical BPSK signals are modulated with phases of 0 and π, for example, an ideal optical QPSK signal is not obtained unless the phase difference between them is precisely π/2. 
The phase of the individual optical BPSK signals is not determined solely by the electrical modulating signal used by Kawanishi et al. or the optical control signal used by Morioka et al.; the phase is also shifted by the optical lengths of the individual paths taken by the optical signals. If the optical modulation scheme proposed by Morioka et al. is used, an optical fiber with a length of from several tens of meters to several kilometers is required to obtain an adequate optical phase modulation effect from control light of a practical intensity. This length is millions or billions of times the wavelength of the optical signal. Precise control of the relative phases of two optical signals propagating through fibers of this length would be extremely difficult; the necessary phase control equipment would have to respond at high speed with high precision to measured phase changes, and would also have to compensate for phase drift due to temperature changes and other environmental factors. Such a phase control system would be prohibitively complex and expensive. Thus while it is easy to conceive of an optical QPSK modulator using an optical coupler such as coupler MZC in Kawanishi et al. to combine two optical BPSK signals generated by the optical modulation technique described by Morioka et al., a practical optical QPSK modulator of this type would be extremely difficult to build and would require complex and very costly optical phase control apparatus.
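(For orientation only: the following estimate is standard fiber-optics background rather than part of the patent text, and the numbers are rough, typical values.) The Kerr-induced nonlinear phase shift imparted over an effective fiber length Leff by light of power P is commonly written as

φ ≈ γ·P·Leff, with γ = 2π·n2/(λ·Aeff),

up to a polarization- and scheme-dependent factor of order unity, where n2 is the nonlinear refractive index (roughly 2.6×10^-20 m^2/W for silica glass), λ is the wavelength, and Aeff is the effective mode area. Because γ is only on the order of 1-20 W^-1·km^-1 in practical fibers, obtaining φ = π with control powers near a watt requires fiber lengths ranging from tens of meters (in highly nonlinear fiber) to kilometers (in standard fiber), which is exactly the path-length problem described above.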
66,085,202
Q: How can I query a Cassandra cluster for its metadata?

We have a process creatively named "bootstrap" that sets up our Cassandra clusters for a given rev of software in an environment (Dev1, Dev2, QA, ..., PROD). This bootstrap Creates/Updates keyspaces and column families as well as populating initial data in non-prod. We are using Astyanax, but we could use Hector for bootstrapping. Given that another team has decided that each environment will have its own datacenter names. And Given that I want this to work in prod when we go from two to more datacenters. And Given that we will be using PropertyFileSnitch: How can I ask the Cassandra cluster for its layout? (Without shelling to nodetool ring) Specifically, I need to know the names of the datacenters so I can Create or Update a keyspace with the correct settings for strategy options when using NetworkTopologyStrategy. We want 3 copies per datacenter. Some envs have one and several have two; eventually production will have more. Is there a CQL or a Thrift call that will give me info about the cluster layout? I have looked through several TOCs in various doc sets, and googled a bit. I thought I would ask here before digging through the nodetool code.

A: I'm not sure how Hector or Astyanax expose this, but the basic Thrift method describeRing(keyspace) should give you what you're looking for. Part of the information that it contains is EndpointDetails structs that look like this:

endpoint_details=[EndpointDetails(datacenter='datacenter1', host='127.0.0.1', rack='rack1')]

Along with the rest of the results from that method, you should be able to figure out tokens, DCs, racks, and so on, for each node in the cluster. Since you're using a Java client, you could also use some of the JMX methods (which nodetool uses) to describe more select parts of the cluster. For example, you might look at the snitch mbean ("org.apache.cassandra.db:type=EndpointSnitchInfo"), specifically the getDatacenter(ip) and getRack(ip) methods.
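If you do go the JMX route from Java, here is a minimal sketch (my own illustration, untested against any particular Cassandra version; the host, port, and target IP are placeholders, and 7199 is Cassandra's usual JMX port):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SnitchInfo {
    public static void main(String[] args) throws Exception {
        // Connect to a node's JMX endpoint (adjust host/port for your cluster).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName snitch = new ObjectName(
                    "org.apache.cassandra.db:type=EndpointSnitchInfo");

            // Ask the snitch which datacenter and rack a given node belongs to.
            String ip = "127.0.0.1";
            String dc = (String) mbs.invoke(snitch, "getDatacenter",
                    new Object[] { ip }, new String[] { String.class.getName() });
            String rack = (String) mbs.invoke(snitch, "getRack",
                    new Object[] { ip }, new String[] { String.class.getName() });
            System.out.println(ip + " -> datacenter=" + dc + ", rack=" + rack);
        } finally {
            connector.close();
        }
    }
}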
66,085,399
Endocrine pancreas in the postnatal offspring of alloxan diabetic rats. This study was conducted to investigate the morphological changes in the postnatal pancreatic islets of offspring of diabetic rats. Diabetes was induced in female Sprague Dawley rats by intravenous injection of alloxan (40 mg/kg). After 1 week, rats with blood sugar above 270 mg% were bred and watched until spontaneous delivery occurred. Litters were sacrificed by decapitation at the following ages: 0, 8, 24, 72 and 168 h, and compared with their controls. Blood sugar levels were significantly higher in the neonates of diabetic mothers immediately after delivery compared to the controls, then became normal after 8 h. Islets of the offspring of the diabetics at birth showed weak positive staining for insulin using immunocytochemical techniques. By 72 h some cells showed immunopositive staining similar to the control, while at 168 h all the beta (B) cells were stained normally. Beta cells of the islets from the diabetic series at birth were almost completely degranulated except for scattered granules toward the periphery. Their cytoplasm exhibited glycogen and lipid accumulations. Cells also showed signs of hyperfunction in the form of an extensive endoplasmic reticulum and well-developed Golgi complex with distended Golgi cisternae. At 8 h postnatally the population of pale secretory granules was markedly increased. The changes described at birth persisted at 24 h and, to a lesser extent, 72 h after delivery. At the age of 1 week, the beta cells appeared to be normal.
66,085,543
Our Clients

Our clients are on the move. Some are moving toward retirement and want to travel. Others are moving a hundred miles an hour with kids in school and burgeoning careers. Still others are moving into new chapters of their lives and want to make sure they’re making sound decisions. If you’re on the move, count on the InTrack team to monitor your financial life so you can concentrate on staying on track to achieving your goals.

If you’re like most people approaching retirement age, you probably have some concerns. You may be worried that the money you’ve worked hard to accumulate won’t last. And you might be worried that you won’t be able to live the life you’ve become accustomed to. We work with our clients to build retirement plans that give you peace of mind.

Raising a family and running a business at the same time often means you push yourself to your breaking point nearly every day, yet you go to bed at night knowing there’s more that needs to be done. We work with our clients in ways that free up their time, not add more to their to-do lists.

Navigating your way through life on your own can be a rewarding process when you’re ready for it. But for those times when you’re not, we’re here to make sure you have the knowledge and support you need in order to comfortably navigate your life’s transitions without worry.

Want to instantly see if you're on track to achieve your financial goals? This short exercise will lead you through questions about yourself, your goals, and your future objectives. You’ll get instant feedback about whether you’re on track or not. Then, you can choose to submit this information to our financial planning team and request that we review the information with you at no obligation. It’s the first step in utilizing our state-of-the-art financial planning software that aggregates all facets of your financial life. Quick, easy, and helpful.

The content is developed from sources believed to be providing accurate information. The information in this material is not intended as tax or legal advice. Please consult legal or tax professionals for specific information regarding your individual situation. Some of this material was developed and produced by FMG Suite to provide information on a topic that may be of interest. FMG Suite is not affiliated with the named representative, broker-dealer, state- or SEC-registered investment advisory firm. The opinions expressed and material provided are for general information, and should not be considered a solicitation for the purchase or sale of any security.
66,085,680
Ask HN: How to begin learning enterprise development? - throwaway-123

I am interested in writing software for business; where should I begin my journey?

====== sheraz

CRUD apps and Java with Spring Boot. .NET and SQL Server. Strangely, I see a lot of Node.js for IoT things at big companies as well.

------ nwrk

IBM
66,085,730
The Income Tax department has slapped 121 cases for prosecution of those entities whose names have appeared in the HSBC Geneva bank list even as undisclosed income to the tune of Rs 4,800 crore (Rs...... Bank claims procedures were followed, affected communities and stakeholders were consulted A new report by the International Consortium of Investigative Journalists says the projects funded by the...... The bill is likely to be tabled in Parliament on Friday The Union Cabinet on Tuesday approved a Bill that seeks harsh penalties and rigorous imprisonment for those having unaccounted money abroad....... The Reserve Bank on Thursday said it is keeping a close watch on global banking major HSBC, which is facing a multi-nation probe including by Indian tax authorities for alleged tax evasion and money...... The former HSBC employee who leaked sensational secret documents alleging the bank helped wealthy customers dodge millions of dollars in taxes warned today that the revelations are just the "tip of......
66,085,792
"That's new." "Oh, yeah." "My girlfriend gave me a watch." "Do you give a crap, or are you just hoping that by pointing out something new of mine, i'll segue the conversation into talking about something new of yours, like..." "Your new prepubescent "miami vice" beard." "There are those who say I look like a young kenny loggins." "Who?" "me." "Time to go teach the new interns." "They started a week ago, and they..." "Suck." "This patient's loss of temperature sensation on the contralateral side" "is consistent with which syndrome, rodney?" "There was katie, the self-centered climber..." "Mcconaughey's." "Mcconaughey's." "Mcconaughey is not a syndrome." "He is, however, one of our finest working actors." "I recently learned how I could lose him in 10 days." "Katie is sabotaging you, I assume, because she knows the answer." "Brown-sequard syndrome." "Yay, katie got it." "There was denise, who could be a bit callous..." "You know..." "It's ironic that "cancer" starts wh "can,"" "because at this stage, there's nothing wecando about it." "Let's take a walk, sunshine." "I..." "And there was ed." "Ed, did you finish those case reports last night?" "I totally was gonna do it, man, but I was on this "lost" fan site last night talking to this chick." "About an hour in, I realize it's a dude messin' with me." "I've been there." "Revenge time." "I signed up for a new account-- Hotgirl99." "I start flirtin' with this dude." "I'm like, "oh, hey, I look just like kate."" "And he's totally into it, right?" "Next thing you know, I got him to agree to a personal meet-up here in the hospital" "where he'll be holding a red balloon." "Wait a minute, you're hotgirl99?" "Yeah." "The new interns all suck, yeah, but I'm gonna handle it." "Like you handled jimmy the overly touchy orderly?" "Somebody looking for me?" "No, jimmy, we're fine." "All right." "Let me know if you need anything." "Okay." "Have you noticed he only touches above the waist now?" "You're welcome." "You know, our intern class was the last good one." "You must've liked myclass a little, seeing as you almost married me." "Mm, yeah, but hello, keith, I didn't." "That was harsh, but no one cared because today we were meeting dr." "Kelso's replacement as chief of medicine-- Dr. Taylor maddox." "She was smokin' hot, so first I had to see her like this..." "I wanna be your lover, baby, I wanna be your man" "I don't want no other, baby, want you again and again and I've waited all night long" "now I know what I wanna do" "I just wanna make love to you come on!" "Yeah!" "Hi." "Hi." "But then I noticed how friendly she seemed." "She had the most infectious smile." "No one could resist it." "And I mean no one." "no." "Yes!" "Never!" "He it comes!" "Clear." "Aah!" "Brava!" "Oh, yeah!" "Yes!" "You proud fool." "Okay, I'm gonna tell you everything you need to know about me." "One" " I have an open door policy." "Two-- if you do your job well, you're great with me." "Okay?" "And three" " I don't like spiders." "So if you see one, I want you to stomp it." "I want you to stomp it dead, okay?" "I don't want you to put it into a little cup and take it outside, because it'll just find its way back in, okay?" "They're sneaky." "Oh, can someone help that man to his room?" "Oh." "No, I-i'm not sick." "I'm just cold, and there were no chairs." "I'm" " I'm a lawyer." "Of course you are, sweetie." "I'm on it." "She's the new boss, ted." "Aah!" "Does it hurt here?" "Or here?" "How about here?" "Or under here?" "Do I know you?" "Jimmy, heel!" "Slowly move." "Slowly move." 
"Don't rush anything." "No, jimmy." "Oh, jimmy." "Breathe deep for me real slow." "I just wanna feel you breathe." "Yeah." "That's a boy." "That's a boy." "That's a boy." "Good." "Breathe on my face." "Okay, time to connect with the new chief using a picture of my son and some brilliant acting." "Oh!" "Is that your boy?" "What's that?" "Oh, yeah." "His name is sam." "I have a daughter of my own." "It's working!" "Now seal the deal with a follow-up question, but nothing too personal." "Did you deliver vaginally?" "I did." "Big girl." "Must've hurt." "Wow." "Hi, Mr. Hicks." "So you were admitted to the hospital with shortness of breath?" "Yes, that's right." "Okay, so I'm just gonna..." "Die!" "Die!" "Die!" "Sorry." "Spider." "Um, i'm just gonna take you up and get you a full-body scan, okay?" "Yeah." "And there she goes." "A chief of medicine working one-on-one with a patient?" "Maybe she's not so bad." "I think she's probably a jerk." "Why?" "That position attracts jerks." "Plus, well, I know jerks." "Hell, I married a jerk." "I divorced a jerk." "New freckle." "I'm interrupted by jerks." "Look, just give me two minutes with this maddox, and I'll know for sure whether or not she's a jerk." "Well, go!" "Pass." "I know people are down on these new interns, but everyone's teachable, you know?" "Even jo." "Who?" "I like to call denise "jo" because she reminds me of that streetwise, mannish girl on "facts of life."" "You know, katie's cutesy and blonde." "You could call her "blair." "blair" is stupid." ""blair" is perfect, but now I can never use it in front of turk, or he'll say 'you're welcome!" "' in that reallymug way of his." "Okay, let's gather 'round for rounds." "Get it? "'round for rounds"?" "You can use it." "Our first patient is presenting with biliary colic..." "And, uh-- ed, would-- Would you mind-- would you mind turning off the beeping if you're gonna text?" "I'll turn it off." "Thank you." "Uh, upper right abdominal pain." "What's your diagnosis..." "Jo?" "I know." "Of course you do, katie, because you know anything that anyone's ever asked you ever." "But I didn't ask you." "I asked jo." "Well, the patient definitely looks like hell, so" "Quick side note-- When a patient's eyes are open, that usually means that they're awake." "Sorry, Mrs. Gallagher." "You look very beautiful today." "Doesn't she?" "Yeah, your jaundice makes you glow." "Yes, yellow like the sun." "Ed, stop texting!" "I'm not texting." "I'm looking at photos of sienna miller's breasts." "There's a difference." "Okay, well, do that more later..." "When we're together." "Jo, you were saying?" "I'm guessing Mrs. Gallagher probably has cholecy" "Cholecystis." "She has cholecystis." "I'm gonna cut your throat." "Okay, that's enough, jo." "You, too, blair." "Welcome!" "Damn it!" "If you want, you can call me "tootie."" "I don't think it's racist." "Oh, oh, fine." "I'm tootie, and I know how to go on the web and bittorrent." "You probably shouldn't be texting while you're leading rounds." "I'm-- Oh, I'm-- th-- this is his phone." "It's not my phone." "W.t.h.?" "!" "Oh, fine." "It's not your phone." "Hey, want a phone, buddy?" "no." "All right, listen, I want you to run some renal function tests on Mr. Hicks." "Can you do that, or do you have more questions about my vagina?" "Lie!" "no." "Hmm." "Dr. Maddox, I just wanted to say to be working with you." "Katie is such a kiss ass." "You mean "mini elliot"?" "What?" "That's what everybody's calling her." "It's probably just because we are..." 
"Both blonde and have perky boobs." "Or..." "It's because she is incredibly whiny and lf-involved, and you, barboo, for the last year and a half or so, have beenthemost self-involved and whiny person in the galaxy." "What nobody understands aboutme..." "Point proven." "Thank you." "Okay..." "Mm, suck-up's back." "You're abrasive." "That's enough, blair." "Welcome!" "Damn it!" "Okay, listen up, guys." "I gotta go take care of Mr. Hicks, so I need you guys to watch the floor." "Check every patient and switch out any lines that need changing." "Monitor Mr. Lombardi's blood gas, and intubate him if he starts getting acidotic." "Work as a team, you'll be all over it." "Let's have some hands in, okay?" "Somedy has some very soft hands." "I sleep in gloves." "Right on." "Okay, nobody die!" "Nobody die!" "Dr. Kelso..." "Now that you're retired, I can finally say this." "You, sir..." "Oh, I can't do it." "You'll get there, ted." "What's with the balloon?" "It's been a sad day." "Can you even believe Dr. Cox, callingmewhiny and self-involved?" "Elliot, you know how we're so close we can say anything to each other, right?" "Yeah?" "Look, over the last year or so, you've been going through a lot." "You got engaged." "You broke off your wedding at the last second." "It'd be easy for anyone to become a little self-absorbed." "What are you saying, carla?" "Thisis why I come here every day." "You come hereeveryday?" "Loser!" "My balloon!" "Mr. Hicks' renal tests came back negative." "Thank god." "See, bernie isn't just a patient." "He's also my lover." "Really?" "no!" "He's fat, bald and ugly!" "Thanks a lot!" "Aah!" "Okay, just swallow your pain and fix this." "Look, Dr. Maddox," "I think you're a very well-built..." "Sturdy woman." "Like a shed." "No, not like a shed." "Like a..." "Naughty..." "Like it." "Structure." "Structure?" "I should go." "Ooh!" "Ugh!" "Aah!" "ow!" "Stop confusing me by being nice and giving me phones." "Fine!" "But why did you have to trip me?" "I'll answer that question with another question-- 'cause I wanted to?" "Ahem!" "Excuse me." "Yeah?" "Do you think it would've been funny if he had broken his neck?" "I feel like you want me to say "no."" "What's your name?" "Oh, boy." "You really are new here." "Uh-oh." ""the janitor." Howdy." "Oh, say, beth, your patient Mr. Lombardi is about to crash." "How'd that happen?" "Well, sometimes when people get owies and they're left untreated, they become even bigger owies." "Hey!" "You guys are supposed to tube him if he got acidotic." "What the hell?" "!" "He's not my patient, man." "Uh, I-I was over there." "Now he's coding." "Crash cart!" "Sometimes just two words are enough to make your thoughts perfectly clear..." "So you actually agree with what Dr. Cox said about me?" "Whether you're being brutally honest..." "I do." "Or holding someone accountable..." "You're fired." "What about my son?" "That's my daughter!" "I'm" " I'm" " I'm" " I'm sorry." "May I see it again?" "Please?" "Aah!" "Ew!" "Or apologizing for a major screwup." "We're sorry." "Yeah." "We'll-- we'll get it together from now on." "Promise." "But for some reason, I didn't want to hear it." "You know what?" "I'm done with you guys." "Hey, t-bear, every time I see maddox," "I get one step closer to ending my career." "Aw, you'll be fine." "Maddox seems cool." "I don't know about that." "I" " I just can't shake the feeling that that woman is a complete tool." "You've been saying that all day." "Why don't you just gtalk to her and figure it out once and for all?" "Nah." 
"Hey, aren't you supposed to be at rounds?" "Oh, they're driving me crazy." "Someone needs to send those interns to an internment camp." "De, internment camps are never funny." "I always forget that turk is one-eighth japanese." "he" "Y, can you give Mr. Hicks a full cardiac workup?" "Of course." "And what would you say if I said he was my lover?" "I'd know that obviously you were joking because you are way out of his league." "Oh, there-- there that is." "We're doing this." "Okay." "I like you the best." "oh." "Thanks, mom..." "Ma'am." "Ma'am." "She's not your mom." "Well,shecertainly doesn't think th I'm all about me." "I was just being honest." "I'd want you to do the same for me if I were doing things I might regret later." "Name one thing that I've done lately that I'm gonna regret." "Mmm!" "oh!" "Hey, blondie!" "Ahh." "Show me your rack!" "no!" "Ooh!" "Yah!" "Oh, like that is gonna come back to haunt me." "What else you got?" "Well, what about the way you're always mocking keith about not marrying him?" "You probably don't even realize how devastated he still is." "Oh, hey, ladies." "Awesome day, huh?" "Awesome!" "Ugh." "I hope he'll pull through." "And now on to our next patient." "Let's go, bitches!" "Why is Dr. Squeaky pants leading rounds?" "Ere the hell is dorian?" "Keith is not still devastated." "I mean, what the hell is carla talking about?" "She's completely off base, right?" "I have to disagree with you." "You don't see my point at all?" "You don't understand." "I'm married to carla, right?" "She has spieseverywhere." "So ihaveto disagree with you." "I'm on to you, rochelle." "Dr. Reid?" "May I talk to you?" "Ted, are you gonna talk?" "In a second." "I'm just waiting for the antianxiety medication to kick in." "And..." "There it is." "Hey, baby." "Yeah, ted, i'm actually in charge of keeping a lot of people alive, so" "Carla's right." "Keith's a mess." "He hasn't been able to pull out of it since you ended things." "He's sad all the time." "Yeah, I don't see it." "Of course you don't." "No one ever wants the person who hurt them" "to ever see how badly they've been hurt." "Morning." "Morning." "It's amazing how the exact same question can have totally different connotations." "May I talk to you?" "May I talk to you?" "May I talk to you?" "I have to ask, when I fired you before..." "Did you think I was kidding?" "No, I knew you were serious 'cause I heard that you fired jimmy the orderly." "What'd he do?" "Okay, this is where you keep all of the tension." "Do you mind if I do a yogi chant?" "It relaxes the muscles." "Do you feel that?" "Mm-hmm." "But that's just jimmy being jimmy." "You know, the incident yesterday with Dr. Dorian" "I swear to you that will never happen again..." "Even though he deserved it." "I don't know." "Come on!" "I've been here forever." "You can't just throw me out of the hospital." "No, but I can walk you out without you even noticing." "Well played." "I need your keys." "mm." "Where are the rest?" "I got tired of carrying 'em all, so I made one that works on everything." "Watch." "Huh?" "How about that?" "Come on." "Hit the highway!" "Thank you." "no!" "Keith..." "I just realized that I never really took the time to apologize for the way things ended between us." "I mean, I did say that I was so sorry right when it happened." "Remember?" "We were outside," "I gave you the ring back, you started crying and cr" "Uh, no need to recap, elliot." "Right." 
"Look, I know this was my decision, so it was easier for me to move on, and, well, you know, make jokes and stuff." "I guess that I've been so self-involved" "I never stopped to think that you still may be hurting." "Anyway, I just wanted to really apologize..." "For everything." "Thanks." "It means a lot." "Hug?" "no." "Okay." "Why would you pass off your interns?" "I just" " I can't deal with them anymore." "Really?" "Because I had an intern just a couple of years back thatihated." "Honestly, he was so maddening that my therapist put me on a suicide/homicide watch." "Do I know this intern?" "Intimately." "I figured." "This is a teaching hospital." "You have to teach." "I know." "I'm just" " I'm" " I'm so tired of their attitude and I'm tired of their ignorance." "It's the same thing year after year." "I'm just..." "Tired." "Here comes the tongue-lashing." "Boy, I get that." "Why do you think I've been avoiding the new chief?" "Because if I do talk to her, and she is indeed a jerk, then once again, i'm gonna have to bethat guy who gets in her face over every little injustice." "But you wanna know something?" "I'm tired, too." "So what do we do?" "I don't know." "Oh, my god." "He's treating me like an equal." "Quick, do something equals do!" "Why would you do that?" "I don't know." "I thought equals shared coffee." "no." "So this intern that you mentioned..." "Earlier, i'm sure eventually he turned into a pretty amazing doctor." "Didn't he?" "Actually, it was a "she."" "It wasn't me?" "No, no, it was you." "It was you." "Hey." "We'll get there." "Okay, you get nice and comfy." "Dr. Reid gave me mr." "Hicks' tt results, but they're locked in my briefcase, and I lost the key." "Allow me." "Mm, loving this thing!" "And..." "Here it is." "Hey, how come all you have in here is a smiley face button and a revolver?" "Well, one's in case I get sad, and the other one's in case I get really sad." "Well, see you tomorrow." "We'll see." "ah." "Oh, Mr. Hicks' cardiac test results." "They're negative." "mm." "What do you know about that?" "Just like the 100 other tests we ordered for a man who's only complaint was shortness of breath." "I assume there's a nugget of a point buried in there." "Why you running that guy through the wringer?" "Because he's got awesome insurance." "He's 100% pure profit machine." "Ca..." "And might I add..." "Ching" "I mean, I may even order an m.r.i." "Just to see if he's actually stuffed with money." "In fact, I think I'm ordering one." "And since you cried about it, why don't you take him to radiology like a good little boy?" "My head is a box..." "Sometimes it really sucks to do the right thing..." "And that's the way I like it" "so please you were right about me..." "And thanks." "Open your heart don't mention it." "Love you." "I love you, too." "Trying to teach a bunch of jerks..." "Okay, when dealing with peripheral neuropathy, always think..." "Diabetes first." "Or once again, facing off with a jerk." "Look, you can't just bleed a guy's insurance dry just because you want to pay for a new x-ray machine." "Oh, will you shut up if I give you a key that opens everything?" "Oh..." "The more things change, the more they stay the same, huh?" "Hi, folks." "If you need anything cleaned up, just give me a shout, okay?" "Sure." "Who the hell is that?" "I don't know." "I like him." "You guys psyched?" "It's our eighth year." "Who's with me?" "Yay." "Come on." "I know it's tempting to just mail it in, but there's still a lot of people who rely on us week to week." 
"I think we owe it to them sotros semanalmente." "To be as inspired as we were in our first few years." "Now I know we never do great come medical awards season..." "Except for Dr. Shalhoub." "He wins everything." "But I still think we're as good as anybody else out there." "The nielsens certainly beg to differ." "Oh, they're just upset 'cause their insurance won't cover a private room."
66,085,961
Damian Hinds emphasised academic rigour and the importance of "character"

Schools need to prepare young people for a digital revolution and a fast-changing jobs market, says England's new education secretary, Damian Hinds. In his first public speech since taking up the post, Mr Hinds said schools needed a mix of traditional academic subjects and a sense of "resilience" and skills such as public speaking. Mr Hinds said that a high proportion of new jobs would require digital skills. He also called for improvements in vocational training for adults. The education secretary said young people needed the skills to be able to "write apps" as well as being able to use them. He said lessons in computing were needed to prepare young people for industries being changed by artificial intelligence and the arrival of technologies such as autonomous vehicles. Speaking at the Education World Forum in London, Mr Hinds emphasised the need for both "core academic subjects" and other, "soft skills" that could make young people more employable. 'Sports and voluntary work' But in his first presentation since becoming education secretary in the recent ministerial reshuffle, he gave few clues about any significant change in direction. Instead, Mr Hinds focused on how schools needed to prepare people for a shifting jobs market - and the importance of skills in communication and developing character. "I would suggest that there is nothing soft about these skills," he told this international education gathering. "The hard reality of soft skills is, actually, these things around the workplace, and these things around character and resilience are important for anybody to achieve in life, as well as for the success of our economy," said Mr Hinds. He stressed the importance of the "ethos of a school, the expectations set for students" and activities such as "sport, public speaking and voluntary work". Mr Hinds said these would shape the "character, resilience and workplace skills that our young people take with them". The new education secretary also pointed to the importance of helping adults to retrain for a changing jobs market. The conference heard that the UK's economy could receive a huge financial boost if there were improvements in the levels of basic skills. Andreas Schleicher, the Organisation for Economic Co-operation and Development's (OECD) director of education, said that a fifth of 15-year-olds in the UK struggled to achieve even the most basic levels in maths and reading. "If the United Kingdom were to ensure that all students had at least basic skills, the economic gains could reach $3.6 trillion (£2.58trn) in additional income for the economy over the working life of these students," he told the conference. On the basis of standards rising in other countries, Mr Schleicher said: "Such improvements in educational performance are entirely realistic."
66,086,178
Lead ions close steady-state sodium channels in Helix neurons. Extracellularly applied Pb2+ (1-150 microM) induced an outward current (IPb) in intracellularly perfused snail neurons. The current-voltage relationship of the Pb(2+)-induced current was linear over the potential range of -100 to -40 mV with negative slope conductance. The Pb-induced current was strongly dependent on the Na+ gradient. The IPb in intra- or extracellular K+- and Cl(-)-free or -rich solutions was almost the same as in control external and internal salines. The negative slope of the I-V curve and the decreased conductivity during Pb2+ application suggested that IPb is due to the blocking of the resting Na conductance. Data obtained from single-channel measurements also supported this conclusion. Patch-clamp data showed that the steady-state Na channel has a conductance of 14 pS and both closed and open time-distributions displayed single-exponential character.
66,086,437
Influence of Trypanosoma evansi infection on milk yield of dairy cattle in northeast Thailand. The effect of subclinical Trypanosoma evansi infection on the milk yield of newly introduced Holstein Friesian dairy cattle was investigated. Five hundred pregnant heifers were introduced in Loei Province, northeast Thailand, and a total of 168 blood samples were collected at 20 farms during 6 visits over 2 years. Trypanosomes were found in cattle in June and November 1996, after which the parasite was rarely seen. On the other hand, the infection prevalences by antigen-detection ELISA (Ag-ELISA) were around 40% from the first sampling through October 1997; then, antigenemic cattle decreased to 20% by June 1998. Milk yields of the cattle with detectable parasitaemia in June and November 1996 were significantly lower than those of the non-infected cattle by Student's t-test. Similarly, the milk yields of Ag-ELISA positive cattle were lower than those of negative cattle at every sampling, and significant differences were observed during the first year and in February 1998 (tested by 2-way ANOVA; T. evansi status and herd as factors). This study suggested that subclinical trypanosomosis caused a decrease in the milk yield of newly introduced dairy cattle.
66,086,997
The Bucs used a second-round pick on Southern Cal running back Ronald Jones, hoping he would bring explosive plays in the running and passing game. But Jones won't be given a chance to play in today's season opener against the Saints. The Bucs rookie is among the list of inactive players. Peyton Barber, Jacquizz Rodgers and undrafted rookie Shaun Wilson will be the running backs for Tampa Bay. Wilson also serves as the Bucs' kickoff returner, while Jones is not a major contributor on special teams. Jones had a difficult preseason, gaining 22 yards on 28 carries, a 0.79 average. But he ran behind the second- and third-team offensive line. His longest rushing attempt was five yards. Coach Dirk Koetter defended Jones' lack of production after the final preseason game. "Again, when you have bad running plays, rarely is it one guy's fault," Koetter said. "I know from Ronald's standpoint, it's not at all from lack of effort, or from him not knowing what he's doing." But with all the injuries in the secondary, the Bucs had to look for a position they could borrow from. Cornerback Brent Grimes strained his groin late in the week and did not travel to New Orleans. The Bucs signed cornerback Javien Elliott from the practice squad Saturday and placed cornerback De'Vante Harris on injured reserve. Elliott is among nine defensive backs active today.
66,087,213
---
abstract: |
  In this paper, we propose a model of decentralized energy storages, which serve as instruments to shift energy supply intertemporally. From the storages' perspective, we investigate their optimal buying or selling decisions under market uncertainty. The goal of this paper is to understand the economic value of future market information, as energy storages mitigate market uncertainty by forward-looking strategies. At a system level, we evaluate different information management policies to coordinate storages' actions and improve their aggregate profitability: (1) providing a publicly available market forecasting channel; (2) encouraging decentralized storages to share their private forecasts with each other; (3) releasing additional market information to a targeted subset of storages exclusively. We highlight the perils of too much market information provision and advise on the exclusiveness of the market forecast channel.
author:
- 'Qiao-Chu He, Yun Yang, and Baosen Zhang [^1] [^2] [^3]'
bibliography:
- 'InfoProvision.bib'
title: Information Management for Decentralized Energy Storages under Market Uncertainties
---

Introduction
============

Energy storages serve as instruments for *energy supply shift* by storing excess renewable energy for the future. As the power industry transitions from a regulated towards a more competitive market environment, storage devices have incentives to be charged when prices are low (corresponding to low residual demand) and discharged when prices are higher (peak demand). The economic feasibility of such delicate operations requires optimal utilization of decentralized storage devices, wherein storage devices pursue their own objective (e.g., maximize profits or minimize costs) [@Sarker2015]. Tesla Powerwall is among the most renowned examples of such applications [@Tesla2016].

The application scenarios we focus on in this paper are when energy storages are integrated with renewable energy (e.g., wind and solar power) generation. For example, energy supply shift is necessary when the peak in solar power supply is during the daytime whereas the peak demand is during the night. In this case, energy storages are placed near the wind and solar power production sites to smooth generation output before connecting to an aggregator and feeding into the grid [@Grothoff2015]. However, a fundamental problem in this integration is the so-called "merit order effect": The supply of renewable energy has negligible marginal costs and in turn reduces the spot equilibrium price [@acemoglu2015competition]. For example, since the price of power is expected to be lower in periods with high wind than in periods with low wind, the intermittency of renewable energy generation leads to market variability and uncertainty, i.e., energy prices fluctuate dynamically over time. Facing these challenges, energy storages have to employ market forecasts to improve buying and selling decisions.

The goal of this paper is to understand the economic value of future market information, as energy storages mitigate market uncertainty by forward-looking strategies. In particular, we address the following research questions:

- From the storages' perspective, what are the optimal decentralized buying or selling quantities, when the energy prices are both uncertain and variable over time?

- At a system level, what is a good information management policy to coordinate storages' actions and improve their profitability?
To be specific, we provide a stylized model of optimal storage planning under private market price forecasting. Decentralized decision makers have to consider their optimal strategies in a competitive environment with strategic interactions. In terms of information management, we consider the following possible policy interventions: (1) providing a publicly available market forecasting channel; (2) encouraging decentralized storages to share their private forecasts with each other; (3) releasing additional market information to a targeted subset of storages exclusively.

The rest of this paper is organized as follows. Section \[s-lit\] reviews relevant literature. Section \[s-model\] introduces our model setup. In Section \[s-analysis\], we carry out the analysis for two basic (simplified) models. In Section \[s-policy\], we describe several policy intervention solutions. In Section \[s-extend\], we extend the basic models in several directions. Section \[s-con\] concludes this paper with a discussion of future research directions.

Literature Review {#s-lit}
=================

Our work contributes to the literature on oligopoly energy markets. The Cournot setup is a good approximation to some energy markets, e.g., California's electricity industry, as has been demonstrated in [@borenstein1999empirical]. Similar empirical work has been done in New Zealand's electricity markets [@scott1996modelling]. This paper is partly inspired by [@acemoglu2015competition], wherein they also consider a competitive energy market with highly asymmetric information structures. While they seek to mitigate uncertainty and economic inefficiency via contractual designs, we pursue the same target with informational interventions. We also focus on energy storages, whereas they consider energy producers. In terms of energy storage modeling, our model extends a similar work presented in [@contreras2015cooperation], wherein they assume complete information and a deterministic demand function.

The fundamental inefficiency of such an energy market is driven by highly volatile local market conditions (e.g. electricity prices), for instance, due to intermittency in the renewable energy supply. For this reason, there is a growing literature on the use of an energy storage system to improve integration of the renewable energy [@Dicorato2012; @Shu2014]. With this motivation (while abstracting away from the physical characteristics of the renewables), our model is closely related to this literature by incorporating both intertemporal variability and uncertainty (exogenous market price shocks). Given such an environment, storages will be foresighted, and joint storage planning and forecasting have consequently been reported [@Li2015; @Haessig2015]. While this literature is mostly simulation-based, our model admits tractable analysis and interpretable structural results.

Beyond discussions on distributed storage planning and control, we put emphasis on information management at a system level. In a deregulated environment, it is natural to consider that the competing storages do not observe each other's private information (private energy price forecast). Therefore, the storages have to estimate each other's private forecast and conjecture on how each other's action depends on its forecast. This strategic interaction poses a technical challenge, which is new to the energy literature. A similar problem is studied in [@Kamalina2014] in a different context (generation capacity expansion), wherein no structural results are available.
[@Shahidehpour2005] and [@Langary2014] touch upon this topic in the context of generating companies' supply function equilibrium but resort to simulation. Furthermore, the private forecast in our model is sequentially revealed at every period, while the private information in the aforementioned literature is *static* (viewed as generation companies' attribute or type). Finally, there is a long stream of literature in economics and operations research on information management in such a decentralized setting. We consider a class of equilibria wherein each agent's action depends linearly on its forecast and forms a Bayes estimator for others' forecasts. The uniqueness of this equilibrium prediction is guaranteed by [@Radner1962]. The value of a public forecast in coordinating agents' actions is pioneered in [@Morris2002]. The incentives for information sharing are studied in [@Gal-Or1985]. A recent work has demonstrated the power of targeted information release [@Zhou2016]. We systematically examine those ideas in the context of the distributed energy storage market.

Model {#s-model}
=====

**Market structure.** Consider $n$ storages that purchase and sell substitutable energy through a common market. The storages, indexed by a set $I=\{1,2,\cdots ,n\}$, are homogeneous *ex ante* and engage in a Cournot competition. Let $d_{i}^{[t]}$ denote the energy purchased (when $d_{i}^{[t]}<0$) or sold (when $d_{i}^{[t]}>0$) by the $i^{th}$ storage at time $t$, and the aggregate storage quantity is denoted by $D^{[t]}=\sum_{i=1}^{i=n}d_{i}^{[t]}$. We model the demand side by assuming that the actual market clearing price $P^{[t]}(D^{[t]})$ is linear in $D^{[t]}$, i.e., $$P^{[t]}(D^{[t]})=\beta ^{\lbrack t]}-\gamma ^{\lbrack t]}D^{[t]}+\eta ^{\lbrack t]},\forall t\in T,$$ where a random variable $\eta ^{\lbrack t]}$ captures the market uncertainty. $\beta ^{\lbrack t]}>0$ corresponds to the market potential, which also captures market variability since $\beta ^{\lbrack t]}$ is changing over time. $\gamma ^{\lbrack t]}>0$ (price elasticity) captures the fact that the market price decreases when the aggregate energy sold $D^{[t]}$ increases, as the market supply of energy increases.

To model storages' strategic interactions, our demand side setup corresponds to a scenario wherein storages are not price-takers but enjoy market power. This is supported by empirical evidence that energy prices vary in response to loads and generation, especially when the storages are of sufficient scale [@Sioshansi2009]. Even if the storages are small-scaled, the economics literature suggests that infinitesimal agents also act *as if* they are expecting the price-supply relationship [@Osborne2005]. Similar assumptions on storages' market power and price-anticipatory behavior are not uncommon in the literature [@Sioshansi2010]. Finally, with the integration of renewables and consequently the "merit order effect", the supply of renewable energy can drastically impact the spot equilibrium price considering its negligible marginal costs.

We assume that the market uncertainty follows an autoregressive process such that $$\eta ^{\lbrack t+1]}=\delta \eta ^{\lbrack t]}+\epsilon _{t},\forall t\in T,\eta ^{\lbrack 1]}\sim N(0,\alpha ^{-1}).$$ The parameter $\alpha $ is the initial information precision concerning the market uncertainty *a priori*. Standard assumptions for the autoregressive process require that $\left\vert \delta \right\vert <1$, where $\epsilon _{t}$ is an exogenous shock with $\epsilon _{t}\sim N(0,\zeta ^{-1})$.
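For reference, the standard conditional moments of this AR(1) process (implied by, though not stated in, the setup above) show how the forecasting value of a current observation decays geometrically with the horizon $k$: $$\mathbb{E}\left[ \eta ^{\lbrack t+k]}\mid \eta ^{\lbrack t]}\right] =\delta ^{k}\eta ^{\lbrack t]},\qquad \mathrm{Var}\left( \eta ^{\lbrack t+k]}\mid \eta ^{\lbrack t]}\right) =\zeta ^{-1}\,\frac{1-\delta ^{2k}}{1-\delta ^{2}}.$$ Hence, for $\left\vert \delta \right\vert <1$, information about the current shock is most valuable for near-term decisions, and its informational advantage shrinks as the horizon grows.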
We choose this stochastic process as it is among the simplest ones that capture intertemporal correlation while remaining realistic.

**Storage model**. Storages are agents who can buy energy in a certain time period and sell it in another. The net energy purchased and sold across time is required to be zero for every storage, i.e., $\sum_{t\in T}d_{i}^{[t]}=0$, $\forall i\in I$. As we seek to emphasize the interaction between storages, we also abstract away from other operational constraints such as energy and/or power limits; instead, they are modeled by a cost function associated with each storage. The battery degradation, efficiency, and/or energy transaction costs of storage $i$ are represented by the cost function $c_{i}(\cdot )$. This treatment is similar to that in [@contreras2015cooperation]. We assume that $c_{i}(d)=\varepsilon ^{\lbrack t]}\cdot d^{2}$ in the basic models. More realistically, $c_{i}(d)$ will be a power function $\varepsilon ^{\lbrack t]}\cdot d^{x}$ wherein $x\in (1,2)$, as it is known that as the depth of discharge increases, the costs of utilizing storage increase faster than linearly. We can show that most of our results remain robust when the cost function $c_{i}(d)$ is generalized within this region. We choose $x=2$ for a clear presentation of results. We also generalize the basic model to consider heterogeneous cost functions in the extensions. To summarize this discussion, the payoff of storage $i$ can be expressed as $$\pi _{i}\left( d_{i}^{[t]},t\in T\right) =\sum_{t\in T}\left[ P^{[t]}(D^{[t]})\cdot d_{i}^{[t]}-\varepsilon ^{\lbrack t]}\cdot \left( d_{i}^{[t]}\right) ^{2}\right] .$$

**Information structure and sequence of events**. Storage $i$ has a private forecast channel for the market condition. At the beginning of period $t$, storage $i$ receives a private forecast $x_{i}^{[t]}$ with precision $\rho $, i.e., $x_{i}^{[t]}=\eta ^{\lbrack t]}+\xi _{i}^{[t]}$, where $\xi _{i}^{[t]}\sim $ $N(0,\rho ^{-1})$, for $\forall i\in I$. The realizations of the forecasts are private, while their precision is common knowledge. In this paper, we use "forecast" and "information" interchangeably depending on the context. At time $t$, the sequence of events proceeds as follows: (1) The storages observe (the realizations of) their private forecasts; (2) Each storage decides its purchase or selling quantity based on its information, anticipating the rational decisions of the other storages; (3) The actual market price is realized and the market is cleared for period $t$.

  ----------------------------- -----------------------------------------------------------------------------
  $I$                           The set of energy storages. $|I|=n$.
  $J$                           The set of targeted information release recipients. $|J|=m$.
  $T$                           The set of time periods. $|T|=L$.
  $X_{i}^{[t]}$                 Information set of storage $i$ in period $t$.
  $A$                           Equilibrium base storage quantity.
  $C$                           Equilibrium response factor with respect to private forecasts.
  $B$                           Equilibrium response factor towards the public forecast.
  $d_{i}^{[t]}$                 The amount of energy purchased or sold by the $i^{th}$ storage at time $t$.
  $D^{[t]}$                     The aggregate storage quantity $D^{[t]}=\sum_{i=1}^{n}d_{i}^{[t]}$.
  $P^{[t]}$                     Market clearing energy price at time $t$.
  $x_{i}^{[t]}$                 Private forecast received by storage $i$ regarding market uncertainty.
  $x_{0}^{[t]}$                 Public forecast.
  $\eta ^{\lbrack t]}$          Market price uncertainty at time $t$.
  $\beta ^{\lbrack t]}$         Market potential.
  $\gamma ^{\lbrack t]}$        Energy price elasticity.
  $\alpha $                     Information precision for market uncertainty *a priori*.
  $\delta$                      Autoregression parameter for market uncertainty.
  $\epsilon _{t}$               Exogenous price shock at time $t$.
  $\zeta$                       Precision of $\epsilon _{t}$.
  $\varepsilon ^{\lbrack t]}$   Quadratic energy storage cost coefficient.
  $\rho$                        Precision of the private forecast $x_{i}^{[t]}$.
  $\pi _{i}$                    Storage $i$'s payoff function.
  $\sigma$                      Precision of the public forecast $x_{0}^{[t]}$.
  $\xi _{i}^{[t]}$              Noise of the private information channel of storage $i$.
  $\xi_{0}^{[t]}$               Noise of the public information channel.
  ----------------------------- -----------------------------------------------------------------------------

  : Summary of nomenclature.

Model Analysis {#s-analysis}
==============

Centralized Energy Storage Model
--------------------------------

We begin by considering a single storage and thus drop the subscript in this section. The optimal storage quantities are obtained by solving $$\max_{d^{[t]},t\in T}\sum_{t\in T}\mathbb{E}\left[ \left. \begin{array}{c} P^{[t]}\cdot d^{[t]} \\ -\varepsilon ^{\lbrack t]}\cdot \left( d^{[t]}\right) ^{2}\end{array}\right\vert X^{[t]}\right] ,$$subject to $$\mathbb{E}\left[ \left. \sum_{t\in T}d^{[t]}\right\vert X^{[t]}\right] =0,$$for any sample path generated by the $\{X^{[t]}\}$'s, wherein $X^{[t]}=\left\{ x^{[1]},\eta ^{\lbrack 1]},...,\eta ^{\lbrack t-1]},x^{[t]}\right\} $ denotes the corresponding information set. For clarity of presentation, we solve for an optimal $d^{[t]}$ in an arbitrary period $t$. In addition, we drop the superscript for $\gamma $ and $\varepsilon $ to focus solely on the intertemporal variability in market price. Notice that in period $t$, $d^{[t+1]},...,d^{[L]}$ will be anticipated future optimal quantities based on the current information set $X^{[t]}$, whereas $d^{[1]},...,d^{[t-1]}$ will be previous decisions already realized. To avoid confusion, we denote their solutions by a general $\mathbb{E}_{t}d^{[\tau ]}$, $\tau =1,...,L$. Under this notation, $\mathbb{E}_{t}d^{[\tau ]}$, $\tau =1,...,t-1$ will be known data, $\mathbb{E}_{t}d^{[t]}$ is the decision to be made in period $t$, and $\mathbb{E}_{t}d^{[\tau ]}$, $\tau =t+1,...,L$ will be the anticipated future optimal quantities. It should be emphasized that $\mathbb{E}_{t}d^{[\tau ]}$ may not be the same as the actual decision made in a future period $\tau $, for $\tau =t+1,...,L$. The timeline of this model is shown in Figure \[fig:timelinecentral\].

![Timeline of the centralized storage model.[]{data-label="fig:timelinecentral"}](timeline.eps "fig:")

By the *Principle of Optimality*, in period $t$, we use the following induced sub-problem to find $\mathbb{E}_{t}d^{[t]}$ (the optimal solution of $d^{[t]}$): $$\max_{d^{[t]},d^{[t+1]},...,d^{[L]}}\sum_{\tau =t}^{\tau =L}\mathbb{E}\left[ \left. \left( \beta ^{\lbrack \tau ]}-\gamma d^{[\tau ]}+\eta ^{\lbrack \tau ]}\right) d^{[\tau ]}-\varepsilon \cdot \left( d^{[\tau ]}\right) ^{2}\right\vert X^{[t]}\right] ,$$ subject to
$$\mathbb{E}\left[ \left. \sum_{\tau =t}^{\tau =L}d^{[\tau ]}\right\vert X^{[t]}\right] =-\sum_{\tau =1}^{\tau =t-1}d^{[\tau ]}.$$

\[pro\_central\] The optimal storage quantity in period $t$ is given by $$\begin{aligned} \mathbb{E}_{t}d^{[t]} &=&\underset{\text{base storage quantity}}{\underbrace{\frac{\beta ^{\lbrack t]}-\frac{\sum_{\tau =t}^{\tau =L}\beta ^{\lbrack \tau ]}}{L-t+1}}{2\left( \varepsilon +\gamma \right) }-\frac{\sum_{\tau =1}^{\tau =t-1}d^{[\tau ]}}{L-t+1}}} \\ &&+\underset{\text{response factor}}{\underbrace{\frac{\left( 1-\frac{\sum_{\tau =t}^{\tau =L}\delta ^{\tau -t}}{L-t+1}\right) }{2\left( \varepsilon +\gamma \right) }}}\cdot \mathbb{E}\left[ \left. \eta ^{\lbrack t]}\right\vert X^{[t]}\right],\end{aligned}$$for $t=1,2,...,L-1$, wherein $\mathbb{E}\left[ \left. \eta ^{\lbrack t]}\right\vert X^{[t]}\right] =\frac{\rho }{\rho +\zeta }x^{[t]}+\frac{\zeta \delta }{\rho +\zeta }\eta ^{\lbrack t-1]}$, for $t=2,...,L$, and $\mathbb{E}\left[ \left. \eta ^{\lbrack 1]}\right\vert X^{[1]}\right] =\frac{\rho }{\rho +\alpha }x^{[1]}$. In the final period, $\mathbb{E}_{L}d^{[L]}=-\sum_{\tau =1}^{\tau =L-1}d^{[\tau ]}$.

In this proposition, we are able to derive the optimal storage quantities in closed form, comprising two parts: The first part is the *base storage quantity*, and the second part is the *response factor* multiplied by an estimate of the market price uncertainty. The base storage quantity decreases in $\varepsilon $ (the cost coefficient with respect to the depth of discharge) and $\gamma $ (energy price elasticity). The market price uncertainty is estimated by a convex combination of the current forecast and the last-period observation: $\mathbb{E}\left[ \left. \eta ^{\lbrack t]}\right\vert X^{[t]}\right] =\frac{\rho }{\rho +\zeta }x^{[t]}+\frac{\zeta \delta }{\rho +\zeta }\eta ^{\lbrack t-1]}$. The weighting factors are proportional to the relative precision levels $\rho $ and $\zeta $. In addition, the last-period observation weighs more when the intertemporal correlation is stronger ($\delta $ is higher). From this proposition, we can clearly see that the optimal storage quantities depend on both variability (captured by the market potential coefficient $\beta ^{\lbrack t]}$) and uncertainty (captured by $\eta ^{\lbrack t]}$) in the market price. The base storage quantity demonstrates a downward distortion of the per-stage optimal storage quantity $\frac{\beta ^{\lbrack t]}}{2\left( \varepsilon +\gamma \right) }$: The component $\frac{\sum_{\tau =t}^{\tau =L}\frac{\beta ^{\lbrack \tau ]}}{L-t+1}}{2\left( \varepsilon +\gamma \right) }$ is the average per-stage optimal storage quantity across all future periods, and the component $\frac{\sum_{\tau =1}^{\tau =t-1}d^{[\tau ]}}{L-t+1}$ is subtracted to compensate for the energy storage level built up in the past. The downward distortion ensures that the overall storage quantities offset each other, i.e., $\mathbb{E}\left[ \left. \sum_{t\in T}d^{[t]}\right\vert X^{[t]}\right] =0$.
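As a quick illustration of Proposition \[pro\_central\], the sketch below (function name ours) evaluates $\mathbb{E}_{t}d^{[t]}$ from the realized history and the current estimate of the price shock; it is a direct transcription of the closed form, not an implementation from the paper.

```python
import numpy as np

def optimal_quantity(t, L, beta, past_d, eps, gamma, delta, eta_hat):
    """E_t d^[t] from Proposition [pro_central].

    beta    : array of market potentials beta^[1..L] (index 0 is period 1)
    past_d  : realized decisions d^[1..t-1]
    eta_hat : current estimate E[eta^[t] | X^[t]]
    """
    horizon = L - t + 1
    base = (beta[t - 1] - beta[t - 1:].sum() / horizon) / (2 * (eps + gamma)) \
           - sum(past_d) / horizon
    discount = sum(delta ** (tau - t) for tau in range(t, L + 1)) / horizon
    response = (1 - discount) / (2 * (eps + gamma))
    return base + response * eta_hat

# Example: period t = 2 of an L = 4 horizon with one realized decision.
beta = np.array([2.0, 1.5, 1.0, 1.0])
print(optimal_quantity(2, 4, beta, past_d=[0.3], eps=1.0, gamma=1.0,
                       delta=0.5, eta_hat=0.1))
```

Decentralized Two-Period Model
------------------------------

In this section, we recover the superscript for $\gamma $ and $\varepsilon $. We also recover the subscript for $x_{i}^{[t]}$ since the storages observe heterogeneous information. To characterize the equilibrium outcome under such a highly asymmetric information structure, we first introduce our solution concept.

**Equilibrium concept**.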
Storage $i$ chooses a storage quantity $d_{i}^{[1]}$ to maximize $\mathbb{E}[\pi _{i}|x_{i}^{[1]}]$, by forming an expectation of the other storages' quantities $\mathbb{E}(d_{j}^{[1]}|x_{i}^{[1]})$, for $\forall j\neq i$. $d_{i}^{[2]}$ is determined thereafter, due to the constraint $d_{i}^{[1]}+d_{i}^{[2]}=0$, $\forall i\in I$. We focus exclusively on the *linear symmetric Bayesian-Nash equilibrium*, i.e., $d_{i}^{[1]}=A+Cx_{i}^{[1]},$ for some constants $A$ and $C$. We can interpret $A$ as the *base storage quantity* and $C$ as the *response factor* with respect to the forecast $x_{i}^{[1]}$.

\[prop\_two period\] For a two-period model under private market forecasting, the storage quantity in the linear symmetric Bayesian-Nash equilibrium is $d_{i}^{[1]}=A+Cx_{i}^{[1]}$ for every storage, wherein $$A=\frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) +(n+1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) },$$ $$C=\frac{(1-\delta )\rho }{\left[ \begin{array}{c} (n-1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \rho + \\ 2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}+\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) (\alpha +\rho )\end{array}\right] }.$$

In period $t=1$, the base storage quantity $A$ is positive (selling energy) if and only if $\beta ^{\lbrack 1]}>\beta ^{\lbrack 2]}$, i.e., the energy price decreases at $t=2$. Conversely, the storage buys energy ($A<0$) when it can be sold at a higher price at $t=2$ ($\beta ^{\lbrack 1]}<\beta ^{\lbrack 2]}$). Consistent with our results in the single storage model, the base selling quantity decreases in $\sum_{t\in T}\varepsilon ^{\lbrack t]}$ (the cost coefficients with respect to the depth of discharge) and in $\sum_{t\in T}\gamma ^{\lbrack t]}$ (the aggregate energy price elasticity). Furthermore, when $A>0$, it decreases in the number of storages. This is because the market competition is more intense as the number of Cournot competitors increases, and consequently, the price of energy decreases. The converse is true when $A<0$. The reaction to the private forecast, $C$, is more aggressive when its precision $\rho $ increases, as a storage relies more on an accurate market forecast. The reactions to forecasts are less aggressive when the intertemporal correlation $\delta $ increases. In the extreme case where $\delta =1$, the storage does not respond to forecasts. Intuitively, this is because any action (either buying or selling) in response to a market forecast at $t=1$ will be offset by a reverse operation (under the energy balance constraint $\sum_{t\in T}d_{i}^{[t]}=0$) at $t=2$ when the market condition remains the same.

\[prop:twoperiodpayoff\] When there are a large number of storages, every storage's equilibrium payoff converges asymptotically to $$\lim_{n\rightarrow \infty }\mathbb{E}[\pi _{i}]\rightarrow \sum_{t=1,2}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) \left[ \begin{array}{c} \left( \frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}+ \\ \left( \frac{1-\delta }{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}\rho ^{-1}\end{array}\right] \cdot n^{-2}.$$

When there are a large number of storages, $\lim_{n\rightarrow \infty }\mathbb{E}[\pi _{i}]$ decreases in $\rho $. This result demonstrates the negative economic value of a private forecast and is interpreted as a *competition effect*.
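These closed forms are straightforward to evaluate numerically. The sketch below (function names ours) transcribes the coefficients of Proposition \[prop\_two period\] and the limiting payoff of Proposition \[prop:twoperiodpayoff\]:

```python
def two_period_equilibrium(beta1, beta2, eps1, eps2, g1, g2, n, delta, alpha, rho):
    """Coefficients of d_i^[1] = A + C * x_i^[1] under private forecasting."""
    A = (beta1 - beta2) / (2 * (eps1 + eps2) + (n + 1) * (g1 + g2))
    C = (1 - delta) * rho / ((n - 1) * (g1 + g2) * rho
                             + 2 * (eps1 + eps2 + g1 + g2) * (alpha + rho))
    return A, C

def payoff_private_asymptotic(beta1, beta2, eps1, eps2, g1, g2, n, delta, rho):
    """Large-n approximation of a storage's expected payoff."""
    g = g1 + g2
    return (eps1 + eps2 + g1 + g2) * (
        ((beta1 - beta2) / g) ** 2 + ((1 - delta) / g) ** 2 / rho
    ) / n ** 2

# Raising the private precision rho lowers the large-n payoff, which is
# the competition effect discussed in the surrounding text.
print(payoff_private_asymptotic(1, 0, 1, 1, 1, 1, 50, 0.5, rho=1.0))
print(payoff_private_asymptotic(1, 0, 1, 1, 1, 1, 50, 0.5, rho=10.0))
```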
Although storages' private forecasts are independent, they lead to similar reactions to a market price shock. When the number of storages is large, the purchase or selling quantity responses are exaggerated, and over-precision in forecasts can lead to even lower payoffs. The inverse-square decay with respect to the number of storages demonstrates the impact of competition intensity. We complement the analysis with the following numerical example, wherein $\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}=1$, $\varepsilon ^{\lbrack 1]}=\varepsilon ^{\lbrack 2]}=\gamma ^{\lbrack 1]}=\gamma ^{\lbrack 2]}=1$, and the payoff is calculated with a finite number of storages. We summarize the results in Figure \[fig:sensitivity4\]. A storage's payoff decreases in the autoregressive parameter $\delta$. As a storage's reactions to forecasts are less responsive when the intertemporal correlation ($\delta$) increases, the economic value of market information also decreases. For a similar reason, a storage's payoff also decreases in the prior precision parameter $\alpha$, as the storage's reactions to forecasts are also less responsive in this case. In addition, the inverse-square decay with respect to the number of storages confirms our result in the asymptotic analysis. Finally, a storage's payoff first increases then decreases in its private information precision. When private information is scarce, the economic value of private information increases in its precision as it mitigates uncertainty. However, when the information precision further increases, the competition effect dominates and the payoff decreases.

Operational Policy Analysis {#s-policy}
===========================

Public Forecast Provision
-------------------------

Suppose that instead of private forecasts, all the storages receive a public forecast $x_{0}^{[t]}=\eta ^{\lbrack t]}+\xi _{0}^{[t]}$, where $\xi _{0}^{[t]}\sim $ $N(0,\sigma ^{-1})$. In this section, we analyze the possibility for a public forecast to coordinate storages' actions. Such a forecast could be provided by the aggregator. Following a similar analysis as in the private forecasting model, we assume that $d_{i}^{[1]}=A+Bx_{0}^{[1]}$, where $B$ is the response factor towards the public forecast.

\[prop\_public\] For a two-period model under public market forecasting, each storage's equilibrium storage quantity in the linear symmetric Bayesian-Nash equilibrium is $d_{i}^{[1]}=A+Bx_{0}^{[1]}$, wherein $$A=\frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) +(n+1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) },$$$$B=\frac{(1-\delta )\sigma /(\alpha +\sigma )}{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) +(n+1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }.$$The equilibrium payoff satisfies $$\lim_{n\rightarrow \infty }\mathbb{E}[\pi _{i}]\rightarrow \sum_{t=1,2}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) \left[ \begin{array}{c} \left( \frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2} \\ +\left( \frac{1-\delta }{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}\cdot \frac{\sigma }{\left( \alpha +\sigma \right) ^{2}}\end{array}\right] \cdot n^{-2}.$$

Similar to the private forecasting model, the reaction to the public forecast, $B$, is more aggressive when its precision $\sigma $ increases.
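Before interpreting this expression, a quick numerical check (function name ours) of the limiting payoff is useful; only the information term $\sigma /(\alpha +\sigma )^{2}$ depends on the public precision, and sweeping $\sigma$ locates its peak:

```python
import numpy as np

def payoff_public_asymptotic(beta1, beta2, eps1, eps2, g1, g2, n, delta, alpha, sigma):
    """Large-n payoff under a public forecast of precision sigma."""
    g = g1 + g2
    info = ((1 - delta) / g) ** 2 * sigma / (alpha + sigma) ** 2
    return (eps1 + eps2 + g1 + g2) * (((beta1 - beta2) / g) ** 2 + info) / n ** 2

alpha = 2.0
sigmas = np.linspace(0.1, 10.0, 500)
payoffs = [payoff_public_asymptotic(1, 0, 1, 1, 1, 1, 50, 0.5, alpha, s)
           for s in sigmas]
print(sigmas[int(np.argmax(payoffs))])  # peaks near sigma = alpha = 2.0
```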
$\lim_{n\rightarrow \infty }\mathbb{E}[\pi _{i}]$ is pseudo-concave in the public forecast precision $\sigma $, and thus reaches its global maximum when $\sigma =\alpha $. Notice that, when $\sigma >\alpha $, $\lim_{n\rightarrow \infty }\mathbb{E}[\pi _{i}]$ decreases in $\sigma $, which demonstrates the negative economic value of a public forecast. We interpret this as a *congestion effect*: Intuitively, when $\sigma >\alpha $, over-reaction to a public forecast leads to either too much purchasing (when the forecast is favorable) or too much selling (when it is unfavorable) from all storages. The public forecast becomes a herding signal. It is beneficial to maintain a certain exclusiveness of a public forecast. For a given information provision, a storage's payoff suffers inverse-square decay in the number of storages, as the economic value of a public forecast is diluted when more storages respond to it. We complement the analysis with the following numerical example summarized in Figure \[fig:sensitivity2\], wherein $\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}=1$, $\varepsilon ^{\lbrack 1]}=\varepsilon ^{\lbrack 2]}=\gamma ^{\lbrack 1]}=\gamma ^{\lbrack 2]}=1$, and the payoff is calculated with a finite number of storages. Again, a storage's payoff decays as the inverse square of the number of storages, as shown in the analytical result. A storage's payoff first increases (due to the positive economic value) then decreases (due to the congestion effect) in the precision of the public market forecast.

Encourage Information Sharing
-----------------------------

Suppose that the storages pool their private forecasts $x_{i}^{[1]}$ together. In this case, it can be checked that it is equivalent for them to observe a public forecast $x_{0}^{[1]}$ with precision $\sigma =n\rho $: $$\mathbb{E}[\eta ^{\lbrack 1]}|x_{1}^{[1]},...,x_{n}^{[1]}]=\frac{\rho }{\alpha +n\rho }\sum_{i\in I}x_{i}^{[1]},\quad \mathbb{E}[\eta ^{\lbrack 1]}|x_{0}^{[1]}]=\frac{\sigma }{\alpha +\sigma }x_{0}^{[1]},$$ and these two estimators are stochastically equivalent, i.e., both $N(0,\frac{n\rho }{(\alpha +n\rho )^{2}})$. Therefore, we can calculate the corresponding payoffs under pooled private forecasts: $$\begin{aligned} \lim_{n\rightarrow \infty }\mathbb{E}[\pi _{i};\sigma &=&n\rho ]\rightarrow \sum_{t=1,2}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) \cdot \left[ \begin{array}{c} \left( \frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2} \\ +\left( \frac{1-\delta }{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}\cdot \left( n\rho \right) ^{-1}\end{array}\right] \cdot n^{-2} \\ &<&\sum_{t=1,2}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) \left[ \left( \frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}+\left( \frac{1-\delta }{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}\rho ^{-1}\right] \cdot n^{-2}.\end{aligned}$$ By comparing this payoff with that under private forecasts, we find that the economic value of the forecast under information sharing is (by an order of magnitude in the number of storages) lower than that under private forecasts. There will be no incentive for the storages to share information with each other. From this analysis, we find that communication among the storages fails to achieve a coordinated effort to increase market efficiency; a numerical comparison follows.
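The comparison can be checked directly; the sketch below (reusing `payoff_private_asymptotic` from the earlier two-period sketch) evaluates the pooled case $\sigma =n\rho$:

```python
def payoff_sharing_asymptotic(beta1, beta2, eps1, eps2, g1, g2, n, delta, rho):
    """Large-n payoff when all n private forecasts are pooled (sigma = n*rho)."""
    g = g1 + g2
    return (eps1 + eps2 + g1 + g2) * (
        ((beta1 - beta2) / g) ** 2 + ((1 - delta) / g) ** 2 / (n * rho)
    ) / n ** 2

# Pooling shrinks the information term by a factor of n, so sharing is
# never beneficial in this large-n regime.
for n in (5, 50, 500):
    shared = payoff_sharing_asymptotic(1, 0, 1, 1, 1, 1, n, 0.5, 1.0)
    private = payoff_private_asymptotic(1, 0, 1, 1, 1, 1, n, 0.5, 1.0)
    print(n, shared < private)  # True for every n
```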
To maintain the exclusiveness of their private forecasts, the decentralized storages should not be encouraged to share market information in this regime. This result is confirmed by a numerical analysis summarized in Figure \[fig:sensitivity3\] when there are a large number of storages. However, when the number of storages is small, it is possible that every storage is better off by information sharing. This regime is possible when each private forecast is extremely fuzzy, so that pooling them together amplifies the market signal.

Targeted Information Release
----------------------------

Now that we know the exclusiveness of a market forecast is important, we analyze an alternative policy intervention through a public information channel. Suppose that the aggregator/government offers a public forecast only to a subset of storages $J$ ($|J|=m\leq n$). For informed storages (ones who receive the public forecast), $d_{i}^{[1]}=A+Bx_{0}^{[1]}$, $\forall i$ $\in J$. For uninformed storages, $d_{i}^{[1]}=C$, $\forall i$ $\in I-J$. $A$, $B$ and $C$ are all unknown constant coefficients.

\[prop\_target\] For a two-period model under targeted information release, storages' equilibrium storage quantities in the linear Bayesian-Nash equilibrium are $d_{i}^{[1]}=A+Bx_{0}^{[1]}$, $\forall i$ $\in J$, and $d_{i}^{[1]}=C$, $\forall i$ $\in I-J$, wherein $$A=C=\frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) +(n+1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) },$$$$B=\frac{(1-\delta )\sigma /(\alpha +\sigma )}{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) +(m+1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }.$$The storages' aggregate payoff $\sum_{i\in I}\mathbb{E}[\pi _{i}]$ is maximized when the population of information recipients is $$m=1+\frac{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) }{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}.$$

In this case, the storages' payoffs are stratified, due to their asymmetric informational status. The fact that an interior solution (of $m$) exists suggests a trade-off between the economic value of a public forecast in coordinating the storages' actions and the congestion effect due to the lack of exclusiveness of such information dissemination.

Model Generalizations {#s-extend}
=====================

Multi-Period Model
------------------

In this section, we demonstrate that the model can be extended in multiple directions. A multi-period version of this problem has to be solved recursively using backward induction while unfolding the information set throughout the process. Instead, we analyze a relaxed problem. In this case, equilibrium characterization requires solving the following optimization problems: $$\max_{d_{i}^{[t]},t\in T}\sum_{t\in T}\mathbb{E}\left[ \left. \begin{array}{c} P^{[t]}(D^{[t]})\cdot d_{i}^{[t]} \\ -\varepsilon ^{\lbrack t]}\cdot \left( d_{i}^{[t]}\right) ^{2}\end{array}\right\vert X_{i}^{[t]}\right] ,$$ subject to $$\mathbb{E}\left[ \sum_{t\in T}d_{i}^{[t]}\right] =0.$$$X_{i}^{[t]}=\left\{ x_{0}^{[1]},x_{i}^{[1]},\eta ^{\lbrack 1]},...,\eta ^{\lbrack t-1]},x_{0}^{[t]},x_{i}^{[t]}\right\} $ indicates the corresponding information set. Notice that we simultaneously incorporate the private forecasts $\{x_{i}^{[t]}\}$'s and a public forecast $x_{0}^{[t]}$. The storage quantity will be $d_{i}^{[t]}=A^{[t]}+B^{[t]}x_{0}^{[t]}+C^{[t]}x_{i}^{[t]}$ for some unknown coefficients $A^{[t]}$, $B^{[t]}$ and $C^{[t]}$.
This is a relaxation because an exact solution requires that $\mathbb{E}\left[ \left. \sum_{t\in T}\left( d_{i}^{[t]}\right) ^{\ast }\right\vert X_{i}^{[t]}\right] =0$ for any sample path generated by the $\{X_{i}^{[t]}\}$'s.

\[prop\_multiple period\] For a multi-period model under both private and public market forecasting, the storage quantities in the linear symmetric Bayesian-Nash equilibrium can be approximated by $d_{i}^{[t]}=A^{[t]}+B^{[t]}x_{0}^{[t]}+C^{[t]}x_{i}^{[t]}$, wherein $$A^{[t]}=\frac{\beta ^{\lbrack t]}-\lambda }{2\varepsilon ^{\lbrack t]}+(n+1)\gamma ^{\lbrack t]}},$$ $$C^{[t]}=\frac{\rho }{\left[ 2\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) (\alpha +\sigma +\rho )+(n-1)\gamma ^{\lbrack t]}\rho \right] },$$ $$B^{[t]}=\frac{\sigma -(n-1)\gamma ^{\lbrack t]}\sigma C}{\left[ (n+1)\gamma ^{\lbrack t]}+2\varepsilon ^{\lbrack t]}\right] (\alpha +\sigma +\rho )},$$ and the Lagrangian multiplier $$\lambda =\frac{\sum_{t\in T}\beta ^{\lbrack t]}\prod\limits_{\tau \neq t}\left[ 2\varepsilon ^{\lbrack \tau ]}+(n+1)\gamma ^{\lbrack \tau ]}\right] }{\sum_{t\in T}\prod\limits_{\tau \neq t}\left[ 2\varepsilon ^{\lbrack \tau ]}+(n+1)\gamma ^{\lbrack \tau ]}\right] }.$$

The downside of this analysis is that we cannot guarantee that $\mathbb{E}\left[ \left. \sum_{t\in T}\left( d_{i}^{[t]}\right) ^{\ast }\right\vert X_{i}^{[t]}\right] =0$. Essentially, the storages reduce the baseline quantity $A^{[t]}$ by its time-average so that the aggregate buy/sell quantities sum up to zero in the statistical sense. This approximation of the storages' actions ignores the intertemporal correlation of uncertainties. Future research is needed for an exact analysis of the full-fledged model.

Heterogeneous Storages
----------------------

We model storages with heterogeneous physical attributes and information status by assuming that the cost of utilizing storage $i$ is $c_{i}(d)=\varepsilon _{i}^{[t]}\cdot d^{2}$, and that storage $i$ receives a private forecast $x_{i}^{[t]}$ with precision $\rho _{i}$. To illustrate the major points, we extend the two-period model.

\[prop\_Hetero\] For a two-period model under both private and public market forecasting, the heterogeneous storage quantities in the linear Bayesian-Nash equilibrium take the form $d_{i}^{[1]}=A_{i}+B_{i}x_{0}^{[1]}+C_{i}x_{i}^{[1]}$, wherein $A_{i}$, $B_{i}$, and $C_{i}$ are given in the Appendix.

As in the basic model with homogeneous storages, each storage holds its own forecast, but with varying precision: $$\mathbb{E}[\eta ^{\lbrack 1]}|x_{0}^{[1]},x_{i}^{[1]}]=\frac{\sigma }{\alpha +\sigma +\rho _{i}}x_{0}^{[1]}+\frac{\rho _{i}}{\alpha +\sigma +\rho _{i}}x_{i}^{[1]}.$$The interesting new feature in this extension is that each storage ($i$) needs to conjecture another storage's ($j$) quantity decision via its own information set: $$\mathbb{E}[d_{j}^{[1]}|x_{0}^{[1]},x_{i}^{[1]}]=A_{j}+B_{j}x_{0}^{[1]}+C_{j}\mathbb{E}[x_{j}^{[1]}|x_{0}^{[1]},x_{i}^{[1]}].$$ To obtain a correct conjecture, storage $i$ needs to estimate storage $j$'s private forecast: $$\mathbb{E}[x_{j}^{[1]}|x_{0}^{[1]},x_{i}^{[1]}]=\mathbb{E}[\eta ^{\lbrack 1]}+\xi _{j}^{[1]}|x_{0}^{[1]},x_{i}^{[1]}]=\mathbb{E}[\eta ^{\lbrack 1]}|x_{0}^{[1]},x_{i}^{[1]}].$$ Following a similar procedure as in the basic homogeneous model, we obtain the coefficients summarized in the Appendix. The dependency of the value of both public and private forecasts on their precisions $\sigma $ and $\rho _{i}$ is highly nonlinear.
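For concreteness, the posterior estimate above is just a precision-weighted average; a two-line sketch (function name ours):

```python
def posterior_mean_eta(alpha, sigma, rho_i, x0, xi):
    """Bayes estimate of eta^[1] from the public signal x0 (precision sigma)
    and storage i's private signal xi (precision rho_i), prior precision alpha."""
    return (sigma * x0 + rho_i * xi) / (alpha + sigma + rho_i)
```

Since $\xi _{j}^{[1]}$ is mean-zero noise independent of storage $i$'s information, the same function also returns storage $i$'s best estimate of storage $j$'s private forecast.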
Due to the complicated payoff functional forms of this general model, we start with the homogeneous model for a clear presentation of results. It can be checked that some of our findings and intuitions remain robust under this generalization. For example, the base storage quantity $A_{i}$ is positive (selling energy) if and only if $\beta ^{\lbrack 1]}>\beta ^{\lbrack 2]}$, i.e., the energy price decreases at $t=2$. The value of a private forecast ($C_{i}^{2}\mathbb{E}[x_{i}^{[1]}]^{2}$) is proportional to $\left[ \frac{(1-\delta )}{\alpha +\sigma +\rho _{i}}\right] ^{2}\rho _{i}$, and is thus decreasing in the intertemporal correlation $\delta $, as the reactions to forecasts are less aggressive when $\delta $ increases.

Conclusion {#s-con}
==========

In this paper, we propose stylized models of decentralized energy storage planning under private and public market forecasting, when energy prices are both uncertain and variable over time. We derive the optimal buying or selling quantities for storages in a competitive environment with strategic interactions. Roughly speaking, a foresighted storage will plan to buy energy when its price is low and sell when the price is high. The value of a private forecast decreases in the intertemporal correlation of the market price shock. We demonstrate the potentially negative economic value of a private forecast, due to the *competition effect*: When there are a large number of storages, the purchase or selling quantity responses are exaggerated, and over-precision in forecasts can lead to even lower payoffs. These fundamental observations remain robust when we generalize the model to multiple periods or heterogeneous storages. We also examine several information management policies to coordinate storages' actions and improve their profitability. Firstly, we demonstrate the potentially negative economic value of a public forecast, due to the *congestion effect*: A precise public forecast leads to herding behavior, and over-reaction to a public forecast leads to either too much purchasing (when the forecast is favorable) or too much selling (when it is unfavorable) from all storages. Secondly, we find that communication among the storages could fail to achieve a coordinated effort to increase market efficiency. The decentralized storages will not participate in any information sharing program when there are a large number of storages. Thirdly, we find it optimal to release additional information exclusively to a subset of energy storages by targeted information release. Future research is needed for a full-fledged analysis of a multi-period, decentralized, and heterogeneous model. Another direction is to incorporate operational constraints such as energy and/or power limits. Explicit modeling of renewable energy generation would contribute to a holistic understanding of the entire integrated system. Finally, information management research in other energy markets is likely to be promising.

Appendix. Proofs.
=================

**Proof of Proposition \[pro\_central\]**. Introduce a Lagrangian multiplier $\lambda ^{\lbrack t]}$ to relax the conservation constraint. We denote the Lagrangian by $$\begin{aligned} L &=&\sum_{\tau =t}^{\tau =L}l^{[\tau ]} \notag \\ &=&\sum_{\tau =t}^{\tau =L}\mathbb{E}\left[ \left.
\begin{array}{c} \left( \beta ^{\lbrack \tau ]}-\gamma d^{[\tau ]}+\eta ^{\lbrack \tau ]}\right) d^{[\tau ]} \\ -\varepsilon \cdot \left( d^{[\tau ]}\right) ^{2}-\lambda ^{\lbrack t]}d^{[\tau ]}\end{array}\right\vert X^{[t]}\right] -\lambda ^{\lbrack t]}\sum_{\tau =1}^{\tau =t-1}d^{[\tau ]}.\end{aligned}$$

By the first-order condition $\frac{\partial l^{[\tau ]}}{\partial d^{[\tau ]}}=0$, for $\tau =t,...,L$ $\Rightarrow $ $$\mathbb{E}_{t}d^{[\tau ]}=\frac{\beta ^{\lbrack \tau ]}-\lambda ^{\lbrack t]}+\mathbb{E}\left[ \left. \eta ^{\lbrack \tau ]}\right\vert X^{[t]}\right] }{2\left( \varepsilon +\gamma \right) }.$$ Notice that $\mathbb{E}\left[ \left. \eta ^{\lbrack \tau ]}\right\vert X^{[t]}\right] =\delta ^{\tau -t}\mathbb{E}\left[ \left. \eta ^{\lbrack t]}\right\vert X^{[t]}\right] $, for $\tau =t,...,L$, and $\mathbb{E}\left[ \left. \sum_{\tau =t}^{\tau =L}d^{[\tau ]}\right\vert X^{[t]}\right] =-\sum_{\tau =1}^{\tau =t-1}d^{[\tau ]}\Rightarrow $ $$\lambda ^{\lbrack t]}=\frac{\sum_{\tau =t}^{\tau =L}\beta ^{\lbrack \tau ]}+\sum_{\tau =t}^{\tau =L}\delta ^{\tau -t}\mathbb{E}\left[ \left. \eta ^{\lbrack t]}\right\vert X^{[t]}\right] +2\left( \varepsilon +\gamma \right) \sum_{\tau =1}^{\tau =t-1}d^{[\tau ]}}{L-t+1}.$$ Therefore, $$\mathbb{E}_{t}d^{[t]}=\frac{\beta ^{\lbrack t]}-\frac{\sum_{\tau =t}^{\tau =L}\beta ^{\lbrack \tau ]}}{L-t+1}}{2\left( \varepsilon +\gamma \right) }+\frac{\left( 1-\frac{\sum_{\tau =t}^{\tau =L}\delta ^{\tau -t}}{L-t+1}\right) \mathbb{E}\left[ \left. \eta ^{\lbrack t]}\right\vert X^{[t]}\right] }{2\left( \varepsilon +\gamma \right) }-\frac{\sum_{\tau =1}^{\tau =t-1}d^{[\tau ]}}{L-t+1}.$$ $\square $

**Proof of Proposition \[prop\_two period\]**. The payoff can be simplified by plugging in $d_{i}^{[2]}=-d_{i}^{[1]}$. To derive the equilibrium storage quantities, we set $\frac{\partial \mathbb{E}[\pi _{i}|x_{i}^{[1]}]}{\partial d_{i}^{[1]}}=0$: $$\left[ \begin{array}{c} \beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}-\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \sum_{j\neq i}\mathbb{E}[d_{j}^{[1]}|x_{i}^{[1]}] \\ +\mathbb{E}[\eta ^{\lbrack 1]}|x_{i}^{[1]}]-\mathbb{E}[\eta ^{\lbrack 2]}|x_{i}^{[1]}] \\ -2\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}+\varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) \cdot d_{i}^{[1]}\end{array}\right] =0.$$ Notice that (there is no public forecast in this model) $$\mathbb{E}[d_{j}^{[1]}|x_{i}^{[1]}]=A+C\mathbb{E}[x_{j}^{[1]}|x_{i}^{[1]}],$$ $$\begin{aligned} \mathbb{E}[x_{j}^{[1]}|x_{i}^{[1]}] &=&\mathbb{E}[\eta ^{\lbrack 1]}+\xi _{j}^{[1]}|x_{i}^{[1]}] \\ &=&\mathbb{E}[\eta ^{\lbrack 1]}|x_{i}^{[1]}],\end{aligned}$$ $$\begin{aligned} \mathbb{E}[\eta ^{\lbrack 2]}|x_{i}^{[1]}] &=&\mathbb{E}[\delta \eta ^{\lbrack 1]}+\epsilon _{1}|x_{i}^{[1]}] \\ &=&\delta \mathbb{E}[\eta ^{\lbrack 1]}|x_{i}^{[1]}],\end{aligned}$$ $$\mathbb{E}[\eta ^{\lbrack 1]}|x_{i}^{[1]}]=\frac{\rho }{\alpha +\rho }x_{i}^{[1]}.$$ By matching the coefficients with respect to $x_{i}^{[1]}$, we have $$A=\frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) +(n+1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) },$$ $$C=\frac{(1-\delta )\rho }{\left[ \begin{array}{c} (n-1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \rho + \\ 2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}+\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) (\alpha +\rho )\end{array}\right] }.$$ $\square$

**Proof of Proposition \[prop:twoperiodpayoff\]**.
The payoff can be calculated through $$\begin{aligned} \mathbb{E}[\pi _{i}] &=&\mathbb{E}\left[ \mathbb{E}[\pi _{i}|x_{i}^{[1]}]\right] \notag \\ &=&\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}+\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \notag \\ &&\cdot \left( A^{2}+C^{2}\mathbb{E}[x_{i}^{[1]}]^{2}\right) .\end{aligned}$$Notice that storage $i$'s payoff $\mathbb{E}[\pi _{i}]=\sum_{t\in T}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) A^{2}$ when there is no information available. The additional payoff proportional to $C^{2}\mathbb{E}[x_{i}^{[1]}]^{2}$ corresponds to the economic value of the private forecast. $$\begin{aligned} \lim_{n\rightarrow \infty }\mathbb{E}[\pi _{i}] &=&\lim_{n\rightarrow \infty }\sum_{t=1,2}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) \left[ \begin{array}{c} \frac{\left( \beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}\right) ^{2}}{(n+1)^{2}\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) ^{2}} \\ +\frac{(1-\delta )^{2}}{(n-1)^{2}\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) ^{2}\rho }\end{array}\right] \notag \\ &=&\lim_{n\rightarrow \infty }\sum_{t=1,2}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) \left[ \begin{array}{c} \left( \frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}+ \\ \left( \frac{1-\delta }{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}\rho ^{-1}\end{array}\right] \cdot n^{-2}.\end{aligned}$$ $\square $

**Proof of Proposition \[prop\_public\]**. To derive the equilibrium storage quantities, we set $\frac{\partial \mathbb{E}[\pi _{i}|x_{0}^{[1]}]}{\partial d_{i}^{[1]}}=0$: $$\left[ \begin{array}{c} \beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}-\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \sum_{j\neq i}\mathbb{E}[d_{j}^{[1]}|x_{0}^{[1]}] \\ +\mathbb{E}[\eta ^{\lbrack 1]}|x_{0}^{[1]}]-\mathbb{E}[\eta ^{\lbrack 2]}|x_{0}^{[1]}] \\ -2\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}+\varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) \cdot d_{i}^{[1]}\end{array}\right] =0.$$ Notice that $\mathbb{E}[d_{j}^{[1]}|x_{0}^{[1]}]=A+Bx_{0}^{[1]},$ since $x_{0}^{[1]}$ is common knowledge.
$\mathbb{E}[\eta ^{\lbrack 1]}|x_{0}^{[1]}]=\frac{\sigma }{\alpha +\sigma }x_{0}^{[1]}$, and $$\begin{aligned} \mathbb{E}[\eta ^{\lbrack 2]}|x_{0}^{[1]}] &=&\mathbb{E}[\delta \eta ^{\lbrack 1]}+\epsilon _{1}|x_{0}^{[1]}] \\ &=&\delta \mathbb{E}[\eta ^{\lbrack 1]}|x_{0}^{[1]}].\end{aligned}$$ By matching the coefficients with respect to $x_{0}^{[1]}$, we have $$A=\frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) +(n+1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) },$$ $$B=\frac{(1-\delta )\sigma /(\alpha +\sigma )}{\left[ \begin{array}{c} 2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}+\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \\ +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) (n-1)\end{array}\right] }.$$ The corresponding payoff can be calculated as follows: $$\mathbb{E}[\pi _{i}]=\sum_{t=1,2}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right)$$ $$\cdot \left\{ \begin{array}{c} \left[ \frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{\sum_{t=1,2}2\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) +(n-1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }\right] ^{2} \\ +\left[ \frac{(1-\delta )}{\sum_{t=1,2}2\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) (n-1)}\right] ^{2}\cdot \frac{\sigma }{\left( \alpha +\sigma \right) ^{2}}\end{array}\right\} .$$ $$\lim_{n\rightarrow \infty }\mathbb{E}[\pi _{i}]\rightarrow \sum_{t=1,2}\left( \varepsilon ^{\lbrack t]}+\gamma ^{\lbrack t]}\right) \left[ \begin{array}{c} \left( \frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2} \\ +\left( \frac{1-\delta }{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}\right) ^{2}\cdot \frac{\sigma }{\left( \alpha +\sigma \right) ^{2}}\end{array}\right] \cdot n^{-2}.$$ $\square $

**Proof of Proposition \[prop\_target\]**. To solve for equilibrium storage quantities, we set $\frac{\partial \mathbb{E}[\pi _{i}|x_{0}^{[1]}]}{\partial d_{i}^{[1]}}=0$ for $\forall i$ $\in J$ and $\frac{\partial \pi _{i}}{\partial d_{i}^{[1]}}=0$ for $\forall i$ $\in I-J$, separately. For $\forall i\in J,$ $$\begin{array}{c} \beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}-\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \left[ \begin{array}{c} (n-m)C+ \\ (m-1)\left( A+Bx_{0}^{[1]}\right)\end{array}\right] \\ +\frac{(1-\delta )\sigma }{\alpha +\sigma }x_{0}^{[1]}-2\left( \begin{array}{c} \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]} \\ +\varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\end{array}\right) \cdot \left( A+Bx_{0}^{[1]}\right)\end{array}=0,$$ whereas for $\forall i\in I-J,$ $$\begin{array}{c} \beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}-\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \left[ \begin{array}{c} mA+ \\ (n-m-1)C\end{array}\right] \\ -2\left( \begin{array}{c} \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]} \\ +\varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\end{array}\right) \cdot C\end{array}=0.$$ Matching coefficients with respect to $x_{0}^{[1]}$, we can obtain $A$, $B$ and $C$ following a similar procedure as before.
We measure the economic efficiency by the aggregate payoff: $$\sum_{i\in I}\mathbb{E}[\pi _{i}]=\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}+\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right)$$ $$\cdot \left( nA^{2}+\frac{(1-\delta )^{2}m}{\left[ \begin{array}{c} 2\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}+\varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) \\ +(m-1)\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right)\end{array}\right] ^{2}}\frac{\sigma }{\left( \alpha +\sigma \right) ^{2}}\right) .$$ It can be checked that the storages' aggregate payoff $\sum_{i\in I}\mathbb{E}[\pi _{i}]$ is maximized when $$m=1+\frac{2\left( \varepsilon ^{\lbrack 1]}+\varepsilon ^{\lbrack 2]}\right) }{\gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}}.$$$\square $

**Proof of Proposition \[prop\_multiple period\]**. The payoff from storage $i$ under a Lagrangian relaxation can be expressed as $$L_{i}\left( d_{i}^{[t]},t\in T\right) =\sum_{t\in T}l_{i}^{[t]}\left( d_{i}^{[t]}\right)$$ $$\begin{aligned} &=&\sum_{t\in T}\mathbb{E}\left[ \left. \begin{array}{c} P^{[t]}(D^{[t]})\cdot d_{i}^{[t]} \\ -\varepsilon ^{\lbrack t]}\cdot \left( d_{i}^{[t]}\right) ^{2}\end{array}\right\vert X_{i}^{[t]}\right] -\lambda ^{\lbrack t]}\mathbb{E}\left[ \left. \sum_{t\in T}d_{i}^{[t]}\right\vert X_{i}^{[t]}\right] \notag \\ &=&\sum_{t\in T}\mathbb{E}\left[ \left. \begin{array}{c} P^{[t]}(D^{[t]})\cdot d_{i}^{[t]} \\ -\varepsilon ^{\lbrack t]}\cdot \left( d_{i}^{[t]}\right) ^{2}-\lambda ^{\lbrack t]}d_{i}^{[t]}\end{array}\right\vert X_{i}^{[t]}\right] ,\end{aligned}$$ wherein $\lambda ^{\lbrack t]}$ is a Lagrangian multiplier to relax the constraint that $\mathbb{E}\left[ \left. \sum_{t\in T}d_{i}^{[t]}\right\vert X_{i}^{[t]}\right] =0$. To solve for equilibrium storage quantities, we set $\frac{\partial l_{i}^{[t]}\left( d_{i}^{[t]}\right) }{\partial d_{i}^{[t]}}=0 $: $$\left[ \begin{array}{c} \beta ^{\lbrack t]}-\gamma ^{\lbrack t]}\sum_{j\neq i}\mathbb{E}[d_{j}^{[t]}|X_{i}^{[t]}] \\ +\mathbb{E}[\eta ^{\lbrack t]}|X_{i}^{[t]}]-2\left( \gamma ^{\lbrack t]}+\varepsilon ^{\lbrack t]}\right) \cdot d_{i}^{[t]}-\lambda ^{\lbrack t]}\end{array}\right] =0.$$ By $\mathbb{E}\left[ \sum_{t\in T}\left( d_{i}^{[t]}\right) ^{\ast }\right] =0$, we use $\lambda $ as a static approximate solution instead of $\lambda ^{\lbrack t]}$. $\square $

**Proof of Proposition \[prop\_Hetero\]**. Again, the first-order condition for payoff maximization requires $$\left[ \begin{array}{c} \beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}-\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \sum_{j\neq i}\mathbb{E}[d_{j}^{[1]}|x_{0}^{[1]},x_{i}^{[1]}] \\ +\mathbb{E}[\eta ^{\lbrack 1]}|x_{0}^{[1]},x_{i}^{[1]}]-\mathbb{E}[\eta ^{\lbrack 2]}|x_{0}^{[1]},x_{i}^{[1]}] \\ -2\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}+\varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) \cdot d_{i}^{[1]}\end{array}\right] =0.$$ The unknown coefficients in the equilibrium buying or selling quantities are summarized as follows.
$$\begin{aligned} A_{i} &=&\frac{\beta ^{\lbrack 1]}-\beta ^{\lbrack 2]}}{2\left( \varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) } \notag \\ &&\cdot \left[ \sum_{i\in I}\frac{\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }{2\left( \varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }+1\right] ^{-1}.\end{aligned}$$ $$\begin{aligned} C_{i} &=&-\frac{\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \frac{(1-\delta )\rho _{i}}{\alpha +\sigma +\rho _{i}}}{\left[ 2\left( \begin{array}{c} \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]} \\ +\varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\end{array}\right) -\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \frac{\rho _{i}}{\alpha +\sigma +\rho _{i}}\right] } \notag \\ &&\cdot \frac{\sum_{i\in I}\frac{\frac{\rho _{i}}{\alpha +\sigma +\rho _{i}}}{\left[ 2\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}+\varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) -\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \frac{\rho _{i}}{\alpha +\sigma +\rho _{i}}\right] }}{1+\sum_{i\in I}\frac{\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \frac{\rho _{i}}{\alpha +\sigma +\rho _{i}}}{\left[ 2\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}+\varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) -\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \frac{\rho _{i}}{\alpha +\sigma +\rho _{i}}\right] }} \notag \\ &&+\frac{\frac{(1-\delta )\rho _{i}}{\alpha +\sigma +\rho _{i}}}{\left[ 2\left( \begin{array}{c} \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]} \\ +\varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\end{array}\right) -\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \frac{\rho _{i}}{\alpha +\sigma +\rho _{i}}\right] }.\end{aligned}$$ $$\begin{aligned} B_{i} &=&-\frac{\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \left( \sum_{i\in I}B_{i}+\frac{\sigma }{\alpha +\sigma +\rho _{i}}\sum_{j\neq i}C_{j}\right) }{\left[ 2\left( \varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \right] } \notag \\ &&+\frac{\frac{(1-\delta )\sigma }{\alpha +\sigma +\rho _{i}}}{\left[ 2\left( \varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \right] },\end{aligned}$$ wherein $$\begin{aligned} \sum_{i\in I}B_{i} &=&-\frac{\sum_{i\in I}\frac{\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) \frac{\sigma }{\alpha +\sigma +\rho _{i}}\sum_{j\neq i}C_{j}}{2\left( \varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }}{\left[ 1+\sum_{i\in I}\frac{\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }{2\left( \varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }\right] } \notag \\ &&+\frac{\sum_{i\in I}\frac{\frac{(1-\delta )\sigma }{\alpha +\sigma +\rho _{i}}}{2\left( \varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }}{\left[ 1+\sum_{i\in I}\frac{\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }{2\left( \varepsilon _{i}^{[1]}+\varepsilon _{i}^{[2]}\right) +\left( \gamma ^{\lbrack 1]}+\gamma ^{\lbrack 2]}\right) }\right] }.\end{aligned}$$ $\square$

[^1]: Q. He is with the Department of Systems Engineering and Engineering Management, University of North Carolina, Charlotte, NC, 28223 USA. Email: [email protected].

[^2]: Y.
Yang is with the Statistics Department, Florida State University, Tallahassee, FL, 32306 USA. Email: [email protected].

[^3]: B. Zhang is with the Electrical Engineering Department, University of Washington, Seattle, WA 98195 USA. Email: [email protected].
[Non-O1, non-O139 Vibrio cholerae bacteremia in a chronic hemodialysis patient]. Non-O1, non-O139 Vibrio cholerae is an infrequent cause of bacteremia, and there are no previous reports of such bacteremia in chronic hemodialysis patients. This work describes the case of a chronic hemodialysis patient who had an episode of septicemia associated with dialysis. Blood cultures were obtained and treatment was begun with vancomycin and ceftazidime. After 6.5 hours of incubation in the Bact/Alert system, there was evidence of curved Gram-negative bacilli that were identified as Vibrio cholerae by conventional biochemical tests, API 20 NE and the VITEK 2 system. The microorganism was sent to the reference laboratory for evaluation of serogroup and virulence factors and was identified as belonging to the non-O1, non-O139 serogroup. The cholera toxin, colonization factor and heat-stable toxin were not detected. The isolate was susceptible to ampicillin, trimethoprim-sulfamethoxazole, ciprofloxacin, tetracycline, ceftazidime and cefotaxime by the disk diffusion method and the VITEK 2 system. The patient received intravenous ceftazidime for a 14-day period and had a favorable outcome.
Hackers have siphoned about $103,000 out of Bitcoin accounts that were protected with an alternative security measure, according to research that tracked six years' worth of transactions. Account-holders used easy-to-remember passwords to protect their accounts instead of the long cryptographic keys normally required. The heists were carried out against almost 900 accounts where the owners used passwords to generate the private encryption keys required to withdraw funds. In many cases, the vulnerable accounts were drained within minutes or seconds of going live.

The electronic wallets were popularly known as "brain wallets" because, the thinking went, Bitcoin funds were stored in users' minds through memorization of a password rather than a 64-character private key that had to be written on paper or stored digitally. For years, brain wallets were promoted as a safer and more user-friendly way to secure Bitcoins and other digital currencies, although Gregory Maxwell, Gavin Andresen, and many other Bitcoin experts had long warned that they were a bad idea. The security concerns were finally proven once and for all last August when Ryan Castellucci, a researcher with security firm White Ops, presented research at the Defcon hacker convention that showed how easy it was to attack brain wallets at scale. Brain wallets used no cryptographic salt and passed plaintext passwords through a single hash iteration (in this case, the SHA256 function), a shortcoming that made it possible for attackers to crack large numbers of brain wallet passwords at once. Worse, a form of the insecurely hashed passwords is stored in the Bitcoin blockchain, providing all the material needed to compromise the accounts.

By contrast, Google, Facebook, and virtually all other security-conscious services protect passwords by storing them in cryptographic form that's been passed through a hash function, typically tens of thousands of times or more, a process known as key stretching that greatly increases the time and resources required by crackers. The services also use cryptographic salt, a measure that requires each hash to be processed separately to prevent the kind of mass cracking Castellucci did. Security-conscious services also go to great lengths to keep password hashes confidential, a secrecy that's not possible with Bitcoin because of the transparency provided by the blockchain.

Brain drain

According to a recently published research paper, the brain wallet vulnerability was known widely enough to have been regularly exploited by real attackers going after real accounts. Over a six-year span that ended last August, attackers used the cracking technique to drain 884 brain wallet accounts of 1,806 bitcoins. Based on the value of each coin at the time the theft took place, the value of the purloined coins was $103,000. "Our results reveal the existence of an active attacker community that rapidly steals funds from vulnerable brain wallets in nearly all cases we identify," the paper authors wrote. "In total, approximately $100K worth of bitcoin has been loaded into brain wallets, with the ten most valuable wallets accounting for over three-quarters of the total value. Many brain wallets are drained within minutes, and while those storing larger values are emptied faster, nearly all wallets are drained within 24 hours."
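To make the weakness concrete, here is a minimal sketch (illustrative only; the passphrase is one of the cracked examples quoted below) of the unsalted, single-iteration derivation brain wallets used, contrasted with a salted, stretched derivation:

```python
import hashlib

# Brain wallet scheme: one unsalted SHA-256 pass over the passphrase yields
# the private key, so every candidate phrase maps to one fixed key that an
# attacker can precompute and test against the public blockchain.
passphrase = "say hello to my little friend"
weak_private_key = hashlib.sha256(passphrase.encode()).hexdigest()

# Contrast: salted, iterated hashing (key stretching). Each user's salt must
# be attacked separately, and every guess costs ~100,000 hash iterations.
salt = b"per-user-random-salt"  # illustrative value
stretched_key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                                    100_000).hex()
```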
The paper, titled "The Bitcoin Brain Drain: A Short Paper on the Use and Abuse of Bitcoin Brain Wallets," is scheduled to be presented later this month at the Financial Cryptography and Data Security 2016 conference. Its publication comes about six months after Brainwallet.org, the most widely used Bitcoin-based brain wallet service, permanently ceased operations. The service voluntarily shut down following the Defcon presentation by Castellucci, who is one of the authors of the most recent paper.

To identify brain wallets and then crack them, the research team compiled 300 billion password candidates taken from more than 20 lists, including the Urban Dictionary, the English language Wikipedia, the seminal plaintext password leak from the RockYou gaming website, and other large online compromises. By collecting words and entire phrases from a wide body of sources, the researchers employed a technique Ars covered in 2013 that allowed them to crack words and phrases many people would have considered to be strong passwords. Cracked passphrases included "say hello to my little friend," "yohohoandabottleofrum," and "dudewheresmycar." The researchers ran each password candidate through the SHA256 function to derive a list of potential private keys for Bitcoin addresses used by brain wallets. They then used a cryptographic operation based on elliptic curves to find the public key corresponding to each potential private key. Since the Bitcoin blockchain contains the public key of every account wallet, it was easy to know when a password guess was used by a real Bitcoin user.

The paper reported that vulnerable accounts were often drained within minutes of going live, and in an interview, Castellucci said that some accounts were liquidated in seconds. Castellucci said he suspects the speed was the result of attackers who used large precomputed tables containing millions or billions of potential passwords. While many of the attackers who drained vulnerable accounts earned paltry sums for their work, the top four drainers netted a total of about $35,000 among them. Meanwhile, the drainer who emptied the most brain wallets—about 100 in all—made $3,219. The thefts were often chronicled in online forums, where participants would report that their Bitcoin wallets had mysteriously been emptied. For a while, people assuming the role of a digital Robin Hood claimed to crack vulnerable wallets, drain them of their contents, and then wait for the victim to publicly complain of the theft on Reddit or various bitcoin forums. The Robin Hood and Little John hackers would then claim to return the funds once the victim proved control of the compromised private key.

While plenty of people publicly warned of the risks of brain wallets over the years, the vulnerability was often dismissed as theoretical by some. Brain wallets are now generally shunned by Bitcoin users, but Castellucci warned that an alternative cryptocurrency known as Ethereum can use a brain wallet scheme that's every bit as weak as the Bitcoin one was. He is withholding details for now in the hopes that Ethereum brain wallets will soon be abandoned.
I know most Americans are sick of these organized caravans rushing our border. Immigration "activists" are aiding these people, who have the audacity to think they can just waltz into our country. Previous attempts have failed, thanks to President Trump. But instead of rushing our ports or climbing walls, they are taking straight lines to the open border. But our POTUS isn't going to let them get away with it.

From MSN: A caravan of almost 2,000 Central American migrants has arrived just outside the Texas-Mexico border with the hopes of crossing into the United States, according to officials… There to meet the influx of migrants was a surge in law enforcement that included multiple sheriff's offices, the U.S. Border Patrol and the Texas Department of Public Safety.

But that's not all. From NBC News: The Pentagon is moving 250 active duty troops to the border town of Eagle Pass, Texas, in advance of the arrival of a new caravan of migrants, according to a statement Wednesday by Defense Department spokesperson Capt. Bill Speaks.

SHARE to thank President Trump and our troops for protecting our border!

It's mind-boggling to think these people keep trying to barge into our country. Gone are the days of small packs trying to sneak in inside trunks of cars. Thanks to the Democrats, they are marching in huge groups! They really think Trump isn't serious about his America First agenda. No country should tolerate this kind of insulting assault on its borders. We have every right to decide who should or should not come into our country. This wouldn't even be an issue if Congress worked with Trump to actually secure our border. It's insanity to think that American leaders refuse to protect American citizens. But this is what happens when you elect corrupt Democrats to powerful positions. President Trump won't allow corruption and broken policy to harm Americans. He is doing whatever it takes to prevent caravans from overrunning our country.

SHARE to thank President Trump and our troops for protecting our border!
Water of Ken

The Water of Ken is a river in the historical county of Kirkcudbrightshire in Galloway, south-west Scotland. It rises on Blacklorg Hill, north-east of Cairnsmore of Carsphairn in the Carsphairn hills, and flows south-westward into the Glenkens valley, passing through Carsfad and Earlstoun lochs, both of which are dammed to supply the Galloway Hydro Electric Scheme. From there, the river flows south, passing St. John's Town of Dalry and New Galloway, before widening to form the 9-mile-long Loch Ken. The Black Water of Dee also enters halfway down the loch, and from Glenlochar, at the south end of the loch, the river continues as the Dee towards Kirkcudbright and the coast.
In motor vehicles, "fields of view" are legally prescribed (required by law) in accordance with the type of the motor vehicle, such as e.g., motorcycles, motor vehicles for transporting passengers, motor vehicles for transporting goods, etc. The fields of view must be provided by one or more "devices for indirect viewing", which is conventionally a mirror, and the fields of view must be visible or viewable by a driver sitting in the driver's seat using the device(s) for indirect viewing. Depending upon the type of the vehicle and, in particular, which areas around the vehicle can be directly seen by the driver, different legal provisions require, depending upon the vehicle type, that certain fields of view can be seen at all times using the device(s) for indirect viewing. Therefore, for commercial vehicles, such as e.g., trucks or delivery vehicles, a primary mirror is currently provided on each of the driver's side and the passenger's side as devices for indirect viewing. Using the primary mirror, the vehicle driver can see a level and horizontal part of the road surface of a certain width, which extends from a stipulated distance behind the vehicle driver's eye point up to the horizon. In addition, a band of lesser width must be visible or viewable for the vehicle driver using this mirror, which band begins at a short distance behind the driver's eye point. The area near the vehicle, which can be viewed using the primary mirror and is legally required to be always visible or viewable, will be designated in the following as the "field of view of the primary mirror". In addition to these two primary mirrors, fields of view provided by wide-angle mirrors are required to be seen on both sides of the commercial vehicle. Each area behind the driver's eye point is viewed using the wide-angle mirrors in a defined length in the longitudinal direction of the vehicle. Although this area is wider than the area viewable by the driver using the primary mirror, it extends only a certain (shorter) length along the side of the vehicle. In accordance with the applicable legal provisions for implementing or providing the required fields of view, commercial vehicles further require a close-proximity mirror. Using the close-proximity mirror, an area lying in the front area next to the driver's cab and an area directly adjacent to the driver's cab are visible or viewable by the driver. Finally, for at least some types of commercial vehicles, one additional field of view is required to be provided, e.g., by a front mirror. Using this additional (e.g., front) mirror, an area directly in front of the commercial vehicle, which area extends in the lateral direction of the commercial vehicle beyond the passenger-side edge of the commercial vehicle, is viewable by the driver. However, in spite of these legally prescribed mirrors and/or devices for indirect viewing, it is scarcely possible and/or very difficult for a vehicle driver to completely and sufficiently maintain in view at all times the areas around a commercial vehicle that are prone to accidents. Moreover, due to the plurality of mirrors, the burden on the vehicle driver to substantially simultaneously maintain all of these mirrors in view increases. Moreover, because the fields of view are provided using mirrors, the mirrors disrupt the smooth flow of air around the vehicle while driving; as a consequence, vehicle drag is increased and thus fuel consumption is increased.
A display device is known from WO 2011/061238 A1 (post-published prior art with respect to the priority date of the present application). A device for monitoring the surroundings of vehicles having at least first and second monitoring devices is described therein. The monitoring devices output display signals to a control device. A display device connected with the control device is capable of depicting the display signals from the monitoring devices in a split-screen mode in at least two sections. The control device is connected with a state-of-motion signaling line so that it causes the at least two display signals from the monitoring devices to be displayed on the display device in the split-screen mode in accordance with the state of motion of the vehicle.

Further, a display device is known from DE 10 2006 020 511 A1. To transmit video signals from at least one camera at the tail of a motor vehicle and/or motor vehicle trailer to at least one display unit in the motor vehicle, in particular a truck, the video signals output by the camera are digitized and encoded, while performing data compression in real time, and these encoded video signals are then impressed on a power supply voltage system and transmitted via the power supply voltage system. The encoded video signals transmitted via the supply voltage system are subsequently filtered out from the power supply voltage system, decoded and/or decompressed, and supplied to the display unit for image rendering.
66,088,122
News Release

US Labor Department provides $27 million to help workers displaced by oil spill in Gulf of Mexico

Funds to help with job training, placement

WASHINGTON — The U.S. Department of Labor has announced a total of $27 million in National Emergency Grant awards to four key states to assist workers along the Gulf Coast who have been displaced as a result of the ongoing Deepwater Horizon oil spill. The states are Alabama, Florida, Louisiana and Mississippi.

"Working families in the Gulf Coast have been dealt a tremendous blow by this oil spill, and they are facing serious long-term challenges. They need and deserve our help now," said U.S. Secretary of Labor Hilda L. Solis. "From the start, we have been actively engaged in ensuring workers tackling the cleanup are kept safe and healthy. These grants will help those still looking for work find jobs that are good, safe and will help the region's economy get back on track."

The funds are being granted to workforce agencies in the four states experiencing economic hardship as a result of wage decline and job loss in the shrimping, fishing, hospitality and tourism industries. Alabama and Mississippi each will receive $5 million. Florida will receive $7 million, and Louisiana will receive $10 million. The resources are being provided to the states to increase their capacity to help workers now while they seek reimbursement from BP for the costs associated with retraining and re-employment assistance. Services funded by the grant money may include skills assessment, basic skills training, individual career counseling and occupational skills training.

Since April, the Labor Department has been involved in the Deepwater Horizon response. The department's Occupational Safety and Health Administration is deployed across the Gulf Coast monitoring the cleanup and ensuring BP provides appropriate worker safety and health training and protections. Learn more at http://www.osha.gov/oilspills.

The department's Employment and Training Administration has created One-Stop Career Centers where workers can receive information on unemployment insurance and job opportunities posted through the public workforce system. Learn more by calling 877-US2-JOBS (877-872-5627) or 877-889-5627 (TTY), or by visiting http://www.careeronestop.org.

Additionally, the department's Wage and Hour Division has been on the ground consulting with multiple agencies and interested parties, and providing materials to ensure cleanup workers are paid the wages they deserve.

National Emergency Grants are part of the secretary of labor's discretionary fund and are awarded based on a state's ability to meet specific guidelines. For more information, visit http://www.doleta.gov/NEG/.
66,088,130
In human rights terms, the word equity represents equality and fairness. It is synonymous with the notion of distributive justice, or the fair distribution of good things within a society, whether material possessions, access to health care or simply survival. Health equity has been defined as the absence of systematic disparities in health (or its social determinants) between more and less advantaged groups [1]. Health indicators such as infant mortality have improved in India over time but continue to differ across gender, caste, wealth, education and geography [2]. For example, the National Family Health Survey 2005–2006 showed that infant mortality was 70 per 1000 live births for the poorest and 29 per 1000 for the least poor; 42 and 62 per 1000 live births for urban and rural areas respectively; and 70 and 26 per 1000 live births for those with illiterate mothers and mothers with 12 or more years of schooling respectively.

In the past few years India's economic growth has been impressive, but neither the wealth generated by economic growth nor direct investments in health infrastructure and support systems have been equitably distributed. The result is that poorer families are less likely to access maternal and child health services than wealthier ones. In addition to economic inequity in access to health care, there are social inequities as well. For example, girls, infants from lower caste families and those with illiterate mothers are less likely to receive health care than boys, infants from higher caste families and those with mothers who have completed secondary school.

In 2002, implementation of the Integrated Management of Neonatal and Childhood Illness (IMNCI) strategy was started in India. In addition to treatment of common neonatal and childhood illnesses, IMNCI included home visits to all newborns in the first week of life, and community mobilization activities. We conducted a cluster randomized trial to evaluate IMNCI and found that its implementation resulted in 15% lower infant mortality in the intervention clusters. We also found a substantial improvement in home based newborn care practices such as initiation of breast feeding within an hour, exclusive breast feeding at four weeks, delayed bathing and appropriate cord care, and in treatment seeking practices in the intervention clusters [3].

Most large studies to evaluate the effect of interventions on newborn and child mortality report only overall results, and not the effect in vulnerable population subgroups. We believe that for an intervention shown to be efficacious in a representative population, several factors require attention when translating research findings to program policy; these include intervention impact on vulnerable groups. We therefore hypothesized that IMNCI implementation would result in a reduction of inequity in neonatal and post-neonatal mortality, in health care for illness and in newborn care practices. In this paper we present the results of a secondary analysis to examine the extent to which the IMNCI implementation changed the prevailing health inequities.

METHODS
=======

Methods of the main trial
-------------------------

The methods of the cluster-randomized trial evaluating IMNCI have been previously published and are briefly summarized below [3].

Setting
-------

The trial was conducted in 18 rural areas served by primary health centres in district Faridabad, Haryana, India, with a population of 1.1 million.
In this setting, about half of the mothers had never been to school and 95% of the women do not work outside the home; 25% of newborns are of low birth weight, and 60% of sick children sought care from medically unqualified private practitioners [4,5].

Randomization
-------------

In order to randomize the primary health centre areas into intervention and control groups, a baseline survey was conducted and information was obtained on the proportion of home deliveries, mothers who had never been to school, population per cluster, and neonatal and infant mortality. The clusters were divided into three strata of 6 clusters each according to their baseline neonatal mortality rates. Ten stratified randomization schemes were generated by an independent epidemiologist, of which seven schemes had a similar neonatal mortality rate, proportion of home births, proportion of mothers who had never been to school and population size in the intervention and control groups. One of these seven schemes was selected by a computer generated random number and was used to allocate the clusters into intervention and control groups.

IMNCI intervention
------------------

The intervention was designed following the guidelines defined by the Government of India for IMNCI [6–9]. The study activities in the intervention clusters included:

a) Post-natal home visits during the newborn period: Community health workers in the intervention clusters were trained to conduct home visits, counsel mothers on optimal essential newborn care practices, identify illnesses, treat mild illness and refer newborns with danger signs.

b) Improving health worker skills for case management of neonatal and childhood illness: All staff working in the public health facilities were trained to improve their existing skills for management of sick neonates and children. Training was given using the Government of India IMNCI training module. Formal and informal sector private providers also underwent IMNCI orientation sessions.

c) Strengthening the health system to implement IMNCI: Supervision of community health workers was improved, workers were provided performance-based incentives, and uninterrupted supplies of essential medicines were ensured through village level depots. To improve community awareness of the available services, three-monthly women's group meetings were conducted in each village.

Routine care
------------

Routine care includes the activities that were provided by the health care system for newborns and children in both intervention and control areas. This care was provided by two types of community health workers (Anganwadi workers and Accredited Social Health Activists or ASHAs), first level health workers (Auxiliary Nurse Midwives) and primary health care physicians. The activities of each category of workers are briefly described below:

**Anganwadi workers:** Their routine care activities included preschool education, supplementary nutrition and growth monitoring, largely delivered at Anganwadi centres. Their IMNCI-specific activity (only in intervention areas) was to make home visits after birth to promote optimal newborn care practices.

**Accredited Social Health Activists (ASHAs):** Their routine care activities included promotion of antenatal care, hospital births, and immunization and contraception services. Their IMNCI-specific activities (only in intervention areas) were to conduct women's group meetings to promote newborn care and to treat minor illnesses using the IMNCI algorithm.
**Auxiliary Nurse Midwives (ANMs):** Their routine care activities included provision of immunization, family planning, antenatal care, first level treatment of children with illness and conduct of deliveries. Their IMNCI-specific activity (only in intervention areas) was to treat newborn and childhood illnesses using the IMNCI algorithm.

**Primary health care physicians:** Their routine care activities included provision of outpatient treatment of childhood illnesses. Their IMNCI-specific activity (only in intervention areas) was to treat newborn and childhood illnesses using the IMNCI algorithm.

Outcome measurement
-------------------

The primary outcomes of the trial were neonatal and infant mortality, and the secondary outcomes included newborn care practices and care-seeking for illness. The intervention was initiated in January 2007, and data collection for outcome measurement was started in January 2008. The overall sample size of the study was about 30 000 live births per group, which was calculated for ascertaining a 20% difference in neonatal and infant mortality, the primary outcomes of the study.

All live births in the intervention and control clusters were visited on day 29 (for ascertaining neonatal mortality) and at 6 and 12 months of age (for ascertaining post-neonatal mortality). Households in the intervention and control areas were allocated to one of the 110 study field workers, who were not involved with IMNCI implementation. The workers visited the allocated households every month to identify new pregnancies and inquire about the outcome of previously identified pregnancies. All live births identified by the workers were entered into a database, which was used to generate the due dates for following up these infants through home visits. All households with live births were visited on day 29 and at ages 3, 6, 9, and 12 months by the worker to whom the household was allocated, to document the vital status of the infant. The worker confirmed the identification of the infant through a set of questions before asking about the health status of the infant. These surveillance workers were not told the intervention status of the clusters. The follow-up procedures were identical in intervention and control clusters. Information was also obtained for all enrolled infants about socio-demographic characteristics and possession of assets at enrolment.

Secondary outcomes, including newborn care practices and treatment seeking for illness, were ascertained in a subset of enrolled infants at day 29 of life. These outcomes were assessed through an interview by a research assistant with the primary caregiver that lasted 45 minutes to an hour. The sample size for these outcomes was 6200 per group, which was calculated to ascertain at least a 10% absolute difference in care seeking from an appropriate provider for neonatal illness (a worked sketch of this type of calculation is given at the end of this subsection). A random sample of enrolled infants in both the intervention and control clusters was selected for ascertaining secondary outcomes in the following manner. All live births identified by the surveillance workers were entered into a database. Dates for their 29-day visit were generated using a computer program. At the same time, one of five enrolled infants was randomly selected by the computer program for an interview for secondary outcomes. The identification numbers of infants selected for interview were communicated to the research assistants of the secondary outcome assessment team a day before the scheduled interview.
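The paper reports only the final sample size, not the inputs behind it. As an illustration of the kind of two-proportion power calculation involved, here is a minimal Python sketch using statsmodels; the baseline proportion, significance level, power and design effect below are assumptions for illustration only, and with these inputs the unadjusted per-group size comes out far smaller than 6200; the remainder would reflect the cluster-design effect and other allowances that this text does not state.

```python
# Hedged sketch of a two-proportion sample-size calculation (not the authors'
# actual computation). All inputs below are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control = 0.25           # assumed baseline care-seeking proportion
p_intervention = 0.35      # 10% absolute difference, as stated in the text
alpha, power = 0.05, 0.90  # conventional error rates (assumed)

h = proportion_effectsize(p_intervention, p_control)  # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=h, alpha=alpha, power=power, ratio=1.0
)

deff = 5.0  # assumed inflation for cluster randomization (illustrative)
print(round(n_per_group), round(n_per_group * deff))
```

With these assumed inputs the unadjusted n is roughly 220 per group; the trial's 6200 per group therefore implies substantially larger allowances for clustering and subgroup analyses than this toy calculation shows.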
Ethical considerations
----------------------

The study was approved by the ethics review committees of the Society for Applied Studies and the World Health Organization. Permissions were also obtained from the state and district authorities. Informed consent was taken from the women with a live birth prior to the first interview. Oversight of the study was provided by a study advisory group and a Data Safety Monitoring Board (DSMB).

Secondary analysis for ascertaining impact on equity
----------------------------------------------------

Analysis was performed using Stata software version 11 (StataCorp, College Station, TX, USA) and the methods are described below.

Population subgroups
--------------------

The infants in intervention and control clusters were divided into subgroups based on their families' wealth, religion and caste, mother's years of schooling and the sex of the infant. The wealth of an individual was determined by a wealth index created using principal component analysis based on all of the assets owned by a household; the fact that a household did not own a particular asset generally associated with poor households was also used in the calculation of the wealth index. The following variables from the initial survey were used to determine the assets owned by a household: the source of drinking water; use of electricity; type of sanitation; type of cooking fuel used; construction materials used for the roof, floor and walls of the house; ownership of items like a mattress, a pressure cooker, a chair, a cot/bed, a table, an electric fan, a radio/transistor, a black and white television, a colour television, a sewing machine, a mobile telephone, any other telephone, a computer, a refrigerator, a watch or clock, a bicycle, a motorcycle or scooter, an animal-drawn cart, a car, a water pump, a thresher, a tractor; house ownership; number of household members per sleeping room; and ownership of a bank or post-office account. An asset score with a mean of 0 and standard deviation of 1 was used in the principal component analysis. Using the score from the wealth index, the population was divided into five equal wealth quintiles (see the code sketch at the end of this subsection).

Religion and caste were classified into upper caste Hindu, lower caste Hindu (scheduled castes and tribes), and non-Hindu. Maternal education was classified as none, 1–9 years, 10–11 years, and 12 or more years of schooling.
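As a concrete illustration of the wealth-index construction described above, the following minimal sketch scores households on the first principal component of their asset indicators and cuts the score into quintiles. The data frame and column names are hypothetical, and the original analysis was done in Stata 11, so this Python version is an approximation of the approach rather than the authors' code.

```python
# Minimal sketch: asset-based wealth index via the first principal component,
# then division into five equal quintiles. Assumes a pandas DataFrame
# `households` with one row per household and 0/1 asset indicator columns;
# all names are illustrative, not taken from the study data set.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

asset_cols = ["electricity", "private_toilet", "pressure_cooker",
              "bicycle", "television", "refrigerator"]  # illustrative subset

X = StandardScaler().fit_transform(households[asset_cols])  # mean 0, SD 1
score = PCA(n_components=1).fit_transform(X).ravel()        # first component

# Note: the sign of a principal component is arbitrary; flip it if needed so
# that a higher score corresponds to a wealthier household.
households["wealth_score"] = score
households["wealth_quintile"] = pd.qcut(
    score, q=5,
    labels=["poorest", "very poor", "poor", "less poor", "least poor"],
)
```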
Inequities in health outcomes
-----------------------------

Neonatal mortality, post-neonatal mortality, newborn care practices (eg, initiation of breastfeeding within 1 hour) and care-seeking from an appropriate provider for danger signs and pneumonia were displayed for intervention and control areas in subgroups by wealth quintile, religion and caste, maternal education and sex of the infant. We chose to analyze inequities in neonatal and post-neonatal mortality separately because the overall results of the IMNCI trial showed that most of the effect of the intervention on infant mortality was attributable to post-neonatal mortality. In order to visually assess the degree of income-related inequity in the distribution of health outcomes in intervention and control clusters (neonatal deaths, post-neonatal deaths, number of infants who initiated breastfeeding within one hour after birth), we used General Lorenz concentration curves.

The concentration curve plots the cumulative percentage of the health outcome (y-axis) against the cumulative percentage of the population ranked by wealth quintile, beginning with the poorest and ending with the richest (x-axis). The curve is expected to lie above the diagonal equity line for a negative outcome like mortality, indicating that more deaths occur in the poorer than in the richer quintiles of the population. Conversely, the curve is expected to lie below the equity line for a positive outcome such as utilization of health care, indicating that relatively fewer of those in the poorer quintiles have the outcome.

Effect of the intervention on inequity
--------------------------------------

The results were analyzed through a multiple linear regression model with a health outcome (neonatal mortality, post-neonatal mortality, initiation of breastfeeding within 1 hour, or care seeking from an appropriate provider for a danger sign) as the dependent variable and population subgroup (wealth quintile, religion and caste, level of education of the mother, or sex of the infant) as the independent variable. This multiple regression model was adjusted for the cluster design and possible confounders such as distance of the cluster from the highway and percent of home births in the cluster. Additional covariates were the intervention group (intervention or control) and an interaction term of the intervention with the population subgroup (eg, wealth quintile × intervention group). The regression coefficient of this interaction term, which reflects the difference in inequities between the intervention and control groups, was the main indicator of the effect of the intervention on equity.
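The interaction model described above can be written down compactly; the following sketch uses Python's statsmodels with cluster-robust standard errors, although the original analysis was performed in Stata 11. The data frame, variable names and the exact confounder set are illustrative assumptions, not the authors' specification.

```python
# Hedged sketch of the equity analysis: outcome regressed on subgroup,
# intervention arm and their interaction, with confounder adjustment and
# cluster-robust standard errors. `infants` and all column names are
# hypothetical. `wealth_quintile` is coded 1 (poorest) to 5 (least poor)
# as a linear trend, matching the "gradient" interpretation in the text;
# `intervention` is 0/1.
import statsmodels.formula.api as smf

model = smf.ols(
    "neonatal_death ~ wealth_quintile * intervention"
    " + dist_highway_km + pct_home_births",
    data=infants,
)

# One cluster per primary health centre area, as in the trial design
result = model.fit(cov_type="cluster",
                   cov_kwds={"groups": infants["cluster_id"]})

# The interaction coefficient estimates the difference in inequity
# gradients between the intervention and control arms.
print(result.params["wealth_quintile:intervention"])
```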
RESULTS
=======

Overall results of the IMNCI trial
----------------------------------

The overall results of the trial have been published previously [3] but are briefly described here in order to give the reader an overview of the overall impact of the intervention before presenting the results related to inequities. A total of 60 702 infants were enrolled into the trial. There were some differences between the intervention and control clusters at baseline. The control clusters had features of urbanization: a higher proportion of houses had private toilets (46% vs 38%) and a lower proportion possessed a 'below poverty line' card; the families in the control clusters were nearer to the highway than families in the intervention areas (7.0 km vs 15.3 km) and had a lower proportion of home births (65.9% vs 71.9%).

Overall, the infant mortality rate was significantly lower in the intervention clusters than in the control clusters (adjusted hazard ratio 0.85, 95% CI 0.77 to 0.94). The adjusted hazard ratio for neonatal mortality was 0.91 (0.80 to 1.03) and that for post-neonatal mortality was 0.76 (0.67 to 0.85). The intervention clusters had significant improvement in newborn and infant care practices. For example, almost 41% of the caregivers in the intervention clusters reported starting breastfeeding within an hour of birth, compared with 11.2% in the control clusters (odds ratio 5.21, 95% CI 4.33 to 6.28).

Population sub-groups in intervention and control clusters
----------------------------------------------------------

The proportion of poorer households and mothers with no formal schooling was slightly lower in the intervention than in the control clusters. Sex was equally distributed across intervention and control clusters. The largest difference between study groups was in the proportion of non-Hindus (8.9% in intervention and 24.3% in control clusters, Table 1).

**Table 1.** Population sub-groups in intervention and control clusters

| Characteristics of families of recruited infants | Intervention clusters (%) | Control clusters (%) |
|---|---|---|
| **Wealth quintiles of household:** | n = 29 589 | n = 30 604 |
| Poorest | 5620 (19.0) | 6421 (20.9) |
| Very poor | 5380 (18.2) | 6660 (21.8) |
| Poor | 5818 (19.7) | 6222 (20.3) |
| Less poor | 6039 (20.4) | 6001 (19.6) |
| Least poor | 6732 (22.8) | 5300 (17.3) |
| **Mother's education level:** | n = 29 545 | n = 30 499 |
| None | 11 220 (38.0) | 12 846 (42.1) |
| 1–9 years of schooling | 12 238 (41.4) | 11 604 (38.1) |
| 10–11 years of schooling | 3460 (11.7) | 3405 (11.2) |
| ≥12 years of schooling | 2627 (8.9) | 2644 (8.7) |
| **Sex:** | n = 29 667 | n = 30 813 |
| Male | 15 623 (52.7) | 16 252 (52.7) |
| Female | 14 044 (47.3) | 14 561 (47.3) |
| **Religion/caste:** | n = 29 565 | n = 30 577 |
| Upper caste | 19 407 (65.6) | 16 122 (52.7) |
| Scheduled caste/scheduled tribe | 7532 (25.5) | 7013 (22.9) |
| Non-Hindu | 2626 (8.9) | 7442 (24.3) |

Inequities in health outcomes in the control population
--------------------------------------------------------

There were large inequities in health outcomes across different population subgroups. Mortality outcomes were substantially higher among the more vulnerable population sub-groups. For instance, in the control clusters, post-neonatal mortality was 41.7 per 1000 live births in the poorest and 14.0 per 1000 in the least poor; 36.5 and 18.5 per 1000 in non-Hindus and upper caste Hindus; 32.3 and 20.8 per 1000 among female and male infants; and 36.3 and 9.8 per 1000 in infants of mothers with no formal schooling and those with 12 years or more of schooling.

On the other hand, access to health care was lower in the vulnerable population subgroups. In the control clusters, 17.1% and 42.7% of neonates from the poorest and least poor households were taken for health care from an appropriate provider when they had a danger sign. The corresponding values for the same outcome were 12.3% and 38.4% for non-Hindus and upper-caste Hindus, 19.3% and 36.4% for female and male infants, and 19.6% and 51.4% for infants of mothers with no formal schooling and 12 or more years of schooling (Tables 2 to 5).

**Table 2.** Effect of intervention on inequities in neonatal mortality in the intervention and control clusters

| Subgroups (total infants in intervention/control clusters) | Intervention (n = 29 667), no. of deaths (NMR/1000) | Control (n = 30 813), no. of deaths (NMR/1000) | Difference in inequity gradient (95% CI)* | P-value |
|---|---|---|---|---|
| **Wealth quintile:** | | | | |
| Poorest (5620/6421) | 293 (52.1) | 348 (54.2) | | |
| Very poor (5380/6660) | 248 (46.1) | 334 (50.2) | | |
| Poor (5818/6222) | 252 (43.3) | 224 (36.0) | | |
| Less poor (6039/6001) | 241 (39.9) | 218 (36.3) | | |
| Least poor (6732/5300) | 208 (30.9) | 177 (33.4) | | |
| Change in NMR/subgroup (inequity gradient) | –3.6 (–6.0 to –1.2) | –4.1 (–5.9 to –2.3) | 0.5 (–2.0 to 2.9) | 0.681 |
| **Religion and caste:** | | | | |
| Hindu scheduled caste/tribe (7532/7013) | 352 (46.7) | 330 (47.1) | | |
| Non-Hindu (2626/7442) | 117 (44.6) | 322 (43.3) | | |
| Hindu upper caste (19 407/16 122) | 773 (39.8) | 648 (40.2) | | |
| Change in NMR/subgroup (inequity gradient) | –0.2 (–3.6 to 3.3) | 0.2 (–3.7 to 4.0) | –0.3 (–4.8 to 4.1) | 0.872 |
| **Gender:** | | | | |
| Female (14 044/14 561) | 577 (41.1) | 614 (42.2) | | |
| Male (15 623/16 252) | 667 (42.7) | 712 (43.8) | | |
| Change in NMR/subgroup (inequity gradient) | 1.9 (–4.9 to 8.7) | 2.0 (–3.1 to 7.2) | –0.1 (–8.7 to 8.4) | 0.974 |
| **Mother's years of schooling:** | | | | |
| None (11 220/12 846) | 537 (47.9) | 626 (48.7) | | |
| 1–9 years (12 238/11 604) | 501 (40.9) | 478 (41.2) | | |
| 10–11 years (3460/3405) | 117 (33.8) | 127 (37.3) | | |
| 12+ years (2627/2644) | 83 (31.6) | 57 (21.6) | | |
| Change in NMR/subgroup (inequity gradient) | –2.9 (–5.1 to –0.71) | –4.8 (–8.2 to –1.4) | 1.9 (–1.9 to 5.7) | 0.296 |

NMR – neonatal mortality rate, CI – confidence interval.
*Multiple linear regressions adjusted for cluster design and potential confounders (distance of nearest point from PHC to highway, percent of home births, and years of schooling of mother, gender, religion and caste and wealth quintile).

**Table 5.** Effect of intervention on inequities in care-seeking from an appropriate provider for a danger sign during the neonatal period in intervention and control clusters

| Subgroups (newborns with danger signs in intervention/control groups) | Intervention (n = 1010), n (%) taken for care to an appropriate provider | Control (n = 1269), n (%) taken for care to an appropriate provider | Difference in inequity gradients (95% CI)* | P-value |
|---|---|---|---|---|
| **Wealth quintile:** | | | | |
| Poorest (185/257) | 60 (32.4) | 44 (17.1) | | |
| Very poor (164/258) | 58 (35.4) | 47 (18.2) | | |
| Poor (187/256) | 89 (47.6) | 86 (33.6) | | |
| Less poor (208/250) | 100 (48.1) | 91 (36.4) | | |
| Least poor (264/246) | 165 (62.5) | 105 (42.7) | | |
| Change in % taken for appropriate care/subgroup (inequity gradient) | 4.6 (2.8 to 6.4) | 4.0 (2.5 to 5.5) | 0.6 (–1.6 to 2.8) | 0.554 |
| **Religion and caste:** | | | | |
| Scheduled caste and tribe (254/304) | 97 (38.2) | 84 (27.6) | | |
| Non-Hindu (79/308) | 18 (22.8) | 38 (12.3) | | |
| Hindu upper caste (677/653) | 359 (53.0) | 251 (38.4) | | |
| Change in % taken for appropriate care/subgroup (inequity gradient) | 3.9 (–0.2 to 7.9) | 2.8 (0.1 to 5.4) | 1.1 (–3.9 to 6.1) | 0.653 |
| **Gender:** | | | | |
| Female (400/514) | 165 (41.3) | 99 (19.3) | | |
| Male (610/755) | 309 (50.7) | 275 (36.4) | | |
| Change in % taken for appropriate care/subgroup (inequity gradient) | 8.3 (1.6 to 15.1) | 17.6 (11.4 to 23.8) | –9.3 (–18.2 to –0.4) | 0.042 |
| **Mother's years of schooling:** | | | | |
| None (405/555) | 156 (38.5) | 109 (19.6) | | |
| 1–9 years (395/447) | 188 (47.6) | 144 (32.2) | | |
| 10–11 years (119/157) | 67 (56.3) | 65 (41.4) | | |
| 12+ years (91/109) | 63 (69.2) | 56 (51.4) | | |
| Change in % taken for appropriate care/subgroup (inequity gradient) | 5.5 (1.5 to 9.4) | 6.5 (2.4 to 10.6) | –1.0 (–6.5 to 4.4) | 0.694 |

CI – confidence interval.
*Multiple linear regressions adjusted for cluster design and potential confounders (distance of nearest point from PHC to highway, percent of home births, years of schooling of mother, gender, religion and caste and wealth quintile). Appropriate care provider: physicians in government and private facilities, auxiliary nurse midwife, Anganwadi worker, or accredited social health activist.

Effect of the IMNCI intervention on inequities in health indicators
-------------------------------------------------------------------

Inequities in health outcomes in intervention and control clusters are graphically depicted in Figure 1. The IMNCI intervention does not appear to substantially change inequities in neonatal mortality, but the concentration curves for post-neonatal mortality indicate greater equity in the intervention clusters compared with the control clusters. The intervention clusters also show a more equitable distribution of early initiation of breastfeeding and of seeking care for danger signs from an appropriate provider.

**Figure 1.** Concentration curves for different health outcomes and wealth quintiles. A. Early initiation of breastfeeding. B. Care seeking for danger signs from an appropriate provider. C. Neonatal mortality. D. Post-neonatal mortality.

The results of the multiple linear regression analysis confirmed that the IMNCI intervention did not have a significant effect on inequities in neonatal mortality by wealth status, religion and caste, maternal education or gender. The inequities in neonatal mortality were similar in intervention and control groups across different subgroups after adjustment for cluster design and potential confounders (Table 2).

The inequities in post-neonatal infant mortality by wealth status were significantly lower in the intervention as compared to control clusters. Post-neonatal mortality was lower by 4.9 per 1000 per wealth quintile when going from the poorest to the least poor in the control group, but only by 2.8 per 1000 per quintile in the intervention group (adjusted difference in gradients 2.2 per 1000, 95% confidence interval 0 to 4.4 per 1000, *P* = 0.053). There were similar differences in gradients across subgroups by religion and caste, gender and years of schooling of the mother, but these differences were not statistically significant (Table 3).
**Table 3.** Effect of intervention on inequities in post-neonatal mortality in the intervention and control clusters

| Subgroups (total infants in intervention/control clusters) | Intervention (n = 29 667), no. of deaths (rate/1000) | Control (n = 30 813), no. of deaths (rate/1000) | Difference in inequity gradients (95% CI)* | P-value |
|---|---|---|---|---|
| **Wealth quintile:** | | | | |
| Poorest (5620/6421) | 214 (38.1) | 268 (41.7) | | |
| Very poor (5380/6660) | 134 (24.9) | 219 (32.9) | | |
| Poor (5818/6222) | 119 (20.5) | 153 (24.6) | | |
| Less poor (6039/6001) | 111 (18.4) | 91 (15.2) | | |
| Least poor (6732/5300) | 100 (14.9) | 74 (14.0) | | |
| Change in mortality rate/subgroup (inequity gradient) | –2.8 (–4.2 to –1.3) | –4.9 (–7.0 to –2.8) | 2.2 (0 to 4.4) | 0.053 |
| **Religion and caste:** | | | | |
| Scheduled caste and tribe (7532/7013) | 229 (30.4) | 233 (33.2) | | |
| Non-Hindu (2626/7442) | 69 (26.3) | 272 (36.5) | | |
| Hindu upper caste (19 407/16 122) | 379 (19.5) | 298 (18.5) | | |
| Change in mortality rate/subgroup (inequity gradient) | –1.8 (–4.1 to 0.51) | –4.8 (–7.7 to –1.8) | 3.0 (–0.6 to 6.6) | 0.101 |
| **Gender:** | | | | |
| Female (14 044/14 561) | 392 (27.9) | 471 (32.3) | | |
| Male (15 623/16 252) | 289 (18.5) | 338 (20.8) | | |
| Change in mortality rate/subgroup (inequity gradient) | –9.1 (–12.2 to –6.0) | –10.8 (–14.7 to –6.9) | 1.7 (–3.2 to 6.6) | 0.479 |
| **Mother's years of schooling:** | | | | |
| None (11 220/12 846) | 355 (31.6) | 466 (36.3) | | |
| 1–9 years (12 238/11 604) | 247 (20.2) | 261 (22.5) | | |
| 10–11 years (3460/3405) | 52 (15.0) | 45 (13.2) | | |
| 12+ years (2627/2644) | 24 (9.1) | 26 (9.8) | | |
| Change in mortality rate/subgroup (inequity gradient) | –4.0 (–6.4 to –1.5) | –5.9 (–8.1 to –3.7) | 2.0 (–1.3 to 5.2) | 0.222 |

CI – confidence interval.
*Multiple linear regressions adjusted for cluster design and potential confounders (distance of nearest point from PHC to highway, percent of home births, years of schooling of mother, gender, religion and caste and wealth quintile).

Among all the outcomes examined in this analysis, inequities in the control group were the smallest for the practice of initiating breastfeeding within 1 hour of birth. The IMNCI intervention substantially increased the prevalence of this practice, and had greater benefit for the more vulnerable population subgroups, resulting in inequity gradients that favored infants from poorer families (difference in gradients between intervention and control clusters 3.0%, CI 1.5 to 4.5, *P* < 0.001), lower caste Hindus and non-Hindus (difference in gradients 3.9%, CI 1.8 to 6.0, *P* < 0.001) and mothers with fewer years of schooling (difference in gradients 5.4%, CI 3.4 to 7.4, *P* < 0.001). This pattern of beneficial effects was not seen by infant sex, with boys and girls benefitting equally by the intervention (Table 4).
**Table 4.** Effect of intervention on inequities in breastfeeding initiation within 1 h of birth (as reported by the mother) in intervention and control clusters

| Subgroups (total infants in intervention/control clusters) | Intervention (n = 6204), no. breastfed in first hour (%) | Control (n = 6163), no. breastfed in first hour (%) | Difference in inequity gradients (95% CI)* | P-value |
|---|---|---|---|---|
| **Wealth quintile:** | | | | |
| Poorest (1201/1231) | 527 (43.9) | 127 (10.3) | | |
| Very poor (1089/1299) | 510 (46.8) | 154 (11.9) | | |
| Poor (1182/1278) | 517 (43.7) | 139 (10.9) | | |
| Less poor (1276/1222) | 497 (38.9) | 140 (11.5) | | |
| Least poor (1452/1122) | 475 (32.7) | 128 (11.4) | | |
| Change in % initiated breastfeeding early/subgroup (inequity gradient) | –2.8 (–4.2 to –1.1) | 0.4 (–0.3 to 1.0) | –3.0 (–4.5 to –1.5) | <0.001 |
| **Religion and caste:** | | | | |
| Scheduled caste and tribe (1556/1469) | 718 (46.1) | 193 (13.1) | | |
| Non-Hindu (526/1420) | 238 (45.3) | 93 (6.6) | | |
| Hindu upper caste (4119/3254) | 1569 (38.1) | 399 (12.3) | | |
| Change in % initiated breastfeeding early/subgroup (inequity gradient) | –3.4 (–5.2 to –1.7) | –0.5 (–1.2 to 2.1) | –3.9 (–6.0 to –1.8) | <0.001 |
| **Gender:** | | | | |
| Female (2893/2845) | 1168 (40.4) | 323 (11.4) | | |
| Male (3310/3318) | 1358 (41.0) | 366 (11.0) | | |
| Change in % initiated breastfeeding early/subgroup (inequity gradient) | –0.8 (–2.0 to 3.6) | –0.2 (–2.3 to 1.9) | –1.0 (–2.5 to 4.5) | 0.542 |
| **Mother's years of schooling:** | | | | |
| None (2465/2687) | 1068 (43.3) | 237 (8.8) | | |
| 1–9 years (2548/2260) | 1061 (41.6) | 301 (13.3) | | |
| 10–11 years (642/637) | 253 (39.4) | 95 (14.9) | | |
| 12+ years (547/574) | 144 (26.3) | 56 (9.8) | | |
| Change in % initiated breastfeeding early/subgroup (inequity gradient) | –3.1 (–4.9 to –1.3) | 2.2 (0.8 to 3.7) | –5.4 (–7.4 to –3.4) | <0.001 |

CI – confidence interval.
*Multiple linear regressions adjusted for cluster design and potential confounders (distance of nearest point from PHC to highway, percent of home births, years of schooling of mother, gender, religion and caste and wealth quintile).

Care-seeking from an appropriate provider for neonates with a danger sign was inequitably distributed in both control and intervention groups. While the IMNCI intervention improved this outcome overall, the differences in inequity gradients between intervention and control clusters were not statistically significant in subgroups by wealth, religion and caste, or maternal education. However, the intervention did reduce inequity in this outcome by infant sex. In the control group, only 19.3% of severely ill girls compared with 36.4% of severely ill boys were taken for care to an appropriate provider, but this difference was reduced in the intervention group, with 41.3% of girls and 50.7% of boys taken for appropriate care (difference in gradients 9.3%, CI 0.4 to 18.2, *P* = 0.042; Table 5).

DISCUSSION
==========

Main findings
-------------

The beneficial effects of the IMNCI intervention on newborn and infant care practices and survival were equitably distributed among population subgroups. The intervention reduced inequities in post-neonatal mortality between wealth quintiles but did not reduce inequities in neonatal mortality. There was a greater increase in the proportion of neonates who initiated breastfeeding within one hour of birth in the intervention clusters among poorer families, lower caste and minority families, and infants of mothers with fewer years of schooling. Care seeking for severe neonatal illness from an appropriate provider improved more for girls, reducing gender inequity, but inequities in this outcome by wealth, religion and caste, and maternal education did not change.
Potential mechanisms that could explain the results
---------------------------------------------------

While there was no attempt to specifically target the poorer and other vulnerable populations in the IMNCI strategy, substantial efforts were made to deliver the intervention to the entire population. We believe that this led to the intervention being delivered to a large proportion of the vulnerable population subgroups. These vulnerable population subgroups were also more likely to respond positively to counselling advice, as evidenced by the greater improvement among them of appropriate practices like early initiation of breastfeeding, which is the least demanding practice in terms of resources for the mother and family. Availability of appropriate health care close to home resulted in improved care seeking for girls, perhaps due to the reduced need for financial resources. It has previously been shown in this population that care for girls is not obtained from hospitals and other health facilities because of the lower value placed on girls than on boys and the reluctance of families to use meagre financial resources on the health of girls [10].

The impact of the intervention in reducing inequities is evident for post-neonatal mortality but was not observed for neonatal mortality. This lack of impact on inequities in neonatal mortality could be because a high proportion of neonatal deaths occur in the first days of life and are related to maternal health care, which was not part of the IMNCI programme. Further, clinical problems in the neonatal period may develop and evolve rapidly to become serious, and require inpatient care, which was also not included in the IMNCI strategy.

There was no statistically significant effect on differences in post-neonatal mortality between boys and girls. However, the mortality rate in boys was lower in the intervention group than in the control group by 2.3 per 1000, whereas the corresponding difference for girls was 4.4 per 1000. This means that there might be some effect of the improved care seeking for girls on their mortality, but there might be other inequities that girls face that limit the effect on the difference in mortality between boys and girls.

Comparison with other studies that have reported impact of interventions on inequities in neonatal and post-neonatal mortality
------------------------------------------------------------------------------------------------------------------------------

We could only find one study that reported on the impact of IMCI on inequalities in child health [11]. The effect was mixed. Equity differentials for six child health indicators (underweight, stunting, measles immunization, access to treated and untreated bednets, treatment of fever with antimalarials) improved significantly in IMCI districts compared with comparison districts (*P* < 0.05), while four indicators (wasting, DPT coverage, caretakers' knowledge of danger signs and appropriate care seeking) improved significantly in comparison districts compared with IMCI districts (*P* < 0.05).

A systematic review published in 2014 summarized evidence about the differential effects of interventions on different socio-demographic groups in order to identify interventions that were effective in reducing maternal or child health inequalities [12]. Eleven of the 22 studies included in the review reported on the infant and under-five mortality rate.
These studies covered five kinds of interventions: immunization campaigns, nutrition supplement programs, health care provision improvement interventions, demand side interventions, and mixed interventions. The review concluded that studies on the effectiveness of interventions on equity in maternal or child health are limited. The limited evidence showed that the interventions that were effective in reducing inequity included improvement of health care delivery by outreach methods, using human resources in local areas or providing care at the community level nearest to residents, and the provision of financial or knowledge support to improve demand side determinants [12]. Vulnerable groups might benefit more if IMNCI incorporated community based treatment for the less severely ill neonates, leaving referral to health facilities for the severely ill. For neonatal mortality, one of the studies included in the above review reported that a participatory women's group intervention can substantially reduce socio-economic inequalities in neonatal mortality [13].

Strengths and weaknesses of this analysis
-----------------------------------------

The IMNCI evaluation study was a cluster randomized effectiveness trial with a large sample size involving about 60 000 births, and it was therefore possible to study the effect of the intervention on inequities with reasonable precision. Detailed baseline information was available for all births in intervention and control clusters, allowing accurate classification into population subgroups by wealth, religion and caste, sex and level of maternal education. There was independent and similar measurement of outcomes in intervention and control clusters, with very low rates of loss to follow-up.

A couple of weaknesses of this analysis merit consideration. There are inherent weaknesses of a subgroup analysis, but examination of equity is only possible with such an analysis. There were some baseline differences between intervention and control clusters which could have resulted in some differences in inequity gradients between them. However, we adjusted the analysis for the baseline characteristics that showed important differences between intervention and control clusters. Finally, it is difficult to separate the effects of different components of the IMNCI package, or the effect of the "IMNCI home visits" that were made to promote newborn care practices from home visits without any health intervention. However, making home visits with no health intervention in the control group was not possible in this pragmatic cluster randomized trial.

Conclusions and implications of this paper
------------------------------------------

The IMNCI strategy, as implemented in the trial, promotes equity in post-neonatal mortality, in newborn care practices, particularly early initiation of breastfeeding, and in health care seeking for severe illness for some of the vulnerable population subgroups. However, substantial inequities continue to exist despite the intervention, and therefore additional efforts are required for health programs like IMNCI not only to reach vulnerable populations such as mothers and children of families with lower socio-economic status, but also to identify and implement interventions that have a greater effect on reducing inequities.

**Disclaimers:** The views expressed in the manuscript are the authors' own and not an official position of the institution or the funder.
**Acknowledgments:** We acknowledge the contributions of Drs Pavitra Mohan, Betty R Kirkwood and Henri Van Den Hombergh, who were members of the IMNCI Study Advisory Group. We thank Drs Harish Kumar and VK Anand for facilitation of the Integrated Management of Neonatal and Childhood Illness training and for providing feedback at different stages. We acknowledge the members of the Data Safety Monitoring Board: Simon Cousens (Chair), Bert Pelto, and Siddarth Ramji. We are thankful to the Government of Haryana and the Civil Surgeons of districts Faridabad and Palwal in position during the study for their cooperation, to the participating health and Integrated Child Development Services Scheme officers, and to the workers of the Faridabad district. We acknowledge the cooperation extended by the population of the district who participated in the study. We are grateful for the core support provided to our organisation by the Department of Maternal, Newborn, Child and Adolescent Health, World Health Organization (Geneva) and the Centre for Intervention Science in Maternal and Child Health (RCN Project No. 223269), Centre for International Health, University of Bergen (Norway).

**Funding:** World Health Organization, Geneva (through an umbrella grant from the United States Agency for International Development); United Nations Children's Fund, New Delhi; and the Programme for Global Health and Vaccination Research of the Research Council of Norway through Grant No. 183722.

**Authorship declaration:** All authors conceived and designed this manuscript. Sunita Taneja and Shikhar Bahl conducted the analysis and wrote the first draft of the manuscript. All other authors provided critical inputs to the manuscript.

**Competing interests:** All authors have completed the Unified Competing Interest form at [www.icmje.org/coi_disclosure.pdf](http://www.icmje.org/coi_disclosure.pdf) (available on request from the corresponding author). The authors declare no competing interests.
66,088,140