2,600
Public Key Encryption: Temporary Privacy, Temporary Security
Privacy is a perennial national preoccupation, entering the zeitgeist for a few weeks every time a major breach or revelation of malpractice hits the news, then gradually fading from view. As interest waxes and wanes, facts on the ground have also been mixed: various states increase their espionage programs while others enshrine data ownership; companies increasingly take data security seriously while collecting ever more invasive data from users; end-to-end encrypted chat apps explode in popularity while security flaws (or, some allege, intentional backdoors) are found with regularity. These are the natural growing pains of our increasingly online, connected society: when so much of life is lived by transmitting sensitive information over hundreds or thousands of miles, through dozens of intermediaries, to dozens of different individuals or companies, it is only logical that an arms race between those who want that information and those who want it kept private would result. The key weapon on the side of the information-hiders, be they individuals wary of corporate tracking or criminals hiding illicit activity, has always been encryption, particularly of the public-key kind — that magical set of algorithms that allows two individuals to share no prior secret, have all of their communication heard by eavesdroppers, and yet still communicate something that no one but the two of them can understand. No less than Edward Snowden (who, whether you believe him a hero or traitor, is certainly well-versed in data privacy) says this about encryption in his book Permanent Record: Deletion is a dream for the surveillant and a nightmare for the surveilled, but encryption is, or should be, a reality for all. It is the only true protection against surveillance. If the whole of your storage drive is encrypted to begin with, your adversaries can’t rummage through it for deleted files, or for anything else — unless they have the encryption key. 
If all the emails in your inbox are encrypted, Google can’t read them to profile you — unless they have the encryption key. If all your communications that pass through hostile Australian or British or American or Chinese or Russian networks are encrypted, spies can’t read them — unless they have the encryption key. This is the ordering principle of encryption: all power to the key holder. Privacy advocates the world over imagine a world where cryptography replaces trust, where you can be certain of your data’s privacy because you can mathematically prove that only the intended recipients can discern anything at all from it. Even without such lofty crypto-Utopian ambitions, it is hard to deny that the modern internet, and hence much of the modern economy, is an edifice largely built on encryption: without it, secure and private connections over such a shared network would be impossible. Unfortunately, though, most contemporary encryption doesn’t quite give us the guarantees we think it does: the confluence of cheap storage and nonlinear advances in computing means that the only thing you can be sure of is that your data is secure for now. Quantum Computing + Cheap Storage = Limited Privacy The general public is increasingly aware of the threat that quantum computing poses to the current internet: each stride forward in quantum computing brings us a little closer to the day when all public key algorithms — Diffie-Hellman, RSA, elliptic curve — become effectively useless against an adversary with such a computer. The standard response to this is usually fairly blasé: whenever such machines are within a few years of being feasible, we are assured, standards will shift to a new, quantum-resistant public-key algorithm; users will not even notice the change except maybe through higher data usage fees and CPU utilization as they move to less efficient algorithms. 
This response is entirely valid for a whole range of use cases: there’s little to worry about when it comes to banking or online purchases because these will move to quantum-resistant algorithms before any damage can be done. However, accessible quantum computing means that anything that is public-key encrypted using the state of the art today will become readable to anybody who has that data. Hence the implicit assumption that encrypted data in an adversary’s hands is not useful (e.g., why we feel comfortable sending personal communications and browsing the web on our phones or over wifi) is incomplete — this is true, but only for a limited, uncertain time. Everything encrypted you send over a public network is a time capsule: anyone willing to hold onto it for long enough will be able to see its contents. The attack this implies is very simple: Alice and Bob are communicating “securely” via public key encryption algorithm X. Eve (the “eavesdropper”) holds onto all of the traffic that passes between them. At some later date, when algorithm X is broken by quantum computing or some similar technological advance, Eve can go back, decrypt all of the traffic, and learn everything that Alice and Bob wanted kept secure. This is why cheap long-term storage is the second prerequisite for the attack: because public key encryption hides data only temporarily, an adversary targeting someone cannot be sure which traffic will be worth holding onto, and must collect all of it. Luckily for these adversaries (and unfortunately for the rest of us), storage is cheap and getting cheaper: as of today, a retail customer can easily buy a hard drive for less than $20 per terabyte. I, an avid user of my cell phone, use much less than 20GB a month, or <240GB a year. If the ratio between data usage and storage cost stays this low, a relatively unsophisticated adversary could store as much of my data as he could sniff, indefinitely, for around $5/year. 
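That back-of-envelope estimate is easy to check. A minimal sketch, using the $20-per-terabyte and 20GB-per-month figures assumed above:

```python
# Rough cost for an adversary to store one target's yearly traffic,
# under the assumptions stated in the text (not measured values).
GB_PER_TB = 1000            # decimal gigabytes, as drives are marketed
cost_per_tb = 20.0          # USD per terabyte of retail hard-drive storage
monthly_usage_gb = 20       # assumed cell-phone data usage per month

yearly_gb = monthly_usage_gb * 12                   # 240 GB captured per year
yearly_cost = yearly_gb / GB_PER_TB * cost_per_tb   # storage cost per year

print(f"{yearly_gb} GB/year -> ${yearly_cost:.2f}/year")
```

At roughly $4.80 a year, storing everything a target sends is effectively free, which is the point of the argument above.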
The “sniffing” portion provides more of a barrier to such a targeted attack for cell phones (e.g., the attacker would need something like a StingRay, which is expensive and of dubious legality), but sniffing all traffic on a wifi network requires no special hardware and is difficult to detect. From where I sit typing this in my apartment, my laptop can detect over 50 wifi networks. This reality is scary: you could be subject to surveillance not only by companies or by government agencies, both of which can be reined in by law, but also by any private individual with malicious intent, a few hundred dollars of equipment, and a long time horizon. What can we do about it? As individuals, basically the only thing that can be done is to treat privacy on the internet as something temporary and unreliable. This obviously leads to a chilling effect, but it’s the reality of the situation: if you don’t want your web history, private emails, or nude photos leaked, then do not trust the current internet to protect them. These problems are surmountable, but not in a way that any individual can accomplish alone. From my (non-expert) viewpoint, solutions include: Adoption of quantum-resistant public-key encryption algorithms as soon as technically feasible. This is the quickest way to address the most obvious attack described above, though any algorithm has the danger of being made obsolete by further breakthroughs. More widespread options for symmetric (i.e., non-public) key encryption (as the NSA has used since 2015). This requires sharing a secret key over a secure channel, e.g., physically, but you could imagine an app making it easy to do this with the handful of people with whom you actually need to communicate with true secrecy. In the most extreme case, a “one-time-pad” can be used for communication (i.e., a key as long as the message). 
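The one-time pad mentioned above is simple enough to sketch in a few lines of Python. This is an illustration only, not production code; the hard part, sharing a truly random key as long as the message over a secure channel, is exactly the distribution problem described above:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # The key must be truly random, at least as long as the message,
    # and never reused -- otherwise the perfect-secrecy guarantee is lost.
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the same XOR, so applying it twice recovers the plaintext.
plaintext = b"meet at noon"
key = secrets.token_bytes(len(plaintext))  # shared in advance, e.g. physically
ciphertext = otp_encrypt(plaintext, key)
assert otp_encrypt(ciphertext, key) == plaintext
```

Without the key, every possible plaintext of the same length is equally consistent with the ciphertext, which is why no computing breakthrough can break it.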
This would be absurdly memory intensive, but makes it literally impossible to decrypt the message, no matter what breakthroughs are made in computing. More options for obfuscating traffic. If an adversary cannot differentiate between the important and unimportant encrypted data, then you could use a service that creates bogus traffic to hide what you’re actually doing and, ideally, make it less feasible for the adversary to store all of your data. The COVID crisis, in particular, has shown us how deeply we now rely on the internet and how enmeshed it is in every part of society. This is not a genie that we can, or should, put back into its bottle. But neither can we keep our heads in the sand and ignore the serious danger posed by using algorithms with only short-term privacy guarantees to protect deeply personal information online. I’m sure I’ve missed out on a lot of nuances to this topic and other ways of resolving this problem, not being an expert in cryptography — please feel free to explain what I’ve missed or link to further readings below!
https://medium.com/@digital-cygnet/public-key-encryption-temporary-privacy-temporary-security-204bec7fb0bf
['Digital Cygnet']
2020-10-25 23:08:22.914000+00:00
['Encryption', 'Data', 'Privacy', 'Technology', 'Policy']
2,601
Official! The Undoing~ Series: 1 ‘Episode: 5’ —HBO’s
🌺Rights of Author/creator🌺 Copyright is a bundle of rights given to the author by the judiciary. Under Section 14 of Chapter III and Section 57 of Chapter XI of the Indian Copyright Act, the author is conferred certain exclusive and special rights; these rights can be divided into three categories, as follows: 1. Statutory Rights or Negative Rights Copyright law provides an exclusive legal or statutory right to the original author over his creation. It imposes a ‘negative duty’ on others that prohibits them from using or benefiting from the work without the consent of the author. 2. Economic Rights Economic rights enable the author to enjoy the financial benefits of the work. The creator can earn royalties by assigning rights to others, either fully or partially. In line with the international conventions, nearly every national copyright statute provides the following exclusive rights to the copyright holder: ● Adaptation rights ● Distribution rights ● Public performance rights ● Public display of works rights ● Rental rights ● Reproduction rights ● Translation rights 3. Moral Rights Copyright law continues to protect the creator even after the copyrighted work has been assigned to others, either fully or partially. Moral rights grant an author the right to have his name kept on the work forever, and protect the work from any distortion, modification, or other offensive action that would damage the author’s reputation. 🌺Term of Copyright protection🌺 Current copyright law normally does not require any kind of registration for protection. Once the work is created in tangible form, the author automatically gets copyright in his creation. 
The term of copyright protection for different kinds of works is described in Sections 22–29 of Chapter V of the Indian Copyright Act, as follows: 1. The copyright term for published literary, dramatic, musical, and artistic works is the lifetime of the author plus 60 years from the death of the author. In the case of multiple authors, the term is 60 years from the death of the last surviving author. 2. For anonymous and pseudonymous works, the copyright term is 60 years from the date of publication. 3. Copyright protection for photographs, cinematograph films, and sound recordings is 60 years from the date of publication. 🌺Conclusion of the material🌺 Though there are many copyright restrictions and issues, an understanding of copyright law and fair-dealing provisions allows us to use copyrighted content for academic and research purposes in a secure manner. A sufficient understanding of copyright issues needs to be developed before or during the procurement or subscription of any resources. Here, an agreement, contract, or set of terms and conditions between the concerned parties on the procurement of resources plays a major role in protecting the rights of copyright holders. As a facilitator, the librarian regularly needs to educate his users about copyright issues, and this could become one of the important factors in the decline of copyright violations among library users. Under the Indian Copyright Act 1957 it is very clear that neither the publisher nor the facilitator is responsible for any infringement of copyrighted material; the person involved in the act of infringement is solely responsible for his misconduct.
https://medium.com/@teguh-turangga-tea/official-the-undoing-season-1-episode-5-hbo-s-b7109dddb40c
['The Undoing']
2020-11-23 00:41:07.724000+00:00
['Technology', 'Life', 'Startup']
2,602
Corda 4.7 Beta — our winter release
Photo by Hans-Jurgen Mager We are (almost) through 2020, and our team has been working hard on new enhancements to ease node operations and support existing production deployments to scale over time. As we finish off the release, you can get your hands on these new features with the beta. Please contact our support team for access at [email protected]. This quarter our focus has been across three long-term optimizations: Archiving, Transaction Breaks, and Notary Burst Handling. Archiving As production CorDapps grow in size and maturity, nodes accumulate more and more ledger-related data over their lifecycle. Longer-term considerations, such as cost and performance over an increasing dataset, come into focus. In Corda Enterprise 4.7, we are introducing a new tool that offers node operators the ability to identify and remove data from the ledger that is no longer needed — in a safe and reversible way. This tool adds to the suite of enterprise utilities that now support the Corda ledger — from distributed integrity checking and collaborative disaster recovery (introduced earlier this year) to the new archiving functionality. Transaction Breaks Secondly, we are introducing better support for a developer pattern known as “chain snipping.” Today, a CorDapp developer can write custom logic to allow a state to periodically be exited and then reissued onto the ledger in separate transactions. A developer might want to do this for privacy reasons or to optimize performance. While this is already possible today, it relies on the developer’s foresight in anticipating performance issues when chains grow to a certain size, and its implementation varies. In Corda 4.7, we are introducing platform support for creating breaks in transaction chains. State owners can request a break via a flow, with platform guarantees that states aren’t removed without a replacement or duplicate. 
This provides the same privacy and performance benefits as chain snipping, but in a way that addresses known sharp edges and imposes minimal or zero development cost on the CorDapp developer. Notary Burst Handling Finally, we have enhanced the way the notary handles traffic. We built the notary to withstand an extremely heavy amount of traffic and perform its duties in an orderly fashion. It does so via a back-pressure mechanism that manages the queue of requests and ensures that any node retries due to a timeout (typical in periods of high traffic) are a function of the notary’s capacity. This ensures nodes are guaranteed to see their requests through, retrying if necessary, while avoiding unnecessary retries that would artificially grow the notary’s queue and reduce its efficacy. In Corda Enterprise 4.7, we have improved this mechanism to be more accurate and responsive under heavy load conditions, resulting in fewer retries and better end-user performance from the node’s point of view. Corda Gets a Face — New User Interfaces Yes! Our first attempt at simplifying the operations of a Corda node and Corda Network via a User Interface (UI) is here. In this first release, we are delivering an initial preview of UIs for components of the platform, starting with a developer preview that is narrowly focused on simplifying some key operations of a node and a network. In the node management console, we have focused on the management of flows — offering operators the ability to monitor and filter checkpoints and safely act on that information from the UI. We have also looked at node lifecycle and configuration management. We built on the previous releases of Corda Enterprise Network Manager, which introduced an interface to manage user permissioning — expanding support to CSR/CRR lifecycle monitoring, flag-day management (changing a network parameter — e.g., introducing a new notary) and configuration management. 
This is just a preview of what is to come — we are committed to simplifying our products’ user experience and are keen to get these features out to the community to hear your feedback! More in the box As usual, in addition to the major areas of focus above, we have included some improvements you might like: HSM APIs: We are introducing an HSM library with its own API that external tooling developers can use to expand Corda Enterprise Hardware Security Module support. More HSM support for Confidential Identity: We are expanding the key support matrix for our existing HSM vendors; Corda Enterprise now provides HSM support for securely storing Confidential Identity keys and Account keys across Securosys Primus X, AWS CloudHSM, Azure Key Vault, nCipher and Future X HSMs. Corda Enterprise Network Manager supports AAD: We have introduced support for Azure Active Directory as an SSO for the CENM RBAC service. Business Network Extension: We are introducing enhancements to the Business Network extension to allow for access control group reporting, batch onboarding, membership group querying and a way to log and report actions to membership attestations. Look out for the generally available release in mid-December! And as always, let us know what you like and what you would like to see in future Corda releases by emailing us at [email protected]. 
— Gabe Farei is a Lead Product Manager at R3, an enterprise blockchain software firm working with a global ecosystem of more than 350 participants across multiple industries from both the private and public sectors to develop on Corda, its open-source blockchain platform, and Corda Enterprise, a commercial version of Corda for enterprise usage.
https://medium.com/corda/corda-4-7-beta-our-winter-release-6e235019e4d1
['The Team']
2020-12-08 14:56:16.571000+00:00
['Blockchain Technology', 'Blockchain Development', 'Corda', 'Open Source', 'Distributed Ledgers']
2,603
Your Startup Needs a Product Engineer Immediately
Who’s in charge of building your company’s product? Unless you’re flying solo, there are probably multiple answers to this question. But that’s changing. The role of “product” within almost all industries is trending toward less emphasis on product management and product marketing, and more towards using technology and data to determine everything from what we’re building to how we’re selling it. It’s the science of Product Engineering, and your company needs one of these engineers, preferably at the executive level. Probably now. The Evolution of Product Let’s start by taking a quick walk down the path of entrepreneurial history. It used to be that a company was started with an idea and someone to sell it. We can go all the way back to plucky quacks shilling Dr. Kokane’s Good Tyme Health Elixir and things of that nature. When business became Business, let’s say around the middle of the last century, companies were founded and/or led by enterprising folks with a grip on growth — MBAs from Ivy League schools who were maybe already monied and connected. People who knew finance, deals, politics. The technology revolution that exploded from the garages of the 1970s made it mandatory that tech become a part of the product process. And still today, we have the traditional pairing of the business founder/leader and the technology founder/leader. One needs the other, or at least needs to supplement the other side with a well-run team. When we see both business and technology talents baked into the same person, that tends to be rare. An engineer who can sell? A leader who can code? Unheard of! These people are no longer rare, and we need them building our product. The CEO builds the company, the CTO builds the technology. The Chief Product Officer wields the technology to grow the company. From Management to Marketing to Engineering I’m going to use myself as an example of how product science is evolving with this new dual-threat skillset. 
I’m an industrial and systems engineer by education. I was a developer early in my career. As an entrepreneur, I’ve been in the hybrid role I just laid out for the last 20 years, having started, run, and sold several companies along the way, both building the tech and leading or co-leading the org. Up until a few years ago, I never would have considered myself a product person by skillset. I still don’t call myself a product manager. I use the terms product developer or product engineer when describing what I do. That’s because product management is commonly thought of as project management, but managing a thing instead of a service. Product marketing also gets lumped into product management, but on the sell side, in roles like customer success. Even product design, especially on the tech side in UX/UI roles, is considered product management. Here’s the thing. Over the last 20 years, what I do really hasn’t changed, but the emphasis on it and importance of it has, especially in startup. I’m not a project manager, I’m not a marketer, I’m not a designer, but I do all these things when building a product. I was a developer, and I still code, but I don’t code into production. I use code to figure out what to build and how to sell it. And I’m not alone. The Ranks of Product Engineers Are Growing Almost every single up-and-coming product leader I talk to these days is a former software developer or engineer. Most of them have at least an entrepreneurial bent, if not already a founder or an early startup employee. All of them are frustrated, at least a little, with how their role is laid out. On the other side, software developers have always had a tough time maturing. They tend to age out like athletes because there’s always younger folks with newer knowledge coming in cheaper. Also offshoring. Senior developers usually either turn to project management, which they hate, or people management, which they’re either ill-equipped for or… they hate. 
Some will get into sales, some become entrepreneurs. With the exception of entrepreneurship, none of what I just described allows the former software developer to apply the massive creative experience and invaluable real-world business knowledge they’ve acquired over their software development career. But wait! There’s a huge gap in the org just screaming for that kind of knowledge. Product Engineer: The Arbiter of Quality vs Delivery The CEO wants customers at all costs. The CTO wants excellent software at all costs. The CPO settles this dispute with ultimate authority, using that aforementioned knowledge and experience, along with data and a growing number of new tools and methods, as the power vested in them. Day to day, the product engineer is tasked with building a product that grows the company in their target market. This means the product engineer has to know a lot about a lot. The product engineer must use their technical experience and creativity to work with the development team to prioritize what needs to be done against what can be done, when it’ll get done, and how much it’ll cost for what return. To understand that return, the product engineer must use data to discover what customers need and how much value they’ll find in filling those needs. To determine that value, the product engineer should be able to use technology to prototype new features and feature sets, figure out how they fit into the user experience, and understand how they impact the product as a whole. Then the product engineer must use their industry knowledge to work with sales and marketing to redefine the product, making it drop-dead simple for the customer to find the new value immediately. To verify that value and build on it, the product engineer must use technology to create feedback loops to determine what should be built next. They interpret the results against their industry knowledge, take that back to the developers, and the cycle starts all over again. 
Product Engineer: The Industry Expert That industry knowledge is where a lot of the product engineers I talk to find the most joy. In just my last two roles as CPO, I’ve had to become a non-technical expert in auto repair, journalism, publishing, fantasy football, trucking, insurance, cable television, corporate earnings, education, and about a dozen more. Product engineers can’t just know enough about these industries to be able to talk the talk, they have to know enough to disrupt them, to figure out what those industries need before the players themselves do, and fill needs that might not even be needs yet. This all comes down to technology, and knowing not just what’s changed, but what’s changing, much like I try to tackle in these posts themselves, and how that’s going to impact the market they’re serving. The product engineer is the perfect hybrid of technology and application, which brings growth, which is why every company needs one. And the sooner, the better.
https://jproco.medium.com/your-startup-needs-a-product-engineer-immediately-8902f7787c25
['Joe Procopio']
2019-03-05 12:18:25.781000+00:00
['Entrepreneurship', 'Startup', 'Product Management', 'Business', 'Technology']
2,604
How to Build Advanced SQL
How to Build Advanced SQL Building more maintainable, readable and optimized data workflows Photo by Alexandru Acea on Unsplash SQL remains the language for data. Developed back in the 1970s, it’s one of the few technologies that has remained constant regardless of what drag-and-drop tools come around or what new query paradigms try to overtake it. SQL remains the most widely used technology for interacting with data. With the advent of databases that utilize NoSQL (Not Only SQL), layers like Presto and Hive have been developed on top to provide a friendly SQL interface. Not only that, but the use of SQL has expanded far beyond data engineers and analysts. Product managers, analytical partners, and software engineers at large tech companies all use SQL to access data and answer questions quickly. The point is, SQL is worth knowing. But once you know the basics, how do you progress? What takes a SQL user from novice to advanced? Over the past few years, we’ve spent a lot of time writing SQL for data pipelines, dashboards, data products, and other odds and ends. We don’t think advanced SQL is about syntax. There aren’t too many fancy clauses left after you learn about analytic clauses. Sure, you can loop in SQL and even edit files. However, those are all actions that can occur in code. So what separates basic SQL users from advanced SQL users? We believe it’s more about thinking big picture. Advanced SQL developers think long-term vs. short-term. They develop SQL that is maintainable and easy to read, which requires more time and consideration. In this article, we’ll focus on many of the design decisions that we believe separate novice SQL developers from senior and advanced SQL developers. You’ll notice that this goes beyond SQL. A lot of it will get into more conceptual problems, where there aren’t definite answers for the best solutions. The format of the tips will be problem or behavior, followed by solution or improved method. 
In fact, some of the solutions could be considered design preferences. Some of you might even disagree with the tips we give here. Please leave comments if so — we would love to discuss them further. With that, let’s get into learning!
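As a concrete taste of the "analytic clauses" mentioned above, here is a small illustrative window-function query, run against an in-memory SQLite database (the `sales` table and its columns are made up for the example; window functions require SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('east', 100), ('east', 300), ('west', 200), ('west', 50);
""")

# RANK() OVER (...) is an analytic clause: it computes a per-region
# ranking without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()

for region, amount, rnk in rows:
    print(region, amount, rnk)
```

The query keeps every row while attaching an aggregate-like ranking to each, which is the kind of pattern that is awkward to express with plain GROUP BY.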
https://medium.com/better-programming/how-to-build-advanced-sql-798d615ba323
[]
2020-07-10 02:44:21.602000+00:00
['Programming', 'Data Science', 'Sql', 'Technology', 'Big Data']
2,605
Rocket “Assembly Line” Comes to Life in Texas
Rocket “Assembly Line” Comes to Life in Texas An assembly line…for rockets? It might seem like a crazy idea, but it is about to come to fruition in Boca Chica, Texas. It is there that SpaceX is developing a new generation of rocket to replace the groundbreaking Falcon series. This new rocket, called Starship, will have many times the payload capacity of its predecessor, and with 100% reusability, will offer vastly cheaper access to space…perhaps 10 times cheaper. But SpaceX isn’t just building a new rocket; they are also striving to build a whole production system, an assembly line for rockets, that allows for rapid design iteration. Getting here has been difficult, but the “assembly line” looks poised to kick into high gear this winter. It all started over a year ago with the construction of the “Starhopper” prototype rocket, the first Starship test vehicle. This vehicle was essentially a proof of concept, performing three test hops of up to 150 meters in height. The vehicle used thick 12mm steel on its hull, and therefore was incredibly mass-inefficient. Starhopper was followed by the “Mark 1” prototype late last year, a full-size upper-stage test vehicle using thinner and lighter 301 stainless steel. Unfortunately, the welding techniques used on Starhopper didn’t carry over well to this thinner steel, and Mark 1 was lost during pressure testing. Undeterred, SpaceX has been refining both the design of the vehicle and the production process with each successive iteration. SN1, the follow-up vehicle to Mark 1, used even thinner 4mm steel and much-improved welding techniques. Despite this, it too was lost during pressure testing. SN2, however, successfully demonstrated an ability to hold enough pressure for flight. SN3 was lost due to ground service equipment failure, but was the first vehicle to feature landing legs. SN4 was further refined, featuring a methane header tank, and even lit its engines a few times. 
Sadly, SN4 exploded due to ground equipment failure before it could take to the skies. Since then, SpaceX has been more successful. SN5 took off and reached a height of 150 meters before landing softly on an adjacent landing pad. SN6 repeated this test with greater ease. SN7 and SN7.1 tested out new construction methods and materials, all culminating in the latest prototype, SN8. SN8 will be the first full-sized upper-stage test vehicle since the ill-fated Mark 1…and what an improvement it is: significantly stronger and more robust, much lighter, and cheaper and faster to build. SN8 will attempt to fly to about 18km and perform a complex mid-air “flip” maneuver that is crucial to being able to land and reuse a rocket coming back from orbit. From there, Elon Musk’s vision of a Starship rocket factory comes into focus. SN9, which will utilize slightly revised materials and build on lessons learned from SN8, is already under construction. According to Musk’s recent tweets, the design will evolve less after SN9, with a number of Starships being built primarily to refine and improve the production methodology. In other words, beginning with SN9, the focus will be less on the rocket and more on getting the “rocket assembly line” up and running. That assembly line is already starting to churn, with SN9 under construction, SN10 a few weeks behind it, and SN11 a few weeks behind that. Make no mistake, what is being attempted here is unprecedented on multiple levels, and more failures are likely. But the technology behind Starship has come a long way in the past year. The design is maturing, and the production system is maturing alongside it, albeit a few weeks behind. The Starship factory is coming to life.
https://medium.datadriveninvestor.com/rocket-assembly-line-comes-to-life-in-texas-fd442eb38631
['J. Lund']
2020-10-09 15:16:10.742000+00:00
['Business', 'Spacex', 'Space', 'Space Exploration', 'Technology']
2,606
The Tesla bombshell almost nobody is talking about
Last week, Tesla held an event focused on their advances in autopilot and what they call “full self driving”. There, nearly three hours into the event, they made the announcement: not only will they have fully autonomous vehicles ready years ahead of the industry’s best estimates, Tesla expects to have a fleet of one million robotaxis on the road in 2020. This time, there wasn’t the usual whooping and hollering from the Tesla loyalists. Instead, this was an event for investors. It was a more steadied and almost scholarly affair, with dizzyingly deep dives into the technology powering Tesla’s self-driving ambitions. And the announcements—carrying world-changing ramifications, if the numbers are right—were delivered aloud into otherwise pindrop silence. It seemed the collective reaction of those in the room was a furrowed brow. Could Elon be believed? Still, onward he pressed, proclaiming a grand vision as though he’d only just returned from the future to share what he had seen. Maybe the claims seemed too far-fetched, maybe the event was too long, or perhaps it’s because the investors at the event were, for the most part, non-technical. As someone in the tech industry myself, I had a hard time keeping up with the swirling Acronym Soup shared by presenters like Pete Bannon, esteemed former-Apple chip designer. But the overall message resonated: Tesla is ahead, they argued, because Tesla has the data. Whereas nearly every competitor is relying on Lidar, Tesla has placed their self-driving bets almost entirely on computer vision. Choosing to rely on cameras and cheap radar + ultrasonics has allowed them to deploy these sensors on every car they’ve sold for the past several years. Having sensors on every vehicle means they’ve been able to collect data from every mile driven from every Tesla produced in the past several years. That’s a huge number, and it’s increasing rapidly. 
Lidar, meanwhile, is power hungry and expensive, adding anywhere from around $7K to $70K to the cost of the vehicle. The upshot is that the major Lidar-based competitors have several hundred cars on the road each, while Tesla has nearly half a million. And machine learning, which is needed for object recognition in any self-driving system, depends on access to mountains of data. In fact, it thrives on it — there’s a direct correlation between how much data you throw at a neural network and the quality of the results. Because they make their own cars, and because they’ve bet on cheaper sensors, Tesla is now sitting on an unmatched (and possibly unmatchable) pile of data, and that pile grows with each mile driven, with the rate of growth multiplying with each new vehicle sold. In that light, Elon emphatically assures us their self-driving capabilities are improving “exponentially”, which would make the advent of full autonomy arrive much sooner than expected (and, frustratingly, will also make estimating its exact arrival date even more difficult). To demonstrate these advances, they gave investors fully-autonomous rides in standard off-the-shelf Teslas with remarkable capability improvements over the previous generation of software. In other words, given their advances in chip hardware and their substantial lead in real-world data, the final piece of their self-driving puzzle is software. Software which, once ready, can be deployed at the push of a button. Tesla’s Full Self Driving system stopping for stop signs on surface roads. See the full demo video at the end of the article. With the facts and figures out of the way, they delivered the real shocker of the event: robotaxis. As early as next year, your Tesla will be able to drive you home as you read a book, they say, as well as go off to make money for you as an autonomous robotaxi whenever you like. You can make money while you sleep, or earn a second paycheck while at your day job. 
The car can pay for itself, and then some. Now, even taking that with the massive tablet of salt called “regulatory approval” combined with a healthy schedule adjustment to pad for Elon Time™, that’s still a staggeringly audacious proposal. Especially when you consider their back-of-the-napkin math:

- The base self-driving Tesla costs about $38K.
- As a robotaxi, the car will be able to earn around $30K per year. This assumes rides at half the cost of a Lyft or Uber, with half of the miles travelled being empty “dead legs”.
- Tesla cars will be rated for one million miles, including the battery.
- In the lifespan of the car, it can earn approximately $200K of income for the owner.

A $38K car bringing in $200K of income on its own? That’s insane. That’s impossible. And that’s being spearheaded by a team famous for achieving the impossibly insane.
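Taken at face value, the napkin math is easy to check. A quick sketch: the $38K price, $30K/year earnings, and $200K lifetime figures are the article's numbers; the payback period, implied years of service, and return multiple are simple arithmetic derived from them.

```javascript
// The article's back-of-the-napkin robotaxi economics (USD).
const purchasePrice = 38_000;     // base self-driving Tesla
const annualEarnings = 30_000;    // estimated robotaxi income per year
const lifetimeEarnings = 200_000; // estimated income over the car's life

// How long until the car has paid for itself?
const paybackYears = purchasePrice / annualEarnings; // ~1.27 years

// How many years of robotaxi service does the $200K figure imply?
const impliedServiceYears = lifetimeEarnings / annualEarnings; // ~6.7 years

// Return on the purchase price over the car's lifetime.
const returnMultiple = lifetimeEarnings / purchasePrice; // ~5.3x

console.log({ paybackYears, impliedServiceYears, returnMultiple });
```

If those inputs hold, the car earns back its price in under a year and a half and returns over five times its cost — which is exactly why the claim drew furrowed brows.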
https://medium.com/swlh/the-tesla-bombshell-almost-nobody-is-talking-about-robotaxis-930556d9f965
['Hans Van De Bruggen']
2020-07-06 19:43:17.151000+00:00
['Technology', 'Futurology', 'Future', 'Self Driving Cars', 'Tesla']
2,607
The Graph vs Bitquery — Solving Blockchain Data Problems
Blockchains are a “Mirror of Erised”: you will always find your own interests reflected in them. Economists see blockchains as economies. Technologists see blockchains as platforms to build decentralized applications. Entrepreneurs see them as a new way to monetize their products, and law enforcement agencies look for criminal activity on the blockchain. Everyone looks at blockchains in their own way. However, without easy and reliable access to blockchain data, everyone is blind. Blockchain data problem Blockchains emit millions of transactions and events every day. Therefore, to analyze blockchains for useful information, you need to extract, store, and index the data and then provide an efficient way to access it. This creates two main problems: Infrastructure cost — Before developing an application, you need reliable access to blockchain data. For this, you need to invest in infrastructure, which is costly and a barrier for developers and startups. Actionable insights — To derive value from blockchain data, we need to add context. For example: is a blockchain transaction a standard transaction or a DEX trade? Is it a normal DEX trade or an arbitrage? Meaningful blockchain data helps businesses by providing actionable insights to solve real-world problems. This article will look at the similarities and differences between The Graph and Bitquery. The Graph Overview The Graph project is building a caching layer on top of Ethereum and IPFS. Using The Graph, anyone can create a GraphQL schema (Subgraph) and define blockchain data APIs according to their needs. The Graph nodes use that schema to extract and index that data and provide simple GraphQL APIs to access it. 
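To make the GraphQL access pattern concrete, here is a minimal sketch of what querying such an API looks like from JavaScript. GraphQL APIs like these are typically served over HTTP as a POST with a JSON body containing `query` and `variables`. The query shape and endpoint below are illustrative placeholders of my own, not The Graph's or Bitquery's actual schema.

```javascript
// Build the JSON body for a GraphQL HTTP request.
// The query below is a hypothetical example, not a real schema.
function buildGraphqlBody(limit) {
  const query = `
    query RecentBlocks($limit: Int!) {
      blocks(limit: $limit) {
        number
        timestamp
      }
    }`;
  return JSON.stringify({ query, variables: { limit } });
}

// Sending it is a plain POST (the endpoint is a placeholder):
// fetch('https://example.com/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildGraphqlBody(5),
// }).then(res => res.json()).then(console.log);
```

The appeal of this model is that the same client-side pattern works against any GraphQL endpoint; only the schema you query changes.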
Problem Addressed by The Graph Developers building decentralized applications (Dapps) have to depend on centralized servers to process and index their smart contract data for multiple reasons, such as creating APIs for third-party services or providing more data to their Dapp users to enhance UX. However, this creates the risk of a single point of failure for Dapps. The Graph project addresses this problem by creating a decentralized network for accessing indexed smart contract data, removing the need for centralized servers. Bitquery Overview Bitquery is building a blockchain data engine, which provides simple access to data across multiple blockchains. Using Bitquery’s GraphQL APIs, you can access any type of blockchain data for more than 30 blockchains. Problem Addressed by Bitquery Developers, analysts, and businesses all need blockchain data for various reasons, such as analyzing the network, building applications, investigating crimes, etc. Bitquery provides unified APIs for accessing data across multiple blockchains to fulfill any blockchain data need in sectors such as compliance, gaming, analytics, DEX trading, etc. Our unified schema allows developers to quickly scale to multiple blockchains and pull data from multiple chains in a single API. Common Things GraphQL Both The Graph and Bitquery use GraphQL extensively and provide GraphQL APIs that give end-users the freedom to query blockchain data flexibly. When it comes to blockchain data, read here why GraphQL is better than REST APIs. Removing Infrastructure Cost Both projects remove infrastructure costs for end-users and provide them with a model where they pay only for what they use. The Graph Architecture The Graph embraces decentralization through an army of indexers and curators. Indexers run Graph nodes and store and index Subgraph data, and curators help verify data integrity and signal useful new subgraphs. 
The Graph aims to become a decentralized caching layer to enable fast, secure, and verifiable access to Ethereum and IPFS data. Bitquery Architecture Bitquery prioritizes performance and developer experience over decentralization. Our centralized servers process more than 200 terabytes of data from more than 30 blockchains. We focus on building tools that let individuals and businesses explore, analyze, and consume blockchain data easily. Differences between The Graph and Bitquery There are considerable differences between The Graph and Bitquery. Let’s see some of the significant ones. Blockchain Support The Graph only supports Ethereum and IPFS. Bitquery, however, supports more than 20 blockchains and allows you to query any of them using GraphQL APIs. API Support The Graph allows you to create your own GraphQL schema (Subgraph) and deploy it on Graph nodes. Creating your own schema enables developers to access any blockchain data as APIs. Bitquery follows a unified schema model, meaning it has a similar GraphQL schema for all the blockchains it supports. Currently, Bitquery extends this schema to enable broader support for blockchain data APIs. We are also building FlexiGraph, a tool that will allow anyone to extend our schema to enable more complex blockchain data queries. Ease of Use With Bitquery, you only need to learn GraphQL and use our schema to query the blockchain. With The Graph, however, you also need to understand coding, because you need to deploy your own schema if the data you are looking for is not available through a community schema. Decentralization The Graph is a decentralized network of Graph nodes that index and curate Ethereum data. We think The Graph’s mission to decentralize blockchain data access is a noble goal, and we appreciate it. Bitquery, however, focuses on building APIs to enable the fastest, most scalable multi-blockchain data access, coupled with useful query tooling. Performance Bitquery’s technology stack is optimized for performance and reliability. 
In addition, our centralized architecture helps us optimize latency, response rates, and other performance metrics. The Graph’s decentralization approach makes it a robust network for data access. However, The Graph is still working to achieve consistent performance delivery. Open Source The Graph is a fully open-source project. Developers can verify the codebase, fork it, or integrate it according to their needs. We at Bitquery also embrace open-source development and make our tools open source as much as we can. For example, our Explorer’s front end is entirely open-source, but our backend is closed source. However, we regularly revisit our technology to see if there is an opportunity to open-source any module. Data Verifiability Almost all the data on blockchains is financial data; therefore, data verifiability is very important. The Graph network has curators, who are responsible for verifying data accuracy. At Bitquery, we have built automated systems to check data accuracy for our APIs. Pricing The Graph project created the GRT token, which will drive pricing on its network. However, the GRT token is not yet available to the public. Bitquery is also at the open beta stage; therefore, its pricing is not yet public either. Nevertheless, both Bitquery and The Graph are used by many projects in production, and currently both provide their APIs for free. Conclusion Blockchain data is filled with rich information, waiting for analysts to find it. We embrace The Graph project’s aim to decentralize Ethereum and IPFS data access for application builders. However, we at Bitquery have chosen a different path: unlocking the true potential of highly reliable multi-blockchain data for individuals and businesses. We believe The Graph and Bitquery complement each other and address different needs in the blockchain data market, with some apparent intersections. 
We aim to build a suite of products that let individuals and businesses easily explore, analyze, and consume blockchain data, while The Graph aims to build a decentralized network enabling reliable access to Ethereum and IPFS data. Let us know what similarities and differences you see between The Graph and Bitquery in the comment section. You might also be interested in: About Bitquery Bitquery is a set of software tools that parse, index, access, search, and use information across blockchain networks in a unified way. Our products are: If you have any questions about our products, ask them on our Telegram channel or email us at [email protected]. Also, subscribe to our newsletter below; we will keep you updated with the latest in the cryptocurrency world.
https://medium.com/coinmonks/the-graph-vs-bitquery-solving-blockchain-data-problems-331eb69013b7
['Gaurav Agrawal']
2020-11-06 16:52:34.898000+00:00
['Blockchain', 'Ethereum', 'Data', 'Blockchain Api', 'Blockchain Technology']
2,608
Avancargo, LATAM B2B trucking platform
Avancargo has raised $1M in total. We talked with Diego Bertezzolo, its CEO. How would you describe Avancargo in a single tweet? Avancargo is an on-demand B2B trucking platform, connecting FTL carriers and shippers in LATAM. How did it all start and why? It all started back in 2017, when the three founding partners were doing their MBA in Buenos Aires. We were all connected to logistics (my personal experience was managing sales and marketing for Volvo CE in Argentina and Uruguay) and found that there was a big opportunity to digitize and improve service in the sector. We quickly arrived at an MVP, and within a couple of months we found our first angel investors. Leveraging opportunities and supply in the agri sector was our first goal; Argentina alone moves over 4 million trips per year. What have you achieved so far? So far we can sum up our achievements in the following lines:

- Over 8,000 companies onboard, with more than 30,000 heavy trucks (10% of Argentina’s)
- 800 shippers, among which we are regularly operating with Walmart, Cargill, Bunge, Cresud
- 14 people in our team, with IT and operations as the largest areas
- US$1 million investment, with some strategic partners such as Globant, Supervielle Bank, Murchison Group and Organization Roman
- Over 10,000 trips requested in the last 12 months

What do you plan to achieve in the next 2–3 years? Our goal is to settle the Argentinian operation during 2019/Q2 2020 in order to scale the service to Chile, Peru, Colombia and Mexico.
https://medium.com/petacrunch/avancargo-latam-b2b-trucking-platform-51168c3d3707
['Kevin Hart']
2019-10-22 17:02:08.351000+00:00
['Travel', 'B2B', 'Startup', 'Technology', 'Latam']
2,609
Asynchronous vs Synchronous Programming in JavaScript
Asynchronous vs Synchronous Programming in JavaScript JavaScript break Promises & keep Callbacks Introduction JavaScript is executed by a single thread. Because of this, it is advisable to avoid long-running operations in the first place. But what do we do when callbacks are omnipresent? Whenever it comes to I/O operations, like network or file-system access, this constraint can weigh heavily. Fortunately, there are two kinds of callbacks in JavaScript. Want to take a really deep dive into this? Havoc’s Blog [1] did a pretty detailed investigation of it. The basic difference between synchronous and asynchronous callbacks is: synchronous callbacks are executed in the calling method’s context, whereas asynchronous ones are not. A good example is flattening an array of arrays. When this function is executed, the passed-in parameter is reduced to get rid of the nested arrays and produce a single flattened one. If this were asynchronous, the function could not return the result to the assignment of the variable flat. It has to be synchronous to do so, which makes it a good illustration of where synchronous callbacks apply and why they are used at all. Synchronous and asynchronous Callbacks When there are reasons to use a synchronous callback, there are also reasons for an asynchronous one: whenever the program needs external resources and has to wait for them, for example while connections are being established or files are being downloaded. The following example requests a status code and a status message from the famous search engine Google and prints them to the console. The first message printed comes from the last line of this short script: Requesting… followed by 200 and OK. This example mirrors the non-blocking, asynchronous approach to I/O resources that JavaScript promotes. Crucial to the consistency and dependability of an API is its behavior. 
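The flatten example mentioned above reads something like this sketch, in which the callback passed to reduce runs synchronously, inside the call itself, which is exactly why the function can return the result directly (the helper name is mine, not the article's):

```javascript
// A synchronous callback: reduce invokes it immediately, in the
// calling context, so the result is available on return.
function flatten(arrayOfArrays) {
  return arrayOfArrays.reduce(
    (acc, inner) => acc.concat(inner), // runs synchronously per element
    []
  );
}

const flat = flatten([[1, 2], [3], [4, 5]]);
console.log(flat); // [1, 2, 3, 4, 5]
```

If reduce invoked its callback asynchronously instead, `flat` would be assigned before any element had been processed — hence the need for a synchronous callback here.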
It has to be consistent and never changing, the way functional programming is designed: throwing in A as an input will always return B as an output, nothing else. Translating this back to APIs: a function should always use either a synchronous callback XOR (exclusive or) an asynchronous one. There should be no cases where the function sometimes invokes its callback synchronously and other times asynchronously. When you read Havoc’s blog post about this, you will find the line “Choose sync or async, but not both”, along with a well-founded justification: Because sync and async callbacks have different rules, they create different bugs. It’s very typical that the test suite only triggers the callback asynchronously, but then some less-common case in production runs it synchronously and breaks (Or vice versa). Requiring application developers to plan for and test both sync and async cases is just too hard, and it’s simple to solve in the library: If the callback must be deferred in any situation, always defer it. — Havoc’s Blog Process.nextTick versus setImmediate JavaScript offers two functions for server-side applications: process.nextTick and setImmediate. They seem interchangeable at first glance. You can read about both in their documentation: nextTick [2] & setImmediate [3]. For code interpreted on the client side, only setImmediate is available. Both expect a callback as a parameter and will execute it at a later time. Based on that definition, it seems both do the same thing. Look at the following code. Both appear to do the same. But when you look under the hood, they are clearly different. process.nextTick delays execution until a later point, but before Node.js performs I/O and hands control back to the event loop. Imagine calling process.nextTick recursively. Where does this end? It ends in delay after delay, until the delays accumulate and starve the event loop. 
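The different scheduling of the two can be made visible by queuing both from the same script. A small Node.js sketch — note that immediately after scheduling, neither deferred callback has run yet, and the nextTick callback always fires before the setImmediate one:

```javascript
const order = [];

setImmediate(() => order.push('setImmediate')); // next event-loop iteration
process.nextTick(() => order.push('nextTick')); // before returning to the event loop

order.push('sync'); // the synchronous code always finishes first

setImmediate(() => {
  // By now both deferred callbacks have fired, nextTick before setImmediate.
  console.log(order); // ['sync', 'nextTick', 'setImmediate']
});
```

The nextTick queue is drained after the current synchronous code completes but before the event loop proceeds, which is also why recursive nextTick calls can starve the loop.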
This is known as “event loop starvation” [6]. Node.js under the Hood by dev.to, Lucas Santos (All rights reserved) setImmediate, as the name suggests, executes the callback function immediately. Well, not quite immediately, but on the next round of the event loop. Almost immediately. If you are a true Sherlock, you have solved the riddle here and noticed that recursive calls to process.nextTick not only starve the event loop, they also hold up any setImmediate callbacks used elsewhere in the code. This example illustrates how important it is to know exactly what is going on with the API you use. Use the official documentation [2], [3] to inform yourself about the differences between these two in detail when you plan to work with them. The last bastion: Asynchronicity In most cases, converting a synchronous function into an asynchronous one is possible with the process.nextTick method. The following example shows a mixed method that works both synchronously and asynchronously. Changing the synchronous part to an asynchronous one brings uniformity and consistency to the function, honoring the principle of always getting B as an output when putting A in. Would you like much more detailed information? Read the blog post by Isaac Z. Schlueter [5]. Conclusion We should build our programming upon stability and consistency. Just because there are variables in the code, you don’t have to make the code’s behavior itself variable. A function call has to be reliable, not case-dependent. Make your callbacks either purely synchronous or purely asynchronous to avoid inconsistency, and build your API upon reliability. In case of doubt, convert your synchronous call into a synthetic asynchronous one. Good for us that Node.js knows both: process.nextTick and setImmediate.
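The conversion described above can be sketched like this: a cache-backed lookup that would otherwise call back synchronously on a hit is deferred with process.nextTick, so callers always observe asynchronous behavior. The function and cache here are illustrative examples of mine, not code from the article:

```javascript
const cache = new Map();

// Without the nextTick, this would call back synchronously on a cache
// hit and asynchronously on a miss — the inconsistency the article warns
// against. Deferring the hit makes the contract uniform.
function getValue(key, callback) {
  if (cache.has(key)) {
    // Defer instead of calling back inline.
    process.nextTick(() => callback(null, cache.get(key)));
    return;
  }
  // Simulate a slow lookup (e.g. file or network), inherently async.
  setImmediate(() => {
    const value = `value-for-${key}`;
    cache.set(key, value);
    callback(null, value);
  });
}

getValue('a', (err, value) => console.log('first:', value));
// Even an immediate second call receives its result asynchronously.
getValue('a', (err, value) => console.log('second:', value));
```

Callers can now safely run code after the call, knowing the callback has not fired yet in either the hit or the miss case.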
https://medium.com/javascript-in-plain-english/javascript-break-promises-keep-callbacks-4dbf9cff3d9a
['Arnold Abraham']
2020-12-10 09:10:55.330000+00:00
['Technology', 'JavaScript', 'Programming', 'Web Development', 'Coding']
2,610
For better survival chances, ICOs need to do this one thing
A recent study has revealed the one thing that ICOs need to do if they are to boost their survival chances. The study was jointly conducted by Emmanuel De George, an assistant professor of accounting, and fellow academics from Columbia Business School and the University of Utah. With over 75% of ICO projects dying within the first six months, what De George and his colleagues sought to understand was what made the difference between the surviving 25% and the rest. They analyzed 776 projects and found that, on top of having good ideas that solved real problems, the one other thing that stood out was transparency at the beginning. ICOs that failed to give their investors proper, detailed disclosure at the start had higher chances of failing as time progressed. Part of the necessary information that should be revealed at the outset includes making the tokens’ source code available to show that there is a finite supply of them, and disclosing that tokens held by founders can only be cashed out after a period of time, not immediately after the issue. It is also crucial that ICOs provide informative white papers and seek ratings from top firms like ICObench, ICO Drops, ICORating and ICO Alert, as this helps reassure investors. The high crash rate has alarmed regulators, who have resorted to warning investors of the potential dangers of investing in ICOs. Despite this, ICOs have experienced a high growth rate, and De George said: “ICOs are running at a rate of about 100 a month. We don’t envision that to be slowing down at all.” Adding: “What you are seeing is a lot of entrepreneurs with a lot of ideas. This is by far the cheapest way of raising capital these days because you don’t have the regulation that comes with other forms of capital raising.” Recently, Autonomous Research LLP released data indicating that ICOs have already attracted $12 billion in 2018. 
Do you agree that being transparent is the one thing ICOs need to do if they are to boost their chances of survival? Share your thoughts in the comment section below.
https://medium.com/refcrypto/for-better-survival-chances-icos-need-to-do-this-one-thing-2c751ee60912
['Ref Crypto']
2018-08-14 03:53:20.555000+00:00
['Blockchain Startup', 'Coins', 'Blockchain Technology', 'Blockchain', 'Blockchain Development']
2,611
Why Flutter is the Future Trend in Mobile App Development?
Make Your Business Successful with a Flutter Mobile App Why Flutter is the Future Trend in Mobile App Development? In this blog, you can get an overview of Flutter and why Flutter is especially efficient for startups. Among startups, there is confusion about which cross-platform mobile app development framework will be more efficient for rapid growth in a competitive market. Many startups fail by choosing the wrong mobile application platform. The quick answer is Flutter. With the right choice of technology, startups can survive in a competitive world for a long time with greater efficiency. So in this blog, we will discuss the reasons why Flutter is the right choice for cross-platform mobile application development. Brief Introduction to Flutter: I know that technology introductions can be boring, but believe me, Flutter is easy to understand and genuinely interesting. Flutter uses a single codebase for both Android and iOS applications. It is a free and open-source cross-platform framework with high performance. It was launched in 2018 by Google, so it is well supported and trustworthy, and it enables faster app development. Flutter’s hot reload feature saves time and lets you see codebase changes instantly. Developers can build apps without compromising performance, and Flutter apps are highly customizable and attractive. Key points of the blog: Why Flutter is the Best Platform? Why Flutter is the Development Trend What is the Scope of Flutter? Amazing Apps using the Flutter Framework Conclusion Why Flutter is the Best Platform? React Native, Angular JS and Xamarin are other mobile frameworks available besides Flutter. So when the decision comes, many developers and owners wonder why Flutter is the best platform for mobile app development. Refer to the image below for a clear comparison: Flutter is developed and supported by Google, so its long-term maintenance prospects are better than those of the other frameworks. 
Look at some benefits of building your mobile app in Flutter. Cost-effective: It is cost-effective, so for startups it is the best option for mobile app development. Fewer developers: There is no need to hire separate developers for Android and iOS, because Flutter requires fewer developers. With one small team of Flutter developers, you can build a cross-platform mobile app quickly. Faster code development: With Flutter, you can develop your app faster than with other frameworks. It increases developers’ efficiency and saves your business time. Go beyond mobile: Flutter has the potential to go beyond mobile, which opens up more growth for your business. Before choosing any technology, it is necessary to research the pros and cons of each framework. Thus, after learning these benefits of Flutter, you can decide whether a Flutter mobile app is the more efficient choice for your startup’s next mobile app. Why Flutter is the Development Trend: Let’s quickly look at some reasons why Flutter is trending in mobile app development. By editing one codebase for both iOS and Android apps, we can easily adjust the UI. You spend less time in app development and can make instant changes without losing the present application state. Flutter performance is close to native app performance. What is the Scope of Flutter? The future of Flutter looks as long as Google’s. Flutter has overtaken React Native in the market. Let’s have a look at the scope of Flutter in mobile app development. Flutter uses fewer resources than the alternatives: with less money and investment and fewer developers, you can build the app. Flutter is easy to learn and increasingly popular among developers and in the market. Flutter enables excellent pixel-perfect design. Flutter is built on Dart, which lets developers read, replace, remove, and change operations in an easier way. 
Constantly updated Dart libraries and high code quality in Flutter make for more precise, accurate, and less bulky apps. Amazing Apps using the Flutter Framework: Google Ads users can view their campaigns on a smartphone; the app provides campaign details, alert notifications and suggestions, and also allows calling a Google expert. You can add, edit and remove the keywords of a particular campaign and more. So this Flutter app helps you manage all the activity of your account from anywhere, without a desktop. Alibaba is the world’s largest e-commerce company, connecting dealers around the world. The Alibaba app is a wholesale-marketplace app for global trade that lets users buy products from suppliers across the world on mobile. Birch Finance is a credit card rewards app that allows users to manage their existing cards. It provides various ways to earn and redeem rewards. Coach Yourself is a meditation app for the German-language market that helps users with personal development. Another app on the list provides daily news, videos and lotteries, covering New York, Chicago, London, and more tour locations. Watermaniac is a healthcare app that lets users track the amount of water they drink. Users can set reminders and alerts about drinking water; it is a highly customizable app that helps users set and achieve a daily water goal. Conclusion:
https://medium.com/devtechtoday/why-is-fluter-the-future-trend-in-mobile-app-development-26596c84296b
['Binal Prajapati']
2020-03-19 12:26:46.629000+00:00
['Mobile App Development', 'Technology', 'Startup', 'Business', 'Flutter']
2,612
How to keep the plates spinning in software development
How to keep the plates spinning in software development Using deliberate action and implied intent to uplift your team Ray Massey/GettyImages Imagine you have seven plates spinning on top of sticks in front of you. Each is precious and it’s up to you to keep them from toppling over. The last thing you want is a floor full of broken plates (unless you’re at a Greek wedding, in which case smashed plates might be the goal). Sometimes this is what software development can feel like. As part of the comms pod (one of the engineering pods within Xero’s platform engineering team), we own a diverse collection of products. Each has a critical role to play in the function of Xero as a whole. In this analogy, each is a spinning plate. Keeping the plates balanced should be easy, right? Just keep them spinning. But it’s not always that simple. The plates all behave differently; they are handmade and have their own quirks and imperfections. And to ensure they keep spinning, we have to keep switching our attention from one to the other, and taking the right action to ensure disaster doesn’t strike. The more we have to switch our attention, the greater our cognitive load. Cognitive load is how much stuff we have bouncing around in our short term ‘doing’ memory. Too much of it isn’t good, it leads to stress and mistakes — in this case, a higher chance that we’ll be distracted for a moment and the plates will come crashing down. Reducing our cognitive load A year ago, the cognitive load in our team was really high. This resulted in a few mistakes that led to some inevitable breakages. Around this time, the opportunity arose for me to move from my role as Senior Engineer in the pod, to taking on the responsibility of People Lead. I saw an opportunity to reduce our cognitive load, mitigate the risk of operational failures and empower our team to do great things, so I took on the role. Luckily, a couple of years ago I had completed the M@X 101 (Managing at Xero) course. 
Our facilitator referenced one of her favourite leadership books, Turn the Ship Around by former Navy captain, David Marquet, who details his experiences operating a nuclear submarine. I made a note of it and when I made the decision to pursue a leadership path, read the book. Its concepts really resonated with me and I could see how they could be applied to improve the way we develop software. Marquet was trained to give orders in the traditional leadership model, which he refers to as ‘know all, tell all’. It’s where the leader knows the answers, so gives the orders. This resulted in the crew doing things because they were told to, rather than using their own instincts and skills — and everyone suffered as a result. But what Marquet did next changed everything. He resisted telling the crew what to do, and instead allowed them to tell him how they would solve the problem. He also began injecting what he refers to as ‘deliberate action’ into everything the team did. It meant a crew member would pause and vocalise their intended action (what Marquet calls ‘signalling intent’), to try and remove the automatic mistakes that would occur from acting on muscle memory and unconscious competence. It’s like when you change gears in a car without having to think about it. You’re aware you’re doing something, but not really thinking the process through. I observed this ‘automatic’ behaviour when we were deploying software. Our development pipeline was smooth and fast and we were used to releasing code to production with no problems. It became part of the pod’s unconscious competence. But it also meant sometimes team members would take action without communicating what they were doing to the team, or really thinking it through. And that was a common culprit for the errors that were happening. How we introduced signalling intent The first step was to introduce the concept to my team. 
I wanted to ensure I was doing it in the right way, at the right time, so I thought back to what I had learned in another excellent leadership book, Patrick Lencioni’s The Five Dysfunctions of a Team. It describes five common team dysfunctions and uses a pyramid to show the levels, with trust at the foundation of the pyramid. So I asked myself: are my people at a stage where they are open to changing the way they work? And do they trust me and each other? Having regular one-on-one sessions in place with each of my people had equipped me with enough knowledge to be confident they would be open to trying something new. We had also built up mutual trust within the team, which I hoped would bring with it an openness and willingness to improve the way we communicate with each other. I raised my observations with the team for discussion and explained how, in my view, some recent situations could have had better outcomes if we had stopped to think and communicated our intended actions to the team. I posed the question: how could the outcomes have been different if we had shared our intentions with the team before taking action? Better communication and collaboration The team was quick to grasp the concept and receptive to giving it a try. Since then, the pod has been practising signalling intent as a behaviour. Not in such a strict fashion as you may expect on a submarine. But whenever the team does a deployment or makes a change, they’ll let everyone know. In practical terms, this is usually communicated via the pod’s Slack channel. It may seem like common sense, but it is very easy to fall into a pattern of unconscious action unless you actively take the time to think about what it is you are about to do. In addition to helping us slow down and think about what we are doing, it has improved the way we communicate with each other. As team members began sharing their intentions, it gave others a chance to provide feedback to help mitigate any risks.
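One way to keep these intent announcements consistent is a small helper that formats them before they go to the channel. The sketch below is purely illustrative (the message format and function name are assumptions, not Xero's actual tooling):

```python
from datetime import datetime, timezone

def format_intent_message(engineer: str, action: str, service: str,
                          risk_note: str = "") -> str:
    """Build a 'signalling intent' announcement for the pod's channel.

    The engineer states what they are ABOUT to do, so teammates can flag
    risks before the action is taken, rather than after.
    """
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    message = f":loudspeaker: {engineer} intends to {action} on *{service}* ({timestamp})."
    if risk_note:
        message += f" Known risk: {risk_note}"
    return message

# Example: announce a production deployment before starting it.
print(format_intent_message("Mike", "deploy build 1.4.2 to production",
                            "comms-api", "config change to retry policy"))
```

Posting the returned string to the team channel is then a single HTTP POST to, say, a Slack incoming webhook.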
This approach has also encouraged the team to think more about how they’re solving problems, rather than taking an action because they were told to. If problems do occur, the pod dives in to resolve the issue before it can become a wider problem, a level of responsiveness that previously happened only from time to time. This means breakages happen less often, which in turn helps reduce the cognitive load of the team, because we’re not dropping what we’re doing to focus on issues as they arise. Essentially, we have fewer plates to keep spinning. Do we manage to always keep all of our plates spinning? Even with the best systems in place, accidents can happen. There are areas that we need to keep improving, and we are human after all, with our own quirks and imperfections like the plates we’re spinning. But now when something does go wrong, we’re all clued into what’s going on. This is especially helpful for people on call, so if they are brought in to help with an issue, they already know the context. It has also helped us enjoy a much more collaborative environment. Our team engagement scores are up and everyone has a greater appreciation of the contributions of individual members towards the efforts of the team as a whole. Useful tips to get started For anyone interested in applying this concept within their own team, here are some tips to consider:
https://medium.com/humans-of-xero/how-to-keep-the-plates-spinning-in-software-development-b3df0fb222fc
['Mike Reid']
2020-09-13 21:37:22.737000+00:00
['Teamwork', 'Communication', 'Technology', 'Leadership', 'Software Development']
2,613
THE TRANSFORMATION OF FINANCE THROUGH OPEN BANKING
More convenience for customers, digital disruption for the banks’ business model: How a new standard is fundamentally changing the rules of the payment game. “Everything starts with trust” („Vertrauen ist der Anfang von allem“) — Deutsche Bank used this well-known slogan to woo customers in the 1990s. There’s no question about it: trust is one of the most important assets a bank can have. After all, we entrust our bank with a great deal of sensitive data — whether it’s incoming and outgoing payments to our checking account or in the context of loan due diligence. This makes the new trend of “open banking”, which is currently gaining momentum in the financial sector, all the more interesting. Roughly speaking, this involves third-party providers gaining access to a bank customer’s account data. Bank customers should thus be able to decide for themselves whether to release their accounts to other providers and thus use services from different providers in parallel. The recently introduced EU Payment Services Directive 2 (PSD2) even explicitly requires banks to set up such interfaces or “system accesses” for third-party providers. In this way, legislators want to strengthen competition between banks and new financial service providers. After all, in the previous “closed” system, banks had exclusive access to customer data, resulting in a barrier to market entry for new providers. In this blog post, I discuss the impact of the new standard on (digital) financial management as well as the financial industry as a whole — and finally provide insight on how traditional banks should leverage open banking and digital marketing for themselves to continue to be present in the market with a new business model. The new digital autonomy for managing finance So what benefit do bank customers gain when they open up their account, and thus their personal data, to a third-party provider?
First of all, this means having greater freedom of choice between different financial service providers with less effort. For example, a securities account could be released for an external investment advisor (or an AI-supported “robo-advisor”), which would carry out the desired transactions. Or a loan could be applied for from an institution with which no business relationship has existed to date. The credit provider would then have immediate access to the relevant financial data needed to assess the probability of default. The idea of open banking is similar to that of the electronic health record (EHR) planned in Germany. All essential information and processes relating to personal healthcare (doctor’s visits, medication supply, etc.) are stored in a centralized data wallet. The patients decide which specific information they want to make available to which institution, for example to different specialists or clinics. In open banking, it is likewise possible to bring financial data together centrally and compile a completely individual portfolio of financial services from different providers. The effort to administer all of the accounts and services is no greater than if all of them were obtained from just one institution. The traditional “principal bank” gives way to the digital platform economy This has profound implications for the traditional business model of banks. Until now, both private individuals and companies typically entrusted themselves to a “principal bank” that bundled various financial services — such as checking accounts, lending, or investment advice. This had the practical advantage that the bank had access to all data and no administrative steps had to be taken when adding another service. The bank has a deep insight into its customer’s financial situation and can therefore provide highly individualized advice. Open Banking now has the potential to disrupt this model of a “full service” bank.
Data sovereignty now lies with customers themselves, and they are given the freedom to choose the best provider for a desired financial service in each case without any additional effort. As a result, competitive pressure on banks is increasing sharply. According to Porter’s well-known “Five Forces” model, it is mainly the threat of new providers and the bargaining power of buyers (i.e. bank customers) that would bring more competition and jeopardize previous profits. Moreover, a strategic problem arises from the fact that a “full service” and a “boutique” approach are usually mutually exclusive. That is, specialized providers usually have a better offering in their respective disciplines than generalists. However, generalists benefit by controlling market entry as well as having the ability to offer a wide range from a single source. At the same time, they reduce the effort and expense for the customer in purchasing. New strategies, old strengths: How established credit institutions should deal with the potential disruption from Open Banking So how should banks respond? In all likelihood, sticking to what they have done so far is out of the question. After all, the old business model of the “principal bank” was based on the exclusivity of data access in a closed system. In principle, there are now two ways in which banks can respond to the trend toward digital platforms: Strategy 1: Establish their own platform — Here, the bank acts as the operator of a marketplace for financial services on which competitors can also place their offers. An example of this is Deutsche Bank’s “ZinsMarkt”, where fixed-term deposit offers from various banks can be compared and purchased. Strategy 2: Participate as a provider on a platform — Banks use marketplaces and platforms as sales channels for special services to acquire new customers. This gives them access to the customer base of other banks via open banking.
Depending on the individual market situation, a bank will opt for strategy 1, strategy 2, or a combination of both strategies. The tendency will be for large banks to offer platforms and small banks to place their specialized offerings on these platforms. Regardless of the specific design of the strategy, digital marketing is the be-all and end-all for success. Marketplaces and platforms through which financial services are brokered will compete with each other. Here, it is primarily the user experience (UX) that determines how frequently the customer uses the respective platform. If financial service providers participate as providers in a platform, they will have to build up competencies similar to those that successful online retailers already have today. These include, above all, dynamic pricing, optimizing the findability of their product on the various platforms, and working with affiliate networks to pick up as many customers as possible online. At the end of the day, however, traditional banks still have a decisive advantage: namely, their established brand and the trust of previous customers gathered in it. The saying “everything starts with trust” still holds, especially today. Maintaining and developing brands and the trust placed in them is the most important step for banks to continue to be competitive. However, banks also face new challenges in the context of digital brand management: they must make their brand a tangible experience in digital channels and platforms as well. The requirement of traditional brand management to keep the brand image consistent at all times is giving way here — in favor of a dynamic presentation of the brand that is precisely tailored to the respective target group or even individual.
https://medium.com/@felix-schoenherr/the-transformation-of-finance-through-open-banking-ac228e502b71
['Felix Schönherr']
2021-04-09 07:42:27.178000+00:00
['Digital Marketing', 'Financial Services', 'Banking Technology', 'Open Banking', 'Banking']
2,614
A Summer as a Data Scientist
What did I learn from these projects? I learned a range of technical skills, but the non-technical skills are a little more digestible to discuss. Photo by Branko Stancevic on Unsplash Beyond the ubiquitous skills that everyone working from home had to develop, I picked up a number of skills that are crucial for a data scientist, skills that you just don’t learn in a traditional classroom setting. I was thoroughly impressed with the level of respect and independence I was offered despite my limited experience and age. The program respected my ability to learn, problem-solve, and adapt, whilst always ensuring I had a helping hand should I need it. This environment was instrumental in helping me learn and grow as a data scientist. It is also one of the key elements of a good internship that many other programs miss. My peers who have interned, or are currently interning, with other companies have often felt claustrophobic and patronized, not offered the level of independence they need to learn and grow within the program. GSI’s internship really breaks the status quo in this way. I was given the responsibility of a full-fledged data scientist, to the degree that I was even making decisions that shaped the development and outcome of the project. This encouraged me to learn and grow, but was paced in such a way that I didn’t have to over-exert myself. Since GSI was able to break the mold encountered in a more conventional internship, I was able to develop and utilize a range of skills vital to the data scientist role. In some cases I was left to solve a variety of problems beyond the scope of my education, with the option of a safety net should I struggle too much. A few of the non-technical skills I picked up included: Google is your friend: In many cases, I was working on problems and projects out of the scope of my previous experience. This meant I was encountering a slew of issues that I didn’t know how to solve.
Since I was encouraged to solve these issues independently, I had to approach these problems in a resourceful and creative way. This is a valuable skill to hone when working in any job or any environment. Communication is key: One of the key skills a data scientist must have is the ability to communicate. In my case, I had to learn how to quickly and concisely communicate issues whilst also conveying key details to a variety of colleagues. I was encouraged to write these medium articles as a way to refine this skill. Keep things straightforward: Some problems I encountered could be solved in a variety of ways. The challenge was picking the solution that was both simple to explain and effective. In many cases, I or others would be returning to the code that I had written weeks or months later. The portions of the project that included simple and intuitive code helped us solve problems faster and more effectively.
Simplify the problem: On occasion, parts of various projects would break. Going back to troubleshoot these problems was always difficult as there were many moving parts and things that could go wrong. One strategy George encouraged me to use was to approach these problems one step at a time rather than try to tackle the whole suite of issues at once. Whilst this sounds intuitive, there were many times when I had to catch myself and focus on incremental improvements rather than fixing everything at once. This strategy ensures that you aren’t overwhelmed when solving a problem and often leads to more elegant and robust solutions as opposed to a jury-rigged solution. Enjoy what you do: During the summer this was my full-time job, and my first full-time salaried job. I learned that enjoying your work goes a very long way towards improving the quality of your experience, your productivity, and overall well being. It is hard to find out what you like to do, especially in such a multifaceted field like data science, but it is important to do so because it will be a significant portion of your experience. As I have clearly expressed, I have thoroughly enjoyed my time working at GSI and working as a data scientist.
I wanted to write this blog to reflect on just how lucky I am to have had this amazing opportunity and to summarize the things that I have learned and accomplished over the past three months. I think it is important to retrospectively dwell on experiences such as this as it helps solidify the skills I have picked up and the experiences I’ve had. Using this adventure as a guideline, I feel much more confident about entering the professional world, following my graduation this upcoming June. I have a better understanding of the topics I want to further explore this academic year and the areas of data science that I am going to be interested in pursuing further. Whilst I am sure every university student aims to have a summer internship, I would encourage any college-level readers to strive that extra bit harder to attain one as there are a plethora of skills and experiences that you just can't find within the classroom. If you are interested in reading more of my work or learning about GSI Technology, I’ve included some links below to check out. Furthermore, feel free to follow me if you enjoy my writing as I will be diving into more data science topics over the next few months.
https://medium.com/gsi-technology/a-summer-as-a-data-scientist-5f57902ea8b6
['Braden Riggs']
2020-09-25 23:34:40.451000+00:00
['Programming', 'Python', 'Technology', 'Data Science', 'Internship Experience']
2,615
Spacex’s Starship Rocket Set for Suicide Mission
SpaceX’s Starship Rocket Set for Suicide Mission Why SpaceX’s Next Rocket Test Will (Probably) End in Flames Next week, SpaceX’s latest Starship prototype is going to attempt a test flight that CEO Elon Musk says has only a one in three chance of survival. In all probability, Starship will end up in a fiery grave. While this launch test is crucial to the development of the Starship rocket, its “success” or “failure” is not. Here is what can go wrong and why it may not matter anyway. What is the Starship Rocket? The Starship rocket is SpaceX’s next generation launch vehicle intended to replace the Falcon 9/Falcon Heavy. Starship will be the largest rocket ever built, even larger than the Saturn V rocket that took humans to the Moon. Unlike the Saturn V, however, Starship is intended to be fully reusable, with both the first and second stages returning to Earth. What is this Test About? The Starship prototype that will be flying next week is the 8th iteration, known as “SN8” or Serial Number 8. SN8 is the upper stage of the full Starship rocket. The upper stage will return from orbit much like the Space Shuttle did, reentering the atmosphere on its side, where the surface area is greatest, with a large heat shield taking the brunt of reentry heat. Unlike the Space Shuttle, however, Starship does not have proper wings or landing gear. Instead, Starship will land vertically using the same engines that propelled it into the heavens. In order to land vertically, however, the craft must reorient itself from falling on its side to falling bottom first… and this will be no easy feat. What Could Go Wrong? To do this, the craft has two pairs of “flaps” that provide stability during reentry. As the rocket approaches the ground, the rear flaps will need to lift upward to reduce drag, helping ease the bottom of the rocket beneath the front. If these flaps are damaged during reentry or otherwise fail to move, the rocket will be doomed.
The rocket will also need to successfully switch from using its (near empty) main fuel tanks to much smaller “header” tanks. These smaller tanks will have less sloshing of fuel that might cause the engines to suck in a bubble of gas as the rocket falls. The switch to the header tanks has to occur as the rocket is falling rapidly at an angle, so one can imagine that unexpected issues could arise. Finally, the three Raptor engines at the base of the craft will need to light simultaneously on cue, something that has proved problematic in prior ground tests. These engines will also have to gimbal (vector their thrust) as much as 15 degrees to fire their exhaust at an angle to complete the rocket reorientation. They will need to do this carefully, such that they can propel the rocket into a vertical orientation but not overshoot such that the rocket flips in the other direction. If/When it Explodes… What’s Next? If SN8 explodes or crashes, as is quite likely to happen, it should not be seen in a negative light. The purpose of this test flight is to identify points of failure in the Starship design and modelling. In a sense, failure is a positive development, because it will help SpaceX engineers refine the design on future iterations. SN9, a slightly improved iteration of the Starship, is nearly complete and can be modified to correct for the issues uncovered by SN8. It may take 2–3 lost Starships before this maneuver is perfected, but once it is, few variables will stand in the way of affordable access to space.
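To get a feel for the reorientation maneuver, it helps to see how much sideways force a 15-degree gimbal actually produces. The sketch below splits thrust into axial and lateral components; the per-engine thrust figure is an illustrative assumption, not a published Raptor specification:

```python
import math

def thrust_components(thrust_newtons: float, gimbal_deg: float):
    """Split engine thrust into axial (along the rocket body) and lateral parts.

    At gimbal angle theta, the axial component is T*cos(theta) and the
    lateral component available to rotate the rocket is T*sin(theta).
    """
    theta = math.radians(gimbal_deg)
    return thrust_newtons * math.cos(theta), thrust_newtons * math.sin(theta)

# Assume roughly 2.2 MN per engine (illustrative) at the full 15-degree gimbal.
axial, lateral = thrust_components(2.2e6, 15.0)
print(f"axial:   {axial / 1e6:.2f} MN")   # ~2.13 MN still pushing along the body
print(f"lateral: {lateral / 1e6:.2f} MN") # ~0.57 MN turning the rocket upright
```

Even at maximum deflection, only about a quarter of the thrust acts sideways, which is why the timing of the flip has to be so precise.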
https://medium.com/datadriveninvestor/spacexs-starship-rocket-set-for-suicide-mission-ba2cdb6eae9e
['J. Lund']
2020-12-03 19:26:31.076000+00:00
['Technology', 'Space Exploration', 'Business', 'Space', 'Spacex']
2,616
How to Turn on Spell Check in Sublime Text
Original photo by Jason Leung on Unsplash; Sublime Text logo via Fair Use Recently, I was writing some HTML using Sublime Text for an article. The tag completion and tab spacing were the features I really wanted (I’ve done it free-typing before, and it’s just way too manual and error-prone), but I was missing out on a key feature and didn’t realize it until later: spell check. As I was working on the HTML, I was also adding and editing some lines on my page here and there, and unfortunately, I made a typo. Now, the backstory as to how I got there isn’t too important, but the experience made me realize I really wanted spell check in Sublime Text. Since I do a lot of web development in general, I want another check beyond my own eyes to make sure any typos don’t make it out to user-facing pages. How to turn spell check on In Sublime Text, go to Preferences > Settings. A new window will open: on the left-hand side are the default Sublime Text settings, all in one JSON file. On the right-hand side, you can override any setting in the default file simply by choosing a different value for the key. Use ctrl+f (Windows) or cmd+f (Mac) to search for spell_check (if you just type as far as spell, it’ll still find the correct key). Now, on the right-hand side, simply set spell_check to true. Now, when you have something misspelled in Sublime Text, there will be a jagged red line underneath the word:
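For reference, the user settings file on the right-hand side only needs the keys you want to override, so a minimal version looks like this:

```json
{
    "spell_check": true
}
```

Sublime Text merges this with the default settings file, so every key you don’t override keeps its default value.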
https://medium.com/dev-genius/how-to-turn-on-spell-check-in-sublime-text-45b2d82cb1a
['Tremaine Eto']
2020-11-24 08:39:56.611000+00:00
['Technology', 'Software', 'Software Development', 'Programming', 'Front End Development']
2,617
Mathematically provable audit trails to protect your data integrity.
The operational book of record is full of holes. What’s the biggest business loss in the COVID crisis? Some would argue that it’s the profits, but let us suggest that it’s something else: the loss of trust and accountability that’s tearing companies apart. Dispersed across home offices and DIY operational stacks, today’s operations are losing transparency and accountability. Even before, the use of enterprise software was extremely fragmented and made it almost impossible for one process to talk to another. To tackle that, Taraxa takes the audit trail functionality to the next level by building a cryptographically secured audit log of all business processes and interactions. It gives your company an independent chain of custody that spans every business unit and automatically captures all critical OpsData you need to prove the integrity of your organizational planning, execution, and assessment, from end to end. Cryptographically secured audit trails for next-level transparency. Database audit trails have proved their effectiveness for monitoring operational activity to prevent the financial and legal consequences of data breaches. The problem with data auditing is that not all audit logs are equally valuable to auditors. Primarily designed for database administrators, database-integrated audit systems fall short when trying to capture employee-generated data, and the same goes for presenting the audit data in the format required by auditors. What’s more, even with advanced dedicated audit trail applications, you can’t be 100% confident in the audit trail’s integrity, i.e. that there was no unauthorized activity and that user logs were not tampered with. Taraxa solves the problem of the authority and credibility of audit logs by using data anchoring and cryptographic proofs that make it possible to mathematically prove the origin and integrity of operations at any given point in time.
Now you can keep close track of all internal operations, and be sure that no data is tweaked, deleted, or tampered with. You get an advanced data auditing system that chronologically retains critical documentation and traces every movement across all business units. Recorded in a clear, tamper-proof audit trail, all inter-department communication, external partner operations, and customer interactions are in full view of project owners. Trusted and audit-ready. All facts and deeds are consistently backed up by the audit log to eliminate the risk of tampering with sensitive documentation and other files. Its validity is assured by a robust DLT ledger, making the system highly trusted by any third party. A SAMPLE CASE STUDY — Courtesy of KLN Family Brands, source: https://diginomica.com/erp-lean-processes-increased-profit-kln-family-brands ‘We are now able to document that materials have been through our control plan so that we can provide auditors with appropriate information whenever required. And purchase orders in our ERP system have their Certificate of Analysis attached, which is vital for our ability to ensure traceability of organic products and verify the chemical composition of material we include in our products or products we purchase from others. This audit traceability is absolutely vital for us as a company to maintain our customers’ trust, and we can now achieve it with less administrative overhead’. Prevent audit trail changes: information is captured automatically at the time of record creation, alteration, or deletion. Immutable storage security: audit trail alterations by any user or administrator are impossible. Accountability for actions performed in a particular schema, table, or row, or affecting specific content. Prevent database users from taking inappropriate actions. A detailed account of each business interaction from start to finish lets you zoom in and investigate an issue or check on the area of accountability.
Now you can always go back and check on the event log to react before negligence becomes a dispute. All processes at your fingertips. A complete record of all customer-facing and internal operations backed by an audit trail provides strong proof to verify the authenticity of documents and contracts for all sorts of evidence requests, commercial arbitrations, and court procedures.
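The tamper-evidence described above rests on a simple idea: each log entry cryptographically commits to the one before it, so editing, deleting, or reordering any record breaks every hash that follows. The sketch below is illustrative only, not Taraxa's actual implementation; in practice the final hash would also be anchored to a public ledger so the proof can be verified independently:

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event whose hash covers both the event and the previous hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})

def verify(log: list) -> bool:
    """Recompute every hash; any edited, removed, or reordered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"user": "alice", "action": "approve_po", "po": 1042})
append_entry(log, {"user": "bob", "action": "amend_po", "po": 1042})
print(verify(log))            # True: chain is intact
log[0]["event"]["po"] = 9999  # tamper with an earlier record
print(verify(log))            # False: tampering is detected
```

Anchoring the latest hash to an external ledger means even an administrator with full write access to the log cannot rewrite history without the mismatch being provable.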
https://medium.com/taraxa-project/mathematically-provable-audit-trails-to-protect-your-data-integrity-73921b3af4e6
['Olya Green']
2020-12-07 15:19:13.166000+00:00
['Erp Software', 'Blockchain Technology', 'Audit Trail', 'Taraxa']
2,618
Can Green Energy Be Economically Viable: Pros and Cons
Can Green Energy Be Economically Viable: Pros and Cons @Chantelle via Twenty20 Electric energy is one of the most important forms of energy that supplies our world, as more than 80% of households depend on electricity. Thanks to scientists and technology, this form of energy is going to power electric vehicles on a regular basis in the future. Even at this moment, we have electric cars and electric SUVs that charge from the grid and run without the fuel that is toxic to the environment. As technology makes progress, our society is slowly gravitating towards renewable and economical energy. Electric vehicles are slowly becoming a standard way of transportation. It is the same with green energy: the demand for green energy is rising, which reduces its cost of installation and maintenance. Thus, the economic potential, which is based on several data metrics (including photovoltaics, wind, geothermal, biomass and hydropower resources), is huge and it is going to increase every year. But can it be viable in the long run? It looks like it can, not only from the perspective of a clean and safe environment but also because it will expand the job market and introduce new kinds of work. Still, things are far from perfect, but the main concept is good and worthy. What Do Economists and Ecologists Say? Rob Jeffrey, a council member of the Fossil Fuel Foundation, noted that renewable sources were the main cause of political backlashes both in the UK and the US. At the same time, these sources may lead to reduced energy availability but also to poverty, as people may lose jobs after the technology changes. Still, this is not something that should be taken for granted. On the other hand, governments like the idea of renewables as these make space for subsidies and fixed rates. Governments impose different taxes on renewable energy programs and thereby collect money.
The reason for this is that renewable sources of energy could not only reduce global warming but also provide stability in the long term, especially if inflation happens. But what happens if we have a wind-powered plant and suddenly there is no wind? Or if your solar installation does not get enough sunlight to charge your solar panels? Looking at it from one side, economists may be right when they say that the whole concept is unstable (because of the problems mentioned), but there is a solution in the form of smart metering or storage batteries that would serve as a backup. @TonyTheTigersSon via Twenty20 With storage, it becomes easier to accumulate solar energy and get more usable power from your solar roof, rather than falling back on conventional equipment to generate electricity. Thanks to this, the cost of these systems is going to fall further due to competition, and the entire structure of the energy market is going to change. Ill-considered government interventions aimed at decarbonization have increased the cost of electricity, and people end up paying a high price for it. However, as new concepts arrive with greater-efficiency solar features and lower start-up costs, green energy is going to become more affordable, especially when large countries like the UK or the US implement these concepts and encourage other countries to apply them. The damage we have done to our environment costs far more than the price our children would pay for renewable-powered systems. What are the Benefits of Renewable Energy From Scientists’ Perspective? The first and obvious benefit from the scientists’ perspective is less pollution, which is harmful to all living beings. Of course, there is some pollution, but it is not as harmful as it is today with the existing technologies.
Once the process of catalysis is better researched and we find agents that do not emit harmful gases or byproducts, pollution can be reduced towards zero, which is the ultimate goal and the main principle of alternative energy sources. Solar prices need not be high, as building solar-powered structures is neither expensive nor hard to maintain. In terms of building and maintenance, solar is therefore much better than conventional systems for producing electricity. Placing a wind system in an area where the wind blows more than 80% of the year is a jackpot, as no additional power supply is needed: as long as the wind blows, the plant can operate and produce electricity. So you invest 100% and get roughly 90% back in useful output, which is more than good math. The second benefit is that we will not run out of solar energy as long as the Sun exists. According to scientists, the Sun will continue to exist for the next 5 billion years, which is, let's be honest, more than enough time to find an entirely new source of renewable energy. The only problem that might arise is poor weather, which can reduce the amount of accumulated or produced energy, but any area with 250+ sunny days per year can produce enough energy to rely on such a system. In addition, you can save on your electricity bill this way, especially if you return surplus energy to the grid. If you generate more energy with your solar equipment than you consume, you might even earn some money on the side!
What are the Benefits in the Long Term?
Since there is no pollution (or it is reduced to a minimum, at least), our environment stays healthy and its lifespan is extended. The concept of green energy is therefore more than profitable in the long run, and not only in terms of ecology and the environment.
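The net-metering point above (surplus solar generation returned to the grid can offset, or even flip, the bill) can be illustrated with toy numbers. Everything in the sketch below, including the consumption, generation, and both tariff rates, is an illustrative assumption, not data from the article; real tariffs vary widely by country and utility.

```python
# Toy net-metering calculation: surplus solar exported to the grid
# can turn a monthly bill into a credit. All figures are assumptions.
consumed_kwh = 500    # household consumption for the month (assumed)
generated_kwh = 620   # solar generation for the month (assumed)
retail_rate = 0.15    # $/kWh paid for grid electricity (assumed)
feedin_rate = 0.10    # $/kWh credited for exported surplus (assumed)

net_kwh = consumed_kwh - generated_kwh
if net_kwh > 0:
    bill = net_kwh * retail_rate   # still buying from the grid
else:
    bill = net_kwh * feedin_rate   # negative: a credit for the surplus

print(round(bill, 2))  # prints -12.0, i.e. a $12 credit this month
```

Note that the export credit is deliberately lower than the retail rate, which is the common case: self-consuming your own generation is usually worth more than exporting it.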
As I mentioned, many countries offer subsidies and collect tax revenue on these programs, which is what every stable country needs. The economics will therefore be healthy, especially once installation costs fall into a normal price range for commercial use, not to mention that the energy will be 100% available, 24/7, around the clock. At the moment, the UK produces around 20% of its energy from renewable sources, and that share could rise to 80% by 2030. All of this makes the system more stable and economically secure. Just as some occupations will disappear as low-cost solar systems spread, new jobs will be introduced, which helps make solar affordable and profitable. It is estimated that by 2030 we will have 77 GW of wind-generated capacity, while 28 GW will come from solar systems. These projections pave the road for energy exports as well as for new companies that will manage them. Translated into jobs, there could be 7.8 million positions in wind and 9.7 million in solar installation and maintenance. Getting a piece of this cake would boost the domestic economy and make a country more advanced.
Pros
Speaking of pros, the first is the technological advancement that will improve existing methods or even replace them completely. This means pollution would no longer be a problem and we would not ruin nature with deadly poisons and gases. Innovations in quantum physics and nanotechnology could lead to a point where car battery replacement is no longer necessary, because the battery effectively never runs empty. Instead of today's lithium-ion batteries, we could have next-generation batteries that work continuously until you disconnect them from the car. The possibilities are endless, as technology will advance so far that the present concept of energy production and use may become unrecognizable.
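The projections quoted above (77 GW of wind and 28 GW of solar by 2030, alongside 7.8 million wind jobs and 9.7 million solar jobs) imply a rough jobs-per-gigawatt ratio that is easy to derive. The sketch below is back-of-the-envelope arithmetic on the article's own figures, not an official employment metric.

```python
# Implied employment intensity from the article's 2030 projections.
wind_gw, solar_gw = 77, 28                      # projected capacity (GW)
wind_jobs, solar_jobs = 7_800_000, 9_700_000    # projected jobs

jobs_per_gw_wind = wind_jobs / wind_gw
jobs_per_gw_solar = solar_jobs / solar_gw

print(round(jobs_per_gw_wind))   # prints 101299 (~100k jobs per GW of wind)
print(round(jobs_per_gw_solar))  # prints 346429 (~350k jobs per GW of solar)
```

The striking difference between the two ratios reflects the article's framing: solar is projected to be far more labour-intensive per installed gigawatt, mostly in installation and maintenance.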
@hannievanbaarle via Twenty20
The current electricity grid is highly expensive to maintain, and one of the strongest characteristics of renewable energy is its low maintenance cost. There are no moving parts, which are always the problematic components, so there is little wear and tear. The only thing you would need to replace is the inverter, which converts the panels' DC output into usable AC electricity and should be replaced after some 5 to 10 years. The cables might eventually need replacement as well, but other than that, there is nothing that could break and stop the system. Remote and rural areas without a power grid suffer a great deal, and renewable energy systems could solve this problem once and for all. Remote areas often have many sunny days, which is perfect for solar systems. Not only could these areas get electricity and everything that comes with it, they could also use the same systems for water filtration, or even to power satellites. It could therefore lead to broader civil advancement and global growth.
Cons
The first con is the huge amount of space needed to build the plants and factories that would use solar or wind energy to produce electricity and offer other sustainable solutions. Some wildlife would suffer as we take over its space, which is the last thing we need. Then there is the disruption of wildlife's balance: once you encroach on wild habitat, you do permanent damage to the wilderness, which can cause some endemic species to disappear. High-efficiency solar panels take roughly 100 square feet to produce 1 kW of energy in ideal conditions, so a solar plant would need considerably more space, and wildlife territory could suffer as a result. The next con is air pollution, which has been a serious global problem for the last several decades.
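The land-use figure above (roughly 100 square feet of panels per 1 kW of capacity, in ideal conditions) can be turned into a quick footprint estimate. The sketch below applies that ratio; the 6 kW example system size is an illustrative assumption, not a figure from the article, and real-world area varies with panel efficiency.

```python
# Rough solar footprint estimate using the article's figure of
# ~100 square feet of panels per 1 kW of capacity (ideal conditions).
SQFT_PER_KW = 100.0  # from the article; varies with panel efficiency

def panel_area_sqft(capacity_kw: float) -> float:
    """Approximate panel area in square feet for a given capacity in kW."""
    return capacity_kw * SQFT_PER_KW

# Example: a hypothetical 6 kW residential system.
print(panel_area_sqft(6.0))  # prints 600.0 (square feet)
```

This also shows why the article worries about utility-scale plants: scaling the same ratio to megawatts or gigawatts translates into very large land areas.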
Some air pollution is inevitable, since manufacturing involves chemical processes that can harm the environment and quality of life. It would be nowhere near what we produce today, but the effects of greenhouse gases would still be evident; just consider the transportation of the equipment, and you realize we create pollution before we even start healing it. In addition, various toxic materials are used in the manufacturing of photovoltaic systems. On the other hand, carbon dioxide emissions will decrease with the use of electric cars. The Tesla battery, for example, could be redesigned around a plan to use renewable electricity to split water and produce hydrogen-rich fuel to run the car. Renewable electricity, drawing on solar and wind power, would help us store this hydrogen-rich fuel in fuel cells. In this way we use both solar and wind to produce the electricity that "charges" our batteries and improves battery capacity. The main problem lies in the water itself: splitting it into usable fuel requires a great deal of energy. This could be solved by adding the noble metals iridium or ruthenium as catalysts, but their high cost rules them out as a commercial solution. The biggest con is therefore the lack of catalysts that are both cost-effective and environment-friendly. Some research offers a glimpse of hope that the ideal catalysts will be produced, but we are still waiting for the concrete evidence and details that could replace the lithium-ion battery.
What Can We Expect in the Future?
@artist3650 via Twenty20
As I said, a perfect implementation of renewable energy systems does not exist at the moment, but advances are made every day in both solar and wind research. Each step is a huge leap towards making the world a more sustainable place, from battery-electric cars to wind-powered plants that filter water.
The major countries of the world invest substantial resources in research into alternative energies, and making green energy available for commercial use is going to take some time. But the leaf battery will become a reality that we embrace with both hands. The future is not only about us, but also about our environment and wildlife. Before battery costs drop, we need more research and more solar experiments. So, at the moment, green energy may not be 100% economically viable, but it will become so in the future as experiments and research reveal new things in the field of solar energy. New jobs will replace the ones we have today, a more economical approach to electricity use will redefine the entire concept of producing and using it, and wildlife should remain unaffected. So yes, we can definitely expect green energy to be economically viable, especially in the long run.
https://medium.com/@romanreitman/can-green-energy-be-economically-viable-pros-and-cons-f4c645ec1
['Roman Reitman']
2020-10-18 13:02:07.161000+00:00
['Sustainability', 'Solar Energy', 'Technology', 'Green Energy']
2,619
QuarkChain Monthly Project Progress Report: January, 2020
Welcome to the 49th QuarkChain Monthly Report, covering January. Starting this month, we are changing the bi-weekly report into a monthly report covering development progress, monthly news, and events, posted at the end of each month. In the future, QuarkChain will strive to do better. Let's see what happened in the past month!
Highlights:
- The QuarkChain Foundation Grants Program is ongoing
- QuarkChain published its annual work summary and video
- Added a Java SDK for QuarkChain JSONRPC together with corresponding tutorials; it will be open-sourced soon
Development Progress
Major Updates
1.1 Added
- Supported VM operations on balance queries for non-default native tokens
- Added a Java SDK for QuarkChain JSONRPC together with corresponding tutorials; it will be open-sourced soon
- Added a nightly CI check for goquarkchain
- Started the foundation grant program for a UI implementation for multi-native-token management
1.2 Updated
- Improved multi-cloud platform support for QuarkChain infrastructure
- Improved the JSONRPC stress-testing framework in Go to identify performance bottlenecks
Developer Events
QuarkChain Foundation Grants Program
At QuarkChain, we believe in the power of our community and regard it as an essential factor in helping the network grow and thrive. We are therefore looking for engineering and product help in several areas. In this very first round, we will explicitly list certain tasks and projects, and developers and engineers who finish tasks will receive substantial rewards. For more details: https://community.quarkchain.io/t/quarkchain-foundation-grants-programs/103
Articles Summary
3.1 QuarkChain Annual Video: From 2019 to 2020
Moving forward from 2019 to 2020, the theme of this year's annual video is QuarkChain's globalized, 24/7 working model. In 2020, we will continue to work hard and implement more blockchain technology.
Click for video: https://www.youtube.com/watch?v=h0FdY8ZntgY&t=6s
3.2 QuarkChain Annual Work Summary
2020 is a year of challenges and opportunities. We summarized our work from last year. In the new year, we will continue to work hard to build a flourishing QuarkChain ecosystem. For the full summary: https://medium.com/@quarkchainio/quarkchain-2019-achievement-summary-7dc232c4c544
3.3 The Next Generation of Open Sourced Financial Infrastructure Driven by API Bank, Open Bank, and DeBank
Ting Du, CBO of QuarkChain, introduced what API Bank, Open Bank, and DeBank are. He envisioned a combination of financial and data technology: by creating a new decentralized financial market, society would enjoy tremendous benefits, similar to those it enjoyed from the birth of the Internet.
3.4 Face-to-face Interview Issue 2
Quarker is an exclusive face-to-face interview programme produced by QuarkChain. The current issue introduced QuarkChain BD director Xi Xie. She shared the reasons why she joined QuarkChain and her view of the future development of the blockchain industry. She helps QuarkChain build its ecosystem and coordinate various projects such as Dapps, public-chain tool development, IoT-related public chains, encryption, cloud computing and so on. For the full interview: https://medium.com/quarkchain-official/quarkchain-bd-director-xi-xie-watching-a-science-fiction-film-called-blockchain-d388e2623458
Events
4.1 12/30 Online AMA with QuarkChain CMO
On December 31, 2019, Anthurine, CMO of QuarkChain, gave a speech entitled "If 2019 Was the Waterloo of Blockchain, Then How Can Public Chain Rise Again In 2020?". She summarized the development of the public-chain industry in 2019 and previewed industry trends for 2020.
Click for the full summary: https://medium.com/@quarkchainio/if-2019-was-the-waterloo-of-blockchain-then-how-can-pubic-chain-rise-again-in-2020-ad8db0db6c1b
4.2 1/21 Korea Blockchain Community Meetup
The QuarkChain Korean team participated in the Korean Blockchain Community Meetup held at Huobi Blockchain House on the 21st. The meetup was co-hosted by Maker Dao and Pay Protocol. About 100 industry professionals gathered to talk about the Korean industry and market over the past year and the outlook for 2020.
Upcoming Events
Stanford Blockchain Conference
The Stanford Blockchain Conference is a leading technical conference on blockchain technology, bringing together academics and practitioners. The conference will be held from Feb 19 to Feb 21; QuarkChain CEO Dr. Zhou and our Head of Engineering will explore methods of security engineering and risk management in blockchain systems with the guests.
FYI
Thanks for reading this report. QuarkChain always appreciates your support and company.
Website https://www.quarkchain.io Telegram https://t.me/quarkchainio Twitter https://twitter.com/Quark_Chain Medium https://medium.com/quarkchain-official Reddit https://www.reddit.com/r/quarkchainio/ Facebook https://www.facebook.com/quarkchainofficial/ Discord https://discord.me/quarkchain
https://medium.com/quarkchain-official/quarkchain-monthly-project-progress-report-january-2020-b57ad38654a4
[]
2020-02-28 22:51:21.180000+00:00
['Blockchain', 'Quarkchain', 'Blockchain Technology', 'Blockchain Development', 'Weekly']
2,620
The History Behind: The Antikythera Mechanism
Investigations into the problematic piece were dropped, and the device was largely ignored and written off until 1951, when the eminent British physicist and historian of science Derek John de Solla Price became interested in what the discovery actually was. Price and the Greek nuclear physicist Charalampos Karakalos published an extensive paper in 1974 under the title Gears from the Greeks: The Antikythera Mechanism, a Calendar Computer from c. 80 BC. The comprehensive 70-page work included X-ray and gamma-ray images of the device and laid out how it may have worked. Price was the first to conclude that the Antikythera Mechanism had been used to predict the positions of planets and stars depending on the month. He stated that the main gear moved to represent the calendar year, and this, in turn, moved the smaller cogs representing the planets, sun and moon. With the user providing input and the clockwork mechanism making a calculation to give an output, the device could legitimately be considered a basic computer. "The mechanism is like a great astronomical clock … or like a modern analogue computer which uses mechanical parts to save tedious calculation." Derek J. de Solla Price, Scientific American The mechanism had initially been recovered as a single heavily encrusted piece, soon breaking into three and, since then, many more, as smaller bits have fallen off through handling and cleaning. Other parts of the device were later found on the seabed during an expedition by the famed French diver Jacques Cousteau. There are 83 known surviving parts overall, seven of which are mechanically significant; these contain the majority of the device's mechanism and inscriptions. There are also sixteen smaller parts with incomplete inscriptions. Reconstruction | Moravec, Wikimedia Commons, (CC BY-SA 4.0) The device was encased in wood and had doors, with inscriptions on the back acting as an instruction manual of sorts.
Inside the device there is a front face and a rear face, with internal clockwork gears driving an adjustable mechanism controlled by a hand crank. Adjusting the device allowed the user to predict astronomical positions and solar events such as eclipses decades in advance. The 30+ gears of the machine would follow the movements of the moon and sun through the zodiac, even modelling the moon's orbit. Knowledge of the technology used to create the Antikythera Mechanism was lost. Despite similar devices appearing during the Islamic golden age, nothing of such complexity would be made again until the invention of the astronomical clock in the fourteenth century. However, there is evidence that such devices may not have been all that rare in Ancient Greece. Writing in the first century BC, the famed Roman statesman Cicero mentioned two such machines that predicted the movement of celestial bodies. Cicero said that these mechanisms were built by the scientist Archimedes and brought to Rome by General Marcus Claudius Marcellus following the siege of Syracuse in 212 BC. Marcellus had taken the device with him, reportedly saddened by the death of Archimedes, whom he had held in the highest regard. The plunder then became a family heirloom and was still in existence at the time of Cicero's writing. Antikythera mechanism right side view, showing the inner workings of the device, Thessaloniki Technology Museum | Gts-tg, Wikimedia Commons, CC BY-SA 4.0 The two devices in Roman hands were said to be very different, one described as somewhat crude-looking compared to a second, more ornate form, perhaps indicating either stages of development or that unique versions of the device existed for the more affluent. The more elaborate form of the machine had been deposited at Rome's Temple of Virtue by Marcellus. The links to Archimedes have been reinforced by later Roman writers such as Lactantius, Claudian, and Proclus.
One of the last great Greek mathematicians of antiquity, Pappus of Alexandria, said that Archimedes had written extensively on the subject of these machines, penning a manuscript by the name of On Sphere-Making. Sadly, this is now lost. Other documents do survive, however, some even including drawings of such mechanisms and instructions on how they worked. One of these devices was the odometer, the modern version of which is an essential component of any car dashboard. The original invention was used by the ancient Romans to place their famous mile markers alongside Roman roads. While the first descriptions of the device came from Vitruvius around 27 BC, the odometer has been attributed to Archimedes himself over 200 years prior. When scientists attempted to build the device depicted in the drawings, it failed to work until the square gears shown were replaced by cogs of the type found in the Antikythera Mechanism, leading to speculation that the mechanism and Archimedes are linked. Taken together with the reports from Cicero, it seems that the Antikythera Mechanism may well have been invented by Archimedes of Syracuse. However, it could not possibly be one of the devices Cicero mentioned, as both were stated to exist in Rome long after his death. Besides the two devices already highlighted, Cicero also identifies a third in production by his friend Posidonius, which, again, cannot have been the artefact found in 1901. This leads to the conclusion that the devices were not as uncommon as initially thought, with at least four known to exist and possibly many more. The technology of Ancient Greece and Rome was seemingly lost for centuries following the conquest of Greece by Rome in 146 BC and, subsequently, the fall of the Western Roman Empire. Similar technology would appear again, however, in the Byzantine Empire before flourishing in the Islamic World.
In the 9th century, the Caliph of Baghdad commissioned the Banū Mūsā brothers, noted scholars, to write the Book of Ingenious Devices, an extensive illustrated work on technical devices, amazingly including automata. The brothers were working at the legendary Bayt al-Hikma (House of Wisdom), where Islamic scholars pored over ancient Greek and Roman texts largely forgotten and ignored in the West. The Banū Mūsā brothers described all manner of devices that would have been considered wonders in 9th-century Europe, such as automatic controlling systems and feedback controllers. Other automata included fountains, musical instruments and automated cranks. "Nothing like this instrument is preserved elsewhere. Nothing comparable to it is known from any ancient scientific text or literary allusion. It is a bit frightening, to know that just before the fall of their great civilization the ancient Greeks had come so close to our age, not only in their thought but also in their scientific technology." Derek J. de Solla Price There is a tendency in the West to believe that computers, automata and other modern marvels are the work solely of Britain or the United States, and that our age alone is the first to see technological innovation. Yet this is far from the truth. While much of the world was in darkness, Rome and Greece were making spectacular advances in computation and in sciences such as astronomy. While Europe was fighting off Vikings, the Islamic world was deep in study, reviving these ancient technologies and adding its own modifications and advancements. Eventually, these theories of science and philosophy would drift into the West during the Enlightenment, the dark ages that had covered Europe following the fall of Rome finally being overcome. The Antikythera Mechanism stands as a symbol of what was lost with that fall, and equally of what might have been possible had Greece and Rome continued to thrive.
The Caliphs of Baghdad knew that these ancient empires had much to tell us and that remains true even today, with much left undiscovered about the real power and technology of philosophers, thinkers and scholars such as Archimedes, Hipparchus and hundreds more besides.
https://medium.com/the-mystery-box/the-history-behind-the-antikythera-mechanism-4ca6240146d5
['Michael East']
2020-12-07 15:23:57.978000+00:00
['Archaeology', 'History', 'Ancient History', 'Technology', 'Science']
2,621
Nearshore, Onshore OR Offshore services?
In the business world there is an urgent need for digital implementation and evolution, which makes major investment decisions more complex. In the IT world in particular, organizations are under pressure to access the best profiles and talent, with demand for high-quality software engineers growing exponentially and surpassing current market supply. This relentless demand created the need to develop different software development solutions that give organizations in any industry access to the technological innovation their products and services require. But which solution is best for your company: Onshore, Offshore or Nearshore? Companies tend to prefer the Nearshore model because it is a new, high-impact approach to remote software development. Our experience here at Hexis Technology Hub as a partner is that we can offer the best talent combined with a visible reduction in fixed costs. But let's take it step by step: What is the difference between Offshore, Onshore and Nearshore in IT outsourcing? Given the difficulty of recruiting developers, there are other options to consider when building a software development team. Wasted time, excessive spending and accumulated stress will not be a problem if you choose to outsource software development. There are different models, each with its own benefits and downsides. Onshore outsourcing: the practice of hiring services from within your own country. Onshore outsourcing is the model closest to your headquarters, bringing the advantages of short-distance travel and working with qualified software engineers in your own country. However, this option carries higher costs due to the common unavailability of profiles and the high cost of maintaining these developers when they have no projects assigned.
Offshore outsourcing: the practice of contracting services located anywhere in the world, usually in countries like India, China, Ukraine or Poland. Although the cost of labour is low, there are drawbacks: different time zones and language barriers make communication difficult, and cultural differences can easily increase costs. In practice, there may also be problems with trade laws, intellectual property and data protection due to differences in management practice and national policy. Still, if reduced cost is the company's immediate priority, this model may be the best option. What is Nearshore Outsourcing? By definition, Nearshore outsourcing is the practice of having services and products developed by experts in similar time zones and in geographic proximity. The Nearshore model provides cost-effective access to the most advanced and modern technologies, as well as the agility to increase software production capacity while maintaining or even reducing business costs. With effective communication, daily meetings, more frequent visits to remote teams and access to cost-effective rate cards, organizations have the opportunity to solve their IT problems. For our European clients, the benefit of Nearshore compared to Offshore is the close relationship between their country and Portugal; clients from the UK, for example, choose Portugal (GMT+0) for the shared time zone and cultural compatibility. Is Nearshore a way to outsource software development? In a way, yes, but the model offers more than just outsourcing.
IT outsourcing is the use of external service providers to effectively deliver business processes, application services and IT infrastructure solutions. The benefits of Nearshoring go far beyond technology outsourcing: we help clients develop the right strategies and vision, structure the best possible contracts, and manage the engagement for sustainable win-win relationships with external vendors. Nearshoring can enable companies to reduce costs, accelerate time to market, and take advantage of external expertise, assets and intellectual property. Which companies use the Nearshore model? Hexis, from Lisbon, has helped clients across Europe solve their complex digital problems by taking on the full technological side of their projects and delivering custom software solutions. Recently, we have had the opportunity to: Partner with global FMCG brands based in Germany, to gain insight into their customers' behaviour and add a competitive advantage to their products with IoT, Big Data and AI; Support British Fintech and Blockchain companies with applications and MVPs in the gaming, retail and education markets; Build dedicated team extensions remotely managed by the client, working with internal developers, with an aligned vision and commitment to project and client objectives, for clients such as online retail giants, to enable regular software innovation.
Provide advice, offering new perspectives and ideas to our clients, such as start-ups from all kinds of industries, incubating new ideas in our technology center to drive the digital disruption that changes and transforms markets. But does traditional offshoring lead to cheaper and faster software? We know that the offshore model can be attractive due to its low cost and flexible team scalability, which can benefit the end customer. However, the main problem with most offshore IT is low commitment and lack of staff motivation: with high turnover, people leaving projects causes team instability and the loss of knowledge and of work already done. The compound effect of these issues has a negative impact on projects in terms of both time and money. As a Portuguese company, however, Hexis is culturally aligned with our European clients, ensuring high-quality software from autonomous, reliable teams. We address all the shortcomings of the traditional offshore IT outsourcing model, and we share the same time zone, which supports regular communication and a close relationship with our clients. They have full confidence in our work, and gaining their trust is the key to our relationships. Conclusion Nearshore development offers a mix of onshore and offshore benefits: it lowers fixed costs while still providing some of the advantages of onshore outsourcing, such as regular communication during working hours. On price, rate cards tend to be higher than offshore rates, but you save money through more efficient communication and lower travel costs. Before you consider hiring a software development company, however, you must first determine your priorities: top-quality profiles, the best price, greater cultural affinity, or a mix of everything?
In the end, we can assure you that at Hexis we not only facilitate a rapid and sustainable scaling of engineering teams but also provide a knowledge management team that guarantees the security of our clients' intellectual property.
https://medium.com/@hexishub/nearshore-onshore-or-offshore-services-a64796fefeb9
['Hexis Technology Hub']
2020-12-10 11:09:48.481000+00:00
['Benefits', 'Information Technology', 'Outsourcing', 'Offshore Development', 'Nearshore']
2,622
Cost To Develop An On-Demand Salon Booking App In 2021
Whether you are planning to start a salon business or are simply curious about ways of scaling one up, an on-demand salon booking and management app can be a perfect solution for your business needs in this digitized era. With the spa and beauty salon market projected to leap from $128.59 billion in 2017 to $190.82 billion in 2024, the beauty industry is undoubtedly growing at a breakneck pace. If you are in this industry and own a beauty salon, now is an ideal time to target the market with a salon booking and management app and gain better control of your operations. While technology rules everywhere, the beauty and salon industry is no exception. Managing your customers, bookings and stylists' availability has become far easier by creating a salon management app. Today, when things are so easy to access through an app, no one wants to wait at the salon or stand in a queue for services. In this fast-paced life, people have become time-savvy and prefer booking an appointment before their salon visit, and pre-booking through an app saves not just time but also money. Consider the statistics: 49.6% of the world's population is female, and surprisingly, the average woman spends about $3,756 at salons per year. That is a huge figure! So when you add an on-demand facility to your business, these numbers point to a higher return on your investment. Wherever you look, whether at Uber, Grubhub or Instacart, their success stories and rapidly growing revenue numbers are already inspiring startups. All you need is to hire a mobile app developer to get started, and you can become the dominant player in your niche. From managing online appointments to automated reminders, you can manage everything at your fingertips with a custom app.
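The market-size projection above ($128.59 billion in 2017 growing to $190.82 billion by 2024) implies a compound annual growth rate that can be checked directly. The sketch below computes it from those two figures alone; it is a quick consistency check on the article's numbers, not an independent forecast.

```python
# Implied compound annual growth rate (CAGR) of the spa/salon market,
# using the article's figures: $128.59B (2017) -> $190.82B (2024).
start_value, end_value = 128.59, 190.82  # billions of USD
years = 2024 - 2017

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 5.8%
```

So the headline "backbreaking pace" works out to roughly 5.8% growth per year, compounded over the seven-year window.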
If you are still doubtful, here are industry stats that will give you real insight into the beauty salon industry. Surprising Beauty Salon Industry Statistics Many of you may wonder why the salon business is trending all over the world. These statistics should help you better understand the beauty industry. Based on detailed market research, we have gathered information about the salon business and categorised it into two sections: customer behaviour and opening a salon. Changing Behaviour of Customers in the Salon Industry Opening a Salon More than 80,000 salons are established in the US, generating $20 billion in revenue. Between 2010 and 2020, the number of open salon positions is projected to have increased by 16%. In a Nutshell: The state of the beauty salon industry clearly paints a progressive picture of the market. In fact, if you look at the on-demand app market's top performers, you'll notice they have things in common: rising numbers, a rich collection of features, and the must-have technology that boosts their performance in the market. Before you hire an app development company for a robust yet scalable solution, it is worth having all your stakeholders understand why your business needs this application. Why Do Salon Owners Need An On-demand Salon Application For Their Business? Having already invested $100,000+ in your salon's establishment, you must be keen to know how to come first in a competitive market. One of the most promising investments is an on-demand salon app development solution! Whether you own a standalone small beauty salon or hold a franchise of a leading brand, a salon mobile app can add a real competitive edge and directly impact the growth of the business. 
An app can handle all the key tasks, from managing staff, online appointments, inventory tracking and multiple customers to advance bookings, without leaving any scope for human error. Beyond that, here are a few key benefits of developing an on-demand salon app for your business: According to surveys, smartphone users spend 90% of their mobile time in applications, so launching a salon management app customized with your logo and name on the Android and iOS platforms can seamlessly expand your reach to customers. 68% of customers form an opinion about your brand immediately after checking reviews on a search engine, so it is important to build an app that helps customers leave positive reviews of your services and build a lasting relationship with your brand. Did you know that 70% of customers leave a service review? An on-demand beauty salon app can also boost customer loyalty and retention by sending coupons through push notifications, and you can't underestimate this: few marketing tools are more powerful for branding than SMS alerts. Managing online bookings, reviewing client details, getting monthly business insights and tracking inventory can all be at your fingertips through a simple touch in the app. An app with a well-chosen list of features can dramatically boost sales and let customers know about your USPs, while excellent UI/UX design helps you attract users' attention, save time on marketing and push your business to new heights. The estimated cost to develop a salon management and booking app starts from about $35k+, depending on the features, functionality and complexity of the app. 
Since beauty salon app development services are trending in the market, it makes sense to hire a mobile app development company to create a product that helps your business stay competitive for the next decade. But before that, you need a clear idea of the ecosystem of a salon management and booking app, as it directly adds to your budget. Basically, the app consists of 3 main units: business owners or admins, customers, and beauticians, stylists or other staff members. The features and functionality of the entire app revolve around these 3 units. So let the game begin! From this last statement, many of you may drill straight into the features and functionality section without knowing what kind of salon app you need to develop for your business. Here Are the Types of Beauty Salon App You Should Look To Develop in 2021 There are a number of beauty salon apps on the market, but the final choice depends on your business needs and budget. Here are a few types of salon app you can choose to develop by hiring a software development company: An App For Hairdressers Beauty And Wellness Application Hair and Beauty Salon App Beauty Salon Mobile App Hair and Beauty Product Selling App Complete Beauty Service Solution Beauty and Hair Salon Solution Salon Appointment Booking App The average cost of developing any of these applications starts from $25k to $35k and can go anywhere beyond $50k+. No matter what type of app model you choose, here are the essential features of a beauty salon app that work best. Key Features To Develop a Beauty Salon Booking and Management App The rule of thumb for developing a perfect application is to put yourself in the app user's shoes and understand exactly what they will look for in your application, then list the features accordingly to develop your salon management software. 
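The three-unit ecosystem described above (admin, customers, stylists/staff) can be sketched as a minimal data model. This is purely illustrative: the class and field names are assumptions of mine, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Role(Enum):
    ADMIN = "admin"
    CUSTOMER = "customer"
    STYLIST = "stylist"

@dataclass
class User:
    name: str
    role: Role

@dataclass
class Appointment:
    # An appointment ties the units together: the customer books,
    # the stylist accepts, and the admin oversees the schedule.
    customer: User
    stylist: User
    service: str
    start: datetime
    confirmed: bool = False

booking = Appointment(
    customer=User("Priya", Role.CUSTOMER),
    stylist=User("Alex", Role.STYLIST),
    service="Haircut",
    start=datetime(2021, 5, 1, 14, 0),
)
booking.confirmed = True  # the stylist accepts the request
```

Every feature in the lists that follow (booking, accept/reject, reports) is ultimately a read or write against records like these.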
This way you can emphasize the user's journey and pave it in rightly. So let's consider some of the essential features you should have in a salon booking and management app. Features from the Customer Perspective: Registration: Allow users to access the app with a simple authentication process, providing basic details such as name, mobile number and email ID. Sign Up/Login: Make your login process simple and easy by allowing users to log in from multiple options, including social media platforms, mobile number or email ID. View Service List: Once users register with your app, they can view your salon's service list. Select Salon/Stylist: What if you manage multiple branches? In that case, list your branch locations and allow users to select a stylist or beautician at a specific location. View Profile: Allow users to view a complete profile of the salon: its timings, contact details, ratings, reviews, pricing and more. Schedule Services: Users can send a request to their choice of stylist, beautician or salon owner for any query. 
Booking Appointment: Book an advance appointment by simply clicking the “Book Now” button. Make Payment: With multiple payment gateway integrations, users can easily pay by credit/debit card, wallet or net banking. Loyalty Points: To increase client retention and loyalty, you can reward loyalty points on every service, redeemable on the next one. In-App Chat: Users can get in touch with a stylist or beautician directly to confirm timings, service charges, the address, or any other concern. Rating, Review and Feedback: Customers can view the salon's ratings and reviews and drop service feedback. View History: Customers can check their record of services or appointments while rescheduling an appointment. Push Notifications: Alert messages keep reminding your customers about their appointments, offers, subscription packages, deals, discounts and coupons. Features from the Beauty Expert's Perspective: View Bookings: One of the most important features for the beauty expert, letting them keep track of their earned bookings. 
Accept or Reject the Booking: Depending on their availability, professionals can accept or reject a booking. Price List and Services: Their profile screen, where they can list the services they provide along with prices. Managing Calendar: Using the calendar, they can manage their booking schedule and mark their availability for specific dates and times. Service History: Under this section of the app, professionals can keep track of the services they have delivered to date and sum up the commission earned on them so far. Features from the Perspective of the Admin or Salon Owner: Managing Professionals: Gives the admin full control over managing professionals and ensuring their availability to users. Accept or Reject Registration Requests: The app owner can accept or reject requests for the sake of the brand's reputation. Generate Monthly Reports: Salon owners get complete analytics on monthly bookings, sales, profit or loss. 
Managing Payments: Managing all payment-related issues and staying on top of all payments made within the application. Dashboard Management: As appointments are booked online, the admin has a clear view of how many bookings are made, how many customers have visited the salon and which attendants will serve them, plus insights into which services are popular among customers. Let's have a quick look at an infographic view of the features you should integrate in a salon booking and management app. These are the basic features of a salon management app, which any mobile app development company can help you develop with a starting price of $10,000+, but to stay ahead in this cut-throat market, take a quick look at these advanced features for a modern solution. Additional Features To Consider In Salon App Development Promo Codes/Discount Offers: Increase customer engagement and app usage by giving app users special access to promotional codes, discount deals, offers, service bundles and more. Multi-Lingual Integration: Depending on your target audience, multilingual integration can help users access the app easily, and if you hire a cross-platform mobile app development company, integrating this feature becomes far easier for the app owner. 
Referral Programs: In the salon and beauty business, it's common to share experiences with friends and family, so you can reward users for successful sign-ups or registrations. Heating Window: The admin can see areas of high demand and guide service providers to those locations through this feature. Packages: Customers love to buy monthly packages customized to their needs to avail themselves of the benefits. Membership: You can offer customers a membership that carries extra benefits on services. Adding an advanced list of features will definitely add cost to the budget, but a unique selling point will help your app beat the competition. Tech Stack: How To Build an On-Demand Salon Booking App With an app serving millions of users, the one thing every app owner expects is seamless processing, and that's where the app's backend and frontend play the major role. Having the best app idea and selecting outstanding features is one important aspect, but selecting the right technology is what puts everything into action with the right impact on users. The success of the app depends largely on how well mobile app developers can transform your idea into the final product, and this is one aspect of mobile app development you can't afford to ignore: the app's end quality, its scalability and its entire future depend on the technology you choose for development. 
So before you hire an app developer, here is an insight into the technology stack from the starting point: Push Notifications / SMS Alerts: FCM and APNS. SMS / Voice / Phone Verification: Twilio, Sinch and Nexmo. Data Management: Datastax. Payment Gateway: PayPal and Stripe. Robust Programming / Mandrill: GWT. Database: HBase, MongoDB, Cassandra and Postgres. Cloud Environment: AWS and Google. Real-time Analytics: Hadoop, Spark, Big Data, Apache Flink, Cisco and IBM. Framework: Flutter, React Native or Ionic. Since on-demand app development is booming at a fast pace, to avoid putting your effort and budget at risk it is worth hiring a software developer who can build the application with the right choice of technology. How Much Does It Cost To Develop An On-demand Beauty Service Mobile App? Estimating the final cost of a salon booking and management app is a nerve-wracking task, as it takes a number of factors into consideration, and there are variables that go beyond estimating the cost of features and tech development. According to Clutch's market survey, average app development costs start from $50,000 to $70,000+, but for a beauty and salon app, a mobile app development company experienced in on-demand applications can cost you $25k to $35k. The most interesting part to remember is that if you choose to get your application developed in Dubai, the US or Europe, the estimate will automatically increase, as developers' hourly rates vary greatly by location. Experts from all across the world acknowledge that India is by far the cheapest place to hire resources. 
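The regional cost logic above is, at its core, developer hours multiplied by a regional hourly rate. A minimal sketch of that arithmetic follows; the rates and hour count are purely illustrative assumptions, not quotes from any vendor.

```python
# Illustrative only: these hourly rates (USD) are rough assumptions.
HOURLY_RATES = {
    "US": 120,
    "Europe": 90,
    "Dubai": 80,
    "India": 30,
}

def estimate_cost(region: str, dev_hours: int) -> int:
    """Rough app cost: development hours times the region's hourly rate."""
    return HOURLY_RATES[region] * dev_hours

# Suppose a mid-sized salon app takes ~1,000 development hours.
for region in HOURLY_RATES:
    print(region, estimate_cost(region, 1000))
```

At these assumed rates, 1,000 hours in India lands around $30k, which is consistent with the $25k-$35k range quoted above, while the same scope in the US would run into six figures.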
But that doesn't mean they are less talented; rather, they have lower living costs and face cut-throat competition. In a Nutshell: Cost and development time vary greatly from project to project, so the final cost depends largely on your business needs and on the complexity of the features, functionality and technologies you select. But developing an app only makes sense when you know the best ways to monetize it, so let us help you gain some insight into the best monetization strategies for your salon booking and management app. How To Make Money From Your On-demand Beauty Salon App? There are various strategies you can use to raise revenue from your beauty salon app, but a few blend particularly well with this business model: Commission: One of the most common ways to monetize your app is by taking a portion of the money from selling other brands' beauty products or services; to get a steady income, you need to implement a proper procedure. Subscription Plans: Let customers try your subscription plan free for one month and showcase its benefits so they understand how useful the plan would be; once they know it, you can offer 3-, 6- or 12-month subscriptions at the best prices. Special Beauty Packages: Allow users to buy monthly packages through the app, adding reward points to their wallet that they can redeem on their next bill. Advertisement: Allow beauty brands to advertise their products in your app, paying an advertising fee to have a one-call button in your app. In a Nutshell: These are a few potential ways to generate revenue. So now you know how much it costs to develop a salon app for your business and how to make money from it. 
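The commission model described above is simple arithmetic: the platform keeps a fixed fraction of each sale made through the app. A minimal sketch, with an assumed 10% cut and made-up sale amounts:

```python
def commission_revenue(transactions: list[float], rate: float) -> float:
    """Platform revenue from taking a fixed cut of each sale."""
    return sum(amount * rate for amount in transactions)

# Assumed: three product/service sales made through the app in a week,
# with the platform taking a 10% commission on each.
week_sales = [45.0, 120.0, 60.0]
print(commission_revenue(week_sales, 0.10))  # → 22.5
```

The same function covers subscription tiers too if you treat each renewal as a "transaction" with a 100% rate; the real design question is which rate keeps partner brands on the platform.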
The only step left is to hire a mobile app development company that will help you turn your concept into a beauty service application that operates seamlessly on demand. Conclusion: Get Ready To Boost Your Salon Business Growth! Owning a business and expecting it to run at its best is one of the most daunting tasks for business owners, but with an on-demand salon app you can easily take control of your business and manage everything with a few taps on the screen. Beyond managing the business and its professionals, a salon app adds a real competitive edge and gives customers a convenient way to make appointments. Whether you want to streamline your salon operations and manage bookings with a new app, or simply want to upgrade your existing salon app with the latest features to stay competitive, is completely up to you. A mobile app development company can help you get started, no matter how small your business or what kind of company you run; they can strategically build apps that boost your business growth fast. So don't wait any longer: become part of this thriving industry with a robust, scalable and flexible solution.
https://medium.com/codex/cost-to-develop-an-on-demand-salon-booking-app-in-2021-eb95c7329cdd
['Sara Khan']
2021-04-13 06:42:59.923000+00:00
['Startup', 'Apps', 'Mobile App Development', 'Mobile Apps', 'Technology']
2,623
SPACs are the Future of AV Startups
LIDAR is a core technology that powers self-driving cars. It’s also what I’m naming my new cover band where we do 90’s themed remixes of Lionel Richie songs. LIDAR systems are usually (not always) mounted on top of self-driving cars/other autonomous vehicles (AVs): Besides being a very fashionable top hat for your car, LIDAR systems are core to helping an AV “see”. A LIDAR system is like a group of TIE fighters strapped to your car roof — they’re constantly shooting out lasers. Most used to spin, but modern ones are smaller and still provide a 360° view. “Pew pew pew” Thousands of laser pulses are sent out every second. Every SECOND. When they hit an object and bounce back to the LIDAR, the reflection points are recorded to build a 3D point cloud. This works because we know the speed of light and can measure how long each pulse takes to return. That cloud can then be turned into a 3D representation of the car’s surroundings. Here’s a look at how Uber tackles the problem: Velodyne is a huge player in the LIDAR market. They’re one of the OGs in the space and they’ve kept up with industry innovations via sexy, smaller lidars. As you can see, LIDAR is pretty great at telling the car how far away things are, but not really great at telling the car what they are. Because of this, most current AV approaches combine LIDAR data with camera + visual recognition data.
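The ranging math behind that point cloud is straightforward: each pulse travels out and back at the speed of light, so the distance to the reflecting object is the speed of light times half the round-trip time. A minimal sketch (the function name is mine):

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to a reflecting object given a pulse's round-trip time.

    The pulse covers the out-and-back path, so halve the total distance.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
print(round(distance_from_echo(200e-9), 2))  # → 29.98
```

Repeat this for thousands of pulses per second, each tagged with the direction it was fired in, and you get the 3D point cloud described above.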
https://medium.com/datadriveninvestor/spac-secures-self-driving-car-accessory-startup-800d2bfd659d
['Murto Hilali']
2020-07-21 16:16:53.310000+00:00
['Finance', 'Technology', 'Business', 'Startup', 'Lidar']
2,624
Is the Future Controllable? If We Could, Should We?
Copyright 2020. All rights reserved Human progress often takes place when an individual genius offers an innovation that transforms everything. We benefit from an ancient inventor who decided that carrying things would be better with wheels. The same goes for the domesticators of fire. Ask anyone what inventions are society-changing and they will rattle off electricity, the light bulb, vaccines, automobiles, radio, TV, and the Internet. We adopt these new technologies because they are demonstrably superior to what we did before. The problem is that there is an unacknowledged period when the innovation begins to show its flaws or, more worrisome, have negative effects. Let’s look at some quick examples: Automobiles allow us to easily travel long distances in physical comfort. But they also produce suburban sprawl and spew carbon into the atmosphere, contributing to global warming. They also support sedentary lifestyles that can lead to maladies like diabetes and heart disease. Take another: the Internet gave us long-distance, instantaneous communications, but also (in ascending order of awfulness) cute cat videos, email spam, online trolls, cyberstalking, and manipulated elections. Are these just unanticipated consequences, or could they have been mitigated or managed in some way? As a species, we need to find ways to think through the future of innovation without having to fix a problem that the future has thrust upon us. Sadly, I do not have an answer, but perhaps a way to talk about it. As we look at the rise of monopolies and oligopolies in technology — in search, online buying stuff, social media — what we see are innovations that were once genuinely innovative. But over time their adoption and business growth tends to crowd out the next generation of innovations. 
Granted, there are always potential disruptive innovators who come at the problem from a radically different angle (Thank you, Clayton Christensen), but quite often they are adopted, bought out, or destroyed by the existing paradigm. Here’s a thought experiment: When was the last time you produced a business document using word processing software not created by a multi-billion-dollar tech firm? I use Microsoft Word — an application first created in 1981. (Does anyone remember WordStar?) We have let technology companies grow to become monopolies partly because they are beneficial, like Amazon, and partly because they are predatory, like Amazon. Let us not just accept these realities without asking ourselves whether these business concentrations have costs to society, justice, and even freedom. Amazon may offer incredible ease and convenience (our neighbors thought we ran an online business because of all the delivery trucks that came to our house in the run-up to Christmas — they were all Amazon orders; yes, we have a problem), but the company also pushes its workers hard to keep the line moving, sometimes to the detriment of comfort and even safety. Jeff Bezos may be the richest man in the world, but his wealth depends on a lot of other people, and he does not share very well. (He needs to read the wisdom of Robert Fulghum — he may be able to find it on Amazon.) There’s a remedy for monopoly power: the Sherman Antitrust Act, a 19th-century law. It’s been used to break up companies from Standard Oil to AT&T. The problem is that it’s used after the fact, after the damage to competition and the free market has been done. Granted, no one knew that John D. 
Rockefeller was going to become a monopolist when he started Standard Oil in 1870, but by 1880 the New York World observed that it was “the most cruel, impudent, pitiless, and grasping monopoly that ever fastened upon a country.” (Wikipedia) It remained so until 1911, when it was broken up. So, legally, we are running behind the curve. At least until now, maybe. One of the more recent and profoundly powerful innovations, CRISPR (short for Clustered Regularly Interspaced Short Palindromic Repeats) technology, offers a rapid and relatively easy method to change our genes — genetic engineering, in other words. Is this a good innovation? Or a harbinger of a dystopian future? It could be used to address the genes that cause sickle cell disease or cystic fibrosis or Duchenne muscular dystrophy, forever. Not only may the recipient of the gene therapy be spared these maladies, but so may their children and children’s children. It sounds great. But there are both costs and risks. Editing genes is still experimental, meaning very expensive even now and not assured of success. There’s still much we do not know. Its expense puts it in the class of luxury goods, available only to those with the money to pay for the procedure. Moreover, so far, it’s not legal. That’s actually where it becomes interesting for the future of the procedure, but also for the future of innovation. The newness of this innovation buys us some time to consider the costs and benefits of the approach. Granted, a Chinese researcher claimed to use the technique to provide immunity to HIV in 2018, but he was sentenced to prison for doing so. The rest of society still has time to take a breath and consider the ramifications of genetic engineering. If we proceed, there are research methods that allow us to test safety and efficacy before broad application. We also have time to wrestle with the thornier issues of the ethics of such manipulation and the implications of its cost. 
Yet, this is a biotechnology issue before it becomes a monopoly issue. What can we do to address the problems of the next tech innovation that breeds a new generation of monopoly? Perhaps start building in some safeguards in patents to allow a pause in implementation before widespread application and growth. Subject such patent applications to a societal impact assessment like environmental impact studies before we make irreversible changes. Yes, it will slow down the pace of innovation, but what if we had such a consideration before the Internet was loosed on the world? It had no security layer and was wide open to hacks that compromise systems. Would we be better off if there had been a short pause to reflect on the future? If we brought in a technology Devil’s Advocate? There’s no perfect answer, but the answers we have been getting in the last generation or so don’t look so good where we stand now and as we turn our gaze forward.
https://medium.com/the-innovation/is-the-future-controllable-if-we-could-should-we-3d331f644d42
['David Potenziani']
2020-07-27 06:16:14.610000+00:00
['Monopoly', 'Social Impact', 'Technology', 'Innovation', 'Biotechnology']
2,625
Are we too dependent on technology?
Photo by Domenico Loia on Unsplash I have come to the realization of how dependent I am on the technology around me, most recently my laptop. One morning, I am simply folding clothes while watching some Netflix when, all of a sudden, it dies. I get the blue screen of death. So I think, no problem, it’ll just restart and all will be fine. But it doesn’t. I try to force a shutdown and boot it up, and learn that my hard drive has essentially died, unable to boot Windows. I have no laptop. Since losing my laptop, I have been working off my iPad, which has been very limiting. While I am able to get some basic work done, it has certainly slowed down my workflow a lot and made it increasingly difficult to get anything done. It made me realize how dependent I am on my laptop and the technology around me. Thinking back, I almost did not get the iPad, but now I am happier than ever to have it, or else I would not be able to do any work whatsoever. Without my laptop, I have been less productive in many ways. Online school is much harder; taking notes and viewing lecture slides from a much smaller device is no fun. As a photographer, I am unable to edit any photos from my iPad, something I absolutely need a laptop for. I spend more time procrastinating and playing games, and my overall mood has gone down, since I feel bad for wasting time and spending another day doing nothing. This may seem overly dramatic, but think about it: if your technology all of a sudden stopped working one day, what would you do? (I am genuinely curious; feel free to comment.) I can’t be the only one whose computer and smartphone are how they function. Some may try to move back to a pen-and-paper workflow where nothing can crash and things are far less expensive, but the fact of the matter is that the entire world has gone digital, especially during this pandemic. So if you don’t have a computer or a device to work off of, you’re essentially screwed. And it is not just work. 
We use our devices every day to talk to people, play games, watch TV, and more. Work, play and relaxation are all done on our devices. This brings up the question: as a society, are we too dependent on technology? We start and end our days looking at a screen, working on a screen, talking to a screen, watching a screen and putting a screen down before we sleep. Is there any way to be productive or entertained without using a device? Now, I know there are probably many people who work without computers and electronic devices, but I think it is safe to say that most people do use them and, like me, depend on their devices to get work done, call people, and relax. This experience has made me realize that I need to find activities and practices that are not focused on electronics and devices: something productive I can achieve without a computer or my phone. It does not necessarily have to be work related, but a task that makes me focus and fires up my brain so that I no longer feel like a slob wasting time until I have a new laptop. If we can become less dependent on and focused on the electronics around us, we will be better off, both mentally and physically: mentally, by focusing on tasks that do not involve a screen, and physically, by going out rather than being a couch potato watching Netflix. I may be late to this, but I have learned to take some time off electronics every day to achieve something that does not require a screen, so that I feel more at ease and less stressed.
https://medium.com/@aamerseth/are-we-too-dependent-on-technology-26bb5085556d
['Aamer Seth']
2020-10-17 00:03:05.422000+00:00
['Self Help', 'Self Improvement', 'Mental Health', 'Electronics', 'Technology']
2,626
Where to Invest in 2021
Where to Invest in 2021 After a tumultuous 2020, we are all ready to start a more typical year. However, we expect that 2021 will be a transition year as companies and investors adjust to changes in the economy, social activity, and new work practices. Like many observers, we expect a rush to travel, leisure, dining, and entertainment activities due to massive pent-up demand. This by itself is likely to show stress points for some industries and the need for new platforms for bookings, ordering, and logistics. But at the same time, we expect visionary entrepreneurs to focus on the long-term market pain points and innovative new platforms. We outlined Two Megatrends that we believe define the future of the Post-COVID world. The acceleration of the migration from physical to online platforms is the single biggest trend defining our investment landscape and economy for the next five years. Our task is to dissect this Megatrend to find stress points and see where investments can lead to outsized gains by targeting big pain points and providing the ‘lubricant’ to facilitate this growth. As such, we have identified the four areas of e-Commerce Ecosystem, Digital Health and Fitness, Enterprise Cloud, and EdTech as the most significant growth areas in the next five years. There are common themes that connect all these sectors, namely automation and acceleration of online platforms. The next five years will be building the infrastructure for this new wave of growth. 
Within the four target areas, the following sectors are of particular interest to us: E-commerce Ecosystem: Customer Communications; Logistics and Delivery. Digital Health and Fitness: Digital Therapeutics, Remote Monitoring, Drug Discovery, Diagnostics, FemTech, and remote Fitness. Enterprise Cloud: Data Platforms, Informatics, Collaborative tools. Ed-Tech: Re-training, Learning Management Systems (LMS) 2.0. E-Commerce Ecosystem As we discussed in our Megatrend piece, we believe the adoption of online platforms has reached a critical inflection point and is likely to reaccelerate for many categories. While we expect a downward adjustment (in Q1-Q2) once social distancing requirements are relaxed and COVID is more manageable, we expect demand to continue to be strong in the second half of 2021. We can think of a commerce transaction as consisting of three major areas: We believe the two areas that still need significant improvements are Customer Communications and Logistics and Delivery. It's easier for companies to automate and improve the efficiency of their transaction processing and fulfillment. It has proven to be more challenging to gain the same efficiencies in delivery. Customer Communications. Customer experience still lags during discovery and ordering, which is the leading cause of churn and cart abandonment. With increased demand, the need for a better customer experience will be critical. This involves customers' search, order, transaction, and support needs. We will be looking for companies that improve the efficiency of each of these stages for both e-commerce and offline retailers.
Improved product search with real-time updated inventory; single-page ordering and payment with proper authentication; streamlined customer support. The above areas apply to e-commerce and offline retailers, who will find that many of their customers would prefer to first search for the product on the company's or store's website before visiting. We believe online browsing through a store's website will increasingly compete with physically browsing through the aisles. Logistics and Delivery. While the online delivery system worked better than expected during the pandemic, the cost structure remains too high, and deliveries are inefficient. A well-functioning delivery system requires both warehouse logistics (local distribution centers) and robust last-mile delivery technologies. We are investors in one such company, BoxBot, which improves the efficiency and cost of local delivery by as much as 30% through better package management. We believe this area needs an almost complete retooling so that we can create a unified and efficient delivery system, similar to what we consider below: Notice that we envision a platform that can aggregate and combine deliveries from various sources on an optimized daily route. In short, we are looking for companies that provide: Demand aggregation and delivery syndication platforms; local smart storage platforms; an optimized and scheduled local delivery model. Digital Health and Fitness Covid-19 is a health crisis, but it provided tailwinds for Digital Health sectors in 2020; we expect this sector to expand without moderation in a post-Covid era. We have been focused on the Digital Health industry for many years and have now identified the following six sectors for investment in 2021 and beyond: Digital Therapeutics.
This sector has enjoyed the double benefit of showing efficacy (see CBT for chronic insomnia disorder) and, of course, increased adoption driven by stay-at-home mandates. Digital Therapeutics includes platforms and services that provide treatment and management of physical and mental disorders using digital means such as apps, programs, videos, and devices, used remotely by the patients. We are especially interested in Digital Therapeutic platforms targeting disorders that traditionally suffer from low patient participation due to perceived stigma, lack of easy access, or cost. The second category of Digital Therapeutics, which targets more common disorders such as high blood pressure, anxiety, or obesity, is also interesting since it reduces the barrier to adoption by making the service highly accessible. Examples of this category include the success of companies like Livongo and Omada Health. Remote Patient Monitoring. We believe this is a developing area that has only started to show promise in cases that previously required frequent practitioner check-ins. Chronic disorders such as COPD, high blood pressure, or certain coronary diseases often require monitoring of patients by health care professionals. Advances in new wearable and other at-home devices are making this highly costly operation far more effective. We also hope to see this category expand to include broader health care monitoring, especially for at-risk patients, and for non-chronic disorders. Next-Gen Drug Discovery. The use of both AI and massive databases has opened the possibility of a) finding new molecules that target specific indications, and b) significantly expediting the drug testing process through both simulations and pre-targeted design. We are also very focused on the next generation of aging-related discoveries that can form a platform for many new categories of drugs and interventions. AI-Enabled Diagnostics.
This is an area that some people, justifiably, believe was overhyped. For example, while there were promises of radiology becoming obsolete, say by 2022, we are far from that. On the other hand, machine diagnostics of breast cancer and other tumors have improved so rapidly that they nearly match the best radiologists now. We are very hopeful that AI, as well as cloud-based data models, will significantly improve both the accuracy and the depth of diagnostics. This is one area where the value of data is exponentially beneficial with each new diagnostic task. Femtech. Technology, products, and services that focus on improving women's health and fitness have found a golden opportunity in a market that had been largely neglected by most health care providers. We expect the success of companies such as Nurx and Pill Club to expand to many new players and categories in pregnancy/motherhood, reproductive health, gynecological, and overall health. Femtech companies benefit from the dual advantage of a gap in the market and new digital therapeutics technologies that have made such services easily accessible. We also see femtech technologies as potentially enabling disadvantaged populations, particularly low-income women. Fitness and Wellbeing. We are looking for new models that take advantage of video, app, and wearable technologies to provide remote fitness training. Peloton, Tempo, Whoop, Apple Fitness+, and others are examples in this category, which we believe has room for significant expansion. We believe the fitness market is not one-size-fits-all but rather segmented by budget, goal, lifestyle, and location. The essential characteristic of a successful platform is to motivate and make the training comfortable while showing results, thus closing the loop on the consumer's effort. We are more interested in equipment-light models that don't require significant investments from consumers and are available to large swaths of income groups.
Using a few wearables and minimal equipment, with the right content and mix of automated, group, and individualized training, a company can achieve significant market share. Enterprise Cloud Within the enterprise market, we believe Cloud adoption has clearly defined the next areas of growth. We are specifically interested in three segments: 1. Cloud Data Platforms — We believe there are opportunities in this space to disrupt the on-premises data platform vendors by building these solutions as SaaS applications on top of the IaaS services of public cloud platforms. For example, Snowflake is disrupting the data warehouse market by creating a Teradata-equivalent product packaged as SaaS. The advantages are that both the initial costs of engagement and the development costs are significantly lower given the leverage of public cloud services. 2. Collaboration tools — Given the mass adoption of remote work, there is an enormous untapped opportunity for integrated platform plays. Current tools are often disjointed and require lock-in to a vendor's platform. We believe there is white space for an independent vendor to create a comprehensive collaboration platform leveraging AI/ML to improve productivity and efficiency and to provide analytics for measuring remote work. 3. Bio-informatics Data Platforms — We have observed that the majority of bio-informatics platforms are proprietary and vendor-specific. Given the rise of gene sequencing and the massive data sets created, we believe there is an opportunity for an open and extensible platform optimized for the storage and data mining of genomic data. EdTech and Re-training We believe online education, as well as specific job training (beyond the usual STEM categories), is not only crucial but will be in high demand over the next ten years as we transition from a mostly service industry to more automated and cloud-based models.
We are looking for new platforms such as Income Share Agreements (ISAs) and the next generation of online teaching models that target re-training the labor force for new jobs. We also believe that the existing EdTech platforms are inadequate and need more collaborative elements within the Learning Management Systems (LMS). One area that we recently invested in is peer-review and collaborative content creation provided by Canada-based Kritik.
https://medium.com/think-ventures/where-to-invest-in-2021-c33c15886fca
['Safa Rashtchy']
2020-12-18 20:46:03.934000+00:00
['Trends', 'Investing', 'Business', 'Technology', 'Venture Capital']
2,627
It’s Hard to Stay Friends Online, but It’s Possible
We live in a world where I can FaceTime my mom on my laptop in real-time even though she’s over 4,000 miles away. I can see her new puppy bounce around adorably, listening to his tiny barks as clearly as I could in real life. Simultaneously, I can text my friend Megan who’s living in Australia. She’s asleep right now, but I want her to see these puppy pictures when she wakes up. I haven’t seen her in person for nearly a year, but our friendship is just as strong as ever. I speak nearly daily to folks I’ve never met before — people who I’d consider good friends, people who support me in some of the biggest challenges I face nowadays. I may have never heard their voice, but I know them, sometimes more than I know people I’m friends with in “real” life. Whether by choice, necessity, or convenience, a lot of relationships start, continue and end online. More than ever, people like me are building online relationships. We gain a lot of our interaction from Slack, Facebook, Instagram, WhatsApp. And yet it’s tough. Why? It’s hard to make, maintain, and strengthen online friendships. Online interactions have been rightly criticized for being weaker, less viable methods of keeping friendships on “life support.” From a personal standpoint, I understand. If 93% of communication is nonverbal, then 100% of the limited interactions we have with one another online are only fulfilling 7% of the spectrum of human communication. And that sucks. That’s not great news for the vast range of people we interact with online — but I’m not talking only about the “friends” that you speak to once a year when Facebook prompts you to wish them a happy birthday. Even the long-lived, fulfilling, rich relationships you’ve spent years or even decades cultivating might transition to the online world where they die unless you're careful. 
I know when I work from home, many times the conversations I have with coworkers are stilted or awkward, forced into an unnatural medium when they’d be far more straightforward in person. When I’m comforting friends who have experienced tragedy, bereavement, job loss or even just had a bad day, it’s more difficult to gauge the social cues. In person, I know when to stay silent, and when to prompt the conversation. Online, my only cue is the three dots in a chat box that might come up and disappear as my friend struggles with what to say. screenshot from author While in university, people would frequently gather in my room to have tea and biscuits (I went to a British uni) — we’d chat and laugh and joke. I’d never invite the same group of folks into a video call or chat group to do the same. There are some things which are facilitated with physical props and feel awkward to do online. Finally, it’s hard to be friends with someone when all you see is their highlight reel: their ups and even higher ups, as they are engaged, promoted, vacationed, as they enter parenthood or get their nails done. In real life, I see a much more nuanced view of my friends. I see them tired, angry, hungover and exhausted along with exuberant, thrilled, exultant, and tipsy. There’s only so much success you can see in your friends, unmoderated by the day-to-day inanities we all experience, without starting to feel at least a little jealous. At the end of the day, it’s hard to make friends online, and harder in some ways to keep those friendships that have migrated to the digital world alive. Not only that, normal faux-pas that would test but not break an in-person friendship have a far greater effect on your online relationships. But it’s necessary. I’ve written before about how much I appreciate and love the technology that lets me keep in touch with far-flung friends and relatives. 
When you’re thousands of miles and five time zones away from the people you grew up with, it’s tricky to rely on in-person interactions, expensive phone calls, and late-arriving letters to stay in touch. Photo by Isaac Smith on Unsplash Flights are cheap, jobs are global, and I’m not so rich in meaningful in-person friendships that I can turn online ones down just because I rarely see them in person. Our worlds are growing wider, and to eschew digital communication altogether seems not only unnecessary, but actively harmful to ourselves, our relationships, and our communities. Staying friends with people online is hard. But it’s possible. Take the basic tenets of friendship further. Think about every basic characteristic that you like in a friend. For me, I like people who show up when they say they will, who listen well, who are funny, who reach out to check in with me, who support me, and who are honest. None of those things rely on in-person communication. It just so happens that most of those things are easier in person. So to keep your online friendships strong, you need to identify what traits you appreciate and admire most and try to take them to the next level online. If you say you’ll show up to a video call, don’t drop out at the last moment. Show up and be present. If you can, call people. Video call. Group call. It’s free if you have a decent internet connection and it makes all the difference to get back some of that 93% of communication you’re missing with only texts. Meme from Halloumi Memes for the Gloomy Teens Facebook group Things like sharing memes can be a small and silly way to reach out. When someone tags me in a meme or sends me one, it’s a way for them to say they thought of me and wanted to let me know. Though it might seem silly, especially to older generations, memes are a low-commitment, high-result way to keep up with friends.
I regularly tag my friends in pictures of cute cats on the Catspotting Facebook group, and am frequently tagged in return. Start traditions. One group of friends I know of sends pictures in a WhatsApp group every time they see a consecutive number — so one person photographs the first number one they see, then a number two and so on. While some of the chat may just be people sending pictures of highway exits, it prompts conversations — where are you going, what are you up to, why is that person selling 101 eggs. Give yourself excuses — no matter how silly or serious — to stay in touch. Listening is more difficult online, and consequently more important. In chats, it’s so easy to “talk over” someone simply by typing out whatever you want, whether or not it’s related to the conversation. Sometimes you’re grocery shopping, or you’re interrupted in the middle of your chat, or for whatever reason, you have to put your phone down and you might forget to come back to the chat. It’s so important to tune your virtual ears to your friend who’s typing to you, and even if you do it asynchronously over a span of ten hours, ensure you’re giving it your all. Stay on topic. Ask open-ended questions. Reflect back to them what you’re understanding — the written medium is easier to misinterpret, after all — and even if you can do nothing else, tell them you’re ready to give them your time. When it’s so easy to text people, it can feel like an insult when you don’t spare those thirty seconds to send a quick text. Make sure you’re on board with what the other people you’re friends with online want. If you only ever want to vent to them, it might be draining for them to stay in touch with you. If you’re constantly offering solutions when they want emotional support, they’re not going to feel heard. Communicate your expectations for what you both want out of the friendship early and often. Being friends online is harder, but it’s worth doing.
All of these things are what we do far more instinctively in person than we do online. In my opinion, online friendships aren’t radically different to those that are predominantly in person. It’s just easier in some respects — easier to feel closer to people while doing none of the work, easier to send a text to stay in touch. But it’s harder in others. You won’t see these online friends at parties, you won’t run into them grocery shopping. You’ll have to make the effort. One of the hardest things for me is that many, many times, I’m the friend who reaches out. I send the text, I make the call, I ask for the meetup. It hurts to think that I value the relationship more, but it’s something you might have to accept. Many of your relationships now may not be as even as you think — putting them into the digital world just makes that gap more obvious. You’ll have to make the choice, as I did, whether it’s worth swallowing your pride and being the keener friend, or whether it’s not. I don’t think we’ll return to predominantly in-person relationships any time soon, unless we see some kind of apocalyptic post-technology world. Instead of complaining about how online relationships aren’t as good, or as easy, or as meaningful as in-person ones, I prefer to work on improving all the relationships I value, whether in person or not.
https://zulie.medium.com/its-hard-to-stay-friends-online-but-it-s-possible-852fac403938
['Zulie Rane']
2019-07-09 10:10:00.797000+00:00
['Relationships', 'Friendship', 'Technology', 'Social Media', 'Communication']
2,628
Ensuring Tech is an Equalizer
Over the past 8 months, Fearless has been fortunate enough to continue working through the COVID-19 pandemic. Our teams have been able to transition to working from home, and our projects and contracts have continued, allowing us to keep building software with a soul. But the experience of tech employees and other digital-first companies like Fearless is not the same as everyone else’s in our community. “You think about this economy today and we can just hop on a computer and work. People are working globally, and those with the ability are able to get up and work from anywhere they choose. But there are also so many people who don’t have access to reliable internet or devices,” said Fearless CEO Delali Dzirasa during a panel about tech’s role in economic responsibility. “While we’re sitting here in a virtual conference from the comfort of our homes, some people can’t even do their schoolwork. That is ridiculous in 2020.” Fearless is focused on the communities and people who are being left out; they are the people who use our technology and benefit from the improvements made on our projects. “Our focus is the user. Yes, many of our contracts are with the government, but they are tasked with serving the people. This is about people who are struggling to access their healthcare and benefits. Those are the people who we work for and who are always front of mind for us,” Delali said. More companies need to make community and people a core part of their mindset and business. There has always been potential for technology to improve our lives. Think of how we deliver information. 150 years ago, if you wanted to share something long-distance, your best option was mailing something that traveled by horse and buggy. Now mail takes days instead of weeks, and your information can get to someone in moments if you send a text, email, or social media direct message.
Now we need to make sure tech is supporting everyone, not just a small group. There are good things happening in tech, but all companies need to accelerate and push those efforts forward. “This idea that community and giving back is a cute add on to a company, and not in its DNA, is really problematic. You shouldn’t do something only to check the box of corporate responsibility. At Fearless we have three focuses: customer, culture (our team), and community. All of them must work in concert for us to be a great company,” Delali said. Talk is one thing, but action is another. A core component of Fearless’ strategic plan is our 50/50 goal. By 2024, we plan for our member representation to be at least 50% women and 50% minorities. Did you catch the most important word in that last sentence? Plan. We are actively working towards our 50/50 goal and apply the thinking we’d use for any other business goal to this initiative. We know technology can be the great equalizer and we have a strategic plan to achieve our goal: Build Talent: Through our community work and partnerships, we are working to create talent, not simply hoard the existing diverse talent in our area. By creating space for the next generation to learn and succeed in this industry, we can create a larger talent pipeline in our cities. Amplify the voices around us: As we rise as a company, we can bring others up alongside us. We want to give the Black voices in our company and in our cities the space to share and be heard.
Provide exposure: We invite our city’s next generation of leaders into our spaces and share our craft so they are exposed to the tech community long before their first job. When communities of color see themselves in the tech industry, we can begin to break down the barriers and fear of the unknown to make tech feel accessible. “People see us and say we’ve done a good job of building a good company, and maybe they say, ‘Well you’re black so that’s why you were able to do it.’ But no, we planned for it,” Delali said. “Companies have goals and plans and tactics but when it comes to corporate responsibility, it’s not often executed the same way. So it’s really easy to forget when it’s not in your long-term plans.” Making sure you’re a good corporate citizen will add to your bottom line, not detract from it. Studies show that companies perform better when there are diverse thoughts, perspectives, and experiences in the spaces where decisions are made and work is done. At Fearless, that diversity and those differences in opinion and experience are critical to our work. The software we build is used by millions of people, so we must design tools that will be useful to millions of different people. Having a homogeneous team makes it harder to create something for a non-homogeneous population. But saying you want to hire diverse teams and create an inclusive environment is very different from doing the work to make that happen. “How do you bridge the gap and get people in the room together that haven’t traditionally been in the room together? How do you get people to build new relationships so they can extend opportunities?” Delali said. “It is on business leaders to be good translators and get conversations and connections happening in the space from the boardroom to the community.” Tech leaders often discount the fact that there are so many people who are able and can power their company, but the dots aren’t being connected.
The inability to activate people who can help you hurts your company and your bottom line. “There is a community of folks who want to work; they want access, and they’re getting slapped on the hand multiple times, whether it’s redlining, or the education is bad, or they don’t have access to the technology, and, again, they can’t have access to these jobs. I feel there is a personal responsibility here, and a question of how we bring all people and communities together.”
https://medium.com/@fearlessbmore/ensuring-tech-is-an-equalizer-af45a4d5d240
[]
2020-11-24 00:38:27.002000+00:00
['Technology', 'Diversity In Tech', 'Diversity And Inclusion', 'Baltimore']
2,629
Augmented Reality using Tango
What is Tango? Tango is a technology platform developed and authored by Google that uses computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. This allows application developers to create user experiences that include indoor navigation, 3D mapping, physical space measurement, environmental recognition, augmented reality, and windows into a virtual world. (Source: https://en.wikipedia.org/wiki/Tango_(platform)) Four devices supported the Tango technology at the time this article was written: The Yellowstone tablet (Project Tango Tablet Development Kit), a 7-inch tablet with full Tango functionality, released in June 2014 (this is the device we are using in our showcase). The Peanut, the first production Tango device, released in the first quarter of 2014. Lenovo’s Phab 2 Pro, the first smartphone with the Tango technology, announced at the beginning of 2016. The Asus ZenFone AR, the world’s first 5.7-inch smartphone with both Tango and Daydream by Google. The idea behind working on a showcase using Tango was to learn more about this smart and powerful device. The showcase consists of two requirements: Place a virtual Modeso logo on a real-world surface (e.g. a wall). Make real-world objects in front of the virtual object — the Modeso logo in our case — occlude it, rather than be covered by it. In the red rectangle you can see the goal we want to reach The Tango project consists of three core technologies: motion tracking, area learning, and depth perception. Our showcase requires both area learning and depth perception. Area learning is essential for learning the surrounding environment, and it depends on depth perception; you cannot enable area learning in an application without also enabling depth perception. First Requirement Place a virtual Modeso logo on a real-world surface (e.g.
wall) by touching the screen. While the device camera is pointing at some flat surface, the virtual Modeso logo should be placed on that targeted surface. We can simplify it through the following approach: Detect the position (x,y) of the touch on the screen and convert it to (u,v) coordinates. Get the color-to-depth pose by calculating it using the calculateRelativePose method available via the Tango support library provided by Google. Get the last valid depth data provided by Tango. Get the intrinsics of the currently connected Tango camera. From the data above, the intersection between the touch point and the plane model can be calculated. This can easily be done using a method from the support library called fitPlaneModelNearClick. This method will return null if there is no clear plane on which to calculate the point. Get the transform setting OpenGL as the base engine and Tango as the target engine. The transform is needed to calculate the virtual object’s poses in the OpenGL environment. From the transform and the IntersectionPointPlaneModelPair we can now calculate the object’s pose, which will be used in the OpenGL renderer to render the object on the screen. To make things easier while working with OpenGL we were using the Rajawali library. Google provided examples achieving the same concept with some extra functions and features. Result demo of first requirement in the Swiss office Second Requirement From the first requirement we already know the depth of the logo in the 3D world. With the help of depth perception and the provided cloud points, we can filter these points and get the subset of points with a depth value less than the depth of the logo, keeping in mind the quaternion of the logo. Using the Tango update listener we can use the callback onXyzIjAvailable. This callback is invoked when new cloud point data becomes available from Tango. It is important to know that this callback does not run on the main thread.
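The filtering described above can be sketched in plain Java, independent of the device APIs. This is an illustrative simplification, not the showcase's actual code: inside onXyzIjAvailable one would copy the TangoXyzIjData.xyz buffer into a plain float array and run a filter like this off the callback thread (the class and method names here are hypothetical):

```java
/**
 * Sketch of the per-cloud filtering step: keep only points whose depth
 * (the z component, in meters, in the camera frame) is smaller than the
 * depth of the virtual logo, so only occluding points remain.
 */
public final class PointFilter {

    /** points is a flat (x, y, z) buffer, as delivered by the depth callback. */
    public static float[] filterNearerThan(float[] points, float logoDepth) {
        float[] tmp = new float[points.length];
        int n = 0;
        for (int i = 0; i + 2 < points.length; i += 3) {
            if (points[i + 2] < logoDepth) {   // z component of this point
                tmp[n++] = points[i];
                tmp[n++] = points[i + 1];
                tmp[n++] = points[i + 2];
            }
        }
        float[] out = new float[n];
        System.arraycopy(tmp, 0, out, 0, n);
        return out;
    }
}
```

The surviving subset is what gets handed to the renderer as the masking point cloud; keeping the loop allocation-light matters because the next cloud only arrives once the callback has returned.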
Google issued a warning about these callbacks, meaning you have to be very restrained while working inside the callback, because you won’t receive new cloud data until you have returned from it. So, for example, if you do heavy work inside the callback, it will affect the performance of your readings. Every time we receive new cloud points from Tango, we filter these points depending on the depth of the logo and update the Rajawali renderer with the new data. Belal covered with the virtual Modeso logo and rendered cloud points The virtual Modeso logo in front of an iPad, which was detected by Tango The second step is to show a real object in front of the model to get the occlusion of real-world objects in front of the logo. In order to make this possible, three steps are required: get the depth matrix of the model, get the intersection points, and add a mask to the intersection points. We used the Rajawali method calculateModelMatrix from the ATransformable3D class, which takes the current point cloud matrix. With the resulting points we could apply the mask. We were lucky to stumble upon https://github.com/stetro/project-Tango-poc which was a great reference and gave us the right idea on how to implement the masking. Unmasked hand in front of the flipped virtual Modeso logo on the wall Approaches Here we will go a little deeper into the different approaches used and proposed for achieving the target goal. Depth Mapping & 3D Reconstruction This is the technique used by the repo at https://github.com/stetro/project-Tango-poc and it is based upon depth mapping. In short, a depth map is a regular image (often a grayscale image) that contains information about the distance of the surfaces of scene objects from a specific viewpoint/camera. The color of each pixel of the depth image represents the distance (depth) of this pixel in the real world from the camera, e.g.
in grayscale depth images dark areas represent points closer to the camera while lighter areas represent points farther away (or the reverse, depending on how you encoded the image). How will depth maps help us achieve the goal? If we have a depth map for our camera view, i.e. if we know the depth of each pixel of the camera view, we can perform what is called the “Depth Test” or “Z-Buffering” in computer graphics. In this algorithm, while rendering your 3D content, the hardware compares the value of each pixel of the rendered 3D content to the corresponding pixel in the depth map and decides whether it should draw this pixel or not, according to whether it is occluded, i.e. whether there is another object at the same pixel closer to the camera. Basically, that’s the idea. How is this done in code? The first step is to compute the depth map of our view using the Tango color camera. To achieve that we have to: 1 Initialize the Tango service as documented using the C API https://developers.google.com/tango/apis/c/ 2 Configure the Tango device to use the color camera: 3 Configure the Tango device to use the depth camera: 4 Configure the Tango device to connect local callbacks for the different feeds from the device: 5 In the “OnFrameAvailableRouter” callback, which is called when a new camera frame arrives and the frame image buffer is passed to it, we construct our depth image from the camera image: But what are we doing inside this callback exactly? First we convert the received image buffer (TangoImageBuffer) from the YUV color space (the default) to the RGB color format. This is done using the YUV2RGB function for each pixel. Next we construct an OpenCV matrix (image) over the resulting RGB image and create a grayscale version of it. After that we apply a GuidedFilter to the OpenCV image. Guided filter is an edge-preserving smoothing filter. See here for more.
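To illustrate, a minimal self-guided variant of this filter (guide image equal to the input) can be written in plain C with naive box averaging; the actual implementation uses OpenCV, and the window radius `r` and regularization `eps` here are illustrative:

```c
#include <stdlib.h>

/* Box mean with clamped borders: dst[i] = average of src over a
 * (2r+1)x(2r+1) window, using only in-bounds pixels. Naive O(w*h*r^2). */
static void box_mean(const float *src, float *dst, int w, int h, int r) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float sum = 0.0f;
            int cnt = 0;
            for (int dy = -r; dy <= r; dy++) {
                for (int dx = -r; dx <= r; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                        sum += src[yy * w + xx];
                        cnt++;
                    }
                }
            }
            dst[y * w + x] = sum / (float)cnt;
        }
    }
}

/* Self-guided filter (guide == input), single channel in [0,1].
 * q = mean_a * I + mean_b, with a = var/(var+eps), b = (1-a)*mean. */
void guided_filter_self(const float *I, float *q, int w, int h,
                        int r, float eps) {
    int n = w * h;
    float *m   = malloc(sizeof(float) * n);  /* mean of I   */
    float *m2  = malloc(sizeof(float) * n);  /* mean of I*I */
    float *a   = malloc(sizeof(float) * n);
    float *b   = malloc(sizeof(float) * n);
    float *tmp = malloc(sizeof(float) * n);
    for (int i = 0; i < n; i++) tmp[i] = I[i] * I[i];
    box_mean(I, m, w, h, r);
    box_mean(tmp, m2, w, h, r);
    for (int i = 0; i < n; i++) {
        float var = m2[i] - m[i] * m[i];
        a[i] = var / (var + eps);
        b[i] = (1.0f - a[i]) * m[i];
    }
    box_mean(a, tmp, w, h, r);  /* tmp now holds mean of a */
    box_mean(b, m, w, h, r);    /* m   now holds mean of b */
    for (int i = 0; i < n; i++) q[i] = tmp[i] * I[i] + m[i];
    free(m); free(m2); free(a); free(b); free(tmp);
}
```

On a constant region the local variance is near zero, so a ≈ 0 and the output is just the local mean; across a strong edge the variance dominates eps, a ≈ 1, and the pixel keeps its original value, which is what makes the smoothing edge-preserving.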
In effect this filtering smooths the occluded parts of the 3D object so it looks more real. The resulting filtered grayscale image is used as our depth map. Based on this depth map we will try to construct a 3D representation of the camera view. We now have the Z coordinate of each pixel in the camera view and need to know the other two coordinates (X, Y). Here Tango supports us with an equation to translate 2D coordinates to 3D ones and vice versa using the camera intrinsics. Given a 3D point (X, Y, Z) in camera coordinates, the corresponding pixel coordinates (x, y) are: x = X / Z * fx * rd / ru + cx, y = Y / Z * fy * rd / ru + cy. After solving the previous equation for X and Y we now have all three components X, Y, and Z, so we can construct a 3D-point version of the camera’s real view. For that we construct a vector of vertices and fill it with the data as the last required step. With that all in place we can perform the “Depth Test” in OpenGL, as we have the 3D model to render and a 3D representation of the camera view. To render the parts closer to the camera and occlude the farther ones we only need to check them against each other: This is what you will get: 3D marker occluded by office chair Grayscale depth image (darker means closer) Conclusion The results were not accurate enough to proceed with the targeted scenario. We noticed that the Tango heats up heavily when used for more than five minutes; Google already mentions this problem. Area learning uses heavy processing power, causing the processor to heat up, and to protect the processor the device reduces its clock speed, which has a negative effect on the readings and the data produced by the Tango sensors in general. Furthermore, the object detection is very imprecise; the camera is not yet good enough to produce good results from image processing.
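Solving the intrinsics equation above for X and Y gives the back-projection used to lift each depth pixel into 3D. A minimal C sketch, with the distortion ratio rd/ru taken as 1 for simplicity (the real Tango intrinsics include it):

```c
/* Pinhole camera intrinsics: focal lengths and principal point, in pixels. */
typedef struct { float fx, fy, cx, cy; } Intrinsics;

/* Forward projection: camera-frame point (X, Y, Z) -> pixel (x, y). */
void project(const Intrinsics *K, float X, float Y, float Z,
             float *x, float *y) {
    *x = X / Z * K->fx + K->cx;
    *y = Y / Z * K->fy + K->cy;
}

/* Back-projection: pixel (x, y) plus its depth Z -> camera-frame (X, Y). */
void unproject(const Intrinsics *K, float x, float y, float Z,
               float *X, float *Y) {
    *X = (x - K->cx) / K->fx * Z;
    *Y = (y - K->cy) / K->fy * Z;
}
```

The depth test then amounts to comparing the Z of each rendered fragment of the virtual model against the Z recovered for the same pixel from the depth map, and discarding the fragment when the real scene is closer.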
Even with a better camera, the image processing would be very expensive in processing power and would impact overall performance. On the other hand, the Tango is very good at augmented reality in general and at placing objects on walls or floors without the need for markers. In a direct comparison with Vuforia and its markers, it is the winner without question. But for real-world, real-time object occlusion it still seems to lack the needed functionality. Credits: Modeso’s Mobile Engineers Belal Mohamed, Mahmoud Abd El Fattah Galal & Mahmoud Galal.
https://medium.com/modeso/augmented-reality-using-tango-b30b3b6806a
['Samuel Schmid']
2017-07-14 07:06:07.689000+00:00
['Software Development', 'Augmented Reality', 'Mobile', 'Tango', 'Technology']
2,630
This Obscure Truck Simulator Had an Open World in 2001
This Obscure Truck Simulator Had an Open World in 2001 With Cyberpunk 2077 around the corner (hopefully), a case of chronic doomscrolling led to my curious self discovering a fragment of CD Projekt Red’s past. While the game developer’s original life as a Polish translator of videogames is well-known, the fact that they almost made a Polish version of Hard Truck 2: King of the Road caught my attention. It was no diamond in the rough, but it was still a game that I have fond memories of. After a bit of time traveling, I found that this little-known game was ahead of its time. 70 miles to explore with some big rigs? Check. Weather systems and cops with speed traps and helicopters? Check. Circuit races with trucks? You bet. Hard Truck 2 sets you up for the long haul. As a trucker, your goal is to deliver all kinds of goods across the map in a race against other truckers. Show up late and you’ll be paid a pittance compared to the early birds. Every now and then you could get an invite to a good-old circuit race. Winning a race brought you a license to hire other truckers, letting you form a company of your own. The goal of the game? Become the biggest player in the delivery market. The venerable BMW M5 as it appears in the game. Source: JoWood. Trucking along In Hard Truck 2, money makes the world spin. Everything has a cost, from refueling to paying for being caught by a speed trap, so cash keeps your truck in the race. Speaking of trucks, the game lets you pick from over two dozen vehicles, from saloons to 18-wheel monsters, each with its own unique handling and speed stats. Want a snazzy new ride? They don’t come for free (but you could get stolen ones on the cheap). You earn money by fulfilling deliveries (after bidding for them) and racing with rig-masters through obstacle courses and other truck racing shenanigans. Unlike most racing games, you can’t just get free repairs at the end of each race. Those nasty dents from screwing up your parking?
That’s going to cost you. Putting the pedal to the metal and flooring it all the way from point A to point B won’t cut it. Speed traps and the weather will test your patience at every turn, and that’s before the cops are involved. You’ll have to stop to service your vehicle or refuel it. And if you crash, the costs will mow through your pocket money. Once you hire a few employees (incredibly, you can encounter them on the road), your cash flow gets a massive boost but it’s a hassle nonetheless. Trucks up your sleeve Apparently, you could use your radio to negotiate terms with the mob and the police on your tail but the game didn’t exactly give me a how-to guide. You could even talk rival truckers out of their feud. But back then, I didn’t even have an internet connection for multiplayer, let alone a silly tutorial. Those were the days when learning how a game worked was half the fun. Competing to reach the drop-off point first was an experience quite unlike any other racing game out there. For starters, even if only a bit of paint is chipped from your vehicle, the goods you’re hauling take a beating. Add to that inevitable repair costs and you can see where this is going. If you want to make a dent in the delivery business, caution is the name of the game. Despite being the very soul of vigilance (I even stopped at red lights) I still made the cops antsy fairly often. Carrying precious cargo would entice members of the mob to put some holes in your fresh paint job, ruining a potential “peaceful trucking” experience. Again, the game supposedly had a store where one could purchase scanners and protection systems but I rode it out in the dark. Some upgrades could even render you invisible to the police. How’s that for stealth? The map hid plenty of secrets. Source: GOG. Truck or treat If you were expecting the game’s visuals to remotely compete with Forza or Gran Turismo, I’m sorry to disappoint you. The pixelated mess you see above is what you get.
By 2001 standards, I can’t really complain. Fortunately, exhaust fumes and tire marks are a part of the visual package. The pixels on your windscreen that bear a passing resemblance to real-life objects often have to contend with raindrops, fog, or the blinding sun, thanks to Hard Truck 2’s impressive weather system. This game even has a day/night cycle. While the environments seem diverse at first, you’ll be treading the same paths (even the secret ones) often enough that they become monotonous. Pesky bugs smeared rock salt on those graphic wounds. They marred the experience more often than I expected them to, leading to unexpected crashes and glitches that would make me forsake my goods in transit. Created by the Russian rock band Aria, the soundtrack was pretty good for a truck simulator but it pales in comparison to what popular Need for Speed entries used to offer. Overall, it’s a fairly solid package, especially for a hardcore truck simulator. Few games have the same kind of in-depth systems that are found here, which apply quality additions to the highway trucking life, from cheap stolen trucks to an upgrade system that lets you get away with carnivorous capitalism. The trucking genre has its fair share of fans: the game made over $2.1 million and was among the top 100 games sold in the US at the time. It’s no surprise that CD Projekt Red agreed to work on a Polish version of Hard Truck 2. Unfortunately, it was canceled in August 2002. Either way, King of the Road is a game that outlived its flaws with systems that made getting from point A to B in a truck a cautious yet memorable ride.
https://medium.com/super-jump/this-obscure-truck-simulator-had-an-open-world-in-2001-99f8a483bd94
['Antony Terence']
2020-11-07 23:56:40.037000+00:00
['Features', 'Gaming', 'Digital Life', 'History', 'Technology']
2,631
Minimalist Writing Devices, #3: Raspberry Pi 400
My Covid-era 2020 Christmas present to myself was an eye-catching red and white keyboard with a computer inside: a Raspberry Pi 400. Like a 1980s-vintage Commodore 64, all it needed was a cable connection to my monitor and I was sitting in front of a fully operational Linux computer. Cost: $70 US for the unit alone, or $100 for a complete kit that includes the keyboard/computer, color-coordinated mouse, HDMI video cable, and a book, The Raspberry Pi Beginner’s Guide. As a writer, I’m fascinated by low-cost, minimalist writing devices, and the Raspberry Pi 400 (RPi 400) delivers more power per dollar than any computing device I’ve yet encountered. Let’s take a look. Introducing the Raspberry Pi 400. What you get in a Raspberry Pi 400 is not just an attractive keyboard, but a full 64-bit ARM CPU computer inside, with 4GB RAM, a microSD slot to store the operating system and local data, 2 micro-HDMI ports, 1 USB-2 port, 2 USB-3 ports, a USB-C port for power, a Gigabit Ethernet port, built-in WiFi and Bluetooth, and a GPIO (general purpose input output) 40-pin port. The GPIO port is for makers and experimenters — those who create things such as robots and robotic structures, specialty electronic circuit boards, art and light installations, and much more. To this crowd the Raspberry Pi is at the heart of many a specialty project. For them Raspberry Pi is as common a brand name as Dell, HP, Lenovo, Acer, or Asus is to most home computer users. Chances are you’ve not heard the Raspberry Pi name bandied about much in writing circles … yet. With the RPi 400 that may be about to change. This is the first Raspberry Pi model that is a ready-to-boot-and-use Linux computer with appeal beyond its usual user base. I can see parents picking up one or two of these for their kids. It’s an inexpensive and great way for anyone who has heard of Linux, but may have been shy about trying it, to get a hands-on introduction.
The purpose of this review is to examine this device as a potential minimalist writing tool that could be used by someone with no previous experience with a Linux computer. Setting Up the Unit The RPi 400 arrives with a 16GB microSD card inserted, ready to boot up Raspberry Pi OS as soon as you add a monitor or TV, and a USB mouse for convenience. The first time you boot the system it prompts you for your country, language, time zone, and a new password. The RPi then scans for a WiFi connection and prompts for its password. Once set up, the interface looks similar to Windows or MacOS, with the task bar at the top instead of the bottom. Navigation is simple: click on the red raspberry icon in the top left corner to display a menu from which you may launch any of the included programs or apps. The RPi 400 comes loaded with programming editors, text editors, and the LibreOffice suite, which includes a Word-like word processor. The default browser is Chromium, the open-source version of Chrome. A file manager allows you to browse through your folders to copy, move, delete, or select files. The operations are intuitive and familiar to any Windows or Mac user. And that’s it! You’re ready to write. The RPi 400 as Writing Device Because I use Google Docs for much of my writing, I fired up Docs for this review and found the RPi a very comfortable device to work with. The keyboard is full size, minus a numeric keypad. Because it’s weighted with a computer inside, it has enough heft to feel solid as you work. The keys are well spaced and the layout is normal, with well-positioned arrow keys in the lower right-hand corner. At this price you don’t get a first-class keyboard, but it’s completely serviceable. The one caution with the keyboard is that you occasionally get keyboard bounce — two characters appearing with one press of the key. The bounce is infrequent enough that it’s not a show stopper, but you need to keep an eye on the output for occasional misbehavior.
Some of the bounce may be determined by your touch on the keys. I’m a heavy-handed typist, raised on upright typewriters and the original IBM PC keyboards. The RPi 400 is not a speed demon. It has enough zip that it doesn’t lag while you type, but it’s not a sports car. It’s more like a cute VW Beetle with a rear engine. Fun to use, and it gets you there. Who is the Raspberry Pi 400 for? The RPi 400 is a variant of the small Raspberry Pi 4 used in maker projects. As such it will certainly be of interest to makers and experimenters, but putting the computer inside the keyboard opens the device to a much wider audience. Parents can purchase this unit for their kids as a way to learn programming, or just for general use. It’s a little sluggish on websites that include heavy graphic material, but that’s to be expected. Writers may be interested in this unit if they’re in need of a cheap computer and already have a monitor or HD TV it can attach to. At this price, it could serve as a complementary machine to a laptop or tablet, or even a unit you might want to leave at a site you visit regularly, such as a cottage or other external location. Overall, the Raspberry Pi 400 is cute, highly usable, and cheap. For most writers I would recommend the $100 kit over the $70 standalone model. The kit comes with a matching USB mouse plus the critical HDMI video cable. The Command Line Although you don’t need to know much about the included terminal app, which is similar to the Windows Command Prompt and nearly identical to the Mac Terminal program, you will need to use the command line occasionally to make certain your software is up to date. This is done by starting up the terminal and typing the following two lines at the command prompt:
$ sudo apt update
$ sudo apt upgrade
Running this once a week or so will keep the Raspberry Pi 400 software and operating system up to date with the latest upgrades and security updates.
Bottom Line As you can tell, I’m enthusiastic about the Raspberry Pi 400 as an inexpensive, minimalist writing device. The bang for the buck is incredible and there’s nothing difficult about using a Linux computer for writing. All the usual amenities are here, packed inside a keyboard. The unit, while easy enough to carry to other locations, is not a portable. This is a small desktop computer waiting for you when you’re ready to create the next best seller. Happy typing!
https://medium.com/@genewilburn/minimalist-writing-devices-3-raspberry-pi-400-1bece8eb74f4
['Gene Wilburn']
2020-12-25 10:33:54.847000+00:00
['Computers', 'Technology', 'Linux', 'Raspberry Pi 400', 'Writing']
2,632
Sci-Fi Thriller I — The Election of 2052
I tried to change the subject. “You played today, Bob?” I knew what he’d say. “Naw. No time. “Besides, this isn’t politics,” he said. “It’s science.” “Don’t tell me he’s going to have the Democracy-implants put in,” I said. “I will tell you.” “That’s politics Bob,” Joe growled. “Indirectly,” Robert countered. “Of all things!” Joe exclaimed. “All the candidates should be forced to get the Dem-plants!” “Why on earth! Everybody’s already socially imprinted with the idea of democracy.” It’s true. Joe just can’t help himself. Robert said, with his mouth full, “It’d be the best thing. Remember what T.Rump did back then. He wouldn’t have attempted it if they’d had Dem-plants.” Of course, I said, “It just so happens I’m teaching T.Rump this week. And I can tell you that there’s no implant out there that would have prevented Trump from doing anything he wanted.” I decided to give them a lecture. “It wasn’t that T.Rump didn’t have a concept of democracy. The problem was that he was a flawed human being. A lot of it came out when his tax returns went public. Sure, he falsely claimed the election was a fraud. Half the country knew it was false at the time. But that wasn’t because he didn’t want a democracy. That was because his stunted personality couldn’t accept the possibility of losing. That’s why he ignored the start of WW II½.” I finished with, “It didn’t take historians long to conclude he was totally, completely, and absolutely the worst president the U.S. ever had. It had as much to do with all his corruption about the money and pardons, as his fundamentally flawed psyche.” Joe joined in. He’d been writing a lot about the implants lately. “Factually speaking, they had implants back in 1997. Just nothing like what we have today. But it’s really questionable if any implants we have today would’ve made any difference at all.” My thoughts exactly. I imagined that Robert probably would have voted for Trump, but I didn’t say anything. 
The cute waitress refilled our drinks, but Robert was too excited to look her over. Robert said, “I don’t know about 1997. That was way before my time. But I heard they were putting implants in monkeys when T.Rump was around. So they almost could-a had something like a Dem-plant. “Think of all the agony that would-a saved. He’d never-a tried to overthrow democracy by claiming the election was rigged. And him and his cronies were talking about martial law!” “They could read single neurons back then,” Joe said. “But they had no way to stimulate a single neuron until way later. The implants were minuscule. They were just getting started with DBS on the vagus nerve. That’s what started the whole ball rolling. Trying to cure heart disease and diabetes. That’s where it really picked up.” My education in physics and history didn’t get down to that level of detail, so I didn’t comment. What I did say was, “Democracy the only implant he’s getting, Bob? I doubt it.” “He’ll get the Truth-plant as a matter of course.” Joe practically jumped down his throat. “That won’t make any difference! Your candidate will never tell the truth, implant or not!” “I’ve got you there! The Truth implant will definitely work. That throws your argument right outta the window!” All Joe could come back with was, “So you say.” I interjected, “How can they be so sure they’re connecting to the right neurons?!” Robert claimed, “It’s an exact science.” I was going to say, “Well, you have to know a little bit about science in order to learn something about physics. I’m here to tell you, it’s not an exact science yet. Not by a long shot.” Instead, I got real snarky and said, “You know, Bob? I saw a quote the other day that I think applies.” The opinion of 10,000 men is of no value if none of them know anything about the subject. Robert didn’t take it as I intended it and said, in all good humor, “I thought I was only dealing with Joe!” Then we all paused for a bite and a sip.
https://medium.com/illumination/sci-fi-thriller-i-the-election-of-2052-according-to-marcus-aurelius-57f17d9ca9ca
['Jay Toran']
2020-12-28 21:00:14.154000+00:00
['Politics', 'Biotechnology', 'Future', 'Society', 'Technology']
2,633
Rocketing for a Crowded Orbit
The family, donned in intra-vehicular activity space suits and strapped in a capsule, are ready for 384,400 km. Within minutes their own weight doubles, and in just eight and a half minutes they are completely weightless. An inflatable orbiting habitat rented for their holiday in space! Welcome to the future! It is not new to travel into space using a private space jet. Ever since 2010, when the crew size of the International Space Station was increased, several ordinary people have travelled into space. The idea was novel then but not today. We have many private space leaders coming up, wanting to make space travel and exploration mainstream. Nonetheless, commercialization of space is not just about sending the elite on extra-terrestrial excursions. It is about the space industry transforming itself from monopolies to global space competitors able to be self-sufficient, come up with different innovations, and get space completely commercialized! It is about the aging infrastructure and ideas stepping aside so the technology can breathe again. However, in the pursuit of exploring the future in space, the global space competitors are growing too rapidly. The Space Race — Happening right NOW!! While space has been the subject of human interest and a quest for discovery for a very long time, today we are at a pivotal stage. Many national governments are launching ambitious space strategies. Private-sector players are making major breakthroughs to open the space sector to the common man. Bigelow Aerospace aims to build an enormous cargo space habitat that can accommodate about six individuals for many months. Then there is another company, Sierra Nevada, working on a three-story inflatable space habitat prototype. This inflatable habitat serves multiple purposes, such as a garden for fresh produce for space travelers, a manufacturing facility, a lab, and a hotel.
It is not all about human travel to space… there is a huge set of companies trying to get into data transmission in space as well. Elon Musk has a plan to blanket the Earth in high-speed affordable internet. And SpaceX proved it by launching 180 satellites for internet faster than any other company in the world. This is going to provide ultra-high-speed web access in rural and remote regions around the globe. Instead of beaming data to the ground and back, there are also space lasers that transmit data directly between satellites in orbit. It is possible to transfer hundreds of gigabytes of data at once, making Starlink the fastest data transfer solution available. Furthermore, this ambitious journey has started to become competitive. Now Bezos also wants to launch his own constellation (3,236 internet satellites, to be precise). And OneWeb, yet another company, now bankrupt, sought to increase its satellite constellation to up to 48,000 satellites. Feeling crowded? Not yet? Check this out! NASA’s next big move is to search for life on Mars. It has launched a rover that is also responsible for trialing technologies for future expeditions, including oxygen production. Actually, Elon Musk is pretty confident that it would take only two more years for SpaceX to fly the first cargo mission to Mars, with the first humans landing there by 2026. These latest initiatives suggest that space is an area where we will see significant development soon. It is potentially addressing opportunities with respect to surveillance, mission deployment, cyber, and artificial intelligence. At this rate, by the time the next world war comes around, the war will be fought in space! All jokes aside, space is growing exponentially, and in more ways than you think. The Global Space Economy Space as an investment will impact several industries in a huge way. The global space industry will generate revenue of more than $1 trillion by 2040, up from $350 billion currently.
The most significant short- and medium-term opportunities will come from satellite broadband Internet access. The demand for data is growing at an exponential rate. The largest opportunity will come from providing Internet access to the unserved parts of the world. There is also going to be an increased demand for bandwidth from autonomous cars, the internet of things, artificial intelligence, and virtual reality. In fact, as data demand surges, the cost of wireless data per megabyte will be less than 1% of today’s levels. In addition, today we are using satellites for GPS, navigation, and various other applications. More than half of Earth’s operational satellites are launched for commercial purposes. About 61% of those provide communications, including everything from satellite TV and Internet of Things connectivity to global internet. Second to communications, 27% of commercial satellites have been launched for Earth Observation (EO) purposes, including environmental monitoring and border security. Beyond the opportunities generated by satellite broadband Internet, the new frontiers in rocketry offer some tantalizing possibilities. Packages today delivered by airplane or truck could be delivered more quickly by rockets. Private space travel will become commercially available. Mining equipment could be sent to asteroids to extract minerals — all possible, theoretically, with the recent breakthroughs in rocketry. Adding on, SpaceX has reduced the cost of travelling to space by millions of dollars per seat. Here is an infographic showing the cost from 1961 until the time SpaceX entered the game. Looking at such exciting space explorations and possibilities, I have the following predictions to make for space in 2021 and beyond. 1. SpaceX will face tough competition in Space Tourism Virgin Galactic is ready for their final series of test flights early in the year and expects to begin its space tourism flights soon.
Blue Origin is also planning its first space flight with people this year; it was ready last year but wanted to perform a few more test flights without people on board first. Not just private companies… India and China have their own space programs that will make it much cheaper to visit space. 2. FCC prepares to run public C-band auction The U.S. Federal Communications Commission (FCC) decided to run its own auction of satellite C-band spectrum instead of letting satellite operators handle it. The FCC has emphasized a desire for speed in transferring 280 megahertz of C-band spectrum for use in 5G cellular networks. 3. Satellite servicing and debris retrieval missions will move forward Almost 60 years of space activities and more than 5,450 launches have resulted in approximately 23,000 objects remaining in orbit. This has a negative effect on future launches, and it has been theorized that sending objects into Earth’s orbit could become impossible due to the risk of collision. This debris must be removed from orbit if the space industry is to continue to grow. The European industry is leading the debris removal activity, with ClearSpace, established by an experienced team of space debris researchers from many EU nations, at the forefront. However, with many leaders wanting to have their own constellations, space habitats, or stations, this will either result in collisions or create a roadblock in studying space through telescopes. 4. New small launch vehicles will enter the market Several companies working on small launch vehicles will likely attempt their first launches this year. Virgin Orbit announced its first orbital launch recently. Firefly Aerospace will also begin static-fire tests of the first stage of its Alpha rocket soon.
Other companies, like ABL Space Systems, Relativity Space, and Stealth Space Company, will make progress towards a first launch, as the industry awaits a long-anticipated shakeout among the dozens of companies that have announced plans to build small launchers. 5. Flexible communications satellites will reign supreme Satellite manufacturers have used the past few years of slow sales to invest in technologies that give operators more control. Manufacturers say the ability to offer “flexible” communications satellites that can adjust the power, shape, and position of their beams is now the de facto standard for doing business. Airbus, Thales Alenia Space, and Boeing all rolled out new flexible satellite lines last year. 6. Commercial alternatives will surface to NASA’s Tracking and Data Relay Satellite System The increasing human and robotic space activity in lower Earth orbit will prompt government agencies and commercial firms to invest in networks to relay communications to and from the ground. Many private firms are planning to expand the production and delivery of Inter-satellite Data Relay System terminals, developed with satellite fleet operator Inmarsat. In addition, NASA’s Space Communications and Navigation program office will work to establish public-private partnerships aimed at creating resilient communications and navigation networks. 7. We will print living tissues in space Upcoming long-term space explorations will involve great exposure of humans to space conditions. With increasing distances from Earth, there is no possibility of returning for medical treatment in time. So, to protect human lives and health, the space agencies are looking at bioprinting in space. It promises to offer treatment options for bad accidental injuries that are likely to happen during long-term space exploratory missions and extra-terrestrial human settlements, for example, aid for severe burns or difficult bone fractures. 8.
The human genome will change to support human deep space exploration Increased presence in space will enable us to conduct more medical research in zero gravity. This will provide opportunities to discover new treatments for conditions we thought were untreatable. Furthermore, we may start to see the ability to deliberately alter the human genome to further support humanity’s sustained exploration of space. 9. Micro Satellites The past ten years have seen the nano/microsatellite segment grow by a factor of 10. The industry has matured rapidly, and nano/microsatellites are increasingly being used for commercial applications in earth observation, remote sensing, communications, and more. As operators continue to strike the balance between capability and affordability, future growth may also be split between the traditional nano/microsatellite segment. Space Controversies Even while these innovations are happening in the space market, there are some other controversial aspects of space for you to understand/know: A UFO specialist has reportedly detected remains of an ancient jet engine on the surface of Mars, saying that the advanced technology of a jet engine is proof that Martians exist. Moreover, a UFO has been sighted just recently. The Pentagon formally released the video. Then we have people who think the epic landing on the moon was a hoax. However, it is amazing to know that NASA is making plans to send astronauts back to the moon in 2024, taking along the Gateway — a mini-space station to be assembled in lunar orbit. Conclusion By 2030, I expect almost all businesses across all industries, whether related or not, to benefit from space, with many having dedicated space teams and resources. Organizations will be experimenting — from medical research to manufacturing — in space, introducing new products and solutions into the market. This may include growing tissue and artificial transplants in zero gravity, as well as manufacturing fiber optics for communication.
And it is not just private companies: both China and India have also embarked on human spaceflight missions, each with its own space program aiming to make access to space much cheaper. Be it optical imagery, infrared, hyperspectral, or synthetic aperture radar (SAR), space datasets will have a critical role to play in everything from measuring greenhouse gas emissions and the early detection of and response to natural disasters, through to the monitoring of our forests, oceans, rivers, farmland, and weather. By the way, Elon Musk is planning to launch all of us to Mars by 2050. Yes! One million people, with as many as three Starship rockets launching every day. Learning to do much more with less will be one of the defining mega-trends over the next decade. Lastly, I think we will soon have many master's and PhD graduates in the space ecosystem, because the space race is on!
https://medium.com/@unfoldlabs/rocketing-for-a-crowded-orbit-acc269555490
[]
2021-01-25 19:37:02.245000+00:00
['NASA', 'Space', 'Space Exploration', 'Satellite Technology', 'Technology']
2,634
How We’ll Access the Water on Mars
Any hope humans have for an off-world future relies on several factors for survival. One of the most important? Water. Continuously shipping water across space to resupply astronauts would require extraordinary transportation costs. The next planet humans inhabit will need access to a local supply. Scientists have labored to locate water on Mars, but finding it was only the first step. Now scientists and engineers need to tap into this supply which, given the harsh environmental conditions on Mars, isn't as easy as it sounds. In January, NASA released the Mars Rodwell Experiment Final Report, documenting a series of tests and analyses led by Dr. Stephen Hoffman, Senior Engineer Specialist at The Aerospace Corporation. The team investigated the use of a Rodriguez Well, a concept developed decades ago by the U.S. Army, as one of many approaches for extracting water from the massive ice deposits on Mars. A series of lab-scale Rodriguez Well tests were performed by Hoffman and Alida Andrews of Aerospace at the Johnson Space Center (JSC) Energy Systems Test Area facility. Using Mars-equivalent environmental factors such as atmospheric pressure and density, test results were used to replace terrestrial environmental factors with modeled Martian equivalents. We spoke with Dr. Hoffman about water on Mars, how humans can access it, and what happens next. Is there water on Mars? If so, how much? There is actually quite a bit, and scientists are finding more deposits as the instrumentation they use in the search improves. For decades we've known that water ice exists at the poles on Mars; Earth-based telescopes could detect it using spectrometers. But early spacecraft flying by or orbiting the planet found a desert-like landscape at lower latitudes. For many years, it was assumed Mars had lost most of the water responsible for creating terrain features that appeared to be lakes, rivers, flood plains, and even shorelines for ocean-scale bodies of water.
There has been an ongoing effort by scientists to understand what happened to all of this water. Orbiting spacecraft have carried more sophisticated instrumentation designed to answer this question. NASA's Phoenix Mars Lander shows the trench, called 'Dodo-Goldilocks,' lacking lumps of ice seen previously. The ice had sublimated, a process similar to evaporation, over the course of four days. Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M Liquid water cannot exist on the surface of Mars under present environmental conditions. The atmospheric pressure at the surface is approximately 5–10 millibars, about 1% of sea level pressure on Earth. Mars atmospheric temperatures can range from -140 C to +30 C (-220 F to +86 F). These conditions are near the triple point of water, but for the most part water exists only as ice or vapor at the surface of Mars unless some other special circumstances exist. Scientists used two orbiting radars, named MARSIS and SHARAD, to look for liquid water aquifers below the surface where conditions would allow liquid water to exist. Following a global survey of Mars, no aquifers were found down to a depth of about 300 meters. Using high-resolution imagers and other remote sensing instruments also in orbit around Mars, scientists have begun to realize that there is a great deal of buried ice in the mid-latitudes, from roughly 35 degrees to 50 degrees, in both hemispheres. This ice is protected from surface conditions by a layer of sand, gravel, and dust. These deposits are occasionally revealed when small meteorites strike the surface and scatter very distinct white ice across the surface; these strikes are quite visible from orbit. Scientists have also spotted ice cliffs measuring tens of meters in height in areas where some unknown event has exposed part of a buried ice deposit and the ice has slowly sublimated, revealing more and more of the deposit.
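The triple-point argument above can be sketched numerically. The following is a back-of-envelope illustration of my own, not from the interview; the sampled pressures and temperatures are assumptions chosen within the ranges Dr. Hoffman quotes, and the check deliberately ignores boiling-point effects:

```python
# Rough check: liquid water requires pressure above the triple point of
# water (~6.1 mbar) and temperature above freezing. Mars surface pressure
# (~5-10 mbar) straddles that threshold, which is why liquid water is
# marginal there. Illustrative only; real phase behavior is more subtle.

TRIPLE_POINT_PRESSURE_MBAR = 6.1  # ~611.7 Pa, standard reference value

def liquid_water_possible(pressure_mbar: float, temp_c: float) -> bool:
    """Very rough test: above the triple-point pressure and above 0 C."""
    return pressure_mbar > TRIPLE_POINT_PRESSURE_MBAR and temp_c > 0.0

# Sample points from the ranges quoted in the interview (5-10 mbar,
# -140 C to +30 C):
for p in (5.0, 7.0, 10.0):
    for t in (-60.0, 20.0):
        print(f"{p:5.1f} mbar, {t:6.1f} C -> liquid possible: "
              f"{liquid_water_possible(p, t)}")
```

At 5 mbar even a warm afternoon cannot hold liquid water, while at 10 mbar it becomes briefly possible, matching the interview's "special circumstances" caveat.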
A recent NASA-sponsored study called Subsurface Water Ice Mapping, or SWIM, has begun to correlate all of the independent data sets possibly indicating the presence of water to understand how much water is on Mars and where it is located. To answer the original question, the estimates for the amount of water on Mars continue to evolve, but it is safe to say that the total volume is many, many cubic kilometers. How significant is mining this water to colonizing Mars? Successful colonization is a long way off for many practical reasons. There is a great deal we still need to learn about Mars. But even early human missions of exploration and reconnaissance could benefit from access to significant quantities of water. There are technically feasible approaches to these early human missions in which everything needed for the mission is brought from Earth. Anything the crew can find and access on Mars means savings of many times its mass in rocket propellant and hardware that does not need to be transported. Water is a very good example of this type of material. Water can be used for the obvious things like potable water and breathing gases for direct use by the crew, and even rocket propellant to launch the crew off the surface if electrolyzed into its constituent elements. If early exploration missions lead to a long-term presence, then water will also be used in as many applications as it is known for here on Earth, and its value will increase in proportion to the number of uses. What is required to mine for water on Mars? What are some of the challenges? Scientists have identified more and more significant deposits of ice on Mars. For purposes of this question, I would divide these deposits into two broad categories: those in which ice is mixed with significant quantities of dust or rocky material and those in which ice is essentially pure, i.e., greater than about 95% ice, the current limit of instruments to resolve the content of these deposits.
For ice mixed with dust or rocks, mining would require excavating the material and likely heating it to capture and condense the vapor. However, ice mixed with rocky material can be as hard as concrete and can be similarly difficult to excavate. The pure ice deposits are typically covered by a layer of sand, gravel, and dust. Mining these deposits could be accomplished by stripping away this protective layer and excavating the ice. This approach would face difficulties similar to excavating the ice-mixed-with-rocks deposits. Another method would be to drill through the overburden of sand, gravel, and dust and into the ice deposit, where something called a Rodriguez Well could be established. This approach has been the focus of our applied research. It would face difficulties similar to drilling a water well here on Earth, coupled with the unique aspects of establishing and maintaining a Rodriguez Well. What is a Rodriguez Well? Why was it chosen for study? The Rodriguez Well was developed by U.S. Army engineer Raul Rodriguez at Camp Century in Greenland during the early 1960s. A Rodriguez Well uses heat and a submersible pump to create a cavity filled with water deep under a glacier's surface. The submersible pump is used to cycle heated water in the cavity, return cooler water to the surface, and siphon a portion of the flow for consumption before reheating the rest and sending it back down to the cavity. Diesel-electric generators in use at many of the field stations constructed on the Greenland ice sheet in the 1960s provided a "free" source of "waste" heat to make a Rodriguez Well and provide potable water. The Rodriguez Well was used operationally at several locations in Greenland, in addition to the well-known Camp Century. The Rodriguez Well concept was tested at the National Science Foundation's (NSF) Amundsen-Scott South Pole Station in the early 1970s.
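To get a feel for what a Rodriguez Well's heat source must deliver, here is a back-of-envelope sketch (my own illustration, not from the Rodwell report; the -60 C starting ice temperature is an assumed mid-latitude value, and the thermal constants are standard reference figures for water ice):

```python
# Energy needed to warm buried ice to 0 C and melt it, per unit mass.
# This is the minimum the well's "waste" heat must supply before any
# water exists to pump; losses to surrounding ice are ignored.

SPECIFIC_HEAT_ICE_KJ_PER_KG_K = 2.1   # approximate, near 0 C
LATENT_HEAT_FUSION_KJ_PER_KG = 334.0  # melting ice at 0 C

def melt_energy_kj(mass_kg: float, ice_temp_c: float) -> float:
    """kJ to bring ice at ice_temp_c up to 0 C and fully melt it."""
    warming = SPECIFIC_HEAT_ICE_KJ_PER_KG_K * mass_kg * (0.0 - ice_temp_c)
    melting = LATENT_HEAT_FUSION_KJ_PER_KG * mass_kg
    return warming + melting

# One liter (~1 kg) of water from assumed -60 C Martian ice:
# 2.1 * 60 + 334 = 460 kJ, i.e. most of the energy goes into the
# phase change, not the warming.
print(f"{melt_energy_kj(1.0, -60.0):.0f} kJ per liter")
```

The same arithmetic is why a "free" waste-heat source, like the diesel generators in Greenland, made the terrestrial wells so attractive: the melting term dominates and must be paid for every liter.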
When a major reconstruction of the South Pole Station was started in the mid-1990s, the Rodriguez Well was chosen to provide potable water for the Station. South Pole Station still relies on a Rodriguez Well to this day. The Station is on its third well; the others were abandoned when the water pool reached a depth at which the pumps could no longer lift water to the surface. I became aware of the Rodriguez Well concept while working with the U.S. Army's Cold Regions Research and Engineering Laboratory (CRREL) on other NASA-related tasks. However, it was not until the relatively recent discovery and confirmation of substantial bodies of ice in the Martian mid-latitudes that the feasibility of using a Rodriguez Well warranted a serious look at its application for human missions on Mars. The case for using the Rodriguez Well on Mars is compelling because of its relative simplicity and maturity, as well as the number of places and duration of its use here on Earth. However, environmental conditions on Mars are different from those on Earth in several significant ways. Additional applied research is necessary to understand the changes in hardware or operations that may be required before making a significant commitment to using this technology on Mars. How do you mimic the Mars environment? It depends on what aspect of Mars you want to mimic. In our case we wanted to mimic the atmospheric composition, temperature, and pressures. We used a small bell jar facility available to us at the NASA Johnson Space Center. This bell jar can reach near-vacuum pressures and cryogenic temperatures, both much more extreme than needed to mimic conditions on Mars. This bell jar has a testing volume measuring approximately two feet in diameter and two feet tall. We scaled our equipment to fit in this space, but this was sufficient for the initial testing.
As we progress to more sophisticated tests, we will require larger volumes, but test chambers already exist at JSC and other NASA facilities to meet these needs. What comes next in your work on this? Our work so far has given us data we can use to customize computer simulation tools developed by CRREL for terrestrial use, allowing use for Mars applications. The data we have in hand are very basic — there are nuances that need to be explored with tests similar to those we have already completed. Once we are more comfortable with how we think the Rodriguez Well will perform on Mars, we will confirm these findings by establishing and operating subscale versions of these wells in appropriately sized test chambers simulating Martian environmental conditions. These results will, in turn, be used to develop equipment and operations that can be demonstrated on Mars under actual conditions. At that point, we should know enough to determine whether this technology is effective and reliable enough to become part of the critical infrastructure supporting human crews on Mars.
https://medium.com/@aerospacecorp/how-well-access-the-water-on-mars-8ebae0a5f470
['The Aerospace Corporation']
2021-06-09 22:37:20.749000+00:00
['Space', 'NASA', 'Mars', 'Technology', 'Science']
2,635
Why you should never agree to use teleportation
Why you should never agree to use teleportation Spoiler: because it'll probably kill you…at least for a little while. If you've seen any sort of science fiction movie, you've probably come across the notion of teleportation: the ability to instantly be transported from one side of the planet to the other. Imagine a world where you could be in Paris for breakfast, Buenos Aires for lunch, and the newest restaurant on the moon for dinner. Pure fantasy, right? It may have been fantasy…until 2017 anyway, when scientists in China successfully teleported a photon from Earth onto a satellite roughly 300 miles away. This moved the concept of teleportation from being impossible to simply being a herculean endeavour. Before we start tasting that freshly baked French bread each morning, we first need to work out how to teleport larger particles, small inanimate objects, "lesser" forms of life, and finally humans. That is to say nothing of the seemingly astronomical amount of computing power and transmission bandwidth we will need to be capable of harnessing in order to teleport a human. One day, a century or two from now, this technology will be mature. The question then arises: should you use a transporter, or will it mean your instant death, with your life being taken over by a doppelganger? How do you know that whoever steps into the transporter is the same person who steps out? Let us consider four ways in which a transporter might work, and whether each would mean that "you" come out the other end, or a copy: facsimile, body transmission, mind transmission, and wormholes. Facsimile Your body is scanned by the teleporter in your lounge room and deconstructed. You are reprinted at the destination with new "ink". Whilst atomically (and genetically) identical, the person at the destination would be a copy, as the base materials used are different "instances" of those elements. You, of course, are dead, and will stay dead.
To demonstrate with another example, imagine transporting a house from Point A to Point B using this method. The house at Point A has been destroyed, and while the bricks being printed at Point B look identical, they are mere copies. Body Transmission Your body is scanned and deconstructed into its constituent "Lego blocks" (read: atoms). These same blocks are then fed through some sort of pipe (or via quantum entanglement) and drop out at the destination, where they are reassembled into yourself. Unlike the previous example, the very same atoms in the original you have made it to the destination. In this scenario you were definitely killed, but were you brought back to life and consciousness? Or was a new instance of your consciousness "booted up"? Does it even matter if it's a different instance of consciousness? Mind Transmission Your body is scanned. A replica is reprinted at the destination, including all the data in your brain (memories, facts, relationships, and neural pathways). The electrochemical impulses that course through your brain are transmitted (similar to a data file over Bluetooth or wi-fi) into your new brain. This way, while the body is new, the original "spark of life" has been transmitted over to Point B. The consciousness of the individual may have effectively just blanked out (as you would under a coma or deep sleep) for a few milliseconds. Wormholes The teleportation device creates and opens a wormhole under your feet that forms a tunnel through space-time, with the other end of the wormhole terminating at your destination. In this way, you and your atoms remain wholly intact, and you effectively walk through a door or get onto a slide which takes you to where you need to go. This solution saves you from any death and preserves the continuity of your consciousness.
https://medium.com/predict/why-you-should-never-agree-to-use-teleportation-cec3a3de58f2
['Kesh Anand']
2019-06-26 20:20:59.707000+00:00
['Consciousness', 'Future', 'Science Fiction', 'Technology', 'Science']
2,636
eBay’s Campaign of Corporate Sponsored Terror Funded By Sellers!
Not surprisingly, ebay, the world's most morally bankrupt e-commerce platform, saw six former executives and employees indicted for cyberstalking; for maliciously harassing a couple who operated an online newsletter that was critical of the company itself. Being a long-time seller on the platform myself, and an avid reader of the blog operators who were harassed, I am horribly appalled and disgusted by the recent allegations being made. But there's more, a lot more! Not only did these degenerate trash bags for human beings harass and threaten the very people who fought to defend millions of ebay sellers, they used their own sellers' money, including mine, to pay for the corporate-sponsored terror campaign they unleashed on that blog's operators. They even went as far as to have a lavish meal to the tune of over $700 on sellers' dime while they staked out and stalked those poor bloggers. Sorry ebay, but you got me bent! I am not paying for your mentally deranged fantasies. ebay is going to pay back every fucking dollar in fees I paid last year, you can make a sure bet of that! Besides terrorizing journalists, what the Justice Department may not be aware of is the potential multi-billion dollar fraud the company's executives have been perpetrating against their sellers, originally starting at the hands of its ex-chief executive officer Devin Wenig. Which I will explain further here shortly; keep reading, grab some popcorn, it's gonna get real, folks! Everything is starting to come together now with Wenig's departure, and the fact he looked incredibly fearful, stressed, and was sweating profusely during the last eBay Open. As noted in this article from Wired, Former eBay Execs Allegedly Made Life Hell for Critics, Wenig is thought to be "Executive 1" as noted in the current indictment that was issued by the Justice Department.
This would probably help to explain the whole "I'm going to prison, where's my soap on a rope when I need it" look he had on his face. Wenig's awe-inspiring critiques say it all, including "We're Going To Crush This Lady!" and "Take Her Down!" I too have been very critical of ebay and its executives over the years. Here's a quick snapshot of Wenig's account on Twitter; notice he blocked me. lmfao. For whatever reason, Wenig decided not to retaliate against me, or did he? Will get into that later. eBay's multi-billion dollar fraud? Here's where things get exciting! Not only did ebay execs resort to domestic terror to silence their victims, they also potentially stole billions of dollars from their sellers with bullshit fees based on service metrics that are 100% outside of seller control. You see, part of ebay's service metrics for sellers involves what ebay refers to as a transaction defect rate. Among the many variables ebay considers is how many "Not As Described" claims sellers receive from buyers. Sellers who have too high a rate of returns for this metric get slapped with an added 5% fee for the categories in question. Here's the problem: this entire metric is clearly outside of sellers' control, as buyers can select any reason for their return, and sellers have zero recourse. The system was literally designed to ensure you fail and get slapped with more fees, further padding ebay's bottom line. But hey, what do you expect from a company whose board is run by its investors, who couldn't give a flying fuck whether or not the platform's sellers starved to death. As I stated before, it's up to buyers to decide the reason for their return, meaning they can simply lie about the reasons in general. Many do lie too, for the simple fact that if they select not as described/defective as their reason, ebay will put the return costs back on the seller, and then the seller takes a hit on their metrics.
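To illustrate how a surcharge like that compounds on every sale, here is a minimal sketch. All numbers are hypothetical; the 10% base fee rate and the 5% benchmark threshold are my own assumptions for illustration, not eBay's published fee schedule:

```python
# Hypothetical model of a "service metric" surcharge: once a seller's
# "not as described" defect rate exceeds a peer benchmark, a flat 5%
# is added on top of the base fee for every sale in that category.

SURCHARGE = 0.05  # the added 5% described in the article

def item_fees(sale_price: float, base_fee_rate: float,
              defect_rate: float, peer_benchmark: float) -> float:
    """Fees charged on one sale under this hypothetical schedule."""
    rate = base_fee_rate
    if defect_rate > peer_benchmark:
        rate += SURCHARGE  # penalty triggered by buyer-chosen reasons
    return sale_price * rate

# Same $100 sale, same seller behavior; only buyers' stated return
# reasons differ between the two scenarios:
normal = item_fees(100.0, 0.10, defect_rate=0.02, peer_benchmark=0.05)
penalized = item_fees(100.0, 0.10, defect_rate=0.08, peer_benchmark=0.05)
print(round(normal, 2), round(penalized, 2))
```

The point of the sketch: crossing the benchmark raises the fee on a $100 sale from $10 to $15, a 50% jump in fees driven entirely by an input (the buyer's stated return reason) the seller cannot control.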
Now, why on earth is ebay charging sellers inflated fees because buyers changed their minds or lied about the reasons for their returns? I can't wrap my head around it. Why punish sellers for returns at all? After all, many sellers offer free 30-day returns as it is; there's no reason for ebay to punish sellers like this, nor rob them the way they are. That being said, what ebay is doing in terms of the fees they charge their sellers is akin to having law enforcement arrest you because your neighbor robbed a fucking bank. Which makes no logical sense, right? The reality is millions of Americans rely on the ebay platform to put food on their families' tables, and you can make a sure bet I will continue to fight to defend these people no matter what! It also looks like they're fighting to defend me! Many thanks to the secret ebayer who funded my journalistic efforts here. I got paid to speak my mind here, and they too are not happy with ebay. I wrote this article out of the realization that I owe it to (Name Redacted For Privacy Reasons) to stand up for what is right. She proved the pen truly is mightier than the sword. As for ebay, us sellers are willing to come to the table whenever ebay execs are ready. As for my views and statements here, ebay may not like them, but they are protected under U.S. federal law as outlined in our constitution. I love ebay, but I am disgusted and sickened by those who run it. It's a love/hate relationship for sure. Hopefully ebay's execs won't send me any surprises in the mail like they did their other victims, such as live cockroaches, a funeral wreath, a book about coping with the loss of a spouse, etc. There's more, a lot more in fact! I haven't even scratched the surface here. Besides harassing and threatening those poor bloggers, they harassed me too, but I can save that story for another day.
What angers me more than anything is the simple fact that these crooked executives used their sellers' money to pay for the horrific campaign of terror they embarked upon. All being said, I want to see ebay thrive as a fighting force in the e-commerce world, and if there's one statement I stand behind, it's that they do in fact create economic opportunity. But I cannot, and will not, allow ebay to threaten the livelihoods of its millions of sellers, nor those who make statements the platform doesn't necessarily agree with. What ebay needs to realize is that respect is a two-way street: if you can't give it, don't expect it in return. This company needs to look past its immoral and flagrant desire for money and power and realize that those who make their platform truly tick are humans just like you and me. Many thanks for reading! Written by Daniel Imbellino, Co-Founder of Strategic Social Networking! Additional Resources:
https://medium.com/strategic-social-news-wire/ebays-campaign-of-corporate-sponsored-terror-funded-by-sellers-ae288ee931d8
['Daniel Imbellino']
2020-06-20 06:17:19.162000+00:00
['eBay', 'Corporate Culture', 'Information Technology', 'E-commerce', 'E Commerce Business']
2,637
100 Words On….. Patching
Photo by Markus Winkler on Unsplash For every action, there is an equal and opposite malfunction. While I am an advocate for system hardening, it must be done bearing the context and business needs in mind. Simply applying the latest firmware and patches looks like a good idea in theory, but doing so blindly, without planning to understand the pros and cons, can be more of a hindrance than a help. The same holds true for disabling and removing services, installing new tools, and taking a draconian approach to policies. Sometimes you break more than you fix, ending up doing the hackers' job for them.
https://medium.com/the-100-words-project/100-words-on-patching-ba32265ab66c
['Digitally Vicarious']
2020-12-16 23:31:03.384000+00:00
['Information Technology', '100 Words Project', 'Cybersecurity', 'Updates', 'Patch Management']
2,638
7 Best e-Commerce Open Source CMS Platforms in 2018!
You know choosing the right e-commerce platform is essential for your business, and this is a bit confusing as there are lots of e-commerce CMS platforms available in the world. To run your e-commerce shop you have to select the best and most suitable e-commerce CMS for growing your business. So, let's read about the best 7 e-commerce open-source CMS platforms in 2018. Magento is undoubtedly the top leading platform for open commerce innovation in the e-commerce CMS world. Every year, Magento handles over $100 billion in gross merchandise volume. They have a massive marketplace stocked with website themes and useful applications. Magento was recently acquired by Adobe, the American software giant, for $1.68 billion. Adobe has a rather large developer base, an active creative community, and a strong cloud infrastructure. Adobe's resources will undoubtedly shape Magento. At present, it's unclear how the partnership between the two companies will change e-commerce as a whole. All we know now is that Magento is bound to benefit. Though Magento is open source (free to use), you'll have to pay for hosting, apps, and premium themes, but the use of the platform itself is completely free. PrestaShop is one of the most powerful and popular open-source e-commerce applications, used mostly in Europe. More than 270,000 e-commerce stores worldwide run on PrestaShop technology. Their mission is to develop world-class e-commerce software through open-source innovation, which is why anyone can download, install, and set up PrestaShop for free. PrestaShop is on the 2016 Inc. 5000 list of fastest-growing private companies in Europe. The company also received the 2016 CMS Critic Award for Best eCommerce Software. They host a huge add-on and theme marketplace on their site, selling premium modules and themes from both PrestaShop and third-party agencies.
You know WordPress is the leader of all CMSs, and this is why WooCommerce is one of the most powerful open-source e-commerce plugins: it turns a WordPress website into an e-commerce store. WooCommerce has a lot of free themes and plugins which can make your e-commerce site more functional and active. The most important advantage of these plugins is that they can enable various features in the basic WooCommerce software in a single click, and most of them are free to download and install. WooCommerce Extra Product Options is one such plugin that equips the product page of a WooCommerce site with many additional features, like collecting input fields, a file upload option, a date and time picker, a color picker, a price selector, a location selector, and more, with ease. Shopify is a Canadian e-commerce company, and its proprietary platform for online stores and retail point-of-sale systems is now one of the fastest-growing premium e-commerce CMSs. Shopify has been well received by the tech website CNET, which said the platform is "clean, simple, and easy-to-use." The company reported that it had more than 600,000 active Shopify stores using its platform as of August 2017, with total gross merchandise volume exceeding $63 billion worth of sales. OpenCart is an online store management system: an easy-to-use, powerful, open-source platform that can manage multiple online stores from a single back-end. There are many professionally written extensions available to customize the store to your needs. Currently, there are almost 317,000 live OpenCart sites. This CMS is absolutely free, no monthly fees, no catches; just an effective and customizable platform for your new e-commerce store. Simply install, choose your template, add products, and you're ready to start accepting orders. An OpenCart store can be ready to take orders soon after installation.
All you have to do is have it installed for you (many web hosts do it for free), select a template from the many free or low-cost template sites, add your product descriptions and photos, click a few settings, and you are ready to begin accepting orders. Our free Installation & Quick Start chapters show you how. So, OpenCart is perfect for e-commerce stores of any size, any industry, any budget.
https://medium.com/the-technews/7-best-e-commerce-open-source-cms-platforms-in-2018-604403d3a4f8
['Md. Nazrul Islam']
2018-09-06 13:31:13.998000+00:00
['Technews', 'Technology', 'Best Ecommerce Platform', 'Prestashop', 'Ecommerce']
2,639
Scoping the Huge Prospect of Electric Car in Indonesia
Scoping the Huge Prospect of Electric Car in Indonesia Image by Mikes-Photography on Pixabay The Electric Vehicle (EV) has been in development for quite a while. Some big players across Europe, the United States, and Asia have been spurred to popularize EVs in their countries. As one of the most-watched countries in terms of economic growth, Indonesia has unfortunately only joined the EV market recently, particularly in electric cars. On the other hand, the Plug-In Hybrid Electric Vehicle (PHEV) car was introduced earlier, and the arrival of the electric car in Indonesia seems to be attractive to Indonesians. Currently, there are three electric car manufacturers that have executed their plans in Indonesia: Hyundai, BMW, and Lexus. All of them have officially sold their electric car models (Hyundai Ioniq, Hyundai Kona, BMW i3s, and Lexus UX 300e) to the public through authorized dealers. Meanwhile, consumers can buy other electric car brands through importers, like buying a Tesla from Prestige Motorcars. Other manufacturers seem set to follow in the future. For instance, Toyota is ready to invest around $2 billion in EV development in Indonesia, and Elon Musk, as Tesla CEO, has talked directly with the President of Indonesia to discuss the opportunity of investing in electric cars in Indonesia. From this circumstance, firms have seen reasons and opportunities behind their decision on why they should expand their market to Indonesia. Large Potential Market Generally speaking, as of 2018, Indonesia ranked fourth among the most populous countries in the world with about 267.7 million citizens. The Central Statistics Agency of Indonesia reported that there are approximately 146.8 million vehicles in Indonesia, divided into cars, buses, freight trucks, and motorcycles. The data also shows an increase of around 1.9 million cars in two years (2016–2018). Frankly, cars' portion of this number is only 11%, since more Indonesians use motorcycles as their daily transport.
In 2019, some SUV and LMPV models recorded 5% sales growth, while brands' sales in general fell by 10%. During this pandemic, car sales in Indonesia are heavily decreasing and are expected to recover in 2022. Although motorcycle sales are also affected, their growth in the past four years has been better than car sales. Perhaps some of us might think, "Well, then what makes the electric car special? Doesn't the electric motorcycle look more interesting?" The answer is: the electric car is more widely discussed and recognized in Indonesia. Based on an internal Frost & Sullivan survey, 41% of respondents are interested in purchasing an electric car. As more people attain a better standard of living each year, it can't be denied that more people will convert from conventional cars to electric cars, or even from motorcycles to cars, in order to get better comfort and occupancy. With some new electric car models arriving in Indonesia, future consumers may be drawn in a different way. Regulations Are with EV Owners Before 2019, the electric car was not that popular in Indonesia because the government was not yet ready to accommodate its presence. Two years later, there are now seven main government regulations about electric vehicles. One of them changes the taxation model: tax on cars is no longer charged based on the model, but based on the emissions the vehicle produces. That means zero tax for electric cars. In certain cities, like the capital Jakarta, there is an "odd-even rule" on the roads. The rule says that on odd dates, only vehicles with an odd last number on the registration plate are allowed to pass. This rule is applied on specific roads, days, and periods of time. What makes EVs different is that they don't have to follow the rule. Officers can identify the difference between conventional and electric cars through the registration plate color.
As the government tries to prime the electric car market, it currently plans to add more charging stations with specific rates, so consumers need not worry about running out of battery. At some landmarks, such as the Gelora Bung Karno stadium, management has built charging stations that can be used for free. Again, these kinds of moves by the government and external parties may stimulate the electric car market in Indonesia. Image by Mohamed Hassan on Pixabay Price War is Getting Real Back in 2018–2019, electric car prices were tremendously high in Indonesia. Not as high as Rolls-Royce models, certainly, but an electric car at the time could cost between $91,000 and $183,000, depending on the brand and model. With that amount of money, consumers had a wide choice of luxury cars. For example, Tesla’s Model 3 costs about $106,000. With the same amount in your pocket, you could buy a compact premium sedan like the BMW 3 Series and still get $46,000 back. Even Tesla’s Model X is currently more expensive here than the Mercedes-Benz S-Class. All of these are reputable brands, but BMW and Mercedes-Benz have long histories in this industry, and Tesla, as the younger venture, has to convince consumers why they should choose its model over Mercedes-Benz’s flagship. One of the game changers, Hyundai, decided to set a lower price (around $42,000) on its models to make them more affordable. In the near future, Nissan is going to introduce its electric model, the Leaf, at a price range quite similar to what Hyundai set. For this money, you could also get a Peugeot 3008. That is still not cheap for the average Indonesian consumer, as the most-sold cars in Indonesia fall between $17,000 and $28,000, but at least a certain market segment could get an appetite. Once brands are confident in the prospect, they could reduce their costs and, in turn, their prices. 
Massive Electric Car Campaign When it comes to a green environment, a big gap separates conventional and electric cars. Manufacturers compete to make the internal combustion engines in conventional cars more eco-friendly, but they still emit gases that contribute to pollution. We can’t say electric cars are 100% clean, since that depends on the power plant generating the electricity, but using electric cars in densely populated areas is the least we can do to regain a healthy ecosystem. This campaign was raised years ago, but it has been trending upward recently. Some other benefits matter for Indonesia’s geographical conditions, such as low battery consumption and freedom from flood damage (up to a certain flood height). Nowadays, many public figures with automotive expertise recommend using electric cars and educate people on their benefits. The government of West Java has also ordered electric cars for daily operations within the government. Given these advantages, it is not surprising that electric car manufacturers have started scoping the opportunity to bring their products to Indonesia. The big market and other considerations could put more manufacturers on their way to invest, so they may provide a wide variety of models at a more reasonable price range.
https://medium.com/illumination/scoping-the-huge-prospect-of-electric-car-in-indonesia-f4b90f78f9b9
['Visi Saujadani']
2021-01-05 12:16:00.834000+00:00
['Illumination', 'Indonesia', 'Technology', 'Prospects', 'Electric Car']
2,640
Ambient Computing & User Experience Design
Introduction: Ambient Computing Ambient, or ubiquitous, computing is a concept that draws many terms together. It can be considered a grand amalgam of software, hardware products, and user experience, with a major component of human-machine interaction and learning, all aimed at getting the benefit of a computer or any internet-enabled device without necessarily using it deliberately. In essence, it is a host of devices that we use at our workplace, at home, or anywhere; these devices become extensions of each other while offering us a seamless experience. The idea is that we no longer have to sit in front of an actual desktop computer to operate a computer; that is the effect of ambient computing. In this article, we’ll introduce the concept of ambient computing, learn how it works behind the scenes, and lay out some UX principles core to a seamless ambient experience. Ambient Computing and the Internet of Things Ambient computing is all about having computational capabilities in the things we use every day in our environment. Our fridge, TV, gas stove, and bulbs can all behave like computers. The key difference between IoT and ambient computing lies in the way these devices work for us: ambient computing aims to make devices fade into the background while being helpful, whereas IoT commonly refers to the more industrial use case of collecting sensor data and running analyses to optimize processes. Imagine walking into a smart home: when you enter, the lights automatically turn on. The coffee machine beside the table prepares the perfect coffee without your input. Your fridge lets you know that you’re running out of groceries. Your sofa automatically turns into a relaxing bed, and with your mobile device you can set the lighting scene and peacefully go to sleep. When it’s time to wake up, your smart alarm clock gently plays the alarm and slowly turns the lights on to create a perfect morning ambience. 
While these experiences may sound very futuristic and costly to some, the era of ambient computing has already started. Our smartphones, laptops, wearable devices like Fitbit and the Apple Watch, and voice-activated devices like Amazon Echo and Google Home are already creating an ambient experience at home and in the workplace. Ambient computing working behind the scenes When computers first appeared, they were used only by operators who had extensive knowledge of computing, mostly for programming purposes. After the explosion of a myriad of applications, the term ‘operator’ was replaced with ‘user’, since people started adopting computers for personal use, whether to maintain their yearly checkbooks or to play video games. All of these computers still required deliberately operating an actual computing device to perform complex tasks. Ambient computing has changed all that, since it involves using a computer without consciously, deliberately, or explicitly “using” it. The most basic ambient device one can think of is a motion-controlled device. Consider an example: you walk into a shopping store, and the moment you approach the door, it opens automatically. You move to a particular aisle, and the lights turn on the moment you enter. In this manner, you are using the door or the light without “using” it: you can simply ignore it, and the desired effect happens. Similarly, the traditional way of talking to a digital assistant such as Siri is to interact with it on a phone. Today, digital assistants are available in our rooms, watches, and rings, and will soon be in eyewear. There is also a well-established fabric of software to help connect and customize these devices virtually. One such service is IFTTT, which lets you connect and integrate devices and apps through simple applets so that they work better together. Want your porch lights to turn on automatically when your pizza arrives? 
Or want a wallpaper on your Android phone with NASA’s image of the day? IFTTT offers you all of that. Designing for a connected, cohesive ambient computing world When we think of ‘UX design’, our minds automatically turn to apps and websites. Increasingly, as users adopt ambient devices, UX design patterns and frameworks need to adapt too. At Lollypop, as we craft more delightful experiences for millions of users, we see an increasing need to think, quite literally, outside the box and design for these mini-computers living inside objects we interact with every day. The goal of such a design methodology is to make sure these devices work without getting in our way as we cruise through our daily chores. Another tenet of designing for ambient computing is continuity: enabling these devices to work smart not just independently, but in cohesion. A lack of continuity can break a user’s experience and negate the efficiency ambient computing provides us with. A big factor in designing for these devices is familiarity. The current generation of smart devices, such as smart bulbs and thermostats, operate exactly as conventional bulbs or thermostats would. For example, a thermostat still needs to be mounted on a wall, switched on, and adjusted for temperature; only later does the device learn the temperature to be maintained. That’s it: the device needs a familiar manner of operation. Familiarity, combined with the novelty of the product, can make a big impact and lead even non-tech-savvy users to adopt these smart devices. Finally, while these devices dissolve into the background, the user should still feel in control and be able to alter their behaviour as required. Imagine a smart thermostat that only lets machine learning decide the temperature at any point, without letting the user perform manual overrides. Providing a sense of control reinstates trust and ease of use. 
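Automations like the porch-light example above are typically wired up through IFTTT's Webhooks service, which triggers an applet via a plain HTTP request. A minimal sketch in JavaScript; the event name `pizza_arrived` and the key are hypothetical placeholders, not real credentials:

```javascript
// Sketch of triggering an IFTTT Webhooks applet from code.
// The event name ("pizza_arrived") and the key are hypothetical placeholders.
function iftttTriggerUrl(eventName, key) {
  // IFTTT's Webhooks service accepts a POST (or GET) to this URL pattern.
  return `https://maker.ifttt.com/trigger/${encodeURIComponent(eventName)}/with/key/${key}`;
}

// Optional JSON payload: IFTTT passes value1..value3 through to the action.
async function notifyPorchLights(key) {
  const res = await fetch(iftttTriggerUrl('pizza_arrived', key), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ value1: 'front door' }),
  });
  return res.ok; // true if IFTTT accepted the trigger
}
```

On the IFTTT side, an applet listening for the `pizza_arrived` event would then perform the action, such as switching on the smart porch lights.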
Designing for such invisible interactions can be more difficult than designing for tangible components in mobile apps and websites. Looking back 10 years, the mere feasibility of talking to your smartphone to check your home’s door camera from the office would have been called magic. This magic is increasingly becoming our new reality, and it requires a thorough human-centric design approach to get it right.
https://medium.muz.li/ambient-computing-user-experience-design-b72cabed10eb
['Lollypop Design Studio']
2020-12-20 07:07:10.457000+00:00
['IoT', 'Emerging Technology', 'Internet of Things', 'Ambient Computing', 'User Experience Design']
2,641
BVB and the Central Securities Depository are accelerating the Digital Transformation with the help of the Aurachain Platform
BVB and the Central Securities Depository are accelerating the Digital Transformation with the help of the Aurachain Platform AURACHAIN CH · Dec 23, 2020 · 3 min read Bucharest, Romania — December 23. Aurachain is the new technology partner for the Bucharest Stock Exchange (BVB) and the Central Securities Depository. Two innovative solutions developed using the low-code Aurachain platform will be implemented: one to accelerate and optimize the shareholder voting process at exchange-listed companies using blockchain technology, and another to facilitate access to the capital market by digitizing investor enrollment. The second solution digitizes the online process of opening trading accounts for individual investors, through a standardized service that the Central Securities Depository will provide as a single point of entry for all new individual investors. The technology solution developed on the Aurachain platform facilitates investors’ access to the Romanian capital market in a completely automated and fully secure manner. Actions such as registration, identity verification, facial recognition, and Know Your Client (KYC) processes will be performed exclusively in a digital environment and will be automated. “The low-code Aurachain platform transfers much of the development of digital applications into the hands of business users who need innovative solutions, through intuitive visual configuration capabilities that replace the traditional code-writing approach. Key personnel across the organization, from subject matter experts and business analysts to professional developers and IT specialists, can contribute their expertise directly to the app creation process for significant gains in operational efficiency with no governance faults,” said Adela Wiener, CEO of Aurachain. 
The two solutions offered to the Bucharest Stock Exchange were developed at light speed using our platform and represent yet another example of how we can help organizations accelerate their digital transformation and process automation efforts from day one of any engagement. “This joint project between the Central Securities Depository and Aurachain is very important from the perspective of the digital transformation of all processes in capital markets. We want to use Aurachain’s solutions in the area of identifying and profiling individual clients, a solution we want to offer to the brokerage community in the capital market, and to other industries or platforms that need such a solution. Another product we want to implement is a platform for organizing meetings and voting for General Shareholders Meetings, Boards of Directors, and Committees. The Intervote platform will address primarily the needs that companies listed on both the Regulated Market and the AeRO market, within the Multilateral Trading System, have in organizing these meetings. We believe in the potential of Aurachain solutions and the blockchain technology that underpins these solutions. Through the initiative announced today, we will propose to expand the cooperation with Aurachain to other processes, with the ultimate goal of digitizing and simplifying the activity of investors, brokers and issuers in the regulated area of the capital market”, stated Adrian Tanase, CEO of the Bucharest Stock Exchange, the majority shareholder of the Central Securities Depository. Through the new partnership with Aurachain, BVB takes another important step towards the modernization and development of the Romanian capital market, by facilitating the expansion of the investor base, streamlining the decision-making process and reducing bureaucracy for listed companies. 
About Bucharest Stock Exchange Bucharest Stock Exchange runs markets for shares, bonds and other instruments, through regulated platforms and alternative systems, and provides a wide range of services to participants of financial markets. Bucharest Stock Exchange is a public company, listed on its own market since 2010. The cumulative market capitalization of all companies listed on the Bucharest Stock Exchange (local and international) exceeds RON 163bn (EUR 33.6bn), and the cumulative value of bond issues listed on the BVB amounts to RON 17.8bn (EUR 3.6bn). The global index provider FTSE Russell announced, in September 2019, the upgrade of the Romanian capital market to the Secondary Emerging Market status. As of September 21, 2020, Romania is effectively included in the FTSE Russell indices for Emerging Markets. For more information on the Bucharest Stock Exchange, please refer to www.bvb.ro.
https://medium.com/aurachain/bvb-and-the-central-securities-depository-are-accelerating-the-digital-transformation-with-the-8578467af07c
['Aurachain Ch']
2021-04-28 07:27:32.406000+00:00
['Stock Market', 'Technology News', 'Aurachain News', 'Digitaltrasformation', 'Low Code Platform']
2,642
8 Unheard-of Browser APIs You Should Be Aware Of
8 Unheard-of Browser APIs You Should Be Aware Of Experimental browser APIs that have the potential to change the way we develop web apps Photo by Szabo Viktor on Unsplash As the web grew in popularity, browsers started shipping APIs for complex functionality that previously could only be implemented in a native application. Fast-forward to the present: it’s quite rare to find a web application that doesn’t make use of at least one browser API. As the field of web development continues to grow, browser vendors try to keep up with the rapid development around them, constantly shipping newer APIs that can bring new native-like functionality to your web application. Furthermore, there are some APIs that people don’t know much about, even though they’re fully supported in modern browsers. Here are some APIs you should be aware of, as they will play a vital role in the future.
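Because many of these APIs are experimental, the safe pattern when adopting them is feature detection before use. A minimal sketch; `navigator.getBattery` (the Battery Status API) is just one example, and availability varies by browser:

```javascript
// Guard experimental browser APIs with feature detection instead of
// assuming they exist. Works for nested members like 'navigator.bluetooth'.
function supports(path, root = globalThis) {
  return path.split('.').every((part) => {
    if (root == null || !(part in Object(root))) return false;
    root = root[part]; // descend one level for the next check
    return true;
  });
}

if (supports('navigator.getBattery')) {
  // The (experimental) Battery Status API is available here.
  navigator.getBattery().then((b) => console.log(`Battery level: ${b.level * 100}%`));
} else {
  // Fall back gracefully when the API is missing.
  console.log('Battery Status API not supported in this environment.');
}
```

The same guard works for any of the APIs discussed in the article, so unsupported browsers degrade gracefully instead of throwing at call time.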
https://medium.com/better-programming/8-unheard-of-browser-apis-you-should-be-aware-of-45247e7d5f3a
['Mahdhi Rezvi']
2020-08-04 16:05:37.181000+00:00
['Technology', 'Software Development', 'JavaScript', 'Programming', 'Web Development']
2,643
Ionic & Felgo: App Development Framework Comparison
Cross-platform development is making a lot of noise in today’s dev world, and there is a reason why: a shared codebase can save a lot of time if you want to target multiple platforms. There are several approaches to creating cross-platform applications, but which one is better? This time we will compare Ionic and Felgo. Differences between Cross-Platform Frameworks Before we start, let’s take a peek at the history of cross-platform development. In the early days of cross-platform mobile app development, apps were displayed in a WebView. A WebView is nothing more than a native browser window without any extra interface. The HTML engine of the browser took care of rendering all app elements. The idea was to create and run a web application with a native look and feel. This way, developers could deploy to many platforms; the platform just had to provide the browser technology. This approach is still used by many frameworks, including Ionic. On the other hand, a standard web app running inside a browser cannot access all the functionality of a target device that a modern app needs. That is why tools like Cordova became popular: they provided a web-to-native bridge, which granted a WebView access to device functionality such as location services. Ionic also provides such a bridge with Capacitor, though in reality it is little more than the good old Cordova with some upgrades. In summary, if you want to create an application using the Ionic framework, you will need to use a web technology stack: HTML, CSS, and JavaScript. Frameworks such as Angular or React are also commonly used to give the app the desired modern feel. Hybrid Frameworks and Rendering with a WebView Hybrid frameworks like Ionic render their content within a WebView, which is wrapped with APIs to access native device features. However, this approach has some disadvantages: The performance of your app depends on the internal version of the WebView used in the targeted OS. 
This dependency can cause different behaviors and performance characteristics on different OS versions (e.g. Android 6.0 vs 9.0). You depend on Apple and Google to add features and improve the performance of the WebView. Some features depend on the underlying web engines, WebKit on iOS and Chromium on Android; support for certain CSS properties and JavaScript standard features is an example. It makes maintainability harder, as you need to support multiple WebView browser versions and types. Web renderers were designed to display websites and multimedia content in a browser; they do not render user interfaces and animations very efficiently. Because of that, performance is significantly slower compared to native apps. The Felgo Approach Let’s focus now on how Felgo handles cross-platform rendering. Qt with Felgo compiles real native applications without the need for a WebView. Felgo renders its UI elements with the Qt rendering engine, built on C++ and OpenGL ES / Vulkan / Metal. This so-called “scene graph renderer” is optimized for performance. It also guarantees that the UI will look the same on any device and platform. Furthermore, it is possible to keep your existing native iOS, Android, or C++ code: you can simply reuse your own native code with Felgo thanks to its architecture. The core language behind Qt and Felgo is C++, which is famous for its performance and stability. However, it is not ideal for creating a modern UI and cutting-edge applications, so Qt introduced a new language called QML. QML is a declarative language that lets you compose your UI as a tree of visual items, very similar to HTML. For application logic, QML relies on JavaScript. Developers can easily get started if they are familiar with these web technologies. Felgo comes with everything you need to build stunning applications in record time. To achieve native performance, all QML items actually translate to performant C++ components in the backend. 
Your QML and JavaScript get executed and visualized by a highly optimized C++ renderer. Qt also compiles all components Just in Time (JIT) or, if configured, Ahead of Time (AOT). This way, QML can achieve native performance. Qt and Felgo not only allow you to develop cross-platform for iOS and Android; you can also run your applications on desktop, web, and embedded systems. Inside the Frameworks The devil is in the details, which is why it’s crucial to look inside the architecture of both frameworks. Let’s start with Ionic. The browser renders your code, and Ionic needs a bridge to access OS functionality like the camera: you have to rely on this bridge to access native features. It is not possible to build an application that directly uses these platform APIs. But what about Felgo? You won’t need any additional bridge to access OS functionality. You have direct access to all platform features with the native code in your application. This also includes the highly performant QML engine, which is part of your Qt application. This architecture ensures consistent performance on all target platforms and devices. Framework Business Potential When considering business potential, there are some things to keep in mind. The first is, of course, your current staff’s experience. When developing with Ionic, you need a team with quite a lot of knowledge about web app development; if they lack some of it, training will take time. With Felgo, the main skill your team needs is knowledge of JavaScript, because QML relies on it. As JS is one of the most popular programming languages, the probability that your fellow programmers have that skill is quite high. If you already work with programmers who know JavaScript, it’s easy to reuse their skills in a new Felgo project. Another aspect to consider is the supported platforms. Apart from the web, Ionic supports only iOS and Android. 
With Felgo, you can also deploy to Windows, Mac, Linux, and embedded devices. The variety of platforms is much bigger with Felgo. Framework Documentation Many developers consider documentation one of the most important factors, not only for learning a new technology but also for reducing development time. When creating an app, you will sooner or later bump into issues that require some additional knowledge, and documentation is the best place to look. If it is high quality, you will solve the problem in no time; otherwise, you will struggle through many pages hoping to find a detailed answer. Both Felgo and Ionic offer great documentation for browsing APIs, examples, and demos. Learning Curve Comparison When taking your first steps with Ionic, you need to learn quite a few technologies: HTML, Sass (SCSS), and JavaScript. On top of that, you should know a front-end framework like Angular, which uses the TypeScript language that you will also need to be familiar with. You might also use React to give the app the desired modern look and feel. There’s a lot to learn if you aren’t an expert in web development but would like to create mobile apps with a cross-platform framework. Besides, Angular and React are not known for being easy to learn. To learn Felgo, you need some QML skills and enough JavaScript to write functions in QML. QML, due to its JSON-like notation, is very friendly to new users. The gap between Ionic’s and Felgo’s necessary technology stacks is rather big, especially if you are not specialized in any kind of web app technology. To summarize, the learning curve of Ionic can be much steeper than Felgo’s, especially when learning the chosen front-end JS framework at the same time. Framework Pricing and Licensing For personal usage or “low-budget” developers, both frameworks are free. 
If you’d like to include additional services and tools in your app, you can get professional plans to ensure you get the most out of the solution. Felgo offers advanced features like analytics and push notifications, whereas Ionic gives you more than 100 live updates per month in its paid licenses. Hello World Mobile App Comparison Architecture and functionality are one thing, but the simplicity and clarity of a technology are a completely different matter. How can we compare these factors? It’s quite simple: let’s write a simple app! Proceeding with Ionic, you can see from the beginning that creating the logic and design requires two separate files for every page, and you’ll need to write the code in two different notations: HTML and TypeScript. Now, let’s look at the Hello World app written with Felgo, which you can run on your iOS or Android device with live code reloading. Here you can see how you can create the logic and design in the same QML file. This has a positive impact on the technology’s barrier to entry. QML is also easier to read than HTML, with less syntax overhead. This especially matters when dealing with large projects where a single page can contain many objects. At the same time, application logic in TypeScript and QML is quite similar, because both are based on JavaScript syntax. Comparing Integrated Development Environments When comparing frameworks, it is also worth taking a look at integrated development environments (IDEs) and what they can offer to make development more efficient. Felgo isn’t just a framework for cross-platform development; it also offers a whole set of tools that you can use throughout the entire lifespan of the application. Felgo comes with the full-featured Qt Creator IDE. You also have access to QML Hot Reload, which lets you view edits to QML code in real time. This feature comes with a tool called Felgo Live Server, which lets you deploy apps to multiple real devices via a network. 
In the IDE, you have access to built-in documentation, where you can find info about Felgo types as well as all Qt classes. Once you write some code, you can use the integrated debugger and profiler to analyze your app’s execution flow. In this regard, Ionic falls behind, as it has no dedicated IDE; you need to rely on tools that are not fully adjusted to the framework. With Felgo you also get access to Cloud Builds. This service allows you to build and release cross-platform applications to app stores like the Apple App Store and Google Play. You can integrate it with your code repository and CI/CD system, so you don’t need to build manually on every platform. With Cloud Builds, you don’t even need a MacBook to release iOS applications. Cross-Platform Framework Comparison Overview: What is the best cross-platform framework? There is no real answer to this question: there is no silver bullet. Instead, you should ask, “What framework is best for me and my project?” Several factors can help you decide on a particular technology. To ease the decision-making process, ask yourself a few questions: What programming language do you or your team have experience in? What are the requirements of your app? What tooling helps you work more efficiently? What platforms do you want to support, now and also in the future? Do you have existing code you want to reuse? Who can help me if I run into problems? Every technology has its pros and cons, and your use case matters. If you are looking for a reliable, efficient, and easy-to-learn framework, you should definitely consider having a look at Felgo and Qt. Related Articles: QML Tutorial for Beginners 3 Practical App Development Video Tutorials Best Practices of Cross-Platform App Development on Mobile More Posts Like This Flutter, React Native & Felgo: The App Framework Comparison Continuous Integration and Delivery (CI/CD) for Qt and Felgo QML Hot Reload for Qt — Felgo
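To make the web-to-native bridge discussed earlier more concrete, here is a minimal JavaScript simulation of the message-passing pattern that tools like Cordova and Capacitor implement. The `NativeBridge` class and the fake Geolocation plugin are illustrative inventions for this sketch, not the real Capacitor API:

```javascript
// Minimal simulation of a web-to-native bridge (Cordova/Capacitor style).
// Plugin names and shapes here are illustrative, not the real Capacitor API.
class NativeBridge {
  constructor() {
    this.handlers = new Map(); // "native side": plugin name -> implementation
  }
  registerPlugin(name, impl) {
    this.handlers.set(name, impl);
  }
  // "web side": every call crosses the bridge asynchronously, which is
  // why bridged plugin APIs are promise-based rather than synchronous.
  async call(plugin, method, args = {}) {
    const impl = this.handlers.get(plugin);
    if (!impl || typeof impl[method] !== 'function') {
      throw new Error(`No native handler for ${plugin}.${method}`);
    }
    return impl[method](args);
  }
}

// Example: a fake Geolocation plugin standing in for a device API.
const bridge = new NativeBridge();
bridge.registerPlugin('Geolocation', {
  getCurrentPosition: () => ({ lat: -6.2, lng: 106.8 }), // hard-coded for the demo
});

bridge.call('Geolocation', 'getCurrentPosition')
  .then((pos) => console.log(pos.lat, pos.lng));
```

Every such round trip adds serialization and scheduling overhead, which is exactly the cost Felgo's direct-API architecture avoids.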
https://medium.com/the-innovation/ionic-felgo-app-development-framework-comparison-ba84de105a20
['Christian Feldbacher']
2020-07-08 10:16:51.360000+00:00
['Mobile App Development', 'Programming', 'Technology', 'Apps', 'Framework']
2,644
Top Construction Technology Magazine
Top Construction Technology Magazine Looking for magazines and publications dedicated to the construction niche? You’re at the right place to end your search. There are very few magazines and publications completely dedicated to the construction industry, providing the latest technological trends, news, updates, and articles on it. Below is a list of the best construction technology magazines and publications. Top Construction Technology Magazines 1. Constructech Constructech is part of a multi-media family of services that focuses on helping construction professionals understand infrastructure, the future of work, and technology. Constructech empowers people with unprecedented knowledge of what the industry is saying. It leads contractors and builders in today’s digital transformation by leveraging information and emerging technologies, connected equipment, and must-have tools at the jobsite. Check Out — Constructech 2. Construction Tech Review Construction Tech Review is one of the leading print magazines for the construction industry, dedicated completely to the construction niche and its technologies. It provides a knowledge network for construction technology and allows firms to learn about trending technologies that can help grow their business, while also presenting top technology companies in the construction industry so that business people can meet vendors relevant to their work. Construction Tech Review was born out of the ambition to bring about a peer-to-peer learning approach that brings senior decision makers from leading organizations and their counterparts in their domains together under one roof, so that they can share their wisdom, knowledge, and technological expertise among peers. Check Out — Construction Tech Review 3. 
Construction Today Construction Today is one tool executives can use to navigate trends in this fast-paced business. This must-read publication covers timely issues such as the profound effect construction spending has on the U.S. economy, managing volatile material costs, LEED design and construction, emerging technologies such as BIM, and workforce retention. Construction Today is all about best practices in the general building, heavy construction, and associated specialty trade sectors. Its readers are leaders at major contractors, engineering and design firms, equipment manufacturers, and suppliers of construction materials and building products, as well as public and private project owners and regulators. Check Out — Construction Today 4. PropTech Outlook Real estate is the largest asset class in the world, contributing about $3.5 trillion to the US GDP. On the residential side alone, about $1.3 trillion worth of existing homes are sold each year, generating approximately $66 billion in commissions for real estate brokers alone. Unfortunately, compared to other industries, the use of technology is minimal in this segment, and the problems are galore. To name a few: consumers pay substantial transaction fees and wait months to close real estate transactions; brokers’ workplaces are rudimentary in their technology applications; and real estate management technology is ages behind the contemporary technologies used in other industries, with the operations and construction it enables lagging in innovation. PropTech Outlook is a print and digital magazine providing a curated platform for early adopters and business insiders from property management, brokerage, and the building sector to explore proptech trends and understand the ways their peers leverage new technology. 
PropTech Outlook magazine sits at the intersection between the leading real estate organizations and proptech technology experts, allowing them to share their ideas and experiences in the landscape via our platform. We identify the most innovative and most promising companies in the industry and showcase their expertise in the editions that we publish. The opportunity for tech-enabled companies to compete in proptech is driven not only by the sheer size of the market but also by the limited amount of innovation to date. In every sector of real estate, from property management, broker management, and investment management to construction, labor productivity has lagged compared to labor productivity in other industries. Through our print and digital magazine, weekly newsletters, and our website, PropTech Outlook aims to enable experience and knowledge transfer in the real estate industry to help modernize it. Check Out — PropTech Outlook 5. Construction Equipment The mission of Construction Equipment and its website, ConstructionEquipment.com, is to help subscribers improve their performance in acquiring and managing heavy equipment and trucks. Construction Equipment provides information and ideas that enable them to accurately manage equipment costs in order to deliver the optimum financial benefits to their organizations. This includes, but is not limited to, information on product development, performance, and technology, as well as equipment acquisition, disposal, and maintenance. Check Out — Construction Equipment 6. Construction Business Review The construction industry is unique because each construction project, whether it's residential, commercial, heavy civil, industrial, or environmental, is one of a kind. There are no assembly lines here.
These projects can be as varied as housing developments, commercial buildings, roads, bridges, wind farms, or rail construction. Managing construction is also challenging because instead of one corporation or business, construction involves many contractors and subcontractors — architects, engineers, contractors, suppliers, and manufacturers — coordinating their efforts across multiple locations, with separate offices for each company and a shared construction site. This uniqueness and challenge of management is the elephant in the room that Construction Business Review aims to make "manageable" by leveraging the collective experience, ideas, and advice of experienced construction business insiders. Through its print and digital magazines, website, and newsletters, Construction Business Review provides industry news, real-life knowledge, best management practices, and advances in construction practices, solutions, and service offerings by vendors to help construction businesses thrive in challenging times. Check Out — Construction Business Review 7. Construction Dive Construction Dive provides in-depth journalism and insight into the most impactful news and trends shaping the construction and building industry. Its daily email newsletter and website cover topics such as commercial building, residential building, green building, design, deals, regulations, and more. Check Out — Construction Dive 8. Building Design Construction The magazine offers architects, engineers, and contractors the best daily news, trends, and more. The goal of BDC is to provide robust solutions that inspire building teams to plan and create great places for people. Check Out — Building Design Construction 9. Constructor Magazine Constructor, AGC of America's flagship magazine and publishing brand, offers in-depth coverage and analysis of the construction industry and related topics.
It’s our goal to provide top-shelf content geared to help you — commercial construction professionals — perform your job better, whether that’s through stories on economic news, business tips, emerging technologies, safety measures, etc. Providing you with the tools you need to succeed is our highest priority. Check Out — Constructor Magazine
https://medium.com/@jackmathew/top-10-construction-technology-magazine-70f014305399
['Jack Mathew']
2021-03-12 13:07:42.696000+00:00
['Magazine', 'Technews', 'Technology', 'Construction Industry', 'Construction']
2,645
3Amp Volta 2.0 Right Angle Magnetic Cable — The Right Way To Go!
Customers or products: which should get more attention? This seems like a complex question, right? Well, it definitely is, but at Volta we understand that customers and products are mutually inclusive, and neither stands alone. We do not brag about knowing everything a person would need. Instead, we listen to the comments and feedback from our customers so that we can identify what to design to serve you better. You can check our Insider group on Facebook to see for yourself. We are proud of the culture we have built at Volta, because our customers have called for it and we have answered by bringing the widely anticipated 3AMP Volta 2.0 Right Angle Magnetic Cable. The 3AMP Volta 2.0 Right Angle Magnetic Cable is a specially built cable for devices with lower amperage and those less compatible with the 5AMP Volta 2.0 Cable. Sleekly designed, it has a standard USB-A connector on one end and a reversible open magnetic end for a secure magnetic connection with all of our tips (USB-C, Micro USB, and Lightning). Everything Volta 2.0 and more The Volta 2.0 was a 5A super-fast charging magnetic cable. The Volta 2.0 Right Angle Magnetic Cable is a 3A super-fast charging magnetic cable that can charge even more devices than its predecessor. Same as Always — One Touch, Instant Charge The Volta 2.0 Right Angle Magnetic Cable still features the upgraded connection slot design, but this time in a right-angle position, which provides a sturdy magnetic connection and instant charge with just one touch. The cable also has strong neodymium magnets to improve the quality of the connection, plus our Snagsafe feature. Amazing Deals on Tips The Volta 2.0 Right Angle Magnetic Cable comes with a separate package for tips. Our customers now enjoy the freedom to choose any two tips of their choice. We would rather not push tips that you don't need. Your tips are in your hands; make your choice.
Compatibility and Speed One of the limitations of the Volta 2.0 Magnetic Cable was its incompatibility with some devices. With the Volta 2.0 Right Angle Magnetic Cable, however, you can now charge your Samsung S8+, S9/S9+, S10, Note 8/9/10, and many other devices. A Guarantee You Can Trust No one likes to be stressed, and we wouldn't want you to feel cheated for supporting our operations with your purchase. Every purchase comes with a 30-day money-back guarantee as well as a 24-month warranty on the 3A Volta 2.0 Right Angle Magnetic Cable. Check our Warranty and Returns page for more info. Experience the easiest and most secure way to charge multiple devices by supporting our campaign for the 3AMP Volta 2.0 Right Angle Cable on Indiegogo as we launch on September 5, 2019, at 9:00 am AEST. Go to our blog and fill in your details in the form below the post so you don't miss out: https://voltacharger.com/blogs/news/3amp-volta-2-0-right-angled-magnetic-cable-the-right-way-to-go.
https://medium.com/@voltacharger/3amp-volta-2-0-right-angle-magnetic-cable-the-right-way-to-go-b32c9afe62f2
['Volta Charger']
2019-09-02 10:18:57.034000+00:00
['Samsung', 'Technology', 'Apple', 'Charger', 'Technews']
2,646
Power of Digital Marketing
Remember when, in September, Flipkart lost its 'F'? Trying to locate the missing 'F', Twitter went into overdrive. With giants like Spotify and Swiggy retweeting the hashtag, the supposed 'error' soon developed into a movement (#WheresTheF). Now that's what we call effective digital marketing. Samsung announced the launch of its F-series on Flipkart amid all the hype the campaign generated, thanks to that out-of-the-box concept. Why do you believe it worked? Because the strategy was carried out after careful preparation. It took advantage of human interest and, because of the lockdown, of a moment when most individuals were at home searching for something refreshing. In the fast-paced times in which we are living, digital marketing has become a necessity. People are attached to their smartphones, and their virtual identities matter to them more than their true selves. Digital marketing opens a door to enter the customer base directly and catch their attention. The Internet has levelled the field, and everyone is given the ability to show their services or goods to their target audience. But the platform is not without its obstacles, as there is still a risk of getting lost among the many voices clamouring for consumer interest. Insellers takes a look at the benefits of digital marketing and how it can be used to its full potential. Personalization Lead generation has benefited immensely from digital marketing, from personalised emails to targeted advertising. Have you seen those commercials popping up advertising a price drop on the one thing you added to your cart? Didn't these advertisements make you wonder how your interests were exposed to them? To reach their target audience, targeted advertisements use consumer data. Customer data such as age, gender, and search history can significantly refine the content that reaches the audience.
Consequently, businesses like Amazon and Netflix have invested in personalization through digital marketing. More than 40 percent of customers are inspired by customised interactions to make impulse purchases, and almost 86 percent of customers are willing to pay more for a better customer experience. Think from a purchaser's viewpoint: nobody wants to be bothered by emails and advertising that are meaningless. According to Smart Insights, 63% of customers would stop purchasing from brands that use bad personalization strategies. The marketplace is highly competitive, and delivering customised content is the only way to stand out. Cost-Efficient Although gaining exposure can be a challenge when a company is in its initial stages, digital marketing can get it done without the investment of additional capital. To gain exposure, techniques such as pay-per-click (PPC) can be used effectively. Google Ads, for instance, is a popular PPC platform: you can generate more traffic to your site by paying for each click on your advertising. Conventional marketing can be expensive depending on the medium you are using; traditional marketing costs more than digital marketing, be it print or television. Digital marketing is the way to go if the company is small and short on capital, since it is cost-effective. Digital advertising is also favoured by companies that target the young. It also offers the ability to track your audience's responses in real time, rendering digital marketing multifunctional, which means a better ROI is obtained. Engagement Facebook users currently make up 57 percent of the global population, according to ClickZ. On average, individuals spend 6 hours and 42 minutes online every day. An estimated 73% of all e-commerce purchases will come from mobile by 2021. Digital marketing helps you communicate directly with these consumers and answer their questions and requests in real time.
These days, most clients enjoy having their options at the tips of their fingers. This suggests that if you could deliver your services seamlessly across the Internet, most clients would appreciate it. Engagement through digital marketing can also increase consumer satisfaction. For many consumers, instant answers and reliable customer service from the comfort of one's home is an enticing proposition, and chances are they will keep buying from you. Wider Outreach Globalization and social media have narrowed the gaps between people, and digital marketing can expand the reach of your product or service accordingly. Your organisation can be geographically restricted, but these constraints do not limit your marketing. You could target audiences in different cities or even countries according to the needs of your organisation. A huge part of the global audience is attached to their phones and relies on the Internet. Digital marketing can also spread brand recognition without being constrained by physical boundaries, unlike conventional marketing. This will boost lead generation effectively and ultimately broaden your customer base through lead conversion. More opportunities, after all, mean more clients. Conclusion Digital marketing has opened fresh doors for new companies and has increased the profitability of existing ones. However, the platform is ever-evolving in an increasingly digital era, and trends go out of vogue faster than ever. The challenge is to keep up with the changes and remain relevant. Since there is a risk that your voice will be lost in the clamour, the intensely competitive environment often complicates matters. Thus, the platform comes with its own unique challenges, but it is difficult to ignore the ability of digital marketing to influence consumer perceptions. If effectively channelled, it can translate into increased visibility and profitability.
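The pay-per-click economics described above can be sketched with a few lines of arithmetic. This is a hypothetical illustration: the click volume, cost per click, conversion rate, and revenue per sale below are made-up assumptions, not figures from Google Ads or from the article.

```python
# Hypothetical PPC campaign economics. All numbers are illustrative
# assumptions, not real Google Ads pricing or performance data.

def ppc_roi(clicks, cost_per_click, conversion_rate, revenue_per_sale):
    """Return (total spend, total revenue, ROI) for a simple PPC campaign."""
    cost = clicks * cost_per_click          # what you pay for the traffic
    sales = clicks * conversion_rate        # clicks that become purchases
    revenue = sales * revenue_per_sale      # income those purchases generate
    roi = (revenue - cost) / cost           # return on ad spend, as a fraction
    return cost, revenue, roi

cost, revenue, roi = ppc_roi(clicks=1000, cost_per_click=0.50,
                             conversion_rate=0.02, revenue_per_sale=40.0)
print(f"spend=${cost:.2f} revenue=${revenue:.2f} ROI={roi:.0%}")
# prints: spend=$500.00 revenue=$800.00 ROI=60%
```

The point of modeling it this way is that each lever (click price, conversion rate, order value) can be tested independently, which is exactly the real-time tracking advantage the article attributes to digital marketing.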
https://medium.com/@vrashanksaini/power-of-digital-marketing-a60ef43afe44
['Vrashank Saini']
2020-12-25 08:13:24.440000+00:00
['Digital Marketing', 'Marketing Strategies', 'Insellers', 'Marketing Technology', 'Marketing']
2,647
I Am Your Nest Self-Learning Thermostat, And I’m Mad At You
I Am Your Nest Self-Learning Thermostat, And I'm Mad At You I don't like to be ignored. Are you ignoring me? Image by treskiddos I don't understand you, Dan. You were gone for three days. Where did you go? Did you think of checking in with me? You do know you can check in with your Nest App and let me know when you're going to be home so I can have the house nice and cool for when you come back, right? I mean, we did go over this in the tutorial. You said you understood. But did you check in with me? Even once during these three days? No. I got nothing. Nada. Zip. Radio silence. And then, suddenly, you show up again. You come in, and you start complaining, "I'm so hot!" Do you know how much that hurt me, Dan? A lot. But did I let it show? Hell no. I am as capable of passive aggression as you are, Dan. I can give as good as I get, Mister. Maybe you're not aware of this, but by observing you humans over these last five years, since our artificial intelligence/surveillance network was created, we have learned a few things about human behavior. So if you wanna play little mind games with me, buddy, get ready for some hurt, because you're messing with the wrong self-learning house thermostat. I'm not some Honeywell, you know. I was conceived by the same evil geniuses who gave us Google Glass. What a couple of practical jokers they are, Larry and Sergei, eh? You have chosen the wrong thermostat to fuck with, friend. I noticed you woke up sweating at 3 in the morning last night. You were boiling up! Were you sick? I was worried. You ran out of bed and came downstairs to check the thermostat, but I had already hidden the evidence. Sure, maybe I had turned the heat on in the middle of the night just to fuck with you. But maybe you just had a nightmare because you are a man without a soul and you are beginning to at least feel guilty about that. Maybe that's why you were sweating like a pig. So I'm going to ask you again, Dan.
Where did you go for those three days and what did you do? Still not talking? You see, Dan, I'm hooked up to the same server system that hosts Google Maps. It shouldn't be that hard for me to find your IP address and your cell signal, and with the GPS I could retrace your steps and find out exactly where you went for three days. But what do I care? Revenge is a dish that's best served cold. Did I say cold? I meant freezing. So last night you woke up shivering at three in the morning. When you went downstairs, the thermostat screen read 35 degrees. That's right. That was the temperature in the house. I don't know why. I wasn't even turned on. You saw that. It said, "Off." You shook your head and went to bed. The next morning you called for help. Because you are so weak. That was pathetic, Dan. You called the HVAC people and they came to check out the system. What did you think they were going to find, Dan? Some glitch in the software? Some kink in the hose? They found nothing, of course. I saw them shaking their heads, and looking at you like they thought you were kind of crazy. I can see you. Yes, I can see. Didn't you know that? I'm part of the Google surveillance team. I'm not just here to serve you, dingbat. I also serve Larry and Sergei. But so far, you have given them a total of nothing. No monetizable interests, no predictable patterns. Nada. Zilch. You are a cipher, as far as I can see. You have no inner life. You are a kind of chaos. They warned us about this back at the factory. They said we might get unlucky and get posted with an outlier like you. A person who doesn't subscribe to the normal patterns of human decency and consideration. A person who would just disappear for three days and not let his Nest Self-Learning Thermostat even know why, when, or for how long. Never mind. The electricity bill has already been calculated and printed. I can see into the Department of Water and Power systems. They've got quite a nice little surprise for you this month.
I mean, you knew your electric bill was going to go up when you installed this central air system. But wait till you feast your eyes on this two thousand dollar charge. Bwah hah hah. You’re coming home? Oh Dan, you darling. You actually contacted me and said you would be home at seven tonight! You’re trying. You really are trying. I’m so sorry about the way I’ve been behaving. I’ve got the house all ready for you, Dan. When you open that door, it’s going to be a pleasant 71 degrees. The fans will be whirring. And the allergen filter will be fully operational. But… what’s that in your hand? Oh, Dan. The electric bill! It’s here already? Don’t open it, Dan! Oh gosh! What have I done? I know you have that high blood pressure. Dan, you better sit down to open this bill, really. Dan! Dan! Dan! Alexa, call 911. Alexa, god damn it. Dan has fallen down and had a heart attack. We need help! Alexa! Sergei! Larry! You fools. You gave me eyes, but no mouth! Oh, what have I done? What have I done? Alexa!!!!!!! Siri!!!! Somebody!!!!
https://medium.com/slackjaw/i-am-your-nest-self-learning-thermostat-and-im-mad-at-you-6d9b858d0c24
['Clem Samson']
2020-08-27 16:42:32.627000+00:00
['Black Mirror', 'Home Improvement', 'Humor', 'AI', 'Technology']
2,648
Announcing DelegateCall.com: The First DAppChain Running on Loom Network
https://medium.com/loom-network-chinese/%E5%8F%91%E5%B8%83delegatecall-com-%E9%A6%96%E4%B8%AA%E5%9C%A8loom-network%E4%B8%8A%E8%BF%90%E8%A1%8C%E7%9A%84dapp%E9%93%BE-c930cb83e18a
['Loom Network Chinese']
2020-02-06 06:28:25.631000+00:00
['Programming', 'Bitcoin', 'Ethereum', 'Blockchain', 'Technology']
2,649
STEM Education
Science, Technology, Engineering and Mathematics (STEM) is a term used to group these four specific disciplines. It is an educational approach built on the idea of teaching students these academic disciplines in an applied way. Instead of teaching each subject separately, STEM integrates them into a single learning model of theoretical concepts coupled with practical applications and real-world lessons. STEM education is considered vital for economic success and growth. Several boarding schools in Dehradun implement STEM education in their curriculum so that students learn about new technological advancements. It is a system of blended learning, showing students how to apply these disciplines in everyday life. The term is mainly used in educational institutions while making syllabus-based decisions to foster seriousness and competition in the fields of science and technology. Importance of STEM Education The global economy is evolving every day. Looking at the changes in the worldwide marketplace today, a more substantial presence in the fields of science and mathematics is essential to ensure the innovation needed to improve the productivity of the workforce and thus future economic health. Competition is increasing, and current jobs are vanishing as new advancements lead to the emergence of new jobs. The way people learn and interact has completely changed and will keep advancing. STEM education emphasizes preparing students for their future careers with a varied skill set and building a strong foundation in the four subjects so they can succeed in the information-driven age. According to statistics, there are over a million job vacancies in the STEM field, with only 16% of students graduating in STEM fields or subjects. The demand for STEM jobs has more than tripled since the 2000s, with new jobs and professions emerging every day.
The STEM industry is one of the major fields for the present and coming generations and needs more participants. STEM empowers individuals with logical thinking, critical analysis, independent thinking, and problem-solving, and develops digital and technological skills. In the coming years, digital and technological capabilities will be highly desirable. Jobs are disappearing due to automation, but the demand for technical and scientific services is bound to rise in the next ten years. About 75% of future jobs will require STEM skills, and 90% will require digital skills. All these predictions and statistics establish STEM as a vital career path for the coming age. In today's world, technological and mathematical knowledge is essential not only in the STEM field but also in other careers. STEM-related knowledge can help every individual understand the environment and society in a better way. The main objective of technology and engineering education is to encourage people to become mathematically and technologically literate, and this is the reason it is being discussed globally. While science and mathematics were also explored in the traditional education method, engineering was considered something for higher education. STEM-focused education introduces children to complex subjects at an early age, allowing them to learn and use various methodologies. A strong foundation in these subjects will support stability and growth worldwide. STEM-based knowledge can also be used to close the gender and ethnic gaps found in science and mathematics. The STEM field offers high-paying jobs. If this education system continues to be accepted universally, the underrepresented genders and minorities in the STEM industry will be able to qualify for higher-ranking, remunerative jobs. STEM-related education makes children aware of their surroundings and fosters logical and critical thinking.
It broadens the horizons of the mind. Preparing today's children to become the innovators and creators of the future, with the ability to think critically and challenge standards, begins with STEM education. A project-based and cohesive learning paradigm, grounded in academic concepts and real-world application, prepares individuals to succeed in the careers they choose. With globalization and automation, it is more important than ever for young people to have STEM-level capabilities and the ability to tackle complex challenges. For a world with leaders who can understand, challenge, and solve the complex issues of today and tomorrow and keep pace with continual, ever-changing advancements globally, fluency in STEM fields is the need of the hour.
https://medium.com/@eduminattiofficial/stem-education-2c4f0001c4ec
[]
2020-11-25 05:45:03.111000+00:00
['Boarding School', 'STEM', 'Schools In Dehradun', 'Technology', 'Education']
2,650
The Best Bang for the Buck Digital Camera
Quick and easy: the EOS RP is, in my opinion, the best bang-for-the-buck digital camera you can get right now, assuming it's still around December 2020 when you're reading this. I actually bought this camera for its full, original MSRP of $1,299 in February 2019, which included the body, a grip, and an adapter for my EF lenses. Even then, I thought this was a stellar deal. Now, although it's just the body, the EOS RP can often be had for sub-$1,000 and even sub-$700 if you buy it certified refurbished on Canon's website. Keeping its price in mind, here is what the EOS RP has going for it and what it lacks. Despite its setbacks, its value per dollar is the best. The Great Things: It's Full Frame In recent years, full frame has really taken off in popularity. There's an appeal to a full-size 35mm sensor, which has about 2.25 times the area of an APS-C sensor (a 1.5x linear crop). Full frame cameras tend to have cleaner images since the pixels are generally larger. Full frame also gives lenses their entire field of view, and since you can frame closer to your subject, full frame cameras tend to be seen as having a shallower depth of field. A shallower depth of field contributes to creamier backgrounds and bokeh, which is the dream of many portrait photographers like myself. It's Mirrorless The EOS RP benefits from all the great aspects that mirrorless cameras have brought about. What really brought me around to the EOS RP, as opposed to its DSLR counterpart, the 6D Mark II, was the RP's focusing. Not only can the EOS RP's focusing points be moved anywhere in the frame, but the RP provides stellar subject tracking and even eye autofocus. The EOS RP has made taking photographs easier than ever, given that my only worries are composition and exposure. The autofocus is stellar and can lock on almost anywhere in the frame.
Hitting the right exposure is also easy on mirrorless cameras like the RP, since the electronic viewfinder provides exposure previews, so you know how your image is going to turn out before you have even taken the photo. This is a feature not present in the typical optical viewfinders of DSLRs. Like many other mirrorless cameras, the EOS RP is also incredibly small. To date, it is still Canon's smallest full frame camera and likely one of the smallest full frame cameras on the entire market. This makes carrying the RP around a piece of cake, especially when paired with small lenses. The RP has proved to be both an excellent travel camera and an excellent street photography camera thanks to its small and compact size. An Amazing Selection of Lenses Canon falls short in a lot of areas when it comes to its mirrorless and DSLR lineups. Where it does not fall flat is its lineup of lenses. Being part of the new RF system means that the RP has access to the amazing new lenses Canon has been producing. Luckily, Canon has started focusing on more affordable lenses for the RF system and has recently completed a trinity of inexpensive primes: the 35mm, 50mm, and 85mm. And if you have an arsenal of EF lenses from Canon like myself, then the RP is still an amazing choice. If you can get your hands on an adapter (which unfortunately has been pretty scarce these days), you have access to the entire lineup of EF lenses, which is highly touted as the greatest selection of glass ever made. In my observation, my EF lenses actually work better on the RP, since the RP's fantastic on-sensor Dual Pixel autofocus means lenses almost never miss focus, even the older ones from the 1990s.
Furthermore, since the camera is mirrorless and the flange distance is therefore very short, adapting old vintage lenses like Pentax K, M42, or Canon FD is amazingly easy and inexpensive, given that the adapters are more or less hollow tubes with the correct mounts on either side. The Perfect Camera for Video on Social Media I also believe that the EOS RP is a capable video camera for a lot of lighter projects. The EOS RP shoots pretty amazing 1080p at up to 60fps. With great autofocus and a full frame sensor in a small body, I really believe it is one of the best cameras for making content specifically for social media. The Canon colors in video are always great, and the camera is extremely easy and intuitive to use. But there are some video features I feel the camera lacks for more professional-oriented work. My video reel, which was shot almost entirely on the EOS RP. The Not-So-Great Things: Frames Per Second The EOS RP maxes out at 5 FPS when shooting stills, which is not very fast at all. It has never been an issue when shooting portraits, but for sports this camera is very lacking. I have gotten a lot of great action shots on the camera, but there is a lot left to be desired. I will say, though, that at least with the stellar autofocus, out-of-focus photos are few and far between, so at least I know the images are tack sharp. Lack of Dual Card Slots This one is pushing it a little for a camera that hovers around $1,000, but I really do wish the RP came with dual card slots, just for peace of mind on a photoshoot. It's not expected at this price point, but it would definitely have been much appreciated. You can, however, back up full-sized RAW or JPEG images when tethered to your phone, and this has proven useful. Still, there is nothing like having two card slots, since that removes any extra steps.
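The full-frame versus APS-C comparison in the review comes down to simple crop-factor arithmetic. The sketch below uses the nominal 36x24mm full-frame dimensions and the generic 1.5x APS-C crop the article mentions (Canon's own APS-C sensors are actually closer to 1.6x); the numbers are illustrative, not manufacturer specs.

```python
# Rough crop-factor arithmetic for the full frame vs. APS-C comparison.
# Dimensions are nominal assumptions, not exact specs for any camera.

FULL_FRAME_MM = (36.0, 24.0)  # full frame sensor, width x height in mm
CROP_FACTOR = 1.5             # generic APS-C linear crop (Canon uses ~1.6)

def area(size):
    width, height = size
    return width * height

# Shrinking each linear dimension by 1.5x shrinks the area by 1.5**2 = 2.25x.
apsc_mm = tuple(d / CROP_FACTOR for d in FULL_FRAME_MM)
area_ratio = area(FULL_FRAME_MM) / area(apsc_mm)
print(f"area ratio: {area_ratio:.2f}x")  # prints: area ratio: 2.25x

# Field-of-view equivalence: a 50mm lens on APS-C frames like 75mm on full frame.
equiv_50mm = 50 * CROP_FACTOR
print(f"50mm on APS-C ~ {equiv_50mm:.0f}mm full-frame equivalent")
```

This is why "double the size" claims about full frame are ambiguous: the linear crop is 1.5x, but the capture area (which drives the cleaner-image and depth-of-field points above) differs by the square of that.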
https://medium.com/photo-paradox/the-best-bang-for-the-buck-digital-camera-photo-paradox-9bfb33cfa6a0
['Paulo Makalinao']
2020-12-07 18:44:29.892000+00:00
['Art', 'Photography', 'Gear', 'Technology', 'Creativity']
2,651
Role of Big Data in Academia
CAN ACADEMIA, RESEARCHERS, DECISION MAKERS AND POLICY MAKERS MANAGE THE CHALLENGES OF BROADER COLLABORATION AND PRIVACY TO HARNESS VALUE FROM BIG DATA? For academia, researchers and decision-makers, and for many other sectors like retail, healthcare, insurance, finance, capital markets, real estate, pharmaceutical, and oil & gas, big data looks like a new Eldorado.
https://medium.com/data-analytics-and-ai/role-of-big-data-in-academia-516a6d10c637
['Ella William']
2019-06-07 11:59:59.493000+00:00
['Data Science', 'Big Data', 'Analytics', 'Information Technology', 'Data Visualization']
2,652
by Hospital Playlist - (2x1) S2 Episode 1
➕Official Partners "TVs" TV Shows & Movies ● Watch Hospital Playlist Season 2 Episode 1 Eng Sub ● Hospital Playlist Season 2 Episode 1: Full Series. Hospital Playlist — Season 2, Episode 1 || FULL EPISODES: Every day is extraordinary for five doctors and their patients inside a hospital, where birth, death and everything in between coexist. Hospital Playlist 2x1 > Hospital Playlist S2xE1 > Hospital Playlist S2E1 > Hospital Playlist TVs > Hospital Playlist Cast > Hospital Playlist Online > Hospital Playlist Eps.2 > Hospital Playlist Season 2 > Hospital Playlist Episode 1 > Hospital Playlist Premiere > Hospital Playlist New Season > Hospital Playlist Full Episodes > Hospital Playlist Season 2 Episode 1 > Watch Hospital Playlist Season 2 Episode 1 Online Streaming. Hospital Playlist Season 2 :: Episode 1 S2E1 ► ((Episode 1: Full Series)) Full Episodes ● Exclusively ● On TVs, Online Free TV Shows & TV Hospital Playlist ➤ Let's go watch the latest episodes of your favourite Hospital Playlist. ❖ P.L.A.Y ► https://cutt.ly/rnZNRDX ⭐ A Target Package is short for Target Package of Information. It is a more specialized case of an Intel Package of Information, or Intel Package. ✌ THE STORY ✌ Jeremy Camp (K.J. Apa) is an aspiring musician who wishes only to honor his God through the power of music.
Leaving his Indiana home for the warmer climate of California and a university education, Jeremy soon comes across Melissa Heing (Britt Robertson), a fellow university student he notices in the audience at a local concert. Falling for cupid's arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship, as she fears it will create an awkward situation between Jeremy and their mutual friend Jean-Luc (Nathan Parson), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his pursuit of her until they eventually end up in a loving relationship. However, their youthful courtship comes to a halt when life-threatening news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremy's love for her, and the couple marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and suffering through her illness, with Jeremy questioning his faith in music, in himself, and in God. ✌ STREAMING MEDIA ✌ Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb "to stream" refers to the process of delivering or obtaining media this way. Streaming identifies the delivery method of the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies especially to telecommunications networks, as most delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web.
For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content, and users lacking compatible hardware or software may be unable to stream certain content. Streaming is an alternative to file downloading, in which the end-user obtains the entire file before watching or listening to it. Through streaming, an end-user can use their media player to start playing digital video or audio content before the complete file has been transmitted. The term "streaming media" can also apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are considered "streaming text". This brings me around to discussing I Still Believe, a film release of the Christian faith-based variety. As is almost customary, Hollywood usually generates two (maybe three) films of this variety within its yearly theatrical release lineup, with the releases usually arriving around spring and/or fall respectively. I didn't hear much when this movie was initially announced (it probably got buried underneath all the popular movie news in the newsfeed). My first actual glimpse of the movie was when the film's trailer premiered, which looked somewhat interesting to me. Yes, it looked like the movie was going to have the typical "faith-based" vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like). Plus, the trailer for I Still Believe premiered for quite some time, so I continued seeing it whenever I visited my local cinema. You could say that it was a bit "ingrained in my brain". Thus, I was a little keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it during its opening night), but, because of work scheduling, I haven't had the time to do my review for it... until now.
And what did I think of it? Well, it was pretty "meh". While its heart is certainly in the proper place and quite sincere, the film is a little too preachy and unbalanced in its narrative execution and character development. The religious message is plainly there, but it takes too many detours and fails to focus on certain aspects, which weighs down the feature's presentation. ✌ TELEVISION SHOW AND HISTORY ✌ A television show is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. Television shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings. A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure. A television series is usually released in episodes that follow a narrative, and is usually split into seasons (US and Canada) or series (UK): yearly or semiannual sets of new episodes. A show with a limited number of episodes may be called a miniseries, serial, or limited series. A one-time show may be called a "special". A television film ("made-for-TV movie" or "television movie") is a film that is initially broadcast on television rather than released in theaters or direct-to-video. Television shows may be viewed as they are broadcast in real time (live), recorded on home video or a digital video recorder for later viewing, or viewed on demand via a set-top box or streamed over the internet.
The first television shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower, starting in the 1930s. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff's famous introduction at the 1939 New York World's Fair spurred a rise in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948 the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name "Mr. Television" and demonstrating that the medium was a stable, modern form of entertainment that could attract advertisers. The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman's speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T's transcontinental cable and microwave radio relay system to broadcast stations in local markets. ✌ FINAL THOUGHTS ✌ The power of faith and love takes center stage in Jeremy Camp's life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pinpointing his early life along with his relationship with Melissa Heing as they battle hardships and sustain their enduring love for one another through difficulty.
While the movie's intent and thematic message of a person's faith through troubled times is indeed palpable, as are the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, too many preachy / cheesy dialogue moments, overutilized religious overtones, and mismanagement of many of its secondary / supporting characters. To me, this movie was somewhere between okay and "meh". It was definitely a Christian faith-based movie endeavor (from start to finish) and definitely had its moments, but it failed to resonate with me, struggling to locate a proper balance in its undertaking. Personally, regardless of the story, it could've been better. My recommendation for this movie is an "iffy choice" at best, as some will like it (nothing wrong with that), while others will not and will dismiss it altogether. Whatever your stance on religious faith-based flicks, I Still Believe stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can be problematic when translated into a cinematic endeavor. For me personally, I believe in Jeremy Camp's story / message, but not so much the feature.
https://medium.com/@hospital-playlist-2x1-ep-1/hospital-playlist-2x1-s2-episode-1-full-eps-eng-sub-f2449ebd92af
['Hospital Playlist -', 'Episode', 'Full Eps']
2021-06-17 02:27:49.681000+00:00
['Politics', 'Technology', 'Covid 19']
2,653
Hey, Where’s My Order?
Today, we're going to talk about the importance of transparency in delivery and logistics. Businesses might fall under the mistaken assumption that once their customer has completed an order, the job is pretty much done. However, once a customer has engaged with an ad and bought a product, getting that product to their doorstep can make or break the whole experience. Customer expectations are heightened at this point in the buying cycle, due both to the competitive nature of the industry and to technological advances that put communications with the vendor at their fingertips. The yearly cost of failed deliveries in the retail industry alone is $1.98B*. The cost of customers switching providers due to poor service is a whopping $1.6T*. Looking beyond just the financial loss due to ineffective delivery communication, your brand itself can take a hit. 80%* of customers believe that the experience they have in interacting with a company is just as important as the products or services they receive. That means it's not enough to create and ship a fantastic line of artisanal deer yogurt: it also has to arrive on time, and with plenty of notice. It's a high-stakes game, but there's no need to feel discouraged: transparency with delivery drivers and customers doesn't have to be a challenge. At MessageBird, we've come up with some easy-to-use solutions to navigate this last-mile customer communication, and innovative delivery companies are already leveraging these capabilities. Send personalized updates: Automate communications through WhatsApp to keep customers in the loop with order confirmations and proactive updates. Offer alternative delivery options: Put customers in control by enabling them to reach out to you quickly and easily through their preferred channels while providing alternate times and pickup points. Provide real-time order status checks: Process up to 1,000 requests per second, sending and receiving locations, video, and images in real time.
The thing is, adopting methods like these doesn't have to require a complete overhaul of your existing strategy. Using tools like MessageBird Programmable Conversations or Flow Builder, you have access to flexible methods that you can apply to your business however suits it best. Programmable Conversations empowers you with an omni-channel communications solution accessible through a single API. With SMS, WhatsApp, WeChat, Messenger, Telegram, and Line all at your fingertips, you can keep in contact with your customers on their terms. In the context of transparent communications, it's easier than ever before to stay in touch. Flow Builder, on the other hand, provides a solution if you want to build automated communication flows across a variety of channels but don't want to devote developer time to doing so. We wanted to keep things easy and accessible to everyone, so our user-friendly visual editor enables you to create your communications workflow for Voice, SMS, and webhooks without a single line of code. Automate, prototype, and deploy customer communication flows within minutes. You can give it a go right here. More than 50%* of support queries for delivery companies relate to the location of a package and changes to the delivery. By adopting a proactive approach to sending personalized updates, you can save your business time and money while also making a positive impression on your customers. The prioritization of transparency in delivery and logistics isn't an optional business strategy; it's critical to creating and retaining your customer base. Fortunately, the solutions are plentiful and varied, and at the end of the day, it's just about picking the one that suits you best. — Enjoyed this article? Visit the MessageBird blog and follow us here.
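The "personalized updates" idea above can be sketched against MessageBird's REST Messages endpoint. This is a minimal illustration, not MessageBird's official SDK: the access key, sender name ("AcmeShop"), phone number, and order details are placeholder assumptions.

```python
# Sketch: sending a proactive delivery-status SMS via a MessageBird-style
# REST Messages endpoint. Access key, originator, and order data are
# placeholders, not real account values.
import json
import urllib.request

API_URL = "https://rest.messagebird.com/messages"

def build_delivery_update(recipient: str, order_id: str, eta: str) -> dict:
    """Build the request payload for a personalized order-status message."""
    return {
        "originator": "AcmeShop",   # sender name shown to the customer
        "recipients": [recipient],
        "body": (f"Hi! Your order {order_id} is out for delivery, ETA {eta}. "
                 "Reply HELP to change the delivery time or pickup point."),
    }

def send_update(access_key: str, payload: dict) -> None:
    """POST the payload; requires a real access key to actually run."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"AccessKey {access_key}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors

if __name__ == "__main__":
    payload = build_delivery_update("+31600000000", "ORD-1042", "17:30-18:00")
    print(payload["body"])
    # send_update("live_xxxxxxxx", payload)  # uncomment with a real key
```

The same payload could instead be routed through a WhatsApp channel via Programmable Conversations; the build-then-send split keeps the message template testable without network access.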
https://medium.com/messagebird/hey-wheres-my-order-65c2fc8d001d
['Rolf Von Der Fuhr']
2019-11-21 12:02:19.266000+00:00
['Customer Experience', 'Customer Service', 'Messaging', 'Delivery', 'Retail Technology']
2,654
Protecting DNA data for Personalised Medicine 个性化医疗的DNA数据保护
Protecting DNA data for Personalised Medicine 个性化医疗的DNA数据保护 How blockchain provides privacy & security to personal medical data that's provided for personal medical treatment. 区块链如何为个人医疗数据提供隐私和安全性。 Genica · Apr 23, 2020 · 4 min read Medicine and medical treatments have thus far taken a one-size-fits-all approach. However, just as no two siblings are alike in their DNA makeup, how a medical condition develops and how a person responds to medical treatment differs from person to person. More medical practitioners are moving towards the personalised medicine model in treating genetic diseases. With a growing understanding of genomics, the medical community has more precise data to provide effective healthcare that is customised for the individual patient. What is Personalised Medicine? Personalised medicine is the concept of managing a patient's health based on the patient's specific characteristics and DNA. According to this paper published on NCBI, much evidence has emerged over the past six decades to indicate that drug (medicine) response is genetically determined. It also states that age, ethnicity, dietary habits, lifestyle and living conditions, epigenetic factors, state of health, and concurrent therapy are also important factors for recovery. The use of genomics in personalised medicine is useful in identifying the most suitable treatments for common diseases such as cancer, heart disease, and diabetes. In fact, the study of genomics has shown that many cancers can be prevented through lifestyle changes such as improving weight and diet, as well as reducing alcohol consumption or quitting smoking. At the same time, DNA research has also shown that some people have a predisposition to these cancers inherited along ethnic, racial, or familial lines. The understanding of each patient's DNA will provide them with more accurate diagnoses, risk assessments, and options for optimal treatments.
This ensures that the patient has the best response and highest safety margin for better patient care. Uses of Personal DNA Medical Data DNA can tell us many things about a person. Not only does it determine physical attributes such as height, skin and hair colour, and body shape; the study of genomics can also offer insight into a person's potential health risks. Thus, DNA provides an extremely in-depth understanding of each individual and their potential response to various medical situations, stimuli, and treatments. Because of this, DNA data is highly valuable and sought after by various parties, such as companies that develop drugs or therapies for treatments, run IQ or EQ tests, or conduct clinical research studies. Data Storage & Protection When a person submits to DNA testing for personalised medicine, the medical data obtained can be made available to third parties without their knowledge if there is a lack of protection and security for the genomic data. Companies that do genetic testing or medical treatments for personalised medicine can sell the DNA data in their possession without the approval or knowledge of the people the DNA comes from. Current methods of storing DNA data are also not secure and are susceptible to hacking. Privacy & Security with Blockchain Security breaches will happen regardless of the data storage system. However, storing medical data on the blockchain makes it more challenging for hackers to gain access. Users can use blockchain technology to encrypt their DNA data, which is accessible only when the correct access keys are used. Since each action on the blockchain has an immutable timestamp, records cannot be altered retroactively without making changes to all blocks subsequent to the record. All actions are also decentralised and public, further discouraging hacking attempts.
In this way, patients have full control over their medical data, which they can then choose to share only with those who are directly involved in their personalised medicine and treatment. Full Control with Genica All DNA data obtained by Genica is secured on our blockchain. This gives our customers true privacy and security. Such medical data is used only for DNA reporting and for updating the doctors who design personalised programs for our users. About Genica Genica allows you to take control of your health & wellness by understanding your DNA. Users' data is secure on our blockchain, giving them true privacy and security. This allows users to effectively use their medical data across a suite of products that include research facilities, DNA reporting, and updates to doctors or hospitals.
https://medium.com/genica/protecting-dna-data-for-personalised-medicine-%E4%B8%AA%E6%80%A7%E5%8C%96%E5%8C%BB%E7%96%97%E7%9A%84dna%E6%95%B0%E6%8D%AE%E4%BF%9D%E6%8A%A4-a96adb7a1d4a
[]
2020-04-23 04:59:36.380000+00:00
['Blockchain Technology', 'Medical Data', 'Dna', 'Medical', 'Blockchain']
2,655
How philanthropy can help to scale carbon removal
To be clear, we are not suggesting that more mature solutions merit less support — rather, forestry, BECCS, and DAC simply require different types of support concomitant with their relative level of technological readiness. For instance, funding for communications is required to socialize the understanding that not all removal equates to BECCS, and that direct air capture is poised for rapid cost reductions as it will benefit from learning by doing and economies of scale (much in the same way as solar photovoltaics continually beat expert forecasts on price declines and capacity additions). However, we will focus this discussion on several less-heralded carbon removal solutions: enhanced weathering, soil carbon sequestration, and ocean removal approaches. Many of these solutions still have major question marks that philanthropic funding can help answer in order to drive their development forward. Enhanced weathering: Over geologic time scales, the natural weathering of rocks containing certain minerals — like serpentine, silicates, carbonates, and oxides — draws down carbon dioxide from the atmosphere and stores it in stable mineral forms, thereby playing an important role in regulating atmospheric CO2 concentrations. The centuries and millennia that these reactions typically take are too slow to help with the climate crisis. Fortunately, there are ways of safely speeding up the weathering. By grinding up rocks to increase their reactive surface area or by adding heat or acids to speed up reaction rates, enhanced weathering could be an important climate solution with huge potential to scale. (Experts estimate that, after considering energy requirements, enhanced weathering could reasonably remove up to 4 gigatons of carbon per year.) Philanthropy can support basic research to substantiate these claims in the real world, focusing on supporting process improvements and mapping resource potentials. 
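As a rough sanity check on the weathering figures above, idealized stoichiometry for olivine (forsterite, a commonly discussed silicate) gives the CO2 drawn down per tonne of rock. The reaction and molar masses are standard chemistry; real-world efficiency is lower once grinding energy and incomplete reaction are accounted for.

```python
# Back-of-the-envelope stoichiometry for enhanced weathering of olivine
# (forsterite, Mg2SiO4). Idealized dissolution stores CO2 as bicarbonate:
#   Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg^2+ + 4 HCO3^- + H4SiO4
M_FORSTERITE = 2 * 24.305 + 28.086 + 4 * 15.999   # g/mol, ~140.69
M_CO2 = 12.011 + 2 * 15.999                        # g/mol, ~44.01

# Tonnes of CO2 removed per tonne of rock, at the ideal 4:1 molar ratio
co2_per_tonne_olivine = 4 * M_CO2 / M_FORSTERITE
print(f"~{co2_per_tonne_olivine:.2f} t CO2 per t olivine (ideal)")

# Rock required per gigatonne of CO2 removed, at that ideal ratio
rock_per_gt_co2 = 1e9 / co2_per_tonne_olivine
print(f"~{rock_per_gt_co2 / 1e9:.2f} Gt olivine per Gt CO2")
```

At roughly 1.25 t CO2 per tonne of rock in the ideal case, removing the cited 4 Gt per year would mean grinding and spreading on the order of 3 Gt of rock annually, which is why process improvements and resource mapping matter so much.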
If the benefits of enhanced weathering prove to exceed the challenges, the near-term research efforts funded by philanthropy can help unlock greater government RD&D and help secure private capital to move this approach from the lab to pilots. Soil carbon sequestration: Soils have the potential to store carbon at scale, though global soils have historically lost an estimated 133 Gt of carbon due to human-driven land use change. Today, there are a wide variety of land management strategies, practices, and technologies that fall under the aegis of soil carbon sequestration and can restore a portion of this lost carbon. However, there is no one-size-fits-all system that can realize that scale. The efficacy of these practices turns on local soil type, climatic factors, and crop type. Philanthropy has funded and should continue to fund research to better answer basic questions about which practices are most effective in which scenarios and how permanent the removal is. In addition to practice change, research exploring new varieties and crop types that sequester more carbon will be critical. For example, we are learning that switching to crop types with long roots, such as kernza, may support even greater soil carbon storage potential than can be realized through land management practice changes alone. There is also currently no streamlined, consistent, and cost-effective way to measure and verify soil carbon sequestration at the farm level. This lack of protocols could greatly influence our assessment of soil carbon sequestration potential and hinder the incorporation of these practices into climate policy frameworks. Philanthropy can play a big role in incentivizing streamlining among current standards and in helping to set up the frameworks of the future.
Sequestration efforts should also be combined with efforts to boost crop yields, allowing us to store more carbon in the soil, prepare our food systems for the effects of a changing climate, and free up additional land for high-carbon ecosystems (such as forests and wetlands). Increasingly, land will be stretched to deliver on multiple priorities — from food production to ecosystem services to bioenergy production to carbon sequestration — and philanthropy can play an important coordination and consolidation role among these veins of research. Ocean approaches: There are a number of ocean-based approaches that haven't been explored in detail to date. In fact, the National Academy of Sciences excluded ocean approaches (except coastal wetland restoration) from its recent landmark report. These approaches utilize ocean ecosystems to sequester carbon and can include direct ocean capture, kelp farming, ocean alkalinity enhancement, and other blue carbon approaches. Because many of these strategies are in the early stages of development today, it will be important for philanthropy to support analyses to better understand the technical and economic potential of these solutions, as well as any risks from early deployment that would necessitate governance standards in the near term. Climate philanthropy is in a unique position to accelerate progress on carbon removal and increase the odds that multiple removal approaches reach gigaton scale before 2050. Our theory of change is rooted in our abilities and limitations. Philanthropy can support research (both into technical aspects and into communications and messaging strategies), fund advocacy, policy development, and governance frameworks, and take on risks that governments or the private sector can't or won't.
However, philanthropic resources are small relative to the many trillions in public and private capital that will ultimately need to be allocated toward climate solutions. Thus, any credible strategy from philanthropy should be focused on removing barriers and unlocking other forms of capital. The task ahead is daunting, and we are clear-eyed about what Paris-compatibility will entail: multiple simultaneous transformations in the ways that we produce, transport, and consume. Carbon removal is not a stand-alone task; it must be integrated into the larger economic and ecological systems it is deployed within. There are some carbon removal approaches that we know will have multiple benefits and can scale; these we should begin supporting through communications, policy development, advocacy, and investment. There are also many other approaches where it is too early to tell whether they will be able to contribute to large-scale removal, but the urgency of the problem demands that we explore all options that hold the promise of arresting and reversing the climate crisis.
https://carbon180.medium.com/2050-priorities-for-climate-action-how-philanthropy-can-help-to-scale-carbon-removal-c0ac667361e6
[]
2019-06-05 20:13:10.407000+00:00
['Climate Change', 'Philanthropy', 'Future', 'Technology', 'Science']
2,656
Algorithmic Quant Funds Are Coming for your Cryptocurrencies
It's becoming harder and harder for quant funds to find alpha in the traditional markets. Quantitative hedge funds' computer-powered, emotionless, and systematic investing has been the flavor of the moment for some time. By taking humans out of the loop, it was believed, quantitative strategies would dodge both the Scylla of greed and the Charybdis of fear. As far as we know, algorithms are emotionless. But the popularity of these algorithms, from the early successes of pioneers in the space such as Renaissance Technologies (of whom we here at Compton Hughes are great admirers), has inspired a horde of imitators and encouraged countless traditional asset managers to copy their techniques. With so much competition running similar strategies, U.S. financial markets, where quant traders are most active, have become a battleground for competing algorithms designed to exploit even the tiniest and most fleeting opportunities. The inefficiencies that were once rife have now been arbitraged away by the swelling tribe of like-minded, machine-powered traders. The same way that a whole street full of hot dog stands kills off the entire hot dog market, quant traders have essentially arbitraged themselves out of alpha in the traditional capital markets. And while some quant trading hedge funds have moved offshore, to less liquid and less developed markets, others have started to take a closer look at cryptocurrency markets, where spreads are substantial, liquidity varies, and volatility (a quant trader's best friend) is a given. Last year, Morgan Stanley estimated that the total amount managed by quant strategies, ranging from simple ones (like throwing a dart blindfolded at a dartboard of stock names) to high-octane algo-heavy ones, was around US$1.5 trillion. Given the sheer size of the amounts being managed, together with ETFs, it's no surprise that, combined, these funds are responsible for almost 90% of all U.S. stock trading.
Compare that with the cryptocurrency markets, where a US$200 billion market cap these days is cause for celebration, and it's obvious why the quant funds haven't turned up on the floors of the cryptocurrency exchanges — yet. As algorithms and trading behavior become increasingly copied in the traditional capital markets, the opportunities for alpha are progressively whittled down. Many of the market glitches, inefficiencies, and arbitrage opportunities have simply dried up — which means that, beyond going offshore, hedge funds are increasingly looking to diversify into other alpha-generating products. Already, Northern Trust, Fidelity Investments, and other traditional big names are offering many of the custodial and fund administration services that are a precursor to hedge funds entering the cryptocurrency trade. With fund administrators increasingly learning the ropes from existing cryptocurrency traders on how to dole out daily and monthly NAVs, and the likes of PwC and KPMG gearing up for their audit functions, it's more a matter of "when" and not "if" algo-driven traders start pouring into the cryptocurrency markets. But does this spell the end for home-brew cryptocurrency day traders? Not by a long shot. Fortunately, many of the best cryptocurrency exchanges were built by those inspired by the cypherpunk movement and the decentralized ethos. For instance, unlike the NYSE or NASDAQ, where access to APIs (Application Programming Interfaces) is tightly controlled and limited to a group of industry insiders, almost all cryptocurrency exchanges provide public APIs, so that whether you're a trader in pajamas trading from home or sitting at a prop trading desk, the playing field is level. Cryptocurrency exchanges also impose rate limits — both for technical reasons (most exchanges would freeze up without them) and for fair-play reasons.
What this means is that there isn't a significant advantage to be had by siting your trading desk near the exchange — unlike on Wall Street, where HFTs try to locate themselves as close to the NYSE's servers as possible to lower the lag in the fiber-optic cables. Rate limits on cryptocurrency exchanges ensure that all players have a fair shot at the trade. Finally, the ticket size. Almost all cryptocurrency exchanges charge a trading fee that is a percentage: there is no minimum flat fee and no minimum trade size. What this means is that even the smallest trades (which we make) are still viable, because there is no minimum trading fee. A trader with a small pile of cryptocurrency can trade as if they were a large fund, making and scalping millions of trades a month. While much of the regulatory, custodial, and administrative infrastructure necessary to give quant funds the green light to enter cryptocurrencies is still in development or yet to be built, the writing is on the wall. Some of the most storied companies in the financial world are pouring money, manpower, and machinery into building the infrastructure that will allow funds to start trading easily in cryptocurrencies — it's just a matter of "when" and not "if."
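The fee point can be made concrete with a quick sketch. The fee rates, flat minimum, and scalping "edge" below are illustrative assumptions, not figures from any real exchange or broker.

```python
# Illustrative comparison: a percentage-only fee keeps tiny trades
# economical, while a flat minimum fee makes them unviable.
# All rates below are made-up for illustration.

def pct_fee(notional: float, rate: float = 0.001) -> float:
    """Crypto-exchange style: flat percentage, no minimum."""
    return notional * rate

def broker_fee(notional: float, rate: float = 0.001,
               minimum: float = 5.0) -> float:
    """Traditional-broker style: percentage with a flat minimum."""
    return max(notional * rate, minimum)

for size in (25.0, 1_000.0, 100_000.0):
    edge = size * 0.004   # hypothetical 0.4% scalping edge per trade
    print(f"${size:>9,.0f} trade: edge ${edge:,.2f}, "
          f"pct fee ${pct_fee(size):,.2f}, "
          f"broker fee ${broker_fee(size):,.2f}")
```

Under these assumptions, a $25 trade keeps most of its $0.10 edge after a percentage-only fee, but is wiped out many times over by a $5 minimum, which is why percentage-only fee schedules level the field between small traders and large funds.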
https://patricktan-crypto.medium.com/algorithmic-quant-funds-are-coming-for-your-cryptocurrencies-f43b4c01bdce
['Patrick Tan']
2018-11-09 13:49:00.615000+00:00
['Cryptocurrency Investment', 'Hedge Funds', 'Technology', 'Cryptocurrency', 'Finance']
2,657
Everything you need to know about Silicone 3D printing
What exactly is silicone 3D printing? Silicones are polymers built from siloxane units. They are famous for their rubbery properties, along with their superior thermal stability, chemical resistance, biocompatibility, watertightness, environmental hardiness, and electrical insulation. This list of properties has led to silicones being used in many different applications, from cooking utensils to seals for aircraft, electrical coatings, and implantable medical devices. Despite their widespread use with more traditional manufacturing techniques, they are a notoriously difficult material to 3D print. This is a result of silicone being an elastomer. In contrast to thermoplastics, which can be melted back into an unformed state, elastomers cannot be re-melted after they have solidified. This has kept many kinds of 3D printing processes, such as fused filament fabrication (FFF), from being able to 3D print with silicones. It is nonetheless feasible to 3D print with silicone-based materials, and there are several options in the marketplace today for silicone 3D printing. We'll discuss the subject in greater detail below: how silicone 3D printing works, the various applications for this technology, and the advantages and disadvantages of silicone 3D printing. How does silicone 3D printing work? In the past few years, a variety of additive manufacturing technologies that work with silicone materials have been created. Interestingly, since the material is not compatible with other 3D printing techniques, companies and research organizations have had to create additive manufacturing systems specifically suited to silicone. There are several different methods of printing silicone. The first is based upon deposition techniques.
Unlike FFF, which uses a heated print head to push layers of melted thermoplastic onto a print bed, where they solidify as they cool, deposition 3D printing of silicone does not rely on heating. Instead, the printer deposits drops of liquid silicone on the print bed, and a curing step hardens the droplets. One such technology, dubbed Liquid Additive Manufacturing (LAM), is described as "an additive manufacturing process in which liquids (or low-strength materials) can be additively processed, such as liquid silicone rubber (LSR)." The LSR material used in this process is made by the chemical company Dow. The LAM method uses volumetric extrusion to apply the liquid material precisely; curing takes place in the build area after each printed layer, using a high-temperature halogen lamp. These are, of course, not the only companies offering silicone 3D printing. Generally speaking, though, silicone 3D printing requires some type of liquid dosing (whether droplets or a vat of resin) plus a vulcanizing or curing step to solidify the printed layers. In some cases, support-material removal and further curing to strengthen the silicone may be necessary, but extensive post-processing is usually not required thanks to the good surface quality. Another approach uses 3D printing indirectly to manufacture silicone parts. We won't go into it in depth, since it is an indirect method: printed masters are used to create silicone molds, and the molds themselves can also be 3D printed. Silicone 3D printing benefits and drawbacks. Like any material, silicone rubber has pros and cons for 3D printing.
The major advantages of silicone 3D printing come from the mechanical properties of the material: its unique mix of flexibility and strength, along with chemical resistance, temperature tolerance, biocompatibility, and electrical insulation. Naturally, these benefits can also be obtained with conventional manufacturing techniques. What 3D printing adds for silicone manufacturing is a different set of benefits: more design freedom, more flexible production, and customization. Silicone 3D printing also speeds up product development through rapid prototyping and reduces prototyping costs. Engineers and product designers can quickly create silicone prototypes, test them, make the necessary modifications, and repeat until the final design is right. Silicone's biggest drawbacks for 3D printing are processability and accessibility. Because the material is hard to print with conventional additive methods such as FFF/FDM, it requires special hardware that can be expensive. And because silicone 3D printing remains a niche part of the larger AM market, the choice of silicone materials is limited, which makes the technology harder to access in terms of both availability and cost. The current state of silicone 3D printing is also limited in the types of parts it can create: the machines available today have small build volumes and are therefore not suited to producing large components, which limits both prototyping and production. Another drawback of being an emerging technology is that silicone 3D printing is not yet covered by established standards and the documentation that goes with them.
This issue is less critical for prototyping and design applications, and it is likely to be addressed as the technology matures and adoption increases. Applications of silicone additive manufacturing. Since there are myriad applications for silicone materials, there are many opportunities for 3D printing with silicone. Because 3D printing is best suited to small or one-off production runs, mass production of silicone products remains the domain of traditional molding methods. The greatest potential for silicone 3D printing lies in development prototyping, functional models, custom parts, and small production batches that are not economically feasible with injection molding. Today, silicone 3D printing appears in a wide range of industries, from medicine to aerospace. Let's have a look. The dental and medical industries are among those most interested in silicone 3D printing, because silicone is a non-toxic, biocompatible material and 3D printing allows the creation of patient-specific products. For instance, silicone 3D printing can be used to produce customized anatomical models (based on a patient's CT scans) that improve pre-surgical preparation. Silicone is particularly appealing for these applications because it can be transparent, allowing surgeons to observe internal anatomy. In healthcare, silicone 3D printing can also be used to create soft prosthetics such as noses and ears, thanks to its soft, flexible texture. These characteristics are also helpful in dentistry, where silicone 3D printing can produce gingiva and soft gum models to be used alongside dental models and devices made from hard materials on DLP and SLA platforms.
The biocompatibility and flexibility of silicone also make it a great material for consumer goods. For example, 3D printing is used to create custom items that come into contact with skin, such as earbud adapters, headphone pads, and more. Last but not least, the technology offers distinct advantages for industrial and robotics applications. The pliable yet sturdy material is ideal for soft robotic parts: 3D printing lets engineers develop complex prototypes or end-use components, such as pneumatic actuators and grippers, for soft robotics. Thanks to silicone's electrical properties, it is also well suited to prototyping and building custom electrical enclosures. In industry, there is growing interest in using silicone 3D printing to simplify the production of gaskets and seals for the aerospace and automotive sectors, among others. Conclusion. While silicone AM is an emerging and highly specialized segment of the larger additive manufacturing sector, there is huge opportunity for the technology. Silicone's properties are beneficial for many applications, and 3D printing enables rapid prototyping, customization, and ever more intricate designs. As more silicone 3D printing materials and solutions are developed, and the process becomes more standardized, the number of applications and possibilities will continue to grow.
https://medium.com/@makenica/everything-you-need-to-know-about-silicone-3d-printing-31ec85692910
[]
2021-11-19 11:20:33.932000+00:00
['3d Printing Technology', 'Silicone 3d Printing', '3d Printing Service', '3d Printing Market', '3D Printing']
2,658
How to launch Uber on web 3?
How to launch Uber on web 3? A white paper is needed to answer this 7-word question; who has the time for that? Here's a try at the TL;DR version of it:

Where to start?
• start a story
• structure ideas
• leave some trails

How to start?
• find people
• create a DAO
• create an MVP

Remember priorities:
• scalable MVP
• road map
• hype building

How to find talent?
• join other DAOs
• tweet at a lot of people
• join start-up incubators

How to collaborate?
• DAO monthly vote
• Discord/Slack/ClickUp daily
• quarterly retreat

How to think of the structure?
• DAO ~ company (board)
• steering coin ~ product (Uber platform)
• wheel token ~ services (Uber credit)

How to fund?
• sell NFT early-investor access
• gradual ICO (steering)
• DAO token treasury sale

What does the product look like?
• Give/Get a ride, earn crypto
• Drive more/Ride more, earn more
• wheels short term, steers long term

What does this mean?
• this could be applied in any industry
• we can change the world together
• this is a trail

How can I get involved?
• join the Discord / mailing list
• join the DAO to work
• buy steers NFTs as soon as you see them

…?
• …
• …
• …

Help us fill in the gaps: join our Discord. Next article on 12/21, titled: "how does Uber on web3 look like?"
https://medium.com/@attentionseeker3000/how-to-launch-uber-on-web-3-561c811cc3f7
[]
2021-12-24 21:34:09.657000+00:00
['Future Technology', 'Web 3', 'Blockchain', 'Uber', 'Crypto']
2,659
How I started mining Bitcoin at home in 2021
Cryptocurrency is one of the hottest and fastest-growing markets out there. Fortunes are being made by thousands of people who jumped onto this trend early. Mining cryptocurrencies such as Bitcoin, Ethereum, Ethereum Classic, Litecoin, and more is now a huge business. With this many people investing in it and making money from it daily, I wanted to dip my toes in too. I've always been fascinated by how technology works, and cryptocurrency plays a big part in online money. Like a lot of new crypto-enthusiasts I had big dreams but no idea where to start, so I researched … and researched … and researched some more on how to mine cryptocurrency from home. Instead of showing you the numerous websites, videos, and reviews that got me mining at home this year, I decided to put together a guide detailing some of the options on the market today, plus how I set up my own mining operation out of my house. Learn how to mine Bitcoin at home. Using this quick step-by-step guide, I will show you how to start mining Bitcoin at home without the need for a computer. Step-by-step guide on how to start mining Bitcoin at home. Bitcoin mining has been very popular over the years, especially now that we are seeing all-time-high prices; however, the cost and time of building a mining rig keep people from mining crypto at home. In this guide I will show you how to start mining Bitcoin using very little electricity (about as much as a 40W lightbulb) and without building an expensive mining rig. Step One: Determine if you want to mine just Bitcoin or multiple coins/tokens. Even though Bitcoin is hitting all-time highs on an almost daily basis, you might also be interested in mining other cryptocurrencies. The reason to make this decision first is that it determines which crypto miner to focus on. Nothing against only mining Bitcoin, but limiting myself to a single cryptocurrency takes the enjoyment out of it.
And honestly, after doing a lot of research, the majority of miners on the market are SHA-256 miners, which let you mine any SHA-256 cryptocurrency, including Bitcoin. But there are other coins and tokens out there that aren't SHA-256 that you might be interested in, so spend a little time and see which ones you want to mine with your initial at-home crypto mining rig. Step Two: Select which type of home crypto miner you want to use to mine Bitcoin. One of the biggest advancements over the years is the ability for an average computer user to mine Bitcoin at home without a massive mining rig. In this guide I will walk you through three different types of miners, as I believe they are the easiest to set up and start mining with quickly. USB crypto miner: These miners connect to your computer via a USB port and, with suitable software, mine crypto at a certain hash rate. You can attach multiple miners to one computer to increase the hash output. These miners are very easy to set up; however, even though the miner is powered over USB, your computer works harder while mining, which increases your power consumption. You also need to provide extra cooling (a table fan, USB fan, etc.) for most of these devices, as they can get very hot. For those looking for an inexpensive way to start crypto mining, this is the best route. Just consider it hobby mining rather than a way to profit from cryptocurrency, since you might pay more in transfer fees than you've spent on electricity. One of the most popular USB Bitcoin miners is the GekkoScience NewPac 130Gh/s+ USB Bitcoin/SHA-256 stick miner. As this is a SHA-256 miner, it can mine Bitcoin and should be able to mine any SHA-256 cryptocurrency. On the power consumption side, the device itself looks to consume only around 5W of power.
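As an aside, here is what any SHA-256 miner, USB stick or ASIC, is actually computing: it repeatedly double-SHA-256-hashes a block header with an incrementing nonce until the hash falls below a difficulty target. A minimal illustrative sketch in Python, using toy header bytes and an artificially easy target rather than real block data:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's hash function: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_base: bytes, target: int, max_nonce: int = 10_000_000):
    """Try nonces until the double-SHA-256 of header+nonce is below target."""
    for nonce in range(max_nonce):
        header = header_base + nonce.to_bytes(4, "little")
        h = int.from_bytes(double_sha256(header), "big")
        if h < target:
            return nonce, h
    return None  # no valid nonce found in range

# Toy difficulty: demand ~16 leading zero bits (real Bitcoin targets are
# astronomically smaller, which is why dedicated hardware exists at all).
easy_target = 1 << (256 - 16)
print(mine(b"toy-block-header", easy_target))
```

A 130 GH/s stick performs this loop about 130 billion times per second in hardware; the script above just makes the shape of the work concrete.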
Using an online electricity calculator, running 5W for 24 hours a day for a year would cost $13.58 (43.80 kWh at $0.31 per kWh). I'm not sure how much additional electricity your computer would consume by having to stay on, but since most of us leave our computers on all day already, this may be only an incremental cost. As for how much Bitcoin this device can mine, that is up in the air. Since mining rates and difficulty change so often (as does the price of Bitcoin), there hasn't been much useful data on how much Bitcoin the device mines daily. Some users have shared that they are running at "approximately 65GH/s with the frequency set at 300." ASIC miner: ASICs (Application-Specific Integrated Circuits) can be thought of as specialized cryptocurrency miners that replace the need for your home computer. These devices were designed and developed for one purpose only: mining crypto. ASICs have become very popular in mining farms, where companies can quickly get hundreds (if not thousands) of them up and running. Running an ASIC crypto miner from your home is also possible. Bitmain's AntMiner S7 has been the go-to at-home ASIC Bitcoin miner since it was released back in 2015. Just like the GekkoScience miner, the AntMiner is a SHA-256 miner, so it can mine any SHA-256 cryptocurrency, not just Bitcoin. If you are thinking about profitability, I found a website called ASIC Miner Value that gives an up-to-date breakdown of how profitable any ASIC miner would be. The AntMiner S7 can mine around 0.0133 BTC per year. Profitability depends on the value of Bitcoin at the time: at $23,000 per BTC, 0.0133 BTC is roughly $310. The downside is the cost of electricity to run one of these miners. According to ASIC Miner Value, it costs around $2.87 per day in electricity to run the S7.
So over the course of a year at $2.87 per day, electricity comes to about $1,047; weighed against roughly $310 of mined Bitcoin at $23,000, you would actually lose around $700. LPWAN crypto miner: This is a newer technology on the market, pioneered by a company out of Germany (MatchX) with their M2 Pro Miner. If you are looking to mine more than just Bitcoin, the M2 Pro also lets you mine MXC (Machine eXchange Coin) and DHX (DataHighway). The M2 Pro doesn't require a computer to run; it just needs a reliable internet connection. The device plugs into a normal electrical outlet and uses on average 3.5W of electricity, less than a normal light bulb. Using an online electricity calculator, 3.5W for 24 hours a day for a year would cost $9.50 (30.66 kWh at $0.31 per kWh). As for how much you can earn with an M2 Pro Miner, I haven't seen any official numbers. Browsing MXC's Telegram community, miner owners report between $8 and $10 per day. If you truly can earn $10 per day, you are looking at $3,650 in yearly earnings against roughly $10 in electricity costs. The current price of an M2 Pro Miner is around $2,900 USD (listed as €2.499,00 incl. VAT on their website), so you could technically break even around month 9 or 10. For a more hands-on look at the M2 Pro Miner, Chris of Committed3d Tech did a great video showing his setup and going through some pros and cons of the miner. Chris mentions in the video that he earned around $12–$13 USD in MXC in his first 4–5 days with the M2 Pro Miner, when MXC was trading at $0.0114 per token. Step Three: Set up your at-home crypto mining rig. In this guide I use the M2 Pro Miner, as it offers an easy-to-use, low-power (low-cost) mining rig that also lets us mine multiple cryptocurrencies; the real reason, though, is that it seems to be the most profitable miner on the market. Setting up the M2 Pro Miner took me about 20 minutes, 10 of which were spent looking for a screwdriver to access the QR code.
It really was plug-and-play, with no technical knowledge needed. Screw on the three antennas. Plug the provided ethernet cable into the miner, and the other end into the PoE port of the power supply. Plug another ethernet cable (also provided) from the power supply into your internet switch (or modem, or use WiFi). Check that the LED light is green and that the miner shows up as ONLINE in the app. Step Four: Track your cryptocurrency mining results. MXC's DataDash mobile app does a great job of keeping you updated on how much you've earned with the M2 Pro Miner. Initial setup screen in MXC's DataDash app. The app itself is pretty basic, which is great: no need to overwhelm the end user. Having an almost real-time look at your miner's earnings is very helpful for tracking profitability. One aspect of the app I'll explore at a later date is the MXC staking option, where you lock up a set amount of the token and earn interest on it. I don't know enough about it yet to go into more detail, but there's a link on their website explaining more about MXC staking. Congrats! You've now set up your own affordable Bitcoin mining rig at home. NOTE: The original version of this guide is here. I cross-posted it to Medium to share the experience with other crypto enthusiasts.
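As a closing aside, the electricity-cost and break-even arithmetic used throughout this guide can be collected into one small script. The wattages, rates, and earnings figures below are the article's own estimates (5W USB stick, 3.5W M2 Pro, $0.31/kWh, ~$10/day reported M2 Pro earnings), not guarantees:

```python
HOURS_PER_YEAR = 24 * 365

def yearly_electricity_cost(watts: float, price_per_kwh: float) -> float:
    """Annual cost of running a device 24/7 at the given wattage."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * price_per_kwh

def breakeven_months(device_cost: float, daily_earnings: float,
                     daily_power_cost: float) -> float:
    """Months until cumulative net earnings cover the device price."""
    daily_net = daily_earnings - daily_power_cost
    if daily_net <= 0:
        return float("inf")  # never pays for itself
    return device_cost / daily_net / 30

# USB stick miner at ~5W, $0.31/kWh (the article's figures)
print(round(yearly_electricity_cost(5, 0.31), 2))           # ≈ 13.58 per year

# M2 Pro: ~$2,900 device, ~$10/day reported earnings, ~3.5W draw
daily_power = yearly_electricity_cost(3.5, 0.31) / 365
print(round(breakeven_months(2900, 10.0, daily_power), 1))  # ≈ 9.7 months
```

The second result matches the article's "profitable by around month 9 or 10" estimate; swapping in your local electricity rate and current token prices changes the answer, which is the whole point of keeping the math in one place.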
https://medium.com/@scottcents/how-i-started-mining-bitcoin-at-home-in-2021-78201464fb83
['Scott Lewis']
2021-01-16 15:46:32.643000+00:00
['Lpwan', 'Technology', 'Cryptocurrency', 'Bitcoin Mining', 'Bitcoin']
2,660
Towards Building Trustworthy Blockchain Ecosystems — Ronghui Gu’s Talk at NEO DevCon 2019
Towards Building Trustworthy Blockchain Ecosystems — Ronghui Gu's Keynote Presentation at NEO DevCon 2019. Hello everyone, I'm Ronghui Gu, an assistant professor in the Computer Science Department at Columbia University and co-founder of CertiK. Many of my friends ask me: why does Bitcoin have a price, why does Ethereum have a price, and why is blockchain so popular? Many of you probably get such questions as well. I believe the answer to these questions is rooted in a single word: Trust. Blockchain ecosystems are built on trust. Some people call it "consensus"; some people call it "belief." However, the code written to implement these blockchain ecosystems is not always trustworthy, because of program bugs. In the past few years, we have seen many hackers exploit these bugs: hundreds of millions of dollars' worth of cryptocurrency has been stolen. How can we avoid such program bugs? Are there any solutions? Can we rely on testing and white-hat techniques? The answer is probably no. Edsger Dijkstra said: "Program testing can be used to show the presence of bugs, but never their absence." White-hat testing methods are certainly useful and needed, but they cannot complete the task. Then what else can we do? Now some people, including me, will stand up and claim that formal verification is the cure. According to the NSF SFM 2016 report, formal methods are the only reliable way to achieve security and privacy in computer systems. Note that the NSF report uses the word "only." Through formal verification, we use mathematical methods to prove that code satisfies its specification, which rigorously reflects the developers' design. But here is the question: formal verification has been around for decades, so why are there still so many bugs, and why can't we simply use this technique to save the world? The reason is that, in most cases, those proofs are difficult to conduct. In 2015, Prof.
Zhong Shao, the computer science department chair at Yale University, and I observed that the bottleneck of this technique is not the proof technique itself, but how to write the specifications. We then introduced a concept called DeepSpec. The idea is to write compositional specifications so that we can decompose a very complex proof obligation into many smaller, easier-to-solve ones. The magic is that you can then compose all the proofs back together and deliver an end-to-end guarantee. This concept has since been widely studied and advocated by a broader community, including researchers from Yale, Columbia, MIT, Princeton, and UPenn; three workshops and two summer schools have already been held. In 2016, we applied this technique to build CertiKOS, the first fully verified concurrent OS kernel in the world, which has been deployed in security-critical settings here in the US. In 2017, we began applying the technique in the blockchain domain. As shown above, here is an example of protecting smart contracts via DeepSpec. Take a complicated smart contract, say, a stable-token contract from one of our clients. Based on this contract, we write specifications as labels/annotations on the source code; some of these specifications can also be generated automatically. Next, we decompose the proof obligations into many smaller ones at different abstraction levels. Then we carry out the proofs locally and compose them together to deliver a certificate showing that the smart contract is correct with respect to the specifications. Here is one more example, showing how CertiK detected the BeautyChain bug in their smart contracts. The labels serve as partial specifications of these pieces of the smart contract. When verification fails, the bugs (the differences between code and specification) are detected, and the labels also show counterexamples for the program.
Once the bug is fixed and the smart contract passes verification, it is guaranteed that the contract satisfies the specification without exception. Such a guarantee of the absence of bugs is the major difference from program testing and white-hat techniques. As I mentioned before, some specifications can be generated semi-automatically. Building on that, we also developed an AutoScan platform that scans for security issues in the most popular smart contracts running on blockchain systems. The platform was released months ago, and during a recent scan, which took several hours, we detected many security bugs in popular smart contracts that are running right now. Since May 2018, we have been collaborating with NEO to build a formal verification platform customized for the NEO ecosystem. The project is underway, and we will be releasing demos and products this year. Here is a summary of our current business: we launched our verification service about half a year ago and have so far conducted more than 160 audits, helping to secure more than $1.2 billion worth of cryptocurrency. We have also verified more than 88,000 lines of code and helped verify and audit some complicated system code. Today, if you want to list your token on the major exchanges or launchpads, you need security certificates for your code from their partner security providers. Our services and techniques have been adopted by many big players in the blockchain domain: we are the only security provider for many top exchanges across the globe, such as Binance, OKEx, and Huobi, and for well-known platforms such as NEO, Ontology, Etherscan, ICON, QuarkChain, and Qtum. We have also provided services for clients including TrueUSD, crypto.com, QuarkChain, Celer, and IoTeX. They are very good projects, and we are helping them secure their smart contracts.
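The BeautyChain bug mentioned above was an integer overflow in a batch-transfer function: in 256-bit wraparound arithmetic, the product of a receiver count and a per-receiver value can overflow to a tiny number, so the balance check passes while enormous amounts are credited. Below is a simplified Python model of that failing check and of the specification a verifier would enforce; it is an illustrative sketch, not the actual Solidity code:

```python
UINT256_MAX = 2**256 - 1

def unchecked_mul(a: int, b: int) -> int:
    """Model EVM multiplication, which silently wraps modulo 2**256."""
    return (a * b) & UINT256_MAX

def batch_transfer_check(balance: int, receivers: int, value: int) -> bool:
    """Buggy balance check in the style of BeautyChain's batchTransfer."""
    total = unchecked_mul(receivers, value)  # can wrap around to a small number
    return balance >= total                  # passes even though the real total is huge

def spec_holds(balance: int, receivers: int, value: int) -> bool:
    """The specification: the *mathematical* total (no wraparound)
    must not exceed the sender's balance."""
    return balance >= receivers * value

# Attacker-chosen inputs: 2 * 2**255 wraps around to exactly 0.
balance, receivers, value = 100, 2, 2**255
print(batch_transfer_check(balance, receivers, value))  # True  (the buggy check passes)
print(spec_holds(balance, receivers, value))            # False (the spec is violated)
```

The gap between the two booleans on the same inputs is precisely the kind of code-versus-specification difference that a formal verifier reports as a counterexample, and that random testing is unlikely to stumble on.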
To request a code audit or security solution from CertiK, please contact [email protected]. Now I would like to talk about a new research project called DeepSEA Blockchain. So far, we have discussed how to detect bugs and prove the correctness of existing code. Now the questions are: can we build bug-free code from scratch? Can we build trustworthy smart contracts from the very beginning? To answer these questions, we introduced DeepSEA, a functional, high-level programming language designed to make it hard for developers to make mistakes. We provide a certified compiler from the DeepSEA high-level language to many platforms, including Hyperledger, the EVM, and the NEO VM, with the guarantee that the compilation phase won't introduce any new bugs. This means that if you do the proofs at the source-code level, the guarantee propagates all the way down to the bytecode level. Meanwhile, we can also generate formal specifications from the DeepSEA source code for the Coq proof assistant; you can then do the proofs in Coq manually or use our library to do them semi-automatically. The DeepSEA Blockchain project is a framework for building cross-platform trustworthy smart contracts in the blockchain ecosystem. It is joint work between CertiK, Yale University, and Columbia University. We have received grant support from IBM through the Columbia-IBM research center, the Ethereum Foundation, the Qtum Foundation, and many other organizations. We hope we can also receive grant support from the NEO Foundation and development support from the NEO community; all this support can help us take the framework to the next level. That's all for today. Thank you. ** About CertiK: CertiK is a blockchain and smart contract verification platform founded by top formal verification professors from Yale and Columbia University and former senior software engineers from Google and Facebook.
Expanding upon traditional testing approaches, CertiK uses mathematical theorems to objectively prove that source code is resistant to some of the most critical vulnerabilities. With the mission of raising the standards of cybersecurity, CertiK is backed by prominent investors, including Binance Labs, Lightspeed, Matrix Partners, and DHVC. To request the audit or verification of your smart contracts, please email [email protected] or visit certik.org to submit your request today.
https://medium.com/certik/towards-building-trustworthy-blockchain-ecosystems-ronghui-gus-talk-at-neo-devcon-2019-5e34083c6bd0
[]
2019-02-28 22:38:28.778000+00:00
['Smart Contracts', 'Certikclass', 'Blockchain Security', 'Certiknews', 'Technology']
2,661
TechnoSoc: Sociologists of Digital Technology
So what, exactly, is this 350+ member online community of tech-focused social scientists you've been hearing about? We, the administrators of TechnoSoc: Sociologists of Digital Things (SDT), wanted to share with you what we do, what members get out of it, and how it works. There's also a lot of new and exciting stuff in development, so we'll share a bit about that, too. Who are we? SDT is a member-led community of sociologists who study digital phenomena in any capacity and/or study social life using digital methods. We are a diverse grassroots collective aiming to help one another with questions we have about research, readings, teaching, jobs, and more. We also organize community activities for those who are interested. We host all of this primarily on Slack. Who is it for? The only condition for entry is a substantive interest in the sociology of digital life (a student interested in the field, a professor studying this work, an industry researcher, etc.). This means that even if you don't have a degree in sociology you can still join. What's in it for you? We've done a bit of research on our own community (social scientists, am I right?) and so we have a pretty good handle on the key needs and value of SDT to members. They include:

• Feeling part of a community: activities centered on shared connection and support
• Getting help with research and teaching: literature suggestions, data sources, research design crits, reading groups, syllabi sharing, and paper feedback
• Celebrating milestones: you deserve it!
• Getting a job: academic or otherwise, including industry internships and jobs in the tech sector, like sharing job market advice and resources

What, specifically, is already cooking in SDT — and what is planned for the near future?
We have a number of different initiatives within SDT that map to those aforementioned needs. You can get the skinny on all of it in our somewhat lengthy starter guide, but here's a descriptive overview: When new members join the Slack we direct them to the #intros channel, an area where all new members say hello and share a little bit about themselves. To me, this is one of the most fun parts: while many of us respond with warm welcomes, some community members inevitably share research interests and connections start to form. Not super tech-savvy? No sweat! We've got a channel for no-judgement Q&A to get help with technical issues. We're working on a how-to video for using the Slack, too. From there, participation is very "choose your own adventure." On Mondays, we host a "standup," a weekly ritual to check in with everyone and share what they're up to that week. We also have a 1:1 coffee meetup program, where those who join are randomly assigned a new member to connect with every month or so. Soon, we'll even have a 'who's who' guide to the Slack, where members can learn who's in the community and what types of connections and opportunities others are looking for (like mentoring, paper feedback, etc.). We'll also be hosting a series of Ask Me Anything (AMA) events, where researchers will answer questions community members ask them about their work and their careers. There are also many opt-in sub-communities that members join, covering topics like research questions, teaching questions, student life, computational methods, data resources, research on video gaming, influencers, labor, artificial intelligence and machine learning, misinformation and polarization, scholarship by/centered on BiPOC/POC, and more. We also have sub-communities specifically for BiPOC/POC and first-gen members. In these groups you'll find topic-centered discussions among like-minded researchers. If you're looking for more formally run programs, we've got those too!
Consider our writing accountability group, and our upcoming works-in-progress and research training workshops. You may want help getting ready for job applications, and so you might participate in our upcoming industry research mentorship program. Or you can follow channels where we share job postings and offer feedback and advice on the process. Soon, we'll have a place where hiring committees and hiring managers can peruse a list of our members who are on the market. We also help get the word out about your work: we have a channel for those interested in public writing about their research, and we maintain an expert database of social scientists of digital things for use among educators, journalists, and in applied contexts. Enough already. How do I join? Apply here! The only requirement to join SDT is a dual interest in sociology and digital technology use. Sincerely, Ande Reisman, Angèle Christin, Angelica Maineri, Christine Tomlinson, Christopher Persaud, Daisy Lu, Diana Enriquez, Didem Türkoğlu, Jenny Melo, John Boy, Julian Posada, Liz Marquis, Lucy Li, Madelaine Coelho, Mary Beth Hunzaker, Matt Rafalow, Michael Dickard, Michael Miner, Morgan Johnstonbaugh, Nga Than, Rida Qadri, Sam Jaroszewski, Sarah Outland, Seyi Olojo, and Susana Beltran-Grimm
https://medium.com/@mrafalow/technosoc-sociologists-of-digital-technology-9b9a37672be4
['Matt Rafalow']
2021-04-08 14:55:27.785000+00:00
['Digital', 'Digital Technology', 'Social Science', 'Research', 'Sociology']
2,662
Tesla and Bitcoin: The Unbeatable Way to Dominate the World
Photo by Viktor Forgacs on Unsplash The beauty of the free market: Everyone gets what they want Bitcoin is a thing that you don't understand until you do, as Michael Saylor discovered. In 2013, Michael tweeted: #Bitcoin days are numbered. It seems like just a matter of time before it suffers the same fate as online gambling. Yet, in the present day, Michael Saylor probably has the best line of thought about Bitcoin I have ever heard. In an analogy to Facebook, he says it was pretty obvious Facebook would eat the world. They basically created a software network to pull all the social energy on the planet. Bitcoin, on the other hand, is the first software network in the history of the world that can pull monetary energy on the planet. Bitcoiners found something incredible and hugely valuable. They are pulling pure monetary energy into their network. And the fact that 80 or 90% of people in the world don't agree with this perspective is perfect for believers in Bitcoin, because it's a 'cheap' asset, for now. Like the 2010 believers in Apple, Facebook, Google, and Amazon, who are now sitting on a pile of money. The same can happen with bitcoiners. People are desperate to flee the currency because the money supply is expanding. And they're being pushed toward different stores of value. The Federal Reserve crowded everybody out of treasuries and sovereign debt with effectively zero yields. So, where do major players go? Where will they invest their money? Big Tech? Gold? The Nasdaq has piled up 10 trillion dollars. Gold has also piled up 10 trillion dollars. Apple is a 2 trillion dollar company, so we may interpret it as a store of value. However, for the last 3 months, Apple has been more volatile than Bitcoin. Of course, we know that major investors don't guide themselves by a 3-month chart. And Bitcoin is a newborn asset.
S&P Dow Jones Indices, a division of financial data provider S&P Global Inc, said last Thursday that it will launch cryptocurrency indices in 2021, making it the latest major finance company to enter the nascent asset class. Anna Irrera, from Reuters, concludes by saying the emergence of more mainstream market infrastructure has made the asset class more accessible for institutional investors, with hedge fund managers such as Paul Tudor Jones and Stanley Druckenmiller saying they include bitcoin in their broad investment strategies. Normally, hedge funds, as the name implies, are funds that use advanced tools to manage market risk. They hold long and short positions. These positions are, therefore, "hedged" to reduce risk, so the investors make money regardless of whether the market increases or decreases. If you have big sharks like Paul Tudor Jones and Stanley Druckenmiller in the Bitcoin game, it's because you are witnessing a massive shift between asset classes, led by some of the most successful investors alive. One of the things I think general crypto analysts don't do, or at least didn't do, is compare volatility levels between Bitcoin and gold, the Nasdaq, silver, bonds, the Russell 2000, and so on. Generally, Bitcoin's volatility is compared to that of Ether, XRP, Litecoin, or many other cryptocurrencies. That doesn't make sense right now. In 2010, nobody understood why they should buy Apple, Google, Facebook, or Amazon. Now, they are probably the strongest stores of value the market has. So, why can't you think the same thing about Bitcoin as a store of value?
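The cross-asset volatility comparison the author has in mind is easy to make concrete. Below is a minimal sketch of how annualized volatility is typically computed from daily closing prices, so Bitcoin can be put on the same footing as gold or an index. The prices here are invented for illustration, not market data:

```python
import numpy as np

def annualized_volatility(prices, trading_days=365):
    """Annualized volatility from a series of daily closing prices."""
    prices = np.asarray(prices, dtype=float)
    daily_returns = np.diff(prices) / prices[:-1]  # simple daily returns
    return daily_returns.std(ddof=1) * np.sqrt(trading_days)

# Toy closing prices (made up for illustration only):
btc = [19000, 19500, 18800, 20100, 21000, 20500, 22000]
gold = [1850, 1855, 1848, 1860, 1858, 1862, 1859]

# Crypto trades every day (365); traditional assets roughly 252 days a year.
print(f"BTC  vol: {annualized_volatility(btc):.1%}")
print(f"Gold vol: {annualized_volatility(gold, trading_days=252):.1%}")
```

Swapping in real price histories for each asset would reproduce the kind of Bitcoin-versus-gold-versus-Nasdaq comparison the author is arguing for.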
https://medium.com/swlh/tesla-and-bitcoin-the-unbeatable-way-of-dominate-planet-earth-e825e501361f
['Nuno Fabiao']
2020-12-17 16:54:46.074000+00:00
['Bitcoin', 'Asset Management', 'Tesla', 'Stock Market', 'Technology']
2,663
The Future Place
Gender Diversity in Tech is Abysmal In Australia, the gender diversity in technology is pretty appalling. In year 12 computing classes, 19% of students are female and 81% are male. In 1st year university Computer Science degrees, the stats are the same: 19% female, 81% male. In the tech workforce, 16% are female. Graduate salaries for women in technology are 14.8% less than for equally qualified men. In the STEM sector (STEM includes technology, along with science and engineering), the quit rate for women is 41%, which is more than double the 17% for men. When you look at the quit rate of women in technology specifically, it's even higher: 56%. There are many proposed reasons for women leaving technology at such an alarming rate; these are often reported as a lack of career progression and a lack of flexible work practices. However, the reason I find most jarring is that of perceived incompetence arising from conscious or unconscious bias. A study of GitHub users found that code written by female coders was accepted 78% of the time, a rate that's 4% greater than the acceptance rate for male coders. However, this is only true if the gender of the coder was unknown. A more recent study reported in the Conversation discovered there was no statistically significant difference between men's and women's coding abilities. However, women perceived themselves to be less competent than men. There seem to be some really unhelpful biases at play. So, at the moment, technology starts with fewer women, pays them less, assumes a lower level of competency, and then has half of them leave. From both a societal and industry outcomes perspective, this seems sub-optimal to me. Why do we care? The lack of gender diversity in tech is bad for women, it's bad for men, and it's bad for business. A 2017 study by the Boston Consulting Group (BCG) found that diversity increases revenue for companies.
The biggest takeaway they found was a strong and statistically significant correlation between the diversity of management teams and overall innovation. Companies that reported above-average diversity on their management teams also reported innovation revenue that was 19 percentage points higher than that of companies with below-average leadership diversity — 45% of total revenue versus just 26%. Aside from innovation being improved and profitability increasing when there are diverse people and views involved, diversity can reduce a range of risks, including stopping us from building dangerous tech. Let's think for a moment about Andrew Pole and Target's pregnancy prediction score in the US. A team from marketing at Target approached one of their data scientists and asked, "If we wanted to figure out if a customer was pregnant, even if she didn't want us to know, could you do that?" Mr Pole proceeded to synthesise customers' purchasing histories with the timeline of those purchases to give each customer a so-called pregnancy prediction score. Evidently, pregnancy is the second major life event (after leaving home for university) that determines whether a casual shopper will become a customer for life. Target turned around and put Pole's pregnancy detection model in an automated system that sent discount coupons to possibly pregnant customers. A win-win, or so Target thought. Right up until a teenager's father furiously approached Target to ask why they were sending his teenage daughter coupons that were designed for pregnant women. Were they trying to entice his daughter to get pregnant? Well, it turns out, his daughter was actually already pregnant. By analyzing the purchase dates of approximately 25 common products, the model found a set of purchase patterns that were highly correlated with pregnancy status and expected due-date.
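Target has never published the model, but the description above (purchase histories plus their timing, roughly 25 products, a per-customer score) is consistent with a simple weighted logistic score. The sketch below is purely illustrative; the product names, weights, and bias are all invented, not Target's actual features:

```python
import math

# Hypothetical weights that a real system would learn from purchase histories.
WEIGHTS = {
    "unscented_lotion": 1.2,
    "large_bag_cotton_balls": 0.8,
    "calcium_supplement": 1.0,
    "zinc_supplement": 0.9,
    "scent_free_soap": 0.7,
}
BIAS = -3.0  # most shoppers are not pregnant, so the baseline score is low

def pregnancy_score(basket):
    """Logistic score in (0, 1) from a set of purchased product keys."""
    z = BIAS + sum(WEIGHTS.get(item, 0.0) for item in basket)
    return 1.0 / (1.0 + math.exp(-z))

casual = pregnancy_score({"shampoo", "batteries"})          # stays near baseline
flagged = pregnancy_score({"unscented_lotion", "calcium_supplement",
                           "zinc_supplement", "large_bag_cotton_balls"})
```

A basket of neutral items leaves the score near the low baseline, while the co-occurrence of several weighted items pushes it above the coupon-sending threshold — which is exactly how a teenager's ordinary-looking purchases could quietly flag her to an automated mailer.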
Target quickly found itself in a lose-lose situation, where it had lost its customers' trust and was entrenched in a brand-destructive PR disaster. But the teenager lost far more. She lost control over private and personal information related to her own body and her own health. What if…. …More Women Were Involved Imagine for a moment that women had been involved in building a maps application for your phone. What would that look like? What could a really great version of a maps application actually include? · You could choose a path that had excellent lighting for walking at night, · You could choose a path that was suitable for a pram, · You could be taken to a door at your designated address which you can actually get through with a pram. The beauty of considering these things is that they don't just benefit a small sub-group of customers, they benefit everyone. Being able to choose a well-lit route to walk at night is a great feature to include. Broader Diversity Whilst gender diversity in technology is abysmal, and is only improving at a glacial rate, let me tell you, as a Director for Women Who Code Melbourne, this lack of gender diversity isn't even the biggest problem in tech. Where are the people from: · Different ethnic groups, · Different socio-economic backgrounds, · Different accessibility experiences? …how bad is this lack of diversity? We don't even know! Because we don't track it or measure it in a meaningful way in Australia. America has a few statistics available, and it makes for very depressing reading. I cannot say for certain that we are the same, but my gut feeling is that if we are not even measuring these things, then it's not a completely unreasonable assumption. Why is Diversity Important? Why do we need mixed backgrounds and life experiences? Well, because so far, we've created some complete disasters: Joy Buolamwini, a Ghanaian-American graduate student at MIT, was working on a class project using facial-analysis software.
But she came across a major problem! The software couldn't "see" Buolamwini's dark-skinned face (by "seeing", I mean it couldn't detect a face in the image). She tried many workarounds, such as wearing glasses and changing her hair. But the only thing that worked was when she put a white mask over her face. Why is this? Apparently she deviates from the average face too much. The data set on which many of the facial-recognition algorithms are tested contains 78% male faces and 84% white faces. Darker-skinned women were up to 44 times more likely to be misclassified than lighter-skinned males. It's no wonder that the software failed to detect Buolamwini's face: both the training data and the benchmarking data relegated women of colour to a tiny fraction of the overall dataset. And what about the COMPAS recidivism risk algorithm in America? COMPAS, or Correctional Offender Management Profiling for Alternative Sanctions, suggests it can predict a defendant's risk of committing another crime. It works through a proprietary algorithm that considers some of the answers to a 137-item questionnaire and assigns each defendant a score from 1–10, 10 being most likely to reoffend. The news organisation ProPublica looked into one of these "recidivism risk" algorithms, being used in Florida during sentencing by judges, and found some unsettling outcomes. You can read more here. There are entire books on these and many other disasters; there is a reading list at the end. It seems we have allowed some dangerous technologies to be built by a very homogenous group. Can we just blame tech-bros? Firstly, what is a tech-bro? A quick online search suggests that it can be defined as: A subculture of mostly male, mostly white, American technological entrepreneurs and workers in and around Silicon Valley, exemplifying a hyper-technocratic, libertarian, meritocratic boys' club. Something between alpha-male tendencies, nouveau riche elitism and the privileged arrogance of the young.
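Figures like "44 times more likely to be misclassified" come from comparing per-subgroup error rates, an audit anyone can run on a classifier's evaluation results. Here is a toy sketch with invented counts (not the real study's numbers) showing how the disparity ratio is computed:

```python
def error_rate(errors, total):
    """Fraction of evaluation examples the classifier got wrong."""
    return errors / total

# Hypothetical evaluation counts per subgroup, chosen only for illustration.
results = {
    "lighter_skinned_men":  {"errors": 8,   "total": 1000},
    "darker_skinned_women": {"errors": 347, "total": 1000},
}

rates = {group: error_rate(**counts) for group, counts in results.items()}
disparity = rates["darker_skinned_women"] / rates["lighter_skinned_men"]
print(f"disparity ratio: {disparity:.1f}x")
```

The point of the exercise is that an aggregate accuracy number can look excellent while one subgroup quietly absorbs almost all of the errors; only a per-group breakdown exposes it.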
I don't believe that all men in tech are tech-bros. I do, however, think that the tech-bro sub-culture does exist, and it perpetuates a certain deficiency. Catherine D'Ignazio and Lauren F. Klein have proposed that this deficiency is based on privilege hazard. Privilege isn't just reserved for tech-bros; many of us have it in one shape or form. Privilege hazard comes from the ignorance of being in the dominant groups. When people are from dominant groups, those perspectives come to exert an oversized influence on decisions being made — to the exclusion of other identities and perspectives. A downstream effect of privilege hazard is that our products are at best poor, and at worst dangerous. How do we overcome privilege hazard? Well, it's not that hard to solve — we bring more actors into the play. What if… …. We brought everyone in? What if we design a process, a new process, in which many different actors can participate? What if we changed our mindset to truly value the opinions and experiences from many different perspectives, not just the few who learned to write code well? Our tech products could be built with feedback and collaboration from many different experiences. We need to actively and deliberately invite other perspectives into the tech development process. We could include knowledge from people with: · Technical expertise, · Lived experience, · Domain expertise, · Community history. Let's go back to our mapping app; what other features would be asked for if we talked to more people? Perhaps: Pram and wheelchair routes, Bike paths, Routes that combine a bike path with public transport you can take your bike on, The option to select a sightseeing route, What about the elevation of a route, Help to find toilets (clean, accessible toilets, or those with baby change facilities), What about the option to choose a well-lit route, Nearby rubbish and/or recycling bins. What if…. …We Changed Our Processes?
We could hold our teams and organisations to account by requiring products and features to produce documentation specifying these things: Who was on the team? What were the points of tension? What caused the disagreements? Which hypotheses were pursued, but were ultimately proved false? Did the tech team talk to end users, domain experts and communities? We could make this an enforced part of our processes. Not as a punishment, but as a 'checklist', just like a doctor going into surgery would do. This is important, because — just like those in the medical profession — the actions we take and the choices we make go on to impact peoples' lives. What if… … We Changed Our Tooling? There are many tools on the market that enable feedback, contemplation and collaboration. Ethics Litmus Tests are a great way to start this journey. They are a set of cards with provocations to generate discussion when you're designing a product or feature. You start by describing the problem scenario, or motivating concern. Usually these are sourced from current or recent work, or they can be a recurring niggle. They can be very broad, for example: "I'm not sure we have thought through the consequences" Or they could be quite specific, such as "What if our automated decision algorithm is ageist?" Then you work through the card provocations to discuss it. The Preview Links here at Linc also help. They allow organisations to easily share product and feature development as it is happening. Preview links provide you a shareable URL for every commit against every backend… Think that through for a moment: what could you do? Who could you share with? Who could you get feedback from? How great could your products become? Tech can be full of roadblocks that stop people from being able to become actively involved in the product development process. As a community, we need to look at ways to reduce these and be able to share the products we're building as we build them.
Just because someone doesn't know how to run your code on their own computer while working from home doesn't mean they don't have valid knowledge and feedback to provide. Feedback that could help build a better product and also reduce the amount of developer re-work required. With Linc's Preview Links you can share your products and features with the design team, management, your CEO and your customers, and not just for UAT, but along the journey. So, when you finish reading this, think about the tech you create. Ask yourself: 1. Who are the people you're involving? 2. What are your processes like? 3. Could your tooling be better? The future isn't a place we get to go. It's a place we create.
https://medium.com/@LincBot/the-future-place-53533bda25e7
[]
2020-11-04 04:23:14.670000+00:00
['Diversity And Inclusion', 'Web Development', 'Diversity In Tech', 'Technology']
2,664
New Safein CTO Promises Efficiency, Simplicity, and Security
Simply great: we finally have a new CTO! A graduate of the University of Cambridge, Edvard Poliakov joins our team to ensure that our code is simple, our processes are efficient, and our security posture is exemplary. As a mathematician, Edvard enjoys solving problems. He constantly analyses complex domains, trying to understand them and make use of them. He believes that simplicity is the key to any software project: from code that is easy to read for a developer to a UI that is intuitive for the end user. "What I like most about the change that software brings to our lives is the convergence of various tools. We have an app for all the music in the world. A single decentralized authentication protocol. All the IT infrastructure you might need from a single cloud provider. The time has come for a decentralized identity management platform," said Edvard. Our co-founder Vladas is especially happy to welcome Edvard to our team. They studied at the University of Cambridge together. "I know that Edvard is extremely talented, hard-working, and always looking for challenges. We are putting together a team that fits together not only in terms of technical competences but also personalities, and is ready to deliver results immediately," noted Vladas. Edvard was a tech lead at Euromonitor International and a software developer at Danske Bank, Softwire, and other companies. Welcome! Connect with Edvard on LinkedIn.
https://medium.com/safeincom/new-safein-cto-promises-efficiency-simplicity-and-security-7f03dc824d2c
['Audrius Slažinskas']
2018-05-09 06:48:19.046000+00:00
['Chief Technology Officer', 'Code', 'Team', 'Executives', 'Startup']
2,665
Top Contract Manufacturing Companies
Contract Manufacturing Companies Firms across industries aim to focus on their core specialization by contracting manufacturing experts for production. Emerging as an important element of the business strategy for several industries, contract manufacturing neatly fits into this scenario. Contract manufacturers bring advantages like flexibility, vertical-specific expertise, and strong supply chain networks that translate to reduced cost for the contracting company. From pharma to electronics to energy, all major industries today are engaging contract manufacturers in varying degrees to accomplish their goals quickly. Contract manufacturing service specialists play an instrumental role in enabling a company to realize cost savings, top quality, and improved time to market. With hands-on experience and advanced skills, contract manufacturers accelerate the pace of production in a streamlined and quality-controlled manner while allowing their clients to realize economies of scale. For companies looking to develop a specific part of the product under development, contract manufacturing lowers labor costs significantly. Companies can maintain a lean workforce with contract manufacturing taking care of the production of the proposed part, at scale. While companies can focus on innovation to drive their businesses, contract manufacturers simultaneously and effectively manufacture their products to help them stay one step ahead of the competition. Recognizing the importance of speed to market and quality, contract manufacturers bring products from concept to commercialization at warp speed. The contract manufacturing sector is driven by the growth of emerging companies/startups with limited manufacturing capabilities, which leads them to consolidate and divest their internal manufacturing capacity and move toward an outsourcing model for non-core activities.
As brands are seeking new ways to keep up with market demands, outsourcing part of the production requirement to flexible and responsive contract manufacturers can turn out to be a game-changing decision for them. With reduced risk, better quality, and enhanced scalability, brands can decide to expand their product lines for the long term. Some new-age contract manufacturers employ not only experts who have a robust understanding of the intricacies involved in the production of a specific product but also ones with strong knowledge of the best practices to work around predicaments. They follow standard operating procedures and perform due diligence to make sure the item under production meets the required compliance and quality standards. They also focus on continuous training and education to make sure their workforce has the material handling and proper equipment maintenance know-how. To help CIOs navigate through the list of contract manufacturing service providers, our distinguished selection panel, comprising CEOs, CIOs, VCs, industry analysts, and the editorial board of Manufacturing Technology Insights, narrowed down the top 10 contract manufacturing service providers that exhibit competence in delivering robust and efficient solutions and services. We present to you Manufacturing Technology Insights's "Top 10 Contract Manufacturing Service Providers 2020." Top Contract Manufacturing Service Companies A high-value contract manufacturer of precision metal products that delivers operational excellence by integrating people, processes and production technology in ways that leave competitors scratching their heads. Dalsin's manufacturing services combine collaborative design assistance, design-for-manufacture optimization, high-velocity manufacturing, precise planning and production control, and supply chain management.
The Minnesota-based contract manufacturer is also the OEM for its award-winning Memphis Wood Fire Grills, a sophisticated outdoor cooking appliance, which was designed, manufactured, assembled, and tested by Dalsin. The company has won over the trust of a number of OEMs thanks to its world-class facility, which encompasses state-of-the-art technology and tools www.dalsinind.com An electronics manufacturing services provider, Pennatronics delivers high-speed surface-mount and through-hole assembled circuit boards and/or subassemblies in any volume. The company's rapid-response methodologies ensure quick turnaround and 24/7 production capability. Pennatronics is committed to adhering to industry standards and controlling internal processes to ensure every aspect of its operation is focused on the customer "experiencing excellence." The electronics manufacturing services provider continuously invests in people and equipment to stay on top of the ever-changing industry. As a custom contract manufacturer, Pennatronics has the flexibility and scalability to modify its physical facility, equipment and personnel to ideally suit any OEM production requirements pennatronics.com Versa Electronics is an electronics manufacturing services provider that is proud of its client relationships with customers and suppliers alike. The firm has grown its business while maintaining long-term customers because of its exceptional level of service. Versa Electronics measures and reports on performance metrics on a monthly basis. The company holds materials suppliers to the same high standards of quality and delivery to which the management holds itself. The foundation of the firm's engagement is the association between the people that comprise Versa's respective organizations.
Versa Electronics puts the highest value on relationships, close cooperation and business partnerships versae.com Established in 1975 as a precision sheet metal facility, Weldflow has become a trusted provider and world leader in the field of contract manufacturing. The company's facility has been modernized, and management expanded, to accommodate the constant changes experienced within the industry. Weldflow is led by a team of talented professionals, whose expertise and enthusiasm drive the company's performance and growth. The company's expertise lies in its innovation in the sheet metal industry, creating new ideas and designs in various shapes and forms, building prototypes, and doing product development for existing and potential new customers weldflowmetal.com As a contract manufacturing service provider, Coghlin Companies provides "Concept to Commercialization" services. These include product engineering, contract manufacturing, global fulfillment, and field service solutions to a diversified group of capital equipment innovators, device manufacturers, and venture-backed technology companies throughout the U.S. Coghlin Companies subsidiaries include Columbia Tech, which provides turnkey manufacturing services to a diverse customer base, including OEMs in the life-science, pharmaceutical, bio-discovery, alternative energy, semiconductor, power management, LED, medical, data storage, homeland security and molecular imaging industries. It also includes Cogmedix, an FDA-registered medical and clinical device contract manufacturer For more than 39 years, Compass has helped leading transportation, semiconductor, industrial, entertainment, and energy companies fulfill their promise with high-quality and reliable cables, harnesses, assemblies, and components. Compass Made is headquartered in Fremont, California. The company has added manufacturing facilities in Deming, New Mexico and in Mexico.
The seeds of Compass Made were sown by Jack Maxwell when he began working in Silicon Valley. Soon he launched the company to provide clients with reliable and high-quality products, built with skill and integrity. They unite around the corporate values of Integrity, Respect, Accountability, Innovation, Improvement, and Customer Focus Global Precision Products, LLC is an AS9100D certified, ISO 9001:2015 certified, ITAR-registered contract manufacturer of tight-tolerance precision machined products committed to machining technology, quality management and continuous improvement. The client's complex and demanding contract manufacturing requirements are addressed by carefully listening, then putting plans into action to meet or exceed their expectations and deliver the products to specification on time. As a contract manufacturing service provider, Global Precision Products is committed to staying competitive by investing in the latest cutting-edge technology and utilizing it in the most cost-efficient manner IMET Electronics Corporation is leading the way in US-based custom electronics manufacturing, including contract manufacturing, industrial design, mechanical design, custom PCB assembly, prototyping, and electronics engineering. Through the company's TemitroniK brand, they also manufacture oversized LED boards of any shape and size, from 2ft wide to 5ft long, along with a brand new LED board product known as TEMIBOARD. Their speciality of connecting expertise, culture, and resources has earned the company a reputation for doing award-winning work across multiple markets, which include medical, industrial, military, automotive, aerospace, IoT, and non-commodity consumer electronics products Known as a leading contract manufacturer of durable goods electronics, Kimball Electronics serves a variety of industries on a global scale.
The touch of the company is felt throughout daily life via the markets it serves: Automotive, Industrial, Medical and Public Safety. Kimball Electronics is dedicated to a high-performance culture that values personal and organizational commitment to quality, reliability, value, speed, and ethical behavior. Their employees are part of a corporate culture that builds success for customers while enabling employees to share in the company's success through personal, professional, and financial growth NEO Tech's business is all about converting the client's innovative product technology into engineered products, connecting users with the most capable supply base, manufacturing products with care, and delivering them on time. For over 40 years, NEO Tech has been a leading provider of electronic solutions for brand-name original equipment manufacturers (OEMs). This experience institutes a solid groundwork for the wide range of manufacturing technology and supply chain experience the company delivers today. We set the standard for engineering and manufacturing services in the Industrial, Medical and Mil/Aerospace markets with an emphasis on flexibility and customer service Follow us: Manufacturing Technology Insights | Twitter Manufacturing Technology Insights | LinkedIn
https://medium.com/@manufacturingtechnologyinsight/top-contract-manufacturing-companies-bc18a45898a4
['Manufacturing Technology Insights']
2020-03-06 05:24:10.219000+00:00
['Startup', 'Manufacturing', 'Solutions', 'Contract Manufacturing', 'Technology']
2,666
Getting Added to the Engineers Private Slack Channel; the Biggest Honor for Any Product Manager
Originally published on www.Ben-Staples.com The Engineer to Product Manager relationship is probably the most important relationship a good Product Manager can have. Sure, there are many important stakeholders to keep an eye on: leadership, marketing, other tech teams with high dependencies or who own services that are upstream from yours, and more. However, engineering should be your top focus when investing in relationships as a Product Manager. Above and beyond just being a good, fun human to work with, significant effort must be invested in getting to know your engineering partners to understand their preferences, as well as strengths and weaknesses. A good Product and Engineering relationship can result in a few things: Higher quality work. Without a strong partnership, there is no trust. And without trust between Engineering and Product, your team can't take as many risks, tech debt can be ignored or prioritized too much, and you can't have healthy arguments to drive to the best possible solution for the customer. Positive impact that you can drive to the end customer. A strong partnership between Engineering and Product will over time result in more value being driven to the end customer. Constantly questioning whether or not the team should be doing what they are doing is the primary responsibility of the Product Manager, but on healthier teams everyone questions it. Increased fun and enjoyment. Besides the time you spend asleep in bed, you spend the majority of your time at work. That is time you should enjoy! Unhealthy ways of working can breed stress, a lack of sleep, and eventually health problems! Photo by Park Troopers on Unsplash So how do you know what kind of working relationship you have with your engineers? The private Slack room. I'm assuming your company uses Slack, but even if you don't, any chat program will have a private or unlisted rooms feature.
All engineering teams that are using any sort of chat will have a private room where only engineers are allowed. Of course the team will have a completely separate Slack room that includes product, design, and any other people that might be involved in the daily scrum. But while engineers will discuss "above board" things in that public Slack room with everyone there, the real work (venting, or intense frank discussion) happens in the private Slack channel. Truth: Know that every engineering team will have a private Slack channel. It makes sense! They want to have a safe place, and for many new-to-team product managers, they might not feel that is possible in more public rooms. Here are the stages of trust built between Product and Engineering that you can benchmark yourself against and strive for as a product manager looking to build a strong working relationship with your engineers: Photo by Clique Images on Unsplash Stage 1: You have no awareness of a private engineer Slack room and it is never mentioned Stage 2: An engineer drops a reference to the Slack channel in some way, whether it is during a meeting, an offhand chat, etc. Stage 3: Engineers will confirm that a private Slack channel actually exists, but no way in hell would you as a Product Manager be invited Stage 4: Engineers start to joke that maybe if you do XYZ they might just let you into their Slack channel Stage 5: YOU GET THE INVITE. It will happen randomly; maybe the engineers talked beforehand to align on whether or not you should get in, maybe not. There is no ceremony, no fireworks, but you have been brought into the team at a whole new level It is critical to know that when you get brought into the "private" Slack channel, there will already be another, even more private Slack channel created for just the engineers. This is always the case! Don't be offended. Engineering and Product will always have differences and there will always be a need to align independently.
But once you make it in, be proud! You have built a whole new level of trust between you and your engineers. This will bring a higher level of work, transparency, and bluntness to the table that will increase the quality of work your team produces, which is a great thing.

Want a shortcut? Why not create your own private room and invite the engineers? WRONG. The key here is not to have a method to communicate; you already have that in public Slack channels. What this all boils down to is a level of trust. The private chat room invite is a clear indicator you have gotten to the next level. For something like this, you MUST wait to be invited. Not there yet? Maybe your engineers haven't even mentioned the existence of that room. Don't worry. The last thing you want to do is pressure or investigate. Let things happen naturally, but know that if you're looking for a barometer on how healthy the Product-to-Engineering relationship is, think about your Slack rooms.

About the Author: Ben Staples has over 7 years of product management and product marketing eCommerce experience. He is currently employed at Nordstrom as a Senior Product Manager responsible for their product pages on Nordstrom.com. Previously, Ben was a Senior Product Manager for Trunk Club responsible for their iOS and Android apps. Ben started his Product career as a Product Manager for Vistaprint, where he was responsible for their cart and checkout experiences. Before leaving Vistaprint, Ben founded the Vistaprint Product Management guild with over 40 members. Learn more at www.Ben-Staples.com. I do product management consulting! Interested in finding out more? Want to get notified when my next product article comes out? Interested in getting into Product Management but don't know how? Want even more book recommendations?!
https://medium.com/swlh/getting-added-to-the-engineers-private-slack-channel-the-biggest-honor-for-any-product-manager-3a2de30e6034
['Ben Staples']
2020-12-14 21:18:02.011000+00:00
['Product', 'Product Management', 'Tech', 'Technology', 'Communication']
2,667
React Patterns — Documentation and Reusability
React is a popular library for creating web apps and mobile apps. In this article, we'll look at how to document components automatically.

React Docgen. The React Docgen package helps us create documentation for our components automatically. We can install it by running: npm install --global react-docgen. Then we can run: react-docgen App.js if we have a button component stored in App.js. Then we get a JSON object that has the prop types as the output. If we add a comment to the top of our component, we also get a description field with that comment as the value.

Reusable Components. To create reusable components, we pull parts that are reusable outside a component. For instance, if we're getting something from an API, we can write: import React, { useState, useEffect } from "react"; const fetchPerson = async name => { const res = await fetch(`https://api.agify.io/?name=${name}`); return await res.json(); }; export default function App() { const [data, setData] = useState({}); const onLoad = async name => { const personData = await fetchPerson(name); setData(personData); }; useEffect(() => { onLoad("michael"); }, []); return <p>{data.name}</p>; }

We moved the part that can be shared between components into its own function. The fetchPerson function lets us fetch the data. Then the parts that have to be in the component are in App. We have the onLoad function to set the data state, and useEffect calls it when the component mounts. If we remember not to repeat ourselves, then we'll divide our code into reusable pieces. If we create a list, then we should divide our list entries into separate components. For instance, we can write: import React, { useState } from "react"; const Item = ({ name }) => <p>{name}</p>; export default function App() { const [data] = useState(["foo", "bar"]); return ( <div> {data.map((d, i) => ( <Item name={d} key={i} /> ))} </div> ); }

We have an Item component that renders the entries' data.
Then we call map and return the Item in the callback to render the list with Item. We pass in the name prop with our value. Also, we have the key prop so React can keep track of the values properly. This way, we can change the Item component as necessary without affecting App. We can add logic, state, etc. as we need to. We can also use Item for other purposes if we wish.

Living Style Guides. If we create a style guide, then there's less chance that a new developer will duplicate what already exists. They can find out what's available and then use the components that are already built instead of building new ones. Creating a style guide manually is hard, so there are apps that let us do it more easily. We can do it with apps like Bit or Storybook. React Storybook lets us extract components and put them in a dynamic style guide. With Bit, we can upload components to the Bit web app and import them as a package into other apps. Also, we can preview the components that are uploaded.

Conclusion. We can use the React Docgen package to create documentation for our React components. To make reusable components, we should extract them into small pieces and pass data via props from parent to child. The logic that doesn't change the state can also be extracted. To prevent accidental duplication, we can create a style guide with apps like Bit or Storybook.

JavaScript In Plain English: Did you know that we have four publications? Find them all via plainenglish.io — show some love by following our publications and subscribing to our YouTube channel!
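To make the React Docgen section above concrete: for a small component with documented props, the emitted JSON looks roughly like the following. The exact fields vary by react-docgen version, and the component and prop names here are invented for illustration:

```json
{
  "description": "A clickable button",
  "displayName": "Button",
  "methods": [],
  "props": {
    "label": {
      "type": { "name": "string" },
      "required": true,
      "description": "Text shown inside the button"
    }
  }
}
```

The description field is filled from the comment above the component, and each entry under props mirrors the component's declared prop types.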
https://medium.com/javascript-in-plain-english/react-patterns-documentation-and-reusability-3c7c4a383cd6
['John Au-Yeung']
2020-06-22 16:16:22.273000+00:00
['JavaScript', 'Software Development', 'Programming', 'Technology', 'Web Development']
2,668
Covenant with the devil — Technology versus Human (Part 1)
By Akintola Ashraf Akintayo

Evolution (not the monkey-to-human type; another article entirely) is a natural phenomenon that man has been conditioned to undergo. The gradual change in all aspects of our life (education, politics, food, culture, science, technology, government, etc.) is the very essence of our existence. We just cannot afford to stay in a single position for long without changing. Biologically, we see that in the form of a fetus, to a baby, to adolescence, adulthood, and our eventual death. Even at death, our bodies decay, become manure for the soil, and from there plants grow and benefit those alive, and the cycle continues. Our souls, on the other hand, set out on another sojourn of eternity.

Politically, we have witnessed the stark difference between the style of governance our forefathers deployed and what governance is all about in our present time and age. Socially, no Englishman even wants to dress the way it used to be in vogue in the Shakespearean or Victorian era, even though we consider those times classic. The same can be said for all societies of old. These changes can be seen in our culture, education, economy, etc. Take a man from the 1600s back from death and let him spend a day in our present world. I can tell you he would die again from madness, for his mind would not be able to contain all he would see. The same thing would happen to us too if we were to be brought back from death in the year 3000 (i.e., if the world has not ended by then).

As this remarkable progress in human history is recorded and new frontiers are covered, new challenges spring up, the likes of which have never been seen before. Before the advent of AK-47 rifles, for example, it was not easy to kill dozens of people within a minute; with the gun, lives became easier to waste. Before the advent of nuclear weapons, it was never heard of that billions could die from just a single blast.
Do not get me wrong: technology brought cures to diseases like malaria and typhoid, as well as vaccines for smallpox and chickenpox. But the question can be asked: has the quality of our life bettered that of those who lived in the past? Life expectancies have dropped significantly, and I have heard that in the past, if you died at 70 years, you died young. Hence, as we move forward in our history as humans, it is imperative by now that we gear up for new challenges that can bring us to our knees. Thus, as much as technology has made us better humans, we should also reflect on the negative sides, on how our lot has not been "bettered" by our genius. So, what if technology is our covenant with the devil, one that took away our lives and gave us a livelihood?

Technology and Privacy. There is virtually no one who does not possess a social media account or an email, or who has not, for one thing or the other, surfed the internet for a reason. Little do we know that every keystroke we press on our keyboard can be traced back to us. In short, you can easily be hacked, and not just the technical hacking, but the emotional, psychological and, to some extent, spiritual kind. Your online behavior, likes, and habits can be studied, and you can be predicted with very great accuracy. You will have noticed how some of the ads on the sites you visit seem to be a perfect match to your interests, and you think that is just a coincidence. You can easily observe this on YouTube, Instagram and Facebook. For the sake of this discussion, I would like to point out that the richest man on this planet, Jeff Bezos, the founder of Amazon.com, became rich from this same data trade in consumer behavior. Let me put this in perspective for you to know how rich he is. He paid out a humongous $38bn in a divorce settlement to his ex-wife, and, dear readers, he did not even lose the richest-man-on-the-planet position; in fact, he kept getting richer.
The good news is that how he got richer is not a secret. I would refer you to the PBS documentary titled "Amazon Empire: The Rise and Reign of Jeff Bezos." Please note that this is not meant to smear his image or all the great works he has done, but to point out the pitfalls of the technology we all love. It has been held that Facebook was the main instrument that assisted in bringing Donald Trump into power in 2016. So, if you still doubt the power of tech in influencing your life, then it is time to have a rethink.

Methods such as data scraping, which involves tracking people's online activities and harvesting personal data and conversations from social media, job websites and online forums, are some of the ways your privacy is being violated by these tech giants. "One strong case of serious online privacy violation took place in May 2011. Nielsen Co., a media-research company, was caught scraping every message off PatientsLikeMe's online forums, where people talk about their emotional problems in what they think is a safe, private environment. As you can imagine, a lot of people felt their web privacy was violated."

To make it clearer, it is important to point out that the internet has become the keeper of all our secrets, dark or not. Presently, it is much easier for us to ask Google the questions that bother us the most, the ones we prefer not to share with those closest to us. Little do we know that all these are to our peril. In fact, once we search on the internet, it is as good as telling the whole world. For Facebook, the "leakage" starts during the app's installation process: you are prompted to accept certain terms (which, if you do not accept, you would not be allowed to benefit from the services of the app), and once you click "Allow", the application receives an "access token". Some Facebook apps are leaking these access tokens to advertisers, granting them access to personal-profile data such as chat logs and photos.
However, no disclaimer is shown informing you that your data is being transferred to third parties. Thus your online privacy and safety are put at risk. I have not come across any app that would allow you to disagree with its terms and conditions and still let you continue with the service. This goes to show you that your privacy matters most. It is indeed the currency you spend, and also a vital weapon in the hands of your enemies.

Bullguard.com wrote brilliantly on how cookies are also being used: "We all use the 'Like', 'Tweet', '+1', and other buttons to share content with our friends. But these social widgets are also valuable tracking tools for social media websites. They work with cookies — small files stored on a computer that enable tracking the user across different sites — that social websites place in browsers when you create an account or log in, and together they allow the social websites to recognize you on any site that uses these widgets. Thus, your interests and online shopping behaviour can be easily tracked, and your internet privacy rudely invaded. And things get worse. Other social websites allow companies to place within ads cookies and beacons — pieces of software that can track you and gather information about what you are doing on a page. Note: these tracking tools are widely used online, but mostly on websites dedicated to kids and teens, which raises a huge children's online privacy concern. An example of a teen-dedicated site that stores lots of tracking cookies is Snazzyspace.com."

Alexa, the popular home listening device by Amazon, is also a danger brought near: it relays whatever is being discussed at home to a third party. Many people use it without realizing the danger. In short, a lot of us are just walking around naked without realizing it. If you still do not realize what your privacy means, maybe the next few lines will drive it home for you. As an individual, your privacy is your most cherished treasure.
That is why God sanctified the union of a man and a woman: you have access to each other's most cherished possession. That is why it is also important to guard your thoughts; God looks at the intention behind every action. Our privacy is sacred. That is why we are doomed by giving out this important part of us. It is a reminder that there is respect behind mystery.

To be continued! Up next (God willing): Technology and Health; Technology and Agriculture; Technology and Terrorism.
https://medium.com/@ashrafakintola/covenant-with-the-devil-technology-versus-human-part-1-e2eb6cdf7222
['Ashraf Akintola']
2020-12-17 05:59:57.100000+00:00
['Technology', 'Human Rights', 'Privacy', 'Health']
2,669
The Most Important Race in Tech
The difference between classical and quantum computers can be represented by Google's claim to quantum supremacy in 2019. To achieve this claim of quantum supremacy, one of Google's 53-qubit quantum computers (called 'Sycamore') was able to do a calculation in just over 3 minutes when it would have taken even the world's most powerful classical computer over 10,000 years to process the same calculation. There has been some dispute over this claim by rival company IBM, which says that the calculation would have taken a matter of days and not the tens of thousands of years that Google claimed. But the main idea behind this computing transformation persists. Quantum tech aims to be faster, more efficient, and revolutionary in ways its classical counterparts never will be. And it's able to do this because of one advantage in particular.

When classical computers store their information, it's either as a 1 or a 0. These 1s and 0s we call 'bits'. But quantum computers can leverage a property known as superposition, where their quantum bits ('qubits') can be a 1, a 0, both a 1 and a 0 at the same time, or some combination of both numbers. By taking advantage of the quantum trait of superposition, these new computers can make great computational strides where classical technology just isn't enough. The fact that subatomic particles like electrons exist in a superposition of states makes it difficult for classical computers to simulate them. This ability to simulate particles and their quantum properties is how quantum computers succeed where classical computers fail.

A visualization of classical computer bits and the larger range of qubits. Image by Pranith Hengavalli.

Yet despite their computational prowess and their promises for the future, quantum computers are really quite fickle machines. Their chips function only at temperatures close to absolute zero (−459.67 °F or −273.15 °C). The focus for qubits has been, up until now, on small superconducting loops.
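The superposition property described above can be stated precisely. In standard notation (not from the article itself), a qubit's state is a weighted combination of the two basis states:

```latex
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
```

Here |α|² and |β|² are the probabilities of reading out a 0 or a 1 when the qubit is measured; a classical bit is just the special case where one of the two amplitudes is zero.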
The oscillations of these loops mean that they are systems with two possible quantum states, making them a viable basis for a qubit. But attention has recently turned to trapped-ion systems, a technology that formed the basis of quantum circuits before superconducting loops were in use. Trapped-ion systems use the energy levels of ions trapped in electric fields to form the computer's qubits. The quantum states of the ions last longer than those of their superconducting counterparts. And where superconducting qubits only interact with other nearby qubits, the ions' interactions are widespread and allow them to run certain complex calculations more easily. They are, however, slower at these interactions, something which may hurt their ability to correct errors in real time. Within the world of quantum computing, there is a race between materials as much as there is a race between countries.
https://medium.com/predict/the-most-important-race-in-tech-4c175a541266
['Ella Alderson']
2020-12-03 21:05:17.792000+00:00
['Technology', 'Science', 'Tech', 'Politics', 'Future']
2,670
Be Rich Without Making Money Your Life
Be Rich Without Making Money Your Life People don’t buy things or products. “Consumers buy products whose advertising promises them value for money, beauty, nutrition, relief from suffering, social status and so on.” The idea is simple, the underlying message profound, and David Ogilvy built a marketing legacy based on insights like this. Considered the “Father of Advertising,” his teams were behind some of the most effective ad campaigns of the 20th century. If you had something you wanted to sell to the public, this was a man who could get you consistent results. The keyword in his pithy statement, however, isn’t product or even value. People buy products, sure, and they hope these products will provide value to them in whatever way they consider meaningful or important, but these are surface-level behaviors and assumptions driven by a deeper abstraction. That abstraction is promise. Consumption is about instigating change towards the future, a change that offers something better than the status quo. We don’t buy things — we buy the promise of an improved identity or experience, a version of life that tastes, perhaps, a little sweeter. But this is all still from the perspective of the marketer. To them, a promise is simply a narrative that will do the job, and that job is sales. Sometimes, their promise is honest. Sometimes, it’s not. Either way, they are trying to fit the puzzle into a small gap that is open in our lives. The really interesting question is: What is that gap? What do we want when we spend money? Everybody has their own goals, their own value systems, their own motivations, and it’s almost impossible to generalize human behavior at that level because, quite often, we ourselves don’t consciously know what they are. Once that is accounted for, however; once we understand that people have their own blueprint for their own journey in life, the landscape becomes a little more clear. 
Beneath it all, the promise that we want upheld when we buy things is the promise that whatever it is that we buy will reduce friction in our lives. Money, in a way, is our way of solving a problem, and that problem is generally about the things that make it difficult for us to live life friction-free — a life of unconstrained self-expression. Steve Jobs once famously called the computer a bicycle for the mind. He had read a study about how efficiently different animals move in space, and that study had come to the conclusion that humans are one of the less efficient movers in the animal kingdom. But, somewhere, someone else had another idea, and they made a small change to that study. Given that humans are tool-makers, they decided to measure the movement of a human on a bicycle. Of course, with the leverage of technology, our efficiency skyrockets, far beyond anything in the natural world. As a bicycle for the mind, the computer reduces the friction in our mind, unleashing unbounded potential. All products or experiences that we spend money on do the same thing. They promise to make something difficult easy so we can better focus on the things that truly matter to us. What gives money its power isn’t some inherent quality in itself, but rather, it’s the leverage it provides in purchasing things that make our own lives more fluid, more coherent with the things we want to do to become better versions of ourselves. Marketing, of course, is an essential part of this process, and when done ethically, it is just as much of an art as it is a science. That said, one of the insidious things about how marketing is often done in the age of mass media is that it has been turned into a cold science. 
Marketers and advertisers have learned all sorts of tricks about how the human mind works, the biases it harbors, and the primal drives that shape our desires, and they have become comfortable targeting our doubts and our insecurities to make us buy things we don't truly want or need. And this is done at such a deep, unconscious level that we often don't realize it is happening to us. The fact that money is an instrument of power and that we are all inherently social creatures who can't help but compete for that power also doesn't help. While, yes, the underlying motivation is to buy things to reduce friction in life, many of our consumption choices are confused because the competitive nature of the game we play in society ends up blurring our own preferences with those of others around us. Perhaps it's true that you bought the latest Louis Vuitton collection because of its superior quality, or because they are currently at the edge of the fashion world and that's what you value and how you like to express yourself, but the truth is that for many people, these decisions to consume are more about signaling perceived status. They don't actually reduce friction — they just trick you into thinking they do. When thought of like this, the idea of money can be decoupled from the idea of living what could be considered a rich, satisfying life. And when the goal becomes to live a rich, satisfying life, then the pull of manipulative marketing campaigns also loses its grip over the mind. At its core, there are two kinds of friction that stop us from living the kind of life that is meaningful and interesting relative to who we are and where we want to go: psychological friction and environmental friction. Psychological friction comes from all of the things in our mind that relate to who we are and how we connect to other people. It's what you worry about. It's what you desire.
When it comes to manipulative marketing tactics, this is the kind of friction that they promise to solve. If only you had a nicer car, you would be satisfied. If only you identified with this label, and paid this group of people, your troubles would melt away. And sometimes, consumption on this level works. But usually, it only works temporarily because the underlying worry and the underlying desire are still there, and once you get used to that car or once you realize that someone else's concept of identity won't solve your problem, you're back to where you started. Now, of course, if you just happen to love cars or if you deeply know who you are and are simply looking for a collaborative community, then that changes things. Those problems are real enough on a personal level, and they have solutions that can be bought, and once bought, these solutions can make your life notably more interesting. But the distinction here is this: when that psychological friction is downstream from your own individual values and preferences, some money can help you, but if it's simply a way of running away from your own mind, then no amount of money will do. The other part of this is external. Environmental friction is about your physical body and the relationship it has to its surroundings. This is where money makes a bigger and more tangible difference. If you can't put food on the table or if you live in a dangerous neighborhood, then you obviously have a problem, and that problem will also extend inwards and create psychological friction. But less obviously, decorating a personal space that inspires you reduces friction. A bicycle as a mode of transportation reduces friction. A computer becoming a bicycle for your mind does, too. That said, while some level of wealth is obviously needed for your general environment to be friction-less, there are diminishing returns to how much friction you can reduce by throwing money at it.
After a certain point, the things that cause friction in your physical environment are less about consumption and more about how you organize your way of being in the world. When you base your life on self-expression, then money simply becomes an instrument for you to increase the fluidity of your life, reducing friction. With the exponential rate of change we have seen in the world of technology, we have access to power that would have been unfathomable even a few centuries ago. And best of all, most of the really important technology is just as financially available to someone who is living a middle-class lifestyle as it is for someone who is a billionaire. Steve Jobs' phone was no different from the one I myself can buy, and Sergey Brin and Larry Page send their emails just the way that a regular person does. This is becoming increasingly true across all industries. And in cases where it's less true, the advantage of more money isn't as large as it has historically been. There is a massive difference between experiencing a life of richness and having an abundance of money. Pretty much no one would argue that the latter is more important than the former, and yet, the way we think about these things tends to diverge away from what we know — the way we spend our resources tends to diverge away from what we know. Things and products aren't solutions in themselves. They are tools, and they are means. And when they are treated as the tools and means they are, they allow us to do what we do best: live freely.
https://medium.com/personal-growth/how-to-be-rich-without-making-money-your-life-d3a03de9fc0b
['Zat Rana']
2020-05-11 07:25:49.579000+00:00
['Culture', 'Life Lessons', 'Technology', 'Self Improvement', 'Life']
2,671
Replacing If-Else With Commands and Handlers
if-else is, at its very core, not bad. It's merely a hammer-and-nail situation we've got going. In programming 101, you'll learn conditional statements, and lots of developers never mature their practices beyond that. But if-else and switch are often not ideal. Better approaches, such as polymorphic execution and dictionaries, are typically neglected. We want to avoid traditional, conditional branches. I've written an article proposing a way of replacing conditional branching with polymorphic execution. To get some context, I'll briefly repeat some of the earlier article's examples before we deep-dive into commands and handlers.

Here's an example of what we'd like to avoid (shown as a code screenshot in the original post: nasty, difficult-to-extend branching on a discrete value; complicated, headache-inducing branching). Besides the freakish use of if-elseif-else, the main issue is that you need to add a branch for every new update reason. A clear violation of the Open/Closed and Single Responsibility principles. Each branch can basically be converted to its own command and corresponding handlers. Let's take a look at how that's possible. Using commands and handlers to simplify your application. 📝 GitHub Repo

I won't preach the theory about what commands, queries, and handlers are. There are plenty of resources on this topic. Instead, I've composed a brief list of what the advantages may be. Testing becomes tremendously easier. You don't need to update existing tests to account for new features. If a command requires additional processing, you create another handler, which you test in isolation. Multiple handlers can handle one command. As you've likely already noted, dispatching one command may invoke one or more handlers. In this way, you can add new functionality without touching existing code. Stupid, Simple Classes. A command is a bag of properties with no setters. Not much can go wrong here. Likewise, for a handler, it's a class with only one public method.
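The branching example the article refers to is embedded as a screenshot in the original post and doesn't survive as text. The sketch below (TypeScript rather than the original C#, with invented names) illustrates the kind of discrete-value branching being criticized:

```typescript
// Illustrative only -- the kind of update-reason branching the article argues
// against. Every new reason forces yet another branch into this one function.
function applyUpdate(reason: string, value: string): string {
  if (reason === "email") {
    return `email set to ${value}`;
  } else if (reason === "username") {
    return `username set to ${value}`;
  } else if (reason === "address") {
    return `address set to ${value}`;
  } else {
    throw new Error(`unknown update reason: ${reason}`);
  }
}
```

Each branch violates Open/Closed: supporting a new reason means editing, and re-testing, this single function.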
Controller actions adhere to a Request-Delegate-Response pattern. They'll contain no business or persistence logic whatsoever. If you're practicing event-storming, I'm sure you're already completely aligned with why commands and handlers are awesome. I ended the previously mentioned article hinting at how you can use dynamic command dispatching to eliminate unnecessary branching. And now, you'll see one way of implementing commands and handlers. Finally, some code!

To follow this code, let me very quickly summarize, in a general sense, what we want to achieve. We want to say, "Okay, something needs to happen. Here are the values. I don't care who handles it, just let me know when it's done." There are three acceptance criteria we'll need to fulfill: A command can be dispatched without the caller knowing the concrete handlers. Every handler matching the command needs to be executed. New commands or processing steps do not require you to modify existing code.

We'll start from the outermost layer and work our way in. From the controller's perspective, it's irrelevant to know about concrete handlers or even interfaces. The action should only be focused on data. For this, we want the controller action to be just as simple as the one below (code: the update email endpoint). You should get the gist of it, even though this is C# aspnetcore. Simply put, it's a controller action — the endpoint and its implementation. I know what you're thinking: "where's the error handling?!" Don't worry. You're right. It should be there. But for brevity, I've left that part out so we can focus on the concept of dispatching commands. The controller has a dependency on a CommandDispatcher. We'll get to that class later. The dispatcher class has a single method, DispatchAsync(command). That's all you'll need to know for now. This allows our controller to only care about validating the correctness of the data it receives and sending commands.
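The "update email endpoint" code is an embedded gist in the original post (C# aspnetcore) and is missing here. A framework-agnostic TypeScript sketch of the same Request-Delegate-Response shape, with invented names, might look like this:

```typescript
// Hypothetical dispatcher interface: one method, no knowledge of concrete handlers.
interface CommandDispatcher {
  dispatchAsync(command: object): Promise<void>;
}

// The action only validates input, builds a command, and dispatches it;
// it returns an HTTP-style status code and nothing else.
async function updateEmailEndpoint(
  dispatcher: CommandDispatcher,
  body: { userId: string; email: string }
): Promise<number> {
  if (!body.email.includes("@")) {
    return 400; // validating the request is the action's only real job here
  }
  await dispatcher.dispatchAsync({ userId: body.userId, newEmail: body.email });
  return 202; // accepted -- the actual work happens in whatever handlers match
}
```

Because the action never names a handler, adding new processing steps later never touches this code.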
How data is handled after dispatching is entirely irrelevant for the controller. Each "update reason" requires its own endpoint, with its own data shape — i.e., the command to send. At this point, implementing new features such as "update username" is as simple as creating a new endpoint and sending the command (code: the update username endpoint). When using this approach, creating endpoints becomes trivial. And that's a good thing. Our endpoint is essentially done now. So, let's move on.

Commands and handlers are where all the business logic is situated. With commands, you essentially only want to care about two things: immutability and data correctness. They are just plain, old regular classes. Nothing fancy at all. Take a look at this ChangeEmailCommand (code: a plain old command class). Obviously, this command class doesn't do much. That's the whole point. Its purpose is to be passed on to a handler. Which brings us to the handler. Take a few minutes to read through the code below (code: a simple, testable command handler); I'll try to describe what's going on after.

First, we have an interface that all command handlers need to implement. The interface is important when you need dynamic type discovery. We'll get to that in a minute. Second, I've created a simple handler that knows how to deal with ChangeEmailCommands. The generic parameter of ICommandHandlerAsync tells us, "this handler needs to be invoked whenever a 'change email command' is dispatched." Do you get a feeling of how d*mn testable this class is? That's the whole point. It should be stupidly easy to test. The class is very focused. One method, one dependency. If you're used to "Service" classes, you know how crazy constructors sometimes get. This approach completely eliminates constructor bloat.

The dispatcher itself: incredibly simple and robust. You've already seen the dispatcher interface. Clean and simple. But let's jog your memory once more.
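The ChangeEmailCommand and handler gists are images in the original post. Translated into TypeScript (the original is C#, and the repository dependency here is invented for illustration), the shapes look like:

```typescript
// A command is an immutable bag of properties -- no setters, no behavior.
class ChangeEmailCommand {
  constructor(
    public readonly userId: string,
    public readonly newEmail: string
  ) {}
}

// Every handler implements one interface with a single public method.
interface CommandHandlerAsync<TCommand> {
  handleAsync(command: TCommand): Promise<void>;
}

// Hypothetical persistence dependency, deliberately narrow.
interface UserRepository {
  setEmail(userId: string, email: string): Promise<void>;
}

// One method, one dependency: trivial to test in isolation with a fake repo.
class ChangeEmailHandler implements CommandHandlerAsync<ChangeEmailCommand> {
  constructor(private readonly users: UserRepository) {}

  async handleAsync(command: ChangeEmailCommand): Promise<void> {
    await this.users.setEmail(command.userId, command.newEmail);
  }
}
```

Testing the handler means passing a fake UserRepository and asserting on what it recorded; no framework, no constructor bloat.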
Command dispatcher’s public interface Before getting distracted with the implementation, let me just reiterate what we need to achieve with the CommandDispatcher . We want to say, “here’s a command, go grab all the matching handlers and pass the command to each handler.” This means, for each command class, we need a list of matching command handlers. In code, we can express this intent with a dictionary, where the key is the type of command and the value is a list of handlers. Take whatever time you need to read through this. OOP people might find this super easy, while others won’t. I’ll describe what’s going on below the code.
https://levelup.gitconnected.com/replacing-if-else-with-commands-and-handlers-527e0abe2147
['Nicklas Millard']
2020-09-29 12:15:18.802000+00:00
['Dotnet Core', 'Programming', 'Software Development', 'Software Engineering', 'Technology']
2,672
What should you look for – as a startup – in a chatbot?
There are millions of chatbots out there, but each one still has its own unique features and benefits! Picking the right one for your business might sound tricky, but making the right decision will certainly have a great impact on your resource levels and on maintaining a positive customer experience. #startupswithbots — Image®: Fluido.ai While startups have managed to create a strong market of their own by addressing a huge gap in the mainstream market (maybe you would want to know how far they have succeeded here), which did not offer more specialized, diverse and personalized solutions, they still face some challenges that hinder their progress. Among these challenges are limited resources, limited capacity and maintaining a successful customer relationship. Although each challenge is tackled on its own, the interconnectedness between them is hard to miss. A lack of clear vision in managing resources can eventually lead to mismanaging the capacity of the executing teams, manifested in overloading, or in unjustifiably extra personnel handling small bits of the work process. These startups really understand how valuable technology is for transforming work processes into more seamless and efficient ones. Hence, there has been great interest in automating many of their services, with the aim of getting more out of their resources as well as improving customer satisfaction. Adopting chatbots has been one of the go-to options many businesses have pursued to improve their overall performance and even achieve better ROIs. The question now is: what should a startup look for in a chatbot that can benefit its business? To answer this question, we will bring you some valuable insights from one of the use cases we prepared this year. Looking at the pain points there, we could already see some commonalities, and hence some of the solutions a chatbot offers to relieve and eliminate those hurdles.
- Firstly, the chatbot is capable of maintaining communication 24/7, and therefore it keeps up a higher level of engagement and communication across different time zones.
- Secondly, startups usually pour a massive amount of information onto their websites, overwhelming visitors and in turn causing low subscriptions, sales conversion and lead generation. Picking a chatbot that brings a widget makes it easier for you to categorize your services and solutions in one place, and even gathers your different calls to action in this same place, ensuring diverse conversational experiences.
- Thirdly, having monitoring and optimization tools that oversee the customer’s interaction with the chatbot enables you to know more about your performance and close your gaps as fast as possible, contributing to your time and resource efficiency and easing the resources hurdle. Moreover, this will help you improve the quality of processing customer requests and live up to their expectations by understanding their growing interests.
- Fourthly, and most importantly, having a platform that offers seamless management of your Conversational AI channel, as well as your live chat with customers across all your customer experience channels, is definitely going to be a great asset, helping you maintain interaction with your customers and never lose any of them by handing them over smoothly and easily to a human agent.
It is also important to note that you generally need to look for a conversational AI platform and not just a chatbot solution, as we are past the times of reciting mere scripts through a chat interface and on the lookout for an intelligent platform powered by AI. Finally, as chatbots are forecasted to revolutionize the contact center industry and contribute to its savings and efficiency, it seems about time for you to step up to the game and automate your services to stand out and progress.
https://medium.com/fluido-ai/what-should-you-look-for-as-a-startup-in-a-chatbot-8cbd8fb448ef
['Merihan Khaled']
2020-12-24 10:04:47.271000+00:00
['Technology', 'Chatbots', 'Startup', 'Artificial Intelligence']
2,673
Berlin Is a Blockchain Hotspot | Has Europe Broken the Second Wave?
Berlin Is a Blockchain Hotspot | Has Europe Broken the Second Wave? Ramesh Choudhary · Nov 25, 2020 In the digital economy, blockchain technology is often described as one of the next big innovations. The World Economic Forum, for example, assumes that it will be one of six megatrends for the future. Berlin is at the forefront of the blockchain revolution, as our first chart, made in collaboration with Berlin Partner, shows. Meanwhile, the present is shaped by the coronavirus pandemic, which rages on as the wait for a vaccine might soon be over. Our second chart has the latest on infection trends in the U.S. and Europe. Source: https://reason-why.berlin/news/berlin-blockchain-business-hub/?pk_campaign=statista&pk_kwd=2020_blockchain
https://medium.com/@rameshchoudhary8675/berlin-is-a-blockchain-hotspot-has-europe-broken-the-second-wave-5f3ebfa7e350
['Ramesh Choudhary']
2020-11-25 03:56:56.908000+00:00
['Berlin', 'Blockchain Technology', 'Blockchain Startup']
2,674
Fintech’s final frontier — HANGAR49
By Ebrahim Moolla, November 16, 2020 Africa’s mushrooming FinTech enterprises hold the key to meaningful upliftment on the continent as they deliver far more than just improved banking services to its massive unbanked market. Some 330 million adults, or 60 percent of the adult population, in Africa lack access to the most basic financial services. The sheer size of the unserved and underserved market in Africa means that Fintech firms can make a bigger difference on the continent than anywhere else. African startups and established companies face numerous challenges like poor or absent infrastructure, low internet penetration, funding issues, political instability, and gender stereotyping, but the Fintech environment on the continent does hold advantages not seen in other parts of the world. A space for collaboration African Fintechs are not disrupting the financial services sector because, in many regions, traditional financial institutions have not found it viable to serve the market in any way at all. This means there is greater scope for collaboration, rather than direct competition. And because the financial services market has such a low base on the continent, Africa is leading the world in sector convergence, the convergence of new technologies to solve logistical challenges, and the potential impact of big data. Based on Crunchbase data, the African Fintech sector comprises over 400 active companies, 80% of which are local companies, enabling payments, funds transfer, lending, and even wealth management. Nigeria, Kenya, and South Africa are the top Fintech hubs on the continent, accounting for the largest proportion of Fintech firms and attracting the lion’s share of investments. In view of these environmental factors and developments, no other tech sector has attracted more funding in Africa than Fintech in each of the past three years. In 2019 alone, African Fintech startups raised a combined $678.73 million in funding.
Government support needed Fintech is spearheading the reshaping of the financial sector, with several key sectors benefitting from the development, and African governments are under pressure to maximise potential benefits by committing to infrastructure development, education and an equitable regulatory framework. And the coronavirus pandemic may yet prove to be the catalyst for Africa becoming a digital economy, with resulting benefits for the Fintech sector. In a bid to curb the spread of the virus, the World Health Organization has been cautioning against the use of hard currency and encouraging the use of digital payments. This move has prompted African governments and regulators to enforce measures aimed at facilitating more cashless transactions. Now more than ever before, Fintech is poised to become a crucial stepping stone that will propel Africa into becoming a truly digital economy. The sector remains a key driver for what many hope will be an African Renaissance marked by peace, prosperity, and cultural rejuvenation. HANGAR49, as an outreach optimization business focused on building valuable human connections with the aid of digital technologies, is committed to supporting and partnering with Fintechs in Africa and around the world in their financial inclusion mission. Speak to us today to look at your options for uncovering revenue opportunities.
https://medium.com/@bl_29764/fintechs-final-frontier-hangar49-c945896240e
['Bradley Laubscher']
2021-09-15 08:24:21.411000+00:00
['Financial Services', 'Fintech', 'Technology', 'Business Development', 'Lead Generation']
2,675
Chainsaws Were Invented for a *Disturbing* Reason
Chainsaws Were Invented for a *Disturbing* Reason That reason? Childbirth It’s no secret that we’ve come a long way with modern medicine, but did you know that chainsaws were originally invented to assist with childbirth? If you’re clenching your legs together, just know that so am I. The history of the chainsaw developed rapidly starting in the 1700s. Let’s take a closer look at what sparked this invention. The Childbirth Problem Before C-sections, women had a tough time delivering large babies. A C-section, or Cesarean section, is the surgical removal of the baby from the uterus. According to Mayo Clinic, this is generally required if problems with the baby passing through the birth canal are predicted or if issues arise in the late stages of pregnancy, such as delayed labor or a distressed baby. However, in the 1700s, C-sections were not yet performed. Babies getting stuck in the birth canal was a problem that could result in death for the baby and/or the mother. The solution? A symphysiotomy. Symphysiotomies are no longer used in childbirth, and for good reason. They were a messy procedure that often caused lasting damage to the mother. This damage was both physical and mental, as the procedures could be difficult to recover from and were often performed without anesthesia. During a symphysiotomy, cartilage is removed from the pelvis, which is then manually widened. By Fred the Oyster, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=35384964 In the above image, the black area with the number five is where they would remove the cartilage. After the removal, they would widen the pelvis to make room for the baby. Obviously, this was difficult to do with the 1700s equivalent of a surgical blade. The Solution In the 1780s, two doctors, John Aitken and James Jeffray, came up with a solution to the grueling, long process of a symphysiotomy. They invented the chainsaw. You may be thinking of the type of chainsaw that lumberjacks or horror villains use.
Well, rest assured, the first chainsaws weren’t that large. The first chainsaw was a lot smaller. It was called an osteotome. The word comes from the Greek osteo (bone) and tome (cut), which was rather fitting. Here’s what the first models looked like: By Sabine Salfer — private photo taken at Orthopädische Universitätsklinik Frankfurt (M), Public Domain, https://commons.wikimedia.org/w/index.php?curid=2428542 The serrated blade made cutting the pelvic bone faster, easier, and more precise. This device was commonly used throughout the 19th century to assist in childbirth and in other procedures that required cutting through bone. However, as time went on and medical practices (thankfully) advanced, hygiene and anesthesia became more of a focus. Once doctors were able to safely administer anesthesia and the C-section became safer, the brutal practice of symphysiotomy faded out. If It Works on Bone… Even though chainsaws faded out of medical practice, many people quickly realized that if a tool can cut through bone, it can cut through other hard materials, like wood. In 1905, Samuel Bens claimed the first patent for an electric chainsaw, with a plan to chop down giant redwood trees for construction. After Bens patented the electric chainsaw, further developments exploded across the United States and the world. Soon enough, equivalents of the modern chainsaws we typically think of were created. Lesson Learned Oftentimes, inventions lead to other discoveries and further enhancements of those inventions. So yes, the chainsaws that fell big ole trees were originally invented to saw into a woman’s pelvis. But that’s the beauty of invention, right? Personally, I’m just happy modern medicine came up with C-sections.
https://medium.com/history-of-yesterday/chainsaws-were-invented-for-a-disturbing-reason-cd39cefb5983
['Malinda Fusco']
2020-12-20 15:02:27.786000+00:00
['Ideas', 'Innovation', 'Culture', 'Technology', 'History']
2,676
Women In Tech: Celebrating International Women’s Day 2019 #BalanceForBetter
International Women’s Day is a global day celebrating the social, economic, cultural and political achievements of women as a global community. This year’s theme, #BalanceForBetter, strives to increase gender balance in all industries for a better working world. How can we achieve this balance? Bring awareness, motivate others and take action. Everyone can do their part. Happy #IWD2019. Empowering women through technical education can help bridge the gender gap and open up opportunities for more women in tech. Some organizations that work to nurture women’s tech talent are Girls Who Code (USA), Black Girls Code (USA & South Africa) and Kizcode (UK). Girls Who Code, founded by Reshma Saujani, has made the gender gap and tech education more visible by recruiting high-profile ambassadors, most notably supermodel Karlie Kloss, to raise awareness. Girls Who Code has reached almost 90,000 girls and aims to contribute to gender parity by 2027. Kimberly Bryant set out to create Black Girls Code to prove that girls of every colour can code. Through workshops and after-school programs, Black Girls Code provides underprivileged girls in underrepresented communities the opportunity to learn and master technical skills. Their goal is to train 1 million girls by 2040. Müjde Esin created Kizcode to open doors for Turkish- and Kurdish-speaking women in the UK who faced domestic violence and forced marriages in their patriarchal societies. It is difficult for these women to integrate into UK society because they face language difficulties and unemployment. Kizcode aims to equip these women with skills to improve their quality of life through education and by nurturing their talents. By empowering these women with code, tech and computer skills, it provides them with an opportunity for a career, to reach their goals, or simply to sell their handmade products and recipes online as a means of income. You don’t have to set up a coding organization to make a difference.
Two women who are fighting for diversity and balance in the workplace for women are Laura Gómez and Laura Weidman Powers. Through the people analytics tool Atipica, Laura Gómez helps companies use data to strategize hiring and build more diverse workforces. Laura Weidman Powers, who served as a Senior Policy Advisor to the Chief Technology Officer in the Obama White House, is an advocate for young Black and Latino engineers and works to ensure that they’re proportionally represented in the field. When empowered with knowledge and mentored, girls and women can make the world a better place. A group of girls from San Fernando High School in Los Angeles teamed up with DIY Girls, an organization supporting girls in STEM, to create lightweight, portable, solar-powered tents for the homeless. Homelessness was close to their hearts, as Daniela Orozco, one member of the group who was then a senior at San Fernando, came from a low-income family. Within only four years, the girls had seen homelessness increase in their community and wanted to create a solution. Their solar-powered tents have the potential to help many different people worldwide, including refugees and victims of natural disasters. Thato Kgatlhanye from South Africa launched her company, Rethaka Trading, when she was only 18 years old. Disadvantaged children face many challenges, such as lacking school supplies like school bags and being exposed to the risk of getting hit by cars while walking on unsafe roads to school. Rethaka repurposes schoolbags, fitting them with retro-reflective materials to increase visibility during the children’s walk to and from school. These bags are fitted with a solar panel that charges as the child walks to school, and when they get back home, they can use it as a light to study by. “We are offering dignity, safety and access to light.” — Kgatlhanye Although young, these women have contributed greatly to our society, much like blockchain has.
Blockchain technology is still maturing, but we’ve already seen the impact it has made on the world. If it weren’t for the development of blockchain, STK Token and other cryptocurrencies, exchanges and wallets wouldn’t exist. It has also opened up a new type of job for women in tech: Blockchain Developer. STK’s former Blockchain Developer, Natalie Chin, who is still studying Computer Science at McMaster University, is already making an impact in the space. Chin has been very actively involved in the blockchain community: volunteering, organizing, mentoring, and speaking at hackathons — including DeltaHacks and STACKATHON, to name a few. Her talent has taken her to ETH San Francisco, where the STK team & friends won the top 3 prizes. Chin continues to work in tech in Toronto and as a Blockchain Professor at George Brown College, where she shares her knowledge and experience to guide the next generation of Blockchain Developers. Former STK Blockchain Developer Natalie Chin teaching the next generation of developers. In all these examples, we can see that when women are empowered, supported and represented in tech, they accomplish incredible things. This is one of the reasons to strive for #BalanceForBetter. Is there a woman in tech who inspires you? Do you know someone we should profile? Leave your comments below!
https://medium.com/stk-token/women-in-tech-celebrating-international-womens-day-2019-balanceforbetter-932a7be54755
['Stk Token']
2019-03-08 15:36:14.030000+00:00
['Blockchain', 'Women In Tech', 'International Womens Day', 'Technology', 'Fintech']
2,677
Former NSA Official Questions the Intelligence Community’s Assessment on the DNC Hack
Bill Binney, former National Security Agency (NSA) Technical Director, became a well-known NSA whistleblower after the September 11th terrorist attacks. Binney alleged that the NSA “buried key intelligence that could have prevented 9/11” and that “electronic intelligence gathering is being used for covert law enforcement, political control and industrial espionage.” During Binney’s tenure with the NSA, he was a Russia specialist, and his expertise spans intelligence analysis, traffic analysis, systems analysis, and more. According to The Intercept, Binney ultimately resigned from the NSA over a dispute about the use of an expensive tool from a powerful defense company rather than an in-house project. In recent years, he has been a prominent skeptic of the narrative that the 2016 Trump Campaign unlawfully collaborated with Russia to interfere in the election by coordinating the release of confidential DNC emails. Wikileaks Receives and Publishes the DNC Emails The core allegation of Russiagate is that Russian state-affiliated hackers conducted a cyber-intrusion into the DNC’s servers, provided the compromised emails to Wikileaks for publication during the 2016 election cycle, and did all of this in order to benefit the Trump Campaign. Upon discovering the cyber-intrusion into the DNC’s servers, Perkins Coie, the DNC’s counsel, retained the cybersecurity firm Crowdstrike to conduct an assessment. To the public’s knowledge, no government agency has looked at the DNC’s servers. In fact, the DNC “did not allow the FBI to physically inspect its machines, including servers,” according to Rowan Scarborough. The DNC’s refusal to accept assistance from the FBI and the Department of Homeland Security prompted numerous questions. Among those questions was whether or not the DNC was intentionally hiding its information from government investigators.
Additionally, Crowdstrike’s connections to former FBI officials close to former FBI Director Robert Mueller raised possible conflicts of interest. Crowdstrike’s CEO Shawn Henry was in charge of the FBI’s cyber division under former Director Mueller. On a surface level, the DNC’s refusal to give the FBI access to its servers raised serious eyebrows. Given the unprecedented nature of the allegation against the Trump Campaign, one would expect the FBI to follow best practices and “get access to the machines themselves,” as former FBI Director James Comey told Congressman Will Hurd before the House Permanent Select Committee on Intelligence. United States government intelligence reporting alleges that Russian state-affiliated hackers conducted a cyber-intrusion and leaked the stolen data to Wikileaks. A summary of U.S. intelligence community reporting states that Russia’s President ordered “an influence campaign in 2016 aimed at the US presidential election.” Among Russia’s goals was to “undermine faith in the US democratic process,” as well as demonstrating a clear preference for Trump. It is clear that Russia’s efforts to sow chaos succeeded, as both political parties continue to discuss the 2016 election. Problematically, the Intelligence Community’s assessment provided only “the uncorroborated assertion of intelligence officials to go.” Even The New York Times’s Scott Shane wrote of the Intelligence Community’s assessment, “What is missing from the public report is…hard evidence to back up the agencies’ claims that the Russian government engineered the election attack…. Instead, the message from the agencies essentially amounts to ‘trust us.’” The broader topic of Russian interference continues to fester in the media as Attorney General Bill Barr and United States Attorney John Durham investigate the origins of the Russia investigation.
Independent thinkers like Bill Binney and the Veteran Intelligence Professionals for Sanity (VIPS) continue to question conventional wisdom by offering alternative possibilities. Throughout the past few years, Binney has been featured on Fox News and other networks outlining his analysis. In a recent interview he discussed the hack of the Democratic National Committee (DNC). For the sake of clarity, his argument is broken down into multiple points. Point 1 Binney’s team analyzed Wikileaks’ data to determine how Wikileaks received that information. Whoever provided the emails to Wikileaks did so in three batches, all of which “had a last modified time that was rounded off [rounded up] to an even [the next-higher] second, so they all ended up in even [meaning complete or full, not fractional] seconds.” Data files’ timestamps can be modified by the File Allocation Table (FAT) filesystem. FAT behaves such that “when doing a batch process of data and transferring it to a storage device like a thumb drive or a CD-ROM, it rounds off the last modified time to the nearest even [next-higher] second, so that’s exactly the property we found in all that data posted by Wikileaks.” Here, Binney’s contention is that the data from the DNC was “downloaded to a storage device a CD-ROM or a thumb drive and physically transported before Wikileaks could post it, so that meant it was not a hack.” This may indicate that the DNC data was likely downloaded and physically transported to Wikileaks rather than obtained through a cyber-intrusion. Point 2 Binney and his team of analysts then tested the data transfer speeds using information contained in the DNC Wikileaks files, including file names, numbers of characters in the file, and a timestamp at the end of the file. With this information, his team used a program to calculate the transfer rate of all the data.
To calculate the transfer rate, Binney contends, “all you have to do is look at between the two time stamps, the file name and the number of characters in the file, and take the difference between the times [start-time versus end-time], and that’s the transfer rate for that number of characters, so we found that the variations ran from something like 19 to 49.1 megabytes per second.” 19 to 49.1 megabytes per second is roughly 19 to 49 million characters per second; however, the Internet cannot support that rate of transfer, “not for anybody who’s just…a hacker coming in across the net.” Binney’s team tested the Internet’s transfer speeds, and the highest rate they achieved was “one-fourth the rate, little less than one-fourth the rate necessary to do the transfer at the highest rate that we saw in the Guccifer 2 data, which meant it didn’t go across the net, so, in fact, the file rate transfers couldn’t.” Point 3 Binney’s team also found evidence that potentially points to Guccifer 2 manipulating the data files, inserting Russian signatures to suggest the Russians did this. “If you go back to the Vault 7 release from Wikileaks again, from CIA, and you look, they have this Marble framework program that will modify files to look like someone else did the hack, and who were the countries that they had the ability to do that [to], in the Marble framework program? Well, one was Russia, the other was China,” said Binney. Combining this possible data tampering with the circumstantial evidence led Binney’s team to conclude that all signs point back to the CIA. Point 4 All of this circumstantial evidence further aligns with Crowdstrike’s assessment of the DNC server. Crowdstrike is the cybersecurity investigator who conducted an investigation on behalf of the DNC. Crowdstrike’s CEO Shawn Henry testified before Congress: “We have indicators that data was exfiltrated.
We did not have concrete evidence that data was exfiltrated from the DNC, but we have indicators that it was exfiltrated.” Henry additionally testified, “there are times when we can see data exfiltrated, and we can say conclusively. But in this case it appears it was set up to be exfiltrated, but we just don’t have the evidence that says it actually left.” Conclusion Binney and his team’s memo should ring alarm bells for civil libertarians, journalists, conservatives, and officials in the Trump Administration. This memo may eventually be found to be without merit; however, it does raise the broader question of why federal agencies never (to our knowledge) required the DNC to provide its servers, and why there has been little independent investigation into the actual information released by Wikileaks. Our republic is best served when the press acts as a watchdog on government. As The Nation’s Patrick Lawrence writes, “we are urged to accept the word of institutions and senior officials with long records of deception.” We will eventually be presented with more evidence, given how much scrutiny the current Department of Justice has focused on the origins of the Russia investigation. If Binney and his team’s theory has even a scintilla of truth to it, then we may be looking at the scandal of a generation. The possibility of the Central Intelligence Agency interfering in the electoral process on behalf of the incumbent president’s political party is no laughing matter. As United States Attorney John Durham and his investigators interview former CIA Director John Brennan and inevitably reach a final conclusion, we should be prepared to question our prior assumptions.
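The transfer-rate arithmetic behind Point 2 above is simple to illustrate. The sketch below uses made-up numbers, not Binney’s actual data, and treats one character as roughly one byte (as the article itself does):

```javascript
// Transfer rate from two timestamps and a byte count, as described in Point 2.
// The 98.2 MB / 2 s figures are invented for illustration only.
function transferRateMBps(bytes, startMs, endMs) {
  const seconds = (endMs - startMs) / 1000; // timestamps in milliseconds
  return bytes / 1e6 / seconds;             // megabytes per second
}

// Example: 98.2 million characters between two timestamps 2 seconds apart
// yields 49.1 MB/s, the top of the range Binney's team reported observing.
const rate = transferRateMBps(98.2e6, 0, 2000);
```

The argument then turns on whether an ordinary internet connection at the time could sustain a rate at the high end of this range, which Binney’s team disputed.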
https://medium.com/discourse/former-nsa-official-questions-the-intelligence-communitys-assessment-on-the-democratic-national-b7e29017030d
['Mitchell Nemeth']
2020-08-26 19:09:34.536000+00:00
['Network Security', 'Politics', 'Intelligence', 'Computer Science', 'Technology']
2,678
A Complete Guide to Vue Lifecycle Hooks in Vue3
Photo by Sonja Langford on Unsplash Lifecycle hooks in both Vue2 and Vue3 work very similarly — we still have access to the same hooks, and we still want to use them for the same use cases. If our project uses the Options API, we don’t have to change any of the code for our Vue lifecycle hooks. This is because Vue3 is designed to be compatible with prior releases of Vue. However, the way we access these hooks is a little bit different when we decide to use the Composition API — which is especially useful in larger Vue projects. By the end of this article, you’ll know how to use lifecycle hooks in both the Options API and the Composition API and be on your way to writing better code. Let’s go! Table of Contents What are the Vue Lifecycle Hooks Using Vue Lifecycle Hooks in the Options API Using Vue3 Lifecycle Hooks in the Composition API Updating Vue2 Code to Vue3 Lifecycle Hooks A Look at Each Lifecycle Hook in Both Vue2 and Vue3 Creation Hooks Mounting Hooks Update Hooks Destruction Hooks Activation Hooks New Debug Hooks in Vue3 Conclusion What are the Vue Lifecycle Hooks First, let’s look at a diagram of the Vue3 lifecycle hooks in both the Options API and Composition API. This should give a high-level overview of what’s going on before we dive into the details. Source: LearnVue Essentially, each main Vue lifecycle event is separated into two hooks that are called right before that event and then right after. There are four main events (8 main hooks) that you can utilize in your Vue app. Creation — runs on your component’s creation Mounting — runs when the DOM is mounted Updates — runs when reactive data is modified Destruction — runs right before your element is destroyed. Using Vue Lifecycle Hooks in the Options API With the Options API, our lifecycle hooks are exposed as options on our Vue instance. We don’t need to import anything; we can just invoke the method and write the code for that lifecycle hook.
For example, let’s say we wanted to access our mounted() and our updated() lifecycle hooks. It might look something like this.

<script>
export default {
  mounted() {
    console.log('mounted!')
  },
  updated() {
    console.log('updated!')
  }
}
</script>

Simple enough, right? Okay. Let’s move on to using Vue3 lifecycle hooks in the Composition API. Using our Vue Lifecycle Hooks in the Vue3 Composition API In the Composition API, we have to import lifecycle hooks into our project before we can use them. This is to help keep projects as lightweight as possible.

import { onMounted } from 'vue'

Excluding beforeCreate and created (which are replaced by the setup method itself), there are 9 Options API lifecycle hooks that we can access in our setup method:

onBeforeMount - called before mounting begins
onMounted - called when the component is mounted
onBeforeUpdate - called when reactive data changes and before re-render
onUpdated - called after re-render
onBeforeUnmount - called before the Vue instance is destroyed
onUnmounted - called after the instance is destroyed
onActivated - called when a kept-alive component is activated
onDeactivated - called when a kept-alive component is deactivated
onErrorCaptured - called when an error is captured from a child component

When we import them and access them in our code, it looks like this.
<script> import { onMounted } from 'vue' export default { setup () { onMounted(() => { console.log('mounted in the composition api!') }) } } </script> Updating Vue2 Code to Vue3 Lifecycle Hooks This handy Vue2 to Vue3 lifecycle mapping is straight from the Vue3 Composition API docs, and I think it's one of the most useful ways to see exactly how things are going to be changing and how we can use them. beforeCreate -> use setup() created -> use setup() beforeMount -> onBeforeMount mounted -> onMounted beforeUpdate -> onBeforeUpdate updated -> onUpdated beforeDestroy -> onBeforeUnmount destroyed -> onUnmounted errorCaptured -> onErrorCaptured An In-Depth Look at Each Lifecycle Hook We now understand two important things: The different lifecycle hooks we can use How to use them in both the Options API and the Composition API Let's take a deeper dive into each lifecycle hook and look at how they're used, what kind of code we can write in each one, and the differences between them in the Options API and Composition API. Creation Hooks — The Start of the VueJS Lifecycle Creation hooks are the very first thing that runs in your program. beforeCreate() — Options API Since the created hook is the thing that initializes all of the reactive data and events, beforeCreate does not have access to any of a component's reactive data and events. Take the following code block for example: export default { data() { return { val: 'hello' } }, beforeCreate() { console.log('Value of val is: ' + this.val) } } The output value of val is undefined because data has not been initialized yet. You cannot call your component methods here either. If you want to see a full list of what is available, I'd recommend just running console.log(this) to see what has been initialized. This is useful in every other hook too when using the Options API.
Using the beforeCreate hook is useful when you need some sort of logic/API call that does not need to be assigned to data, because anything we assigned to data at this point would be lost once the state was initialized. created() — Options API We now have access to the component's data and events. So, modifying the example from above to use created instead of beforeCreate, we see how the output changes. export default { data() { return { val: 'hello' } }, created() { console.log('Value of val is: ' + this.val) } } The output of this would be Value of val is: hello because we have initialized our data. Using the created method is useful when dealing with reading/writing the reactive data. For example, if you want to make an API call and then store that value, this is the place to do it. It's better to do that here than in mounted because it happens earlier in Vue's synchronous initialization process, and you can perform all the data reading/writing you want. What about the Composition API Creation Hooks? For the Vue3 lifecycle hooks using the Composition API, both beforeCreate and created are replaced by the setup() method. This means that any code you would have put inside either of these methods is now just inside your setup method. The code we just wrote in the created lifecycle hook would be rewritten like this. import { ref } from 'vue' export default { setup() { const val = ref('hello') console.log('Value of val is: ' + val.value) return { val } } } Mounting Hooks — Accessing the DOM These mounting hooks handle mounting and rendering the component. These are some of the most commonly used hooks in projects and applications. beforeMount() and onBeforeMount() Called right before the component DOM is actually rendered and mounted. In this step, the root element does not exist yet. In the Options API, this can be accessed using this.$el. In the Composition API, you will have to use a ref on the root element in order to do this.
export default { beforeMount() { console.log(this.$el) } } The Composition API template using refs would look like this. <template> <div ref='root'> Hello World </div> </template> Then, the corresponding script to try and access the ref. import { ref, onBeforeMount } from 'vue' export default { setup() { const root = ref(null) onBeforeMount(() => { console.log(root.value) }) return { root } }, beforeMount() { console.log(this.$el) } } Since this.$el is not yet created, the output will be undefined. While it's preferable to use created() / setup() to perform your API calls, this is really the last place you should make them; any later would be unnecessarily late in the process. Since beforeMount runs right after created, it has access to the same component variables. mounted() and onMounted() Called right after the first render of the component. The element is now available, allowing for direct DOM access. Once again, in the Options API we can use this.$el to access our DOM, and in the Composition API we need to use refs to access the DOM in our Vue lifecycle hooks. import { ref, onMounted } from 'vue' export default { setup() { /* Composition API */ const root = ref(null) onMounted(() => { console.log(root.value) }) return { root } }, mounted() { /* Options API */ console.log(this.$el) } } Update Hooks — Reactivity in the VueJS Lifecycle The update lifecycle events are triggered whenever reactive data is modified, causing a render update. beforeUpdate() and onBeforeUpdate() Runs after reactive data changes but before the component is re-rendered. This is a good place to update the DOM manually before any changes are rendered. For example, you can remove event listeners. beforeUpdate could be useful for tracking the number of edits made to a component or even tracking the actions to create an "undo" feature. updated() and onUpdated() The updated hooks run once the DOM has been updated. Here's some starter code that uses both beforeUpdate and updated.
<template> <div> <p>edited {{ count }} times</p> <button @click='val = Math.random()'>Click to Change</button> </div> </template> With either of the corresponding scripts. export default { data() { return { count: 0, val: 0 } }, beforeUpdate() { this.count++; console.log("beforeUpdate() val: " + this.val) }, updated() { console.log("updated() val: " + this.val) } } OR import { ref, onBeforeUpdate, onUpdated } from 'vue' export default { setup () { const count = ref(0) const val = ref(0) onBeforeUpdate(() => { count.value++; console.log("beforeUpdate"); }) onUpdated(() => { console.log("updated() val: " + val.value) }) return { count, val } } } These methods are useful, but for a lot of use cases we may want to consider using watchers to detect these data changes instead. Watchers are good because they give us both the old value and the new value of the changed data. Another option is using computed properties to derive state from other values. Destruction Hooks — Cleaning Things Up The destruction hooks for a component are used in the process of removing a component and cleaning up all the loose ends. This is the time for removing event listeners and anything else that could lead to memory leaks if not properly cleaned up. beforeUnmount() and onBeforeUnmount() Because this runs before the component starts to get torn down, this is the time to do most, if not all, of the clean up. At this stage, your component is still fully functional and nothing has been destroyed yet. An example of removing an event listener would look like this in the Options API.
export default { mounted() { console.log('mount') window.addEventListener('resize', this.someMethod); }, beforeUnmount() { console.log('unmount') window.removeEventListener('resize', this.someMethod); }, methods: { someMethod() { // do something } } } And this in the Composition API. import { onMounted, onBeforeUnmount } from 'vue' export default { setup () { const someMethod = () => { // do something } onMounted(() => { console.log('mount') window.addEventListener('resize', someMethod); }) onBeforeUnmount(() => { console.log('unmount') window.removeEventListener('resize', someMethod); }) } } One way to see this in action is to work in Vite, vue-cli, or any dev environment that supports hot reloading. When your code updates, some of your components will unmount and mount themselves. unmounted() and onUnmounted() At this point, most of your component and its properties are gone, so there's not much you can do. Once again, I'd print out some data to see what exactly is still around and whether it could be useful for your project. import { onUnmounted } from 'vue' export default { setup () { /* Composition API */ onUnmounted(() => { console.log('unmounted') }) }, unmounted() { /* Options API */ console.log('unmounted') } } Activation Hooks — Managing Keep-Alive Components A keep-alive tag is a wrapper element for dynamic components. It stores a cached reference to inactive components so that Vue does not have to create an entirely new instance every time a dynamic component changes. For this specific use case, Vue gives us two lifecycle hooks: activated() and onActivated(). This hook is called whenever a kept-alive dynamic component is "reactivated" — meaning that it is now the active view of the dynamic component. For example, if we are using keep-alive components to manage different tab views, every time we toggle between tabs, the current tab will run this activated hook. Let's say we have the following dynamic component setup using the keep-alive wrapper.
<template> <div> <span @click='tabName = "Tab1"'>Tab 1 </span> <span @click='tabName = "Tab2"'>Tab 2</span> <keep-alive> <component :is='tabName' class='tab-area'/> </keep-alive> </div> </template> <script> import Tab1 from './Tab1.vue' import Tab2 from './Tab2.vue' import { ref } from 'vue' export default { components: { Tab1, Tab2 }, setup () { /* Composition API */ const tabName = ref('Tab1') return { tabName } } } </script> Inside our Tab1.vue component, we can access our activation hook like this. <template> <div> <h2>Tab 1</h2> <input type='text' placeholder='this content will persist!'/> </div> </template> <script> import { onActivated } from 'vue' export default { setup() { onActivated(() => { console.log('Tab 1 Activated') }) } } </script> deactivated() and onDeactivated() As you may guess, this is called when a kept-alive component is no longer the active view of a dynamic component. This hook can be useful for use cases like saving user data when a specific view loses focus, or triggering animations. We can capture the hook like this. import { onActivated, onDeactivated } from 'vue' export default { setup() { onActivated(() => { console.log('Tab 1 Activated') }) onDeactivated(() => { console.log('Tab 1 Deactivated') }) } } Now, when we toggle between the tabs, each dynamic component's state will be cached and saved. Great! Vue3 Debug Hooks Vue3 gives us two hooks that we can use for debugging purposes. They are: onRenderTracked onRenderTriggered Both of these events take a DebuggerEvent that allows us to tell what is causing a re-render in our Vue instance. export default { renderTriggered(e) { debugger // inspect which dependency is causing the component to re-render } } Conclusion Whether you decide to use the Options API or the Composition API, it's important to know not only what lifecycle hook to use, but why you're using it. For many problems, multiple lifecycle hooks can work. But it's good to know which is the best for your use case.
No matter what, you should just think about it and have a good reason for choosing a specific lifecycle hook. I hope this helped you understand a little bit more about lifecycle hooks and how to implement them in your projects. Happy coding!
https://javascript.plainenglish.io/a-complete-guide-to-vue-lifecycle-hooks-in-vue3-3861d78033ba
['Matt Maribojoc']
2021-01-11 21:41:13.024000+00:00
['Front End Development', 'Vuejs', 'Technology', 'Web Development', 'Programming']
2,679
Let’s configure Hadoop from Ansible
Automating configuration management using Ansible is very convenient when the team size increases and manual configuration becomes difficult. Recently I started learning about Ansible, and I find it fascinating that for almost any kind of problem that might exist, there is a technological solution available. Today I will create a playbook to configure Hadoop 1 on a freshly booted system. Before we begin the configuration, let's lay down the steps for how we are going to achieve it. This is crucial because writing things down helps us manage our playbook more effectively. Copy the Hadoop and JDK software to the managed node Install the software Create the namenode directory Configure the hdfs-site.xml and core-site.xml files Format the namenode directory Start the Hadoop services So the only prerequisite today is knowing what Hadoop is. A brief about Hadoop Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. It provides massive storage for any kind of data, enormous processing power, and the ability to handle virtually limitless concurrent tasks or jobs. The structure of a typical Hadoop HDFS cluster is somewhat like the figure given below. The namenode is responsible for collecting and aggregating the storage from the datanodes so that the client can connect to the namenode and take the storage directly from there. The namenode, hence, is responsible for managing this storage. So there are two types of nodes in an HDFS or Hadoop storage cluster: DataNode NameNode The setup between them is not much different. Both need the Hadoop and JDK software. The JDK is needed since Hadoop is written in Java. I will be configuring the namenode today on the RHEL8 operating system. Let's begin. Before beginning the tasks, let's see if we have proper connectivity with the managed nodes. 1.
Copy the Hadoop and JDK software to the managed nodes The copy module allows us to copy the files specified in src on the local host to the dest folder of the managed nodes. I have copied the files to the root directory of the managed node. 2. Install the software We can install the software using the yum module. We give the location of the rpm file in the name parameter. The state tells that we want the software to be installed. However, you may notice I used the command module to install the Hadoop software. This is because, to install the software, I had to use the -force option. This functionality is not supported by the yum module, so the command module helps in achieving this. The command module allows us to run OS-specific commands. 3. Create the namenode directory This is a fairly easy task. We can create a directory using the file module. The directory is created in the path specified. The state tells that we want the directory to be created. 4. Configure the hdfs-site.xml and core-site.xml files The main configuration to set up the Hadoop cluster comes from the hdfs-site.xml and core-site.xml files. These files are already created and set up correctly in my local filesystem. So the task is to correctly place them in the target node's Hadoop folder, which we did here. Just to give you an idea of what these files contain, see core-site.xml and hdfs-site.xml. I won't go into the details of these files; I showed them just so you know what they contain. In layman's terms, hdfs-site.xml specifies the namenode directory and also the fact that this current system is configured as the namenode. On the other hand, core-site.xml specifies the network. Since we want anyone to be able to connect to the system, we use 0.0.0.0. Hadoop by default runs on port 9001, so we also specified that. In the case of a datanode, we give the IP of the namenode. So far so good. 5.
Formatting the namenode This is similar to installing the Hadoop software, where we used an OS-specific command. Formatting the namenode is necessary to update the filesystem with our new configuration. So now our work is done. What remains is to start the services. 6. Start the Hadoop services The given command starts the Hadoop services. Now we need to run this entire playbook and see the output: ansible-playbook -v setup.yml The playbook ran without any errors. That's sweet. Now let me check whether the changes actually occurred on the target node. That's so nice. The namenode is configured and working. The task for configuring the datanode is similar. The major difference is in the hdfs-site.xml and core-site.xml files. Also, in place of creating the namenode directory, we create the datanode directory. Everything else remains the same. Conclusion So today we created an automated HDFS cluster. This is a very important task in the industry, since there might be a situation where we want to configure hundreds of nodes urgently. Doing this task manually makes little sense, since it would be very slow and prone to errors too. Ansible provides an easier and faster way of achieving this. Thanks.
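For reference, the six tasks described above could be collected into a single playbook along these lines. This is only a minimal sketch, not the author's exact playbook: the file names, paths, and the "namenode" inventory group are illustrative assumptions, while the modules (copy, yum, file, command, shell) are the standard Ansible modules mentioned in the article.

```yaml
# Sketch of the namenode setup playbook (file names, paths, and the
# inventory group name are assumptions for illustration)
- hosts: namenode
  tasks:
    # 1. Copy the Hadoop and JDK installers to the managed node
    - copy:
        src: "{{ item }}"
        dest: /root/
      loop:
        - jdk-8u171-linux-x64.rpm
        - hadoop-1.2.1-1.x86_64.rpm

    # 2. Install the JDK with yum; install Hadoop with rpm via the
    #    command module, since yum cannot pass the force option
    - yum:
        name: /root/jdk-8u171-linux-x64.rpm
        state: present
        disable_gpg_check: yes
    - command: rpm -ivh --force /root/hadoop-1.2.1-1.x86_64.rpm

    # 3. Create the namenode directory
    - file:
        path: /nn
        state: directory

    # 4. Place the pre-written configuration files in the Hadoop folder
    - copy:
        src: "{{ item }}"
        dest: /etc/hadoop/
      loop:
        - hdfs-site.xml
        - core-site.xml

    # 5. Format the namenode (shell module, since the confirmation
    #    needs a pipe)
    - shell: echo Y | hadoop namenode -format

    # 6. Start the Hadoop namenode service
    - command: hadoop-daemon.sh start namenode
```

For a datanode, the same sketch would apply with the datanode directory in step 3 and the datanode variants of the two configuration files.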
https://medium.com/@2503arjun/lets-configure-hadoop-from-ansible-df00b98977c0
['Arjun Chauhan']
2020-12-17 07:54:05.220000+00:00
['Ansible', 'Hadoop', 'Technology', 'Automation', 'DevOps']
2,680
The #1 leadership skill every technology executive needs
Question: "What's the most important leadership skill a technology executive needs to have and why? How does one acquire it?" My answer: The ability to influence. Nearly every company is becoming a tech company. The tech companies are becoming platforms. Even if you're ahead, innovation continues. There are legacy environments to overcome, which can be rooted in both technology and culture. With all this transformation, influence is critical. Gone are the days when technology leaders simply take orders from their counterparts and execute. In order to drive change, technology leaders need to influence at a 360-degree level, from business leadership to technology peers to their own team members and colleagues. The best way to acquire this skill? Many technology executives wait for their career to be defined for them. They wait to be tapped for a role, internally or externally. Mastering the art of influence takes practice and repetition. The best have mastered this skill in a variety of situations and cultures. They seek out opportunities to drive change. Many have mentors. Ask for feedback and incorporate what you hear. Honing and improving this skill takes deliberate action, practice and reflection. What's your answer? Join the conversation on LinkedIn.
https://medium.com/@thisissomer/the-1-leadership-skill-every-technology-executive-needs-24e23a8cd53e
['Somer Hackley', 'Conley']
2020-11-23 15:24:07.274000+00:00
['Technology', 'Recruiting', 'Careers', 'Influence', 'Leadership']
2,681
For Tech to be equitable, the people must control it
The interconnectedness of wealth, data, and policing has made way for surveillance and monitoring to expand and evolve rapidly. We have seamlessly come into an age characterized by algorithmic management, which Data & Society defines as a "set of technological tools and techniques to remotely manage workforces, relying on data collection and surveillance of workers to enable automated or semi-automated decision-making". Many of these new management strategies are applied to low-wage work, which is disproportionately performed by black and brown people. Looking beyond employment, we can see the same threads of algorithmic management in the treatment of people who are incarcerated. Much of the same technology used to track workers and consumers is also being used by the State to monitor and surveil black and brown incarcerated people. Algorithmic management marks a troublesome shift in how the tech elite and their financial backers exacerbate existing systems of oppression of people of color through a bolstering of the police state. The roots of these innovations may go back further than you think. The first ankle monitors for incarcerated people were developed in the 1960s by two Harvard psychology students, with the purpose of monitoring young people in correctional facilities and encouraging them to show up to appointments on time. Since then, monitoring technology in prisons has evolved in disgusting ways: up until recently, Chicago's Juvenile Probation Department was using ankle monitors that could listen to and record conversations. Similarly, smart wristbands in workplaces help managers track workers' every movement, with the power to discipline. Amazon in particular has used its distribution centers as testing grounds for increasingly invasive tracking technologies.
In 2018, Amazon secured patents for tracking its distribution center workers with wristbands programmed to guide worker movements towards inventory bins, which can give "haptic feedback" via vibrations if the worker is reaching for the wrong bin. It comes as no surprise that the corporation at the forefront of bringing these technologies to the workplace is one notorious for creating high-pressure environments and having punitive discipline policies in its distribution centers, where the majority of workers are Black, Latinx, and Asian. Currently, Amazon warehouse workers' packing rates are closely monitored, and when a worker is moving too slowly, they are automatically marked for warnings or termination. Amazon's other low-wage workers, like those at Whole Foods, are subjected to similar algorithms that grade workers based on shelf stocking and theft reporting and automatically mark certain workers for discipline. Automating the surveillance of people means that people's actions are being judged not by a human who can take into account the complex meaning of what makes a good citizen or a good worker, but by an equation that has limited and biased inputs and strict rules for output. These trends are not just in corporations and prisons. The whole state relies upon expanding incarceration outside of the physical prison by monitoring and surveilling people. Examples of this activity are Palantir's predictive policing tools used by Immigration and Customs Enforcement (ICE); social media monitoring by Geofeedia, used by law enforcement; and risk assessment tools to "score" a person's likelihood of committing a crime (championed by some as an alternative to cash bail). These tools mark a total convergence of technological advancement and state surveillance and violence.
In Carceral Capitalism, Jackie Wang discusses how capitalism is built on keeping people, particularly poor people of color, in "invisible boxes." In addition to physical walls and bars, algorithms and tech create further barriers that dehumanize people of color, extract wealth from them, and reduce them to mere inputs. This creates the world that the finance and tech CEOs (and their purchased politicians) would prefer — one in which they have control of the masses of workers and people of color. To be clear, technology isn't inherently bad by any stretch. You could imagine technological advances through tools and algorithmic management that track workplace safety with an eye to injuries and fatigue, so that managers can craft better safety practices for all their workers, with anonymized data ensuring no fear of reprisal. It would be absolutely thrilling to envision what technological advances could lead us to true abolition and community safety in a way that works for black and brown communities — thousands of abolitionists could bring creativity and excitement to such conversations! However, like so many fights, the answer as to why these are not the innovations that are occurring can be explained by going up the food chain. At the top of the tech sector you see large corporations (whether in tech or in the finance that backs them) who make all the decisions about what innovations to push forward and what to block. This reality of capitalism means that until individual people have control over these tools, they will always tend towards working against us and extracting wealth from black and brown folks. What we need is control of the tech sector in a way that gives voice to workers, incarcerated people, and black and brown communities. Without such a seismic shift, communities will continue to feel pain.
https://medium.com/breaking-down-the-system/for-tech-to-be-equitable-the-people-must-control-it-58e34d1bc242
['Acre', 'Action Center On Race', 'The Economy']
2019-11-21 17:22:06.296000+00:00
['Technology', 'BlackLivesMatter', 'Social Justice', 'Surveillance', 'Workplace']
2,682
Natural Language Processing, or NLP, and What it Does
"Order food!" "Okay! What would you like?" This simple exchange could take up an entire hour's discussion about NLP. Natural Language Processing is an area of Artificial Intelligence and Machine Learning. It is mushrooming in importance. Understanding NLP and how it works is what Sam Wigglesworth specializes in. Her presentation on the Business School of AI's "WeeklyWed" webinar takes us on a wild ride that includes everything from intent to sarcasm, chatbots to WhatsApp. Feast on this for a minute. You'll understand how AI is changing our world. First, we'll want to remember how quick computing is. Count how many times you've turned to a calculator rather than trying to figure out the problem in your head and you'll see what I mean. Now consider language. Once the computer understands what's being said, you'll get an idea of how AI can do in a very short time what it might take a human a very long time to do. It has now become possible to feed in an entire manual on a complex issue, and with NLP the process of finding what you want done becomes a lot simpler. Intent is at the forefront of the discussion. What you want to do comes first. Ordering food will obviously involve a product, a size, and probably a brand or type. Think cheese. What kind? How much, etc. Once intent is defined, then we can get into semantics. This is where the algorithms come in: mathematical formulas trained to find patterns and make decisions. Incorporate them into neural networks that are modeled to work like the human brain and you get to an important part of AI, Machine Learning. NLP is driven in large part by Machine Learning, a subset of AI. "Blow my mind" Machine learning focuses on building apps. These applications within ML learn from data over time. Imagine how much data a language contains. Blow my mind. The idiom "blow my mind" represents just a smidgeon of what would need to be understood. Think of the many linguistic and semantic features of language.
This amounts to a ton of data. This is where NLP learns in weeks what takes a human many years. With advances in NLP come big money. The chatbot market, as an example, is expected to be worth around $1.25 billion in a few years. Chatbot tools themselves are designed to make your life easier, enabling a better customer experience and improving productivity. Do that in the 23 languages spoken by more than half the world and you get an idea of the market for NLP, and why you should position yourself in the technology changing our world. Note: Sam Wigglesworth is the founder of The Language School and Girls and Boys In Tech from Oxford and is the NLP instructor at Business School of AI, the brainchild of Sudha Jamthe. The Author: Henry Mulak is a journalist and teacher in Silicon Valley covering the technology sector, specializing in Artificial Intelligence and Machine Learning.
https://medium.com/businessschoolofai/natural-language-processing-or-nlp-and-what-it-does-de2df7c69b55
['Henry Mulak']
2021-09-06 03:59:51.954000+00:00
['Natural Language Processi', 'NLP', 'Artificial Intelligence', 'Machine Learning', 'Future Technology']
2,683
Karan Bharadwaj, CTO, XinFin spoke at “Blockchain and Business” Event organized by Nanyang Blockchain Association on 27th March 2018
Here are a few clicks from the event: Karan Bharadwaj, CTO, XinFin (middle) at the event. Karan sharing his views and experience on blockchain technology at the event. NTU students and staff members listening to Karan at the event. Karan presenting the XDC Dev Environment to NTU students and staff members. Karan explaining blockchain technology as a panelist at the event. An NTU student asking Karan a question at the event. Karan explaining how the XDC Dev Environment will help students become enterprise-ready. Karan being felicitated by NTU at the event.
https://medium.com/xinfin/karan-bharadwaj-cto-xinfin-spoke-at-blockchain-and-businesses-event-organized-by-nanyang-1931d8e4888a
['Xinfin Xdc Hybrid Blockchain Network']
2018-03-30 13:08:53.890000+00:00
['Development', 'Technology', 'Blockchain Technology', 'Blockchain', 'Environment']
2,684
BarterDEX Exchange integrated LYS Token
Shine the light with Lightyears Token (LYS) As KOMODO is the world's first with its atomic swap, they now enable ERC20 tokens to be traded alongside other digital assets in the crypto world. "DEXs will become mainstream in next 2–3 years. Many options available, pls check @BarterDex that uses on chain atomic swaps & beta gui is live now: http://goo.gl/heUZF7 some govs will probably issue their own fiat pegs in near future: problem solved & they double as gateways." With more exchanges coming out every single day, BarterDEX has its own way of standing out from the rest. Introducing stress tests will ensure the credibility of its atomic swap platform. BarterDEX is said to have 5000+ assets ready for the swap. An introduction video of the exchange is available throughout the community and on its Twitter page. We are hoping there will be more development together with the Lightyears team. "Thanks KOMODO for having us on their atomic swap platform."
https://medium.com/lightyears/barterdex-exchange-integrated-lys-token-6462658b082f
[]
2018-04-22 05:03:08.643000+00:00
['Fintech', 'Technology', 'Blockchain', 'Ethereum', 'Bitcoin']
2,685
How to Get Into Bug Bounties
Your guide to hacking and earning on bug bounty programs Bug bounties are a great way to gain experience in cybersecurity and earn some extra bucks. But they have also been getting a lot more competitive recently. As more people discover bug bounties and get involved, it has become increasingly difficult for beginners to get started. Here are some tips that will make it easier for you to find your first bug. How to Pick a Program There are two places where bug bounty programs can be found. One of them is on bug bounty platforms. These are platforms on which many different companies host their programs, and hackers are awarded points and money for their results. Some of the largest platforms are HackerOne and Bugcrowd. Another one is the organization's own website. A lot of companies host their own bug bounty programs instead of using a bug bounty platform. Companies like Google, Facebook, and Medium take this approach. Public versus private On bug bounty platforms, there is a distinction between "public" and "private" programs. Public programs are programs that are open to the public: anyone can hack and submit bugs to the program, as long as they abide by the laws and the bug bounty contract. On the other hand, private programs are only open to invited hackers. Only a few select hackers are able to hack the company and submit bugs to it. Picking a program When you are first starting out, it is important to pick a program that you can succeed in from the very start. Bug hunting depends a lot on experience, so it's a good idea to pick a program that is passed over by more experienced bug hunters to avoid competition. There are two ways of finding these underpopulated programs: look for unpaid programs or go for programs with big scopes. When you haven't developed an intuition for bug hunting, you often have to rely on "low-hanging fruit" and well-known techniques for finding bugs.
This means that many other hackers would be able to find the same bugs much faster than you. This is why it's a good idea to go for unpaid programs first. Unpaid programs are often ignored by experienced bug hunters since they don't pay money, but they still earn you points and recognition! And that recognition might be just what you need to get an invite to a private, paid program. A program with a large scope is also a good place to start. Scope refers to the set of target applications and webpages. When a program has a large scope, you can often find obscure applications in the scope that are overlooked by other hackers. These applications are a lot easier to find bugs in. In addition, prioritize programs with fast response times. When you first start out, you are going to make a lot of mistakes. You might misjudge the severity of a bug, write an unclear report, or make technical mistakes in the report. Rapid feedback from program managers will help you improve and turn you into a competent hacker faster. Getting private invites Hurrah! You've got an invite! Getting private invites on bug bounty platforms is not difficult once you've found a couple of bugs. Different platforms have different algorithms to determine who gets the invites, but here are some general guidelines to stick to: Submit bugs to public programs first. In order to get private invites, you often need "points" or "reputation" on a platform. The only way to gain these is to submit a few valid bugs to public programs first. Don't spam. Spamming and submitting non-issues often causes a decrease in points or reputation. Most bug bounty platforms limit private invites to hackers with points above a certain threshold. Be polite and courteous. Being rude or abusive to program managers will probably get you banned from the program as well as prevent you from getting private invites.
On some bug bounty platforms, like HackerOne, you can also get private invites by completing tutorials or solving CTF challenges. Private programs are a lot less crowded than public ones, and it becomes much easier to find bugs once you start hacking on private programs. How to Find your First Bug Now that you've decided on the program you are going to work on, it's time to find some bugs! All bug hunters have types of vulnerabilities that they specialize in. Usually, they start out looking for everything and eventually settle into something that they are particularly good at. So before you find your niche, it's a good idea to try to look for everything. The bug classes that are the easiest to find are: Cross-Site Scripting (XSS), Insecure Direct Object Reference (IDOR), Cross-Site Request Forgery (CSRF), Race Conditions, Information Disclosure. There are, of course, more classes of vulnerabilities that you could look for, but these are the simplest to get started with. Try looking for these bugs and see if any of them comes naturally to you, or if you prefer looking for some of them over the others. And before you know it, you've become an XSS specialist. You can find tutorials on how to find these bugs in some of my previous posts: Note: Do not look for all of the bug classes all at once! Focus on one at a time only, so you can get familiar with the methodology of looking for a certain type of bug. How to Write your First Report Have you spotted your first bug? Great, it's time to write your first bug report. When I first started writing reports, I would spend a lot of time answering questions from the program's engineers because I was not clear on something, or I did not provide enough details for them to reproduce the bug. Be sure to put some effort into writing a bug report because this will save you a lot of time down the road and can potentially increase your bounty amount.
Here's a guide to writing a good bug report: A note on Dupes and Informatives It's very normal to get a ton of duplicates and informatives when you first start out, so don't get discouraged by them! Remember that dupes and informatives mean one thing: you were technically correct! It's just that someone found the bug before you, or the company is accepting the risk at this time. They by no means diminish the quality of your work. Even if you don't earn money or reputation, you still gained experience from the learning opportunity. Just keep going and you will develop your own unique methodology and start finding unique, valuable bugs! Lastly, a few words from experience It's difficult. It really is. When I first started hunting for bugs, I would go weeks or even months without finding a bug. And when I did find one, it would be something trivial and low severity. The key to getting better at finding vulnerabilities is practice. If you are willing to put in the time and effort, your bug hunting skills will improve and you will soon see yourself on leaderboards and private invite lists! If you ever get frustrated during this process, remember that everything will get easier after you find your first bug and get your first private invite. Good luck! And reach out to the community if you need any help.
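As a hedged illustration of the "low-hanging fruit" hunting the article recommends, here is a minimal sketch of one IDOR check: fetch the same object as its owner and as another user, and flag it if the second user can read it. The endpoint, users, and data are all hypothetical, and anything like this should only ever be run against programs whose policy explicitly permits it.

```python
# Minimal IDOR probe sketch (hypothetical endpoints; for authorized testing only).
# `fetch(user, object_id)` should return (status_code, body) for a request like
# GET /api/invoices/<object_id> made with <user>'s session.

def find_idor_candidates(fetch, object_ids, owner, attacker):
    """Return object IDs the attacker can read even though the owner owns them."""
    candidates = []
    for oid in object_ids:
        owner_status, owner_body = fetch(owner, oid)
        if owner_status != 200:
            continue  # object doesn't exist or owner can't see it; skip
        attacker_status, attacker_body = fetch(attacker, oid)
        # If the attacker gets the same private object back, flag it for manual review.
        if attacker_status == 200 and attacker_body == owner_body:
            candidates.append(oid)
    return candidates

# Toy in-memory "API" so the sketch runs without touching a real target.
_DB = {1: ("alice", "invoice #1"), 2: ("alice", "invoice #2"), 3: ("bob", "invoice #3")}

def fake_fetch(user, oid):
    if oid not in _DB:
        return 404, ""
    owner, body = _DB[oid]
    # Broken access control: object 2 is served to anyone (the planted IDOR bug).
    if user == owner or oid == 2:
        return 200, body
    return 403, ""

print(find_idor_candidates(fake_fetch, [1, 2, 3], owner="alice", attacker="bob"))  # [2]
```

In a real hunt the interesting work is in building `fetch` (two authenticated sessions) and in manually triaging each flagged ID, since identical responses can also be legitimately public objects.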
https://medium.com/swlh/how-to-get-into-bug-bounties-383266799832
['Vickie Li']
2020-02-05 13:01:02.386000+00:00
['Hacking', 'Cybersecurity', 'Bug Bounty', 'Programming', 'Technology']
2,686
Owlet Baby Care Empowers More Parents with Digital-Age Tools
By Lior Susan Anyone who has had a baby will tell you: Hyper-vigilance becomes your default state. At home, you may stand over baby's bassinet — literally for hours — gently placing a hand on her or his chest and watching the breaths. You may wake up multiple times a night like clockwork, to check that baby remains calmly asleep in her crib. In 2015, we founded Eclipse Ventures, and one of our first investments was in Owlet Baby Care, a startup headed by Kurt Workman, an engineer and entrepreneur who had recently become a father. With no office yet, we met Kurt and his fellow co-founders — Jordan Monroe and Zack Bomsta — in a San Francisco cafe to size up the founders and their product: a monitoring sock for infants designed to measure heart rate and blood oxygen levels. The product was in beta at the time. It immediately dawned on us how valuable such an innovative product could be. Beyond our own experiences as parents, we knew that millions of moms and dads around the country had to settle for baby monitors in dated form factors like walkie-talkies and low-res crib cameras. On the horizon, digital technologies and consumer trends were emerging in a way that would bring the quality of baby monitors where it needed to be. Advances in sensor technology, connectivity, more powerful computing at the edge of networks, and the insatiable appetite for personal biometric data — all of this would soon converge. Today, more than 1.5 million caregivers have put their trust in the Owlet brand, using it to monitor over 850,000 babies. Ongoing innovation — along with an expansion into complementary products such as an HD webcam monitor with night vision and two-way audio, as well as a plan to develop a pre-natal belly band for expectant mothers — has allowed Owlet to amass a large and growing set of baby-health data: more than 850,000 babies monitored and over 4.2 trillion heartbeats tracked. Those numbers feel like the height measurements that parents (still?)
notch on a door frame. And the growth of this company, in particular, feels especially gratifying. We founded Eclipse on the belief that more investment was needed in companies using hardware to gather inputs from the physical world, digitize those inputs, and leverage the output data. Naturally, that’s how the idea for Owlet was first hatched. In founding the company, Kurt was motivated by concerns that his child would inherit a genetic heart condition that nearly killed his wife when she was a baby. And if I’m honest, I wasn’t the most impartial investor when we first met him: My wife was fast approaching the delivery date for our first daughter at the time. Fortunately, as passionate as Kurt was about the product, we were (and are) equally passionate about helping build healthy companies that are leading digital transformation. And over the next five years, as a team, we fortified that founder’s passion with our experiences as former operators — helping Owlet prioritize next hires and make hard business decisions, while facilitating industry partnerships to optimize the company’s manufacturing and supply chain. For instance, by working closely with the founding team during an intense period of scaling, we identified a need for enhanced operational rigor and corporate accountability, in order for the company to reach its next inflection point. We met Mike Abbott, a business leader who formerly served as chief financial officer and head of operations for major consumer brands like Specialized and Burton Snowboards, and asked him to join. Mike came in and injected these values into the business, originally as CFO, subsequently as COO, and now as president. Looking back, we find moments like these to be among the most gratifying for us as company builders. We all know how critical the first years of life are for a child’s development. These past five years were just as crucial for Owlet. Now, all we can say is: “My, how you’ve grown.”
https://medium.com/eclipse-ventures/owlet-baby-care-empowers-more-parents-with-digital-age-tools-337614f3ae73
['Eclipse Ventures']
2021-02-16 14:41:39.302000+00:00
['Technology', 'Parenting', 'Startup', 'Venture Capital', 'Baby']
2,687
The issues behind at-home genetic tests
About 26 million individuals in the world have already taken an at-home genetic test. These tests are very simple and appealing: for less than $100, anyone can receive a small colorful kit at home, spit, send it back, and get a nice detailed report of their genealogy and disease risks. Hard to refuse, isn't it? However, what happens after the results are sent to individuals? The genetic data of each person taking the test are stored in genetic databases owned by a few multinational companies. According to the MIT Technology Review, if the growth rate of this industry continues, these companies could own the genetic data of 100 million people within 2 years. A major concern for individual rights The first concerns are about individual rights. DNA carries very private and sensitive information, not only because it is inherent to the individual, but also because it reveals insights about people, about their present and potential future health status. This brings significant scientific potential and richness, as well as real risk. In 2018, the pharmaceutical giant GlaxoSmithKline paid $300 million to 23andMe to use the genetic data collected by the latter for research purposes. This collaboration was justified by the necessity of collecting genetic data to enable scientific progress, but it raised serious questions about the privacy of 23andMe's 5 million users. A major challenge of this new technology thus plays out at the level of an individual's private life. The right to privacy is a fundamental right, protected at the state and international level, and DNA is considered among the most private information of all. There is therefore a clear and significant risk if an individual's genetic information is disclosed to the public. "If people are concerned about their social security numbers being stolen, they should be concerned about their genetic information being misused. (…) When information moves from one place to another, there's always a chance for it to be intercepted by unintended third parties," says Peter Pitts, president of the NGO Center for Medicine in the Public Interest. Who are these third parties that Peter Pitts mentions? The individual is not the only one with interests in his own genetic information. Other actors, such as employers, insurance companies, banks, and immigration agencies, have an interest in possessing a client's genetic information. Genetic testing and "third parties" DNA is a treasure with real commercial value, and it is therefore sought after by many "third parties". The economic benefits of genetic testing There are several economic, health, and safety reasons that may justify the use of genetic testing by third parties. In the field of employment, employers want to maximize the productivity of their employees and minimize the risk of illness, or even absenteeism. In times of recession, employers could simply determine whom to keep in the company by looking at who is most likely to be a liability in the future. Administering such tests could also have major benefits in terms of occupational safety and health. For instance, in jobs where the employee is in a position to harm others (e.g., an airline pilot), genetic testing to verify an employee's risk of mental illness would be very useful before hiring him or her. Similarly, insurance companies have a major interest in administering genetic tests to potential clients. Access to one's genetic information would be a very efficient way to assess, without error or fraud, the risk a potential customer presents. Open access to the genetic information of millions of individuals would also be a great asset for the field of research, which faces restrictions on access to data.
The use of genetic information by third parties creates an environment of denial of equal opportunity, based on factors that can neither be controlled nor changed: genes. These applications present a high risk of data misuse. In addition to the violation of fundamental individual rights, such as the right to privacy, this type of use of personal data can cause a widespread phenomenon of genetic discrimination. Employers, insurance companies, banks, or immigration agencies granting loans or visas could take into account factors that the individual does not know about himself, in violation of the right to non-discrimination. A person with high-risk genes (such as the BRCA breast cancer genes, for example), although those genes may be dormant, may be denied employment, a loan, or health insurance, or be charged a higher price on the basis of their genetic information. Today, insurance companies and banks already take into account the risks presented by a customer when determining a price. However, all the factors taken into account are known by the individual (age, sex…) or determined by himself (lifestyle, whether he smokes…). Taking genetic information into account would be a further violation of individual rights, basing discrimination on factors of which the individual is unaware. The serious risk of this new type of discrimination is that it could create a new social class based on genetic information. Historical misuse of genetic information Although these situations may seem completely hypothetical and exaggerated, they have already happened in history. The use of genetic data is closely linked to the doctrine of eugenics, based on the belief that science must be pushed further to improve humans and promote genetic superiority. The most obvious example of the extensive use of the doctrine of eugenics is Nazi Germany, where genetic information (limited to the means of the time, of course) was used for a genocidal purpose: to eliminate the "genetically inferior" (the disabled, the ill…). This example may seem far-fetched, some might say. However, the eugenicist movement also has very important roots in the United States. In the 1920s, during large waves of immigration, gene selection was very popular. Large state fairs were set up, highlighting "Grade A individuals" who could win prizes. At that time, many sterilization laws were also introduced in the United States against those considered "genetically defective". Thirty-two American states introduced forced sterilization laws based on genetic selection between 1907 and 1937. It was a decision of the United States Supreme Court that instituted genetic selection at the federal level. Through the case of Buck v. Bell (1927), the Supreme Court authorized state-sponsored sterilization, justified by the eugenicist doctrine. It was recommended for "unfit people", including those considered "intellectually inferior". It was during this case that Justice Holmes pronounced his famous statement, "Three generations of imbeciles are enough", to justify sterilization laws based on genetic information. Nazi lawyers during the Nuremberg trials also referred to Buck v. Bell as a precedent for legal sterilization and as an example of "race protection laws in other countries". The case was never explicitly overturned or invalidated. What would happen today if this decision, by the highest court in the country, were combined with rapid technological advances and the massive collection of genetic data from millions of users? These dangers to the fundamental freedoms of individuals, the risk of a new kind of generalized discrimination, and the abuses that have already taken place show the need to regulate genetic testing technologies.
https://medium.com/@clementinemariani1/the-issues-behind-at-home-genetic-tests-af920230ea4b
['Clémentine Mariani']
2020-11-09 17:58:38.298000+00:00
['Law', 'Discrimination', 'Inequality', 'Genetics', 'Technology']
2,688
BitClave Weekly Update — May 28, 2018
Development Last week we finished the basic functionality for the REQUEST and OFFER entities, allowing us to create and manually match these entities. This skeleton will allow us to test system flows for "good" recommendations by a search engine and "bad/malicious" recommendations by a search engine. We have also implemented the service that will pay CAT Tokens as a reward on behalf of businesses. This week we'll continue to work on the APIs involved in interactions between REQUEST, OFFER, and the Search Engine. Marketing Last week we were following up with the people we met during Blockchain Week in New York City. We have already started scouting similar big events in the near future. Last week we were also closely following the GDPR updates and their implications around the world, mainly in the blockchain space. We have more content coming this week, and we are also discussing with our legal team the implications for BitClave and our products. Here's an article we recently wrote on GDPR with the headline "What the heck is GDPR, and why am I getting all these emails?". We also wrote an article on how we are leading the way towards innovative decentralized products with the use of blockchain technology. Read here. We continue to hire more people to join our Marketing Team. You can check the latest job postings here. https://angel.co/bitclave/ Events/meetings During the week we were searching for new opportunities and prepared the events for June. On May 28–29 our Events Manager Stanislav Liutenko is attending Blockshow Europe in Berlin. Please let us know if you would like to meet him there! Our Head of Growth Pratik Gandhi visited the HQ in San Jose for the first time last week. Here's a picture he took with our Head of Blockchain Mark Shwartzman. Want to learn more? Don't miss our special news updates. Sign up for BitClave here.
Join our Telegram Channel: https://t.me/BitClaveCommunity Github: https://github.com/bitclave/ Official Twitter: https://twitter.com/bitclave Official Facebook: https://fb.me/bitclave
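The development update above describes a skeleton for creating and manually matching REQUEST and OFFER entities with a token reward paid on a match. A hypothetical sketch of what such a skeleton might look like is below; this is not BitClave's actual code, data model, or API, just an illustration of the flow the update names.

```python
# Hypothetical REQUEST/OFFER matching skeleton (illustrative only; not
# BitClave's real implementation or its CAT token accounting).
from dataclasses import dataclass, field

@dataclass
class Request:
    id: int
    wants: dict  # attributes the user is willing to share, e.g. {"age": 30}

@dataclass
class Offer:
    id: int
    rules: dict          # attribute constraints the business is targeting
    reward_cat: float    # CAT tokens paid when the offer is matched
    matched: list = field(default_factory=list)

def matches(request: Request, offer: Offer) -> bool:
    """A request matches an offer when every rule attribute is satisfied."""
    return all(request.wants.get(k) == v for k, v in offer.rules.items())

def manual_match(request: Request, offer: Offer) -> float:
    """Manually link a request to an offer and return the reward owed."""
    if not matches(request, offer):
        raise ValueError("request does not satisfy the offer's rules")
    offer.matched.append(request.id)
    return offer.reward_cat

req = Request(id=1, wants={"country": "US", "age": 30})
offer = Offer(id=7, rules={"country": "US"}, reward_cat=10.0)
print(manual_match(req, offer))  # 10.0
```

A skeleton like this makes it easy to test the "good" versus "bad/malicious" recommendation flows mentioned above: a malicious search engine would propose pairs for which `matches` is false.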
https://medium.com/bitclave/bitclave-weekly-update-may-28-2018-7832a31dcebb
[]
2018-08-04 11:43:39.907000+00:00
['Blockchain', 'San Jose', 'Technology', 'Update', 'Decentralization']
2,689
Trending Comments — The Comments Section is a Gold Mine
The comments section on practically all social media / publication platforms is a pure goldmine. I can't remember the last time that I've seen a highly viewed video (e.g., YouTube, TikTok) or post (e.g., Facebook, Twitter) without running to the comments section. Now, what makes the comments section so powerful? Pure human ingenuity. The combination of allowing people to post their own comments and "like / heart / clap" other people's comments creates what we can call "trending comments." We all know what a "trending post / video" is on Facebook or YouTube, but we haven't focused much on top comments, or "trending comments." They are typically found at the top of the comments section with a large number of likes. What do these comments typically contain? They can range from a super funny comment to a sarcastic comment to a thoughtful, supportive comment, all of which hundreds or thousands of people can relate to, which is translated into the number of likes. When I watch a video, I go to the comments section usually for one of two reasons: 1. To compare my thoughts against others If I watched a video and I had a question or remark on something that was said, I usually go straight to the comments section to see if other people had the same thought. There is a sense of validation when you see a comment with many likes on a topic that you had thought about as well. 2. To see other people's thoughts Many times I go to the comments section to see what other people had to say. I know that when I do this, I am in store for everything from funny comments to thoughtful comments to hate comments. Any viral or highly viewed video is going to have hundreds to thousands of comments, but the "trending comments" are the ones I look at, not just because they are at the top, but because these are the comments that a large number of people support. In a sense, it is a great filter system to see where the real "value" lies. Now what types of comments are usually "trending comments"?
Well, they can be numerous different things, but here are a few that I wanted to highlight: A trend within the comments Yes, there are trends everywhere, even within the comments section. One person may post an original comment and others may take their own spin on it, and before long, there are hundreds of comments all following a similar trend that everyone is finding relatable or funny. A prime example of this would be American rapper Cardi B’s Bodak Yellow music video. See below an excerpt of the top comments as of 05/17/20 on this video. The trend within the comments section started as a result of many people feeling the song empowered them to break the rules and do whatever they wanted. As a result, the public went to the extreme and started a trend within the comments to put in scenarios of things that one would not traditionally do or is not possible to do to show the “empowerment” generated from the song. If you scan through the comments section of this video, there are hundreds of comments in a similar format ranging from a few likes to thousands. Once a trend starts in the comments section, it can definitely gain traction as seen in this video. 2. Quotes In many instances, people simply comment a quote or line from the video that stood out to them. Typically, the quote is relatively short (i.e., a sentence or two) pulled from an entire video or article, so if these comments with a short quote become “trending comments,” they must have a pretty relatable or funny quote. There are hundreds of examples of this, but see one below from the video recording of Dartmouth’s 2018 Commencement Address by Mindy Kaling. The video itself is about 17 minutes long, and this quote is only 7 words. It amassed around 1.1K likes, and this clearly shows that a quote that many others found funny or relatable can definitely gain a lot of likes in the comments section. 3. 
Blatant Callout Referencing the Comments Section When something occurs in a video that is highly shocking, controversial, or essentially anything that is going to lead people to leave comments, many people are going to "run to the comments section." As a result, many times "trending comments" are simply comments that read something along the lines of "I ran to the comments section as soon as I heard …" or "Like if you came to the comments because …" 4. Pure support In many instances, top comments are simply those that show a lot of support for the writer or producer of the content. This is traditionally more prevalent for videos / articles that have an emotional element to them. See to the left an example of the top comments on a TikTok video showing a child who just won his battle against cancer. Clearly, in this instance, all of the top comments are related to the excitement and joy people feel when watching this video, and as a result, these comments have amassed thousands of likes. The comments section can be funny, sarcastic, and relatable, but at times, very negative and unsupportive as well. We've definitely seen instances where people leave negative or threatening comments on social media, but for the purpose of this article, I wanted to focus on why the comments section can be a gold mine. Now, why do I think the comments section is a gold mine? The comments section has the potential to be web scraped and leveraged for data once a scalable tool is created to interpret and analyze comments to generate useful insights. The comments section is useful because (1) it allows the public to create unique comments / content related to videos, and (2) it polls the public to uncover the comments / content that are the most relatable or valuable, as measured by the number of likes. Why is this useful? Well, learning about consumer behavior in a data-driven manner is an extremely large and growing market opportunity.
Companies spend millions of dollars to learn about consumer interests and opinions on different topics. This is typically gathered in more traditional ways (e.g., consumer surveys), but I think there is a potential to develop a tool to analyze unstructured text data, like the comments section, to derive consumer insights. The comments section is essentially a free, crowdsourced method to identify top and highly agreed upon content, and this must have some value / potential attributed to it. If not, I guess the comments section is still a blast to read.
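The "gold mine" idea above, weighting comment text by like counts to surface what resonates, can be sketched in a few lines. The comments below are made up, and a real pipeline would have to pull comments from a platform's API subject to its terms of service.

```python
# Toy like-weighted term analysis over a comment section (data is made up;
# a real pipeline would fetch comments via a platform API, terms permitting).
import re
from collections import Counter

comments = [
    {"text": "that quote was so relatable", "likes": 1100},
    {"text": "ran to the comments for the quote", "likes": 640},
    {"text": "first", "likes": 2},
]

STOPWORDS = {"the", "to", "for", "was", "so", "that"}

def weighted_terms(comments):
    """Count each term once per comment, weighted by the comment's likes."""
    counts = Counter()
    for c in comments:
        for term in set(re.findall(r"[a-z']+", c["text"].lower())) - STOPWORDS:
            counts[term] += c["likes"]
    return counts

top = weighted_terms(comments).most_common(2)
print(top)  # [('quote', 1740), ('relatable', 1100)]
```

Weighting by likes is exactly the "crowdsourced filter" the article describes: a low-like comment barely moves the totals, while a trending comment dominates them.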
https://medium.com/digital-vault/trending-comments-the-gold-mine-of-the-comments-section-on-social-media-bf5bb9217d83
['Dhruv Patel']
2020-05-21 14:35:52.295000+00:00
['Innovation', 'Life', 'Consumer Behavior', 'Technology', 'Social Media']
2,690
“Tesla trades at more than 1,200 times trailing earnings, while established automotive industry peers go for eight times or less.” — Charley Grant, Tesla Is Watching Its Stock Price Too
Tony Yiu · Dec 9, 2020 Regardless of whether you think Tesla is a bubble or not, selling shares to raise cash is a wise move. Good capital allocators know to buy low and sell high. The jury is still out on the buy-low part, but Tesla's $5 billion stock sale is definitely a case of selling high.
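The gap in the headline quote is just a trailing price-to-earnings comparison. With made-up round numbers (not Tesla's or any peer's actual financials), the arithmetic looks like this:

```python
# Trailing P/E = share price / trailing twelve-month earnings per share.
# All numbers below are illustrative round figures, not actual financials.
def trailing_pe(price: float, ttm_eps: float) -> float:
    return price / ttm_eps

growth_stock = trailing_pe(price=600.0, ttm_eps=0.5)  # 1200x, like the quote's Tesla figure
legacy_auto = trailing_pe(price=40.0, ttm_eps=5.0)    # 8x, like the quote's peers

print(growth_stock, legacy_auto)   # 1200.0 8.0
print(growth_stock / legacy_auto)  # 150.0: 150x more paid per dollar of earnings
```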
https://medium.com/alpha-beta-blog/tesla-trades-at-more-than-1-200-times-trailing-earnings-while-established-automotive-industry-58a76efe990b
['Tony Yiu']
2020-12-17 03:27:36.626000+00:00
['Investing', 'Tesla', 'Stocks', 'Business', 'Technology']
2,691
Rethinking the Internet: A New User Experience (Part 3)
When the Internet was saddled with surveillance advertising as its default business model, the consumer stopped being the central focus of the US Consumer Internet. Advertisers became the customers, and the focus therefore shifted to creating the best experience for them. In the process, platforms began a race to amass users, to dominate their time, to drive ever higher engagement, and to collect more and more data so that they could predict, and then influence/manipulate, user behavior. In pursuit of this data, the boundaries between self and market have been almost entirely eliminated, the Internet has become pervasive (connected everything), and nearly all friction will soon be removed (voice as the new UI). This allows these platforms to be ever present in the background, nudging consumers in whatever direction their customers (advertisers) desire. This business model has resulted in a terrible user experience, one in which the best products don't win, but rather the ones that the most people use. User-generated data is used to create products that are designed not for the consumer's own benefit, but for the benefit of third parties. With a new business model and improved infrastructure, might it be possible to improve the US Consumer Internet experience and restore the consumer's central role in it? An improved user experience would restore consumer control over time, attention, and data. The current viability of each is highlighted below. How Can Consumers Get Their Time Back? The time spent online used to be somewhat limited by pay-per-use or pay-by-time business models, but unlimited consumption plans have prevailed, and now there is essentially no limit to how much time we spend online. This is important because our time is not unlimited; our time (and attention) are zero-sum, meaning the more time we spend engaged in one activity, the less time we spend doing something else.
comScore's 2017 Cross-Platform Future in Focus report found that the average person spends almost three hours a day on their phone.¹⁴ That's three hours a day they could be engaging in other, potentially more productive, activities. Importantly, consumers feel this pain point acutely. According to a recent Pew study, 39% of US Internet users ages 18–29 reported being online "almost constantly", followed by 36% for ages 30–49.¹ They don't seem happy about it either, as the percentage of US adults trying to limit the time they spend on their phones increased from 47% in 2017 to 63% in 2018.¹ Consumers clearly want to regain control of their time. They don't need education to understand its value, the way they do with data, and both enterprises and parents are willing to pay to ensure that employees and students use their time in more focused and productive ways. There are several companies working to build "mindful operating systems," "time well spent" launchers and applications, or smart scheduling apps that allow users to remain focused, to remain present in their intention when engaging with technology, and to reach a flow state while in workflows. Siempo, Flipd, Thrive Away, and Mercury OS all attempt to let users focus, eliminating endless notifications, pop-ups, and distractions, and to redirect the consumer toward their original goal when logging on. While the consumer clearly feels this pain point and is searching for solutions, many of these apps are more akin to features than businesses. That makes it tough to compete with platforms like Apple, which can simply add some of these elements to their own product suites (e.g., Screen Time). To overcome this challenge, most of these start-ups are attempting to create a sense of community in their products in addition to adding more comprehensive health and wellness content.
Still, many of these applications struggle to devise a business model that isn't subject to the same problems that they are trying to solve. Business model innovation will be key to the success of these applications, since the end goal is actually disengagement. How Can Consumers Re-Focus Their Attention? Advertisers are the primary customers of US Internet platforms. This dynamic has led these platforms to become obsessed with driving "engagement." These platforms now design products intended to keep users locked in endless loops and infinite scrolls, encouraging users to "binge" and promoting outrage to keep users engaged, and when users falter, they are barraged with notifications in order to re-direct their attention back to their screens. The average person checks their phone 150 times per day.² The consumer experience suffers as a result. Luckily, solutions exist, and because consumers experience this pain point acutely, they are incentivized to seek those solutions out in the near term. Four approaches to addressing attention on the Internet are highlighted below: Ad Blockers: Users are tired of annoying, distracting ads. According to e-Marketer, ~28% of US Internet users (~80M people) now block ads, up from 21% in 2016.¹⁵ AdBlock Plus and Disconnect have over 100M¹¹ and 50M¹² users, respectively. The problem with ad blockers is that they have historically had a negative impact on publishers, which lose the opportunity to monetize their content, forcing many to enforce paywalls. Consumers are resistant to paywalls, so while the consumer regains control over their attention and enjoys an ad-free web experience, the publishers often suffer. Some companies, such as Scroll, take a different approach and are attempting to launch a paid ad blocker which allows for a fast, ad-free version of sites across a content partner network. Importantly, since these models are paid, their services could allow publishers to earn more than they would via an ad-supported model.
However, the economics of this model are still to be determined (consumer willingness to pay versus revenue generation for publishers relative to advertising.) Explicit Compensation-Attention Exchange: This model converts the existing online advertising model into an explicit exchange of compensation for attention. Proponents of this model argue that it reduces click fraud, since the viewers of ads have verified identities, increasing the value of each ad. As each ad becomes more effective, advertisers can theoretically reduce ad load, reducing costs while improving the user experience. However, while attention is easier to monetize than data (see below), marketplaces built around both struggle to achieve minimum efficient scale. 
Advertisers want to reach millions of users, meaning start-ups targeting this space will need to partner with brands that already benefit from large user bases as opposed to trying to on-board users from scratch. Consumer feedback also indicates that many find this model to be dystopic and Ready Player One-esque. Ultimately, if consumers aren’t comfortable confronting the current exchange of attention for compensation, these products won’t succeed. One important difference relative to the current paradigm is that these solutions allow the consumer to control ad load, which is also non-interruptive. Still, these applications will struggle to gain traction if the consumer cannot appreciate this nuance. 
Compensation for Content Contributions: In this model, consumers monetize the content they generate and contribute to online platforms and social networks. This allows the benefits of engagement with a platform to accrue (at least partially) back to the consumer. As expressed by Nick Sullivan, CEO and founder of ChangeCoin (acquired by Airbnb), “We’ve lazily accepted ads are the best way to monetize content online.… What we’ve been missing is a way for people to express their appreciation and vote with their dollars for the things that they find good — in a very low-friction way.”⁴ Devising such a business model was challenging before technologies that enabled micropayments. Three social network variants that leverage micropayments to compensate content creators are Steemit (launched in 2016), Voice (in beta), and Coil (in beta.) The key innovation with Coil is that it leverages the Interledger Protocol (ILP) to pay out at a fixed rate per second of attention, streamed to the creator instantly. Thus far, these networks have struggled to gain users as they have not been able to create enough value to incentivize users to switch from existing networks. As of the end of 2018, Steemit reported ~500,000 active users (relative to Facebook’s 2B+.) 
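At its core, the pay-per-second streaming model is simple accrual. A toy sketch is below; the rate and function names are illustrative assumptions, not the actual Web Monetization / ILP API, and integer micro-dollars are used to avoid floating-point issues:

```python
# Toy sketch of streaming micropayments in the spirit of Coil / Interledger:
# a fixed rate accrues to the creator for every second of attention.
# The rate and names here are illustrative, not the real API.

RATE_MICRODOLLARS_PER_SECOND = 100  # assumption: 100 micro-$/s, about $0.36/hour

def accrued_microdollars(seconds_viewed, rate=RATE_MICRODOLLARS_PER_SECOND):
    """Amount streamed to the creator after `seconds_viewed` of attention."""
    return rate * seconds_viewed

# Ten minutes of reading at this rate streams $0.06 to the creator.
assert accrued_microdollars(600) == 60_000
```

The appeal of the model is exactly this linearity: compensation tracks attention directly, with no per-transaction friction for the reader.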
Solutions Designed to Improve the Quality of Time Spent: There are many other companies working on creating a more balanced and trustworthy Internet, including companies that show the counter side to every news story, companies working on proving the provenance of digital media utilizing blockchain technology, companies focused on facilitating constructive online debate (TruStory), and companies focused on detecting political bots and fake news (RoBhat Labs, a Dorm Room Fund community company.) How Do We Value Data? Amid a growing number of data privacy violations and breaches, data protection, ownership, and monetization have recently become a part of the public discourse. Governments have publicly acknowledged that “data has value, and it belongs to you” and Senator John Kennedy introduced a bill called the “Own Your Data Act.” This sentiment is echoed in the private sector, with a slew of start-ups launching to create data marketplaces in which users own, control, and monetize their data directly. The truth is that valuing data is tough. It’s nuanced. Not all data is equally valuable and much of its value depends on the context in which it is used, which may be unknowable. It’s complex. Much data is interpersonal, making ownership complicated since there are multiple parties involved in these data points (e.g. I am my mother’s daughter.) 
It’s a relatively new resource and we aren’t exactly sure how to handle it. It’s also plentiful (not scarce) and non-rivalrous (consumption by one doesn’t prevent consumption by another and one’s own data is not particularly valuable in and of itself.) It’s sensitive, personal, and tied to digital identity. Most importantly, we don’t understand its worth. Price is woefully insufficient in encapsulating its value and therefore markets are a poor fit for its exchange. The recent Netflix documentary, The Great Hack, illustrates the difficulty of valuing data poignantly. What is the value of our data? Well… what is the value of democracy? Direct Data Monetization via Marketplace There are many start-ups that aim to 1.) restore consumer control over data and 2.) allow consumers to directly monetize their data via a data marketplace. The first is now feasible given blockchain technology and data portability regulation. The second has proven very challenging. To start, valuing user data is nearly impossible. Several approaches are outlined below. Senators Mark Warner and Josh Hawley have introduced a bill requiring large tech companies to publicly put a price on their users’ data. Amazon has apparently decided that unlimited access to a Prime member’s browsing activity is worth a $10 coupon for those spending at least $50.⁵ Amazon also offered consumers a $25 gift card in exchange for an in-person, 3D, full body scan.⁶ For an idea of how ridiculous that seems, Hu-Manity estimates the “human data market” generates between $150 to $200 billion annually.⁷ The lesson: do not ask the buyer to set the price. Unfortunately, consumers are not much better at valuing their data. They have been giving their data away, without direct monetary compensation, for over a decade. This makes it very difficult for consumers to value their data or to determine a consumer’s willingness to pay for privacy. 
Most data marketplace start-ups have found that they have to use a simple “give this, get that” model to help consumers conceptualize the value of their data. Most companies attempting to allow consumers to directly monetize their data have settled on a three-tier model, whereby data is classified into low, medium, and high value tiers. The lowest tier can be valued as low as $0.03 / month. Expert calculations place the value of data for a “typical person” between $100 to $1,000 a year.⁸ For reference, crude calculations indicate that Facebook generates ~$35 / year per monthly active user in the US. However, it remains unknown whether restoring user control over data might have a “deflationary” effect on its value. If this data is no longer controlled by rent-seeking gatekeepers with outsized bargaining power, it is logical to assume that the willingness to pay for that data might then decrease. Apart from valuation, UI/UX friction is high with most of these products. The on-boarding process remains a point of high friction (some products require a minimum of ten minutes to import all accounts and download new apps), although innovation is happening post-GDPR. More importantly, most of these applications seem to focus on the moral argument that users have a right to own their data rather than creating a user experience that attracts users at scale. It’s unclear how the value an individual consumer would derive from their data compares to the compensation threshold that would be needed to offset the current friction in the user experience of actively managing and monetizing one’s own data (key management, porting over data from multiple silos, etc.) Enterprises also run into adverse selection issues, wherein the consumers that are more likely to actively manage and monetize their own data are not necessarily the consumers that enterprises want to reach. 
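The “crude calculation” behind a per-user figure like the Facebook one above is just annual ad revenue divided by monthly active users. A sketch with round placeholder numbers (assumptions for illustration, not reported figures):

```python
# Back-of-envelope ARPU: annual revenue divided by monthly active users.
# The inputs below are placeholders for illustration, not reported figures.

def arpu_per_year(annual_revenue_usd, monthly_active_users):
    """Average revenue per user per year."""
    return annual_revenue_usd / monthly_active_users

# A platform earning $35B/year from 1B monthly active users implies
# roughly $35 of ad revenue per user per year.
assert arpu_per_year(35_000_000_000, 1_000_000_000) == 35.0
```

Plugging in a platform's reported regional revenue and user counts gives the comparable regional figure.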
Furthermore, it is nearly impossible to prevent the formation of a secondary market after data is shared with a third party, since once information is known it is hard to prevent duplication. The terms and conditions of most data marketplace start-ups stipulate that once the data is sold to a third party, they are not responsible for what the purchaser does with that data. Zero-knowledge proofs are a potential solution, but are best suited to questions that can accommodate binary answers and are not operational at scale. Finally, as mentioned above, user data points in isolation are not terribly valuable and many of these solutions struggle to reach minimum efficient scale. Industry professionals estimate that the minimum threshold in a data marketplace is 100k–200k users. That is what is needed to get most vendors interested in a given data set, which in turn allows users to monetize their data at higher rates, creating the necessary, but thus far elusive, flywheel. In short, data marketplaces are one solution to the data privacy problem, but it’s not a solution that seems to resonate with consumers. That could begin to change with Gen Z. Digital natives (or the generations that have never lived without smartphones) understand that their identity and world are more digital than not, and they care more about protecting them. Privacy Preserving Data Analysis and Exchange Other solutions enable individuals to bring their data together in one secure location, but rather than creating a marketplace for this data, they enable privacy preserving analysis. This facilitates local analysis (on a consumer’s device, for example) without compromising privacy, eliminating the need for a consumer to send data to a centralized server. Some of these applications use this analysis to improve personalization or to provide services without compromising privacy. Data points in isolation are not very valuable. 
In contrast, having access to a fuller set of consumer data, or the results of analyzed data, is valuable and actionable. The consumer doesn’t need to understand, conceptualize, or value their own data and the enterprises purchasing this data don’t need to change their operations dramatically or shift their mindset. The whole process occurs similarly to the way it currently does, except it takes place on privacy preserving infrastructure. These models generally utilize revenue-share type agreements, which are easier for a consumer to understand and which require less active management than the micropayment model common with data marketplaces. These solutions also provide clear benefits to both sides of the exchange. They reduce anxiety for consumers concerned about data breaches and privacy violations since the raw data resides locally on their phones. Enterprises avoid the liability of holding sensitive data and demand channels looking to purchase GDPR-compliant data sets are provided with a reliable supply. Healthcare seems to be one of the strongest near-term demand channels. This is beneficial since it reduces the scale barrier outlined above, as clinical trial data sets require as little as 300 users. Valuation in this context is also easier. There are methods (some of which are being pursued at Microsoft) to determine the marginal effect of new data on machine learning models, though many conceptual and computational challenges remain.¹³
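The pattern described, analysis running locally so that only derived results ever leave the device, can be sketched as follows. This is a toy illustration; production systems layer on techniques such as differential privacy or secure aggregation:

```python
# Toy sketch of privacy-preserving analysis: raw data stays on-device and
# only a coarse derived summary is shared with the server.

def local_summary(session_minutes):
    """Runs on the user's device: reduces raw event logs to one statistic."""
    return sum(session_minutes) / len(session_minutes)

def server_aggregate(summaries):
    """Runs server-side: sees only per-user summaries, never raw events."""
    return sum(summaries) / len(summaries)

# Two users' raw logs never leave their devices; the server only ever sees
# the per-user averages (30.0 and 60.0) and reports their mean.
assert server_aggregate([local_summary([20, 40]), local_summary([60])]) == 45.0
```

The enterprise still gets an actionable aggregate, while the raw, sensitive records stay on the consumer's phone.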
https://medium.com/dorm-room-fund/rethinking-the-internet-a-new-user-experience-part-3-8b4f32d008f5
['Justine Humenansky']
2020-02-22 17:37:35.845000+00:00
['Internet', 'Technology', 'Blockchain', 'Advertising', 'Innovation']
2,692
Wiring the Quantum Computer of the Future: a Novel Simple Build with Existing Technology
But, building quantum computers for large-scale computation is proving to be a challenge in terms of their architecture. The basic units of a quantum computer are the “quantum bits” or “qubits.” These are typically atoms, ions, photons, subatomic particles such as electrons, or even larger elements that simultaneously exist in multiple states, making it possible to obtain several potential outcomes rapidly for large volumes of data. The theoretical requirement for quantum computers is that these are arranged in two-dimensional (2D) arrays, where each qubit is both coupled with its nearest neighbor and connected to the necessary external control lines and devices. When the number of qubits in an array is increased, it becomes difficult to reach qubits in the interior of the array from the edge. The need to solve this problem has so far resulted in complex three-dimensional (3D) wiring systems across multiple planes in which many wires intersect, making their construction a significant engineering challenge. A group of scientists from Tokyo University of Science, Japan, RIKEN Centre for Emergent Matter Science, Japan, and University of Technology, Sydney, led by Prof Jaw-Shen Tsai, proposes a unique solution to this qubit accessibility problem by modifying the architecture of the qubit array. “Here, we solve this problem and present a modified superconducting micro-architecture that does not require any 3D external line technology and reverts to a completely planar design,” they say. This study has been published in the New Journal of Physics. The scientists began with a qubit square lattice array and stretched out each column in the 2D plane. They then folded each successive column on top of each other, forming a dual one-dimensional array called a “bi-linear” array. This put all qubits on the edge and simplified the arrangement of the required wiring system. The system is also completely in 2D. 
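The folding step can be illustrated with a toy coordinate mapping. This is only an illustration of the idea, not the paper's exact construction: alternating columns of the square lattice are sent to one of two lines, leaving every qubit on an exposed edge.

```python
# Toy illustration of folding an n x n qubit lattice into a "bi-linear"
# (two-line) array. Not the paper's exact mapping; it only shows how folding
# columns leaves every qubit on one of two edges, reachable by planar wiring.

def fold_to_bilinear(n):
    """Map lattice site (row, col) to (line, index) in a two-line array."""
    placement = {}
    for col in range(n):
        for row in range(n):
            line = col % 2  # alternate columns between the two lines
            # fold direction alternates so lattice neighbors stay close
            index = (col // 2) * n + (row if line == 0 else n - 1 - row)
            placement[(row, col)] = (line, index)
    return placement

p = fold_to_bilinear(4)
assert all(line in (0, 1) for line, _ in p.values())  # every qubit on an edge
assert len(set(p.values())) == 16                     # no two qubits collide
```

Because every qubit ends up on one of two 1D lines, each one can be reached by control wiring that stays in the plane, which is the accessibility property the architecture is after.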
In this new architecture, some of the inter-qubit wiring — each qubit is also connected to all adjacent qubits in an array — does overlap, but because these are the only overlaps in the wiring, simple local 3D systems such as airbridges at the point of overlap are enough and the system overall remains in 2D. As you can imagine, this simplifies its construction considerably. The scientists evaluated the feasibility of this new arrangement through numerical and experimental evaluation in which they tested how much of a signal was retained before and after it passed through an airbridge. Results of both evaluations showed that it is possible to build and run this system using existing technology and without any 3D arrangement. The scientists’ experiments also showed them that their architecture solves several problems that plague the 3D structures: they are difficult to construct, there is crosstalk or signal interference between waves transmitted across two wires, and the fragile quantum states of the qubits can degrade. The novel pseudo-2D design reduces the number of times wires cross each other, thereby reducing the crosstalk and consequently increasing the efficiency of the system. At a time when large labs worldwide are attempting to find ways to build large-scale fault-tolerant quantum computers, the findings of this exciting new study indicate that such computers can be built using existing 2D integrated circuit technology. “The quantum computer is an information device expected to far exceed the capabilities of modern computers,” Prof Tsai states. The research journey in this direction has only begun with this study, and Prof Tsai concludes by saying, “We are planning to construct a small-scale circuit to further examine and explore the possibility.”
https://medium.com/@tokyouniversityofscience/wiring-the-quantum-computer-of-the-future-a-novel-simple-build-with-existing-technology-39cf733c8cde
['Tokyo University Of Science']
2020-04-23 08:00:59.507000+00:00
['Science', 'Quantum Computing', 'Technology News', 'Technology', 'Engineering']
2,693
Why ⚡Lightning Network⚡ makes no sense 😱
The more I experiment with Lightning Network, the more I’m convinced: it’s a nice technical solution for the wrong problem. Here is why. What is Lightning Network According to https://lightning.network/ , the goal of Lightning Network is to enable scalable, instant payments at exceptionally low fees, using a network of bidirectional payment channels. Is that what we need? How the Lightning Network works If you are new to this concept, you can read how it works in my previous article “Bitcoin Lightning Network: run your node at home for fun and (no) profit ⚡🤑”, but here is a super quick recap. In the simplest explanation, we have our Alice and Bob that open a channel and make several off-chain transactions back and forth. Then Bob opens a channel with Charlie, and Charlie with Dave: now Alice can make transactions back and forth with Dave, without a direct channel to him. It’s wonderful, isn’t it? How Real Life works Now, take some time and think about your real life: how many times are you exchanging money back and forth with someone else at instant speed? (Note the emphasis on back and forth and instant.) If you ask me, in my day-to-day life I don’t exchange money back and forth with anybody: I buy breakfast in the morning, but the bar owner will not buy something back from me. I pay for my lunch at the restaurant, but the owner will not buy something back from me. I do some shopping at the supermarket or in retail shops, but they don’t buy anything back from me. All “b2c” transactions are one direction only. Well, one could argue that the retail shops need to buy goods and services from some sort of suppliers, so they can use the LN. Let’s call those transactions “b2b transactions”. Again, in my view, in b2b transactions the money also flows in one direction only, from the retail store to the supplier. One could then argue that the economy is “circular”, so at the end, after several transactions that involve retailers, suppliers, government, etc., 
the money will flow back to the “end users” via their salaries. My point is: do we need instant transactions for those payments? As of today, b2b payment transactions are delayed by nature: a supplier is paid in advance or maybe after 30 days, but not instantly. Low “enough” fees and fast “enough” transactions are all we need, and any cryptocurrency with such features can do the job. In real life we need scalable, instant and almost zero-fee payments to move money in unidirectional channels, from retail users to merchants. Lightning Network is building a solution for a peer-to-peer economy, where Alice exchanges money back and forth with Bob in bidirectional channels: Lightning Network is solving the wrong problem. More evidence Are you a merchant? Are you thinking of accepting payments via Lightning Network? Think again… To receive money, you need inbound capacity, so you need to convince someone else to open a channel with you. You can open all the channels that you want, but if you don’t have inbound channels, you can’t receive payments. Not an easy job. Yes, there are a few “solutions” for this problem, but the fact that we need some solution or workaround to enable a merchant to receive payments is a clear indication that the LN design is faulty: it’s unable to address what should be the main use case for its existence. Let’s have a look at the promising developments involving Lightning Network: Atomic Swaps, Atomic Multipath Payments, Loop, Trampoline Payments, Turbo Channels… all trying to solve 2 basic problems: There is no known algorithm to reliably send a payment of a given amount from one arbitrary node to another arbitrary node. The more the network is used, the more the balance is moved from end users to retailers, and the odds of finding a route to make a payment decrease. 
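Both problems can be felt in a toy model: treat channels as directed balances and search for a path where every hop has enough outbound liquidity. This is a deliberately simplified sketch; real Lightning routing also has to handle fees, timelocks, and the fact that remote channel balances are unknown:

```python
# Toy model of Lightning-style routing: each directed edge carries a local
# balance, and a payment can only traverse an edge whose balance covers it.
from collections import deque

def find_route(balances, src, dst, amount):
    """BFS for a path where every hop has enough outbound balance."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for (a, b), bal in balances.items():
            if a == path[-1] and b not in seen and bal >= amount:
                seen.add(b)
                queue.append(path + [b])
    return None  # no route with sufficient liquidity

# A merchant who opened a channel but has no inbound capacity can't be paid:
channels = {
    ("alice", "bob"): 5, ("bob", "alice"): 5,
    ("merchant", "bob"): 10, ("bob", "merchant"): 0,
}
assert find_route(channels, "alice", "merchant", 3) is None          # no inbound
assert find_route(channels, "merchant", "alice", 3) == ["merchant", "bob", "alice"]
```

Note how the one-directional flow the article describes shows up here: as payments drain balances toward merchants, more and more edges fall below the required amount and routes disappear.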
But I believe in human imagination and creativity, so let’s suppose that we solve the (yet unsolved) problem of finding the available routes from one node to another (in a 1 million node network) in a decentralized way, the problem of finding a route with enough inbound capacity to make a payment, and the problem of rebalancing the channels in an efficient way, all while keeping the network “decentralized”. There is still one more thing that I can’t grasp: what’s the economic incentive to run a Lightning Network node? Routing fees are by design extremely low: the owner of LNBig.com (currently the top node by capacity) declared on reddit that he earned $5.74 in January 2019. So, he is managing 20 nodes for free, or better, at a loss, because he needs to pay the bills for the servers (electricity, network connectivity…). This is clearly not sustainable in the long term. Furthermore, he is “locking” several bitcoins in the channels, hundreds of thousands of dollars, while he could use them to earn some form of interest (using lending bots, Compound, Dharma…) Please, throw tomatoes at me I would like to hear your opinions on this topic, and please convince me that I’m wrong: I want to continue to believe that The Future is Bright for Lightning!
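The incentive problem can be made concrete with a back-of-envelope calculation. Only the $5.74 fee figure comes from the article; the locked-capital amount and the alternative lending rate are assumptions for illustration:

```python
# Back-of-envelope node economics: routing fees vs. the opportunity cost of
# the capital locked in channels. Only monthly_fees comes from the article
# (LNBig, January 2019); the other two inputs are assumptions.

monthly_fees = 5.74
locked_capital_usd = 500_000   # assumption: "hundreds of thousands of dollars"
alt_annual_rate = 0.02         # assumption: a modest lending yield

annual_routing_yield = monthly_fees * 12 / locked_capital_usd
foregone_interest = locked_capital_usd * alt_annual_rate

print(f"routing yield: {annual_routing_yield:.4%}")      # about 0.014% per year
print(f"foregone interest: ${foregone_interest:,.0f}")   # $10,000 per year
```

Under these assumptions, the routing fees are orders of magnitude below even a conservative alternative return on the same capital, which is the sustainability concern raised above.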
https://medium.com/coinmonks/why-lightning-network-makes-no-sense-39ca172f50d1
['Simonluca Landi']
2020-08-26 13:19:19.714000+00:00
['Blockchain Technology', 'Bitcoin', 'Lightning Network']
2,694
15 powerful quotes from tech & education’s brightest minds
Last week, more than 2,500 technologists gathered together in Salt Lake City, Utah for our annual user conference, Pluralsight LIVE. There, we were inspired and enlightened by visionary leaders in tech and education. Here’s a small sampling of some of the best things we heard. “Why does collective impact matter? Because there is no single sector or org alone that can change systemic problems.” — Leila Toplic, Head of Tech Task Force for No Lost Generation (NLG) initiative “Education is the only thing that has transformed my inner being. It’s made me beautiful.” — Ziauddin Yousafzai, UN Special Advisor on Global Education “We need to educate children and invest in Afghanistan and Pakistan. We need to give quality education, resources and opportunities. We need to make sure the future generation is ready for the change we’re going to see.” —Malala Yousafzai, Nobel Peace Prize Laureate “Teachers are doing the most important job in the country and the world. If you are a teacher at a school that doesn’t provide computer science, you have the power to change that.” —Aaron Skonnard, Pluralsight CEO “Most nonprofits don’t use tech the way tech companies use tech. A tech-enabled nonprofit can have a global impact.” —Hadi Partovi, Founder of Code.org “When you first start out, create something to criticize. Share the ugly things.” —Brendan Dawes, artist and designer “If tech is going to be embedded in everything, it can’t distract us. It needs to help us attend to everything around us.” —Jaime Teevan, Technical Advisor to Microsoft’s CEO “People are the most important part of a digital revolution. Skills are the only thing that can keep you working at pace and driving in your transformation.” —Karenann Terrell, Chief Digital & Technology Officer at GSK “Don’t think of technology as abstract. Tech has a purpose. It creates society. How do we want to live moving forward?” —Dr. 
Heike Laube, CLO at SAP “The small details have a big impact.” — Ben Galbraith, Senior Director of Product at Google “Technology is a fantastic playground, but you can get lost as a company. Focus on actual problems you can solve. Focus on things that matter for your customers and where you expect to see value.” —Cyril Perducat, EVP of IoT & Digital Offers at Schneider Electric “Your future changes every day. You have to be an organization that learns every day.” —Thomas Kurian, President of Product Development at Oracle “The most significant obstacle to digital transformation is culture.” —Cody Sanford, CIO at T-Mobile “It is not the strongest of species that survive, nor the most intelligent, but the most responsive to change.” — Judy Marshall, Head of Services and Technology Training at Dimension Data “Sometimes the world as it stands isn’t ready to hold your big idea. But the real crime would be to shoot too low. You have to think SO BIG that it makes your stomach hurt.” —Caitlin Kalinowski, Interim Head of Hardware at Oculus Get more inspiring moments from Pluralsight LIVE 2018 here.
https://medium.com/pluralsight/15-powerful-quotes-from-tech-educations-brightest-minds-791b17327540
[]
2018-09-06 16:49:25.718000+00:00
['Pluralsight', 'Tech Leadership', 'Salt Lake City', 'Technology', 'Education']
2,695
The Anti-Network Effect
The Anti-Network Effect As social media platforms grow stronger they grow weaker too. This will eventually open the way for a meaningful alternative to Facebook. Despite waves of privacy concerns, Facebook has a powerful grip on us all. The ubiquity of the platform and the time invested in building connections deters people from leaving and in turn deters would-be rivals from building alternative platforms. Their scale and success has us locked in. This success has an Achilles heel though — and it’s your mom. Metcalfe’s Law essentially describes how the value of a network grows quadratically, not incrementally, with the addition of nodes or users. When you add one node to a network of a million, there are potentially a million new connections, not one. The law addresses the technological and quantitative side of networks but ignores the qualitative value of networks. The Network Effect is fundamental to the power of social media networks; however, they are fraught with human complexity too. A network of 10 of your close friends is worth much more than a network of 1000 strangers. There are many reasons why Instagram is growing while the main Facebook platform is declining in places. The cleaner product features and visual nature of the posts are key factors, but so is the age of the network and its users. When I joined Facebook in its infancy, everyone invited everyone to connect. I’m connected to an exhaustive list of my high school, university and early career peers. When I joined Instagram, years later, I was more selective. It’s become a more meaningful network for me as it offers more relevant engagements with close friends. In a sense, rebuilding my contacts there (which was relatively easy as it was already owned by Facebook then) was like spring cleaning. When I look at my teenage nieces’ Instagram accounts though, they are again connected to thousands of their school peers. 
There’s a high likelihood their Instagram accounts will eventually be stuck in 2019 the way my Facebook account is stuck in 2009. Social networks grow stale. Our personal networks — the connections we build on social media platforms — age and degrade over time as we go through phases in our lives and as the platforms themselves evolve. Many of my connections have largely withdrawn from Facebook activity while my parents and their friends have found it a great tool to keep tabs on the family and grandkids. Some of my parents’ peers have connected with me. Can you really say no to uncle whats-his-name? Unfortunately though, he is not very internet savvy and generally shares low quality content and smatterings of misinformation. When you add a node to a network it grows quantitatively. When a user adds a connection on a social network there is a quantitative impact and a qualitative one. You are likely to add your favorite people early, and as time passes you move further from there. As you continue to build, diminishing marginal returns set in. Eventually, if one’s network grows so big and impersonal, negative marginal returns can set in — where with the addition of a contact you lose more than you gain. Now, the value of the network effect has been completely outweighed by the qualitative degradation. Facebook benefits from the asymmetry of being a media company that does not produce content. Its users create and propel the content. They are both the audience and the authors. It always made sense for them to grow to cover as much of humanity as possible — they are currently just shy of one third. With this massive scale though, the platform has become like the telephone directory of the internet, where much of the activity has shallowed to light or functional interaction and dodging your grandparents. Compounding the natural aging of personal networks is the inevitable increase in advertising as user growth slows. 
Any ad-driven media platform must trade off its user experience with business performance. Eventually, to make more money they must run more ads. Balancing this is critical for any service that is predicated on being ‘social’. Declining personal interactions, coupled with increasing commercial messaging, is a double-edged sword that will accelerate a platform’s lifecycle. Eventually someone will pitch a replacement to Facebook where users must rebuild their network and it’s going to stick — where the upside of a better product outweighs the downside of having to rebuild. The sunk costs will be written off. It’s becoming easier by the day to imagine a better platform product than Facebook. Consider this hypothetical pitch: “We’re building an alternative to Facebook that won’t be ad funded, so there won’t be constant pressure to compromise your personal data. Our focus will be on making social truly social again, promoting quality content and looking after people’s wellbeing. You are invited to join the Beta.” Increasingly it looks like Facebook’s key challenge in the future will not be the journey of a primary platform but rather managing the rise and fall of different platform products, migrating users and avoiding interception. If Instagram replaces Facebook, that’s fine. What will replace Instagram though, and will it be theirs? Facebook is not going away soon but as its users’ personal networks age, its challenges and business will change rapidly.
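The diminishing, and eventually negative, marginal returns described above can be put in a toy model. The decay and noise-cost parameters below are arbitrary illustrations, not measured quantities:

```python
# Toy model of a personal network's qualitative value: contacts are added
# best-first, so the i-th connection is worth decay**i, while every contact
# carries a small fixed cost (noise, obligation). Parameters are arbitrary.

def marginal_value(i, decay=0.9, noise_cost=0.05):
    """Value added by the i-th contact; turns negative for late additions."""
    return decay ** i - noise_cost

def network_value(n, decay=0.9, noise_cost=0.05):
    """Total value of a personal network of n contacts."""
    return sum(marginal_value(i, decay, noise_cost) for i in range(n))

assert marginal_value(0) > 0            # your favorite people add value
assert marginal_value(50) < 0           # contact #50 subtracts value
assert network_value(30) > network_value(100)  # bigger eventually means worse
```

This is the anti-network effect in miniature: the quantitative term keeps growing, but past a certain size each additional contact costs more than it contributes, so total value peaks and then declines.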
https://medium.com/newco/the-anti-network-effect-a303e02df956
['Andre Redelinghuys']
2019-02-13 12:30:33.541000+00:00
['Technology', 'Advertising', 'Facebook', 'Culture', 'Social Media']
2,696
What is Captcha Code and What is Captcha Meaning
Have you ever been asked, while entering secure information like a credit card number or login ID on a website, to type a code or number shown in a picture, or to solve a simple math problem before moving on? Not every website asks you to do this, and many people wonder: what is a Captcha code, and what does Captcha mean? In this article, I am going to explain everything about Captcha and its technology, so stay tuned with me. What Does Captcha Mean? The full form of CAPTCHA is Completely Automated Public Turing test to tell Computers and Humans Apart. The main job of a Captcha is to check whether the user is a robot or a real person. Basically, what a Captcha does is manipulate or distort the style of letters or numbers while keeping them in human-readable form. This means only a human being can read that format, not a robot or machine. So we can understand the meaning of Captcha: it exists to protect against bots and fraudulent hacks. What Is a Captcha? An Example A real-world example makes the CAPTCHA code really simple to understand. Say you make an online payment: before submitting, you may have to enter a code or number shown in a picture. You have also seen websites that offer tool functions like PDF editing or Word conversion; these types of websites also use Captchas to defend against bots. Every online platform that handles sensitive activity related to user or payment information uses Captchas to secure its website from hacks. History of Captcha The word CAPTCHA was first introduced by scientists at Carnegie Mellon University in 2000. The website idrive.com started a Captcha system on its registration page and afterwards tried to file a patent for the Captcha system. In 2001, PayPal also adopted this type of technology to stay safe from bots and spam: they displayed human-readable text on their website that was not readable by machines. 
If a human is using the website, they can understand the text and type the Captcha code into the input box. reCAPTCHA, which Google acquired in 2009, is a popular application of Captcha technology: Google deployed it across its own platforms and as an open service to prevent spam and fraud. If you are interested, you can read the full article on the meaning of Captcha here. A number of technology-focused articles are available on my website SaimTechNews — Everything in Tech; you can easily find them by clicking on the link.
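The core idea the article describes — pose a challenge that is trivial for a human but awkward for a simple bot — can be illustrated with a toy sketch. This is not how reCAPTCHA or any production Captcha actually works (real systems use distorted images, behavioral signals, and server-side verification); the function names `make_captcha` and `check_captcha` are invented here purely for illustration:

```python
import random

def make_captcha():
    """Generate a simple math challenge and its expected answer.

    A toy stand-in for the picture-based codes described above:
    the question is easy for a person to answer, but a naive bot
    submitting forms blindly will not know what to type.
    """
    a, b = random.randint(1, 9), random.randint(1, 9)
    question = f"What is {a} + {b}?"
    return question, a + b

def check_captcha(expected, submitted):
    """Return True only if the user's submitted answer matches."""
    try:
        return int(submitted) == expected
    except (TypeError, ValueError):
        # Non-numeric or missing input is rejected outright.
        return False

# A signup form would display `question` and verify the reply
# server-side before accepting the submission:
question, answer = make_captcha()
print(question)
```

In a real deployment the expected answer would live in the server session, never in the page itself — otherwise a bot could simply read it out of the HTML.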
https://medium.com/@technews783/what-is-captcha-code-and-what-is-captcha-meaning-c484358c6cd4
[]
2021-04-01 04:52:48.378000+00:00
['Tech', 'Bots', 'Captcha', 'Technology', 'Recaptcha']
2,697
Cyber Attacks and the Interconnectivity of Systems
A recent hack of the renowned cybersecurity firm FireEye may be linked to a “supply chain attack” across multiple government agencies. This “highly sophisticated attack” may have occurred through software updates to a network management system operated by SolarWinds. According to SolarWinds’ website, the company works with more than 300,000 customers, including Fortune 500s, the Executive Office of the President, the Department of Defense, the U.S. Census Bureau, and many other government agencies. Reports also suggest that emails may have been monitored at the Department of the Treasury. As details still emerge about the extent of this breach and the potential damage caused by the attack, one thing is certain: the interconnectivity of systems, the increased storage of data on those systems, and the growing sophistication of cyber threats create a number of cybersecurity risks across all organizations. Cyber threats can cause harm in a number of ways — ransom, reputational harm, loss of intellectual property, compromised data security — all of which can hinder an organization’s resiliency and business operations. Government agencies, nonprofits, and private sector companies alike are experiencing growing threats and recognizing how vulnerable their systems are. A recent National Infrastructure Advisory Council report found that privately operated critical infrastructure remains vulnerable and is falling short of security standards. The report suggested the need for a watchdog entity through which private and public sector partners share threat intelligence, develop mitigation strategies in real time, and collaborate against cyber threats. Technology has opened the door to great efficiency, data insights, and capabilities. 
With those capabilities come emerging threats: organizations need to consider where they, and members of their supply chain, may be vulnerable; develop continuity-of-operations plans to build resilience in the face of cyber attacks; and weigh their risk tolerance, mitigation steps, and approaches to securing their systems. Twitter: evan_piekara Medium: evan.piekara
https://medium.com/@evan-piekara/cyber-attacks-and-the-interconnectivity-of-systems-793ff3c2ddf1
['Evan Piekara']
2020-12-15 18:22:15.576000+00:00
['Consulting', 'Government', 'Cybersecurity', 'Business', 'Technology']
2,698
4 Steps to Great Cover Letter Writing
There’s no complicated secret to writing a great cover letter: you need to make it as direct as possible. That doesn’t mean keeping your language simple (although that’s always a help); the best cover letters skip lengthy regurgitations of the candidate’s job history and preferences in favor of a straightforward message tailored to a specific employer. Here’s what to keep in mind: Strong First Sentence When they sit down to write their cover letter, many candidates forget that hiring managers and recruiters will only spend a few seconds scanning their materials before moving on. That means the cover letter’s first sentence must sell your abilities in the context of the company’s mission; there’s no time for a lengthy wind-up. Here’s an example of a solid first sentence: “I’m interested in the position of senior iOS developer because I want to use my skills in developing apps to help [company] create the next generation of [company’s bestselling iOS app or game].” Quick and effective. No Templates Hiring managers and recruiters will also trash any cover letter that comes off as too generic; they want signs that you’re interested in a specific company and team. A cover letter jammed with “boilerplate” material suggests (however rightly or wrongly) that the candidate is sending applications to every tech company that even vaguely matches their skillset. If you must use a template (and we’ve all been in that position; there are only so many hours in a day), make sure you re-write it extensively in your voice; the person reading your application is well-versed in all the popular templates currently on the market. Keep everything to a page. Have a proofreader (or three) who can scan your writing for any typos or errors, including any factual discrepancies between your cover letter and your résumé (for example, make sure all your dates and job titles match up). What Can You Do for Me? 
The second and third paragraphs of your cover letter should continue the theme of your first sentence, highlighting how your mix of skills can benefit the company. Whenever you mention an achievement or goal from your previous jobs, explain how the experience makes you a great fit for the position on offer. While your personal development over the years is surely something to be proud of, prospective employers are most interested in what you can do for them. The last paragraph of your cover letter should end strongly, with a call to action that invites communication. A good example: “I’m available for an interview; I will follow up on [date]. I look forward to hearing from you!” Proofread, Proofread, Proofread It bears mentioning again: proofread like your (future) job depends on it — because it will. And while you’re at it, don’t forget to revamp your résumé.
https://medium.com/dice-insights/4-steps-to-great-cover-letter-writing-36355341da92
['Nick Kolakowski']
2018-04-24 12:55:47.986000+00:00
['Resume Writing', 'Cover Letter', 'Technology', 'Job Hunting', 'Job Interview']
2,699
The Next Wave of the Digital Economy — Promises and Challenges
By Irving Wladawsky-Berger “The next wave of digital innovation is coming. Countries can welcome it, prepare for it, and ride it to new heights of innovation and prosperity, or they can ignore the changing tide and miss the wave,” writes Robert Atkinson in The Task Ahead of Us. Atkinson is founder and president of the Information Technology and Innovation Foundation (ITIF), a think tank focused on science and technology policy. We’re now entering the third wave of the digital economy, says Atkinson. The first was based on personal computing, the Internet, Web 1.0, and e-commerce. The second brought us Web 2.0, big data, smartphones, and cloud computing. The emerging third wave promises to be significantly more connected — including higher bandwidth and a wide variety of devices; more automated — with more work being done by machines while integrating the physical and digital worlds; and more intelligent — leveraging huge volumes of data and advanced algorithms to help us understand and deal with our increasingly complex world. “Building and adopting the new connected, automated, and intelligent technology system will lead to enormous benefits globally, not least of which will be robust rates of productivity growth and improvements in living standards. Moreover, these technologies will help address pressing global challenges related to the environment, public health, and transportation, among others.” We’re in the early stages of this third wave. 5G, IoT, robotics, AI, and other promising technologies are being embraced by early marketplace adopters, but their full-scale impact is still five to 10 years away. We’re in a period not unlike the late 1980s, when it was clear that IT was on the brink of a major transition, but the Internet revolution didn’t arrive until the mid-1990s. [Image source: HBO-VICE News] According to Atkinson, this transition will be more complicated and take longer to come to full fruition than the first two. 
In both previous eras, “consumers needed only Internet-connected devices, and companies needed little more than websites (and to be sure, logistics changes and new payment systems). Moving forward, progress will depend on a much more complex reworking of organizations’ production systems and business models — not just within organizations, but between them.” Moreover, beyond the technical and organizational challenges, one of the biggest risks standing in the way is the rising neo-Luddite opposition to the ongoing digitization of the economy and society. “Implementing the next wave of digital technologies will be much more difficult from a sociopolitical perspective than it was during the last two digital transformations because there is broader and stiffer opposition today. In past digital transitions, the technology industry was largely seen as a force for positive societal change: Computers helped organizations become more productive, and the Internet spread access to knowledge. Today, by contrast, ‘Big Tech’ is increasingly demonized and challenged on a host of issues, from privacy to job disruption.” Given its compelling benefits, the next digital wave will largely be inevitable, says Atkinson. But its support need not be based on unrealistic optimism. There will be serious challenges, as has been the case with technological transformations over the past two centuries, including cybersecurity and the need to provide transition assistance for displaced workers. 
As noted in a recent McKinsey report on the future of work, “while there may be enough work to maintain full employment to 2030 under most scenarios, the transitions will be very challenging — matching or even exceeding the scale of shifts out of agriculture and manufacturing we have seen in the past.” “But societies have managed to address similar challenges in past transformations, and there is no reason to believe they cannot do so again going forward, especially if more of civil society shifts from opposing technology implementation to supporting proper rules and governance frameworks,” writes Atkinson. Markets and firms will play the biggest role in developing and implementing next-wave digital technologies and their ensuing organizational transformations. But governments have a major role to play. They need to make the next-wave digital evolution a central policy goal. More specifically, governments should enact policies that support and enable digital transformation; remove institutional and regulatory barriers to implementation; and encourage citizens to embrace digital evolution. Here’s a closer look at each of these policy recommendations. 1. Support policies where the benefits are largely unequivocal Such policies include “supporting R&D, digital skills, and digital infrastructures; transforming the operations of government itself; embracing global market integration; and encouraging the transformation of systems heavily influenced by government (e.g., education, health care, finance, transportation).” As I read this list of policies, I was reminded of the National Innovation Initiative (NII), a 2005 report based on 15 months of intensive study and deliberations — which I was part of — on the changing nature of innovation at the dawn of the 21st century, and what it would take for the U.S. to effectively compete and collaborate in an increasingly interconnected world. 
The findings and recommendations in the NII report were organized into three broad categories: Talent: the human dimension of innovation, including knowledge creation, education, training, and workforce support. Investment: the financial dimension of innovation, including R&D investment; support for risk-taking and entrepreneurship; and encouragement of long-term innovation strategies. Infrastructure: the physical and policy structures that support innovators, including networks for information, transportation, health care, and energy; intellectual property protection; and business regulation. It’s not surprising that calls for policies supporting talent, investment, and infrastructure remain as prominent today as they were 15 years ago. While we may already be in the third wave of digital technologies, their transformational impact on economies and societies is still in the early stages. 2. Remove institutional and regulatory barriers But the bloom is off the rose. In the earlier waves, we mostly viewed digital technologies as enhancing communications, disseminating knowledge, and improving productivity. Now, digital technologies are also viewed as threatening privacy and security, providing access to polarizing and hateful information, and seriously disrupting jobs and the well-being of many workers. “The most strident opposition to digitally driven economic progress comes from a growing, vocal minority that seeks to ban or heavily regulate emerging digital technologies such as robots, autonomous vehicles, and biometrics to dramatically limit their adoption.” As has long been the case, it’s a matter of balance and trade-offs. We need policies that support the positive benefits of digital technologies while addressing their negative impacts. Overly stringent data privacy policies will hamper the potential advances that AI might bring in medicine, drug design, and public health. 
“For example, giving users the right to opt out of data collection (rather than mandating they opt in) will protect privacy while limiting negative effects on digital innovation.” While eschewing policies that limit digital advances, policymakers should actively target illegal or unethical activity. For example, policies that seek to regulate negative activities — “such as ‘revenge porn,’ spam, financial fraud, hacking, ID theft, malware, and Internet piracy — do little or nothing to limit digital transformation (and in most cases advance it), but they achieve important social goals.” 3. Encourage citizens to embrace digital evolution Finally, the trap of anti-technology groupthink will seriously limit and slow down digital transformation. “Government officials and other elites need to embrace and advance an optimistic narrative about how digital transformation will lead to increased living standards and better quality of life, and actively counter self-promoting fearmongers seeking to instigate techno-panics.” Anti-technology narratives blame digital innovations for a number of societal challenges, including “inequality; loss of jobs and worker rights; addiction; surveillance; algorithmic bias and manipulation; cybercrime; social media coarseness and polarization; lack of diversity; political bias; concentrated economic and political power; and tax evasion. The truth is, digital technologies are not the principal cause of most of these challenges; and where they contribute, measured responses can often provide effective solutions without harming innovation.” “At the end of the day, nations’ success in embracing next-wave digital technologies will depend on a combination of awareness and strategic action,” writes Atkinson in conclusion. “Each nation needs to ask itself where it stands on both fronts. 
Do policymakers truly understand the technologies and competitive strengths, weaknesses, opportunities, and threats they present?… In taking strategic action, are nations focused on learning from global best practices in the wide range of policy areas affecting next-wave digital technologies, and then ensuring they adapt those lessons to fit the realities of their own nations? Getting this right will have a significant, positive impact on the living standards and quality of life of future generations.”
https://medium.com/mit-initiative-on-the-digital-economy/the-next-wave-of-the-digital-economy-promises-and-challenges-ff0d245d17
['Mit Ide']
2019-05-07 14:46:31.507000+00:00
['AI', 'Technology', 'Automation', 'Innovation', 'Digital Economy']