Unnamed: 0
int64 0
3k
| title
stringlengths 4
200
| text
stringlengths 21
100k
| url
stringlengths 45
535
| authors
stringlengths 2
56
| timestamp
stringlengths 19
32
| tags
stringlengths 14
131
|
---|---|---|---|---|---|---|
2,400 | Abdinajib is a 9th-grade student at the Abaarso School of Technology and Science in Somaliland. He lives with his mother, father, and 8 siblings. Abdinajib says on Schoolfund.com “I’m a writer and I… | Meet the 9th-Grade Student in Somaliland Who Wants to Be a Writer Too.
Abdinajib from theschoolfund.com
Abdinajib is a 9th-grade student at the Abaarso School of Technology and Science in Somaliland. He lives with his mother, father, and 8 siblings. Abdinajib says on Schoolfund.com “I’m a writer and I like to write descriptive stories. Back in Hargeisa when I was young, I used to go to a public school called Qudhac Dheer and I always liked to draw pictures. For my future, I want to be a social worker, because it is rare to see any educated people that are fighting for poor people’s rights.”
If you want to help make Abdinajib dream a reality consider donating to his tuition directly here.
My goal is to help fund the rest of the 20 students at Abaarso school's full tuition paid by featuring their stories. Donate at the link above. | https://medium.com/everything-shortform/meet-the-9th-grade-student-in-somaliland-who-wants-to-be-a-writer-too-2b742ee9a146 | ['Chitara Smith'] | 2020-12-15 02:17:30.110000+00:00 | ['Education', 'Fundraising', 'Students', 'Schools', 'Technology'] |
2,401 | The Ethical Dilemma of Cyberpunk 2077’s Soulkiller Project | Running from the Reaper
While a personality construct replacing one’s consciousness isn’t quite a feat we’ve reached today, you can compare the process to cloning, which is most definitely an available technology.
In Cyberpunk 2077, Arasaka’s Soulkiller project aims to grant immortality and “godhood” to higher-ups such as Saburo. Cloning, with a bit of tweaking, can also accomplish this, albeit it’ll be a good few years before the process is perfected.
Dolly was a cloned sheep part of The Roslin Institute’s methods of producing modified livestock. Through the process of Somatic cell nuclear transfer, (removing DNA from an unfertilized egg and replacing it with the nucleus of the intended clone), the identical clone of a sheep was created for the first time with an adult cell. This was groundbreaking, as the process was not thought of as possible, and opened many doors for discussion.
While cloning animals for experiments is debated ethically, the real discussion is at the level of cloning humans. Many argue that the process is simply “playing God” and morally wrong, but it also brings on many practical issues, such as impacting the gene pool. Societal side effects are expected, as we see in the ending of Cyberpunk 2077, where the Soulkiller project is commercialized and only the powerful can become “immortal.” Religion also comes into play when discussing the ethics of cloning. Genetic engineering has been condemned by Protestant theologian, Paul Ramsey because it “threatens Christian views on human happiness, morality, power, and procreation.”
Cyberpunk’s unique take on cloning lays within the possible corporate oppression that comes with it. Power is constantly shifting, exchanging between the hands of different people as history goes on. With the technology to clone someone, power can remain in one individual’s hand, and if the ability to be cloned is commercialized, what does that mean for those at the bottom? Are they forever doomed to be ruled by the wealthy, for eternity?
Arasaka offers V the opportunity to be uploaded to Mikoshi while the company looks for a body for V to be uploaded into. This was the final decision in my game, one that I declined. Signing V’s rights over to Arasaka was something I wasn’t planning to do. Call it the Silverhand influence, but after watching the final events of the game unfold, I wish I had gone out in a blaze of glory instead of helping Arasaka reach their goals of becoming immortal. | https://medium.com/super-jump/the-ethical-dilemma-of-cyberpunk-2077s-soulkiller-project-7f6856d7106 | ['Paul Lombardo'] | 2020-12-24 18:00:25.454000+00:00 | ['Ethics', 'Technology', 'Business', 'Gaming', 'Features'] |
2,402 | How much power does the HomePod mini actually draw? | Disclaimer: Information on this article was written based on my personal experiment, and certainly was not conducted inside a laboratory condition. I did the experiment because I was curious, and this experiment was not endorsed, paid, or influenced by anybody. Use the information for yourself only (although you are more than welcome to share this articles with your friends). Inputs are welcome.
Devices used:
Apple HomePod mini, Space Gray, Baseus 18W 10,000 mAh/3.7V/37Wh power bank, Satechi USB-C Power Meter Tester, iPad Air 3 (2019) as the Apple Music controller for the HomePod mini.
This article will run you through these tests:
Test 1: Charging an iPad Air 3 at 60% battery, Test 2: HomePod mini on standby (waiting for command/Hey Siri), Test 3: HomePod mini at 30% volume, Test 4: HomePod mini at 50% volume, Test 5: HomePod mini at 70% volume, Test 6: HomePod mini at 85% volume, Test 7: HomePod mini at 100% volume, Conclusion.
Testing Method
Satechi USB-C Power Meter Tester
I am using a Satechi USB-C Power Meter Tester which is available to buy here. This device shows us the Volt and Ampere that are currently running through it, and by using a simple calculation (Volt x Ampere), we can get the Watt.
During each of the tests, I played Black by Danger Mouse and Daniele Luppi from Apple Music at different volumes.
Accuracy of The Tests
The amounts of Volt and Ampere are not flowing at a persistent rate (you can see it in the video on Test no. 7 below). I tried to take the pictures at the average number of Ampere during each test case for the best reference.
Enough talk, let’s see the results.
Test 1: Charging an iPad Air 3 at 60% battery
iPad Air 3 (at 60% battery) power usage — 11.9V 1.45A (17.26 Watt)
As a sample, I connected my iPad Air 3 to the power bank, and it shows 11.9V and 1.45A, which translate to 17.26 watts. This is in line with the power bank’s rated power of 18 watts.
Test 2: HomePod mini on standby
HomePod mini on standby — 8.96V 0.03A (0.27 watt)
On standby, the HomePod mini uses 0.27 watts of power as it is connected to the network waiting for your command from another device or “Hey Siri” voice command.
Test 3: HomePod mini on 30% volume,
HomePod mini at 30% volume — 8.98V 0.09A (0.81 watt)
While playing the music at 30% volume, the HomePod mini uses 0.81 watts. This volume is perfect for background noise while working.
Test 4: HomePod mini at 50% volume,
HomePod mini at 50% volume — 8.99V 0.13A (1.17 watt)
While playing the music at 50% volume, the HomePod mini uses 1.17 watts. This volume is best when you want to chill and listen to some music.
Test 5: HomePod mini at 70% volume,
HomePod mini at 70% volume — 8.98V 0.25A (2.25 watt)
While playing the music at 70% volume, the HomePod mini uses 2.25 watts.
Test 6: HomePod mini at 85% volume,
HomePod mini on 85% volume — 9V 0.52A (4.68 watt)
While playing the music at 85% volume, the HomePod mini uses 4.68 watts. It is impossible to talk without shouting if there’s a HomePod mini playing music at 85% volume 1 meter (6 feet) away from you.
Test 7: HomePod mini at 100% volume,
HomePod mini at 100% volume — 8.92V 1.03A (9.19 watt)
At full blast, we see that the power usage hits 9.19 watts, and it registers a 17.21 watts of peak usage (8.92V 1.93A). Most of the time, it will use around 6–12 watts.
At this volume, the HomePod mini fills the entire room. It is very loud and I personally never play music at this volume, but I use the HomePod mini (stereo pair) to watch movies at this volume.
Conclusion
The HomePod mini use:
0.27 watts on standby,
on standby, 0.81 watts at 30% volume,
at 30% volume, 1.17 watts at 50% volume,
at 50% volume, 2.25 watts at 70% volume,
at 70% volume, 4.68 watts at 85% volume, and
at 85% volume, and 9.19 watts at 100% volume.
During most of the tests, the HomePod mini does not use all the 18 watts available from the power bank. During daily use, I use it at 30–50% volume which means it pulls around 1 watt from the wall. During standby, it only uses around 0.3 watts, making it a very efficient little device and I don’t need to worry about my electricity bill even if I have 5 of them around the house.
Based on my testing I also found out that my HomePod mini can only work when my power bank supplies it with 9V of power. When I switched it to 5V, the HomePod mini simply refuses to work (the light blinks orange).
Edit: I just found this document by Apple on HomePod mini’s Product Environmental Report, and on the last page they mentioned power usage when playing music at 50% volume, which is around 1.28–1.34 watts.
So, how long can a 10,000 mAh power bank keep a HomePod mini running?
I did a test playing the HomePod mini at 30% volume, and after around 15 hours the power bank still has around 50% of power remaining. That means that it theoretically can power a full day (24 hours) of music listening.
Please note that I did the test in my own home at a temperature of 25° Celsius (77° Fahrenheit). Batteries are known to discharge faster in cold weather.
The final verdict is that technically you can bring a HomePod mini to a camping trip with a power bank!
Thanks for reading! | https://medium.com/@psn/how-much-power-does-the-homepod-mini-actually-draw-79c682145977 | ['Prayudi Satriyo Nugroho'] | 2021-03-14 16:54:06.490000+00:00 | ['Homepod Mini', 'Smart Home', 'Apple', 'Technology', 'Homekit'] |
2,403 | Master Chief and Kratos Do A Whole Lot More Outside Fortnite | Master Chief and Kratos Do A Whole Lot More Outside Fortnite
When these Spartans dance, the gods take note
My first reaction was shock. It was soon followed by intrigue. Kratos and Master Chief are flagbearers when it comes to driving PlayStation and Xbox console sales across the globe. At the steering wheel of God of War and Halo respectively, their shoulders bear burdens that far exceed the responsibilities of selling consoles. Kratos’ body is literally covered by the ashes of his dead family. Meanwhile, Master Chief is a faceless machine bred for combat. Few see him as the John he once was. And even Kratos hasn’t exactly come to terms with who he has become. Should the grim Ghost of Sparta and a green walking tank be reduced to equals among Fortnite’s snazzy personas and sentient pancakes in a battle to the finish?
It’s a good question. The part-time outing has incited all sorts of reactions. It has divided zealots, with some welcoming the move and others treating it as heresy. I won’t deny that I was among the latter at first. But eventually, I realized why Microsoft and Sony would even consider the move, let alone play along. With over 350 million players, Fortnite is a cultural phenomenon that shows no signs of stopping. Epic Games has tended to the flames that propelled the battle royale game to its lofty pedestal, catering to an audience that is as diverse as it is committed. 15.3 million players showed up for its showdown against Marvel villain Galactus.
It’s a testament to how well Epic Games knows its fanbase and how it doesn’t refrain from throwing rivalries aside to team up with some big names. Artists like Marshmello and Travis Scott have performed virtually to millions of loyal fans. From Marvel and DC to now PlayStation and Xbox, Fortnite is still going all out on big-name collaborations. The move exposes millions of hyper-casual gamers to some of the biggest names in the history of videogames. Sure, Fortnite lacks Halo’s tight gunplay or God of War’s visceral combat and powerful narrative. But it has the numbers and an addictive gameplay loop set in a breathing world that appeals to a demographic far larger than any game in the past.
With that out of the way, let’s see what these Spartans have gone up against in the past. From almighty gods to fleets of alien champions, they weren’t afraid to get their hands dirty.
Art from God of War: Ascension. Source: SCE Santa Monica Studio.
Kratos — God of War
“If all of Olympus will deny me my vengeance, then all of Olympus will die!”
- Kratos
Full disclosure: I’ve never owned a PlayStation. My encounters with the divine embodiment of vengeance were always on a friend’s PSP or PlayStation console. Nonetheless, even a glimpse at the wreckage Kratos leaves in his wake is reason enough to fear him. Once a respected general, a cruel twist of fate leads to Kratos killing his family while under Ares’ (the then-Greek God of War) command. Their ashes cling on to Kratos, giving him an appearance “as pale as the moon,” a gut-wrenching stab of misery that he will never forget.
Themes of vengeance and sorrow permeate God of War. While its latest iteration does shift its focus towards the bond between father and son, previous games didn’t stray from the path of embedding slaughter deep into your muscle memory. Consumed by wrath, Kratos has faced everything from bizarre Greek creatures to the gods themselves in battle. However tilted the playing field was, all Kratos did was rip and tear until the deed was done. A mere mortal marked with destruction threw a significant portion of the divine Greek pantheon into disarray.
God-slaying involves a fair bit of tricky boss fights that freshen up the “kill everything getting in the way” experience. Hulking colossi and grotesque magical beings shape the very nature of the battlefield as they strive to best Kratos. The challenging battles that ensued gave action games a template to build upon across several console generations. Be it Poseidon’s hulking horses made of wild water torrents or the towering mass of muscle that constitutes Cronus, every engagement with the enemy is something to look forward to. In an age where fetch quests and daily tasks pepper games, God of War doesn’t just scratch the right itch. It cuts to the bone.
Halo: Reach’s journey of hope is one I won’t forget. Source: Bungie.
Master Chief — Halo
“I need a weapon.”
- Master Chief
Full disclosure: I’ve played every Halo game out there (yes, 5 was terrible). 20 novels in, the possibilities that Halo’s sci-fi canon presents to videogames are downright incredible. Over the course of numerous titles, there’s no denying the indelible impact the franchise has had on first-person shooters. Regenerating health and two-weapon loadouts are staples in the industry today but what really set Halo apart is what it once promised. And I’m not talking about its impeccable multiplayer component that kickstarted online multiplayer on consoles.
The scope of the campaigns’ sandbox-esque missions is made apparent right when you take an all-terrain Warthog for a spin. But underneath its bombastic trappings lies a sprawling sci-fi narrative of a tireless struggle between humanity’s finest and a grave intergalactic threat, bound with tactical gameplay and memorable character arcs. True, the protagonist is a man of few words. But when an alien armada is at humanity’s battered doorstep, the Chief’s trigger finger preaches with a golden shower of lead.
Having said that, Halo does have its solemn moments bereft of the bravado and swashbuckling action first-person shooters are known for. Halo: Reach pits hope against hope with notable characters dying left, right, and center, only for their sacrifices to pay off in the end. The unending swarm in Halo: Combat Evolved’s iconic Library mission is a desperate encounter I won’t forget in a hurry either. Couple that with some of gaming’s most iconic soundtracks and it’s no surprise why people still place their bets on Halo Infinite getting the timeless franchise back on track.
Kratos from God of War and Master Chief from Halo are the latest additions to Fortnite’s growing roster. Source: Epic Games. Image edited by the author.
One doesn’t have to squint to see the gilded thrones these Spartans occupy in the hearts of gaming veterans. While Fortnite isn’t known for gritty combat or a potent narrative, it still serves as a portal that could draw in a new generation of casual gamers towards franchises that defined genres and defied expectations. After all, playing as irate Kratos or nearly wordless Chief is certain to evoke emotions, be it awe or nostalgia. Or a laugh (ouch). It’s a bold experiment that could certainly use some Spartan firepower. I can’t wait to see how this cultural juggernaut’s roster expands with time and how Fortnite expands as a medium of expression amidst unprecedented times. | https://medium.com/super-jump/master-chief-and-kratos-do-a-whole-lot-more-outside-fortnite-f20cb61cdb10 | ['Antony Terence'] | 2020-12-12 00:30:35.747000+00:00 | ['Culture', 'Gaming', 'Art', 'Digital Media', 'Technology'] |
2,404 | Data Journalism Crash Course #3: Data Curation | Data Journalism Crash Course #3: Data Curation — Image by the author
Because of the mass of available information, an extreme amount of care must be taken to ensure compatibility between the material provided, the public profile, and the activities and content values.
Another point that makes curatorship even more relevant is the fact that the content cannot simply be “played” on the networks without contextualizing it.
Anyway, there is also a certain margin of creation, necessary for your audience to understand the relevance of the published material.
How to curate content?
There is a model for curating, which is divided into three stages:
The first is research , which consists of monitoring news and articles and identifying the best sources. Several online tools help the curator’s work — we will talk about it throughout the course. The use of google alerts and RSS feeds from relevant blogs can also be extremely useful to keep you constantly updated.
, which consists of monitoring news and articles and identifying the best sources. Several online tools help the curator’s work — we will talk about it throughout the course. The use of google alerts and RSS feeds from relevant blogs can also be extremely useful to keep you constantly updated. The second step is contextualization. As already mentioned, it is important to make sense of what is published, according to the interests of the company and the profile of the target audience. Through social media feedback it is possible to assess what is working.
is contextualization. As already mentioned, it is important to make sense of what is published, according to the interests of the company and the profile of the target audience. Through social media feedback it is possible to assess what is working. Finally, and not least, we go to the sharing phase, and here it is necessary to define through which channels it will be carried out.
Data Curation tools
Data curation is a powerful tool to organize and share resources with projects, or to fuel a professional learning practice.This list of the best curation platforms brings together tools that allow anyone to go through and organize everything from social feeds to course materials.
Alteryx: Specialized in self-service analytics with an intuitive user interface. These analytics can be used as Extract, Transform, Load (ETL) Tools within the Alteryx framework. Informatica: It offers an AI-powered Intelligent Data Platform and the the industry’s most comprehensive hybrid and modular tool Stitch Data: It is a cloud-first, developer-focused platform for rapidly moving data. Lasso: The Lasso service is optimized for marketing activities, but can be leveraged for all kinds of business and personal web research. Alation: It is an AI-driven platform for data search & discovery, data governance data stewardship, analytics, and digital transformation. YourVersion: A content discovery tool that automatically pulls together content from around the web-based on the interests you specify. Ataccama ONE: a platform to analyze, process, manage, monitor, and provide data with a Self-Driving Data Management & Governance tool. Quuu: Select interest categories that matter to you, then watch your posts line up in the Quuu scheduler. The suggestions for you are based on the inputs given by other similar users on the platform. Talend: A platform that unites data integration and governance to deliver trusted data Elink: This tool helps you save web links, bundle them and turn your web link collections into email newsletters, fast website/blog content, single web pages, social media bio links and more. Social Weaver: This tool makes it easy to schedule content, increase engagement, and listen to customers’ impressions — of both yours and your competitor’s. ShareIt: It helps you to engage your audience and generate new leads through automated postings even when you’re not around. Apart from organic posts, you can schedule automatic posts from your feeds. Triber: Triberr is a marketing suite that helps bloggers and small businesses amplify their content, build online communities, and promote content all in one place. Social Anima: Select interest categories that matter to you, then watch your posts line up in the Quuu scheduler. The suggestions for you are based on the inputs given by other similar users on the platform. Curata: Curata CMP (content marketing platform) enables the user to create content, curate and analyze the success of the content.
Social networks — old but gold
Yes, your company’s social media can (still) be a way of filtering good content and knowing which topics are relevant to your content marketing.
The two most relevant platforms for good curation are Twitter and Facebook — and even if you don’t use those networks, you can still benefit from the site’s filtering and search tools.
Twitter, Instagram and Facebook are social media present in most of the marketing and personal and institutional communication strategies.
So, if you’re looking for a specific topic, Twitter Search(for example) offers a good overview of how the topic is being discussed and shared by users — in photos, videos and links.
The alert is worthwhile: using social media for content filtering is a useful tactic as long as you don’t spend too much time in it. After all, they have limitations and other online tools filter links, topics and articles in a much more professional way and with data that can be more helpful.
BuzzSumo helps you analyze and filter relevant content by searching for topics.
It selects the results of each keyword searched by the number of shares and backlinks of each article and news — exactly the factors that will help you decide the social impact on the Web. It is the favorite tool of brands — like IBM and Yahoo! — and content producers around the world — like Buzzfeed, National Geographic, TED and Rolling Stone.
The logic is very simple — and has support for searches in different languages.
The free version displays only the first results, and only some data sharing for each result.
In the paid version, you can compare links, find the topics that are on the rise and identify opinion leaders on social networks for each subject.
If you choose to use only the free version of BuzzSumo, there is another tool that does a complementary job to that of filtering social networks.
Social Mention focuses on blog posts and microblogs — especially Twitter. Just play the desired keywords for your brand!
This tool mainly helps to better calibrate your search for content, offering the best keyword suggestions related to your search, and finding users who may have shared important articles and news for referential content.
Pocket operates in a very simple way: it is a bookmarker, that is, a “savior” of articles that you find on the Web.
It has a website version for desktops, an application and an extension in your browser — every time you find an interesting article on the web, just click Save to Pocket and that’s it!
So, as long as you have some time to read the articles, they will be there, saved and protected.
Feedly is one of the oldest RSS tools on the web — and one that is still incredibly relevant in researching and organizing news feeds.
If you are not familiar with the term, RSS are links and directories that take you to the latest publications from websites, news portals and blogs for programs like Feedly.
In other words, it is a way for you to read first hand what is published in the places where you consume content.
The best thing about Feedly for curating your marketing is that it has keyword research so you can find the best feeds and websites related to your business — and for free!
Golden rules for data curation
1) Don’t talk too little about many things. Say a lot about a few things.
One of the most common mistakes when curating is looking for too many themes and topics.
This makes your curation very broad and the information found will never be treated with continuity and depth.
In other words, having little information on various subjects does not help your marketing and is a disservice to your audience: they will not maintain attention or interest in so many discussions at the same time.
Therefore, it is recommended that you maintain a logic in your curation: keep two or three topics that are decisive for your brand and feed with information that together combine.
2) Keep your curatorship frequency
There is no point in starting a data curation, if there is no way to keep this curation active over time.
Check an interesting frequency — daily, weekly, biweekly — to search for content and always have new and relevant information for your communication.
3) Use only the tools and habits that help you
Search for the most adaptable forms and tools to the reality of your work.
Choosing the right content
Amid so much data, how to choose the right content to use in your communication strategy?
To start you need to think about personas. They are the north of your content strategy and it is based on your preferences that you will look for material to join your curatorship.
From your pains, doubts and problems, you will find basic topics to look for and, from there, add more content to your curation list.
How to get the most out of content
There is no point in finding quality material if you are going to keep it stored in some lost folder of your favorites. Good content needs to be shared with the world.
After curating interesting materials for your strategy, it’s time to think of ways to show that content to the world.
Remember to credit
A big part of replicating relevant content is giving credit to the original author, linking to the article and showing the reader where it was originally posted.
Also be careful with images, as they may be copyrighted. The ideal is to find images in image banks that have reproduction or shared copyrights.
Remember: Combine data curation with original material
Data Curation in navigation performance
Through Data Curation we analyze the profile and behavior of visitors to a website presenting the following information:
Classic audience analysis data: unique users, visits, page views and bounces;
Purchase intention;
Topics of interest (a.k.a. trending topics);
Technologies used;
Location;
Demographic data;
Professions;
Social networks accessed by users;
Conversion funnel;
Monitoring of programmatic media campaigns;
Monthly report delivery describing the analysis in a didactic and simple way;
Suggestions for marketing actions and Inbound Marketing according to the audience profile.
IF YOU WANT TO KNOW MORE:
Top Content Curation Tools For Marketing, Social Media, Education and Businesses —https://blog.elink.io/top-content-curation-tools/
Data Curation Foundations — https://books.google.com.br/books?id=VOlHzQEACAAJ&dq=Data+Curation&hl=pt-BR&sa=X&ved=2ahUKEwjMnumPpbjsAhXhE7kGHYvbBCQQ6AEwAXoECAAQAQ
Gain Access to Expert View — Subscribe to DDI Intel | https://medium.com/datadriveninvestor/data-journalism-crash-course-3-data-curation-bb14726033c | ['Deborah M.'] | 2020-10-31 19:06:49.813000+00:00 | ['Data Science', 'Technology', 'Data Journalism', 'Content Marketing', 'Data'] |
2,405 | Deuterium — The Elephant in the Space Capsule | The year is 2031 and Elon Musk and his NASA crew have moments before touchdown on Mars. Everything they need to create a permanent human colony on our neighbor planet they have brought with them. This monumental achievement for humanity cannot be overstated. Here is Astronaut Musk now, live from Mars to answer the questions of randomly selected first graders on this historic day. Let’s go to Timmy from Temecula.
Timmy: Congratulations Mr. Musk on getting to Mars.
Elon: Thanks Timmy! I was only a boy your age when I dreamed of coming to Mars and here I am. If you work hard you too can achieve your dreams!
Moderator: Go ahead Timmy, ask your question.
Timmy: Did you remember to bring the deuterium water filter?
Elon: The what?
Is colonizing Mars on your to-do list? You have it all planned out you say? Unlimited cash. Rockets. Fuel. Ground Control and Major Tom. Check. Ability to remove deuterium from your drinking water? Not checked. Wait — what?
The new science of Deutenomics
The new possible in the 21st century is exciting. We are living in unprecedented times. While we are in the midst of an ecosystem crisis we are in the throes of a technological renaissance, living through the 4th industrial revolution that will transform our society like never before.
We are closer than ever to sending a colony of humans to settle the Red Planet. There are lots of challenges to overcome to get there, so in the big picture it is best to consider even the smallest obstacle. In this case we are looking at deuterium — the big elephant in the room. Deuterium is left over from the Big Bang. It’s everywhere, we can’t clean it up, and we are late to the party.
What does it do to us? It damages mitochondria. You know, the powerhouse of the cell.
The new science of Deutenomics shows us that when our ability to manage deuterium breaks down disease starts to creep in. This “new science”, 60 years in the making, is barely known about at present in mainstream medicine, but will grow as its implications in unraveling the mechanisms behind the aging process are fully understood.
What is deuterium?
Deuterium, also known as heavy hydrogen or 2H, is together with protium (1H) one of the two stable hydrogen isotopes. In the Big Bang, the first element created was protium, which makes up the bulk of hydrogen, and deuterium came second and made it possible for the creation of all other elements.
In contrast to protium, which only contains a proton, deuterium is composed of a proton and a neutron. Therefore, the mass of deuterium is twice the mass of protium, and is, thus, also called heavy hydrogen.1 Two hydrogens and an oxygen make a water molecule. But when it comes to water, what we think of as H2O is actually one molecule of HDO for every 3300 H2O, (one hydrogen in 6,600 is deuterium).
Consider that there are two stable hydrogen isotopes, they both can make water, yet one is twice the weight of the other. This is a pretty big deal.
They say the best way to poison someone over time is with heavy water. It looks, smells and tastes like water, but it is not. This is the problem with hydrogen: it can have a bad neutron. Hydrogen is used in just about every biological reaction, and most of the time everything functions smoothly because it is the protium version of hydrogen, but deuterium gets in and goes where it doesn’t belong. Imagine being forced to squeeze an elephant through a glass revolving door. The elephant doesn’t think it’s a good idea, and neither does the door. There’s going to be significant damage. Lawsuits and insurance payouts will ensue. Premiums will go up significantly.
For as evil as deuterium is to our energy production, it is naturally found in the earth’s waters, roughly to the tune of one HDO molecule for every 3,300 H2O molecules, making the deuterium content of the ocean 155 ppm. That is about 3 drops of HDO in every glass of water.
In pristine nature or out of your tap in Torrance, drinking water typically contains 140–150 ppm of deuterium. Antarctica has the lowest deuterium on the planet (89 ppm), a throwback to an ancient earth of giant dinosaurs and forty-foot ferns. Glacial water tends to be 10–20% lower than the ocean (155ppm).
Deuterium concentration below 120 ppm, typically made through fractional distillation, is considered deuterium depleted water (DDW).2 — (Full disclosure: My company sells super deuterium depleted water).
In any glass of water ladled up from anywhere on earth, ocean, lake, river, pond, the summit of Everest, or the depth the Marianas trench or any cup of tea from London to Lihue, the hydrogen-to-deuterium ratio will never be higher than 6,660-to-1. On Mars it’s closer to 1,100-to-1. On Earth deuterium makes up 0.0156% of the hydrogen in water by atoms. On Mars, the lowest is 0.078%. That’s five times more. Only.
In 1961, Russian scientists in Siberia were the first in Western Civilization to report on the positive biological effects of DDW.3,4 They observed people who live in areas with 20% lower deuterium in their water have 40 times more centenarians. By now it has been sufficiently demonstrated that it will be a real Debbie Downer if you find yourself 42 million miles from home scratching your butt on Olympus Mons, having failed to have brought a deuterium depleted water filter.
What determines the levels of deuterium in the body?
The 2H:1H isotope ratio in living organisms is affected by various factors, including diet, metabolic activity, and most importantly, the amount of deuterium in your daily drinking water. Cells have a system to eliminate deuterium from the mitochondria.15 However, this adaptive evolutionary system is limited and error prone. It is believed we can barely pass 100 years of age on this planet primarily because of this.6
— Therefore consumption of DDW is the easiest and most efficient way of keeping the deuterium levels in our body low by significantly decrease the 2H:1H ratio in human fluids.5
Life on earth has reluctantly adapted to live with the deuterium we have. Research shows that those that live in areas where there is less deuterium statistically enjoy better health and longer lifespan. In nature, it has been theorized that animal migration cycles have to do with deuterium regulation. Isotopes of elements can behave very differently from each other in biological process, on this the science is clear.
Effects of deuterium on human health
Chemically, deuterium slows down biological systems.23 According to the Kinetic Isotope Effect the dissociation of a carbon-deuterium bond is 6–8 times slower than a carbon-hydrogen bond.
Increasing evidence over the last sixty years shows that high levels of deuterium pose severe threats to human health. Perhaps the most well demonstrated effects of deuterium on biological systems are its ability to interfere with energy production, metabolism, and cell development and division.
Deuterium has been shown to cause mitochondrial dysfunction and subsequent alterations in metabolic homeostasis which is linked directly to aging and dysbiosis.12 Notably, is has been shown that deuterium inhibits metabolic energy production process within the mitochondria — the energy factories of cells. In 2007 Abdullah Olgun showed that deuterium inhibits ATP synthase and compromises the electron transport chain in the mitochondria, impairing the ability of mitochondria to produce energy (ATP), and thereby enhancing ROS production and accelerating the downward spiral towards mitochondrial oblivion.10,11
Deuterium depletion has also been studied for its rejuvenating effects, for maintaining genomic stability and regulating cell growth, optimizing gene expression, and cellular energetics.5,13,14
Recently, the effects of DDW on metabolism and metabolic conditions have gained increasing attention. László Boros, a leader in the field of Deutenomics, at UCLA, pinpointed that DDW may represent a critical link in disease prevention and treatment using natural ketogenic diets and low deuterium drinking water.15
Boros and colleagues have also shown that deuterium depletion protects cells from the genotoxic effects of radiation, in this case identifying the precise mechanism of how it works!
By minimizing the deuteration of sugar-phosphates in the DNA and modulating water exchange reactions in the tricarboxylic acid substrate cycle, a net energy benefit is attained.15 In mice, 30 ppm DDW exerted a significant radioprotective effect against X-ray radiation in terms of animal survival, as well as blood and immunological parameters.17,18
More evidence published in Pharmaceutical Biology indicated the potent anti-inflammatory and antioxidant effects of deuterium depletion, showing a decrease in expression and activity of COX-2, a key regulator of inflammation and carcinogenesis.14 Many subsequent in vitro studies have confirmed the anti-cancer effects of DDW when used as an adjuvant to treatment in various malignancies, the most responsive being lung breast and prostate cancer.13,19 Recent pre-clinical and clinical studies have confirmed the strong anticancer effects of a body that is deuterium depleted and the ability of DDW supplementation to assist to restore redox balance.20,21
Earlier research on athletes in Russia and Hungary showed a stark increase in oxygen utilization, and improved tissue oxygen. The studies showed that a deuterium depleted body can utilize oxygen much more efficiently.26,27 With half the oxygen a body deuterium depleted below 120 ppm, (25% below the average) needs half the oxygen to perform the same amount of work as someone whose body is 150 ppm deuterium. That’s a big breakthrough. I’m hopeful deuterium depletion will allow someone to break the three minute mile.
Sherpas that drink Himalayan glacial water (15% lower in deuterium than the average), are able to ascend Mt. Everest without oxygen much to the shock and admiration of western technical climbers. This is because they are naturally deuterium depleted.
“Where oxygen is at a premium, the less deuterium, the less tedium”
- William Shakespeare
He didn’t say that, deuterium was first discovered in 1931, but at the 4th International Congress on Deuterium Depletion in Budapest, 2019, I asked Dr. Olgun what made him interested in studying deuterium. He related that he was originally alerted to the potential biological implications of HDO when he calculated how much of it we have is in our blood, (~17mM). What upon first glance seemed benign became of interest when he calculated there is about 4–8 times more deuterium in blood plasma than calcium, ~4 times more than potassium,~20 times more than magnesium ~3 times more than glucose. That’s a big red flag.
This revelation drove him for more than two years of mathematical modeling the complex biochemistry of ATP production and ultimately led to the discovery of the exact mechanism by which deuterium damages the ATP Synthase nanomotors in the mitochondria.
I believe this may one day gain him the Nobel Prize in Chemistry.
Whereas Olgun discovered how deuterium does its damage, Boros discovered the mechanism by which the body tries to keep it out of the inner nano mitochondrial energy factories inside the cells. He verified to me that he coined the term ‘Deutenomics’ for the new branch of biochemistry, as the handful of researchers studying deuterium’s role in biology did not have a name for this new science.
In the U.S. institutional medical research world Boros is the loudest voice on this subject. Gabor Somylai, the Hungarian doctor who first alerted Boros to deuterium interference and the author of ‘Defeating Cancer, the Biological Effects of Deuterium Depletion’ published twenty years ago, has now over 3000 case studies on the long term effects of deuterium depletion.
It’s as simple as realizing the water inside the mitochondria does not come from the water we drink, called ‘bulk’ water. Water inside the mitochondria is not store bought, its made from scratch like God did it, combined two gases, hydrogen and oxygen in a ratio of 2:1, and kaboom — pure metabolic water. This is the dance party happening inside the smallest nano confinements of the mitochondria. Deuterium is not invited to this bash, but it’s a relentless bully and inevitably crashes the party nonetheless.
This structured metabolic water is your body’s most prized asset, your sacred precious. It has very little in common with any liquids you consume. Metabolic water, constantly produced and recycled by the thousands of liters per day is the lubrication of life, and when this ‘matrix’ water was first analyzed for deuterium content, it was confirmed to have 60–70% less deuterium than the bulk water in the blood or extracellular fluids.
I think it’s easy to agree that if our deuterium management system is off balance it will cause mitochondrial dysfunction and subsequent alterations in metabolic homeostasis, i.e., aging and dysbiosis.12 Deuterium misshapes DNA every moment of every day. You cannot reverse the actions of error reactions.
Deuterium and space exploration
If you do survive on Mars long enough to procreate, your children may likely be sterile. Experiments in mice and other animals have shown that high levels of deuteration may cause sterility due to impairments in the development of gametes. In rats, heavy water consumption for a week can also lead to death.12
Even at low concentrations (20–30%), D2O significantly decreases fertility or even caused sterility, especially in male mice.22 Because it would take a very large amount of heavy water to replace 25% to 50% of a human being’s body water (which in turn is 65–70% of body weight) with heavy water, accidental or intentional poisoning with heavy water is unlikely to the point of practical disregard, however the effects on energy production and cellular health in the long run is detrimental.
With the Red Planet on the menu we really have to solve the deuterium problem or our life spans there will be exceedingly short. Research by Villanueva of NASA Goddard Space Flight Center employed powerful telescopes to map water (H2O) and its deuterated form (HDO) across the surface of Mars. They found that the Martian globe contained a significantly high D:H ratio.7 These findings corroborated the results obtained by NASA’s Curiosity/Mars Science Laboratory in 2013, which indicated that the hydrogen-to-deuterium ratio on Mars is five to seven times lower than that on Earth.8
This is a burning dumpster fire of bad news for any biological organisms hoping to build a homestead and white picket fence on the Red Planet. As we clamor and plot to blast ourselves off terra firma to exploit new worlds, we better send those brave voyagers out there with a DDW filter.
“We will not colonize Mars until we solve the deuterium problem!”
- Abraham Lincoln
Not many people know this quote from the 16th president. I’m joking of course. But what is serious is going to Mars without developing a strategy for reducing deuterium levels in astronauts. If this revelation is new to Elon Musk or NASA it is not new to our Russian roommates on the International Space Station. A 2003 study by Russian aerospace scientists pinpointed that Mars exploration and other interplanetary missions may have severe effects on the health of crew members due to exposure to high levels of deuterium, in addition to significant radiation exposure.9
Studies to identify methods of reducing the risk of radiation-induced cancer in humans are at the forefront of preparation efforts for the mission to Mars, and so one of the most promising approaches is designing life support systems that generate DDW for consumption by crewmembers.16 We are only as strong as our weakest link. And we should not let deuterium become our Achille’s Heal.
Excessive deuterium changes three-dimensional structures in the body, creating misshapen proteins and lipids that don’t function properly and this creates an environment favorable to all kinds of neoplastic conditions. Preliminary investigations by Russian space researchers have demonstrated that decreasing deuterium in water by 65% endows water with radioprotective anticancer properties, key to long term space missions.9
Space can be a lonely place, here again DDW comes to the rescue. Another Russian study explored the link between depression susceptibility and deuterium depletion with positive results.25
From the fringes of investigative reporting, extraterrestrials have been kind enough to warn us about the deuterium problem as evidenced by the encounters of famous UFO contactees Billy Meier, reported in 1973, and “Warp-Drive” creator Wesley Bateman, published in 1993. These two contactees, unrelated, queried their respective Pleiadian and Arcturian informants as to why our lifespans on earth are so inadequately short compared to those lucky bastards. Apparently, not only can these humanoid ETs travel at superluminal speeds in their fancy space ships but they also live for thousands of years while maintaining their girlish figures.
The ET’s response to the contactees in both instances was the same, paraphrased in this way, “you poor bastards can barely crack a century before you drool and drop and become fertilizer, the reason is simple, you have too much deuterium on your planet.” It seems logical, light body vehicle maintenance for humans does not come with a manual. But they really shouldn’t have traveled so far to give us the bad news. The ETs offered no solution by the way. It took the Russians to figure it out. It can be argued we are ETs ourselves.
In addition to ET’s, ultraterrestrial “ascended masters” of earthly origin such as Kryon, Hilarion, Melchizedek and Sunanda have also weighed in on the deuterium problem, usually channeling their condolences to humanity through white robed middle aged Caucasians primarily in Marin county. Documented channelings on deuterium by ascended masters have so far offered no mitigation strategies. I don’t blame them, it’s not in their best interest to extend the lifespan of the plebeian masses in the least.
Tesla claimed aliens gave him the secrets to electricity in his dreams, but it is the simple observation of nature, deductive reasoning and Aristotelian logic that has revealed to us the deuterium problem. We are in the infant stage of solving it.
Whomever you believe or what data you analyze, it is clear we are standing on the shoulders of giants. So, if you are a space cadet and you don’t want to become Mars dust, regular DDW consumption is a must. I’ll put it this way; if you’re going to drink your own recycled piss, having it be deuterium depleted is not to be missed!
Solution: Make DDW on Space Missions
Drinking high latitude glacial melt water that is 20% reduced in deuterium is the inadvertent strategy of the Siberians that have an average of forty times more centenarians than the rest of the world.
Given the bleak and limited resources on Mars, efficient metabolism and cellular energy production is key for survival. Protect your head, protect your jewels, when it comes down to health it’s all about the fuel, and in this case the number one contaminant that needs to be removed.
NASA, Musk, Bezos and all those empowered to meet the call of our collective and never ending thirst to explore new worlds need to heed the call to the importance of DDW. To survive the cold dark vacuum of space requires an outside the box life support strategy, and it should, especially since you will be living full-time inside a box.
“If protium is the fuel of life, then deuterium depletion is the greatest discovery in biology of our time.”
- Kurt Cobain
“Sell the kids for food, weather changes moods, spring is here again, reproductive glands…”
- Victor Sagalovsky
Okay, my bad. I got those credits mixed up. But I would gladly swap.
Given the potentially detrimental effects of deuterium on human health, it is not surprising that DDW is emerging as the tasty beverage of choice at the geroprotection lu`au. If youth is wasted on the young and wisdom on the old, then this new science may allow us to find the happy middle.
The problem is it’s not easy to filter deuterium from water. Removing deuterium from water commercially itself is relatively new, and something that took decades to engineer and achieve. The first super deuterium depleted water (below 20 ppm) was not available in the US market until 2019.
The current method, fractional vacuum distillation rectification, requires lots of energy and a big footprint. Columns from thirty feet to six stories run continuously to separate out the molecules of HDO and the less prevalent D2O.
Miniaturizing this complex beast and taming its energy needs is the challenge. Or one better, creating an entirely new technology using novel sources of energy. This is your call. Who will be chosen to take up the challenge to push the needle forward? I once asked a Silicon Valley venture capital billionaire what he was looking for. He remarked, a couple of guys in a garage with a dog.
“If you’re looking to the stars and ready to settle on Mars, avoid delirium by depleting deuterium.
If your water is not atomically pure, then you have a problem for which there is no cure.
Get your portable DDW filter today, on sale for a limited time, only ten million Federal Reserve Notes, easy credit financing available.”
- Implanted brain chip TV ad jingle from the year 2045
I have confidence in the adage that necessity is the mother of invention. Given substantial financial and intellectual capital, elbow grease, some blood and a little luck, a DDW maker can scaled to the size and specs necessary for takeoff and become indispensable kit for all extraterrestrial environments. It costs $80,000 per gallon to send water to outer space. Elon Musk wants to carry 100 people per flight. Two liters is how much water an astronaut consumes per day. It’s not hard to do the math and see water is already a major problem to solve for long term space travel, and removing the deuterium on top of that.
Edison gave us the lightbulb, Tesla gave us AC current and wireless technology. Salk gave us penicillin. These inventions changed the course of history. Whoever creates a countertop DDW filter to meet this new standard of water purification will make no less a paradigm shifting contribution to human history and genomic evolution.
If the 20th century was all about deuterium heavy water, which was needed to create nuclear power, then the 21st century will be about light water, which is needed to boost our health to propel us to the stars.
Protium was the first element to be created; it is the alpha. It comprises 75% of the universe and is the foundation of everything else. It is the fabric of the material universe, and two thirds of the trinity of the “first water”, 1H2O (without any HDO or D2O). It is closest to meeting the definition of the proverbial fountain of youth.
Only time will tell how the revelation of Deutenomics impacts life on this planet and beyond. As the seasons turn we will know if the promise of radical life extension is achieved by deuterium depletion. The odds are good and there is nothing to lose.
About the author: Victor Sagalovsky is the cofounder and CEO of Litewater Scientific, purveyors of the most deuterium depleted water on the planet, and probably the solar system. Read his Brief History of Deuterium Depleted Water. He is working on a DDW filter for space travel but has no interest in going to Mars, although terraforming Venus is on his bucket list.
References
1. Goncharuk, V. V., Kavitskaya, A. A., Romanyukina, I. Y. & Loboda, O. A. Revealing water’s secrets: Deuterium depleted water. Chem. Cent. J. 7, 1–5 (2013).
2. Basov, A., Fedulova, L., Baryshev, M. & Dzhimak, S. Deuterium-Depleted Water Influence on the Isotope 2H/1H Regulation in Body and Individual Adaptation. Nutrients 11, 1903 (2019).
3. Rodimov, B. N. Agriculture of Siberia. Omsk. №7á,66 (1961).
4. I. V. Toroptsev, B. N. Rodimov, A. M. Marshunina, et al. in Questions of Radiobiology and Hematology. (Izd.Tomsk Univ., Tomsk. 1966) 118–126 (1966). doi:10.1134/S0006350914020146
5. Dzhimak, S., Basov, A., Fedulova, L. & Kotenkova, E. Influence of deuterium depleted water on indicators of prooxidant-antioxidant and detoxifying systems in experimental diabetes. Endocr. Abstr. 38, (2015).
6. Bateman, W. Knowledge from the Stars. (Light Technology Publ., 1993).
7. Villanueva, G. L. et al. Strong water isotopic anomalies in the martian atmosphere: Probing current and ancient reservoirs. Science (80-. ). 348, 218–221 (2015).
8. Webster, C. R. et al. Isotope ratios of H, C, and O in CO2 and H2O of the martian atmosphere. Science (80-. ). 341, 260–263 (2013).
9. Siniak, I. et al. [Consideration of the deuterium-free water supply to an expedition to Mars]. Aviakosm. Ekolog. Med. 37, 60–63 (2003).
10. Olgun, A. Biological effects of deuteronation: ATP synthase as an example. Theor. Biol. Med. Model. 4, 1–4 (2007).
11. Olgun, A., Öztürk, K., Bayir, S., Akman, S. & Erbil, M. K. Deuteronation and aging. Ann. N. Y. Acad. Sci. 1100, 400–403 (2007).
12. Kselíková, V., Vítová, M. & Bišová, K. Deuterium and its impact on living organisms. Folia Microbiol. (Praha). 64, 673–681 (2019).
13. Gyöngyi, Z. et al. Deuterium depleted water effects on survival of lung cancer patients and expression of Kras, Bcl2, and Myc genes in mouse lung. Nutr. Cancer 65, 240–246 (2013).
14. Rasooli, A. et al. Synergistic effects of deuterium depleted water and Mentha longifolia L. essential oils on sepsis-induced liver injuries through regulation of cyclooxygenase-2. Pharm. Biol. 57, 125–132 (2019).
15. Boros, L. G. et al. Submolecular regulation of cell transformation by deuterium depleting water exchange reactions in the tricarboxylic acid substrate cycle. Med. Hypotheses 87, 69–74 (2016).
16. Sinyak, Y. et al. Deuterium-free water (1H2O) in complex life-support systems of long-term space missions. Acta Astronaut. 52, 575–580 (2003).
17. Corneanu, G. C. et al. The radioprotective effect of deuterium depleted water and polyphenols. Environ. Eng. Manag. J. 9, 1509–1514 (2010).
18. Bild, W. et al. Research concerning the radioprotective and immunostimulating effects of deuterium-depleted water. Rom. J. Physiol. Physiol. Sci. 36, 205–218 (1999).
19. Yavari, K. & Kooshesh, L. Deuterium Depleted Water Inhibits the Proliferation of Human MCF7 Breast Cancer Cell Lines by Inducing Cell Cycle Arrest. Nutr. Cancer 71, 1019–1029 (2019).
20. Somlyai, G., Gyöngyi, Z., Somlyai, I. & Boros, L. G. Pre-clinical and clinical data confirm the anticancer effect of Deuterium depletion. Eur. J. Integr. Med. 8, 28 (2016).
21. Zhang, X., Gaetani, M., Chernobrovkin, A. & Zubarev, R. A. Anticancer effect of deuterium depleted water — Redox disbalance leads to oxidative stress. Mol. Cell. Proteomics 18, 2373–2387 (2019).
22. Hughes, B. Y. A. N. N. M., Bennett, E. L. & Calvin, M. PRODUCTION OF STERILITY IN MICE BY DEUTERIUM OXIDE. Proc. Natl. Acad. Sci. U.S. 45, 581–586 (1959).
23. Journal of Medicine, Physiology and Biophysics www.iiste.org ISSN 2422–8427 (Online) Vol 10, 2015
24. Bateman, Wes; Knowledge from the Stars, Light Technology Publishing 1993 ISBN 10: 092938539X ISBN 13: 9780929385396
25. Strekalova T., Evans M., Chernopiatko A., Couch Y., Costa-Nunes J., Cespuglio R., Chesson L., Deuterium Content of Water Increases Depression Susceptibility: The Potential Role of Serotonin Related Mechanism. Behavioural Brain Research. Volume 277, 15 January 2025, Pages 237–244
26. Györe I., Somlyai G. et al. “The effect of deuterium depleted drinking water on the performance of sportsmen” Hungarian Review of Sports Medicine 46/1:27–38
27. T.N. Burdeynaya et al. “Physiological effects of drinking water enriched with 1H2O”. P.K. Anokhin Institute of Normal Physiology of Russian Academy of Medical Science, Moscow. | https://medium.com/@viclove/deuterium-the-elephant-in-the-space-capsule-c099bd523473 | ['Vic Love'] | 2020-10-30 19:30:37.514000+00:00 | ['Space Exploration', 'Mars Mission', 'Deuterium Depleted Water', 'Space Technology'] |
2,406 | Working Out With Apple’s Fitness+: Exercise Made Easy for Superfans | Apple just launched its highly anticipated Fitness+ streaming service, with workouts led by world-class trainers. You’ll need an Apple Watch and an iPhone to join, but so far it looks like a promising new way to work out for anyone immersed in Apple’s ecosystem.
By Angela Moscaritolo
With “new year, new you” season right around the corner, Apple just launched its highly anticipated Fitness+ workout streaming service. Fitness+ brings a range of workouts led by world-class trainers, including Ironman champions, professional athletes, fitness club founders, gymnasts, health coaches, marathoners, martial artists, personal trainers, and yogis, to the comfort and safety of your home. As PCMag’s resident fitness expert and a professional yoga instructor, I’ve been eagerly awaiting the arrival of Fitness+ and have some first impressions to share while I continue testing the service for a full review.
How to Get Fitness+
Fitness+ is available to download now as part of iOS 14.3 and watchOS 7.2, so once you install those updates on your iPhone and Apple Watch, you’ll be ready to go. Keep in mind that Fitness+ requires an Apple Watch (Series 3 or later) paired with a compatible iPhone (an iPhone 6s or newer, or an iPhone SE).
On the iPhone, Fitness+ lives in the newly redesigned Fitness app. Once you download the iOS 14.3 update, you’ll see a new Fitness+ tab at the bottom of the Fitness app; just tap that to get started.
To follow along with the workout videos on a larger screen than your iPhone, you’ll also need an iPad or an Apple TV. To get Fitness+ on your Apple tablet, you first need to upgrade to iPadOS 14.3, then go to the App Store and manually download the Fitness app. On Apple TV, the Fitness+ app will automatically appear after you install tvOS 14.3. Fitness+ works on the following models: all iPad Pros, iPad (5th generation or later), iPad mini 4 (or later), iPad Air (2nd generation or later), Apple TV 4K, and Apple TV HD. I’m testing Fitness+ on an iPhone 12 Pro Max and Apple Watch Series 6.
The service costs $9.99 per month or $79.99 per year, but Apple is offering a three-month free trial to those who purchase a new Apple Watch Series 3 or later. Existing Apple Watch users get a one-month free trial. Apple will automatically start billing you when your free trial is up, unless you cancel at least a day before the renewal date. You can cancel early at any time by visiting Settings > Apple ID on your iPhone.
It’s Beginner-Friendly
Fitness+ is designed for everyone from exercise enthusiasts to true beginners, even those who have trouble with balance or find it difficult getting up and down from the floor.
When you open Fitness+, buttons at the top of the interface let you filter workouts by category: HIIT, yoga, core, strength, treadmill, cycling, rowing, dance, and mindful cooldown. Once you select a workout type, you can filter the classes by trainer, time (5 to 45 minutes), and music.
A section called For Beginners features a series of seven workouts specifically created for those who are completely new to exercise or getting back to it after an extended break. The first four workouts in the series are just 10 minutes each, and designed to teach you basic strength, yoga, HIIT, and core moves. You can then graduate to the final three sessions, which are 20 minutes each. After completing these seven workouts, you should be pretty well prepared to do any of the available studio workouts.
The Fitness+ trainer team
There’s also a Simple and Quick section featuring 10- and 20-minute workouts that are easy to modify and require little skill. Apple has assembled a diverse group of trainers with expertise in a range of modalities to lead its classes. The trainers take each other’s classes, so you might see Josh Crosby, who leads rowing workouts on the platform, in the background of a yoga session modifying the moves so beginners can easily follow along.
As you start taking classes on Fitness+, the service will begin recommending classes you might like based on your workout history. It also takes into account data from third-party apps connected to your Apple Health account when making suggestions. If, for instance, you use a third-party app to track outdoor runs, Fitness+ might suggest treadmill workouts. If you’ve been doing a lot of intense workouts, it might suggest a chill session to help you recover.
Metrics Matter
Every workout features a detail page with a video preview, written description, and music playlist to help you determine whether you want to take that class. When you find something you’re into, press Let’s Go and Fitness+ will connect with and pull up the corresponding workout type on your Apple Watch for automatic tracking.
A play button will then appear on both screens (your Apple Watch and the device you’re viewing the class on), and you can start the workout from either. If you need a break, you can also pause the workout from either device.
During Fitness+ classes, you’ll see real-time metrics from your Apple Watch on the screen, including your heart rate, calories burned, and activity rings. It shows the elapsed time, but you can switch this to show the remaining time if you prefer. You can also turn off the metrics completely and view the workout in full-screen mode. To customize the metrics you see on the screen, tap the lower right button during the class.
Working out can be boring, but one way Fitness+ aims to keep you engaged is by highlighting different metrics as you exercise. If, for instance, the trainer says to check your heart rate, that metric will animate on screen and show not just your current measurement, but also high and low readings. During intense pushes, you might see a timer showing how much you have left in that interval. And if you close your activity ring during the workout, you’ll see a celebration on screen.
It Fosters Friendly Competition
Unlike, say, Peloton, there are no live classes on Fitness+ at this time, but Apple says it plans to add new content every Monday. To see the latest content, visit the New This Week section.
Another key feature in Peloton and many other smart home gym equipment platforms you won’t find on Fitness+ is a class leaderboard. Instead, Apple is fostering friendly competition with a feature called the Burn Bar.
Available during workouts with intense pushes, including HIIT, treadmill, cycling, and rowing, the Burn Bar shows how your effort compares with everyone else in your weight range who previously completed the same workout. This lets you quickly see if you’re starting to fall behind, leading, or somewhere in the middle of the pack. Burn Bar data is anonymized, so it’s never connected to you, and you can disable this feature if you want.
Music Is Key
One of the things I love about Peloton is the important role music plays in the overall experience. Apple has also made music an integral part of Fitness+. As mentioned, you’ll see a playlist before every workout, and you can filter classes by music genre (chill vibes, fitness music, latest hits, pure dance, top country, everything rock, hip-hop/R&B, Latin grooves, throwback hits, and upbeat anthems). During a class, you’ll see the name of the song at the top of the screen.
You don’t need an Apple Music account to listen to the music during Fitness+ classes. But if you do subscribe to Apple Music, you can also quickly save Fitness+ class songs and entire playlists to your account. Peloton still has a leg up here, as it lets you connect your Apple Music or Spotify account to save music you hear during classes.
It Travels Well
Most smart fitness machines require a subscription ranging from $29 to $39 per month, so at just $10 per month, Fitness+ is an affordable at-home workout option if you already have an iPhone and Apple Watch. Most of us are still social distancing, but Fitness+ should also transition well as life gets back to normal and we start traveling and going to the gym again.
Beyond floor-based classes like HIIT, strength, core, and yoga, Fitness+ offers classes you can do on any rowing machine, stationary bike, or treadmill you own or have access to at the gym. If you’ve never used one of these machines, Fitness+ can teach you how. Inside each of the rowing, cycling, and treadmill sections of the app, there’s a Getting Started video designed to get you familiar with the equipment and its key features, like how to properly perform a rowing stroke, adjust a cycling bike to your proportions, and safely get on and off a treadmill.
Fitness+ also makes it easy to save classes you like and download them for offline access. This can be helpful if you want to get outside and do some yoga at the beach or a park, or if you’re traveling to an area without Wi-Fi.
It doesn’t support partner workouts (you can do classes with others, but you’ll only see one person’s Apple Watch stats on screen), but you can share your membership with up to six family members. When you open the Fitness+ app on Apple TV, it will scan the room for Apple Watches; just tap your name to access your account with personalized recommendations.
You can also use Fitness+ on someone else’s Apple TV, even if they don’t subscribe; all you need is your Apple Watch to access your Fitness+ account. This could be convenient if, say, you’re visiting a family member who owns an Apple TV, and want to work out while you’re there.
I look forward to taking a bunch of classes and really putting Fitness+ to the test to see how it compares with a Peloton membership, as well as how it fares in its own right. | https://medium.com/pcmag-access/working-out-with-apples-fitness-exercise-made-easy-for-superfans-d099e79ebe12 | [] | 2020-12-16 19:02:13.040000+00:00 | ['Fitness', 'Apple', 'Technology', 'Smartphones', 'Apple Watch'] |
2,407 | How to Reduce Bank Fraud With BPM | Recently, Tesco Bank was fined over 20 million dollars for failing to prevent debit card fraud which affected it and 131,000 of its customers. Although the bank’s controls prevented 80 percent of the attack’s unauthorized transactions, the FCA, the UK’s financial regulatory agency, determined the firm didn’t meet a piece of anti-fraud regulation which specifies banks to “conduct its business with due skill, care and diligence” to prevent this type of fraud.
Though Tesco Bank committed no criminal activity itself, the risk it assumed by failing to prevent the fraud was enough to warrant a fine. The recent example of Tesco bank reflects the regulatory penalties banks around the world face more and more frequently on a regular basis.
Fraud itself represents a significant cost to banks every year. According to McKinsey & Company bank losses due to credit and debit card losses amounted to almost $23 billion in 2016 and could reach $44 Billion in 2025. The level of risk banking fraud introduces into a bank’s financial equation is not only dangerous for banks, but for the entire global economy. For this reason, strict regulations have been passed on both international and national levels to obligate banks to reduce these types of risk.
To protect against losses due to fraud and regulatory fines, banks must understand the regulations they are subject to and best practices for compliance. Enterprise BPM software can enable banks to meet best practices by automating compliance throughout their entire operations.
Types of Fraud and Anti-fraud Regulations
Banks face an ever growing and ever-evolving list of fraud tactics. For this reason, it is more important than ever for banks follow best practices for fraud prediction and detection to mitigate the threat of losses due to fraud and regulatory fines. Here is a quick overview of the most common forms of banking fraud banks face today:
Credit/Debit Card Fraud is one of the most common and rapidly increasing forms of banking fraud. Generally, credit card fraud is broken into two categories: Card not Present, which is typically committed over the phone or online with stolen card information, or EMV Fraud, scams which involve physical EMV chips. Debit card fraud alone constituted 58 percent of losses in the banking industry in 2016 according to Financial Regulation News.
Bill Discounting fraud involves a fraudster gaining the goodwill of a bank by portraying themselves as a good, legitimate clients of the bank. The fraudster will use the bank to gather payments from its customers for a period of time. Once the bank has accepted the fraudster as a legitimate client, they will ask the bank to settle their balance before collecting payments from the customer. Then, the fraudster and their “clients” will disappear.
Money Laundering poses an increasingly complex challenge for banks as cryptocurrencies pose an unexplored threat and regulations continue to evolve each year. Last year, 18 out of Europe’s 20 largest banks were sanctioned for failing to prevent money laundering in a single week. According to Forbes, the software many banks are using to combat money laundering are now outdated, leading to a high number of false positives and higher operational costs for banks. As many as 95% of alerts are false positives.
Check Kiting occurs when clients use non-existent funds as credit using the float (the time in which money has already been deposited in the recipients account before being removed from the client’s account). This is often committed between multiple accounts in a process known as “circular kiting.”
Each of these forms of fraud can be reduced by collecting more reliable customer data and through process automation. Anti-fraud bank regulations referred to as “Know Your Customer”(KYC) and “Anti-Money Laundering” (AML) laws are designed to obligate banks to collect detailed information on their clients so they can calculate the fraud risk associated with each of their accounts. In the United States, an example of one such law is the Customer Identification Program, which includes detailed requirements for customer verification and thorough documentation of these procedures.
With fraud protection, it pays to go above and beyond basic regulatory requirements and follow best practices. According to Forbes, most regulators around the world have kept AML and KYC regulations purposely vague to encourage banks to go beyond fulfilling only its minimum requirements. Though verification requirements can vary widely from bank to bank, regulators can fine banks for failing to abstract due diligence standards on a case by case basis.
How BPM can help meet regulations
By automating customer verification, risk calculation, and suspicious activity monitoring processes with Business Process Management (BPM) software, banks can ensure that these processes are followed according to best practices each time. Additionally, and just as importantly, all documents are cataloged for a simple and fireproof audit.
The fraud prevention journey begins with onboarding. With BPM technology, banks can digitize paperwork required to create new accounts, authenticate users, and verify new customers. Then, the processing and evaluation of this information can be automated in an approval workflow. This way, new accounts are always processed according to the correct procedure each time, with an audit log to prove it.
Next, risk calculations can be automated using a BPM suite. Fraud risks should be calculated on an ongoing basis based on factors such as the device clients use to log into their account and by monitoring for suspicious transactions. These recurring calculations can be monitored with dashboards and alerts. Many banks save significant amounts of time and energy for their analysts by automating routine calculations.
Banks may use compliance software to complete many of the best practices covered so far. However, when organizations encounter problems that these software tools aren’t built to tackle out of the box, BPM can be used to extend their capabilities to meet any specific need. With workflow software, banks can build forms that enable employees to easily engage with necessary data entry and decision making from anywhere. Finally, BPM software integrates easily with enterprise technology tools like compliance management software, CRM, DMS, and others so that information is shared across platforms in real time.
Given the complexity of meeting compliance regulations in 2019, process automation is the key to any successful compliance initiative. BPM solutions provide the flexibility to coordinate human tasks and various technology systems around the specific processes a bank requires to meet compliance on all of its accounts.
For information on how our customers have used ProcessMaker to automate many of the processes listed here for greater compliance and fraud protection please read ProcessMaker’s financial case studies. | https://medium.com/processmaker/how-to-reduce-bank-fraud-with-bpm-d15cb6b462aa | ['Matthieu Mcclintock'] | 2019-06-17 20:23:17.653000+00:00 | ['Fintech', 'Finance', 'Workflow', 'Technology', 'Banking'] |
2,408 | How to Present and Justify Project Benefits to Sponsors | The diagram above addressed the three components of the SMART principle:
Specific : Key activities, deliverables and targets
: Key activities, deliverables and targets Measurable : Key performance metrics
: Key performance metrics Timely: Key milestones and timeline
In the next section, I will address how to quantify the values – which fulfils the achievable and realistic principles.
#3: Estimate and quantify the overall financial benefits
Investors and sponsors often have a set of projects to sponsor, and financial benefits are one of the primary factors that affect their prioritisation on which are the projects to sponsor.
You’ll want to ensure that your financial numbers are not only appealing but also achievable and realistic — by being able to justify and rationalize the numbers logically such that they are not plucked out of thin air (POOTA).
The pre-requisite is to have some form of clarity on the deliverables of your project. Details can include the key epics and features to be delivered at each phase. Knowing what you plan to deliver allows you to better compare the features of your product against others to obtain a reasonable estimate.
The approaches
There are two common ways to establish the estimates:
Top-down approach: showcase the increase in revenue by benchmarking against industry peers who have succeeded Bottom-up approach: showcase the reduction in costs by diving into the process/feature details and aggregating the values
Do note that these two approaches can also be used interchangeably, but for simplicity’s sake, I’ll only be providing an example of each approach.
Top-down approach (increase revenue)
The top-down approach involves conducting market research on industry peers who have already succeeded in implementing a solution of similar nature (based on feature comparison, etc.) and use the data as a benchmark or reference.
To obtain the project cost:
IT project delivery costs are often charged based on either 1) time-bound or 2) deliverables. Other components include infrastructure hosting costs (AWS/Azure/GCP), development tools (e.g. Atlassian suite), etc. The best case is having a reference project that would provide you with a benchmark of the cost of resources, epics/deliverables, and workstreams.
Alternatively, you can consider consulting an IT project manager or delivery consultant to advise on the costing. You can also try to do it yourself if you’re clear on the deliverables and have some project management experience.
Ample buffers and assumptions would often have to be factored into the cost estimates to manage the expectations of the sponsors.
To estimate the returns on investment:
The best case is having industry peers who have successfully implemented a similar solution and have resulted in an increase in revenue/productivity by X%. These data are typically available at the websites of consulting and product companies, e.g. McKinsey/Accenture, and Salesforce/Microsoft, etc.
If your project is a fairly new venture and there aren’t much data available to benchmark against, you can consider an alternate approach e.g. solving a Fermi problem.
Presenting investment/revenue returns
When it comes to presenting the important data of concern to sponsors, knowing what to omit/include is essential.
Once you’ve managed to obtain a benchmark revenue increment e.g. 30%, you can use that amount and work backwards to project the increment over X years and plot it out against a chart similar to the one below.
With the 30% increment spread over the years, you can then calculate the Compounded Annual Growth Rate (CAGR) using excel. CAGR is a good indicator of growth upon implementation of a solution. Internal Rate of Return (IRR) and payback period provide your sponsors with an idea on when they’ll be getting their returns on investment. | https://medium.com/the-internal-startup/how-to-present-and-justify-project-benefits-to-sponsors-9f4d40716621 | ['Jimmy Soh'] | 2020-09-06 08:12:38.909000+00:00 | ['Strategy', 'Innovation', 'Digital Transformation', 'Technology', 'Startup'] |
2,409 | Top 50 Tech Influencers on Twitter | Here goes the Top 50 Tech Influencers on Twitter from India (Name, Twitter Profile & Handle):-
1 Gaurav Chaudhary
Engineer by Education, Entrepreneur by Profession, Nano-Science Researcher by Interest, YouTuber by Passion… http://youtube.com/TechnicalGuruji Your Daily Tech Dose!!
https://twitter.com/technicalguruji
2 Arun Prabhudesai
Entrepreneur, Youtuber, Tech Influencer, *MADE IN INDIA*
https://twitter.com/8ap
3 Dhaval Patel
TV Media Panelist. Interested in Politics, Economics, Public Policy, Technology, Cyber Security, Foreign Policy. Views Personal, RT are not Endorsement
https://twitter.com/dhaval241086/
4 Amit Bhawani
Founder & Editor-in-chief, . Tech Blogger, Digital Marketing Co, Cause Runner, Traveler, Motivator
https://twitter.com/amitbhawani
5 Shradha Sharma
Founder, @YourStoryCo, , Every story deserves to be told, and heard. Tell me.
https://twitter.com/SharmaShradha
6 Dr Ganapathi Pulipaka
Chief #AI #HPC Scientist | #Speaker | PostDoc CS, PhD | Bestselling #Author | 18.4M V | Top #BigData #DataScience #MachineLearning #IIoT Influencer
https://twitter.com/gp_pulipaka
7 Ruchi Dass
Managing Director, @HealthCursor_In, . #PublicHealth and Health Technology #HITchicks #digitalhealth #HITsm #HCLDR Empowering #WomenInTech #datascience
https://twitter.com/drruchibhatt
8 Amit Agarwal
Computer Science Engineer (IIT), Developer Expert for Google Workspace (GSuite) and Apps Script, founder of http://labnol.org, a popular tech howto website since 2004
https://twitter.com/labnol
9 Raghunath Mashelkar
National Research Professor, Former DG, CSIR & President, Indian National Science Academy,National Innovation Foundation, Global Research Alliance
https://twitter.com/rameshmashelkar
10 Abhishek Bhatnagar
Founder & Editor-in-chief of @gadgetstouse — Entrepreneur, Traveler, Youtuber, Gadget Lover, Reviewer, Vlogger #GTULife
https://twitter.com/abhishek
11 Nikhil Pahwa
Founder @medianama, @tedfellow @Asia21Leaders, | cofounder savetheinternet ex, @internetfreedom, / Mere yaar patang udaya kar. Kat jaye toh gham na khaya kar
https://twitter.com/nixxin
12 Rahul Prajapati
तस्माद योगी भवः | Techie | Garba Lover | Gratitude is a Must | Bhullakkad | Digital Marketer | #SEO #SMO | Founder @indibeam | [email protected] :)
https://twitter.com/RahulReply
13 Harsh Agrawal
Award winning pro-blogger & Speaker. Fountainhead @ShoutMeLoud @coinsutra
https://twitter.com/denharsh
14 Pareekh Jain
Industry Analyst. Author. Alum IIM Bangalore, IIT Delhi.
https://twitter.com/pareekhjain
15 Varun Krishnan
Founder @FoneArena http://FoneArena.com , Strategic R&D @ Fone Labs, Technology Consultant @ Tera Omni Media
https://twitter.com/varunkrish
16 Raju PP
Tech Journalist. Opinions are of my employer, because Self-Employed. Editor @techpp .com. Tech | Cricket | Life
https://twitter.com/rajupp
17 Sourabh S Katoch
Data Scientist | Machine Learning Engineer Writes about AI Research, Data Science, Python Programming & Development. Educator, Open Source advocate & memer.
https://twitter.com/SourabhSKatoch
18 Ashish Sinha
Founder, FWD (and NextBigWhat). A ProductGeek.
https://twitter.com/cnha
19 Rakesh Goel
#RakeshGoel #SrDirector @Capgemini
#Dreamer #Influencer #GuestOfHonor #Blogger #FellowshipAward #Innovation #KeynoteSpeaker #TOGAF9 Occasional #Writer #Vegan
https://twitter.com/RakeshRGoel
20 Prakash Sangam
Tech Industry Analyst, Forbes, RCRWireless, EETimes contributor, 3GPP/ETSI member, Cellular IPR expert, Cover 5G/AI/IoT/Wi-Fi/Cloud. Ex @Qualcomm @Ericsson @ATT
https://twitter.com/MyTechMusings
21 Anirudha Karthik
Blogger at techandwe & http://tech2touch.com
https://twitter.com/tech2touch
22 Sanchit Vir Gogia
Chief Analyst, Founder & CEO @Greyhound_R | Insights #SVGPoV | Personal #SVGWorld | Poetry @svgpoet | INTJ | [email protected]
https://twitter.com/s_v_g
23 Dev Khanna
An inquisitive #learner, on a path to explore #tech #innovation #sustainability #energy #cars #IoT #AI #AR #VR
https://twitter.com/CurieuxExplorer
24 Dr Omkar Rai
Director General Software Technology Parks of India @STPIINDIA
Government of India. Ph D (Statistics) BHU Varanasi. RTs are not endorsements.Views are personal.
https://twitter.com/Omkar_Raii/
25 Rituparna Ghosh
Founder WhizzStep #ArtificialIntelligence . Our Commitment for a better future #DeepLearning #Machinelearning. We are in #AI Education♦ Research♦ Innovation
https://twitter.com/RitupaGhosh
26 Achyuta Samanta
Kandhamal MP|| Founder of KIIT (Kalinga Institute of Industrial Technology) & KISS (Kalinga Institute of Social Sciences)
https://twitter.com/achyuta_samanta/
27 Ranjit
A resident geek for over three decades! A Youtuber I have my own blunt point of view about Technology. On Instagram http://instagram.com/geekyranjitofficial
https://twitter.com/geekyranjit/
28 Vikas SN
Works @ETtech . Previously @Medianama . A Hindi Movie Buff.
https://twitter.com/tsuvik
29 Rakhi Tripathi
Against any form of inequality; Associate Professor, IT. Research: digital technology and social issues. I give ‘proud moments’ to trolls by blocking them
https://twitter.com/rakhitripathi/
30 Jaspreet Bindra
My books, The Tech Whisperer & The Immune Organisation, at http://allhisbooks.com Founder UNQBE,Digital Matters,ex- Mahindra, Microsoft, TAS Digital Transformation
https://twitter.com/j_bindra
31 Achyuta Ghosh
#Research Head @Nasscom Helping organizations master #emergingtech | Views are personal | #AI #blockchain #IoT #fintech #mobility #startups #quantumcomputing
https://twitter.com/achyutaghosh
32 Prosenjit Datta
I write on business, economics and technology
https://twitter.com/ProsaicView
33 Dr Manoj Kr Patairiya
Pursuing trans-disciplinary communications between sciences, media, publics and governance globally. Tweets Personal
https://twitter.com/manojpatairiya
34 Ramesh Dontha
Award winning/Best selling author | Host of Data Transformers podcast & ‘AI — The Future of Business’ series, Entrepreneur
https://twitter.com/rkdontha1
35 Srinivas Tamada
Founder, Blogger and Thinker — I love the WEB
https://twitter.com/9lessons
36 Rimjhim Ray
Quietly building @SpotleAI @imperialcollege @spjimr | Advisory Board @NasscomR
https://twitter.com/GlobeSlother
37 Pradeep Kumar
CEO & Founder of @Slashsquare
https://twitter.com/SPradeepKr
38 Geetesh Bajaj
PowerPoint MVP, Runs http://Indezine.com, teaches presenting and PowerPoint skills, consults about presenting concepts, and writes books.
https://twitter.com/Geetesh
39 Jayashree B.
Dir Comm @mssrf . Ex @unicef @icrisat @indiatoday . Passionate about #devcomm #parenting #gender #scicomm tweets personal https://linkedin.com/in/jayashreeb
https://twitter.com/jai_amma
40 Abhijeet Mukherjee
Current mode: Self-introspection. Previously founded and ran @GuidingTech @GTHindi . Perpetual reader and learner.
https://twitter.com/abhijeetmk
41 Vandana Gombar
Editor, BloombergNEF. Columnist, journalist, author…living in a world of words and ideas! Excited by global energy+mobility transition. Tweets personal!
https://twitter.com/vgombars
42 Priyanka Pani
Strategising content @Vcatsindia @9UnicornsVC | Founder @didis_my | Building http://Naari.Tech | Former Journalist
https://twitter.com/aaravmeanspeace
43 Vanesh Mali
Software Professional, Blogger, Founder @helloTechnoVans Follow me to get daily #bloggingtips #businessideas
https://twitter.com/vaneshmali
44 Atul Maharaj
Technology, Travel, Food and Lifestyle Blogger, Best Blogger Telangana — IBA2017, Zomato Connoisseur
https://twitter.com/Atulmaharaj
45 Arvind Mahajan
strategy & innovation,management consultant, independent director, diverse interests,sports & movie buff, trend watcher
https://twitter.com/arvindmahajan
46 Surabhi Dewra
Founder @_careerguide #Bitsian #Edtech #StrongWilled #WomenInTech
https://twitter.com/SurabhiDewra
47 Sahil Parikh
Tech, Code, Author, Golf, Tennis, Travel. @UNC Chapel Hill Alum. I write Tech Friend, a monthly newsletter
https://twitter.com/sahilparikh
48 Rajamanickam A.
#Tech YouTuber https://youtube.com/user/qualitypointtech… #Motivational Quotes Book http://thequotes.net/motivational-e… #EmergingTech News http://RtoZ.org
https://twitter.com/rajamanickam_a
49 Jignesh Padhiyar
Co-founder of @iGeeksBlog , Apple Fanboy, #Blogger, and Daydreamer.
https://twitter.com/padhiyarjignesh
50 Hemant Batra
Animal Lover | Cricket Freak | Tech Geek | Beginner Level Hindi Tech Youtuber.
https://twitter.com/iMrBatra
Related Posts
Top 50 LinkedIn Influencers in India
Top 40 Indian Marketers on Twitter
Top 50 Finance Influencers on Twitter
Note- The ranking has been done by Bloggers Alliance Team based on 2 criteria with equal weightage to each viz. number of followers and engagement rate. In case of any query or if you have been missed out by mistake, you can write to [email protected]. The list is based on Twitter data as on 30th December, 2020.
The list includes influencers in Edutech and HealthTech. Other interdisciplinary areas like HRTech, FinTech etc have been clubbed with HR, Finance etc. | https://medium.com/@bloggersalliance/so-here-are-the-top-50-tech-influencers-on-twitter-from-india-name-twitter-profile-handle-e10545a223fd | ['Bloggers Alliance'] | 2020-12-31 12:16:49.533000+00:00 | ['Technical Analysis', 'Blogger', 'Technews', 'Influencer Marketing', 'Technology'] |
2,410 | How Has Technology Influenced Modern Music? | How Has Technology Influenced Modern Music?
When it comes to music nowadays we can see that technology has had a huge influence on not only the style of modern music but the methods used for listening and creating music. Before Apple Music, Spotify, and other streaming platforms, people would either have to hear music live at concerts or on the radio. Also, music production especially has seen a dramatic change due to technology. Whether you like tor not, it seems that technology has changed every aspect of music. In this essay, we are going to discuss the specific aspects of music that technology has influenced.
(Logic Pro X is just one example of music production technology)
First, let’s look at music production and how technology is used in the process of creating music. If you listen to the radio, go on Spotify, Apple Music, Youtube, or any other streaming platform that is used to promote music, you can see that Hip-Hop has become a huge genre in Modern music. Specifically, we have seen a blend of Hip-Hip and pop music with artists such as Drake. But when you listen to this style of music, do you ever wonder what exactly goes into the process of making it? In other words, how do artists like Drake make their music, and what technology do they use? Turns out that most music production software is very universal when it comes to music production. Meaning everyone from underground rap artists to mainstream pop stars uses the same type of music production software. Logic Pro X, FL studios, and Garage band are some of the main platforms that are used in almost all music production. The reason why these platforms are so popular, even among mainstream artists, is because they all function in the same way. It doesn’t matter if you make a song in Logic Pro X or FL studios, it will sound the same in the end. You can use both software to record and edit vocals, musical instruments, and other sounds used in music production. This software is especially helpful for music producers who make beats for their artists.
(Although this doesn’t look like much, you can still record professional sounding music production with the right equipment)
Technology has had a huge influence on the way people are making modern music, but what exactly does this shift in technology look like. In other words, what are some examples that prove technology has changed the way artists make music? Nowadays we are seeing more and more people making billboard hits just by recording in their home studio. This is more of a recent phenomenon as in the last couple of years we have seen a dramatic increase in artists moving into the mainstream by making hits in their bedroom. This is because recording technology and music production software has become more accessible to the average person creating a wave of hidden talent.
(Lil Mosey)
One of my favorite examples of an artist who became huge off of recording in their bedroom is a rapper/R&B artist called Lil Mosey. In 2017 the young rapper recorded a hit song called “Pull Up” at the age of 15, which kickstarted his career. The reason why I like to mention Lil Mosey as an example of how technology has influenced modern music is not just because his music sounds good, but he is the perfect representation of what modern music is. When Lil Mosey made the song “Pull Up” at the age of 15, he used the technology and resources that were available to him. Lil Mosey used Youtube as a way to find beats or instrumentals to rap/sing over and recorded songs at a home studio that had the basic music production software along with recording equipment to make the song. Also, he submitted his music video for the song to the Youtube channel “Elevator” so it could be seen by more people as the channel had millions of subscribers. The song currently has 34 million views, but more importantly, Lil Mosey now has a fruitful career and has made multiple hit songs in the past three years, and some of these hits were literally recorded in a closet. The point I’m trying to make is that this new technology used for music production is changing every aspect of how music is made. Musicians can now make billboard hits in their own homes because of the accessibility of music production software and social media platforms like Youtube that make it easy to obtain beats/instrumentals from producers.
(Nick Mira, a music producer who has made multiple hits making beats in FL studios for rappers like Juice Wrld, and Lil Tecca.)
This leads us to our next topic, which is the actual process of making music with technology such as music production software. On a basic level, there are two aspects of a modern song which is the beat or instrumental and the vocals. Now depending on what the genre or how professional the mix is, there could be multiple people involved with this process of making music. For example, songwriters, beatmakers, and sound engineers mix the vocals. But for the most part, the majority of people only pay attention to the singer/rapper and producer. You could even argue that the producer is usually left out of the equation as a lot of people don’t even realize mainstream artists don’t make their beats. First, let’s discuss the job of the producer when making a song and also how technology has changed the way the producers make music. The job of a modern-day producer, especially in the Hip-Hop and Pop genre is making the beats/instrumentals for a rapper or singer. Production software like Logic X Pro or FL studios is used significantly when making instrumentals as a lot of the drums, instruments, and sounds are electronic. There are so many ways that producers create these instrumentals to the point where it would be impossible to explain. Production software like Logic X is limitless, as you can download millions of different sounds and plugins to make your unique instrumentals. Also, social media has played a serious role in how music is made. Producers now sell beats and instrumentals on websites so rappers can purchase and use them for their songs. Platforms like Youtube, Instagram, and now Tik Tok, are being used for producers to reach large audiences and potential customers to buy their beats. If you go on Youtube right now and search “Type Beat,” millions of different instrumentals will pop up from producers all over the world. The reason why this has changed modern music forever is that artists now have unlimited potential for making songs. You could find one of the best producers in the world that has a unique style just by looking through youtube videos. What I’m trying to say is that when more people have access to music production technology, there is going to be an influx of talent. With this influx of talent, social media will make it even more available to other people.
In conclusion, technology has changed almost every aspect of music. The way it is made, promoted, and listened has changed dramatically over the past few years.
Sources:
http://www.ronaldshannonjackson.com/how-has-technology-changed-the-way-of-music/#:~:text=The%20way%20we%20listen%20to%20music%20has%20changed%20dramatically.&text=The%20actual%20production%20of%20music,different%20in%20a%20recording%20studio.
https://www.theverge.com/2018/9/28/17874576/music-production-laptop-studio-producer
https://www.elevatormag.com/behind-the-video-lil-mosey-pull-up-w-yungtada
https://medium.com/@brokenstereo/bedroom-pop-and-the-rise-of-the-diy-artist-1946e83bc7e0
https://www.rollingstone.com/pro/features/lil-mosey-blueberry-faygo-945083/
https://www.forbes.com/sites/johnkoetsier/2017/07/28/300000-artists-are-making-20m-beats-is-the-new-music-industry/?sh=6cfbb30285bc | https://medium.com/@joemow53/how-has-technology-influenced-modern-music-d064ca606365 | [] | 2020-12-09 01:13:53.359000+00:00 | ['Pop Music', 'Rap', 'Music Production', 'Technology', 'Music'] |
2,411 | The Cooperative Data Commons. We’re very excited here at Ledgerback… | About LedgerbackØDCRC
Established in 2018, the Ledgerback Digital Commons Research Cooperative (LedgerbackØDCRC) is a nonprofit cooperative association and distributed p2p network for unifying the study of the internet and society and fostering collaboration between stakeholders to advance towards a global technological commonwealth.
Our research approach is an inter/cross-disciplinary approach, with the goal to eventually employ an anti/antedisciplinary approach as we continue to grow.
Global Technological Commonwealth
A global technological commonwealth (as summarized here) is a sociotechnical imaginary (i.e., a vision of the future) that “consists of post-capitalist society where communities of mutual interest cooperate in the construction of institutions of regenerative economic relations” [1]. The technological design principles include:
“incorporating planetary boundaries,
modelling on natural biological ecosystems,
enabling the redefinition of value,
enabling radically democratic coordination and governance, and
allowing for the growth of a cooperative commons as the desirable future” [1].
For more information on the global technological commonwealth (and to get some background), we recommend reading Dr. Sarah Manski’s article, Distributed Ledger Technologies, Value Accounting, and the Self Sovereign Identity.
Areas of Interest
Our areas of interest include, without limitation:
Web 3 technologies (blockchain, pubs, secure scuttlebutt, fediverse, smart contracts, etc.) Collaborative economy (platform ecosystems, business models, platform capitalism, platform cooperativism, ownership economy, p2p/commons, digital labor, social contracts, etc.) Future of work (open value accounting, peer production, self-management practices, digital organizations, etc.) Digital Infrastructure (internet service providers, hardware, mesh networks, machine-to-machine economy, Internet-of-Things, etc.) Data science and ethical AI (AI/ML, human-in-the-loop AI, data analytics, algorithmic policy, algorithmic governance, etc.) Information privacy and security (data stewardship, cybersecurity, privacy-by-design, zero knowledge proof, cryptography, etc.) Knowledge Commons (notetaking tools, knowledge repositories, decision-making models, decision analysis, collective intelligence, swarm intelligence, EdTech, etc.) Metascience (open science, citizen science, science funding, bibliometrics, publishing, etc.) Personal Data or Digital Identity economy (data stewardship, data monetization, self-sovereign digital identity, decentralized identifiers, digital identity, decentralized identity, data privacy, data cooperatives, data trusts, etc.) Open Finance (e.g., alternative currencies, timebanking, community currencies, decentralized finance, prize-linked savings accounts) Complex systems (game theory, mechanism design, dynamic systems, simulation, etc.) Cryptoeconomics (bonding curves, cryptoprimitives, complex systems, peer prediction, schelling points, tokenization, etc.) Sustainability (circular economy, renewable energy, community-owned utilities, etc.) Science and Technology (how science and technology interact with society positively and negatively, and how the relationship between them can be changed for the social good)
Problem Statements
Some example problem statements we are investigating are described in the following articles:
Currently, LedgerbackØDCRC is run by volunteers (and we thank them for all their effort!).
Membership Benefits
Our primary stakeholders or intended beneficiaries of our membership are investigators (scholars, researchers, academics, activists, makers, technologists, etc.), practitioners, citizens, and our staff (the people who make LedgerbackØDCRC run!) .
The benefits we provide or plan to provide to our members includes:
online portal (email included) cloud infrastructure and interactive computing infrastructure combining resources mapping, data analytics, knowledge tools grantwriting support fundraising support sharing experiences publishing support research assistance networking offline and online informing members of opportunities providing resources
We do not have a membership fee (no need to pay $2,500.00 to join our community) but we do have annual fees ($50.00/year or provide 40 hours of time to cooperative-directed activities) to keep the cooperative operational.
Join us via the form below or send an email to [email protected].
Describing LedgerbackØDCRC
The LedgerbackØDCRC is best understood as multi-purpose cooperative (we don’t fall neatly into a category 😖) that can better be described by its functions (or really a mix of a foundation, ecosystem and a research institute):
Research Institute: We produce original research (basic, applied, empirical) and analyses on the internet and society, formulate models, tools, and designs and practices, grow a body of knowledge on the internet and society with an emphasis on how to transition towards a global technological commonwealth, develop prototypes, open source software and proof-of-concepts, and run citizen science projects. Data Cooperative: We produce and analyze datasets, trends, and other areas of interest by collecting publicly available data or curating data from our members or participants in our projects, and offer our analyses and datasets to the general public and interested parties. Foundation: we support efforts to advance towards a global technological commonwealth, hosting events and workshops, hosting distributed communities, and acting as a host for the greater Ledgerback ecosystem. Observatory: We monitor progress among the many sociotechnical ecosystems Academy: We produce open source educational materials and help others find and take courses on the internet and society, and develop the skills needed to cause transformational change towards a global technological commonwealth. Distributed community: We work together with people all across the world online to build a knowledge commons and provide resources to those who need them.
Supporting the LedgerbackØDCRC
You can support us in many different ways including: | https://medium.com/ledgerback-blog/the-democratized-data-commons-93f825b576bb | [] | 2020-12-25 23:42:06.163000+00:00 | ['Platform', 'Decentralization', 'Technology', 'Data', 'Blockchain'] |
2,412 | HR metrics = flaming joke | We live in a supposed era of big data — sorry, Big Data — right now, and as such, every team/silo/organization has to “prove it” with metrics. As a result, there’s been a bunch of discussion about HR metrics of late, typically in the form of won’t-happen-for-a-while concepts like “people analytics.”
Logically it would make sense for HR metrics to be often-considered, as HR touches the biggest spend of any company (hiring people) and theoretically has data on the performance of those people (and their managers). But the whole deal with HR metrics is extremely fraught, for a number of reasons.
HR Metrics Problem №1: No decision-maker cares
This will gradually change, but in general most executives could give 0.2 shits about Human Resources. To many of these guys, who view themselves as world-builders, HR serves these functions:
Processes
Fire drills
Get out the people I don’t like
Hopefully staffed with a few hot 20-something blond girls fresh out of school
Is this normative at all places? No. But almost everywhere I’ve worked or talked to my friends about, this is how executives view HR. It’s not “seat at the table” material. This problem underscores everything else. If you don’t care about the department, well, you won’t care about HR metrics either.
HR Metrics Problem №2: Our relationship with data
This is fraught in all departments right now. Many top decision-makers don’t really understand data, and many organizational processes are set up so that people can off-load the responsibility for the analytics. No bueno. Companies also tend to “throw money at the problem” of data, hiring $$$ data scientists instead of doing the more logical thing, as Peter Cappelli points out:
In short, most companies — and that includes a lot of big ones — don’t need fancy data scientists. They need database managers to clean up the data. And they need simple software — sometimes even Excel spreadsheets can do the analyses that most HR departments need.
Yep. Simplicity matters a lot here.
And №3: the HR metrics we capture and how we do it
Usually this is going to be about turnover, cost per hire, and maybe some employee morale evaluations. The problem is that a lot of this comes from performance reviews, which are awful, or employee engagement surveys, which typically occur once a year and that’s it. (And then no one thinks about it until the process begins the next year.)
Admittedly there are more real-time HR metrics solutions now — Waggl, TinyPulse, etc. — but I wouldn’t say they’re “at scale” in terms of companies using them a lot. It still very much feels like we half-ass HR data; we collect a bunch of stuff once a year, maybe do a few slides on it, and that’s it.
Let’s talk about work, baby, let’s talk about you and me, let’s talk all the good things and the bad things in this newsletter on Thursdays…
Kind of amazing if you think about it, since HR “owns” the people aspect of the business — which should be a really big deal. It isn’t, though. I think that’s largely because no one cares, somewhat because “we’ve always done it that way,” and analyzing supply chain or operations numbers seems more “business-like.” Guys want to feel “business-like” because it’s fun to them.
How can we improve HR metrics?
Couple of ideas:
Make sure the databases where info resides “talk” to each other (as noted above)
Tie everything to cost — how much $$$ is being lost on turnover?
Connect turnover rates back to specific managers, so that they can be ID’d and improved
Calculate cost per hire, but don’t live by that number; cost-cutting measures shouldn’t be the norm when getting good people
Analyze the metrics you have more frequently
Use quick, “pulse” surveys as opposed to once a year stuff
Have actionable returns on the bad parts of the employee satisfaction surveys
Care
Those are just some quick ones off the top of my head. I’m sure there are a million LinkedIn thought leaders right now meowing about People Analytics and how it’s going to change everything, but you know what? It won’t. First off, execs still won’t care. Second off, if you design a “prototype perfect hire” and then try to get 1,000 of those, it just means your company will have tons of homophily. Your ass will get disrupted faster than you say “Johnny from Seattle just designed a new app.”
If you want HR metrics to improve, then, start by caring — then move to tying everything back to the money and being consistent with your analyses.
Anything else you’d add on HR metrics? | https://medium.com/@tedbauer2003/hr-metrics-flaming-joke-30438eeceaea | ['Ted Bauer'] | 2020-12-11 11:32:49.505000+00:00 | ['Hr Technology', 'Future Of Work', 'HR', 'Work', 'Metrics'] |
2,413 | “Education” for the 2020–2021 school year (PART II) | Photo by Andrew Neel on Unsplash
Distance Learning Technology
The story is part two of my reflections about planning for the summer of 2020. I will focus on the technical considerations we tackled to situate ourselves for the hybrid fall school year’s demands.
As we worked through the summer, the expectations for a remote or hybrid start to the school year were becoming clearer, and we verified that Zoom would still be the platform of choice for the community.
We tested several “appliances” that would tie right into Zoom and give the faculty a familiar interface to work with as they link students’ classrooms with their remote peers. We settled on the Poly Studio X50 system (this includes a control tablet and camera/speaker and microphone bar). Epson Brightlink projectors display the class, students, and work on the screen, and the remote students can see their peers in the room. Faculty connect to the meeting from their device and classroom, and remote students all see the same presented materials. It was not perfect, but provided the best solution for this mixed in-person and remote classroom.
To support the sixty (plus) Zoom Room cameras, we wanted to ensure adequate bandwidth to the internet to guarantee that classes looked and sounded good. Our bandwidth to the internet was 500MBs (up and down) for our primary connection. Using PRTG, I have been monitoring several systems, including our internet pipe. Before installing the Poly Systems, the data did not show upload or download speeds exceeding 350MBs. We wanted to consider the network load of the Zoom Rooms.
Following some discussions, we worked on updating our RCN bandwidth to 2GBs (up and down). The updated speeds give us a lot of room, and with the costs of internet services being so reasonable, the monthly costs were negligible. As of November, the PRTG charts continued to show no greater than 450MBs internet speeds (close to 500MBs). The reports from the classrooms have been good, and no broad or systematic issues.
The increased maximum internet speed required updating our Cisco ASA firewalls to support 1GBs plus internet speeds. All of this work required time, scheduling, adjustments, and some downtime. We coordinated between RCN and our partners to ensure as quick and efficient transition to the increased speeds.
We replaced aging HP network hardware switches for newer Cisco switches providing up to 10GBs fiber between switches and substantially improving the backplane speeds of these current devices.
Infrastructure is critical in assuring the use of the Zoom Room classrooms and the 1000+ devices on the network on a daily basis supporting our faculty, staff, and students.
Infrastructure takeaways. For any deployment at this time, providing the best communication from the endpoints to the internet is critical. Monitor and review the bottlenecks and prepare for the best ways to ensure smooth classroom experiences. | https://medium.com/@adamsonscott/education-for-the-2020-2021-school-year-part-ii-46ecef0d758a | ['Scott Adamson'] | 2020-11-25 18:19:25.634000+00:00 | ['Education', 'Zoom', 'Technology', 'Distance Learning', 'Infrastructure'] |
2,414 | Step 2/13 How to start your home automation | Everything that you want to achieve in your smart home can be done within our RobertSmart app. We give you the option to choose between various settings and smart programs to make your day seamless and delightful. Only you know what is best for your daily routine, so we provide the right tools and leave the creative part up to you.
Our app is easy to navigate, however, you can create a layout that better fits your preference by changing the name and location of each device. Within the device management section, you can move around each device and place it wherever you see fit. If you click on each device on your home screen you will gain access to all the device settings. Therefore, it could be helpful to put the devices that you use most often at the top of the list for easy access. At the top bar of the main page, you will notice an overview of the weather forecast. If you choose to set up your Home Location, our app will show you the weather conditions around this location at any time of the day while also enabling additional location-based functions.
Smart Programs allow you to take the next step in your home automation journey. With our app, you can set up for all your smart devices to follow your daily rhythm and work without your interference.
We give you the selection of two types of smart programs:
Tap-to-Run feature can be understood as a shortcut for smart device groups or specific features. This smart program allows you to link together various devices and their smart functions. When creating a Tap-to-Run, you will be able to choose various tasks and features that you can all activate with a single tap.
feature can be understood as a shortcut for smart device groups or specific features. This smart program allows you to link together various devices and their smart functions. When creating a you will be able to choose various tasks and features that you can all activate with a single tap. Similarly, Automatons work based on the conditions and tasks that you assign. However, once you have set up Automatons, you will not have to adjust anything manually. Link together multiple devices and set up specific conditions for the devices to automatically work according to your requirements.
Within the Profile section, you can access all your general settings for both your Account and virtual Home. Here you can add a photo, write a nickname and arrange your security details. Moreover, your notifications will be saved within the Message Centre, where you will see in detail when each device has sent you a notification. Even if you disable push notifications on your phone, all messages will be archived here in case you want to access them.
If you have decided to join or create a new Home, this section is the right place where to do it.
Therefore, whenever you need to adjust overall settings, it all can be found by clicking on the Me button on your home screen.
Our goal is to provide a user experience that is not only practical but also fun. Create a home where form follows function and start building your smart home Assistant with our RobertSmart app. Next week we will explain how you can set up your Profile and customize your first virtual Home. | https://medium.com/@robertsmart/how-to-start-your-home-automation-529d439d9495 | [] | 2021-03-16 07:12:39.960000+00:00 | ['Home Improvement', 'Smart Home', 'Smart Devices', 'Technology', 'Apps'] |
2,415 | Understanding Accrued Expenses | Businesses of all types rely on good accounting practices in order to run smoothly and ensure that their operation is making a profit. The methodologies may differ from company to company, but they all fall into one of two general categories: cash basis or accrual basis. Most individuals and smaller businesses use the cash method for ease of use, but larger and more complex businesses use the accrual method of accounting. One aspect of accrual accounting that can be somewhat confusing is related to accrued expenses.
Basis of Accounting: Accrual vs Cash
To really make sense of how accrued expenses fit into a business’s financial reporting, it’s crucial to understand the difference between the two methods. The fundamental difference between the two is related to when revenue and expenses are recognized. This is an important distinction, both in terms of keeping accurate books and staying in compliance with SEC regulations. Public companies, for example, are required by the Generally Accepted Accounting Principles (GAAP) to use the accrual method in large part because of revenue and expense recognition.
Cash basis accounting, as noted earlier, is typically used by small businesses or for tracking personal finances. As the name implies, this method is all about cash flow and tracking money going in and money going out. Revenue is recognized in the cash basis method when the money is actually received, either in currency or a credit to a deposit account. Likewise, expenses are only recognized when money is paid out. The advantage of cash accounting is its simplicity, but the disadvantage is that the focus on cash flow may overstate the company’s financial health by omitting payable accounts.
Under the accrual basis of accounting, by contrast, revenue is recognized after it is earned; in other words, it is recognized after a good or service is delivered to a customer with the expectation of being paid at a later date. Likewise, expenses are recognized on financial statements after they are incurred: after a transaction but often before any money is paid out. Understandably, this method of accounting is more complicated, but it also paints a more accurate picture of a company’s finances because it incorporates accounts payable and accounts receivable.
What Are Accrued Expenses?
Accrued expenses are a component of accrual basis accounting, and they represent expenses that are recognized before they have actually been paid. It is because accrued expenses are considered current liabilities on the balance sheet (because they are an obligation to make cash payments in the future) that they are sometimes also referred to as accrued liabilities. Accrued expenses can sometimes be estimates of what will eventually be paid out as well. The following are some examples of accrued expenses:
supplies have been purchased but no invoice has been received
accrued interest expense
product or service warranties
taxes
employee bonuses, salaries, or wages
utilities
A somewhat related term is a prepaid expense. Unlike an accrued expense, a prepaid expense is paid in advance for goods or services that will eventually be received in the future. Prepaid expenses are actually recorded as assets on the balance sheet at first; over time, the value of these assets is expensed and is noted on the income statement. Whereas accrued expenses are always recognized in the period in which they are incurred, the value of prepaid expenses can be measured over multiple accounting periods.
How Are Accrued Expenses Recorded?
When accountants and bookkeepers reconcile the general ledger, they usually treat an accrued expense journal entry as a debit to an expense account and a credit to an accrued liabilities account; this is how both expenses and liabilities are increased. An accrued liability is considered a reversing entry (a kind of adjusting journal entry) that is temporarily used to adjust the books between different accounting periods.
Find the Best Data Package
At Intrinio, our team of financial data experts is passionate about providing clients with the best data from established, reputable sources. If you’re ready to integrate new, trustworthy data into your systems, contact us to request a consultation or review our financial data packages. | https://medium.com/@intrinio/understanding-accrued-expenses-8d3fc89b07da | [] | 2021-12-21 14:25:02.195000+00:00 | ['Accounting', 'Technology', 'Business', 'Finance', 'Fintech'] |
2,416 | Why Codemiko might be the most exciting innovation to come out of live streaming | twitch.tv/codemiko, Codemiko Live Stream (26/12/2020)
With the rise of VTubers(Virtual Youtubers) popularity there is always someone that takes one step further…the talent and beauty of this project is awespiring, not only as a developer but as storyteller that wants desperatly to connect with the audience and CodeMiko Project does just that, immersive and interactive twitch stream, also shining and blowing everyones mind…and it is well in it’s way of becoming a innovative masterpiece of internet history, currently CodeMiko has a little over 200 thousand followers on her twitch channel and more than 10 thousand active viwers per stream.
Virtual Youtubers use A.I to change appearence and entertain their audiences with a 3D character, opening the envelop and raising the bar for worldwide broadcasters. Miko is a fully rendered 3D Character in the forefront of streaming entertainment. The Codemiko Project is created using Unreal Engine and the motions are captured by a Xsens Motion Capture Suit. The rigging(mapping miko’s body), coding, engeneering and development was done The Technician, in his own words “This is the technician speaking! I am using an Xsens suit and Unreal Engine for Miko. The devving/engineering is all done by me and Miko was 100 percent modeled by me + rigged.”, and it is most impressive. CodeMiko’s stream are interactive, her model can change and chat, donations(bits) can customize Miko’s appearence, give her a big head, make her dance, drop bodies in her room or just Nuke Miko in her apartment, Codemikos twitch page describes the project as “quasi interactive rpg, where it’s kind of like an arcade and a game and a stream and an RPG at the same time.”
Miko can also get out of her motion capture mode and walk like a game character in her apartment, which is absolutely amazing and inovative. As far as content goes her streams are Interviews with other streamers, talking with chat and hanging out. Miko has a wonderful personality, she is charming, funny and glitchy…well, in her own words “Hi there, i’m Miko! I am a bit glitchy but wholesome! I’m a game character who failed pretty much due to my glitch. My glitch is a corruption in my file and it completely changes my personality. Don’t listen to anything my glitch says.” her “glitches” have unexpected and unpredictable effects, they are always welcome and add to Miko’s charm, and it’s a working progress, so real bugs and glitches can happen from time to time, but we all think that is way more entertaining to have a glitched Miko to watch.
If by any means you felt incline to check out this amazing project and the wonderful Miko on her Twitch channel “codemiko”, there you can find the channel’s schedule. The bar has been raised, and live broadcast has been changed, maybe opened a whole new frontier for entertainment, and it is AMAZING. | https://medium.com/@espadagamedev/why-codemiko-might-be-the-most-exciting-innovation-to-come-out-of-live-streaming-9ab82628b9b4 | ['Espada Game Dev'] | 2020-12-27 02:08:57.958000+00:00 | ['Streaming', 'Technology', 'AI', 'Virtual Reality'] |
2,417 | COVIDMINDER: Where you live matters! — RShiny and Leaflet based visualization tool | For the past one month, I’ve been actively working with my research team to explore the data around COVID and develop a visualization tool to understand the disparities across United States. We recently released the first iteration of the application COVIDMINDER which includes disparities in mortality rates, test cases, diabetes, and hospital beds across United States, with a special focus on New York.
In this post, we’ll explore the various tabs in some detail and see how the app was designed. Also, the application is constantly evolving and you might see many more features and tabs if you’re reading this article at a much later point in time. For quick reference, the application is live: https://covidminder.idea.rpi.edu/ and you can go ahead and explore the code on GitHub: https://github.com/TheRensselaerIDEA/COVIDMINDER
The Idea
COVIDMINDER is an application that reveals the regional disparities in outcomes, determinants and medications. It explores data about COVID-19 and tries to extract and disseminate information about various factors.
Disparity index
You must have came across several dashboards till now with the basic idea of displaying the current statistics, the number of death cases, total positive counts etc. in a beautiful interface.
But we wanted to do something different. We wanted our dashboard to be much more than just spitting out numbers directly. Thus, we decided to expand our horizon of data and include factors that could be associated with mortality. We decided to include not just mortality rates but also hospital beds in each state, the current number of test cases and diabetes spread across the country. Moreover, in these factors we calculated the disparity index in relevance to the population of the state to give a better visualization capability.
Further, we decided to include New York in our analysis as it is not only the most alarming state at the moment but I am also based out of New York so I can see everything first hand.
R-Shiny and Leaflet
App homepage
The application that you see has been designed from the ground up using R Shiny, a R package that allows you to develop websites using R code, HTML, CSS and Javascript. We explored various design styles, layouts, color schemes and finally chose the one that we see in the image. The color in plots are in high contrast to the background to highlight them better.
All the plots (except the line plot), are designed using the Leaflet package. As I am well versed with how Leaflet plots are made, I can say that the map designing is very intuitive and makes working with geo plots very simple.
Plots
Geo Plot
Geo Plot using Leaflet
For each tab, we decided to plot the data on the geo map of United States or New York. The various states/counties are color coded based on the disparity index value, ranging from dark blue to dark red. As I mentioned earlier, we used Leaflets to generate these plots.
You can hover over any state/county and see more statistics specific to that region. Each hover information changes based on which tab you are on and the disparity index.
The plots have been made interactive which allows you to zoom in and out of each plot. The legend has been added to the bottom right.
Line plot
New York test cases line plot
The second kind of plot included in our application is the New York COVID cases as they have occurred since March. As expected, New York State as a whole is the highest one. Furthermore, this is closely followed by New York, which is actually the county with the most number of cases as of today.
You can zoom in and out of the image by selecting a bounding box and explore the plot even deeper. The colors of the counties are based on the color map defined below:
Color coding for New York regions
The Maths
Disparity index is used to describe the relative position of a state or county. We use log values to identify the index. For example, to calculate disparity index for US state mortality rates, we use the following formula
index = log(Mortality rate in state/Mean mortality rate in US)
South Korea was able to “flatten the curve” using testing and thus, we compare our testing cases with the South Korea rate. Italy has higher hospital bed counts but still was not able to meet all needs, thus, we compare our hospital bed counts against Italian rates as the base minimum.
Conclusion
Developing the application taught us a lot about the COVID situation and how we can use easily available tools like R Shiny and Leaflet to generate beautiful visualizations that make understanding information much easier.
Go ahead and try the COVIDMINDER application and share your thoughts, ideas and suggestions with us.
You can reach out to us via the comments form on the website or reach out to me on LinkedIn: https://www.linkedin.com/in/bhanotkaran22/ | https://towardsdatascience.com/covidminder-where-you-live-matters-rshiny-and-leaflet-based-visualization-tool-168e3857dbf2 | ['Karan Bhanot'] | 2020-04-16 15:40:17.044000+00:00 | ['Towards Data Science', 'Visualization', 'R', 'Technology', 'Data Science'] |
2,418 | A Brief Overview of Big Tech Illustration: Flat Design, Corporate Memphis, and Alegria | As the name suggests, “Corporate Memphis” is derivative of Memphis. It references the bright colours, high contrast, and bold, geometric shapes and squiggles of its predecessor. Corporate Memphis isn’t a formal design term — it’s more of a cynical nickname. It seems that the primary distinguishing feature between Memphis and Corporate Memphis is the context in which the style is used. Since the Corporate flavour is specific to commercial contexts, it’s often toned down and standardized so as to remain functional and unintrusive. The colour palette is tamer, certain image guidelines are codified, and the array of possible patterns is restricted to a tighter range.
Header illustration for an article on the Slack blog. Source: Slack
Spotify billboard ad from the end of 2017. Source: Smart Insights
Corporate Memphis has skyrocketed in digital design popularity in recent years. Just take a look at the libraries and interfaces of modern UI design tools, from Adobe XD to Figma to Canva, and you’ll see the distinctive mark of Memphis in the digital age.
Figma landing page. Source: Figma
Alegria
This brings us to Alegria — the iconic progenitor of the quirky tech people-figures we now see everywhere. Officially, Alegria is an illustration system created for Facebook around 2017 by the design agency BUCK. As explained by the agency itself, Alegria derives its name from the Spanish word for “joy,” a fitting name for a style which radiates playfulness and positivity.
Illustrations for Facebook Alegria. Source: BUCK
It fits neatly into the trendy look of Corporate Memphis. Note the flat, minimal design, geometric shapes, uniform line widths, and vivid colours. It challenges realism by warping perspective and scale, allowing it to translate 3D dynamism into 2D graphics. Xoana Herrera, an illustrator who worked on Alegria within the BUCK team, explains that “characters are stylized and not anatomically precise” and that they are “designed for expression rather than individual identity.”
Today, the numerous spin-offs Alegria has spawned are a testament to its success. It serves its purpose extremely well. On the branding end, simple shapes and lines make it highly extensible. It’s distinctive, and the fundamental elements of the style are easy to comprehend. Designers can quickly adapt it into their own work — which is not to say that it was effortlessly conceptualized, but rather that its creators were mindful of the many use cases it would need to fulfill.
And on the consumer end, Alegria projects an image of inclusivity, approachability, and groundedness. It injects an opaque Big Tech entity with a warm, friendly facelift, highlighting its human factors over its sometimes-nebulous, often-controversial technology itself. Most of Alegria’s artificial skin tones and exaggerated body shapes are so intensely stylized that they sidestep issues of representation altogether. After all, if nobody is represented in the literal sense, then everybody is represented in the abstract, and by extension, nobody is technically excluded.
These illustrations reflect a fanciful diversity that exists only in the imagination. Rarely do they highlight examples of natural visual differences, leaving the onus of envisioning a diverse cast squarely on the user. For instance, someone who pictures white as the default ethnicity likely won’t think much differently when presented with a purple-skinned figure without any visible ethnic markers. The same goes for gender, disability, age, etc.
This collectivist look in itself doesn’t reflect sinister intentions on the designer’s part. Neither does the homogeneity of tech illustration. Again, Alegria’s flexibility is its strength. When creating for a company like Facebook whose users literally span the globe, it seems logical to keep designs abstract enough to represent anybody at all. In this case, the ambiguity of character identities is meant to be a feature, not a bug.
Of course, many designers who work with flat, Alegria-adjacent, or Memphis-like styles reflect varying degrees of reality in their work, and there’s a lot of stylistic variation within this realm. For example, illustrator Jennifer Hom created a unique illustration system for Airbnb which draws upon flat Memphis design, yet intentionally avoids the anonymity that plagues many corporate illustrations. Despite their visual similarities, Hom’s approach to character creation is the fundamental opposite of Alegria, as her system embraces individuality while the latter avoids it. A similar sentiment about using real people as references is also echoed in this article from Meg Robichaud, which discusses diversity in illustration at Shopify.
Don’t worry, be happy?
Overall, Alegria is a lovely, expressive illustration system, but it has its shortcomings. Under certain circumstances, it can even feel malicious. The way I see it, two factors make the modern proliferation of Alegria-lite so uncomfortable.
First, Alegria’s quick replicability and ease of use has resulted in the rise of generic illustration libraries like humaaans, which provide stock images for the era of flat design. Once you’ve seen the millionth landing page featuring some variation of the same standard figure, these graphics start calling to mind the cold, impersonal metrics by which companies often define their consumers, as well as the half-hearted corporate insistence that those consumers are, in fact, more than just numbers to crunch. I suppose that may just be my pessimism.
Far more disturbing, however, is the political context in which Alegria is used. Its principles of utopic optimism and pseudo-inclusivity gloss over the real ethical issues that tech companies are struggling to address, often in tandem with language so friendly that it borders on infantilizing.
The list of controversies goes on and on. Regulators evidently can’t keep up with the fast pace of tech.
Our days of unfettered public trust in Big Tech are long gone, but companies won’t stop trying to bring us back — and the playful, innocuous aesthetics of Alegria and Corporate Memphis are just one small, highly visible part of these efforts.
I’m a perpetual skeptic of marketing in all forms. Despite myself, I still marvel at the wide range of artwork that’s emerged from current trends, as well as the fascinating history and influences behind it all. And to be fair, I think most of us — myself included — would rather rest easy with pretty graphics like Alegria and friendly UIs over raw brutalist interfaces, ethics be damned. For better or for worse, digital design trends are already shifting from flat design to who-knows-what-else (neomorphism and 3D renders, maybe?), and some of the images in this article are no longer in use on company sites. Whatever comes next, it’ll tell us more about the role that tech plays in our environment and how these companies seek to grow in the near future.
For more information about Alegria, see this Eye on Design article by Rachel Hawley. | https://medium.com/@anna-xing/a-brief-overview-of-big-tech-illustration-flat-design-corporate-memphis-and-alegria-a9b54a35c6b1 | ['Anna Xing'] | 2020-12-27 02:36:12.888000+00:00 | ['Technology', 'Marketing', 'Design', 'Illustration'] |
2,419 | Blossom Capital & ‘High Conviction’ Investing | The top tier Silicon Valley VC firms have always taken a distinctive ‘high conviction’ approach to investing. By that I mean they are thesis-driven and move fast, taking big risks on ambitious visions, and throw everything they have behind the teams they back. They lead rounds, and give entrepreneurs the necessary runway to build the company of their dreams. As we announce today that Blossom Capital has raised an $85m fund focused on Series A-stage companies across Europe, I want to explain why this conviction-led approach is one we wholeheartedly buy into — as well as set out how we are doing things just a little bit differently to our European peers.
‘You’re never going to get outliers unless you take big risks.’
To win in disrupting, transforming or even creating new industries often means having ideas, visions or convictions that seem implausible. It also means being comfortable with the fact that the odds of success are usually heavily stacked against you. For most founders, playing it safe is not an option. The best investors understand this too.
If you look at the performance of some of the Valley’s leading venture funds, as many as half of their companies ‘fail’. But for all of those who don’t succeed, transformative companies like Apple, Amazon, Google or Facebook emerge too. The lesson here, of course, is that you’re never going to get outliers unless you take big risks.
The reason Blossom can take this approach is because of our LPs. These include vision-led institutional investors with the heft, patience and long-term perspective to ensure big outcomes, alongside highly experienced VCs like Tom Stafford (DST Global) and Andy Weissman (Union Square Ventures), and entrepreneurs-turned-investors such as Robinhood founder Vladimir Tenev, and Mikkel Svane, co-founder of Zendesk, who know what it takes to rapidly scale a business.
As a young team — who between us have helped grow some of the biggest names in tech including Facebook, Klarna, Robinhood and Deliveroo — we have been the beneficiaries of living through a period of sustained growth, meaning that we know how quickly startups can scale and the size of company they can become.
Strong Valley links.
Another way we’re evolving the European venture model is through the relationships we’ve forged through making multiple co-investments with many of the most active and influential Silicon Valley VCs — including Greylock Partners, DST Global, Sequoia Capital and Social Capital.
So far Blossom has backed Duffel, Fat Llama, Frontify and Sqreen, bringing in some amazing investors, including Y Combinator, Greylock and Index Ventures — something that will be particularly useful to our portfolio when they look to raise their next round of finance. Similarly, advisors to the firm include a network of 30 founders and executives from some of the world’s leading technology brands.
The most diverse partnership in Europe (possibly).
Blossom’s partnership — which includes Imran Ghory, who previously led data-driven deal sourcing at Index, Mike Hudack, until recently CTO at Deliveroo, and former director of product at Facebook working in both Menlo Park and London, and Louise Samet, who joined Blossom after six years at Klarna, where she was responsible for their digital products and led the company’s technical sales team — is certainly one of Europe’s most diverse, if not its most diverse.
Why is that so significant? Successful venture investing is all about information asymmetry — it’s about seeing and believing in something (or someone) that others don’t. Having different genders, ethnicities and backgrounds around the table encourages diversity of opinion and independence of thought, avoiding the stifling consensus which can develop without it.
Under the hood.
Unusually, we’re only going to make four or five investments a year because we’ll be thinking very carefully about our bandwidth and how much time we have to devote to each of our companies.
From the moment I started to assemble the Blossom team I knew it was really important that — ideally — half of us came from operator/engineering backgrounds. You can’t have the right conversations with founders unless you have the expertise in how to build companies from the inside. It also enables us to be hands-on when we’re asked to be.
Due to the fact that there’s such a range of experience, skillsets and network among our partners, it makes sense for each of us to work with every one of our investments. What that means in practice is rather than one person taking a board seat (and we don’t believe in VCs taking board seats at Series A-stage anyway), we encourage a more collaborative and transparent relationship between Blossom’s partners and our founders. And because we know what’s actually going on under the hood, as opposed to just tracking KPIs, we can be far more useful to the team.
Data is central to what we do.
In the same way entrepreneurs are building tech products to help themselves scale, we use data to find the leverage in our business when it comes both to sourcing deals and helping startups grow. Having a data scientist on the team, in Imran, enables us to find investment opportunities early. If you look at the big outcomes in Europe, around 70% of them today came from outside of the major hubs, in countries such as Romania, Finland and Portugal. Data allows us to cover the entire continent, not just the major and overfished capitals like London, Paris, Berlin and Stockholm.
Imran also works very closely with founders to figure out appropriate benchmarking, how to build data infrastructure and making that all-important first data hire.
More ambitious than ever before, today’s cohort of European entrepreneurs expect so much more from their investors than ‘just’ capital. They want backers who can match their levels of energy and ambition and offer practical support with product, operations and growth. They think globally from the outset and need local VCs who have experience of scaling internationally, as well as deep networks not just in Europe, but in the US and Asia too. Ultimately, however, they want to partner with operators who’ve done the hard yards themselves and know just what it takes to reach the summit, while navigating the inevitable turbulence along the way. | https://medium.com/blossom-capital/welcome-to-blossom-d0556f6f5c59 | ['Ophelia Brown'] | 2019-02-27 15:04:26.684000+00:00 | ['Startup', 'Venture Capital', 'Technology', 'Europe'] |
2,420 | Best Practices for Building a Remote Culture with Job van der Voort | Submitted by Finn Meeks
Like many other companies around the world, our community has been operating remotely for the last eight months. For some members remote work has been freeing. Yet others miss the depth and serendipity of in-person interactions. We’ve been debating remote work best practices within the community and recently invited Job van der Voort, Co-founder and CEO of Remote for a fireside chat and breakout discussion on the topic.
Job was formerly the VP of Product at Gitlab, one of the first companies to successfully operate a fully remote, distributed workforce at scale. After leaving Gitlab, he founded Remote, a global platform that makes it easy to onboard, pay, and manage a remote workforce. And as the name suggests, Remote operates as a fully remote company. He has used his learnings from Gitlab to influence his own company’s remote culture and shape Remote’s product direction. Job is one of the foremost experts on remote work and we are grateful that he shared these insights with us as we refine our own community’s remote culture.
November 12, 2020 Fireside Chat with Job van der Voort
Here are snippets summarizing our key learnings from the fireside chat. Thank you to everyone that attended live and participated in our breakout discussions following the fireside chat.
Remote work != the office
Being in an office is very easy because it’s what we’ve done for hundreds of years. We rely on our natural desire to say hi to people, to interact with people.
The moment you build a distributed company, you’re faced with a lot of questions: “When do we work? How do we work? Where do we work?.” All of these questions have non-obvious answers because there’s not a giant history of us doing it. There’s no one that has 50 years of experience of working remotely.
At Gitlab, we basically had to reinvent everything that we did every six months. We had to treat our organization as a product that we iterated on and that we tried to improve. You have to consistently and constantly look for new ways to get to know each other, to build a culture.
It helps to be explicit, rather than implicit in a remote organization. Instead of relying on a recurring All Hands to communicate with the organization, it more important to spent time documenting and writing something before announcing it.
Things inevitably break with scale
When you’re a very small company with just a few people, you can get on the Zoom with the whole team and talk about work and non-work related things. That is a very important part of how you get to know each other, but it stops working once you’re at about 25 people.
At 25 people, you can no longer have a 30 minute meeting in which everybody speaks. You have to start being much more structured about the way that you do things. At Gitlab, the most important thing we did was create a handbook that served as our single source of truth for anything related to processes and culture.
Nobody likes working alone
The worst way to build your remote company is to start with a team in one location, and then hire someone that is eight hours away. That does not work. The person either has to work at the same time as the rest of the team or they start to feel very isolated.
Organizations should quickly start to expand the time zones in which they are active. When we started Remote, we were almost all in Western European time, but we made a conscious decision to hire in America as we expanded. Then we had a group in then PST to EST time zones. If we decided to hire one person in India, we’ll have to commit to hiring at least two or three other people. So that there’s never a moment in a day in which you feel all by yourself in the office.
It’s still useful to build out concentrations of people (in a city or a country) because they talk with each other and don’t feel isolated. They tend to hire their friends, which means you can quickly build a hub by tapping into your employees networks.
The nitty-gritty of hiring remotely
One of the most important things is getting the nitty-gritty details of hiring and onboarding right. New hires need to have their laptop, a stable internet connection and a great remote work setup in order to perform their job well (like this setup!). In one case, we rented an apartment with high-speed internet for a new hire in Kenya because they couldn’t work remotely without it.
Organization is important for onboarding. All of our new employees get a handbook with a checklist of things to do. One of these things is to have calls with X people around the organization, mostly people outside of your own team, so that you start to build a rapport with people across the organization.
Being remote-first allows you to hire the best people in the world, regardless of where they are based. Remote sets a floor for employee compensation (so everyone is on equal footing) and then adds on whatever it takes to get the employee. It doesn’t matter if the person is overpaid for their location, as long as they are the best person in the world.
Walk the floor
My co-founder and I have talks with individuals around the organization at random once in a while to test our culture. We call it, walk the floor, which comes from the Toyota way.
We ask them to tell us whatever they want — How you’re doing? How’s the company doing? What should we be doing different? What is not going well? — and we shut up for 30 minutes and listen. The 20 minute mark is when all the dirt comes out, which is a good indicator of us.
As an example, I recently advised the whole company to be more conscious of their work life balance. But two people told me on these calls that “we know we’re working at a startup. If you tell us not to work too many hours, it makes us feel stupid because … there’s an insane amount of work to do.” Walking the floor was effective in this case.
Remember to have fun | https://medium.com/south-park-commons/best-practices-for-building-a-remote-culture-with-job-van-der-voort-469cc777a6cd | ['South Park Commons'] | 2020-12-17 21:03:49.299000+00:00 | ['Technology', 'Startup', 'Culture', 'Remote Working', 'Community'] |
2,421 | Customer Segmentation: Taking a Page out of the Computer Vision Book | Why Customer Segmentation?
Customer segmentation refers to the process of dividing the customer base of a company into groups, which share common demographical or behavioural characteristics. Understanding the different types of customers is important for formulating a coherent set of targeted strategies, such as brand positioning, one-to-one marketing and targeted individual recommendations. On some occasions, it has even been used to identify missing segments in the customer portfolio, and launch an aggressive acquisition campaign (Groysberg, 2018).
An example of a persona profile by Salminen et al. (2018)
Customer segmentation can also be used as a foundation for customer persona generation. Defined as “a fictitious person representing an underlying customer or user group”, customer persona crystallises a specific segment into an archetype often with concrete visualisation, background stories and transactions with the retailer. The main benefit of such an approach is to provide a realistic, shared mental model of the different types of customers for key decision makers. However, the generation of personas is not without its criticisms, which are usually about its lack of verifiability and actionability. See Salminen et al. (2018) for a more detailed account.
We are at present witnessing the advent of mass collection of rich datasets such as application engagement statistics and shopping behaviours, coupled with the increasing availability of online analytics data such as social media sources. Automated, data-driven customer segmentation is fast becoming readily available for many retailers and tech platforms enabled by a wealth of rich datasets. It is often the low hanging fruit and the first step towards understanding their customers better. | https://medium.com/zero-one-group/customer-segmentation-taking-a-page-out-of-the-computer-vision-book-af02155ccf53 | ['Zero One Group'] | 2020-06-11 14:12:39.841000+00:00 | ['Machine Learning', 'Data Science', 'Customer Segmentation', 'Technology', 'Algorithms'] |
2,422 | about instagram status | Instagram, the image sharing app created with the aid of using Mike Krieger and Kevin Systrom from Stanford University, spins the story of achievement capitalized the proper manner. Launched manner again withinside the 12 months 2010, Instagram nowadays boasts of seven hundred million registered customers, with extra than four hundred million human beings touring the webweb page on a everyday basis. Out of the seven hundred million customers, round 17 million are from the UK alone! When the 2 founders began out speakme approximately their concept, they quick realised that they’d one purpose in mind: to make the biggest cell image sharing app. However, earlier than Instagram, the 2 had labored collectively on a comparable platform referred to as Burbn. For Instagram to work, Krieger and Systrom determined to strip Burbn right all the way down to the naked necessities. Burbn became pretty just like Instagram and had functions which allowed customers to feature filters to their pictures. The social networking webweb page Instagram reached 1000000000 energetic customers in 2019. The US-primarily based totally video and image-sharing app is a achievement tale that has spread out because its release in October 2010 with the aid of using Stanford University college students Mike Krieger and Kevin Systrom. Systrom majored in control technological know-how and engineering, at the same time as Krieger studied symbolic systems — a department of pc research mixed with psychology. When the 2 founders met, they began out discussing their concept for a brand new app and realised they shared a purpose: to create the world’s biggest cell image-sharing app. Budding entrepreneur Fellow college students recalled Systrom as being clearly gregarious and a budding entrepreneur from a younger age. He in short ran a market that became just like Craigslist for fellow Stanford college students. Krieger had exclusive capabilities and one in all his college tasks were designing a pc interface that could gauge human emotions. Prior to Instagram, that they’d collaborated on a comparable platform referred to as Burbn. They determined to strip it down and use it as the premise for Instagram. Burbn had functions that enabled customers to feature filters to their photographs, so the duo studied each famous image app to peer how they might development further. Eventually, they determined it wasn’t operating and scrapped Burbn in favour of making a totally new platform. Their first attempt became Scotch, a predecessor to Instagram, however it wasn’t a achievement, because it didn’t have sufficient filters, had too many insects and became slow. Once Instagram became launched for Android phones, the app became downloaded extra than 1,000,000 instances a day. Interestingly, the web social media platform became ready to get hold of an funding of $ 500 million. Furthermore, Systrom and Zuckerberg had been in talks for a Facebook poised takeover. In April 2012, Facebook made a proposal to buy Instagram for approximately $ 1 billion in coins and stock, with the important thing provision that the enterprise could continue to be independently managed. Shortly thereafter and simply previous to its preliminary public offering, Facebook received the enterprise for the whopping sum of $ 1 billion in coins and stock. After the Facebook acquisition, the Instagram founders have completed little to alternate the consumer phase, sticking to the simplicity of the app. The remarkable upward push of Instagram’s recognition proves that human beings agree with in actual connections as opposed to the ones primarily based totally on simplest words. Since the acquisition, Instagram’s founders haven’t made many adjustments to the consumer experience, who prefer to paste to the app’s simplicity. Its upward push in recognition proves that human beings revel in the manner the app works and just like the image-primarily based totally connections it provides. One of the maximum essential instructions of Instagram’s achievement is that the founders didn’t waste time seeking to keep their authentic concept, Burbn. Once they determined it wasn’t going to work, they moved on quick and invented Instagram. Systrom stated its call became primarily based totally on “immediate telegram”. The app became released at simply the proper time — and with simplest 12 personnel initially, the consumer base had increased to extra than 27 million earlier than Instagram became offered to Facebook. Today, maximum celebrities use it as a platform for promotions and with 1000000000 customers, it keeps to head from power to power. | https://medium.com/@wkaddour/about-instagram-status-1839d0120d76 | ['Wika Dydour'] | 2020-12-17 11:57:48.691000+00:00 | ['SEO', 'Instagram', 'Technews', 'Technology', 'Success Story'] |
2,423 | BSCC Will Land On BtLux | After intensive development and preparation, BSCC has finally completed all preparations. This Friday, December 18, 2020, at 10:00 am, BSCC will make its global debut on the BtLux platform. And BSCC will bring wealth, honor and happiness to all node users.
BSCC: Cloud Storage Public Chain Leader
BSCC is the abbreviation of BSC CASH. BSCC is the SEA application sub-chain issued by V-Fund to reverse repurchase BSC points and anchor the inherent value of BSC points. BSCC tokens are the native assets of the storage application public chain, which can meet the needs of storage service exchange, storage metrics and data value flow, storage ecological application passwords and storage service agreement governance.
In the era of big data, cloud storage service is the lowest infrastructure to support the operation of the Internet, and is the steel and cement to build the Internet world; currently, with the rapid development of cloud computing, big data, Internet of Things and other technology industries, the growth rate of data traffic is accelerating, and the pressure carried by data centers is getting bigger and bigger.
According to Intel data analysis, the global data volume will reach 44 trillion GB in 2020. The high-speed development of Internet applications has given rise to cloud service giants such as Amazon Cloud and Ali Cloud, which hold the lifeline of many Internet companies, and centralized cloud service platforms have contributed to the development of the Internet world, but also laid hidden worries for information security.
Nowadays enterprises are paying more and more attention to data security, centralized storage forms are beginning to be questioned, and more and more information leaks and data loss incidents have created a crisis of trust in centralized cloud service providers, and more Internet companies are unwilling to trust their lifelines to other companies.
BSCC is a world-renowned cloud storage public chain. BSCC grafted on BSC cloud service technology resources and relied on SEA public chain to increase the expandability and compatibility of the ecology, born as a strong player in the cloud service field, it is destined to shine in this track.
BSCC Landed On BtLux — The Capital Feast Of The Next Ten Years
The process of transformation of technological achievements is also the process of resource integration. For emerging technologies, capital can help the technology better focus, and the rise of each new technology and new model is a feast of capital.
Some experts predict that by 2030, 70% of the data will be stored in a distributed manner, and distributed storage will enhance information and data security and greatly improve the utilization of global storage resources.
And the intervention of capital will speed up this historical process. Capital allows ordinary users to realize that car appointments can use cell phones, dinner can not go out, but also will greatly accelerate the speed of distributed cloud storage commercial applications.
This Friday, December 18, 2020, at 10:00 a.m., BSCC lands on the BtLux platform, where technology and capital meet, and the world’s first physical pass trading platform and the leading public chain in cloud storage rub off, which will surely bring surging upward momentum, and in turn, happiness and wealth for all BSC and BSCC node users’ friends.
BSCC cloud service reconstructs the data storage ecology, the circulation of digital pass makes data assetization possible, and the circulation of data rights and interests will lead the development of digital economy to a new era, an era where everyone’s data is respected and everyone’s data value is engraved will come, and a long river of data flowing with wealth has been bred in BSCC ecology.
BSCC: Blockchain 3.0 — Boosts The Internet of Everything
Data centralized storage is the key factor that restricts the development of IoT. The advent of the Internet of Things era means that ordinary people’s lives will become more and more privacy-free. A person’s every move and habitual lifestyle are under the monitoring of big data, which is the sadness of silicon-based civilization and an ethical issue that must be faced by the development of IoT technology.
Distributed storage provides the optimal solution for IoT data security problem, and data privatization is a problem that must be solved for IoT development. BSCC builds the easiest and most convenient cloud storage and data privatization ecology with the strong extensible feature of SEA public chain, which will provide the underlying storage logic support for IoT development and push the coming of the era of Internet of Everything together.
In the future, BSCC will continue to explore the fields of business evaluation, data exchange, disaster recovery and backup, incorporate more nodes into the BSCC ecology, achieve sustainable ecological development, lead all node users to better resource realization under the concept of paid resources, and pave the way for the node users of BSCC to grow their wealth. data security environment, comprehensively enhance the popularity of distributed storage and cloud platforms, and stimulate the intrinsic momentum of the digital economy.
An exciting new era of digital economy has arrived. Lock in at 10am on December 18 to witness the moment of BSCC s ecological explosion. | https://medium.com/@btlux/bscc-will-land-on-btlux-cb2fb9583edc | [] | 2020-12-18 01:27:35.353000+00:00 | ['Eth', 'Technology', 'Cloud Storage', 'Usdt', 'Btc'] |
2,424 | Open Source Firmware — Why Should We Support It? | Arising Problems
A few problems arise if the market is dominated by only a couple of companies using opaque techniques:
Taken from LinuxBoot Repo
Bloatware
On the Open Source Firmware Conference, 2019 in Silicon Valley Ryan O’Leary and Gan Shun Lim showed that there were able to remove 214 out of 424 DXEs within the boot process. DXEs are drivers which get loaded at boot time within the Driver eXecution Environment(DXE) Phase.
That’s an overall reduction of more than 50%.
Normally Firmware needs to be adjusted to every single board. Due to the fact that companies like to save time (and also money) it is much easier to put everything into the firmware that might be needed. This results in more code and longer boot times.
Transparency
As already stated above, most of the Firmware on modern platforms is closed source, which means that the source code is not made publicly available for everyone. In reality, only a handful of developers actually know or have access to the source code. This also means that is not transparent what actually has been implemented and what code is really running.
We have to recap that Firmware is actually the first code that runs when the platform boots up and is capable of reading and writing everything without any restrictions — even your encryption won't save you here.
Security
There are a few major security concerns in current firmware solutions. The lack of transparency is one of those. In my honest opinion, we can not talk about reliable security, if the implementation of those mechanisms is not available in the public domain. The majority of the cryptography community actually relies on open source. Every known and sane cryptographic algorithm or protocol is publicly documented and available.
Closed Source Firmware is just a blob..
..and that is the problem. We do not know what is happening on the lowest level of the hardware — and especially what goes wrong. Writing bugs and making errors is just natural. Hiding those does not help at all. Security by obscurity is no security at all. Therefore it is no surprise that security researches tend to find new vulnerabilities every now and then in closed source firmware. There is a nice blog about firmware security-related topics which you can find here. | https://medium.com/swlh/open-source-firmware-why-should-we-support-it-bbd0ad75b651 | ['Christian Walter'] | 2020-01-07 21:01:01.026000+00:00 | ['Embedded Systems', 'Programming', 'Technology', 'Computer Science', 'Open Source'] |
2,425 | Ingin Bekerja di Bidang DevOps? Miliki Beberapa Skill ini! | Alterra Academy is a tech talent incubator that gives everyone (even non-IT background) a chance to be a professional Tech Talent.
Follow | https://medium.com/alterra-academy/ingin-bekerja-di-bidang-devops-miliki-beberapa-skill-ini-39d956e8283 | ['Alterra Academy Writer'] | 2020-09-17 06:43:58.062000+00:00 | ['Technology', 'Alterra Academy', 'Codingbootcamp', 'Coding', 'DevOps'] |
2,426 | Live_Stream| Tokyo International Film Festival (2021) Full Show | ❂ Artist Event : Tokyo International Film Festival
❂ Venue : Tokyo Midtown Hibiya, Tokyo, Japan
❂ Live Streaming Tokyo International Film Festival 2021
Conversation Series at Asia Lounge
The Japan Foundation Asia Center & Tokyo International Film Festival
Marking its second installment since 2020, this year’s Conversation Series will again be advised by the committee members led by filmmaker Kore-eda Hirokazu. Directors and actors from various countries and regions including Asia will gather at the Asia Lounge to engage in discussion with their Japanese counterparts.
This year’s theme will be “Crossing Borders”. Guests will share their thoughts and sentiments about film and filmmaking in terms of efforts and attempts to transcend borders. The festival will strive to invite as many international guests as possible to Japan so that they can engage in physical conversation and interaction at the Asia Lounge.
The sessions will be broadcast live from the festival venue in Tokyo Midtown Hibiya every day for eight days from October 31st to November 7th. Stay tuned! | https://medium.com/@b.i.m.sa.la.bi.mp.r.o.k/live-stream-tokyo-international-film-festival-2021-full-show-27a1c65b6 | [] | 2021-10-30 14:11:41.521000+00:00 | ['Festivals', 'Technology'] |
2,427 | SAP Partners and Business Scenario in Bangladesh | SAP (Systems, Applications, Products in data processing) is a German multinational software corporation that makes enterprise software to manage business operations and customer relations. Founded in 1972 by five ex IBM employees, that small software company is now headquartered in Walldorf, Baden Württemberg, Germany. Gradually the small company has become one of the world’s leading enterprise applications in terms of software and software-related service and support revenue. Now the company has over 282,000 customers in 190 countries worldwide. More than 500 companies of the world using SAP with the help of their local SAP partners companies for its smoothest and fastest service.
Read more to know SAP Digital Business Services
SAP helps companies of all sizes and industries run better. From back office to boardroom, desktop to mobile device, warehouse to storefront, SAP helps both people and organizations of a company to work together efficiently and use business insight effectively to stay ahead of the competition. Thus it spreads its partnership over the world with large companies which make it easy for the SAP partner companies to provide a more accurate business solution to each unique enterprises according to their own regional business needs.
SAP services in Bangladesh
SAP, a leading provider of enterprise business solutions in the world, renewed its commitment to Bangladesh by announcing its new alliances and strategic plans for the country back in 2012. Since then, over 50 regional companies have become SAP partners. In the current scenario, SAP partner companies have already implemented SAP service in our country are running their business smoother than before. As the country is stepping ahead into more digitization at every aspect the number of SAP partners companies are increasing at a significant rate. Now, most of the companies chose SAP for business automation, as well as their Enterprise resource planning service as SAP provides outstanding resource management system which fits perfectly for the structure of Bangladeshi local companies of all sizes. A list of Bangladeshi companies with SAP implemented in their corporations are as follows.
Bangladeshi Companies using SAP
ACI Logistic
British American Tobacco
Ericsson
Unilever
Bangladesh Bank
Berger Paints Bangladesh Limited
Gemcon
Ultratech CementBangladesh
Edible Oil Ltd
Butterfly-LG Bangladesh
House of Pearl Fashions Ltd
Unique Group
BASFCEAT Bangladesh
Incepta Pharmaceuticals Ltd
Viyellatex
Bashundhara Group
Ceragem Bangladesh Ltd
Karnaphuli Fertilizer Company Ltd (KAFCO)
Young one5
British American Tobacco
Coats BD
Lynde Bangladesh
Marks Spencer
Partex
Sanofi Bangladesh Limited
MGHP
edrollo
Santa Group
Nestle Bangladesh
Perffeti
SAP India
Novertis
Rahimafrooz
Siemens
Otobi
ROBI
South China Bleaching & Dyeing Factory Ltd.
Partex
Samsung
Square
Source: Here
Why SAP is getting popular day by day in Bangladesh
When you want out-of-the-box integration or self-made integration solutions, there is nothing better than SAP solutions. SAP is stable and susceptible to any large scale business growth.With the business intelligence and mobility solutions, as well as on-demand supports, SAP is gaining more popular in Bangladesh. The reason being it is more convenient and accessible now, as it is a surplus to the traditional business models offering a full set up of integrated ERP system. Enterprises are embracing newer technologies, and SAP can easily integrate with the new changes. Manual log maintenance days are gone and now SAP is becoming the most popular choice for ERP services in Bangladesh for its super fast integrable featuresets
SAP ERP Services
SAP is growing to be more preferred over other ERP services. For all the businesses that run today in the world need ERP as well as CRM. Since SAP established its business in the 70s, knowing the efficient requirements of large, medium and small-scale companies worldwide to make their business RUN better and thus Bangladesh steps forward to adopt SAP making its business curve upward. The success of the SAP partner companies makes its ERP service popular among the new business bees of Bangladesh.
Enterprise without an ERP
Attributes During Urgent Basis:
Data maintenance cost goes up.
Inventory and material cost increases.
Labor cost increases.
Loss of repute or may face legal action.
Loss of revenue and customer dissatisfaction.
Enterprise with an ERP (SAP)
Attributes During Urgent Basis:
Avoids data duplication leads to low
maintenance cost.
Labor cost decreases.
Pay on time, no legal action.
Increased revenue and customer delight.
Robust information flows and MIS system.
WHY YOU CHOOSE SAP FOR YOUR COMPANY?
The question is so relevant that why you choose SAP for your company where there are other popular ERP providing companies in the market? The point noted below may satisfy your queries
Establish strong internal control system. Centralized enterprise management system Fully integrated business applications. Scalable & flexibility to meet demanding and varied business requirements. Immediate access to information.
Online / real-time.
Visibility of information across functions.
Data entry once, at the source.
Increase brand value in the corporate world. Solid & robust MIS system
SAP Support
To Keep your systems running at peak performance and get more value from your new and existing SAP software — SAP offers a range of support services, including long-term plans, embedded teams, remote technology support, a self-service portal, and more. They have dedicated experts to help with everything from SAP implementation and maintenance to system improvements and innovation strategies. This is another reason for choosing SAP as they are really supportive with their fast customer service.
Read more to know about SAP Success & Support plans
The implementation of SAP is almost always a substantial operation that brings a lot of changes in the organization. Virtually every person in the organization is involved, whether they are part of the SAP technical support or the actual end-users of the SAP software. The resulting changes that the implementation of SAP generates are intended to reach high-level goals, such as increased return on information (as people will work with the same information) and improved communication. It is therefore very important that the implementation process is planned and executed with the usage of a solid method. SAP partner companies are trained to
Now lots of companies in Bangladesh are interested to implement SAP and some are in confusion of taking SAP as it is not plugged and play. To expel this confusion SAP partner companies are always here to ease the process of implementation and other operational services. However, implementing SAP as business automation is indeed a great achievement towards Digital Bangladesh.
Brain Station 23 is already proving ERP services as an offshore software development company. We have been partnering an open source ERP & CRM, Odoo since 2016. To expand our wings of expertise further into the verge of today’s most effective ERP platforms. We are partnering with SAP to provide more sustainable and manageable ERP service to the companies of different interests.
Read the case study of Base technologies and find out the ERP service provided by us to this local tech company. Also you can check out our solution to Oslobuss, one of the biggest transportation companies in Norway. | https://medium.com/brainstation23/sap-partners-and-business-scenario-in-bangladesh-4a412e32b566 | ['Fahmiza Ramina Hossain'] | 2017-09-12 12:36:28.393000+00:00 | ['Technology', 'Erp', 'Erp Software', 'Sap'] |
2,428 | An Approach for a Hyper Local, Crowd Sourced, Data Driven Chat Bot for COVID | An Approach for a Hyper Local, Crowd Sourced, Data Driven Chat Bot for COVID Rajan Manickavasagam Follow Aug 11 · 5 min read
Overview
The COVID-19 pandemic has captured everyone’s attention and impacted our daily lives since the past few months. It is likely to remain that way for the foreseeable future too, as we learn to navigate around it.
Several organizations and volunteers globally have created various kinds of dashboards, databases, etc. so that information is available to everyone. Some of them are:
COVID-19 database by Johns Hopkins: https://github.com/CSSEGISandData/COVID-19
COVID-19 India dashboards: https://github.com/covid19india/covid19india-react
COVID-19 India clusters: https://github.com/someshkar/covid19india-cluster
WHO app: https://github.com/WorldHealthOrganization/app
COVID-19 time series data: https://github.com/pomber/covid19
And many others
Approach
From the various open source applications already created for COVID, came my inspiration too, for a chat bot. The idea is to empower local communities with data and insights so that they are better informed as they go about their daily tasks and routine. This might help people assess the “risk” where:
Some of them have to go to an office or public place for work
Children are going to schools/parks
People are heading out for chores/outdoors
And similar scenarios
Concept
Governments, NGOs, businesses and health officials all over the world are taking several initiatives to keep people safe. However, it is imperative that local communities also take steps to safeguard themselves. The idea is to create a chat bot that is set up and maintained by each local community, like a ham radio movement.
Such chat bots could provide both quantitative and qualitative data to the users. This chat bot could act as a digital sentinel for each local community. So, let’s call them DISCO (Distributed Information Sentinels for COvid) chat bots. Each of these DISCO bots will be maintained by each community.
Technology
The first step is to gather the relevant data. This can be done on a Google Spreadsheet, as it allows multiple people to update the sheet at the same time. In the example here, data from the Aarogya Setu app is used to capture the COVID-positive cases and at-risk cases for a given location. A sample spreadsheet is here. This is the kind of “quantitative” data that the DISCO bot could provide.
Often, many people have queries regarding the pandemic. An FAQ conversation model can be built that can also be integrated into the DISCO bot. A sample is provided here. This is the kind of qualitative data the DISCO bot could provide.
Next, we need a host computer to run the bot. It could be a desktop, laptop, server, Cloud or even a Raspberry Pi. This bot would download the above spreadsheet regularly and keep a local cache (for quicker responses to user queries). My copy of the DISCO bot for the local community is running on my Raspberry Pi 3 that has been idle for years.
Lastly, the community can use Telegram chat app on a compatible device to interact with the bot. Scroll below for the demos on how to set up and use the DISCO bot.
Drivers and Goals
The following drivers and goals have driven the design and implementation of this chat bot:
Local
Yet Global
Community/Volunteer Driven
Accessible
Simple and Cheap
Local
As we have seen so far, each community can set up their own little ‘database’ in the form of a Google spreadsheet. The idea is not to have a global or even a national instance of the chat bot, but for each community/locality/suburb to maintain one for themselves.
With many countries and communities in various stages of lockdown at some time or the other, many people — professionals, students, elderly and children (basically everyone) are home-bound most of the time. Hopefully, this idea motivates people to reuse this chat bot and build something similar for their local communities. This way, people can learn/share some knowledge on technology and help their communities at the same time.
Yet Global
Largely, people are limiting their travel and daily routines to their immediate and nearby areas. This approach can be adopted by communities and organizations around the world.
Use cases for this bot could be residential communities, educational institutions, workplaces (where remote working is not possible), etc.
Community/Volunteer Driven
The two key stakeholders for this chat bot are volunteers:
Data Volunteers: Their responsibility is not just to enter/maintain the data, but to also ensure that the data is “as true” as possible. Data volunteers are kind of playing the role of citizen journalists. One of the “journalistic ethics” is to have a story/lead/data verified by at least 2 independent sources. This is the reason for the Google spreadsheet to have 2 columns: “Data Source Verified By (1): and “Data Source Verified By (2)”. It is also important to ensure that the data maintained is sufficiently protected and privacy maintained. For more details, please refer to Reuters Handbook.
Their responsibility is not just to enter/maintain the data, but to also ensure that the data is “as true” as possible. Data volunteers are kind of playing the role of citizen journalists. One of the “journalistic ethics” is to have a story/lead/data verified by at least 2 independent sources. This is the reason for the Google spreadsheet to have 2 columns: “Data Source Verified By (1): and “Data Source Verified By (2)”. It is also important to ensure that the data maintained is sufficiently protected and privacy maintained. For more details, please refer to Reuters Handbook. Technology Volunteers: Their responsibility is to set up and run the chat bot. The source code and installation instructions are available here at Github.
Accessible
Creating chat bots using Telegram APIs is incredibly simple. Also, Telegram provides various free chat clients across all devices and form factors — web, desktop, mobile and tablet apps.
Unless someone is on a “feature phone”, virtually anyone in the world should be able to access their respective local COVID chat bot over a data connection (mobile or broadband or Wi-Fi) from a device of their choosing.
Simple and Cheap
Simplicity here has 4 connotations:
Simple to maintain the data
Simple and light-weight to set up and run the chat bot
Simple and actionable insights
Simple for anyone to use the chat bot
All the technologies used in this chat bot are either free or open source or cheap to buy.
Chat Bot in Action
Step 1: Updating data in a Google spreadsheet
Step 2: Training a sample FAQ for the chat bot
https://youtu.be/IWphF4t34Pk
Step 3: Downloading Google spreadsheet to a local file cache and running the chat bot on a Raspberry Pi
https://youtu.be/ft0uEUycBXc
Step 4: Testing an alpha version of the chat bot in the Telegram Web app
https://youtu.be/CPXuFHfRNLk
Step 5: Testing the current version of the chat bot in Telegram app for iPhone
How to set up your own Bot
The source code and installation instructions are available here at Github. Feel free to customize to your requirements. You can set up and run a bot for your local community/organization. It should take roughly 1–2 hours to set it all up from scratch.
Summary
In the example videos above, the health and contact tracing application from the Indian central/federal government — Aarogya Setu (rough English translation — Bridge to your Health) has been used to manually collect anonymous data over a period of a few weeks, for the location where I live.
Take care and stay safe. | https://medium.com/engineered-publicis-sapient/an-approach-for-a-hyper-local-crowd-sourced-data-driven-chat-bot-for-covid-f994d6723731 | ['Rajan Manickavasagam'] | 2020-08-11 05:10:04.365000+00:00 | ['Coding', 'Data', 'Chatbots', 'Engineering', 'Technology'] |
2,429 | My Predictions for Technology Trends in 2021 | 2 | Quantum Computing = Computational Panacea
Quantum Computing, which functions by manipulating quantum particles, known as qubits, performs classical computations at an accelerated rate. The purpose of quantum computing, outlined by Paul Beinhoff in 1981, was to create a tool to solve non-deterministic polynomial problems in polynomial time efficiently. Some example problems include the optimization of the finance and health sector, data analysis for medical advancement, and breaking intricate encryption systems.
So, what will Quantum Computing look like in 2021? IBM and Google will be adding more qubits to their machines, and if IBM wants to keep their promise of delivering a 1 million qubit quantum computer by 2030, they better get moving. I also believe Google will also join this race to a million qubits. Moreover, I think the real ‘hype’ quantum computing will receive in 2021 will come from quantum computing software solutions rather than any hardware breakthroughs. One field in particular that will be impacted by the creation of quantum computing software is medicine. The astonishing speed of quantum computers will allow us to parse through millions of molecular combinations to develop AI-generated treatments, which could potentially save millions of lives and reduce R&D costs. | https://medium.com/swlh/my-predictions-for-technology-trends-in-2021-7ee5cc9710b2 | ['Adarsh Bulusu'] | 2020-11-09 05:49:02.196000+00:00 | ['Programming', 'Data Science', 'Software Development', 'Technology', 'Tech'] |
2,430 | iOS 14.3 is Finally Here | Photo from Newsroom
iOS 14.3 is Finally Here
The long-awaited iOS 14.3 version is finally out to the public. We’ve been looking forward to this update for a while since it contains tons of new features including ProRaw, Apple Fitness+, support for the new AirPods Max, and much more.
Hopefully, this update will come with some long-awaited bug fixes that many people (including me) have been running into with iOS 14.2 versions. Apple, interestingly, decided to hold the fixes off until 14.3 was dropped instead of just releasing a small patch to 14.2 as 14.2.1.
In case you missed those stories, here they are:
Anyway, here are a few of the coolest new features in 14.3:
Apple Fitness+ is now officially released. The subscription service costs $9.99/month or $79.99/year.
The new AirPods Max are now supported inside of iOS.
App Clips is now available. The feature allows users to scan a small object to pull up a “mini-app”. The feature will be great in situations like a restaurant where you want to order something but now download the restaurant's app and have 20 different restaurant apps on your phone.
Apple ProRaw is now released which lets the iPhone 12 Pro and 12 Pro Max shoot raw video natively.
Video can now be recorded at 25 fps, a standard found in more European countries as opposed to the universal 24 fps used in movies.
New Privacy Info is added to App Store pages. It shows what data the developers collect from you. Nutrition facts are now on apps.
Safari now supports the Ecosia search engine.
I recommend you read this article for the full list of features. | https://medium.com/macoclock/ios-14-3-is-finally-here-459ea435c8b9 | ['Henry Gruett'] | 2020-12-15 17:39:53.859000+00:00 | ['Apple', 'Tech', 'Technews', 'iOS', 'Technology'] |
2,431 | Object-Oriented JavaScript — Metaprogramming and Proxies | Photo by Aryo Kadiono on Unsplash
JavaScript is partly an object-oriented language.
To learn JavaScript, we got to learn the object-oriented parts of JavaScript.
In this article, we’ll look at JavaScript metaprogramming with proxies.
Metaprogramming and Proxies
Metaprogramming is a programming method where a program is aware of its structure and manipulating itself.
There’re multiple ways to do metaprogramming.
One is introspection. This is where we have read-only access to the internals of a program.
Self-modification is making structural changes to the program.
Intercession is where we change language semantics.
In JavaScript, we can do this with proxies.
They let us control how objects are accessed and set.
Proxy
We can use proxies to determine the behavior of an object.
The object being controlled is called the target.
We can define custom behaviors for basic operations on an object like property lookup, function call, and assignment.
A proxy needs 2 parameters,
One is the handler, which is an object with methods to let us change the behavior of object operations.
Target is the target that we want to change the operations to.
For instance, we can create a proxy to control an object by writing:
const handler = {
get(target, name) {
return name in target ? target[name] : 1;
}
}
const proxy = new Proxy({}, handler);
proxy.a = 100; console.log(proxy.a);
console.log(proxy.b);
We created a proxy with handler with the handler object.
The get method lets us control how properties are retrieved.
target is the object that we’re controlling.
The name is the property name we want to access.
In the get method, we check if the name proxy exists.
If it does we return the target value, otherwise we return 1.
Then we create a proxy with the Proxy constructor.
A first argument is an empty object.
handler is our handler for controlling the operations.
proxy.a is defined, so its value is returned.
Otherwise, we return the default value.
Also, we can use proxies to validate values before setting them to an object.
For instance, we can trap the set handler by writing:
const ageValidator = {
set(obj, prop, value) {
if (prop === 'age') {
if (!Number.isInteger(value)) {
throw new TypeError('age must be a number');
}
if (value < 0 || value > 130) {
throw new RangeError('invalid age range');
}
}
obj[prop] = value;
}
};
const p = new Proxy({}, ageValidator);
p.age = 100;
console.log(p.age);
p.age = 300;
We have the set method with the obj , prop , and value parameters.
obj is the object we want to control.
prop is the property key.
value is the property value we want to set.
We check if prop is 'age' so that we validate the assignment of the age property.
Then we check if it’s an integer and if it’s not we throw an error.
We also throw an error if it’s out of range.
Then we create a proxy with the Proxy constructor with the ageValidator as the handler and an empty object to control.
Then if we try to set p.age to 300, we get a RangeError .
Photo by Scott Webb on Unsplash
Conclusion
Proxies let us control how object properties are retrieved and set. | https://medium.com/dev-genius/object-oriented-javascript-metaprogramming-and-proxies-87c3bc212e1c | ['John Au-Yeung'] | 2020-11-19 21:15:46.560000+00:00 | ['JavaScript', 'Web Development', 'Software Development', 'Technology', 'Programming'] |
2,432 | Is Windows Vista still usable in 2019? | This weekend for a trip, I was planning to do audio mixing. I picked up a laptop from a thrift store for $13. It was a Dell E4310, which in theory would be a great laptop for light audio work, and light gaming. The previous owner had decided to install Windows Vista Home Premium on it. Luckily it had 4 GB RAM, instead of 1 or 2, and was an I5 model.
Quick background on Windows Vista. It released in 2006, and was known for being extremely slow and intensive on resources especially for laptops at the time which had anywhere from 512MB of RAM to 2 GB as a “high end” *laughs in 16 gb*. Three years later it was replaced by Windows 7 as the new and improved Windows, and in 2017 removed from official Microsoft support.
The Dell E4310 is a business laptop, at core. It’s a 13 inch cut down version of the Dell E6410 from my understanding. These are very versatile laptops, where you can remove just about everything, very simply. But this is about Vista, not the laptop itself. My first experience was loading up Opera, and going to Facebook. Facebook actually worked like a charm, if a bit slow because of the mechanical hard drive. My next was my home school site, Connexus. It worked pretty well too, if a bit slow.
I thought for my next venture, I should try to download more than a browser.
First attempt was Discord.
Straight up just didn’t work.
I bought this laptop for portable audio mixing, so next was Audacity.
This was tricky, I knew the most recent version wouldn’t work, so I tried 2.3, which promptly after installation didn’t work. I tried about 4 different versions before giving up.
After that, I was done with Vista, it was no longer of interest. I tried to download a Windows 10 ISO straight from Microsoft, plugged in a 32 GB USB flash drive, and what do you know, it wouldn’t work. I have a theory that Vista doesn’t support over 4 GB USB flash drives, since this was the 32 bit version, for some reason. I tried 2 different 32 GB flash drives, and I’ll just stick with my theory.
My basic question to you : What do you do? That’s all that matter when you choose out any OS. Some people can get away with MS-DOS on something like a Brother Desktop Publisher, and others might need a fully specced out I9 15 inch Macbook Pro.
Despite that, I’d go way way out of my way to use Windows 7 or 10 even if the PC just wouldn’t do it, Vista just doesn’t cut it for anyone anymore. It makes any computer feel a decade older than it is. | https://medium.com/@LilRamenCHS/is-windows-vista-still-usable-in-2019-e9658ae3f4e7 | ['Lil Ramen'] | 2019-05-11 03:19:51.605000+00:00 | ['Vista', 'Windows 10', 'Mac', 'Os', 'Technology'] |
2,433 | A guide to creating a React app without create-react-app | If you’re a newbie willing out to implement React in your future projects leaving out the unnecessary stuff that comes packed with create-react-app, then I’m sure that this article will build you a concrete understanding to get started with React & Webpack altogether!
Most of the stuff we do in a React project created with create-react-app library is managed by the library itself. So, in short, everything we’re going to implement here is actually pre-implemented in the create-react-app library.
This is to make you understand how all that stuff works when added manually. I’ll be explaining the things as we go on implementing them.
Let’s get started
1. Initialize the NPM
Run npm init -y in the project folder named as, e.g. react-webpack-starter.
C:\Users\sapin\Desktop\react-webpack-starter>npm init -y
Wrote to C:\Users\sapin\Desktop\react-webpack-starter\package.json: {
“name”: “react-webpack-starter”,
“version”: “1.0.0”,
“description”: “”,
“main”: “index.js”,
“scripts”: {
“test”: “echo \”Error: no test specified\” && exit 1"
},
“keywords”: [],
“author”: “”,
“license”: “ISC”
}
After done, open the project folder in your code editor. | https://medium.com/javascript-in-plain-english/to-beginners-moving-away-from-create-react-app-f597413181e | [] | 2020-12-30 04:38:30.179000+00:00 | ['Front End Development', 'React', 'Web Development', 'Technology', 'Webpack'] |
2,434 | Hidden Anatomy of Backend Applications:Context Lifecycle | In previous article we’ve looked at backend application communication with external world from the point of view of I/O. Now I propose to look into another processing pattern which is explicitly or implicitly present in every backend application.
Let’s start again with the very simple HTTP endpoint. This time we’ll be looking at the processes which happen before our code is even invoked, in the depth of the framework or library we’re using. For convenience lets call this part of the framework or library a transport layer.
Let’s keep aside for the moment what happens when the client connects to the server. We’ll return to this part later, but for now let’s assume that connection is already established and the client sends a request to the server. Once data are received at the server side by the OS, they are delivered to our backend application. Received data are not a request yet. It’s just an array of bytes. The transport layer need to transform these data, extract necessary information and only then invoke our code.
Here is a pitfall: received data might not represent the whole request. This may happen for various reasons. The client may write the request as few parts, for example. Or a long request may be split into packets and OS may deliver them as soon as they available without waiting for the rest of the data. In either case, since there is no complete request, the transport layer needs to save already received data somewhere until it will be possible to decode the request and invoke the handler.
The location where data are saved is the context associated with the client connection. Right after establishing the connection, context is empty and contains only the connection itself. Then, as we receive and process data, context grows and at some point we get enough data to extract request information and then call the application code which handles the request.
This process can be represented using the following diagram:
Context Transitions
If to look at this diagram from the data flow point of view, we can describe it as follows: OS emits data packets to the application transport layer, which collects data as long as necessary to decode the request. Once the request is decoded it is emitted into the application code. Note that there might be several similar stages, for example the parsed request might be passed to the part of the framework which extracts request parameters, authentication information, etc. Once all necessary parts are extracted, they are passed to the user-level handler. Usually subsequent stages are “one shot” — they call user level code for every request emitted to them, but there might be exceptions, for example, file upload functionality might postpone calling user level code until file(s) are completely downloaded and saved in temporary location.
The whole processing pattern is not specific to the HTTP protocol or backend applications. For example, we may observe a very similar pattern in push-based XML parsers: they call client only when they recognize specific elements, all intermediate steps are hidden from the client code. This similarity is not accidental — parsing request and parsing XML document are very similar processes, just the grammar which we’re parsing is different. Some implementations of the transport layer, for example Netty, expose this processing pattern to the users of the library.
Now it’s time to look at the context lifecycle as a whole, from end to end.
In the case of connection-based protocols (for example, TCP) the context is born at the moment when the client connects to the server. Usually the initial context contains only information which can be obtained at the moment when the connection is established, for example the client address and socket which can be used to send response to the client. The end of life of context happens when the server decides to close the connection for whatever reason. Note that during context lifetime several requests could be processed.
In the case of connection-less protocols (for example, UDP) the context is usually born at the moment when the packet from the client is received and context life is rather short, as it is no longer necessary once the application processes the received packet. Nevertheless, some applications simulate connection-based protocol using UDP for the transport. If this is the case, then the context life cycle is very similar to the connection-based protocol case described above.
Conclusion
Understanding of described above processing patterns and context lifecycle is helpful in many situations — optimizing application performance, implementing resource management or designing of framework/library. | https://medium.com/nerd-for-tech/hidden-anatomy-of-backend-applications-context-lifecycle-47f8d48dd4b6 | ['Sergiy Yevtushenko'] | 2021-01-05 02:47:35.712000+00:00 | ['Software', 'Software Architecture', 'Computer Science', 'Beginner', 'Technology'] |
2,435 | Linux++ (March 1, 2020) | Linux++ (March 1, 2020)
Hello and welcome to the sixth edition of Linux++, a weekly dive into the major topics, events, and headlines throughout the Linux world. This issue covers the week starting Monday, February 24, 2020 and ending Sunday, March 1, 2020.
This is not meant to be a deep dive into the different topics, but more like a curated selection of what I find most interesting each week with links provided to delve into the material as much as your heart desires.
If you missed the last report, Issue 4 from February 23, 2020, you can find it here. You can also find all of the issues posted on the official Linux++ Twitter account here or follow the Linux++ publication on Medium here.
In addition, there is a Telegram group dedicated to the readers and anyone else interested in discussion about the newest updates in the GNU/Linux world available to join here.
There is a lot to cover so let’s dive right in!
Table of Contents
Personal News
Linux Community News
Community Voice: Michael Tunnell
Explore the FOSS World
Linux Desktop Setup of the Week
Personal News
If you don’t care what I’ve been up to over the past week, feel free to skip ahead to the Community News section :)
Joining the GNOME Foundation’s Engagement Team!
GNOME logo wallpaper. (Credit: ANTECHDESIGNS on pling.com)
Yes, you heard me right. I’ve decided to help contribute whatever talents and extra time I might have to bettering the GNOME Project through their community engagement and documentation teams.
This is an extremely exciting undertaking for me to get to work with some of the best in the business as well as learn the GTK toolkit and start building some cool stuff. If you would like to get involved with the GNOME project in any capacity, check out their page dedicated to new members here.
Trust me, they are an extremely friendly, helpful, and informative bunch, so any fears you have diving in will be put to rest the moment you express interest.
Thanks so much to GNOME Member, Heather Ellsworth, for pointing me in the right direction to get involved!
Back to Table of Contents
Falling Into a Focal Fossa Frenzy
Earlier this week, I published an article detailing my return to Ubuntu as my daily driver with the upcoming 20.04 LTS “Focal Fossa” release after nearly 2 years of wandering the Linux wild.
I truly think that Focal Fossa has the potential to be something really special and I have regained that feeling of excitement each time I upgrade my testing computer on the development branch.
My current Ubuntu Focal Fossa testing desktop.
I have a ton of different articles in the hopper at the moment, but for some reason this one just bled out of me Thursday night and refused to not be seen through to completion. Maybe it was the new default Focal Fossa wallpaper being revealed or the fact that I may have had too much coffee on that particular night, but I just couldn’t deny the excitement I have surrounding this release.
If you would like to read about my journey back to Ubuntu, you can check out the article here.
As always thank you for your continued support, it means so much to me!
Back to Table of Contents
Community News
Manjaro 19.0 “Kyria” Out Now & Update With TUXEDO Computers
What more can honestly be said about Manjaro? They are such a great team of developers that have worked so hard to ignite a community of welcoming, helpful, and just plain awesome individuals. Mission accomplished on that front, Manjaro team.
In the previous issue of Linux++, there was a piece on Manjaro’s 19.0 release candidates and of course only a day after it was published, the official 19.0 “Kyria” came out. Gotta love the timing! ;)
So, for those looking to test the Arch waters, there is quite a bit that is new in the latest Manjaro release. Of course, if you have been on Manjaro for a while, then it is likely you won’t notice too much change, as you have been getting the rolling updates all along.
All three official editions of “Kyria” feature the Linux 5.4 LTS kernel, Pamac 9.3 with integrated Snap and Flatpak support, as well as a newly integrated tool, Bauh, that will make it significantly more simple for users to install Snaps or Flatpaks.
The Manjaro Bauh tool, formerly fpakman. (Credit: vfm90 on forum.manjaro.org)
For the flagship Xfce edition, 19.0 ships with the most recent 4.14 version of the lightweight desktop environment as well as a new theme, Matcha. In addition, there is a new feature available that will let Manjaro Xfce users save and store different display profiles so that they can apply these very easily when connecting different arrangements of displays.
The KDE Plasma edition arrives with Plasma version 5.17 that includes an updated look and feel. Also, KDE users will be happy to see the Plasma-Simplemenu as an alternative application launcher as well as inclusion of KDE Applications 19.2.2 and KDE Frameworks 5.66.0.
And last but not least, the GNOME edition ships with shell version 3.34, an updated login theme, a new layout switcher tool, Feral Interactive’s Gamemode, and a really cool dynamic wallpaper!
The new layout switcher tool for Manjaro GNOME. (Credit: Jason Evangelho on forbes.com)
So, congrats to the Manjaro team for another successful release! If you would like to check out the official release notes, you can find them here. If you would like to download and try out Manjaro 19.0 for yourself, you can get it here.
In other news surrounding the Manjaro team, it looks as though everything is on schedule for the debut of their new computer line with TUXEDO Computers via a recent post by the two partners on Twitter:
Manjaro and TUXEDO Computers. (Credit: @ManjaroLinux on twitter.com)
I don’t know about you, but I can’t wait until this gets in the hands of reviewers to see how incredible this potent combination can be!
Back to Table of Contents
Netrunner 20.01 Has Been Released
If you’re into stable releases, prefer the Debian variety, and are a KDE fanatic, Netrunner might just be the distribution for you.
This is the twentieth release of the Netrunner desktop for Debian/Ubuntu and the tenth year since Netrunner was announced back in 2010. This current iteration is based upon Debian 10.3 “Buster” and comes with all the updates since the last release.
There are quite a few updates since the last Netrunner release including the latest security patches via upstream Debian, a new, more polished “Indigo” Global Theme using the Kvantum engine, a switch to Breeze Window decorations with darker colors, and an updated default wallpaper with some birthday vibes of its own!
Netrunner 20.01 default desktop. (Credit: Ilectronics on netrunner.com)
Netrunner 20.01 comes with a wide variety of applications for any user including the LibreOffice suite, Gimp, Krita, Inkscape, Kdenlive, GMusicbrowser, Yarock, SMplayer, Steam, Skype, Kate, and Yakuake.
If you are an existing user that is running Netrunner 19.08, you can upgrade as you normally would to get the same software provided in 20.01 except for the new theme settings.
If you would like to learn more about what is new in Netrunner 20.01, check out the official release notes here. If you would like to download the ISO image, you can find it here.
Back to Table of Contents
UBPorts Renames Unity8
Canonical’s Unity Shell in Ubuntu concept art. (Credit: askubuntu.com)
Though Unity8 started out as a vision for device convergence utilizing the unique Unity shell developed by Canonical for their popular Ubuntu distribution, it has now taken a path that no one could have seen coming in the early days of its development.
Unity8 was born to provide Ubuntu, a popular operating system for desktop computing and servers, to the phone, tablet, and other up-and-coming interfaces. However after failure to penetrate the intense mobile market with Ubuntu Touch and the Ubuntu Edge phone that it was designed for, Mark Shuttleworth, Founder and CEO of Canonical, announced that development on Unity8 would be dropped and the goal of convergence de-prioritized within the company’s vast portfolio.
Concept of the Ubuntu Edge phone from Canonical. (Credit: Sebastian Anthony on extremetech.com)
However, before Canonical shut the project down completely, they open sourced the entire codebase, allowing for developers to continue working on the Unity8 platform. And sure enough, it didn’t take long until a large community sprung up around the Ubuntu Touch mobile operating system that utilizes Unity8. This was all thanks to the UBPorts organization, with the central goal to continue Canonical’s dream of convergence by bringing Ubuntu Touch to all different types of devices, with much of the most recent work targeting the mobile arena.
Consequently, there has been quite a rise in popularity over the past couple of years, especially with the development of Linux-based smart phones such as Pine64’s PinePhone and the continued and increasing drama regarding data security and privacy with the major mainstream mobile platforms, Google’s Android and Apple’s iOS.
Lomiri Launcher. (Credit: Tobiyo Kuujikai on Telegram Desktop)
However, with an announcement by UBPorts earlier this week, it looks like Unity8 will undergo another change. As stated in the official blog post, UBPorts will be changing the project’s name from Unity8 to Lomiri. They do realize that name changes come with a bit of pushback from the community, however, I do believe this was a smart move by UBPorts to distance themselves from the old Unity7 Shell as well as other popular projects such as the Unity 2D/3D development platform and game engine.
The crossover of naming from the two projects caused confusion within the community and UBPorts was receiving many questions about the other project. In addition, as efforts have begun to ramp up to port Lomiri to the desktop via Debian, the developers were warned that some of the dependency packages with “ubuntu” in the name would probably not be excepted into the distribution.
The Lomiri desktop that is being ported to Debian. (Credit: ubports on github.com)
UBPorts also wanted to make it extremely clear that there was no pressure from Canonical or the Ubuntu community to force a name change legally or otherwise. In fact, Canonical has always been incredibly supportive of the UBPorts community in general.
It appears that the name change will likely effect quite a few of the packages that used Unity or Ubuntu in the name, where they will be edited to Lomiri. Additionally, the developer interface that works with Lomiri will also be changed from:
# Old QML import statement #
import Ubuntu.Components 1.3 # New QML import statement #
import Lomiri.Components 1.3
Any packages or namespaces that don’t conform to these rules will not be changed at all and packages that have already been excepted into Debian will also require no change. The name change has already begun with critical components that are being built for Debian.
From their official announcement, it appears that UBPorts went through a variety of names before coming upon Lomiri, however, many of the alternatives had problems with pronunciation, availability, or other issues. Users can expect to see changes to the Lomiri component’s names in the next few months, however, it will likely have little effect on their day to day use of the operating system.
The Ubuntu Touch operating system. (Credit: @UBports on twitter.com)
If you would like to check out the official statement from UBPorts, you can find it here. You can also check out the Lomiri code base on GitHub here, however, it is noted that UBPorts will be taking this opportunity to move their code to GitLab in the coming months.
Back to Table of Contents
Endeavour OS: What’s On the Horizon?
In May 2019, the Linux community was shocked by the announcement that the popular Arch-based distribution, Antergos, would be closing it’s doors for good. The team behind Antergos had built a dedicated, helpful, and friendly community, which refused to be extinguished with the unfortunate announcement.
Some members of the community bound together to continue and improve upon the Antergos vision: make a distribution that would be simple to install and be as close to the Arch Linux experience as possible while also putting community at the forefront of it’s vision.
One of those development teams is the force behind the Endeavour OS distribution. Though Endeavour is pretty new on the Linux scene, it has grown exponentially in popularity since the first release in July 2019, in part due to the massive amount of Antergos users looking for a new home.
Different desktop environment choices with Endeavour OS. (Credit: endeavouros.com)
With the release of their net-installer late last year, Endeavour began to make even more waves in the community as plenty of rave reviews rolled in all around the Linux world.
Now, the Endeavor team is looking toward the future. For instance, they have already announced that they were altering their release cycle in 2020 from monthly to bi-monthly.
Some of the new features expected in the new release include translations for the welcome application, Luks encryption in the Calamares installer, updated branding and theming for Calamares, a seperate NVIDA ISO, and core system updates.
Endeavour OS with the Deepin desktop environment. (Credit: endeavouros.com)
Surely, some of these new features will be instrumental in building up the user base for Endeavour. I can’t wait to watch them grow even more and see where they take the project!
Congrats, Endeavour team, the future sure does look bright!
If you would like to check out the official blog post from the Endeavour team, you can find it here. If you would like to try out Endeavour OS for yourself, you can download the image here.
Back to Table of Contents
Latte 0.9.9 Comes With Critical Improvements
The most recent iteration of the highly popular dock-like launcher for GNU/Linux, Latte, was released this week with plenty of updates. The dock is most famous for its use within the KDE Plasma environment, but can also be implemented in pretty much any desktop environment imaginable.
Latte in action. (Credit: psifidotos on store.kde.org)
One of the most important issues that has been resolved with Latte is that of a bug which plagues the initialization of configuration files during startup. From the official release notes:
“Through the mentioned bug report, I discovered that initialization of config files during startup it was not valid for all new users. There were cases that configuration files were not consistent with the v0.9.x implementation. Old users using Latte since v0.8.x days are not influenced by that.”
Therefore, it is heavily suggested that distributions providing Latte should update to the latest version to improve the experience for new users.
Other improvements shipping with the new release include fixing a bug that could disturb the MultipleLayouts appearance via a Shared layout, a variety of improvements for Wayland support, animation speed optimizations for Plasma 5.18 LTS, and several bugs relating to the blur region and area calculations of the dock itself.
You can find the official release notes for Latte 0.9.9 here. If you would like to try out Latte, you can discover different ways to get it on your system from the official GitHub repository here.
Back to Table of Contents
Check Out Ubuntu 20.04 “Focal Fossa” with Alan Pope
Ubuntu 20.04 LTS “Focal Fossa” Offcial Wallpaper. (Credit: Joey Sneddon on omgubuntu.co.uk)
We are now less than two months away from the release of Ubuntu 20.04 LTS, and I’m not sure that I have seen as much excitement surrounding a release since maybe 14.04 LTS “Trusty Tahr” (which is my own pick as best Ubuntu release of the decade).
This week, the new default wallpaper for “Focal Fossa” was revealed, which has only increased excitement levels. I know I think it’s a pretty cool piece of artwork. Who doesn’t love cats with lasers shooting out of their eyes?
With Martin Wimpress at the helm of Canonical’s Ubuntu desktop team, expectations have skyrocketed due to his self-proclaimed love of desktop Linux in particular. Martin was instrumental in the work done on the MATE desktop over the past few years and also is the founder of the highly popular Ubuntu MATE distribution. Just a quick look at that project and it’s easy to tell that he knows how to create a desktop that is enticing, fully-functional, and extremely usable to newcomers and master sudoers alike.
If you have at all checked out the Focal Fossa development branch, you will by now have seen some of the fruits of the Ubuntu desktop team’s labor. If not, there is always Alan Pope’s (Developer Advocate for Snapcraft at Canonical) Testing Tuesday series where he looks at Focal Fossa for you and points out the new features, design elements, and other improvements that have landed in the development branch.
If you would like to check out the first Testing Tuesday video for Focal Fossa, you can find it on Alan’s YouTube channel, or the video linked below:
Back to Table of Contents
Community Voice: Michael Tunnell
Michael Tunnell of the Destination Linux Network.
This week Linux++ is very excited to welcome Michael Tunnell from the Destination Linux Network. If you are at all engaged with the Linux world, and especially the podcasting community, then you should need no introduction. However, for those who don’t know Michael, here is a small summary of his work within the Linux community.
Currently, Michael is a co-host of the popular weekly Destination Linux podcast as well as a member of the Destination Linux Network (DLN), which includes a variety of Linux and tech-related podcasts. In addition, he hosts his own podcast, This Week in Linux, as well as a YouTube channel under the name “TuxDigital”. Michael has contributed to a multitude of projects, but focuses mostly on his unabashed love for KDE Plasma and the Kubuntu distribution.
More recently, Michael has started a journey down the hardware rabbit hole as a co-host of the new Hardware Addicts podcast on DLN, where he is constantly learning from hardware extraordinaire, Ryan (DasGeek) and hardware enthusiast, Wendy Hill. I’m happy to present my interview with Michael Tunnell below:
How would you describe Linux to someone who is unfamiliar with it, but interested?
“Linux is an alternative to Windows and macOS, it’s quite similar in many ways to macOS…in fact, you could probably call them cousins. Linux and macOS are both alternatives to Windows, but the biggest difference between Linux and macOS is that Linux is FREE to use. As a Windows user, have you ever been frustrated with your system rebooting itself while you’re working because it decided your work isn’t important? Well, that would never happen with Linux because not only is Linux FREE to use, but you control the system, not the other way around. This response assumes talking to a Windows user, but a similar approach would be applicable to macOS, just presented differently.”
What got you hooked on the Linux operating system and why do you continue to use it?
“I originally got hooked on Linux because of the customization and the relief from viruses or malware. I haven’t personally had to deal with any of that stuff in about 15 years and it is glorious! I keep using it because, as a tech enthusiast, I want to be in control of my computer, not have my computer controlling me. This piece is a fundamental aspect as to why I love using Linux as it provides freedom to customize the system to the weird workflow I like and update it when I have the opportunity.”
What do you like to use Linux for? (Gaming, Development, Casual Use, Tinkering/Testing, etc.)
“Everything. I use Linux 100% of the time these days. I occasionally use a Windows VM for use with Photoshop, but I haven’t used that VM in quite some time. I am actually working on a video to talk about the alternative I found because it has been wonderfully freeing as of late.”
What is your absolute favorite aspect about being part of the Linux and open source community?
“My favorite part of the community is the wonderful people I’ve met having become a part of it. Using Linux is great and I love it, but becoming a part of the Linux community is a whole other thing that makes using it all the better. I have met many new friends thanks to Linux. I started doing podcasting and found my secondary passion of video production thanks to Linux. I think the Linux community and the DLN (Destination Linux Network) community has been one of my favorite things about becoming a Linux user. Though, it was kind of interesting with finding the community many many years after starting my journey in Linux.”
Michael’s “This Week in Linux” show. (Credit: tuxdigital.com)
Do you prefer a distribution(s) that you find yourself being drawn to most and why? Do you prefer a particular desktop environment(s) for your workflow and why?
“I don’t have a preference to distribution. I often find myself using an Ubuntu-based distribution, but mostly because I like the ease of use for getting started. This specifically relates to Kubuntu as I am a KDE Plasma user. I am a big fan of the KDE Plasma desktop but it’s also a mess to get started with. The defaults for Plasma leave a lot to be desired and because Kubuntu fixes a lot of those issues, I often find myself using Kubuntu to avoid those headaches. Plus, the Kubuntu developer team are very receptive to input, so when I casually mentioned things needed to be changed in Plasma, they changed them in Kubuntu, which was fantastic!”
The Kubuntu logo. (Credit: Silviu Stahie on news.softpedia.com)
What is one FOSS project that you would like to bring to the attention of the community?
“I can’t pick just one because I am very interested in watching many projects, so I’m going to provide a list of stuff I am currently watching their growth with great interest. Flameshot (screenshot tool), KDE Falkon (web browser), Joplin (notes app), Kdenlive (video editor), LBRY (YouTube alternative), and KWin Overview (an activities overview for Plasma). I could easly keep going because I keep up with a lot of projects, but this will do.”
Do you think that the Linux ecosystem is too fragmented? Is fragmentation a good or bad thing in your view?
“As for the ecosystem, I think there are parts where it is too fragmented and other parts where fragmentation creates the innovation and overall awesomeness of the ecosystem. It’s hard to really judge because the answer is both yes and no. I think the universal formats are changing that fragmentation drastically, at least for apps, and that is fantastic. One of the things I never liked about Linux is the amount of effort it took to support multiple distros because of all the formats and version fragmentation so the universal app formats are incredibly valuable in my opinion. The fact that there are only 3 of them and they all function together on the same system without any real conflict is another fantastic thing!”
What do you think the future of Linux holds?
“I think that in the short term, Linux will grow in popularity thanks to initiatives like Proton and others putting Linux in the forefront of discussions for people who never considered it before. In the long term like 10–20 years, I expect nothing less than global domination of systems.”
The Steam logo from Valve. (Credit: Hayden Dingman on pcworld.com)
How did you find podcasting and become involved with the Destination Linux Network?
“I don’t really remember how I was introduced to podcasting, but I think it was either Leo Laporte’s TWiT.tv or Revision3 via Digg Nation. I don’t know which one first introduced me to the concept, but it was fairly early on for sure. Not important, but you may see me occasionally rock some shirts from the Rev3 days.
I got started with the Destination Linux Network because I was one of the founders of it, along with Ryan aka DasGeek. We created this network to introduce the concept of a media network to the values and core beliefs of Open Source. Destination Linux Network is the first that I know of, and maybe only, media network that has Open Source as a core pillar in that the creators have ownership of the shows rather than just work as hosts for a network.
Destination Linux Network logo. (Credit: destinationlinux.network)
The Open Source ideology is a fundamental piece of DLN and we take that throughout the network, so we don’t present it as ‘Michael’s Network’ or ‘Ryan’s Network’, but we want people to think of it as all of ‘Our Network’. This is why the DLN Forum, Telegram group, Discord server, Mumble server, and social media platforms are very important to us. We want the community to think of themselves as part of the network. This is what I mean when I talk about the DLN community.”
What is your favorite part about hosting your podcast, Destination Linux?
“My favorite part about hosting Destination Linux has to be the way we mix in bantering and entertainment with the information. It’s so fun to do the podcast and I think it shows to people when they watch because its a group of friends having fun, talking about subjects we love, Linux and Open Source.”
The Destination Linux crew having fun! (Credit: destinationlinux.org)
Do you have any major goals that you would love to achieve in the near future related to your involvement with the Linux community?
“I have so many goals that I would take up your entire article just listing them off, so I will refrain from spamming your readers…this time.
I will say that we have many things in the works for DLN, including some really big announcements that we will be making soon, which I think will be very interesting to your readers.
I also have personal goals for doing more in the community, like contributing to more projects, making more content to help people using Linux, bringing back live streams for my This Week in Linux podcast, and so much more.”
Back to Table of Contents
Explore the FOSS World
This week, we will be exploring an intriguing article by Matthew Rocklin about his seven stages to free and open source software. Matthew is the lead developer of a massively popular free and open source Python library known as Dask as well as a contributor to many Python libraries focused in the data science arena. Dask allows for distributed computing with many of the most popular core Python scientific computing tools, speeding up the slow interpreted, but human-friendly language significantly on heavy computations.
For those who don’t know, Python is a language that is nearly impossible to perform truly parallel computations with because of what is called the Global Interpreter Lock (GIL) that forces Python byte code to be executing on only a single thread at a time. Consequently, Dask has become an essential library to programmers that need to utilize all the resources available to run their code, but don’t want to leave the Python language itself to work with the much more tedious combination of C, MPI, OpenMP, and CUDA. Dask works around the GIL by calling the underlying C code to perform parallel computations from inside the Python language itself.
Dask: Distributed computing in Python. (Credit: KARTIK BHANOT on medium.com)
Seven Stages to FOSS
Free and Open Source Software has become somewhat of a loaded term in the past five to ten years as more and more large corporations begin to see the advantages of opening up some of their code to be audited, contributed to, and improved by the army of software engineers around the world. So, what truly is open source?
Well, that answer is a bit complicated, as not all open source projects are the same. For instance, as Matthew points out, Linux is much more open that Google’s extremely popular TensorFlow library for numerical computation, data flow graphs, and especially deep learning models.
However, each layer of increased open source visibility comes with both advantages and disadvantages, especially for companies who work in highly competitive spaces. He quickly points out that not all projects need to go with the most open options available. It really depends on the specific situation of the authors and their goals for the project. Without further ado, here are the seven major steps to open source software as seen by Matthew Rocklin:
Publicly visible source code
Technically, this is all that is needed for your code to be considered open source. The benefits of opening up your code to the outside world make it auditable, so now people can check within it to see exactly what a product, service, or tool is actually doing with data as well as shine light on how it works.
However, of course, if you are trying to sell your software for a fee, this can be harmful, as it may allow competition insight into the inner-workings of your product.
2. Licensed for reuse
In step 1, simply publishing your code doesn’t give other people the right to reuse it in their own way. The code is still legally protected by copyright without an explicit license from the author that says otherwise.
In step 2, a license is distributed along with the code so that others may reuse or even modify it for their own purposes. There are a variety of different open source licenses out there with differing levels of restriction, though the GPL license family is the most open of them all. This is the license that the GNU project and Linux kernel distributes with its code.
The benefits include providing free code to be used by anyone in anyway they can imagine. However, there are costs to this type of open source because you no longer have complete control over the software. For instance, if someone uses your code and profits from it, you are not entitled to any of said profits, even though you may have done a large majority of the work. In addition, your code can literally be used for any application, even those that it wasn’t necessarily intended to be used for, including malicious actions.
3. Accepting contributions
This can be extremely helpful as having extra sets of eyes on your code will likely allow people to discover inefficiencies, vulnerabilities, or bugs that you can then be notified to fix and improve your code.
However, when this scales to hundreds or thousands of people submitting contributions, it can be extremely hard and time-consuming to keep up with it. In addition, you may spend a significant amount of time just teaching the contributors about how your project works in general so that they can better integrate their changes with the overall code. In addition, rejecting contributions that you (as the maintainer) may not think are necessary or even less than ideal may cause people to feel unacknowledged and can sometimes turn into long, time consuming debates.
4. Open development
To reduce the amount of time spent educating people on your code, you move to bring the entire development team and all the internal conversations into the open so that people can understand from an insider’s perspective what is currently being worked on and help with any trouble that the team may be having.
Now, other outside contributing developers have a peek into many of the design decisions and can better suit their own development ideals to those of the internal team, allowing for much better integration and less time explaining minute details of the code.
The main benefit to open development is plain transparency. This builds a lot more trust around your product because anyone can freely see the development process, any decisions made by the team, and any contributors to the codebase. This can also attract more experienced developers to work on your product because they can operate with the full context needed to truly provide valuable contributions.
However, this may cause some strife within the internal team, as they may prefer working in other communication platforms like Slack, Telegram, or Discord. This can be especially trying if the rest of your organization is on a different communication platform, as now the team must pay attention to multiple different communication methods.
5. Open decision making
After enough time with open development, there may be some highly experienced and opinionated developers who have ideas outside the scope of your team’s regarding the direction of the project. In turn, you must listen to them and take their ideas into account before moving forward. This can cause large debates to pop up over even the tiniest details of the project. You may have disagreements and sometimes the community agrees with them and so you follow their suggestions.
You end up giving commit rights, the ability to push code, and the ability to change certain aspects, even if you might personally not agree with it.
This can be extremely powerful because now you have added developer power to your project, which means that it likely will become more visible and transform into a better project overall. Having a diverse set of developers will help view many sides of the project’s possibilities and design decisions and it may grow into something much better than you ever imagined.
On the flip side, this decision will cost you quite a bit of control over the product. Of course, you will still have the ability to make your own decisions and changes as you see fit, however, you may have to compromise over certain aspects of the product now.
6. Multi-institution engagement
The core maintainers whom you have given a piece of control to now are a way bigger part of the community than the original team you started with. There may be developers from different institutions contributing to your code as well as people from all over the world. In essence, you have traded in your control over the product to enable a much larger and more diverse group of developers.
This can be absolutely incredible for your product because now it has matured and scaled into something that could never have been predicted at the start.
However, the project may not end up in the direction you want or feel is necessary. You have given up the control to make those important decisions.
7. Retirement
You are free to leave the project for any reason whatsoever, such as starting a new project, because there are hundreds of other completely capable developers who are willing to step up into your shoes. Your software will continue to survive without you due to the open community it created.
I think that this is an extremely interesting way to look at open source development, especially for those who are not software developers or experienced the different decisions made by teams who have decided to go open source.
If you would like to check out the original article by Matthew, you can find it here. If you would like to check out the Dask project, you can find it here.
Back to Table of Contents
Linux Desktop Setup of the Week
This week’s selection was presented by u/thunderthief5 in the post titled [GNOME] Popsicles. Here is the screenshot that they posted:
The Desktop of the Week: GNOME. (Credit: u/thunderthief5 on reddit.com)
And here are the system details:
DE: GNOME 3
Shell: gnome-terminal
Theme: Juno
Icons: Tela
Wallpaper: OS: Pop!_OSDE: GNOME 3Shell: gnome-terminalTheme: JunoIcons: TelaWallpaper: Custom Wallapaper
Thanks, u/thunderthief5, for an extremely intriguing, original, and colorful looking GNOME desktop!
If you would like to browse, discover, and comment on some interesting, unique, and just plain awesome Linux desktop customization, check out r/unixporn on Reddit!
Back to Table of Contents
See You Next Week!
I hope you enjoyed reading about the on-goings of the Linux community this week. Feel free to start up a lengthy discussion, give me some feedback on what you like about Linux++ and what doesn’t work so well, or just say hello in the comments below.
In addition, you can follow the Linux++ account on Twitter at @linux_plus_plus, join us on Telegram here, or send email to [email protected] if you have any news or feedback that you would like to share with me.
Thanks so much for reading, have a wonderful week, and long live GNU/Linux! | https://medium.com/linux-plus-plus/linux-march-1-2020-b00ba0812098 | ['Eric Londo'] | 2020-03-03 00:40:20.739000+00:00 | ['Open Source', 'Computers', 'Technology News', 'Technology', 'Linux'] |
2,436 | Why Restaurants Need More than a Phone System? | As a restauranteur, you know how important it is to pick up every customer call and answer every customer’s email immediately. However, in surveys after surveys, the number one complaint that customers report is that restaurants do not answer their inquiries. Not in time, anyways.
According to our 2019 research, even a moderately busy restaurant misses more than $200,000 per year due to ignored customer calls and inquiries.
So why do restaurants ignore their customers and their valuable business?
Maybe because running a restaurant is hard. Sometimes some equipment breaks down in the kitchen, and sometimes the staff does not show up. While you get busy fixing those issues, your customers feel ignored. One person can do only so much, right?
Well, that’s not true anymore.
With a bit of smart automation for your restaurant phone system and your website, you can ensure that no customer gets ignored, ever. This technology is called an Automated Restaurant Assistant.
What is an Automated Restaurant Assistant?
It is an intelligent software that serves as a restaurant host over the phone and online. It automatically answers restaurant phone calls, just like a live host, and also replies to the website requests. What makes it intelligent is that it understands the context of restaurant calls and inquiries and replies to them accordingly, all by itself. In other words, this bot like technology combines the 24 7 availability of an answering system with the skills better than a live host. With this technology, you will never miss any customer ever.
“The best answer is doing.” — George Herbert
In this post, you will learn the top five reasons why you should consider an automated assistant solution for your restaurant.
1. Never miss any opportunity
A restaurant gets inquiries that range from the higher-value Caterings and Private events to Table reservations to Takeout calls.
With an automated restaurant assistant, you will not miss any customer, big or small. For instance, it will share menus & packages for the Event and Catering inquiries. It will take Reservations from phone calls, text messages, and online requests. It will also queue Takeout calls and deliver to multiple phones, and even to tablets or computers.
It is essential to understand that when you need to act, you will get an instant alert, such as in the case of a Catering inquiry. The automated assistant will handle the rest by itself, such as reserving a table. You will also have complete visibility from results, reports, and insights.
2. Always respond immediately
Given the competition in the restaurant industry, it is impossible to overstate the importance of an immediate response to customers. Since the automated assistant is a software, it answers instantly to multiple customers at the same time. No matter how busy your restaurant gets, no customer will ever get a busy tone.
The best answer is doing. The automated assistant also serves the requests. For example, it follows up with the events menu and also accepts reservations. It automatically does most of the repetitive tasks, such as sending follow up emails.
3. Focus on customers
Since the automated assistant will take over your grunt work, you will get back multiple hours every week. You can then use this precious time to build relations with your customers and delight them with your service. They will become your fans.
The automated assistant will get the customer contact information and also send follow-up text messages and emails. It will even alert you about specific customers who need your attention.
4. Filter robocalls
Robocalls are a menace for everyone. They rely on deceiving the unsuspecting live staff. If not controlled, these calls lead to substantial loss of time and money.
The automated assistant is a bot on your side that keeps robocalls away just like bug-screen blocks the insects. Only your customers and sincere vendors will get your attention.
5. Your business Is-On-24
The automated assistant will acknowledge and respond to every inquiry all by itself. You will stop worrying about off-hours calls and emails.
Whether its an early morning, busy evening, or a holiday, your customers will get the same instant response no matter when they call or click. Your business will become virtually 24 7 with no effort.
There you have it. With an automated restaurant assistant, you will never miss any business ever. Customers will love your responsiveness around the clock, and your staff will thank you for eliminating robocalls.
More sales, happier customers, and focused staff, wouldn’t that be awesome?
About IsOn24
The Is-On-24 is an intelligent software that serves as a restaurant host over the phone and online. It automatically answers restaurant phone calls and replies to online requests 24 7, better than a live host.
It shares menus & packages for the Event and Catering inquiries. It takes Reservations from phone calls, text messages, and online requests. It also delivers Takeout calls to multiple phones, tablets, or computers.
Most restaurants use IsOn24 as their Answering system by publishing the local phone number provided as part of their IsOn24 subscription. Others choose to keep their existing phone number and forward their calls to the new IsOn24 phone number as their Answering service.
Every IsOn24 subscription also includes access to the IsOn24 app. It offers unique Event management features such as instant Quote & Approval via text messages and instant chat with guests. The IsOn24 app also offers powerful Marketing features, including dozens of tastefully created marketing templates ready to go, for all seasons. Just tap once to automatically share your message over Email, Phone, and Online.
As a Restauranteur, you are a Hero, and IsOn24 is the sidekick you need.
For more information, visit www.IsOn24.com
Click here to Begin a free trial. No credit card needed. Cancel anytime.
Or Schedule a live demo for your restaurant.
Subscribe to this blog to receive updates. | https://medium.com/@ison24/5-reasons-why-your-restaurant-needs-an-automated-restaurant-host-261bad42f580 | ['Vimal Misra'] | 2020-01-19 20:13:17.753000+00:00 | ['Retail Technology', 'Restaurants', 'Customer Experience', 'Answering Services', 'Phone Systems'] |
2,437 | Creating a GitHub like website in Golang | Goal
The aim of this project was to setup a GitHub like website where I could upload my code, using HTTP for git operations on remote repository. (This post won’t cover what it takes to support SSH based git remote operations, and would implement the most basic functionality.)
I wanted to explore Golang language and I would be using the code snippets or examples written in Golang but the core logic/idea would remain the same irrespective of any language.
The Basics
You can follow any guide to setup a basic HTTP server to handle various routes in your preferred web framework.
For my project I used gin (http://github.com/gin-gonic/gin) to setup some basic routes.
To make things simpler I wrote my routes in such a way that all the git operations on repository happens on a URL with this format. http://domain.com/git/repo-name
This means the remote URL for the local repository would look like above and you would use this URL to clone a repository and for other git operations as well.
Adding Functionality
Creating a Repository
POST /repo
Content-Type: application/json {
"name": "repo-name"
}
We know that to start a git repository we do git init . Running this command creates a .git folder with all the required files.
What happens when you need to create a repository on your server?
We utilize the same command but we pass an additional option --bare .
Passing this option creates a git repository without a working tree, this created repository is different as it will prevent any changes to occur on the remote repository directly and one can’t run the usual git commit command directly in this directory.
Try running the git init --bare command on your system and see the file structure, you will notice that it is similar to what .git directory looks like but instead of all these files/folder being present in .git folder it is present in the root of your folder where you ran the command.
git init --bare file structure
You can look more into the --bare option to understand it even better.
Ref: https://www.atlassian.com/git/tutorials/setting-up-a-repository/git-init
Now that we know how to setup a repository on server we can create an endpoint which will let us create new repository.
Code for such an endpoint could look like below. Here I have stripped out the various check I implemented before actually creating a repository.
Create Repository HTTP Route
I created a small utils package which I used to handle all git specific operations. Code for the CreateNewRepo function used in above snippet is below.
We now have a endpoint set-up which when hit will create a new repository for us.
Now let’s implement the main git operations.
Implementing Git Operations support
When we do git clone git push git pull on our terminal if the remote repository URL is HTTP based git internally uses HTTP as the mode to perform the required operations for these commands to work.
This is a key piece of information as all we need to do now is make sure our server support all the endpoints git will use and also handle the response and request for these endpoints.
Luckily there are already tools/libraries which does this and we won’t need to implement them from scratch.
For my project in Go I came across this project https://github.com/asim/git-http-backend/blob/master/server/server.go
The author of this project had already implemented all the necessary endpoints and the core functionality for each of them using go’s default HTTP server as the base.
Since I was using Gin I had to tweak the code to make it work specific to Gin framework, which was comparatively easy as all the Handler function needed was Request & Response .
That’s it.
We now have the required endpoints created and we can now take our code for a spin.
Let’s Execute
Let’s say if your server was running on port 8000 and you had created a repo named test .
In your local terminal you could do git clone http://localhost:8000/git/test and it would clone the repository (it would be blank if you hadn’t pushed anything).
You can now do git commit and push the changes, the changes would now be present on your remote repository.
References:
To read more in-depth about git I’d recommend the official docs (for in-depth)/Atlassian’s version (for brief intro)
Some more in-depth resources below:
Next Steps
Git Hooks based events
I modified my server to setup a WebSocket connection on a route which looks like http://localhost:8000/ws/repo-name/ . This would now create a per repo based WebSocket connection between the clients and our server.
I could then use this connection to push various information to clients.
An example of such use-case is below:
Together with Git hooks ( post-receive ) I can now push messages on the WebSocket Connection to the clients whenever a new commit/ref is updated on the remote repository.
This would allow me to setup a front-end in future which can listen to these events and show a message to the user.
You can read more about the Git hooks here
For making the above feature you would need to setup Server Side Hooks.
Authentication
To implement basic authentication where a different user can not push to another user’s repository you can use the pre-receive git hook on server side and add the authentication logic and block the operations if access is denied. (There may be other ways to implement this feature this is something I thought of and haven’t implemented yet.) | https://medium.com/@kaushik.tech/creating-a-github-like-website-in-golang-d131ebc9e7ac | [] | 2020-12-26 16:52:19.154000+00:00 | ['Git', 'Github', 'First Post', 'Golang', 'Technology'] |
2,438 | Technology, disruption and Pokemon GO | Is not amazing that a company that two weeks ago was consider as falling behind, with almost no presence on Smart Phones, launched what have become the hottest game on the planet, literally changing the world overnight. Pokemon GO is only about a week old. A WEEK!
Love it or hate it.
That’s the disruptive power of technology and software in the world we’re living in.
How long this will last? Who knows…
I’d like to think that not even Nintendo expected such a massive success, but most interesting to me is what are they going to do next to keep the momentum and/or capitalise what they achieved, because if there’s something I can agree is that they are jumping late into the Smart Phone games bandwagon, but also, clear example that you don’t have to hit first to win.
Is it a matter of the right timing? Or to be lucky at the right time when finally decided to jump in?
I personally don’t believe in just “luck”, so I will think about “circumstances”. | https://medium.com/thoughts-on-the-go-journal/technology-disruption-and-pokemon-go-30c08bf15ad7 | ['Joseph Emmi'] | 2016-07-18 22:52:14.202000+00:00 | ['Technology', 'Software', 'Pokemon Go', 'Disruption', 'Journal'] |
2,439 | From Medicine to Education: How Technology is Changing the World | Technology has changed our lives in many different ways and there is no industry or profession that has not been affected by it.
As we use our smartphones constantly every day, it is sometimes easy to forget how far we’ve come in just a few short decades. Who would have thought, after all, that it would be possible to walk around with tiny little supercomputers, permanently connected to unfathomable amounts of data, available at a touch? But we’re very far from the end of that journey. In fact, we’re only just getting started. Read on as we discuss the different sectors that have been most changed by technology, from medicine to education.
Education
Technology is used in education from a young age. It simplifies access to resources and makes education more affordable. Perhaps the biggest change in education has been the advent of online courses. Taking the example of nursing, every nurse in the UD has to become a Registered Nurse (RN) to start with. If they want to progress further in their nursing career, they need more qualifications, but this would have been difficult before online courses were introduced. Working and attending college was not always possible, but now if they want to achieve a nursing MSN online, for example, there are fewer obstacles in the way. Online courses have many advantages, including:
Flexible hours so that studying can be fitted in around lifestyle and family commitments
Cheaper fees as the educational facility have lower costs for the courses
Location is irrelevant as visits to the college or university are not needed — courses are 100% online
No commuting worries, you work from the comfort of your own home
You can work at your own pace without any pressure to be faster or slow down
Although nursing was used as an example, online courses are available in all industries and professions. They mean that more people than ever are able to gain degrees and higher qualifications which has had the effect of many people earning better salaries, as well as the shortage of qualified people in the US becoming less of a problem.
The Finance World
Banks now have apps that you can use on your smartphone to make bill payments or transfer money in just a few seconds. You no longer have to wait in long queues at a branch. You have instant access to your accounts, and this makes managing your finances much simpler. There are other things in the world of finance that on easier on your digital device, such as:
Trading on the stock market. There are software programs you can download to help you with this. Buying stocks and shares was something that used to be out of the reach of most people, but technology has meant that now it is available to everyone
Lending online. If you need a loan for a new car or some other large item, it is now quick and simple to raise the money, and you generally get a decision in just a few minutes. There are also many more lenders online and borrowing is no longer restricted to banks
Shopping has become easier too. There is very little you cannot buy online. Without the overheads of a high street store, retailers are often able to sell their products at lower prices. Hopefully, this means more money will stay in your bank account.
Savings and investing. You can search online for all types of savings and investments and find something that suits you. This has encouraged more people to put money aside for the future, as they are not restricted to saving huge amounts every month but can save what they can afford when they can afford it.
The biggest worry for anyone dealing with their finances online is the security issues involved. However, all financial institutions and online stores have tight security in place. No one can promise that your details will never be compromised, but much of the security lies in your own hands. Never give your information to anyone, not even people you know, and it is far more likely to stay safe and secure.
The World of Medicine
The world of medicine has been changed greatly by technology, including things such as:
Quicker and more accurate test results
More precise surgery which means that patients recover quicker
Better record-keeping
Record sharing online so that all doctors and nurses treating a patient have access to the same medical records
Doctors are able to deal with each patient in a shorter time, so they are able to see more people, which has cut waiting times
New drugs and treatments that work more effectively and have fewer side effects
Innovations in the world of medicine have meant that most people are living longer and healthier lives, as many of the developments are related to the prevention of illnesses and health problems, and that has made a huge difference to the health of the nation.
Leisure Time
At one time, the only entertainment in the home was TVs or music, and perhaps playing board games. Now, the list of ways you can entertain yourself at home is endless. On your digital device you can:
Play games of all types online, including playing in casinos and playing against other people
Watch TV programs at a time to suit you and not when the TV company dictate
Watch films and sports
The internet is the largest library in the world and many books can be downloaded free of charge
Interact with other people. Often this is through social media, but you can also connect with people through other platforms such as WhatsApp and Skype
There has always been the worry that digital devices will mean there is less interaction between family members and friends. However, recent research has shown that this is generally not the case. Families and friends watch films and programs together or play online games against each other. They chat on Facebook, LinkedIn, and the many other social media platforms, and it seems more interaction is taking place in this way over face to face.
Just a few years ago, much of the technology we have today was only seen in Sci-fi films and it was hard to imagine that it would ever really happen. But it has, and no doubt in a few years there will be more developments we have not thought of at the present time. | https://medium.com/edtech-trends/from-medicine-to-education-how-technology-is-changing-the-world-a75c9c592ee1 | ['Alice Bonasio'] | 2020-02-06 14:15:16.670000+00:00 | ['Tech', 'Edtech', 'Medicine', 'Education', 'Technology'] |
2,440 | What is Big data? Where does it come from? and Who uses Big Data? | What is Big Data?
Over the several years, there has been a grown understanding of the role that big data can perform in delivering valuable insights to an organization. Nevertheless, in term of definition, there are many different concepts and meanings of Big data that was interpreted and announced by data specialists and experts.
Although the big data definition is still various in general, I would like to suggest the most reasonable meaning in my opinion. It is the 5V’s of Big data which was state by Keith Gordon MBCS CITP, former Secretary of the BCS Data Management Specialist Group. The 5V’s of Big data is the combination of the following five characteristics:
Volume: where the amount of data to be analyzed and collected is sufficiently massive to require complex and special analytics.
where the amount of data to be analyzed and collected is sufficiently massive to require complex and special analytics. Velocity: where the data is built at high rates or created nearly in real-time.
where the data is built at high rates or created nearly in real-time. Variety: where the data have the various types of data from different sources. For example, structured data is any data that locate in a fixed field within a record or file, and unstructured data which is the form of binary data such as video, picture and audio.
where the data have the various types of data from different sources. For example, structured data is any data that locate in a fixed field within a record or file, and unstructured data which is the form of binary data such as video, picture and audio. Value: where the data has perceived or quantifiable benefit to
the enterprise or organization using it. (ex. customer insights, the trend of time series)
where the data has perceived or quantifiable benefit to the enterprise or organization using it. (ex. customer insights, the trend of time series) Veracity: where the correctness of the data can be investigated and assessed.
Although the 5V would be the main nature of Big data, it can be altered and changed by innovation and the capability of data adoption. In other words, the characteristics of big data can be more advanced in the future.
Where does it come from?
After we understand what is exactly call big data. Now, the question is where does it come from? How can it be generated? The answer is so simple, big data was from us by consuming and spending time on social media platforms. For example, commenting on a Facebook post, uploading pictures on your Instagram and watching videos on Youtube, all these activities were recorded immediately once you give any actions by such platform owners. However, social media is not only a source of Big data. I would like to share the 3 primary sources of Big data that I have found from cloundmoyo website which are:
Social media data: as I mentioned, it comes from the Likes, Tweets & Retweets, Comments, etc. via your preferred social media platforms. Transactional data: the data which is generated from all the daily transactions that take place both online and offline. Invoices, payment orders, storage records, delivery receipts. Machine data: the data which is defined as information which is generated by industrial equipment, sensors that are installed in machinery and even weblogs which track user behavior.
Who uses big data?
So now it seems like Big data has become a part of our life, and it is so valuable. Why? Because many companies can use big data to their advantage such as automating processes, gaining insight into their target market and improving overall performance using the feedback readily available. Let look at the examples of some of the industries that boosting their business by using big data.
Financial services: transactional data rapidly grows due to the spending ability of users that faster and more comfortable via E-banking or Online banking. The bank is able to use client data to guide decisions for credit card offerings for individuals, the credit limit and benefit, and reward offerings.
Transportation services: currently, using ride-hailing service is widespread intensively in most big cities. More interestingly, ride-hailing service firms allow to collect a vast amount of data in both rider and client-side. Although utilizing these data on individual user is challenged, but it can increase the long-term profit and the service personalization. For example, in case that rejects a ride after request, due to the unsatisfied price, the Uber application can respond back to the user in a few minutes later that the price is now lower (while knowing the precise location of the rider). This function can be done by analyzing geo-localization in real-time. Certainly, such a large size of client data can lead firms to offer the targeted promotion to users.
Hospitality service: Hotel records and stores their customer information in order to exploit the data. After the client checks out, hotels are able to collect the guest detail and preference, for example, the special requests, vegetarian, room dining service, etc. Thus, before the next accommodate, the hotel can facilitate the customer’s requirements in a more personalized way. Alternatively, hotels analyzed historical data to create and deliver targeted promotional offers. Another interesting platform in hospitality service is online platforms such as Agoda, Airbnb, VRBO and Homestay, to name a few. The platform allows guests to rent accommodation (not only hotel but also apartment) via application or website. Normally, these service providers usually are not own any accommodation but earn a commission fee (percentage service fee) from every single booking and host. The price is generally set by the host, it can be affected by the period of the year, the weekday, the number of stays, etc. Accordingly, the amount of data compiled by the platform is valuable due to both the property and the user data were collected. Consequently, analyzing the data allow companies to enhance service by recommending a reasonable price to host and developing the ranking of the offering options for each browsing user, based on previous preferences.
To sum up, it can be seen that the great of big data depends on technology development. The key to the information explosion is online platforms such as websites, mobile applications and social networks. Consequently, the more people connect to the platform the more data are generated. Correspondingly, the discussion of utilizing big data evolve service operation is identified by the sample of improved services in the financial, transportation and hospitality industry. In fact, not only the provided example but there also the big data used cases across different industries occur in the world. Ultimately, the technologies are continuously developed and change over time, the presence of big data is one of the phenomenons that everyone should aware of. | https://medium.com/@priniamchula/what-is-big-data-where-does-it-come-from-and-who-uses-big-data-9644914c1707 | ['Prin Iamchula'] | 2021-04-25 12:28:15.852000+00:00 | ['Big Data Analytics', 'Information Technology', 'Contextual Marketing', 'Big Data', 'Data Science'] |
2,441 | Billbid — Real Life’s Ad Blocker. Using augmented reality to change… | Billbid — Real Life’s Ad Blocker
Using augmented reality to change billboards around Nashville
There’s a ton of hype about augmented reality nowadays. Snapchat uses augmented reality to give you the puppy face filter, Pokemon Go uses augmented reality to spawn a Pikachu for you to catch, and Apple’s measuring app uses augmented reality to help you measure your dining table. Augmented reality allows you to add some fantasy to the world, and that’s exciting.
Education, sports, manufacturing, medical services, and many others industries are finally adopting AR. While Apple has quickly become the biggest augmented reality platform in the world, Magic Leap just released their new technology. We’ve seen the recent growth of new startups, conferences, headsets, and content. But what’s next?
The world that I’m about to describe might sound crazy, but please use your imagination. In 2028, perhaps everyone will be wearing augmented reality glasses during their morning commute, using them to watch Friends and catch up on emails. Perhaps all car windshields will contain augmented reality technology that helps your designated driver navigate to the nearest McDonalds drive-thru. Around your town, imagine that this same technology can change or block billboard ads on the street. Ad-blocking on the internet is popular today, so folks may want to do it in real life too (at least in 2028). This app would even be great for companies that want to try out different billboard ads to see which ones are best.
We decided to make an app that does just this. Billbid lets you edit some of the billboard ads around Nashville through the perspective on your phone screen, allowing you to block ads or replace them with different ones. Here’s how it works.
Under the Hood
Billbid uses a built-in experience platform provided by Unity, allowing you to create objects and place them in an infinite 3D space. It also uses an image recognition tool called Vuforia, so when your phone camera recognizes an image, the app does stuff. Combining these features together, Billbid adds new items to a specific point in 3D space when your camera is familiar with an image it sees. In our case, the new items added are new images that overlay existing ones. So, rather than making an image in the real world disappear, we actually just cover it up using our phone camera (we’re just covering up billboards). This concept could be used on car windshields in the future, assuming that billboards are primarily viewed through car windows.
In Action
Rolling through the streets of Nashville, I demonstrated Billbid blocking billboard ads via my phone! There were a few bail bond ads we decided to replace. Some before & after photos…
A little rough on the edges, but gets the point across.
Same street, different billboard.
Note: These were really created with the app and not photoshop.
We’ve Prepared for a New Billboard Ad Industry
With the concept of Billbid in mind, what if companies could place bids to replace billboards around Nashville with their own ads using augmented reality? The highest bidding company would get their billboard displayed through all car windshields, or maybe a certain percent of car windshields. To prepare for this, we created www.billbid.glitch.me, a billboard ad bidding site. On the site, just search for a billboard you like in Nashville, place a bid, and watch your ad get displayed for thousands of people through car windshield augmented reality technology. Bids will be accepted in 2028. | https://benscheer.medium.com/billbid-real-lifes-ad-blocker-c77070e58dca | ['Ben Scheer'] | 2018-11-12 17:41:05.945000+00:00 | ['Technology', 'iOS', 'Advertising', 'Augmented Reality', 'Vuforia'] |
2,442 | Why exponential technological change needs ‘exponential humanity‘, as well | This post was first published on FuturistGerd.com in 2016 -and it rings even more true today!
IS IT TIME TO BECOME EXPONENTIALLY HUMAN?
Machines can increasingly mimic the human brain and may indeed soon outpace it in certain respects such as calculations per second or storage capacity. Yet I believe that for the foreseeable future no mechanical apparatus, algorithm or bot can have an original thought, or create a meaningful work of art, or invent a new field of science, or show authentic empathy and compassion. Sure, machines will be increasingly good at amazing simulations but (hopefully) never at real existence. Yet, maybe the lure of those magical and ultra-convenient simulations is precisely why we might get hooked on them?
I think it is this sense of being, of existence, of Dasein (as some German philosophers have put it) that is missing entirely within machines, computers and algorithms, no matter how fast and powerful they may become.
Re-discover human potential, further human flourishing!
It might just be that as intelligent machines increasingly remove routines from our lives — and will soon automate and virtualize many more complex tasks, as well — humanity in the twenty-first century is called upon to re-discover and express its full potential. This may include the mind-body connectedness that has been getting lost since the Industrial Revolution, along with more holistic approaches towards a future that will actually support human flourishing (see thechapter about happiness and eudaemonia in my new book ‘Technology vs. Humanity’). Ancient Greece and Renaissance Italy may provide some clues to an educated humanity that pursues the arts in synchronicity with commercial and technical excellence.
Traditional education is becoming obsolescent
The great irony is that official education in most countries — with a strong focus on STEM disciplines (Science, Technology, Engineering and Mathematics) and an unfortunately all too common disparagement of liberal arts — is actually obsolescent. The liberal arts, so-called because they once belonged to the free, will become the platform for exponential thinking in the twenty-first century — also organizationally, where individual creativity has already overtaken traditional business processes and ROI-obsession as the primary guarantor of survival and success.
Until just a few years ago, humanness and creativity in a world of commodity products (and services) was actually a risk. But in a world of global synchronicity with infinite variety and inevitable abundance (see music, films, travel, and very soon, banking and energy), creativity becomes a Must. As the arts have withered and started to mimic science, the rich irony of our new century is that organizations need to learn to think and act like artists in order to survive.
To an artist, chaos is natural. Steve Jobs, the most iconic entrepreneur of the new era, was essentially an artist in a Chief Executive’s disguise. As Apple burst the boundaries of computing to become a universal ecosystem, humans are now called upon to leave behind their passive, rote and scripted teaching-by-example and best-practices. Organizational leadership, by turn, will evolve into the art of learning by inquiry — a new Renaissance of discovery.
We need to become ever more (‘exponentially’) human to counter-balance exponential technological progress.
Image below via
Originally published on futuristgerd.com: | https://medium.com/futurist-gerd-technology-business-and-digital/why-exponential-technological-change-needs-exponential-humanity-as-well-6de0879ad7b7 | ['Futurist Gerd Leonhard'] | 2021-01-03 16:29:26.815000+00:00 | ['Humanity', 'Robotics', 'Technology', 'Digital', 'Tech'] |
2,443 | 4 ways retailers can use blockchain to their advantage | From artificial intelligence and conversational commerce to robots and the future of payments, technology is helping retailers improve the customer experience and create a competitive advantage. These topics will be covered in depth at the upcoming NRFtech 2018, where technology and innovation leaders will explore the latest retail tech and find new ways to connect with their digitally savvy customers.
While blockchain was developed a decade ago, it has been compared with the internet in terms of the impact it could have on business and society. Michael Carney, principal at venture capital firm Upfront Ventures, says blockchain and decentralized systems give retailers an opportunity to drive efficiency and establish an advantage. Here are his thoughts on what retailers need to know about blockchain and ways it can be put into use.
When thinking about the opportunity to implement blockchain, every retailer should seek to answer a few basic questions:
What are my highest-leverage business problems that could benefit from greater trust, transparency and collaboration between stakeholders? Where and how am I using data in my business, who controls that data and what risks am I undertaking in its storage and utilization? What are my internal capabilities to test and implement these emerging technologies? If inadequate, what is my plan to bring in additional expertise?
At its core, blockchain is primarily exciting for its ability to enable greater trust, transparency and collaboration across constituencies that would otherwise struggle to achieve as much. Additionally, the use of “smart contracts” offers a never-before-possible means of automating and auditing transactions. For retail, these benefits can be realized from vendors to employees to customers.
Michael Carney
A few areas stand out as those likely for retailers to see the greatest near-term benefit from embracing blockchain:
Supply chain and inventory management
As supply chain complexity increases, there’s an obvious opportunity to drive efficiency through greater collaboration and transparency between multiple constituencies including manufacturers, distributors, shipping carriers, insurers, importers, wholesalers and retailers. Knowing in real-time the exact source, location and state of all inventory in the system could be a game-changer for most businesses, particularly those dealing in perishable or luxury goods.
Unlike current systems, which rely on each constituent to maintain its own distinct and disconnected database — meaning limited and often delayed insight into the status of goods elsewhere in the system — blockchain facilitates real-time and trusted data sharing among constituents and can offer consensus about the true state of the system to all parties. That is especially transformative for categories that deal with counterfeiting or questions about social responsibility in sourcing and manufacturing. Layering on other technologies such as Internet of Things can further supercharge these impacts.
Payments and accounting
Blockchain enables the use of cryptocurrencies — sometimes also called tokens — as of a means of exchanging value or data. The most natural and widely adopted initial use case of these tokens today is for payments, specifically those for which the current financial infrastructure is either too inefficient or expensive to facilitate effectively. For example, cryptocurrencies offer real advantages for both cross-border payments and micro-payments.
Most attention is focused on consumer payment applications, where customers complete transactions in cryptocurrencies rather than traditional currencies. But these benefits are even more applicable today in the business context, where the size of payments and the large number of international transactions make cryptocurrency settlement an attractive proposition.
Additionally, the end-to-end data trail that blockchain provides will dramatically ease the accounting and finance burdens on organizations when applied to areas such as supply chain and inventory management. And relatedly, smart contracts will ease the hassles associated with collection and enforcement under traditional transaction structures — think instant collection and payment, automated refunds, automated insurance settlement and payout, and the like.
Loyalty and rewards
It’s a near-certainty that the loyalty programs of the future will be tokenized. Most loyalty programs today create frustration and friction rather than improved customer experiences. Tokens can dramatically simplify the tracking and managing of loyalty points, rewards cards and paper or digital coupons. Additionally, blockchain offers real-time liquidity, making points more easily swappable between consumers and across retailers. Retailers will still be able to reap the customer insight benefits, but in a more consumer-friendly way.
Retailers who choose to adopt blockchain could expand loyalty partnerships without adding complexity, driving increased brand awareness and program adoption while making it possible for smaller retailers to compete effectively with larger competitors. A welcome side effect for many program operators will be a reduction in the balance-sheet liabilities associated with unredeemed points and rewards.
Advertising and consumer data
Consumers are increasingly fed up with perceived abuses of their personal data and regulators are beginning to ask hard questions about data security. We should expect blockchain to underpin the consumer data and advertising systems of the future, enabling collaboration between stakeholders and reducing fraud while also giving consumers greater transparency and control over their own data — how it’s used and by whom. For retailers who collect and retain consumer data today, getting ahead of this coming shift by embracing open and transparent blockchain-based models will have a positive impact not only on operations, but at a public relations level as well.
Michael Carney is a principal at Upfront Ventures, a venture capital firm that invests in technology-led businesses in digital media, consumer internet and retail innovation. | https://medium.com/nrf-events/4-ways-retailers-can-use-blockchain-to-their-advantage-1b4139a0b57d | [] | 2018-04-26 14:34:57.495000+00:00 | ['Retail', 'Security', 'Blockchain', 'Cryptocurrency', 'Technology'] |
2,444 | How to Create a Tesla Geofence: Automated SMS from your Model 3/S/X | *Update 11/13/2020: Unfortunately, Tesla has started restricting access to their API through most major services including AWS. This decision has broken this tutorial.
Introduction
Your Tesla can do many amazing things from driving itself to updating itself. Let’s add another cool trick — automatically texting someone when you get close.
Here is my motivation for implementing such a feature. Every day I pick up my daughter from swim practice, which means driving through an obnoxious pickup process. She stays inside until I arrive, park, and text her that I am here. Or, she just texts me over and over “are you here yet?” while I am driving. Once she knows I have arrived, she grabs her stuff and walks out to the pickup area, which has a car line I pull up to after giving her a couple of minutes to get outside.
After repeating this 500,000 times, there are some unnecessary steps in this whole process I really want to eliminate for my own sanity. I could make everything so much quicker if I text her that I am almost there while driving and her meet me at the pickup area without parking first. That solution would be unsafe for obvious reasons (don’t text and drive!).
Here is what I want to happen. When I am about 3 minutes away from the pool, my Tesla automatically sends her a text message to get her stuff and meet me at the pickup area. I don’t park but go straight to the front where she is magically waiting for me, ready to get in the car every time.
This tutorial will show you how to create a Tesla geofence to automatically send a SMS text from your Model 3/S/X when you get close to a location. This is particularly useful for daily tasks like picking up the kids from school or practice. It is also another “look at what my Tesla can do that your puny car cannot” feature.
What is a geofence?
A geofence is a virtual perimeter or “fence” around a physical location. | https://medium.com/initial-state/how-to-create-a-tesla-geofence-automated-sms-from-your-model-3-s-x-2efabc6b7335 | ['Jamie Bailey'] | 2020-11-13 15:24:52.489000+00:00 | ['Geofencing', 'Gps', 'Technology', 'Tesla', 'Data'] |
2,445 | Hunting for the perfect marketing budget — constantly. | Issue #07, 14th December 2020
Biswajit Das
Like all budgets, the marketing budget is a reflection of a plan — the marketing plan. And as we know, a plan is a priority task list which is based on forecasts of the key parameters.
In this case, the key parameters are:
Sales Plan
Competitors’ Plans
Options Available &
The Market.
How much to spend is a function of how much sales we want, while accounting for the competitor’s performance, as well as the volume of their ‘marketing noise (& to what extent we want to counter it!)
Where to spend must acknowledge the alternative options available, their cost, importance & efficiency. As also, which market(s) there’re opportunities / threats.
Given the huge volatility of the above parameters, it stands to reason that marketing budgets must be validated frequently!
Just like an engineering system constantly hunts for equilibrium,
so also a marketing budget must hunt for its right level
as frequently as possible.
From a corporate control angle, you must benchmark spends vis a vis your peers. And also track spends-as-a-percentage-of-revenue to compare with your category & industry. This could act as the first trigger for under- as well as over-spending.
The above approach must involve a multidisciplinary team of specialists from marketing, finance & specialists.
Marketing highlights growth ambitions / threats-to-be-thwarted from competition for each category, brand & market.
Finance highlights the company’s financial position & projections.
And the specialists will highlight / confirm “value-for-money” as well as “must-have” options.
With this approach, the marketing budget can be built & regularly revised — including reallocation between brands & markets.
Execution Is As Important As Planning
A plan can only be as good as its implementation. And given the dynamic nature of the market, executing a marketing plan itself involves constant evaluation & planning at a micro level.
On completion, the plan is usually found to have undergone significant changes. Hence it needs rigorous monitoring prior to bill-passing. At one level, it’s monitoring which highlights a lot of discrepancies, which yields real savings. (Not enough can be said about the cost-saving effects of monitoring in passing advertising & marketing bills.)
At another level, plan performance is dissected to learn from each plan execution in an exercise generally referred to as post-implementation evaluation or ‘post eval’.
Given that plans cut across multiple markets, regional offices, budget heads & myriad activities, all this is far from easy.
Marketing Data Visibility
It’s essential to review marketing budgets vis a vis performance. And the competition will continue to innovate — both product as well as marketing — so it’s best to keep a close watch using syndicated competitive intelligence. Regular reviews of marketing performance, ROI & the competition is mandatory to increase or decrease spends.
If ‘post eval’ is the micro-component of ‘hunting for equilibrium’ in marketing budgets & plans, then regular review is the macro component.
Both need free flow of data. ‘Post eval’ is currently executed in a somewhat ‘painful’ manner because plan data is usually scattered over multiple spreadsheets. The second is also done infrequently because marketing data is notorious for being ‘invisible’ to the C Suite!
To achieve this, it’s mandatory to make your marketing data more ‘visible’, which by the way is notorious for being practically invisible to the C-Suite!
Merge The Data Silos
One of the obvious reasons behind ‘invisible marketing data’ is the existence & promotion of data silos! Take plan data which lies scattered in individual spreadsheets — these must typically be aggregated from individual spreadsheets to a central sheet before processing for reporting.
Analytics : Frequency vs Accuracy
Marketing data along with sales & other data must be rendered on real time dashboards which are designed to promote analysis & review by the multidisciplinary team of marketing, finance professionals along with specialists.
Use analytics to recalibrate your marketing spends.
The above exercise is just the starting point and needs to be refined with regular analysis of market trends, forecasts & deep business insights for accurate budgeting. (This actually means managing flexible marketing plans which change with the latest insights — on a regular basis.)
This is why analytics exercises must update models every few weeks to help confirm trends & re-chart the course. A lower accuracy is acceptable when the frequency is high.
This also includes building attribution models in the short term for assessing impact of various media, promotions & other factors. The basic idea is to use data sources quickly to test if they exhibit any significant divergence from current understanding.
Need for Ready Data
Assessments & insights may not be perfect, but speed is of the essence.
To get speedy insights on a sustainable basis, clean data must be readily available without much ‘struggle’.
We hope you enjoyed reading this piece & would love to hear from you.
Email us at [email protected] | https://medium.com/media-trends-digest/hunting-for-the-perfect-marketing-budget-constantly-933f489b248e | ['Biswajit Das'] | 2020-12-14 08:00:28.170000+00:00 | ['Marketing Technology', 'Marketing', 'Marketing Automation', 'Automation', 'Data Analytics'] |
2,446 | Reviewing A/B Testing Course by Google on Udacity | Reviewing A/B Testing Course by Google on Udacity
Read to find out how A/B tests are performed at Google.
A/B tests are online experiments which are used to test potential improvements to a website or mobile application. This experiment requires two groups — control group and experiment group. Users in the control group are shown the existing website whereas users in the experiment group are shown the changed version of the website.The results are then compared and the data is analyzed to see if the change is worth launching. The A/B testing course by Google in association with Udacity explains the various metrics that need to be considered for analysis , how to leverage the power of statistics to evaluate the results and finally whether the change must be launched or not. The course does not delve deeper into the statistical concepts instead explains the business application of these tests.
A/B tests are used extensively by various companies. For example, Amazon used it to test user recommendation feature , Google uses it so extensively that they had once used it to test 41 different shades of blue in UI! Albeit the term is recent the method has been in practice since a very long period of time. It has been used by farmers to see what method yields the best crop or by doctors for clinical trials to check the effectiveness of a drug.
Credit : Unsplash
Policy and Ethics :
Before running A/B tests one must consider a few things.
Risk : What risk is the participant undertaking while participating in the test. Minimal risk is defined as the probability and magnitude of harm that a participant would encounter in normal daily life.If the risk exceeds this then the participant must be informed about it. Benefit : It is important to be able to state what the benefit would be from completing the study. Alternatives : what the other alternative services that a user might have, and what the switching costs might be, in terms of time, money, information, etc. Data sensitivity : It refers to the data being gathered , whether it would reveal the identity of the person , security measures taken to safeguard the data and the implications it might have on a person if the data becomes public.
Choosing and Characterizing Metrics :
The metrics can be divided into two categories. Invariant metrics are the ones that should stay the same across both the groups. For instance , the number of people in both the groups ,distribution based on demographics. Evaluation metrics are the ones that you will use to measure the effectiveness of your experiment. For example, if you are testing for a change in the number of users who click on “start button”, click through probability (number of unique visitors who clicked / total number of unique visitors) could be your evaluation metric. Sanity check is performed to check that the invariant metrics you have chosen are correct. A/A tests can be performed to check this.
Sensitivity and robustness must also be considered. The metric should neither be too sensitive nor too robust. Mean can be too sensitive to outliers whereas a median may be too robust to capture the change. 90th or 99th percentiles are considered as good metrics to notice the change.To find a good balance between sensitivity and robustness one can perform A/A tests , use data from previous experiments or perform retrospective analysis of logs.
Designing an experiment :
Unit of diversion is used to define an individual person in the experiment.
User id : If a person has logged in to the account , we can use user id as unit of diversion to track the activities.
: If a person has logged in to the account , we can use user id as unit of diversion to track the activities. Cookie : It is a small piece of data sent from a website and stored on the user’s computer by the user’s web browser while the user is browsing.Cookies are browser specific and can also be cleared by the user.
: It is a small piece of data sent from a website and stored on the user’s computer by the user’s web browser while the user is browsing.Cookies are browser specific and can also be cleared by the user. Event based diversion : Any action such as reloading a page can be considered as an event. These are mainly used for non user visible changes. For example, latency change.
Population : You must also consider the population on which you will run the experiment, whether it would be the entire population , people from a specific region or people from a specific sector.
Size : The size of the population is also an important factor. It is influenced by parameters like significance level (alpha), sensitivity(1 — beta) and whether the unit of analysis is equal to unit of diversion.
Duration and Exposure : It refers to the time period for which you want to run the experiment. Also , it is very important to determine when to run the experiment. For example , on weekends or weekdays , holiday season or non holiday season. Generally, it is good to balance between the two so that you understand the trend and seasonality effect better. Exposure is basically the fraction of population to which you want to expose the new feature.
Analyzing results :
Sanity check : After you have the results of the experiment with you , the first thing you have to do is check whether your invariant metrics are the same across both the groups, like whether the population distribution was done correctly.
Single metric : To check if a single invariant metric is within the acceptable range you will have to calculate the pooled standard error and multiply it with Z- score (1.96 for 95% CI) and find the margin of error. Then find the lower and upper bounds and check if the difference in values for a particular metric is within the range.
You should also perform tests on evaluation metrics to check if the result is both statistically and practically significant. For a result to be statistically significant the range of the difference in values must not contain 0 and for a result to be practically significant the range should not contain the practical significance boundary. Also, to double check the result a sign test may be performed. If the two tests do not agree with each other we might have to look deeper into the data as it might be due to Simpson’s paradox(individual subgroups are showing stable results but their aggregation is causing the problem).
Multiple metrics : If you are considering multiple metrics at a time , it is possible that you might be seeing one of the metric as significant because of a false positive. In order to deal with this you may use bootstrapping or bonferroni correction.
After the results have been analyzed you have to answer a few questions like is the change significant ? Do I understand the effect of this change? Is the change worth launching after reviewing other business factors? Based on this you may either launch the change , perform some more tests or may not launch the change. You may need to perform A/A tests in pre and post periods for sanity checking and to see the effects of changes on users.
Conclusion :
A/B testing course by Google on Udacity is a must for anyone who wants to understand the process of A/B testing. The project at the end can further help you in understanding the concepts. This is just an overview of the course. | https://towardsdatascience.com/reviewing-a-b-testing-course-by-google-on-udacity-2652b2235330 | ['Suyash Maheshwari'] | 2020-05-12 12:16:22.389000+00:00 | ['Technology', 'Google', 'Data Science', 'Business', 'Testing'] |
2,447 | AI Is Enhancing Global Agriculture | AI Is Enhancing Global Agriculture
With over 25% increase in population by 2050, there is only a 4% increase in agricultural land. AI is helping to make more out of the preexisting land. Aravind Sanjeev Follow Dec 18, 2020 · 5 min read
Farmer holding freshly harvested crop. Image for representational purpose only.
Read the original post in humaneer.org
The United Nations Food and Agriculture Organization estimates that the world population will increase by an additional 2 billion in 2050. At the same time, the land available for agriculture will only increase by 4%. That’s over 25% increase in population to only a 4% increase in agricultural land. There is a need for the world to accommodate the exponentially increasing population to a marginally increasing food production.
Rapid industrialization in the 19th and 20th centuries allowed agriculture to flourish. Our food production has reached skyrocketing heights with most of the industrialized world managing a surplus. Along with large food availability and developments in medical science helped improve the average human life expectancy. This has resulted in a population boom that is currently overrunning our resource supplies including the supply of food.
We are in desperate need of another agricultural revolution. Like how the previous revolution was lead by the invention of mechanical machines, fertilizers, and pesticides, the new revolution is likely to be lead by artificial intelligence. Nazrini Siraji is a Google developer that developed an app called “Farmers Companion”. She developed it using Google’s open-source machine learning platform, TensorFlow. The app allows you to detect Fall Armyworm (FAW), a crop-destroying caterpillar that’s affecting African agriculture. The app even identifies which stage the worm is in its lifecycle and suggest appropriate remedies.
What we saw here is disease detection, one of the ways AI can help increase our crop productivity. A similar story came from Microsoft, which is working with farmers from the Indian state of Andhra Pradesh. They were able to achieve 30% more crop yield per hectare just by detecting the appropriate time to sow. Traditionally, farmers used to sow using conventional wisdom. AI has not just helped the farmers by aiding appropriate sowing time, it also forecasted the price of commodities which helped the state’s policymakers to determine the minimum support price.
Blue River Technology is a company that specializes in AI-based weed control. Weed Science Society of America (WSSA) has estimated that the US and Canada are losing over $43 billion annually due to uncontrollable weeds. The company has developed a device called See & Spray which uses machine learning and computer vision to identify weeds and spray optimal amount to control them. This significantly reduces the amount of pesticide that is being used up on farms. The company claims up to an 80% reduction in the use of pesticides. The United States reportedly uses over 1 billion pounds of pesticides every year.
When it comes to weed detection, drone imagery can be used to cover large chunks of land. An estimated 60 hectares of land in just 30–40 minutes. The same is also used to check plant health, drainage and irrigation problems, pest infestations, etc.
Another problem plaguing the agricultural industry is labor cost and the availability of workers. Agriculture is a field most people are increasingly opting to stay out of. Labor in the agricultural industry is also seasonal making it an unreliable choice. AI seeks to solve this problem by developing automated machines. Machine learning is used to detect an appropriate harvest. Root AI is one such company looking to develop machines capable of automating harvest. Their Virgo harvesting robot identifies ripe fruit and picks them using mechanical arms.
A second problem with human workers is the need for regular recess, inability to work for long periods, and inability to work during nights. The robot could work up to 20 hours straight during daytime or night. This massively improves the speed and efficiency of the harvesting process.
Soil erosion is the next key factor that is dramatically affecting global agriculture. The United States Department of Agriculture has estimated that soil erosion causes approximately $44 billion in losses in the agricultural sector. Berlin-based tech startup PEAT tries to solve this problem with their deep learning application Plantix. It detects nutrient deficiencies in the soil and also provides restoration tips. The app is also capable of detecting plant diseases.
A similar app comes from California-based tech startup Trace Genomics. The app works similar to Plantix and analyzes soil to learn its strengths and weakness. The knowledge is used to increase the potential for healthy crop production.
Another important application of AI in agriculture is weather prediction. The Weather Company now acquired by IBM is leading the industry in AI-assisted weather prediction. The new IBM Global High-Resolution Atmospheric Forecasting System (IBM GRAF) is helping it push even greater heights. IBM GRAF has replaced the traditional 12 KM range between data points to 3 KM. This has increased the accuracy of weather prediction. It has also added more places like Beijing, New Delhi, and Brazil which are traditionally off-limits in typical weather prediction systems. This lead to the wide democratization of weather prediction as it gathers data across the entire globe and shares predictions in real-time. It is also getting more accurate as machine learning improves over time.
All this encompasses the concept of precision farming. The ultimate aim of precision farming is to increase profitability, efficiency, and sustainability. It manages to accomplish it via the combination of AI-powered disease detection, soil analysis, weather prediction, and suggested optimized prevention methods.
Now, implementing AI in farming is not without its challenges. Although such advance in AI has helped eliminate a lot of guesswork in agriculture and instead lead farmers to opt for meaningful decisions, there is a lot to move forward. The system isn’t perfect but it is in a rapidly maturing phase. Since crops are grown during seasons, there is usually only a once-in-a-year chance to study the farming situation. This is stretching the learning time by years. The machines are prone to giving inaccurate results when their data set is shallow. However, as time goes, the machines are getting better and better. The data pool available for the machines is increasing. It already displayed immense accuracy in its premature phase. As the next decade replaces this one, we are looking at a very possible AI-led agricultural revolution.
Like what you read? visit humaneer.org/blog to see all the latest posts. | https://medium.com/humaneer/ai-is-enhancing-global-agriculture-2ea82b3f57a1 | ['Aravind Sanjeev'] | 2020-12-18 04:29:46.530000+00:00 | ['Agriculture', 'Future', 'AI', 'Technology', 'Artificial Intelligence'] |
2,448 | The world’s best smart cities don’t just adopt new technology: they make it work for people | Written by Arturo Bris, Professor of Finance, International Institute for Management Development (IMD)
Cities are fast becoming “smart”, and the impact on people’s lives can be immense. Singapore’s smart traffic cameras restrict traffic depending on volume, and ease the commute of thousands of passengers every day. In Kaunas, Lithuania, the cost of parking is automatically deducted from the bank accounts of drivers when they park their cars. In many cities, the timing of public buses is announced at each stop with almost perfect accuracy. And free WiFi is now accessible across entire cities, including Buenos Aires, Argentina and Ramallah, Palestine.
Today, improving urban services through digital transformation is a huge industry, dominated by the likes of Cisco and IBM. But the idea of a “smart city” encompasses more than the clever application of technology in urban areas. That technology must also contribute to making cities more sustainable, and improving the quality of life for the people who live there.
That’s why a team of researchers from IMD in Switzerland and SUTD in Singapore — including myself — put together the Smart City Index. For the first time, we attempted to assess people’s perceptions of technology — as opposed to the quality of the technology itself — as a way to characterise the “smartness” of a city. We did this by conducting a massive survey among citizens of 102 cities, to assess how favourably they viewed the technology made available to them.
Problems with perceptions
Take Paris, for instance — a city which has embarked on an ambitious project to redesign its urban landscape. The initiative — called Reinventer Paris — started by receiving suggestions from citizens about how to use and renovate obsolete and disused buildings. At the same time, the velib public bike-sharing program introduced about 14,000 bicycles into regular use throughout the city, with the aim of alleviating congestion and reducing pollution.
But more than five years after its introduction, citizens are still not feeling the benefits. Our smart city index ranks Paris 51st out of 102 cities in the world, in terms of the ability of the city’s technology to improve lives. Our participants from Paris gave their city a low score of 22 out of 100 — where zero indicates total disagreement and 100 signifies complete agreement — in response to the statement that “air pollution is not a problem”. By contrast, citizens of Zurich gave their city a score of 60 in response the same statement.
And although Reinventer Paris was specifically designed to be a bottom-up, participatory process, Parisians give a score of 36 out of 100 to the statement that “residents provide feedback on local government projects”. By comparison, the city of Auckland received a score of 71 from its residents, putting it in sixth place in the overall ranking.
The global picture
Only to the extent that digital technologies make a meaningful difference to people’s lives, can cities efficiently become smart. Our ranking puts Singapore, Zurich, Oslo, Geneva and Copenhagen in the top five, followed by Auckland, Taipei, Helsinki, Bilbao and Dusseldorf. Cities at the bottom of the ranking are all in developing economies or emerging markets, including Bogota, Cairo, Nairobi, Rabat and Lagos.
We were surprised to find that cities well known globally for their adoption of new technology did not make it to the top of the ranking. This was the case for several cities in China — which have received intensive investment from the Chinese government to increase their access to technology — including Nanjin (ranked 55), Guangzhou (57) and Shanghai (59). Likewise, Tokyo shows up in 62nd position, New York City in 38th and Tel Aviv in 46th place.
Smaller, smarter
Smart cities only make sense when technology meets citizens’ needs. A bike-sharing scheme will only seem useful if the city’s infrastructure facilitates cycling — and believe me, only the brave would dare cross Place Charles de Gaulle in Paris at noon on a bike.
At the same time, people recognise when technology solves a problem, because their lives get better. In an extensive study of 16 cities — published in our new book Sixteen Shades of Smart — we found that Medellin has become a very successful smart city because technology targets citizens’ main problem — safety. Similarly, without massive investment, public WiFi in Ramallah has done more for its people by providing them with access to the outside world in a walled city, than any air pollution monitoring system.
We have also found that large cities and megacities find it difficult to become smart. Most of the cities on the top of our ranking are mid-size cities. It is easy to extend the benefits of technology to people in San Francisco (ranked number 12 with a population of 884,000) and Bilbao (ninth, with a population of 350,000); but it is much more difficult to do the same in Los Angeles (35th, population of 4m) and Barcelona (48th, population of 5.5m).
There are 29 cities in the world with a population of more than 10m (including their metropolitan area), and that’s expected to grow to 43 by 2030. The differences between cities — even those in the same country — will continue to grow, as leaders seek out digital solutions to urban problems. But the real test will be whether citizens feel the benefits.
This article is republished from The Conversation. Read the original article.
More thought leadership | https://medium.com/digital-leaders-uk/the-worlds-best-smart-cities-don-t-just-adopt-new-technology-they-make-it-work-for-people-9b411e30d046 | ['Digital Leaders'] | 2020-03-05 11:49:37.123000+00:00 | ['Smart Cities', 'Connectivity', 'Technology', 'Digital'] |
2,449 | keyTango welcomes Alexander Morris as CTO | After working in the automation and digital marketing sectors for years, Alex found his passion in decentralised and distributed systems. He co-founded and lead technology at the Blockchain Institute Chicago, and produces content and systems aimed at increasing privacy and autonomy for individuals online.
Alex also serves as a Director in XYZ Technologies, which produces fully user tested MVPs and Prototypes for tech startups and intrapreneurs alike. Alex works with team members who have deep UX, blockchain, React, and React Native capabilities, as well as strong server engineering and deployment support. They take those prototypes and turn them into results. Alex excels in the role as a developer and creative thinker with a keen business acumen.
Alex has a technical background with a business oriented mind. He has been active in the blockchain tech community over the past few years and is passionate about keyTango's mission of providing users the easiest way to get started with DeFi.
Alex will join the rest team members of keyTango, who are crypto veterans, MIT and Ycombinator alumni. He will take lead on the technical development of our MVP.
Warm welcome to the team, Alex!
keyTango provides the easiest way to get started with DeFi. Our Web3 application made for those who are struggling with complex UI/UX that looks straight out of the bloomberg terminal, which acts as a frictionless gateway to popular DeFi products and services ready to be unraveled within a couple of clicks. Unlike YFI which takes some familiarity with deep DeFi as granted, we offer easy to grasp and navigate UI/UX, that empowers you with tailored content and suggestions based on your level of experience and your blockchain history.
Sign up for our beta: www.keytango.io
Telegram: https://t.me/keytango
Twitter: https://twitter.com/TangoKey
Youtube: https://www.youtube.com/channel/UCXSPmZ4BBT_QAA7w0NjKBIQ/ | https://medium.com/keytango/keytango-welcomes-alexander-morris-as-cto-b99be5771b34 | [] | 2020-12-31 02:18:43.481000+00:00 | ['Blockchain', 'Cryptocurrency', 'Defi', 'Technology', 'Blockchain Technology'] |
2,450 | by DAVE [S2E9] Episode 9 - Full Episode | ⭐A Target Package is short for Target Package of Information. It is a more specialized case of Intel Package of Information or Intel Package.
✌ THE STORY ✌
Its and Jeremy Camp (K.J. Apa) is a and aspiring musician who like only to honor his God through the energy of music. Leaving his Indiana home for the warmer climate of California and a college or university education, Jeremy soon comes Bookmark this site across one Melissa Heing
(Britt Robertson), a fellow university student that he takes notices in the audience at an area concert. Bookmark this site Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship as she fears it`ll create an awkward situation between Jeremy and their mutual friend, Jean-Luc (Nathan Parson), a fellow musician and who also has feeling for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving dating relationship. However, their youthful courtship Bookmark this sitewith the other person comes to a halt when life-threating news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremey’s love on her behalf and the couple eventually marries shortly thereafter. Howsoever, they soon find themselves walking an excellent line between a life together and suffering by her Bookmark this siteillness; with Jeremy questioning his faith in music, himself, and with God himself.
✌ STREAMING MEDIA ✌
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the procedure of delivering or obtaining media this way.[clarification needed] Streaming identifies the delivery approach to the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies especially to telecommunications networks, as almost all of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio tracks CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of this content. And users lacking compatible hardware or software systems may be unable to stream certain content.
Streaming is an alternative to file downloading, an activity in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user may use their media player to get started on playing digital video or digital sound content before the complete file has been transmitted. The term “streaming media” can connect with media other than video and audio, such as for example live closed captioning, ticker tape, and real-time text, which are considered “streaming text”.
This brings me around to discussing us, a film release of the Christian religio us faith-based . As almost customary, Hollywood usually generates two (maybe three) films of this variety movies within their yearly theatrical release lineup, with the releases usually being around spring us and / or fall respectfully. I didn’t hear much when this movie was initially aounced (probably got buried underneath all of the popular movies news on the newsfeed). My first actual glimpse of the movie was when the film’s movie trailer premiered, which looked somewhat interesting if you ask me. Yes, it looked the movie was goa be the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I COULD Only Imagine (a film that I did so like). Plus, the trailer for I Still Believe premiered for quite some us, so I continued seeing it most of us when I visited my local cinema. You can sort of say that it was a bit “engrained in my brain”. Thus, I was a lttle bit keen on seeing it. Fortunately, I was able to see it before the COVID-9 outbreak closed the movie theaters down (saw it during its opening night), but, because of work scheduling, I haven’t had the us to do my review for it…. as yet. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, us is a little too preachy and unbalanced within its narrative execution and character developments. The religious message is plainly there, but takes way too many detours and not focusing on certain aspects that weigh the feature’s presentation.
✌ TELEVISION SHOW AND HISTORY ✌
A tv set show (often simply Television show) is any content prBookmark this siteoduced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set set, excluding breaking news, advertisements, or trailers that are usually placed between shows. Tv shows are most often scheduled well ahead of The War with Grandpa and appearance on electronic guides or other TV listings.
A television show may also be called a tv set program (British EnBookmark this siteglish: programme), especially if it lacks a narrative structure. A tv set Movies is The War with Grandpaually released in episodes that follow a narrative, and so are The War with Grandpaually split into seasons (The War with Grandpa and Canada) or Movies (UK) — yearly or semiaual sets of new episodes. A show with a restricted number of episodes could be called a miniMBookmark this siteovies, serial, or limited Movies. A one-The War with Grandpa show may be called a “special”. A television film (“made-for-TV movie” or “televisioBookmark this siten movie”) is a film that is initially broadcast on television set rather than released in theaters or direct-to-video.
Television shows may very well be Bookmark this sitehey are broadcast in real The War with Grandpa (live), be recorded on home video or an electronic video recorder for later viewing, or be looked at on demand via a set-top box or streameBookmark this sited on the internet.
The first television set shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower starting in the. Televised events such as the 2020 Summer OlyBookmark this sitempics in Germany, the 2020 coronation of King George VI in the UK, and David Sarnoff’s famoThe War with Grandpa introduction at the 9 New York World’s Fair in the The War with Grandpa spurreBookmark this sited a rise in the medium, but World War II put a halt to development until after the war. The 2020 World Movies inspired many Americans to buy their first tv set and in 2020, the favorite radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr Television” and demonstrating that the medium was a well balanced, modern form of entertainment which could attract advertisers. The firsBookmBookmark this siteark this sitet national live tv broadcast in the The War with Grandpa took place on September 2, 2020 when President Harry Truman’s speech at the Japanese Peace Treaty Conference in SAN FRADAVE CO BAY AREA was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets.
✌ FINAL THOUGHTS ✌
The power of faith, love, and affinity for take center stage in Jeremy Camp’s life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life span and The War with Grandpas of Jeremy Camp’s life story; pin-pointing his early life along with his relationship Melissa Heing because they battle hardships and their enduring love for one another through difficult. While the movie’s intent and thematic message of a person’s faith through troublen is indeed palpable plus the likeable mThe War with Grandpaical performances, the film certainly strules to look for a cinematic footing in its execution, including a sluish pace, fragmented pieces, predicable plot beats, too preachy / cheesy dialogue moments, over utilized religion overtones, and mismanagement of many of its secondary /supporting characters. If you ask me, this movie was somewhere between okay and “meh”. It had been definitely a Christian faith-based movie endeavor Bookmark this web site (from begin to finish) and definitely had its moments, nonetheless it failed to resonate with me; struling to locate a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best as some should (nothing wrong with that), while others will not and dismiss it altogether. Whatever your stance on religion faith-based flicks, stands as more of a cautionary tale of sorts; demonstrating how a poignant and heartfelt story of real-life drama could be problematic when translating it to a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature.
FIND US:
✔️ https://onstream.club/tv/97084-2-9/dave.html
✔️ Instagram: https://instagram.com
✔️ Twitter: https://twitter.com
✔️ Facebook: https://www.facebook.com | https://medium.com/@dave-s2-e9-fxxs/dave-2x09-series-2-episode-9-full-episode-1a94c81b4d86 | ['Dave', 'Episode - Full Episode'] | 2021-08-05 01:11:28.330000+00:00 | ['Covid 19', 'Technology', 'Politics'] |
2,451 | AppExchange and the Salesforce Ecosystem | (This is Part 8 of a multi-part series by Passage Technology: Reinventing Your Business, Reimagining Your Salesforce — see Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, and 7. In other entries for this series, we’ll be going in-depth on topics such as how Salesforce Admins can employ new approaches and empower their business partners. In addition, how businesses can streamline operations and objectively evolve strategic decision-making, ensuring group buy-in and productive innovation. And finally how to execute efficient project planning in the Salesforce environment, to be agile while achieving your goals for reinvention.)
Project management touches just about every aspect of how organizations achieve their goals, and it can have a huge impact on costs. The State of Project Management 2020 reports that project management challenges cost businesses $109 million for every $1 billion invested in a project (roughly 11% of all project costs), and one in six IT projects has a cost overage of 200%.
One of the major factors impacting project management is communication. The report shows that, each year, poor communication can cost businesses with up to 100 employees approximately $420,000, and businesses with a staff of 100,000+ employees more than $62 million. And the possibility for costly miscommunications is increasing as the remote workforce expands.
The good news is that you can improve communication by establishing project timelines, goals, and responsibilities with project management team collaboration software. A Pulse of the Profession study from the Project Management Institute found businesses that have implemented a project management structure experience 38% more success with projects than those without a structure in place.*
Managing Projects in Salesforce
Are you onboarding new employees? Integrating systems? Launching a new website or opening a new office? Salesforce can track all of the individual inputs and data points that go into your project, and AppExchange apps can help extend Salesforce’s project management capabilities. Project management apps like Milestones PM+, available on the AppExchange, can help you transform your Salesforce org into a project execution platform.
As internal technology teams implement new business processes and enhance or integrate systems, they need to document their work and stay on track in terms of budgeting, scheduling, and transferring knowledge.
Has the desired functionality been documented? Do you have a testing plan to make sure that users don’t experience bugs or usability problems? Whether a technology project involves Salesforce project management, mobile app development project management, or creating a new customer community, each project will have milestones and tasks within them that need to be completed and tracked.
By creating a project in Salesforce for each IT project type using Milestones PM+ templates, you can set your team up for success to meet their goals while facilitating efficiency, collaboration, and ensuring a smooth transition for user adoption. Following are examples of how organizations used Milestones PM+ from Passage Technology to achieve their goals with technical projects.
Project challenge #1: integrating core business systems and processes on the Salesforce platform
One of Britain’s leading energy companies, nPower needed a tool for managing projects across multiple work streams to integrate all of their internal sales processes. They were also implementing their credit system in Salesforce, which included an integration with Experian and Attradius.
The configurability of Milestones PM+ allowed them to take a controlled, iterative approach to improving program governance. By helping them apply structure to their Salesforce development processes, the app enabled them to adjust their project management processes to accommodate output growth without the loss of performance or quality.
Project challenge #2: historical data and systems from multiple companies are integrated into one Salesforce org
Uniti Fiber is the fiber infrastructure segment of Uniti, a publicly traded Real Estate Investment Trust (REIT). As a result of expansion, the Systems Operations team at Uniti Fiber was balancing the challenge of handling an influx of feature requests from all departments while integrating new companies with multiple other systems. They had also tripled their number of Salesforce users in just three years to 600 and were in the process of converting to Lightning in their Salesforce org.
Understanding costs is critical when you build out fiber optics and Milestones PM+ helped Uniti Fiber track and aggregate costs for management. They also used Rollup Helper from Passage Technology, which is free on the AppExchange, to aggregate dollar amounts. With Rollup Helper and Milestones PM+, they were able to track costs from multiple departments, and understand the monthly recurring cost savings.
Conclusion
Using a Salesforce project management app like Milestones PM+ gives you the power to transform your Salesforce org into a project execution platform, whether you’re managing large-scale IT projects, opening new locations, onboarding new employees, or managing marketing projects.
To learn more about Passage Technology’s apps and services, Contact Us or visit Milestones PM+ and Rollup Helper overview pages.
Copyright 2021 — Passage Technology LLC — All Rights Reserved — Not for Distribution without Prior Written Approval by Passage Technology
Source: *
The State of Project Management in 2020 [42 Statistics], Saaslist, Business Software Advisor | https://medium.com/inside-the-salesforce-ecosystem/strengthening-it-project-management-to-maximize-efficiency-c039a2fbc2a3 | ['Carrie Brown'] | 2021-05-03 14:02:14.701000+00:00 | ['Apps', 'Technology', 'Appexchange', 'Salesforce Tools', 'Salesforce'] |
2,452 | Our Chillest Launch Yet. | We’re excited to share today that we’ve expanded our Eat offerings to include Frozen Eats, featuring sweet treats from Coolhaus, Chloe’s Pops, Marco, and Dalci, alongside savory foods from Chef Bombay, Nuggs, Cappello’s, and more.
Frozen Eats speaks to our mission of simplifying your life by bringing you the most coveted premium brands we know you will love — FastAF.
To celebrate the launch of the new category and National Ice Cream Day, we’re hitting the streets this weekend (Saturday, July 17 — Sunday, July 18) with the FastAF #ChillSummer Ice Cream Truck Tour. For a $50 flat fee, you can select from three curated assortment of goodies including That’s Bold (artisanal flavors), FastAF Faves, and No Moo (dairy-free, plant-based), all delivered to your door by one of our trucks. Don’t wait, as spots are limited.
You can also order curated and fantastically-themed sundae bundles this weekend via our normal delivery method through the FastAF app, including Vacation Vibes ($25), Coffee Obsession ($30), and the Ultimate Sundae ($35). Each bundle comes with two ice cream pints, toppings, and a kit of add-ons to indulge in.
Just download the app and shop Frozen Eats to enjoy in LA, NY, SF & Miami.
Continue to check the app for the latest Frozen Eats to keep you cool this summer! | https://medium.com/@fastaf/our-chillest-launch-yet-9180319c9c55 | [] | 2021-07-15 16:38:44.495000+00:00 | ['Delivery', 'Launch', 'Food', 'Technews', 'Technology'] |
2,453 | How to ETL with MongoDB and Postgres (Part 1) | Part 1, Learning the Lay of the Land
Getting Out of the Comfort Zone
It’s nice to have a comfort zone. Your comfort zone gives you a place to retreat to in times of trouble and uncertainty. It’s your safe haven and a spot you can go to when you need to think, reflect, and plan. But like most things, it also possesses not only a sunny yang side but also a darker yin aspect as well.
The dark side of your comfort zone comes into play when its used as a hiding place rather than as a retreat. Spending too much time there inhibits both spiritual and intellectual growth. Given the breadth, depth, and rate of change in the Javascript ecosystem, it is especially crucial for Web Developers to get out of the comfort zones as part of their learning strategy.
“Only those who will risk going too far can possibly find out how far one can go.” — T. S. Elliot
One way to extend your knowledge and build a foundation for future achievement is to set a stretch goal. Stretch goals embrace the concepts of difficulty as a way to increase your capabilities, and novelty to provide motivation. Doing something difficult is always more enjoyable and rewarding if it’s engaging and fun.
Setting Our Stretch Goal
The sheer volume required is one attribute of modern-day application software that sets it apart from applications developed in prior decades. Data visualization, machine learning, scientific, e-commerce, and front and back office applications all require massive amounts of data to deliver on their goal of adding user value.
A requirement for frontend developers and an imperative for backend developers is to understand the pitfalls and techniques associated with processing large amounts of information. This is true not only for the sake of performance, but also to achieve reliability, availability, and serviceability (RAS).
This project starts with the assumption that ingesting large amounts of data into an application is best accomplished by using a staging area to quickly capture, cleanse, and organize data before loading it into an operational database (like an SQL DBMS) for permanent storage. This stems from the impact of large amounts of information and the relationships between them have on performance and operational efficiency.
One solution is to develop an extraction, transformation, and load (ETL) process that adds the raw data to a staging area, like a MongoDB database, without regard to data quality or their relationships. Once in the staging area, data can be reviewed and cleansed before moving it to a permanent home such as a Postgres SQL database. This strategy can be implemented to encompass two distinct load processes — an initial one-time bulk load and a periodic load of new data.
Understanding the Raw Data Source and Format
Before starting its essential that we take the time to understand the raw data that’s to be staged, transformed, and loaded.
“Unreasonable haste is the direct road to error.” — Moliere
This project will use data in the Global Historical Climatology Network — Daily (GHCND) dataset made available by the U.S. National Oceanic and Atmospheric Administration (NOAA). This data was chosen due to its volume (28GB), rather than its content. The goal isn’t to use this data other than to explore techniques for efficiently processing large quantities of data.
The GHCND combines daily weather observations from 30 different sources encompassing 90,000 land-based stations across the globe into a single data source. The majority of these observations are precipitation measurements, but may also include daily maximum and minimum temperature, the temperature at the time of observation, snowfall and snow depth.
The format and relationships of the weather data format in the following files below are documented in the readme.txt file that accompanies them.
ghcnd-all.tar.gz : Daily observation files. Each file contains the measurements from a single observation station.
: Daily observation files. Each file contains the measurements from a single observation station. ghcnd-countries.txt : List of country codes (FIPS) and names
: List of country codes (FIPS) and names ghcnd-inventory.txt : File listing the periods of record for each station and element
: File listing the periods of record for each station and element ghcnd-stations.txt : List of stations and their metadata (e.g., coordinates)
: List of stations and their metadata (e.g., coordinates) ghcnd-states.txt : List of U.S. state and Canadian Province codes
Reviewing the structure and format of the raw data makes its possible to create an entity relationship diagram depicting the different groups of information and their relationships with one another. Keep in mind that at this point this is NOT a database design. It is merely a tool to understand the various data elements, their attributes, and relationships to one another.
Figure 1 — Entity Relationship Diagram
Daily observations contain an identifying number defined as an aggregate field made up of a country code, a network code identifying the numbering system used by the observing station, and the station identifier. This field is used to relate a station with its observations.
Figure 2 — Raw Weather Station Data
The year and month of the measurement, along with the element type are used to qualify each observation. Element types help to describe the object of the observation — precipitation, snowfall, temperature, etc.
Figure 3 — Raw Weather Observation Data
An array with 31 elements contains the observations, one for each day of the month. Observations are made up of the following components:
A number representing the day of the month
A measurement flag describes the measurement value (e.g., precipitation total formed from two 12-hour totals)
A quality flag defines whether the measurement was obtained successfully taken and if not, the error that was detected (e.g., failed duplicate check).
A source flag defining the origin of the observation (e.g., U.S. Automated Surface Observing System)
A value containing the measurement
The problem scope becomes apparent because weather observations are currently contained in over 108K unique files. This alone underscores the need for thoughtful analysis and design of a data load process.
Figure 4 — Weather Observation File Directory
It may or may not be evident that other than the daily weather observation files, the other files contain metadata describing various attributes of the observations and stations. For example, ghcnd-countries.txt defines the country codes and their corresponding names.
There is also data in the observations for which there is no formal definition. Specifically, the `readme.txt` file defines the values of the element type as well as those for the measurement, quality, and source flags, but there is no machine-readable definition of these codes.
Figure 5 — Measurement Flag Values
Approach
“Plans are of little importance, but planning is essential.” ― Winston Churchill
This project is divided into six high-level steps as shown below, each of these contains detailed tasks which must also be completed for the step to considered complete. As with any project, the result and lessons learned from a high-level step will be used to refine its successors.
Figure 6 — High-Level Steps
Following an Agile approach, detailed specification and design necessary for each step will be deferred until it is required. This prevents wasting time generating detailed plans for what we don’t yet know, or to resist change because it might alter “the plan.”
“The Phoenix Must Burn to Emerge”
Photo by Marcus Dall Col on Unsplash
Janet Fitch must have well understood the need to stretch the boundaries of her comfort zone to have said: “The phoenix must burn to emerge.” Making it a point to go beyond your boundaries is a necessary step in the ongoing process of personal development. Regardless of your chosen profession, whether a Wafer Machine Operator or Web Developer, it’s essential to use tools like stretch goals to increase your knowledge, broaden your experience, and to fail forward.
“Failure should be our teacher, not our undertaker. Failure is delay, not defeat. It is a temporary detour, not a dead end. Failure is something we can avoid only by saying nothing, doing nothing, and being nothing.” — Denis Waitley
In this article, we’ve set the stage for an ambitious ETL project dealing with large volumes of data, NoSQL, and SQL. From this starting point, there is no guarantee of success since there are quite a few unknowns and we’ll be using unfamiliar technologies and techniques. This journey will, however, be beneficial and making mistakes will be a welcomed part of the process, but what we learn as a result will help make us better Developers.
Up next — Part 2, Design & Set Up the Environment.
What is Chingu?
Come join us at Chingu.io to get out of “Tutorial Purgatory” by joining one of our remote Voyage project teams to build apps, refine “hard” skills, and learn new “soft” skills. We help Developers bridge the gap between what they’ve learned and the skills employers are looking for. | https://medium.com/chingu/how-to-etl-with-mongodb-and-postgres-part-1-ef8476f0b8b2 | ['Jim Medlock'] | 2020-06-29 16:32:56.026000+00:00 | ['Mongodb', 'JavaScript', 'Programming', 'Technology', 'Software Development'] |
2,454 | Pass CKAD smart not hard | After playing with Kubernetes for almost a year and a half, it motivated me to challenge myself and pursue the certification.
I have recently passed my CKAD (Certified Kubernetes Application Developer) Certification on the first attempt with 89 percentiles and would like to share my experience with other prospective developers/DevOps enthusiasts looking to pursue the path.
By now you may know that CKAD (and CKA) is an open book exam and you can use Kubernetes Documentation for referring the commands and yaml snippets. It challenges your practical knowledge of defining application resources and uses core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes. rather than testing for book written fundamentals. The exam is 120 minutes (2 hours) long and tests various features such as Pod design, configurations, and environment variable management, logs and observability, networking and services, and storage persistence. As it will be a race against time, awareness, and utilization of the resources is crucial. Here we will see a few of them as I used in my test.
Setup an alias
Setting up an alias is a gift when the clock is ticking faster than you expect hence as soon as you get your terminal up and running, set up the alias as below
alias k=kubectl
Tip: You may have to work on different servers/contexts hence you you will have to set the alias for all servers individually.
As often, you will play with the dry-run option to generate resource scaffolding, setting up an alias with that option would be a time saver too.
alias kdr="kubectl --dry-run=client -o yaml"
Use Short names
Believe me, you would thank yourself for mastering the short names of resources. Below is the list of short names as of version 1.19:
Kubectl API Resources with short names
kubectl api-resources will provide the up-to-date api resources details along with short names.
Imperative Commands
Keeping imperative commands handy is essential for not only CKAD/CKA certification exam but also to spin up Kubernetes resources quickly. Here are imperative commands for some widely used resources using kubectl (version ≥ 1.18):
Before you start, please note that kubectl version 1.19 onwards “kubectl run” command will only generate pod. To create all other resources, use “kubectl create” command.
Create a namespace
kubectl create ns dev-ns
Create a pod label
kubectl run busybox --image=busybox -l tier=webapp
Create a pod with advanced settings
kubectl run busybox --image=busybox --limits "cpu=200m,memory=512Mi" --requests "cpu=100m,memory=256Mi" --command -- sh -c "sleep 3600" -o yaml --dry-run=client
Update pod/container configuration/property
kubectl get pod busybox -o yaml > busybox.yaml
You can update/correct a container image, activeDeadlineSeconds or pod tolerations in running pod inline with kubectl edit pod command.
kubectl edit pod busybox
Create a deployment
kubectl create deployment redis-deployment --image=redis --replicas=2
Scale deployment to 5 replicas
kubectl scale deployment/redis-deployment --replicas=5
Update container image in the deployment
kubectl set image deployment.v1.app/redis-deployment redis=redis:alpine
Update container resources in the deployment
kubectl set resources deployment.v1.apps/redis-deployment -c=redis --limits=cpu=200m,memory=512Mi
You can easily edit any field/property of the POD template with edit deployment command as well. Since the pod template is a child of the deployment specification, with every change the deployment will automatically delete and create a new pod with the fresh changes.
kubectl edit deployment redis-deployment
Create a cronjob
kubectl create cronjob my-job --schedule="*/1 * * * *" --image=busy-box
Create a configMap
kubectl create configmap my-config-map --from-literal=APP_COLOR=green
Create a secret
kubectl create secret generic app-secret --from-literal=USERNAME=root --from-literal=PASSWORD=Test
This post alone does not pass the certification, but it will definitely help you complete tasks promptly. There are many resources, videos and posts are available on the internet including the official Kubernetes documentation, but the intention here is to share tips and tricks I used.
Happy kubectl-ing!!! | https://medium.com/@akshakp/pass-ckad-smart-def303ac8b4d | ['Ronak Patel'] | 2020-12-25 16:46:39.973000+00:00 | ['Cloud Services', 'Kubernetes', 'Cloud Native', 'Technology', 'Cloud'] |
2,455 | 1000+ Bookmarks Later: These Are The Top 5 Most Influential Articles I’ve Read In the Past 5 Years | Man has the internet taught me a lot.
Today I found myself digging through a time-capsule worth of bookmarks from the past 5 years in search for a master list of design resources requested by a co-worker.
In the process of all this digging, I came across countless articles I’ve since forgotten existed, yet at the time of reading were nothing short of mind-expanding for me.
It’s funny how learning works. When we really learn something, the lesson becomes part of who we are. But somewhere along the way, we tend to forget the source.
Perhaps this is where the myth of a self-made person comes from? In ourselves and others, all we ever see is the end-result; too easily forgetting all the people that’ve helped along the way.
As I look back on the 1000+ articles I’ve read and bookmarked over the past 5 years, I thought I would share the 5 that have been most influential on my thinking.
I’ve chosen these articles because, since reading each of them, I’ve experienced a distinct before/after in how I approach the given topic. And collectively, the lessons I’ve taken away have served as more of an education than school ever could:
1) On making big life decisions:
https://hbr.org/2013/11/stop-worrying-about-making-the-right-decision 2) On minimizing regret and living a good life:
https://www.linkedin.com/pulse/study-reveals-5-biggest-regrets-people-have-before-die-iwuoha/ 3) On asking for, and giving, advice:
https://cfe.umich.edu/no-advice-is-better-than-bad-advice/ 4) On the pursuit of mastery:
https://jamesclear.com/deliberate-practice 5) On making things people want:
https://blog.bufferapp.com/people-dont-buy-products-they-buy-better-versions-of-themselves
Here’s to feeling infinitely grateful to all the people we’ve never actually met, and yet thanks to the internet, have permanently changed the course of our lives 🙏 | https://medium.com/the-mission/1000-bookmarks-later-these-are-the-top-5-most-influential-articles-ive-read-in-the-past-5-years-2fd6174cb5db | ['Aj Goldstein'] | 2018-11-20 17:24:52.441000+00:00 | ['Technology', 'Education', 'Lessons Learned', 'Learning', 'Life'] |
2,456 | Robert Gherghe, Head of Communication Modex: Communicating tech is like inventing another language | Passionate about history, innovative technologies and writing good articles, our next guest in the #WeAreModex series of interviews believes that doing PR in the tech world is nicer than in other industries. If he had the chance, Robert would have liked to be a film director, but for the moment he is smoothly ‘directing’ Modex’s Communications department. Here’s what he has to say about blockchain, journalism, PR and… specialty coffee.
Tell us a bit about your studies: high-school, University, Master’s Degree.
I was born in a small, communist style town in the South-East of Romania, where I grew up and lived, like every young Romanian in search of happiness at the beginning of the 21st century. The funny thing is that I’ve graduated from a mathematics and economy-oriented high-school, and then I made a living from writing.
After high-school I was accepted at the University of Bucharest. I became a student at the Faculty of History, my first passion which, most probably, will stay with me for my entire life. When I was 5 years old, I started learning about Romanian and Universal History. So, for me it was quite easy to choose history in college, because at that time I thought I knew almost everything about this domain. A few months later, I discovered it that I knew nothing, so in a way it was very interesting for me to learn what history represents. I then took the Master’s Degree in History of Modernism, with a dissertation paper about Sexuality in the traditional Romanian society.
Find out more about Robert and his professional path. | https://blog.modex.tech/robert-gherghe-head-of-communication-modex-communicating-tech-is-like-creating-another-language-c913f8910b41 | [] | 2021-11-24 12:46:22.877000+00:00 | ['Communication', 'Blockchain', 'Blockchain Technology', 'Public Relations', 'Modex'] |
2,457 | Why Should Startups Make Their Priority Task To Use Blockchain? | The newly established business usually confronts issues in managing and recording the payment process. All such concerns can now be easily resolved by using Blockchain technology as it pushes startups and SMEs in the Digi finance world that ultimately help businesses know all small to big things related to finance.
In 2020, the worldwide Blockchain technology market size was evaluated at $3.67 billion, now anticipated to extend at a CAGR of 82.4% from 2021–2028.
Image Source: Grandviewresearch
By 2024 , it’s estimated that organizations will spend $20 billion per annum on Blockchain technical services .
, it’s estimated that will on . IBM Blockchain app developers are working on 500+ blockchain projects. The industries in which IBM is implementing Blockchain technology are banking, shipping, healthcare, and food safety.
Here, I have stated few factors that will help you know the effectiveness of using Blockchain in startups, and by doing so, you can boost your business growth.
6 Factors That Can Convince Startups To Use Blockchain
Whether it is a startup or large-scale business, security is one of the major concerns for all. With Blockchain, all the payment-related issues can be easily managed, which finally helps firms maintain a trustworthy relationship with clients. Below I have stated the few factors that will help you know the answer to the “Why Startups should make their priority task to use Blockchain” question.
Well, the implementation of Blockchain technology in software is not so simple as it seems. To ignore such issues and make the Blockchain process more manageable, you can avail of Blockchain development services from the top Blockchain development company.
1. Decentralized Services
Image Source: CB Insights
In simple words, Blockchain is defined as a decentralized technology. Decentralized services are the backbone as it offers startups unique access to the alternatives that are currently not accessible in the market. Due to Blockchain’s high security and encrypted nature, it is employed as the foundation for the world’s most famous cryptocurrency — Bitcoin.
2. Transparency
Another vital part of using Blockchain technology is transparency. Making use of Blockchain can help businesses know all moves related to payment. Moreover, Blockchain transparency also helps minimize the risk factor as it allows you to track all sorts of activity, which can ultimately help businesses find the culprit.
3. Digital Freedom
Image Source: Sentrifuge
Well, there are several activities that are not allowed to be performed using online banking apps, and this is the main reason why most people avoid using official mobile banking apps. But integrating Blockchain technology in the custom software allows you to perform multiple tasks, and it doesn’t restrict you from doing various payment activities, which ultimately grants you digital freedom.
4. High Security
Image Source: Tokens24
Well, there are several activities that are not allowed to be performed using online banking apps; this is the main reason why most people avoid using official mobile banking apps. But integrating Blockchain technology in the custom software allows you to perform multiple tasks, and it doesn’t restrict you from doing various payment activities, which ultimately grants you digital freedom.
5. Enhanced Efficiency
Blockchain technology offers end-to-end encryption (communications between receiver and sender) means there is no involvement of any third party in the process, which ultimately enhances the overall processes. Moreover, transactions also take a few seconds instead of a week to complete international transactions.
6. Inexpensive
Image Source: MH Imaging
As compared to other technology, Blockchain is affordable, which means for startups, this technology will be best to manage payment activities. The removal of centralized authority from the process helps in improving costing.
It is estimated that by 2026 Blockchain-based projects will add around $360 billion of value to enterprises. (Gartner)
These are the major reasons why startups and SME businesses should focus on using Blockchain technology. If you have any query related to Blockchain app development, just simply consult experts working in the Blockchain software development company. This will help you clear all your doubts and get adequate solutions.
Industry Verticals Using Blockchain Technology
The popularity of Blockchain is rapidly rising around the globe; because of that, now various industrial sectors are focusing on adopting Blockchain technology. Below view, which business sectors are using Blockchain.
Supply Chain Management
Image Source: Farm to Fork
Blockchain allows businesses to track transactions in real-time. Supply chain management is the sector where the exchange of goods is the crucial part, and due to this, multiple payment-related activities are performed per day. Retailers are finding Blockchain helpful as it allows them to keep a record of all transactions, minimize delays, and cut extra expenses; because of these reasons, the Supply Chain Management sector started using Blockchain technology.
Healthcare
Image Source: Serviceware SE
Blockchain allows patients to share health data with doctors; moreover, the technology ensures to keep your data secure; this is the main reason why the healthcare sector is looking forward to adopting Blockchain technology.
eCommerce
Image Source: Distributive Advertising
Well, we all know that most of the eCommerce sites allow you to make payments using various payment gateways, but all the time, it’s not secure. So to avoid security issues, eCommerce sectors are also preferring using Blockchain as it offers high security without involving any third party between retailer and buyer.
Banking
Image Source: Forbes
Blockchain is widely used in the Banking sector as it allows you to record all data offering high security. Banks prefer using Blockchain as it does not require any third party to make a transaction utilizing a cryptocurrency, like Bitcoin or others.
Famous Small to Large Scale Companies Using Blockchain
Here I have mentioned the names of popular startups, SMEs, and large enterprises using Blockchain technology.
Bank & Finance: BBVA, Barclays, HSBC, Visa
BBVA, Barclays, HSBC, Visa Healthcare: Pfizer, Change Healthcare, FDA
Pfizer, Change Healthcare, FDA Supply Chain Management: Walmart, Ford, Unilever, DB
Walmart, Ford, Unilever, DB Energy: Siemens, TenneT, CNE
Siemens, TenneT, CNE Real Estate: Westfield, JLL, Brookfield
Westfield, JLL, Brookfield Government: MAS, Government of Dubai, SEOUL
MAS, Government of Dubai, SEOUL Travel: Delta, British Airways, Singapore Airways
Delta, British Airways, Singapore Airways Trade: Bank of China, ANZ, SEB
Wrapping Up
Now you must have understood why startups should make their priority task to use Blockchain. Using this technology in your business process can help you out trace and record all payment-related activities and, moreover, help you boost productivity.
It has been seen that most of the big companies are investing in Blockchain technology as it is helping them get more advanced in the finance world. So if you want that your startup or SME business take part in the advancement race, then focus on the implementation of Blockchain. If in case you are having any sort of doubt or confronting difficulty in making use of Blockchain, then hire Blockchain developers in India working in the top-rate Blockchain software development company. | https://medium.com/predict/why-should-startups-make-their-priority-task-to-use-blockchain-77bfc633b7d4 | ['Emma Jhonson'] | 2021-04-24 21:07:35.722000+00:00 | ['Blockchain Application', 'Blockchain Technology', 'Blockchain Development', 'Blockchain', 'Blockchain Startup'] |
2,458 | My 5 Years in IT | My 5 Years in IT
In the second chapter, I discussed my membership in UDev Community, my experience doing university projects, and my graduation with a bachelor’s degree.
In this chapter, I will cover my beginnings in the master’s degree where I will talk about the courses I’ve enrolled in, my continuity with UDev, and how I’ve decided to focus more on Cisco certification.
After completing my bachelor’s degree, I carried on my student’s life by registering for a master’s degree. But first, I had to specialize in one of the three offered fields, which were Networking & Distributed Systems, Information & Data Systems, and Artificial Intelligence.
As I had already begun on my Networking path with Cisco certification courses, I chose Networking & Distributed Systems for the Master’s Degree. In the first semester, I’ve enrolled in some advanced courses like Data Analysis, Algorithms & Distributed Systems, Operational Research, Modeling & Simulation, Digital Signal Processing, and Advanced Database as it was in common with the students that have specialized in Information & Data Systems. Among these courses, we had three practical sessions, one in Algorithms & Distributed Systems in which we were programming a client-server application with Socket using Java language on a Linux system, one in Modeling & Simulation where we had to write programs that calculate probabilities using C language, and one in Advanced Databases where we were programming with PL/SQL.
During the weekends, I kept pursuing my Cisco certification training by starting the third level with a different instructor than the one we’ve been studying with for the first two levels. The new instructor’s methodology was based more on “learning by doing”, so during each lesson, we were practicing a lot by doing labs using GNS-3 and EVE-NG to have a great hands-on experience on advanced networking concepts like LAN Redundancy, Link Aggregation, multiple zones OSPF protocol, advanced EIGRP, and Wireless Local Area Network (WLAN).
EVE-NG Lab
A new season has started for UDev Community, a new organizational structure has been set-up and I remained in the same position as head of the Community Management department. We welcomed new talented members to the team, and I started to build and manage the Community Management team by teaching them the basics of writing formal emails and how to create content on social media.
New weekly activities have been introduced such as UTeach, where each member shared knowledge about a technology that was learned during the week. UThink, which is a problem-solving session and where members should think outside the box to solve logical thinking or coding challenges. UInterview, where all members get trained for a job interview with real-world questions. And UGames, where we were organizing an internal Escape Game so the members can play cooperatively to solve puzzles and accomplish tasks in order to progress and accomplish a specific goal in a limited amount of time.
Our graphic designer has organized an internal design contest for everyone who wanted to put into practice his/her creativity and imagination over a poster or a social media post using Canva.
As I’ve been experiencing with Canva, I wanted to give the contest a try to see how far I could go. It was divided into three phases, the first was to create a poster for our UGames activity under the theme “Treasure Hunt”, the second was to create an official Facebook cover picture for the club, and the third and last phase was to create a poster for a “Blood Donation” event for a charity club. The participant with the most collected points was declared the winner.
Treasure Hunt Poster
UDev Community Facebook Cover Picture
Blood Donation Event Facebook Post
By the end, I’ve been named the winner after successfully designing the Treasure Hunt poster, UDev’s Facebook cover picture, and collecting maximum points during the whole contest. And I’ve been awarded by the organizer and one of the founders of the club.
Second semester
In the second semester, the courses were more specialized in Networking, but they were not up to my expectations due to the lack of practice sessions in courses like Embedded Systems, in which we were programming with Arduino but we didn’t have the chance to code on a real Arduino microcontroller. Network Management, Parallel Programming, and Wireless Network courses were based on theory only.
Overall, most of the technical courses were based on Power-Point slideshows and PDF documents that had to be learned by heart to prepare for the exam.
The only course where we had good hands-on experience, was Network Security, where we were simulating passive attacks like Man-in-the-middle on a Linux based OS, capturing & analyzing packets using Wireshark, and to secure the network by using policies that allow or block traffic.
We’ve asked for more information about the Master’s Degree, and we were told that it’s a “Master of Research” degree, not a professional, which it’s designed to provide training in how to become a researcher. That’s why I’ve been thinking of what to focus on more, a master’s degree or professional certification?
After days of thinking, I chose to get hands-on experience to prepare for my career, and by that, I’ve decided to focus more on the certification path by starting the fourth and last level of the training with more advanced topics such as Wide Area Network (WAN), Virtual Private Network (VPN), BGP protocol, Quality of Service (QoS), Cloud Computing & Virtualization and Network Security concepts.
We carried on organizing events with UDev, and one day I came with the initiative to motivate the team to participate for the very first time in a national student exhibition, which is an opportunity for students or future students to discover new perspectives and be efficiently guided in the development of their career plans. We organized a Problem-Solving workshop under the name “Hack the Problem” to introduce participants to problem-solving and idea modeling, which are the core of IT.
Problem Solving Workshop “Hack the Problem”
And many other activities like coding challenges, Virtual Reality using Google Cardboard, solving cryptogram puzzles, and Arduino.
Cryptogram Puzzles “Crack the Code”
Arduino Activity
We also organized our annual event “UConf” which aims to introduce new technologies and share experiences with students through 15 minutes mini-talks. This time I haven’t participated as a speaker, but as a Community Manager, and managing the club’s social media pages and mailing. Under the theme “Do I.T Well”, many technical topics were discussed, such as 3D Modeling, Software Testing, Cryptocurrency, and GraphQL.
Software Testing talk by UDev’s President
Overall, the year was full of ups & downs and important decisions to take. But in the end, I was thankful for these kinds of experiences that we have to go through in life, so we cannot regret the decisions we’ve taken, and most importantly to make sure we’re going in the right direction.
In the next chapter, I will cover my preparation for the Cisco Certified Network Associate exam, the exam itself, and my beginnings in the job application process.
Bonus story
One night, I received a call from UDev’s president informing me that the club’s Facebook page and Gmail have been hacked. So far, we still don’t know how did it happen, maybe from a malicious link or a keystroke logging, but it was a critical situation that we had to handle together in a short time. We struggled to recover the accounts, but it was a good “mission” that in the end has been accomplished, and the funny part in this story, is that it was 3 A.M. I was sleeping, and the president who is at the same time my friend, was at a wedding. | https://medium.com/@ilyesbekaddour/my-5-years-in-it-4a4ce8cfd7ab | ['Ilyes Bekaddour'] | 2020-12-12 18:00:04.685000+00:00 | ['Experience', 'Blogging', 'Information Technology', 'Computer Science', 'Education'] |
2,459 | The Top Online Data Science Courses for 2019 | After over 80+ hours of watching course videos, doing quizzes and assignments, reading reviews on various aggregators and forums, I’ve narrowed down the best data science courses available to the list below.
TL;DR
The best data science courses:
Criteria
The selections here are geared more towards individuals getting started in data science, so I’ve filtered courses based on the following criteria:
The course goes over the entire data science process
The course uses popular open-source programming tools and libraries
The instructors cover the basic, most popular machine learning algorithms
The course has a good combination of theory and application
The course needs to either be on-demand or available every month or so
There’s hands-on assignments and projects
The instructors are engaging and personable
The course has excellent ratings — generally, greater than or equal to 4.5/5
There’s a lot more data science courses than when I first started this page four years ago, and so there needs to now be a substantial filter to determine which courses are the best. I hope you feel confident that the courses below are truly worth your time and effort, because it will take several months (or more) of learning and practice to be a data science practitioner.
In addition to the top general data science course picks, I have included a separate section for more specific data science interests, like Deep Learning, SQL, and other relevant topics. These are courses with a more specialized approach, and don’t cover the whole data science process, but they are still the top choices for that topic. These extra picks are good for supplementing before, after, and during the main courses.
Resources you should use when learning
When learning data science online it’s important to not only get an intuitive understanding of what you’re actually doing, but also to get sufficient practice using data science on unique problems.
In addition to the courses listed below, I would suggest reading two books:
Introduction to Statistical Learning — available for Free — one of the most widely recommended books for beginners in data science. Explains the fundamentals of machine learning and how everything works behind the scenes Applied Predictive Modeling — a breakdown of the entire modeling process on real-world datasets with incredibly useful tips each step of the way
These two textbooks are incredibly valuable and provide a much better foundation than just taking courses alone. The first book is incredibly effective at teaching the intuition behind much of the data science process, and if you are able to understand almost everything in there, then you’re more well off than most entry-level data scientists.
QUICK TIP
Use Video Speed Controller for Chrome to speed up any video. I usually choose between 1.5x — 2.5x speed depending on the content, and use the “s” (slow down) and “d” (speed up) key shortcuts that come with the extension.
Now to an overview and review of each course.
1. Data Science Specialization — JHU @ Coursera
This course series is one of the most enrolled in and highly rated course collections in this list. JHU did an incredible job with the balance of breadth and depth in the curriculum. One thing that’s included in this series that’s usually missing from many of data science courses is a complete section on statistics, which is the backbone to data science.
Overall, the Data Science specialization is an ideal mix of theory and application using the R programming language. As far as prerequisites go, you should have some programming experience (doesn’t have to be R) and you have a good understanding of Algebra. Previous knowledge of Linear Algebra and/or Calculus isn’t necessary, but it is helpful.
Price — Free or $49/month for certificate and graded materials
Provider — Johns Hopkins University
Curriculum:
The Data Scientist’s Toolbox R Programming Getting and Cleaning Data Exploratory Data Analysis Reproducible Research Statistical Inference Regression Models Practical Machine Learning Developing Data Products Data Science Capstone
If you’re rusty with statistics and/or want to learn more R first, check out the Statistics with R Specialization as well.
2. Introduction to Data Science — Metis
An extremely highly rated course — 4.9/5 on SwichUp and 4.8/5 on CourseReport — which is taught live by a data scientist from a top company. This is a six week long data science course that covers everything in the entire data science process, and it’s the only live online course in this list. Furthermore, not only will you get a certificate upon completion, but since this course also accredited, you’ll also receive continuing education units.
Two nights per week, you’ll join the instructor with other students to learn data science as if it was an online college course. Not only are you able to ask questions, but the instructor also spends extra time for office hours to further help those students that might be struggling.
Price — $750
The curriculum:
Computer Science, Statistics, Linear Algebra Short Course Exploratory Data Analysis and Visualization Data Modeling: Supervised/Unsupervised Learning and Model Evaluation Data Modeling: Feature Selection, Engineering, and Data Pipelines Data Modeling: Advanced Supervised/Unsupervised Learning Data Modeling: Advanced Model Evaluation and Data Pipelines | Presentations
For prerequisites, you’ll need to know Python, some linear algebra, and some basic statistics. If you need to work on any of these areas, Metis also has Beginner Python and Math for Data Science, a separate live online course just for learning the Python, Stats, Probability, Linear Algebra, and Calculus for data science.
3. Applied Data Science with Python Specialization — UMich @ Coursera
University of Michigan, who also launched an online data science Master’s degree, produce this fantastic specialization focused the applied side of data science. This means you’ll get a strong introduction to commonly used data science Python libraries, like matplotlib, pandas, nltk, scikit-learn, and networkx, and learn how to use them on real data.
This series doesn’t include the statistics needed for data science or the derivations of various machine learning algorithms, but does provide a comprehensive breakdown of how to use and evaluate those algorithms in Python. Because of this, I think this would be more appropriate for someone that already knows R and/or is learning the statistical concepts elsewhere.
If you’re rusty with statistics, consider the Statistics with Python Specialization first. You’ll learn many of the most important statistical skills needed for data science.
Price — Free or $49/month for certificate and graded materials
Provider — University of Michigan
Courses:
Introduction to Data Science in Python Applied Plotting, Charting & Data Representation in Python Applied Machine Learning in Python Applied Text Mining in Python Applied Social Network Analysis in Python
To take these courses, you’ll need to know some Python or programming in general, and there are actually a couple of great lectures in the first course dealing with some of the more advanced Python features you’ll need to process data effectively.
Dataquest is a fantastic resource on its own, but even if you take other courses on this list, Dataquest serves as a superb complement to your online learning.
Dataquest foregoes video lessons and instead teaches through an interactive textbook of sorts. Every topic in the data science track is accompanied by several in-browser, interactive coding steps that guide you through applying the exact topic you’re learning.
Video-based learning is more “passive” — it’s very easy to think you understand a concept after watching a 2-hour long video, only to freeze up when you actually have to put what you’ve learned in action. — Dataquest FAQ
To me, Dataquest stands out from the rest of the interactive platforms because the curriculum is very well organized, you get to learn by working on full-fledged data science projects, and there’s a super active and helpful Slack community where you can ask questions.
The platform has one main data science learning curriculum for Python:
Data Scientist In Python Path
This track currently contains 31 courses, which cover everything from the very basics of Python, to Statistics, to the math for Machine Learning, to Deep Learning, and more. The curriculum is constantly being improved and updated for a better learning experience.
Price — 1/3 of content is Free, $29/month for Basic, $49/month for Premium
Here’s a condensed version of the curriculum:
Python — Basic to Advanced Python data science libraries — Pandas, NumPy, Matplotlib, and more Visualization and Storytelling Effective data cleaning and exploratory data analysis Command line and Git for data science SQL — Basic to Advanced APIs and Web Scraping Probability and Statistics — Basic to Intermediate Math for Machine Learning — Linear Algebra and Calculus Machine Learning with Python — Regression, K-Means, Decision Trees, Deep Learning and more Natural Language Processing Spark and Map-Reduce
Additionally, there’s also entire data science projects scattered throughout the curriculum. Each project’s goal is to get you to apply everything you’ve learned up to that point and to get you familiar with what it’s like to work on an end-to-end data science strategy.
Lastly, if you’re more interested in learning data science with R, then definitely check out Dataquest’s new Data Analyst in R path. The Dataquest subscription gives you access to all paths on their platform, so you can learn R or Python (or both!).
5. Statistics and Data Science MicroMasters — MIT @ edX
MicroMasters from edX are advanced, graduate-level courses that carry real credits you can apply to a select number of graduate degrees. The inclusion of probability and statistics courses makes this series from MIT a very well-rounded curriculum for being able to understand data intuitively.
Due to its advanced nature, you should have experience with single and multivariate calculus, as well as Python programming. There isn’t any introduction to Python or R like in some of the other courses in this list, so before starting the ML portion, they recommend taking Introduction to Computer Science and Programming Using Python to get familiar with Python.
Price — Free or $1,350 for credential and graded materials
Provider — University of Michigan
Courses:
Probability — The Science of Uncertainty and Data Data Analysis in Social Science — Assessing Your Knowledge Fundamentals of Statistics Machine Learning with Python: from Linear Models to Deep Learning Capstone Exam in Statistics and Data Science
The ML course has several interesting projects you’ll work on, and at the end of the whole series you’ll focus on one exam to wrap everything up.
6. CS109 Data Science — Harvard
Screenshot from lecture: https://matterhorn.dce.harvard.edu/engage/player/watch.html?id=e15f221c-5275-4f7f-b486-759a7d483bc8
With a great mix of theory and application, this course from Harvard is one of the best for getting started as a beginner. It’s not on an interactive platform, like Coursera or edX, and doesn’t offer any sort of certification, but it’s definitely worth your time and it’s totally free.
Curriculum:
Web Scraping, Regular Expressions, Data Reshaping, Data Cleanup, Pandas
Exploratory Data Analysis
Pandas, SQL and the Grammar of Data
Statistical Models
Storytelling and Effective Communication
Bias and Regression
Classification, kNN, Cross Validation, Dimensionality Reduction, PCA, MDS
SVM, Evaluation, Decision Trees and Random Forests, Ensemble Methods, Best Practices
Recommendations, MapReduce, Spark
Bayes Theorem, Bayesian Methods, Text Data
Clustering
Effective Presentations
Experimental Design
Deep Networks
Building Data Science
Python is used in this course, and there’s many lectures going through the intricacies of the various data science libraries to work through real-world, interesting problems. This is one of the only data science courses around that actually touches on every part of the data science process.
7. Python for Data Science and Machine Learning Bootcamp — Udemy
Also available using R.
A very reasonably priced course for the value. The instructor does an outstanding job explaining the Python, visualization, and statistical learning concepts needed for all data science projects. A huge benefit to this course over other Udemy courses are the assignments. Throughout the course you’ll break away and work on Jupyter notebook workbooks to solidify your understanding, then the instructor follows up with a solutions video to thoroughly explain each part.
Curriculum:
Python Crash Course
Python for Data Analysis — Numpy, Pandas
Python for Data Visualization — Matplotlib, Seaborn, Plotly, Cufflinks, Geographic plotting
Data Capstone Project
Machine learning — Regression, kNN, Trees and Forests, SVM, K-Means, PCA
Recommender Systems
Natural Language Processing
Big Data and Spark
Neural Nets and Deep Learning
This course focuses more on the applied side, and one thing missing is a section on statistics. If you plan on taking this course it would be a good idea to pair it with a separate statistics and probability course as well.
An honorary mention goes out to another Udemy course: Data Science A-Z. I do like Data Science A-Z quite a bit due to its complete coverage, but since it uses other tools outside of the Python/R ecosystem, I don’t think it fits the criteria as well as Python for Data Science and Machine Learning Bootcamp.
Other top data science courses for specific skills
Deep Learning Specialization — Coursera
Created by Andrew Ng, maker of the famous Stanford Machine Learning course, this is one of the highest rated data science courses on the internet. This course series is for those interested in understanding and working with neural networks in Python.
SQL for Data Science — Coursera
Pair this with Mode Analytics SQL Tutorial for a very well-rounded introduction to SQL, an important and necessary skill for data science.
Mathematics for Machine Learning — Coursera
This is one of the most highly rated courses dedicated to the specific mathematics used in ML. Take this course if you’re uncomfortable with the linear algebra and calculus required for machine learning, and you’ll save some time over other, more generic math courses.
How to Win a Data Science Competition — Coursera
One of the courses in the Advanced Machine Learning Specialization. Even if you’re not looking to participate in data science competitions, this is still an excellent course for bringing together everything you’ve learned up to this point. This is more of an advanced course that teaches you the intuition behind why you should pick certain ML algorithms, and even goes over many of the algorithms that have been winning competitions lately.
Bayesian Statistics: From Concept to Data Analysis — Coursera
Bayesian, as opposed to Frequentist, statistics is an important subject to learn for data science. Many of us learned Frequentist statistics in college without even knowing it, and this course does a great job comparing and contrasting the two to make it easier to understand the Bayesian approach to data analysis.
Spark and Python for Big Data with PySpark — Udemy
From the same instructor as the Python for Data Science and Machine Learning Bootcamp in the list above, this course teaches you how to leverage Spark and Python to perform data analysis and machine learning on an AWS cluster. The instructor makes this course really fun and engaging by giving you mock consulting projects to work on, then going through a complete walkthrough of the solution.
Learning Guide
How to actually learn data science
When joining any of these courses you should make the same commitment to learning as you would towards a college course. One goal for learning data science online is to maximize mental discomfort. It’s easy to get caught in the habit of signing in to watch a few videos and feel like you’re learning, but you’re not really learning much unless it hurts your brain.
Vik Paruchuri (from Dataquest) produced this helpful video on how to approach learning data science effectively:
Essentially, it comes down to doing what you’re learning, i.e. when you take a course and learn a skill, apply it to a real project immediately. Working through real-world projects that you are genuinely interested in helps solidify your understanding and provides you with proof that you know what you’re doing.
One of the most uncomfortable things about learning data science online is that you never really know when you’ve learned enough. Unlike in a formal school environment, when learning online you don’t have many good barometers for success, like passing or failing tests or entire courses. Projects help remediate this by first showing you what you don’t know, and then serving as a record of knowledge when it’s done.
All in all, the project should be the main focus, and courses and books should supplement that.
When I first started learning data science and machine learning, I began (as a lot do) by trying to predict stocks. I found courses, books, and papers that taught the things I wanted to know, and then I applied them to my project as I was learning. I learned so much in a such short period of time that it seems like an improbable feat if laid out as a curriculum.
It turned out to be extremely powerful working on something I was passionate about. It was easy to work hard and learn nonstop because predicting the market was something I really wanted to accomplish.
Essential knowledge and skills
Source: Udacity
There’s a base skill set and level of knowledge that all data scientists must possess, regardless of what industry they’re in. For hard skills, you not only need to be proficient with the mathematics of data science, but you also need the skills and intuition to understand data.
The Mathematics you should be comfortable with:
Algebra
Statistics (Frequentist and Bayesian)
Probability
Linear Algebra
Basic calculus
Optimization
Furthermore, these are the basic programming skills you should be comfortable with:
Python or R,
SQL
Extracting data from various sources, like SQL databases, JSON, CSV, XML, and text files
Cleaning and transforming unstructured, messy data
Effective Data visualization
Machine learning — Regression, Clustering, kNN, SVM, Trees and Forests, Ensembles, Naive Bayes
Lastly, it’s not all about the hard skills; there’s also many soft skills that are extremely important and many of them aren’t taught in courses. These are:
Curiosity and creativity
Communication skills — speaking and presenting in front of groups, and being able to explain complex topics to non-technical team members
Problem solving — coming up with analytical solutions for business problems
Python vs. R
After going through the list you might have noticed that each course is dedicated to one language: Python or R. So which one should you learn?
Short answer: just learn Python, or learn both.
Python is an incredibly versatile language, and it has a huge amount of support in data science, machine learning, and statistics. Not only that, but you can also do things like build web apps, automate tasks, scrape the web, create GUIs, build a blockchain, and create games.
Because Python can do so many things, I think it should be the language you choose. Ultimately, it doesn’t matter that much which language you choose for data science since you’ll find many jobs looking for either. So why not pick the language that can do almost anything?
In the long run, though, I think learning R is also very useful since many statistics/ML textbooks use R for examples and exercises. In fact, both books I mentioned at the beginning use R, and unless someone translates everything to Python and posts it to Github, you won’t get the full benefit of the book. Once you learn Python, you’ll be able to learn R pretty easily.
Check out this StackExchange answer for a great breakdown of how the two languages differ in machine learning.
Are certificates worth it?
One big difference between Udemy and other platforms, like edX, Coursera, and Metis, is that the latter offer certificates upon completion and are usually taught by instructors from universities.
Some certificates, like those from edX and Metis, even carry continuing education credits. Other than that, many of the real benefits, like accessing graded homework and tests, are only accessible if you upgrade. If you need to stay motivated to complete the entire course, committing to a certificate also puts money on the line so you’ll be less likely to quit. I think there’s definitely personal value in certificates, but, unfortunately, not many employers value them that much.
Coursera and edX vs. Udemy
Udemy does not currently have a way to offer certificates, so I generally find Udemy courses to be good for more applied learning material, whereas Coursera and edX are usually better for theory and foundational material.
Whenever I’m looking for a course about a specific tool, whether it be Spark, Hadoop, Postgres, or Flask web apps, I tend to search Udemy first since the courses favor an actionable, applied approach. Conversely, when I need an intuitive understanding of a subject, like NLP, Deep Learning, or Bayesian Statistics, I’ll search edX and Coursera first.
Wrapping Up
Data science is vast, interesting, and rewarding field to study and be a part of. You’ll need many skills, a wide range of knowledge, and a passion for data to become an effective data scientist that companies want to hire, and it’ll take longer than the hyped up YouTube videos claim.
If you’re more interested in the machine learning side of data science, check out the Top 5 Machine Learning Courses for 2019 as a supplement to this article.
If you have any questions or suggestions, feel free to leave them in the comments below.
Thanks for reading and have fun learning!
Originally published at learndatasci.com. | https://medium.com/free-code-camp/top-7-online-data-science-courses-for-2019-e4afdc4693e7 | [] | 2019-05-02 20:18:44.339000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'Technology', 'Data Science', 'Programming'] |
2,460 | Python Venture Thoughts for 2020 — Work on constant tasks to start your profession | Python is the most utilized programming language on earth. Picking up Python information will be your best interest in 2020. In this way, on the off chance that you need to accomplish skills in Python than it is urgent to deal with some ongoing Python venture.
Only technical information or Knowledge of anything is of no utilization until or unless one switches to an ongoing project. In this article, We EdunBox.com is giving you Python venture thoughts from fledglings to cutting edge levels with the goal that you can achieve without much of a stretch learn Arcgis by for all intents and purposes actualizing your insight.
Venture-based learning is the most significant thing to improve your insight. That is the reason Edunbox.com is giving Python for GIS instructional exercises and Python ventures thoughts for novices, intermediates, just as, for specialists. Along these, everyone can likewise step up HIS/HER programming abilities.
Do You Know ??
According to Stackoverflow!
“Python is the most preferred language which means that the majority of developers use python.”
We will talk about 200+ Python venture thoughts in our up and coming articles. They arranged as:
Python Venture Thoughts
Python Django (Web Improvement) Venture Thoughts
Python Game Development Venture Thoughts
Python Machine learning Venture Thoughts
Python AI Venture Thoughts
Python Data Science Venture Thoughts
Python Deep Learning Venture Thoughts
Python Computer Vision Venture Thoughts
Python Internet Of Things Venture Thoughts
To Know More …… Just Click | https://medium.com/@harsh.s.edunbox/python-venture-thoughts-for-2020-work-on-constant-tasks-to-start-your-profession-6946b547959c | ['Harsh Sharma'] | 2020-02-18 05:52:36.017000+00:00 | ['Innovation', 'Python', 'Arcgis', 'GIS', 'Technology'] |
2,461 | The biggest source of uncertainty in autonomous vehicles is simply other traffic participants | Autonomous vehicles engineering is quite challenging due to the complexity of the system being created and the environment where the vehicles have to operate. An autonomous vehicle needs to create a representation of its surroundings, identifying for instance where the vehicle is in the road, traffic signs, static and dynamic entities in the road or on the road vicinity.
The vehicle can use different sensors to collect information about the environment so that it can use this information to take decisions. Perceiving the environment can be achieved by taking millions of distance measurements to any objects in the vicinity of the vehicle, for instance using LIDAR sensors, or capturing images around the vehicle, using image sensors (a.k.a. cameras). The captured information is processed in different ways and used by the vehicle to decide how much it should accelerate, brake, etc.
The most important aspect is that any mistakes in such measurements done in the perception may affect the behavior of the AV. Measuring anything always has certain uncertainty associated to it. The uncertainty can have several different causes, such as failing sensors, improper operating conditions, interference from other sensors, etc.
The problem of uncertainty is that it can directly affect many other components of the autonomous vehicle. For instance, a simple estimation of how far another car is from the AV can be greatly impacted by uncertainties in the perception. The bigger the error in estimating the position of other vehicles on the road, the bigger the chances of collisions.
There is vast literature studying how to how to improve the perception of an autonomous vehicle removing any associated uncertainties. We believe that measuring the distances between the autonomous vehicle and other entities surrounding it is a matter of improving its accuracy and precision.
However, when one is driving one has to take into account not only the static, dynamic objects, vehicles, etc. surrounding the autonomous vehicle but also the intentions of other road users. Basically, when driving we have to assume other drivers will behave in a certain predictable way. The perception system also needs to take into account the behavior of other road users. Due to the different nature of measuring where other road users, obstacles, etc are in the road versus predicting the behavior of other road users. It is possible to clearly distinguish two main types of uncertainty affecting the AV systems as in the Figure below.
The error in measurements of distances to other road users is known BEFORE the fact. The error in predicting the motion of other traffic participants is only know AFTER the fact.
We believe that the uncertainty originating from the Perception is measurable and its accuracy and precision is a matter of engineering. However, the uncertainty originating from the environment, that is the motion of other traffic participants, can not be measured, or predicted with a 100% accuracy, by definition.
Currently, there is no way to “get inside the brain” of another road user (e.g. a driver or a pedestrian) and know exactly what this person wants to do. Because of this the assessment of an autonomous vehicle prediction of the motion of other road users can only be made after the fact.
Because of this the assessment of an autonomous vehicle prediction of the motion of other road users can only be made after the fact.
Concluding
Other traffic participants, such as drivers, pedestrians, cyclists, etc. provide the biggest source of uncertainty ever to an autonomous vehicle. Autonomous vehicles only know about any prediction errors after the fact.
The biggest challenge then is to make safe decisions knowing that the autonomous vehicle have imperfect predictions.
Acknowledgments
Thank you Hoang Tung Dinh, Quentin de Clercq, and Alfredo D’Alava Jr. for contributing with discussions and feedback to this article. | https://medium.com/@mhct/the-biggest-source-of-uncertainty-in-autonomous-vehicles-is-simply-other-traffic-participants-631fa19f8fe6 | ['Mario H.C.T.'] | 2020-11-16 11:35:24.515000+00:00 | ['Autonomous Cars', 'Technology', 'Prediction Model'] |
2,462 | Connected | The gifts of The Glitter Storm
Photo by Mar Bustos on Unsplash
My granddaughters live in the next state over. It’s about 270 miles from my doorstep to theirs. Back in the day when planes flew — it was a thirty-nine-minute direct flight.
I would visit them every other month or so. There were always reasons. Holidays, birthdays, date nights for Mom and Dad, long weekends which demanded a retreat from my day-to-day, Thursday…it didn’t take much for me to hop a plane to their house.
Enter COVID. The day before I was due to go visit them. For a week.
To say we were all incredibly disappointed would be a severe understatement. But interestingly enough a new dynamic has formed in These Days while we wait out The Glitter Storm — as my daughter so gently explained viral transmission so that even the four-year-old knew the importance of a thorough hand-washing.
The Glitter Storm has provided us with an opportunity to connect daily via texts and more FaceTime. As all our worlds slowed to the pace of quarantine — we found ourselves reaching out to each other naturally. Daily.
Today I had texts from all three. This evening we had a FaceTime session to update each other on our afternoons and dinner experiences. All the while, joy was shared, hope was given, reassurances and love served out in heaping measures.
This steady contact is settling into a lovely routine for us. A touchstone that lets each of us know we are all ok. This is particularly important to The Magical Creatures.
But this isn’t just for little Humans. I also have realized my oldest sister has upped her texting game since COVID. Rarely a day goes by that I don’t hear from her as well.
Receiving these short, frequent, deliberate messages of love has become a vital part of my day. They have grounded me and connected me to the Humans I cherish.
The quiet of These Days have gifted me with meaningful conversations with Humans I love. It has given all of us a chance to share our day-to-day existence in real, if virtual, time. It has allowed us to lean on each other just enough.
May your silver lining be as precious. May love sustain you through whatever burdens you bear in These Days. May you and all those whom you cherish be safe.
Until at last, The Glitter Storm passes.
Namaste. | https://medium.com/crows-feet/connected-5eb068d30fdb | ['Ann Litts'] | 2020-04-22 16:00:04.645000+00:00 | ['Technology', 'Family', 'Self-awareness', 'Love', 'Grandmother'] |
2,463 | What is ZEXE? (Part II) | ZEXE Design Strategy
ZEXE is a scheme for privacy-preserving, decentralized applications such as decentralized exchanges. It is designed to utilize a ledger-based system, and support multiple functionalites. These include user-defined fungible assets, cross-application communication, and public auditability. And of course, all of these are to be achieved in a zero-knowledge manner.
In order to realize the above goals, ZEXE breaks from existing private blockchains in several ways.
ZEXE provides a shared execution environment where multiple applications interact on the same ledger. The content of a transaction is no longer restricted to transfers of value, but instead represents a more general data unit called a record. Moreover, users can define their own functions with associated predicates that stipulate conditions under which assets can be spent, without the need to request permission to do so. Rather than an on-chain execution environment, ZEXE opts for offline computations that generate transactions and attach to each transaction a zero-knowledge proof that attest to their correct execution. The result is a protocol for a new cryptographic primitive dubbed decentralized private computation (DPC).
Zerocash Transactions
The first real-world system which used commitment schemes and zero-knowledge proofs to provide privacy was Zerocash. A Zerocash transaction tx is an ‘operation’ that consumes old coin commitments as inputs, and outputs commitments of newly created coins together with a zero-knowledge proof which attests to correct transaction computations. See Fig. 1, below.
On the ledger, each transaction consists of:
The serial numbers of the consumed coins, { sn₁, sn₂, … , snₘ}, Commitments of the created coins, { com₁), com₂), … , comₙ) } , and A zero-knowledge proof π attesting to two facts,
(i) that the serial numbers consumed belong to coins created in the past (without identifying which ones, thus ensuring privacy for parties to a transaction),
(ii) that the commitments contain new coins of the same total value of coins consumed (ensuring the overall economic integrity of the system).
That’s how the Zerocash system uses zero-knowledge proofs to ensure once-off consumption of coins, and consequently prevent double-spending. See Figure-1 below.
Figure 1: Typical Zerocash Transaction
Note that indices of the coin commitments in Fig. 1 above are not repeated. The three new commitments are labelled 4, 5, and 6, and not 1, 2, and 3, in order to signify uniqueness of commitments. Similarly, every coin has a unique serial number sn, and no two coins can share a serial number.
A Zerocash transaction is therefore private because it only reveals how many coins were consumed and how many were created, but the value of the coins is kept secret. As previously mentioned, this provides data privacy. And, in the case of Zerocash being a single functionality protocol, function privacy is achieved by default because there is no need to distinguish one function from another.
Extending the Zerocash Computational Model
Zerocash was a breakthrough with regard to privacy for distributed ledger systems. But unfortunately, the scheme is limited in the functionality it provides. What if we wanted to do more than a simple private transfer of assets?
Take Ethereum, for instance, which supports thousands of separate ERC-20 “token” contracts, each representing a distinct currency on the Ethereum ledger. In handling all the various cross currency transactions, many function calls are involved and these are each embedded to specific applications. But since every application’s internal state is public, so is the history of function calls associated with each.
Even if each of these contracts would individually adopt a zero-knowledge protocol such as Zerocash to hide details about token payments, the corresponding transaction would still reveal which token was being exchanged. Consequently, although inputs and outputs of state transitions are hidden and thus achieve data privacy, the transition functions being executed are in the open. Thus, achieving function privacy in the model of Ethereum is not possible.
ZEXE was motivated by this exact problem. In ZEXE, the goal is not only to provide data privacy (as in Zerocash) but also functional privacy. So a passive observer of the blockchain wouldn’t know anything about the application being run, nor be able to identify the parties involved. Therefore, the ZEXE model can support rich applications such as private dApps, dark pools, and private stablecoins. The programming model also allows multiple applications to interact on the same ledger, as well as promoting user-defined functions in order to achieve a totally decentralized system.
The Verifier’s Dilemma
Another appealing attribute of ledger-based systems is auditability. Whether one is a regulator or new user of a blockchain, the ability to easily verify the veracity of historic transactions is crucial.
Unfortunately, many ledger-based systems achieve public auditability via direct verification of state transitions. And such a verification method of transactions regrettably involves re-execution of the associated computations. The problem with this method is that large computations take a long time to be completed, leaving the network prone to denial-of-service (DoS) attacks.
Early smart contract blockchains such as Ethereum addressed this problem through the mechanism of gas, making users pay for longer computations, acting as a deterrent against DoS attacks. The drawback with this approach is that verification is still expensive. Furthermore, unlike solving the Proof-of-Work puzzle to find the next block, verifying transactions isn’t profitable. This is the quandary known as the Verifier’s Dilemma. In the past, this problem has caused forks in prominent blockchains like Bitcoin and Ethereum.
Unlike other blockchains, program execution in ZEXE occurs off-chain. Furthermore, by using zk-SNARKs, verification of proofs is cheap for the on-chain miners or validators. Therefore, ZEXE is effectively a solution to the Verifier’s Dilemma.
Achieving Zero Knowledge Execution
We begin with Zerocash, a protocol designed for applications with single functionality, that is, a transfer of value within the same currency. Zcash is one example of a cryptocurrency system that uses the Zerocash protocol. It uses zero-knowledge proofs (zk-SNARKs) to achieve privacy. The goal of ZEXE is to extend this protocol beyond single applications to any arbitrary program.
Records as Data Units
The first step is a switch from coins to records as data units. That is, instead of just an integer value, a record stores some arbitrary data payload. So instead of a simple transfer of value as in Zerocash, ZEXE works with arbitrary functions , as long as they are known to everyone in advance.
This change enables ZEXE to support arbitrary programs. But what about privacy?
In the public’s eye, a transaction can again be imagined as an operation that consumes old record commitments, and outputs newly created record commitments together with a zero-knowledge proof.
The structure of a record is illustrated in Figure 2 below:
Figure 2: Typical Records Transaction
At creation of a record, its commitment is published on the ledger, and its serial number is published only after the record is consumed. This time, the zero-knowledge proof attests that applying the function on the old records produced the new records.
As in the Zerocash case, each transaction on the ledger consists of,
The serial numbers of the consumed records, { sn_(old₁), sn_(old₂), … , sn_(oldₘ) }, Commitments of the created records, { com_(new₁), com_(new₂), … , com_(newₙ) } , and A zero-knowledge proof π attesting to two facts,
(i) first, that the serial numbers belong to records created in the past (without disclosing the records),
(ii) second, that the commitments contain new records of the equivalent total value of records consumed.
Supporting Arbitrary Functions
The second step is to enable multiple users to freely define their own functions and allow the functions to interact without any rules. This could be achieved via the same approach outlined in the previous step. That is, by allowing a user to fix a single function Φ that is universal, and then interpret data payloads dpᵢ as user-defined functions Φ = dpᵢ that are provided as inputs to Φ.
Using zero-knowledge proofs as in Zerocash would ensure function privacy. However, merely allowing users to define their functions does not in itself result in any useful functionality overall.
Function privacy, in this scenario, creates problems because there is no way for users to determine whether a given record was created according to any given function in a legitimate way. And given the inevitable presence of malicious users, honest users are therefore not protected from all sorts of fraud.
This particular design approach, of unrestrained freedom in users freely defining functions, is of course an extreme. The lack of rules that govern how user-defined functions can interact is the very root of its failure. But can this idea be salvaged? The answer is yes, and we see how in the next section.
Using Tags to Identify Functions
The third step in this design journey is to introduce unique identifiers to each user-defined function, Φ. That is, we include a function identification-tag to each record and use the id-tag as a way to determine which function was used to create the record. This can be done in a zero-knowledge manner. Perhaps the id-tag is defined as a hash value of the function Φ evaluated at a given value vᵢ. That is, idᵢ = H(Φ (vᵢ)). And hence, each function Φ will have a unique id-tag idᵢ if the hash function is collision-resistant.
Since having ‘no rules’ was the root problem in the foregoing fraud-riddled approach, one rule can be enforced in this case. That is, only records with the same id-tag are allowed to cooperate in the same transaction.
Zero-knowledge proofs can guarantee that records participating in the same transaction indeed have the same function id-tag. This will guarantee that only records created by the same function participate in the same transaction.
Although this type of a system provides reasonable functionality, it suffers from a complete and total process segregation. As a result, even a simple coin swap cannot be achieved. But, it represents a step in the right direction.
In Part III, we’ll discuss how the records nano-kernel enables a new cryptographic primitive called decentralized private computation (DPC), an application-ready framework that any developer can use to build private applications. | https://medium.com/zeroknowledge/what-is-zexe-part-ii-bb24b560aebd | ['Anthony Matlala'] | 2021-06-18 21:42:40.451000+00:00 | ['Privacy', 'Blockchain', 'Zero Knowledge Proofs', 'Distributed Ledgers', 'Blockchain Technology'] |
2,464 | Why do we want to be leaders? | As far as I can remember, I have always wanted to be a leader but why. What is it about the idea of leading that many of us are drawn towards? Since starting my new job at a telecommunication start-up, my boss has made me question why I want to be a leader. Throughout primary school, high school, and university I have always been a leader of some sort. Whether it was for the student body, sports teams, or outside academia, I have always put my hand up to take on these leadership positions. Barak Obama, Nelson Mandela, and Jacinda Ardern are a few examples of incredible leadership but do you think they were born with it? Some may say yes and others might think they were self-made.
Photo by CoWomen on Unsplash
In 2013, women accounted for 8% of all national leaders and 2% of all presidential posts and to some extent, this is one of the reasons why I want to lead. To bridge the gender gap. A recent study by Diversity Work New Zealand shows that 62.2% of board members are male and 37.3% female. Another reason why I want to become a leader is the people. Being a part of someone’s journey and seeing their progress, there is no greater feeling.
After a bit of research, I have found the 4 main reasons why a lot of people want to be leaders in no particular order.
Money
From the USA today show, CEO’s could well enough earn as high as 486 times more than their employees. For a company with 4 organization levels, let’s say department, division, vice president, and president. The president would receive 2 x 2 x 2 x 2 = 16 times more than employees reporting to the department manager. When we look at it like this it does become significantly more drastic in terms of how much leaders can earn. Personally, I like to think of it as a bi-product of leadership and never the main reason to lead.
2. Power
Being in control of how a company is directed comes with many risks and challenges. You are responsible for not only people’s livelihood, their future, and their goals but also the success and failure of a company. The dangerous thing with power is that it can be easily misused and taken advantage of and I’m sure we have all being witness to this once before. Whether at work, school, in government, we all know how power plays an important role.
3. Prestige
People like titles. That’s pretty much it.
4. People
Simon Sinek, a motivational speaker and the author of Leaders eat last talks about what good leadership looks like. He re-iterates that “Great leaders create leaders” and I really resonate with that. The people are what makes or breaks a company. It is their dreams, hopes, and determination that lay the foundation of a great team.
Putting these 4 points aside, I decided to ask a few experienced and skilled leaders I know about what it means to lead and some of the challenges that go with it. Here is what they said.
Hadleigh Bognuda, CEO of ezyVet
ezyVet is a veterinary practice management software company based here in Auckland. A great local success story and a game-changer in the veterinary industry delivering a solution unlike any other. Talking with Hadleigh, he tells me that a great leader is someone who balances the empathy for their direct report with driving them to set, plan, and achieve their goals. He also goes on to say that finding a good balance between the ability to separate the day-to-day from the long-term strategic goals is difficult. Hadleigh leads a team of over 100 people to continuously optimize veterinary workflows and help veterinary clinics get the most value out of the application.
Rory Hancock, Co-Founder of Vidapp
I was also fortunate to be able to interview Rory Hancock, Co-Founder of Vidapp. Vidapp provides a seamless solution that lets you publish your online courses and subscription sites as your own native application.
Talking to Rory, he told me that one of the hardest things about being a leader is the blurred line between professional difficult conversations and supporting people at the workplace. Rory goes on to say that being empathetic and humanistic in a high-pressure and high-stress environment is incredibly important in being part of people’s growth and creating an environment where they can grow. As a leader, you must be there for your team regardless of how busy you are to lift them and forgo credit, paying it back to the team where it is deserved.
We have found a few reasons why people want to become leaders and have also talked to a few successful leaders but what does it mean to you? Are your reasons covered in this article? Leave a comment below and let me know your <why/>. | https://medium.com/@claudynnlee/why-do-we-want-to-be-leaders-266eb61bfaab | ['Claudynn Lee'] | 2020-12-21 11:11:40.051000+00:00 | ['Leadership', 'Motivation', 'Technology', 'Women In Tech', 'Coaching'] |
2,465 | Do salesforce have a future or not? | As per IDC (International Data Corporation) From the beginning of 2019 to the end of 2024, worldwide spending on public cloud computing will grow 19% a year , the Salesforce ecosystem in 2019 is more than four times larger than Salesforce itself. By 2024, it will be nearly six times bigger . and also Jobs created from the use of Salesforce cloud services from 2019 through 2024 are predicted to hit 4.2 million.
But do all these facts talk is real on ground level true ? actually yes they are . About business competition and threats that salesforce can face . salesforce is keen in devloping itself either by devloping itself or adopting other technologies. the zeel of continously devlopment in salesforce is not reducing, It isContinously devloping new interface (Lightning experience), new frameworks (LWC) , Dynamic Forms ,Einstein Reply Recommendations , introducing new clouds such as marketing cloud , It all display the vast sea of salesforce have andit display the potential to win the race among its competitors .
Moreover salesforce acquistions such as Tableau , Mulesoft ,Krux ,Mapanything ,Datorama display the solid buisness plan it have to provide all the solution for marketplace at one single place. | https://medium.datadriveninvestor.com/do-salesforce-have-a-future-or-not-1e88c6e3bc5d | ['Dilbag Singh'] | 2020-10-06 00:20:53.686000+00:00 | ['Jobs', 'Certification', 'Technology', 'Crm Software', 'Salesforce'] |
2,466 | Making Redux in Rust | Making Redux in Rust
Recreating one of GitHub’s most popular libraries in Rust
Photo by Felix Brendler on Unsplash
Rust is an amazing language, boasting speeds comparable to C, and abstraction comparable to high-level languages like C# or TypeScript. It piqued my interest a few years back — but that’s as far as I went with it (merely interested). I have a pretty solid (and fanatically functional) JavaScript background, but I have always felt like I was a front-end dev only by default. There is such high availability of classic web technology guidance at the tips of my googling fingers! I have done a bit of programmerly soul-searching of late in an attempt to find my niche and really build something. And since Rust can literally go anywhere (?) what better language to learn?
Thus, I’m investing my further learning in Rust. I can still boast some great JS skills that will likely be in demand for years to come, but I see Rust as a major player in the near future.
Anyway. My assumption is you, my reader, are in a similar boat. You’ve come from JS and love web technologies; you have heard about Rust and are hungry for more. Well, I’ve got you covered, because today I’m going to write something relatable to most JS devs — Redux in Rust!
If you’re not familiar with Redux, that’s OK too. Redux is a simple state management machine that
holds some state, updates that state based on pure functions known as reducers.
That’s the most basic version, anyway. It’s an implementation of a messaging system popularized by the people at Facebook known as a Flux. Redux is probably the simplest implementation of a Flux. We’ll do more in our example than just hold state and update it, though. We will introduce the ability to handle side effects apart from the pure functional state reducer and provide a way to back-trace the state history, too.
Side note: A pure function is one that doesn’t perform any side effects. That is to say, if you give it the same input, it will always produce the same output and will never modify outside state.
First, we’ll need some state to manipulate:
Start basic.
That’s quite nearly the simplest state we could be managing. This should suffice for our examples, though. Since we want our state changes to be immutable steps, we should probably also derive some traits here:
Add some free functionality with derive .
Copy and Clone let us pass our struct around by value on the stack. Default lets us initialize our type without a new function. PartialEq allows us to compare our struct with == and != . We’ll enjoy these traits’ features when we are implementing our state store.
The next thing we need to do is define a wrapper for our state. This wrapper is what is known as the Store. You can think of the Store as a unique reference to the state it contains. There will only ever exist one current copy of your state within the Store.
Defining a simple state reducing store.
…and that got complicated quickly. This requires some explanation. Store is a generic type with a few fields.
It wraps up our state, T , which can be any type at instantiation. It takes a trait object as its reducer field. What this field allows us to do is pass a function as the value for reducer . The Box is merely a pointer to a heap-allocated item, which our function must be since its size cannot be known at compile time. We expect this function to take the form Fn(T, Msg) -> T .
But hold on, what’s Msg ?
This will hold our message types.
For right now, that’s Msg . If you’re familiar with JavaScript’s Redux library, this is what you’d normally call an action. Msg will eventually expand to cover all the possible actions for our application — or at least all the actions that we will perform for this one piece of state. Now that we’ve got some of the pieces, let’s get some implementation going. We will be implementing functionality for the Store struct:
Constructing a store with idiomatic new.
We have added some trait bounds to our impl block. where designates that the following type parameters must implement the traits provided. The traits that must be implemented on our data structures are listed in addition to format. Remember the traits we derived on our state struct, MyState ? These trait bounds allow us to leverage the traits that we’ve supplied to our state so that the compiler will know that what we’re doing has prerequisites and is OK so long as those PRs are met. This lets us do things like call the generic parameter T ’s default() function — the compiler knows T must implement the Default trait. Let’s kind of ignore the reducer parameter of our constructor for now. We’ve already discussed that heap-allocated trait object situation a moment ago, and it will make more sense as we move forward and utilize these fields than it will to discuss exactly how it works in memory.
Let’s add some more functionality. At the core of the Flux pattern is the concept of a dispatcher. Though I could (and in a more advanced scenario, probably would) make the dispatcher its own structure, I’ll opt in favor of the Redux route and simply make dispatch a method of the Store.
The only way to update our state is via dispatch.
That’s a stupid-simple and ultimately naive solution, but let’s run with it for now — it should compile anyhow. The dispatch function takes a Msg type object. All we do is take the current state and reassign it to the result of our reducer function, passing in the current state and the dispatched Msg parameter. OK, but what about the reducer? We still haven’t tried to construct a Store yet, but when we do, we will need a reducer according to our new signature. First let’s define an action that represents a request to change the state:
An actual action.
We’ve added a variant to our Msg type. Now we have something to work with in our state reducer:
A basic reducer function.
It’s a really simple reducer to accompany our really simple state structure. A reducer typically behaves this way: It matches the message received, then returns a brand new state object based on the data in that received message. Our current messages are so simple that they don’t have any payload data, they are merely flags. Our reducer only needs to match a single case for now; in all other cases, we pass back the original state. Now we can try out our machine:
A store that does…nothing. Yet.
Great…but, uh…what can I do with it? Nothing! I can’t log my state, look at my state, or do practically anything at all except send out my increment messages into the proverbial void, hoping that they’re producing correct state objects!
This leads us to another big part of the Flux pattern: isolating side effects. A side effect in the computer science world is practically everything you want to do with an application. Logging in? Side-effecting. Saving data? Side-effecting. Web request? Side-effecting again. The only thing that isn’t side-effecting is a pure function — one that, given the same input, will produce the same output every time forever. Side effects are necessary bits of chaos that must be controlled!
Since side effects aren’t part of our pure state reducer and may change variables outside of the supplied parameters, it’s best to isolate them from pure code so that you can change or remove them easily. So, to handle these side effects, I’ve opted to take the route of supplying a hook for middleware-functions to perform side effects as a part of the dispatch mechanism. First we must add a vector of pointers to functions that take the form Fn(Msg) -> Msg .
Adding a new field to our Store.
I used a vector of functions because I want to allow for the use of multiple middleware functions on the Store. This will allow us to compose our side affecting functions later. Now we need to add some implementation to utilize our new field:
Middleware-capable.
There are two new functions to add to our impl block for Store. use_middleware registers a function to our middleware pipeline. middleware_strategy is a helper that we will call within dispatch (which I’ll describe momentarily). Since we can’t mutably borrow self again inside our first for loop, our strategy gathers the results (which are of type Msg ) in a Vec and then iterates through those, allowing us to borrow self again to dispatch each Msg . Now we just need to plug this middleware hook into our dispatch function:
Noop-terminating dispatcher.
I’ve added a helper function, is_empty , which takes a message and determines whether it is the Noop variant of Msg . I’ll explain why it’s important in a series of steps:
When our dispatch fires, we immediately start the middleware loop.
fires, we immediately start the middleware loop. Each middleware returns a new message and then dispatches that message.
dispatch fires and we start the middleware loop.
fires and we start the middleware loop. And now we’re looping infinitely.
So to circumvent the infinite looping, we must have some kind of non-propagating action that will always terminate the middleware loop by not triggering a new dispatch . That’s what our Noop variant will be most useful for here. Let’s write a middleware to log messages received by the Store.
Finally, some insight.
That should now log our Increment message twice. All right! We’re doing something “useful” with our Flux implementation. Notice how we must return the Noop message in logger_mw to stop the dispatch process.
Now that we understand how middlewares work, let’s write one that would be more useful in real life. I’m going to introduce a conditional message splitter by changing a few things. First we should think of the original dispatch message as a request, which could yield success or failure:
Splitting messages, part 1.
Now we dispatch the TryIncrement message, and we will have a middleware handle the condition. Let’s write our splitter middleware now:
Splitting messages, part 2.
We’ve introduced some arbitrary rules — if we call upon our state to Increment more than ten times, the Increment should fail. Now if we plug that in with use_middleware , it should split the messages. And if we update our logger:
Logging our new messages.
…then it should also tell us what message gets dispatched. If we call it more than ten times, it should log IncrementFailed messages instead,
Before we call this experiment done, we should implement a way to see the state’s history. Let’s add some pieces:
A few things are happening here:
We’ve added fields to represent the history as a Vec of the provided type T . We modify our non-terminal dispatch branch to include capturing the previous state and storing it in our history Vec . Finally, this is the reason we added PartialEq to our trait bounds — it allows us to compare the previous and next states using a simple != operator. Based on the result of that comparison, we can either determine the states were equal and not add to the state change history, or we determine that the states were inequal, subsequently adding the previous state to the history. We also need a function to retrieve the history. bactrace looks back as many steps as requested, returning a Vec containing up to steps state changes, provided enough changes have occurred.
I think that just about wraps it all up. We’ve learned how a Redux/Flux implementation works to split pure and impure portions of code. We learned how to make modular function middlewares to perform side effects. We implemented simple logging and state back-tracing. And we did it all in Rust!
Here’s a link to the Rust Playground.
Until the next episode of my Rust journey, FP on folks! | https://medium.com/better-programming/redux-in-rust-d622822085fe | [] | 2020-11-23 17:31:58.794000+00:00 | ['Rust', 'Programming', 'Software Development', 'Technology', 'Data Science'] |
2,467 | The Future of Social Media | Consuming our lives as we speak, social media is becoming a pivotal part of everyones lives wether people like it or not. It has many positive components and many negative ones as well. But Social Media is progressing so quickly, it can be hard to know what will come next
My prediction is that Social Media will be a main form of all communication. Although this is beginning, I believe it will be even larger than it is now. Cable is becoming obsolete and I believe news services will just stream all of their telecasts instead of having an actual station. We will get all of our news from Twitter, Instagram and Facebook but I hope that at some point someone creates an app that can compile all of our social media channels into one app that makes it seamless for us to not miss a thing. I can see Apple doing this and integrating it into their phone software in the future. The future of social media is changing every day with new updates constantly. I am excited to see where it takes our world. | https://medium.com/adpr-4300-sophia-cacioppo/the-future-of-social-media-9c2a59763e72 | ['Sophia Cacioppo'] | 2019-11-25 21:10:38.759000+00:00 | ['Instagram', 'Future Technology', 'Facebook', 'Social Media', 'Twitter'] |
2,468 | Three Stories Every Entrepreneur Should Know | Storyboard for Breaking Bad, and we all know how that ended. Image from Uproxx.
What would you say your company does? It seems like a simple question that should have a simple answer, but that’s rarely the case.
Here’s something I’ve done over my 20+ years as an entrepreneur to keep everyone focused on the tasks at hand while also keeping an eye on the future.
Entrepreneurs usually blur the lines of what their startup is, what it will be, and what it should be. This is fine until you try to start planning around those stories. At that point, you need to be asking: What are the priorities today and how do we execute on those priorities without mortgaging the future? The reverse question is just as important: How much time do we spend working on those new things that aren’t generating revenue yet?
The Three Story Rule
Every startup should have three stories, loosely related to the three arcs most storytellers use in episodic storytelling. An easy way to think about it is a television series. When you watch an episode of a TV show, the writers are usually working on three storylines:
Story A: Story with an arc that begins and ends in this episode (or maybe a two-parter).
Story B: Story with a longer arc that lasts a few episodes or more. This current episode will advance the plot of Story B in smaller increments, and maybe drop a twist in here or there.
Story C: Story with a much longer arc, maybe out to the end of the season or the end of the series itself. This current episode might not advance Story C at all, or it may just drop a few hints. At the end of the season or the series, you’ll be able to look back and piece Story C together, but that won’t be easy or even possible in real time.
Now let’s take that story strategy and apply it to your startup, and I’ll use my most recent startup as an example.
Story A: Right Now
Story A is the what your company is doing today that is generating revenue, building market share, and adding value to the company. Story A is about this fiscal quarter, this fiscal year, and next fiscal year.
At Automated Insights, Story A was our story for the first few years while we were known as Statsheet, a company that aggregated sports statistics and turned them into visualizations and automated content. This is how we made our money — either using our own data to generate content or using data like Yahoo Fantasy Football to generate Fantasy Matchup Recaps.
While we were breaking new ground in the arena of sports stats, we were one player in a sea of players, and while automating content from sports stats gave us a competitive advantage, sports was still a highly commoditized and difficult marketplace.
Story B: What’s Next
Story B is what’s going to open up new markets using new technologies or new products. Story B is about what you could do if the stars aligned properly or if you raised enough money for a longer runway, because Story B usually comes with a lot more risk for a lot more reward.
A few years into Statsheet, when we went to raise our Series A round, we pitched using our proprietary automated content engine on all kinds of data, generating machine-written stories in finance, marketing, weather, fitness, you name it. We changed our name to Automated Insights and pivoted completely with a $5 million raise.
That pivot came with a ton of risk. We had friends (and potential acquirers) in sports and we would now be making sports just a part of our story. In return, we would be one of the first players in the nascent Natural Language Generation (NLG) market, a pre-cursor to the “AI” market.
It was not a coincidence that the acronym for our new company name was also AI.
Story C: The Billion-Dollar Story
Story C usually involves a seismic change that disrupts existing markets, and as you can imagine, it’s a million times more difficult to pull off.
Uber and Lyft are on Story C. They’re no longer known as a better taxi or for solving a specific problem. They’re about creating a market in which a large portion of people can no longer live without them. In most urban areas, ride hailing services are now a necessity, as the ability they offer to do more things cheaply has made a major impact on lifestyle. There’s just no going back.
Story C was actually where my vision split from my former startup. I was focused more on real-time, plain-word insights generated from a mesh of private and public data, i.e. Alexa, Google Assistant, and Siri. The company was turning towards more of a B2B approach, first as a SaaS NLG tool, and then as a business intelligence tool.
No one was wrong here, but the latter was the direction the company took. So now I’m working on a new Story A at a new startup. And I’ve got Stories B and C in my purview.
So which story do you tell? Well, it depends on who you’re talking to.
For the press, for customers, and for potential employees, stick to Story A — if these folks aren’t jazzed about Story A, then you’re not spending enough time on Story A.
In fact, you should consider Story B and Story C to be intellectual property. It’s not the kind of thing you want to go too deeply into without an NDA or some protection in place.
For your board, your investors, and your employees, focus on Story A, of course, but also keep them aware of Story B and drop hints about Story C. Story B is where you’re headed next. It might be what you raise your next round on, or it may be your next big pivot. Story C is best kept in the distance until you’ve crushed Story A and made significant progress on Story B. It’s a goal, mainly, and you should just be making sure you’re not closing doors to it as you move forward.
Once you get your stories straight, then it’s just about execution. But come back to them often, every quarter or even every sprint, and make sure everyone is still on the same page. | https://jproco.medium.com/three-stories-every-entrepreneur-should-know-3476909629bb | ['Joe Procopio'] | 2019-01-04 19:56:32.109000+00:00 | ['Product Management', 'Business', 'Entrepreneurship', 'Startup', 'Technology'] |
2,469 | Increasing User Engagement With Minute Media’s Video Content Recirculation Tool | By David Schumann, Product Manager at Minute Media
To provide users with the best possible video experience, publishers need access to relevant and engaging content. Minute Media’s fully automated video recirculation tool allows publishing partners to boost their content offering, increase time spent on page and drive more revenue through converting existing stories into engaging videos. Find out how this tool works and how it can benefit your publishing business by reading below.
Automated Solutions
Minute Media’s recirculation tool automatically reformats existing editorial stories into engaging video experiences by converting an RSS feed into a templated video file. Our contextual algorithm will then match this video to relevant articles on your partner site, adding value to each page the video sits on. After a quick initial setup, this process is fully automated, providing publishers with an easy to use video solution.
Extending the Content Offering
Minute Media’s recirculation tool allows publishers to leverage their own written content to improve their video offering.
“Minute Media’s recirculation tool allows publishers to leverage their own content to improve their video offering.”
These videos are tailored to the publisher’s specific editorial offering and complement the article to add value to consumers. In addition, partners can leverage Minute Media’s video library containing well over 50,000 videos covering a wide range of topics, including news, entertainment, politics, sports, lifestyle and more.
Mockup of Video Recirculation Tool
Increasing User Engagement
This recirculation tool is a great way for publishers to generate awareness of trending stories and drive traffic to those specific pages. Since the user already expressed an interest in that particular topic, the recirculated video is highly relevant and results in an increase in user engagement and time spent on page. Ultimately, longer user sessions can translate to additional supply and monetization opportunities.
The Benefits
Our publishing partners have seen significant benefits using this recirculation tool: | https://medium.com/@minutemedia/increasing-user-engagement-with-minute-medias-video-content-recirculation-tool-a330d4ccbefa | ['Minute Media'] | 2020-08-18 20:27:55.119000+00:00 | ['Videos', 'Digital Media', 'Online Media', 'Technology', 'Publishing'] |
2,470 | S1,E2 || The Stand (Series 1, Episode 2) | ⭐ Watch The Stand Season 1 Episode 2 Full Episode, The Stand Season 1 Episode 2 Full Watch Free, The Stand Episode 2,The Stand CBS All Access, The Stand Eps. 2,The Stand ENG Sub, The Stand Season 1, The Stand Series 1,The Stand Episode 2, The Stand Season 1 Episode 2, The Stand Full Streaming, The Stand Download HD, The Stand All Subtitle, Watch The Stand Season 1 Episode 2 Full Episodes
Film, also called movie, motion picture or moving picture, is a visual art-form used to simulate experiences that communicate ideas, stories, perceptions, feelings, beauty, or atmosphere through the use of moving images. These images are generally accompanied by sound, and more rarely, other sensory stimulations.[2] The word “cinema”, short for cinematography, is ofCBS All Access used to refer to filmmaking and the film The Stand, and to the art form that is the result of it.
❏ STREAMING MEDIA ❏
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the process of delivering or obtaining media in this manner.[clarification needed] Streaming refers to the delivery method of the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies specifically to telecommunications networks, as most of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming conCBS All Accesst on the Internet. For example, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the conCBS All Accesst. And users lacking compatible hardware or software systems may be unable to stream certain conCBS All Accesst.
Live streaming is the delivery of Internet conCBS All Accesst in real-time much as live television broadcasts conCBS All Accesst over the airwaves via a television signal. Live internet streaming requires a form of source media (e.g. a video camera, an audio interface, screen capture software), an encoder to digitize the conCBS All Accesst, a media publisher, and a conCBS All Accesst delivery network to distribute and deliver the conCBS All Accesst. Live streaming does not need to be recorded at the origination point, although it frequently is.
Streaming is an alternative to file downloading, a process in which the end-user obtains the entire file for the conCBS All Accesst before watching or lisCBS All Accessing to it. Through streaming, an end-user can use their media player to start playing digital video or digital audio conCBS All Accesst before the entire file has been transmitted. The term “streaming media” can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered “streaming text”.
❏ COPYRIGHT CONCBS All AccessT ❏
Copyright is a type of intellectual property that gives its owner the exclusive right to make copies of a creative work, usually for a limited time.[2][2][2][2][2] The creative work may be in a literary, artistic, educational, or musical form. Copyright is inCBS All Accessded to protect the original expression of an idea in the form of a creative work, but not the idea itself.[2][2][2] A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.
Some jurisdictions require “fixing” copyrighted works in a tangible form. It is ofCBS All Access shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders.[citation needed][2][1][1][1] These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.[1]
Copyrights can be granted by public law and are in that case considered “territorial rights”. This means that copyrights granted by the law of a certain state, do not exCBS All Accessd beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works “cross” national borders or national rights are inconsisCBS All Accesst.[1]
Typically, the public law duration of a copyright expires 1 to 2 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities[2] to establishing copyright, others recognize copyright in any completed work, without a formal registration.
It is widely believed that copyrights are a must to foster cultural diversity and creativity. However, Parc argues that contrary to prevailing beliefs, imitation and copying do not restrict cultural creativity or diversity but in fact support them further. This argument has been supported by many examples such as Millet and Van Gogh, Picasso, Manet, and Monet, etc.[1]
❏ GOODS OF SERVICES ❏
Credit (from Latin credit, “(he/she/it) believes”) is the trust which allows one party to provide money or resources to another party wherein the second party does not reimburse the first party immediately (thereby generating a debt), but promises either to repay or return those resources (or other materials of equal value) at a later date.[2] In other words, credit is a method of making reciprocity formal, legally enforceable, and exCBS All Accesssible to a large group of unrelated people.
The resources provided may be financial (e.g. granting a loan), or they may consist of goods or services (e.g. consumer credit). Credit encompasses any form of deferred payment.[2] Credit is exCBS All Accessded by a creditor, also known as a lender, to a debtor, also known as a borrower.
‘The Stand’ Challenges Asian Americans in Hollywood to Overcome ‘Impossible Duality’ CBS All Accessween China, U.S.
CBS All Access’s live-action “The Stand” was supposed to be a huge win for under-represented groups in Hollywood. The $2 million-budgeted film is among the most expensive ever directed by a woman, and it features an all-Asian cast — a first for productions of such scale.
Despite well-inCBS All Accesstioned ambitions, however, the film has exposed the difficulties of representation in a world of complex geopolitics. CBS All Access primarily cast Asian rather than Asian American stars in lead roles to appeal to Chinese consumers, yet Chinese viewers rejected the movie as inauthentic and American. Then, politics ensnared the production as stars Liu Yifei, who plays The Stand, and Donnie Yen professed support for Hong Kong police during the brutal crackdown on protesters in 122. Later, CBS All Access issued “special thanks” in the credits to government bodies in China’s Xinjiang region that are directly involved in perpetrating major human rights abuses against the minority Uighur population.
“The Stand” inadverCBS All Accesstly reveals why it’s so difficult to create multicultural conCBS All Accesst with global appeal in 2020. It highlights the vast disconnect CBS All Accessween Asian Americans in Hollywood and Chinese nationals in China, as well as the exCBS All Accesst to which Hollywood fails to acknowledge the difference CBS All Accessween their aesthetics, tastes and politics. It also underscores the limits of the American conversation on representation in a global world.
In conversations with seThe Standl Asian-American creatives, Variety found that many feel caught CBS All Accessween fighting against underrepresentation in Hollywood and being accidentally complicit in China’s authoritarian politics, with no easy answers for how to deal with the moral questions “The Stand” poses.
“When do we care about representation versus fundamental civil rights? This is not a simple question,” says Bing Chen, co-founder of Gold House, a collective that mobilizes the Asian American community to help diverse films, including “The Stand,” achieve opening weekend box office success via its #GoldOpen movement. “An impossible duality faces us. We absolutely acknowledge the terrible and unacceptable nature of what’s going on over there [in China] politically, but we also understand what’s at stake on the The Stand side.”
The film leaves the Asian American community at “the intersection of choosing CBS All Accessween surface-level representation — faces that look like ours — versus values and other cultural nuances that don’t reflect ours,” says Lulu Wang, director of “The Farewell.”
In a business in which past box office success determines what future projects are bankrolled, those with their eyes squarely on the prize of increasing opportunities for Asian Americans say they feel a responsibility to support “The Stand” no matter what. That support is ofCBS All Access very personal amid the The Stand’s close-knit community of Asian Americans, where people don’t want to tear down the hard work of peers and The Stand.
Others say they wouldn’t have given CBS All Access their $1 if they’d known about the controversial end credits.
“‘The Stand’ is actually the first film where the Asian American community is really split,” says sociologist Nancy Wang Yuen, who examines racism in Hollywood. “For people who are more global and consume more global news, maybe they’re thinking, ‘We shouldn’t sell our soul in order to get affirmation from Hollywood.’ But we have this scarcity mentality.
“I felt like I couldn’t completely lambast ‘The Stand’ because I personally felt solidarity with the Asian American actors,” Yuen continues. “I wanted to see them do well. But at what cost?”
This scarcity mentality is particularly acute for Asian American actors, who find roles few and far CBS All Accessween. Lulu Wang notes that many “have built their career on a film like ‘The Stand’ and other crossovers, because they might not speak the native language — Japanese, Chinese, Korean or Hindi — to actually do a role overseas, but there’s no role being writCBS All Access for them in America.”
Certainly, the actors in “The Stand,” who have seen major career breakthroughs tainted by the film’s political backlash, feel this acutely. “You have to understand the tough position that we are in here as the cast, and that CBS All Access is in too,” says actor Chen Tang, who plays The Stand’s army buddy Yao.
There’s not much he can do except keep trying to nail the roles he lands in hopes of paving the way for others. “The more I can do great work, the more likely there’s going to be somebody like me [for kids to look at and say], ‘Maybe someday that could be me.’”
Part of the problem is that what’s happening in China feels very distant to Americans. “The Chinese-speaking market is impenetrable to people in the West; they don’t know what’s going on or what those people are saying,” says Daniel York Loh of British East Asians and South East Asians in Theatre and Screen (BEATS), a U.K. nonprofit seeking greater on-screen Asian representation.
York Loh offers a provocative comparison to illustrate the West’s milquetoast reaction to “The Stand” principal Liu’s pro-police comments. “The equivalent would be, say, someone like Emma Roberts going, ‘Yeah, the cops in Portland should beat those protesters.’ That would be huge — there’d be no getting around that.”
Some of the disconnect is understandable: With information overload at home, it’s hard to muster the energy to care about faraway problems. But part of it is a broader failure to grasp the real lack of overlap CBS All Accessween issues that matter to the mainland’s majority Han Chinese versus minority Chinese Americans. They may look similar, but they have been shaped in diametrically different political and social contexts.
“China’s nationalist pride is very different from the Asian American pride, which is one of overcoming racism and inequality. It’s hard for Chinese to relate to that,” Yuen says.
Beijing-born Wang points out she ofCBS All Access has more in common with first-generation Muslim Americans, Jamaican Americans or other immigrants than with Chinese nationals who’ve always lived in China and never left.
If the “The Stand” debacle has taught us anything, in a world where we’re still too quick to equate “American” with “white,” it’s that “we definitely have to separate out the Asian American perspective from the Asian one,” says Wang. “We have to separate race, nationality and culture. We have to talk about these things separately. True representation is about capturing specificities.”
She ran up against the The Stand’s inability to make these distinctions while creating “The Farewell.” Americans felt it was a Chinese film because of its subtitles, Chinese cast and location, while Chinese producers considered it an American film because it wasn’t fully Chinese. The endeavor to simply tell a personal family story became a “political fight to claim a space that doesn’t yet exist.”
In the search for authentic storytelling, “the key is to lean into the in-CBS All Accessweenness,” she said. “More and more, people won’t fit into these neat boxes, so in-CBS All Accessweenness is exactly what we need.”
However, it may prove harder for Chinese Americans to carve out a space for their “in-CBS All Accessweenness” than for other minority groups, given China’s growing economic clout.
Notes author and writer-producer Charles Yu, whose latest novel about Asian representation in Hollywood, “Interior Chinatown,” is a National Book Award finalist, “As Asian Americans continue on what I feel is a little bit of an island over here, the world is changing over in Asia; in some ways the center of gravity is shifting over there and away from here, economically and culturally.”
With the Chinese film market set to surpass the US as the world’s largest this year, the question thus arises: “Will the cumulative impact of Asian American audiences be such a small drop in the bucket compared to the China market that it’ll just be overwhelmed, in terms of what gets made or financed?”
As with “The Stand,” more parochial, American conversations on race will inevitably run up against other global issues as U.S. studios continue to target China. Some say Asian American creators should be prepared to meet The Stand by broadening their outlook.
“Most people in this The Stand think, ‘I’d love for there to be Hollywood-China co-productions if it meant a job for me. I believe in free speech, and censorship is terrible, but it’s not my battle. I just want to get my pilot sold,’” says actor-producer Brian Yang (“Hawaii Five-0,” “Linsanity”), who’s worked for more than a decade CBS All Accessween the two countries. “But the world’s getting smaller. Streamers make shows for the world now. For anyone that works in this business, it would behoove them to study and understand The Stands that are happening in and [among] other countries.”
Gold House’s Chen agrees. “We need to speak even more thoughtfully and try to understand how the world does not function as it does in our zip code,” he says. “We still have so much soft power coming from the U.S. What we say matters. This is not the problem and burden any of us as Asian Americans asked for, but this is on us, unfortunately. We just have to fight harder. And every step we take, we’re going to be right and we’re going to be wrong.”
☆ ALL ABOUT THE SERIES ☆
is the trust which allows one party to provide money or resources to another party wherein the second party does not reimburse the first party immediately (thereby generating a debt), but promises either to repay or return those resources (or other materials of equal value) at a later date.[2] In other words, credit is a method of making reciprocity formal, legally enforceable, and exCBS All Accesssible to a large group of unrelated people.
The resources provided may be financial (e.g. granting a loan), or they may consist of goods or services (e.g. consumer credit). Credit encompasses any form of deferred payment.[2] Credit is exCBS All Accessded by a creditor, also known as a lender, to a debtor, also known as a borrower.
‘Hausen’ Challenges Asian Americans in Hollywood to Overcome ‘Impossible Duality’ CBS All Accessween China, U.S. | https://medium.com/the-stand-2020-s1-e2-on-cbs-all-access/s1-e2-the-stand-series-1-episode-2-37f538179ede | ['Lydia Jordan'] | 2020-12-25 12:34:14.396000+00:00 | ['Technology', 'Lifestyle', 'Coronavirus', 'TV Series'] |
2,471 | Best IT Skills to Learn in 2021. | Best IT Skills to Learn in 2021 and Give Wings to your Career.
Looking to change fields and get into tech, but don’t know what skills you need to launch your career? Maximize your marketability by pursuing tech skills in demand for the future!
Regular or Continuous Learning is the most important thing, it will help you to keep updated with new skills and technologies and also help keep on track of the latest technology trends.
The world of technology is growing rapidly, and 2021 is going to see even more of a demand for IT skills and IT-related jobs.
It won’t be too long before technology starts to have another dramatic impact on the business world, and we are even starting to see the seeds of this being sown now.
As you know now, Information Technology (IT) is a very broad field and it keeps growing day-by-day, so there are lots of options you can take to start a technical career.
Companies know that in order to keep up with the latest in the field, they need to hire individuals with the relevant Technology Skills.
So, it is the right time to start your career in Information Technology, you can just start with basic technology skills that will help you to open doors to your career in technology which results in the increment of your salary as well.
If you are looking to switch your career to technology but don’t know which technology skill you should learn to launch your new career path.
Here are 15 in-demand technology skills you can learn and give wings to your career and start flying in 2021 and beyond.
Artificial Intelligence Machine Learning Cybersecurity Data Science and Analytics Cloud Computing Internet of Things (IoT) UI/UX Design Mobile Development Extended Reality (Virtual Reality — VR and Augmented Reality — AR) Blockchain Product Management Robotics Salesforce Quantum Computing General Programming Languages
You can check this list at www.digitalstock.co.in in detail along with detailed facts, statistics data, average salary for each skill set, future prospects of each technology, etc.
You can select any among the above listed skills set or technology which will suit you to quick start your technology career in 2021 and beyond. Having competency one or a few of them can set you on the right career path.
So investing your time and money in learning them can be extremely rewarding based on today’s market demands and dynamics. Plus you keep reaping dividends from them in years to come.
Happy Learning.!!! | https://medium.com/@anishasan/best-it-skills-to-learn-in-2021-8a2ddaefd981 | ['Anis Hasan'] | 2020-12-12 18:07:19.285000+00:00 | ['In Demand Skills', 'Best It Skills', 'Career Change', 'Technology'] |
2,472 | Expensive rides or cash-strapped drivers? Uber’s dilemma through the eyes of a NYTimes reporter. | Uber has dominated so much of the marketshare, do you think an alternative — not Lyft — can become viable?
Right now in the United States, it’s basically a ride-sharing duopoly—Lyft and Uber. It would be difficult for me to see a third entrant make real headway unless they offered some super different value proposition. I’m unclear on what that would be right now. Would it be, a ride service that treated drivers better?
…It’s hard because they’ve already broken into these cities so well. It took so much money for Uber and Lyft to break in.
Abroad, it’s a much different story. There are so many different competitors that dominate different markets. I would say Uber has a much harder battle overseas where it has to fight so many more battles.
Kalanick recently sold a half billion dollars worth of stock. What does that mean for Uber’s stock value?
You saw one major investor, Shawn Carolan at Menlo Ventures, he tweeted saying, ‘I believe in the long-term viability of this company and I’m not going to sell any shares tomorrow.’
Normally, executives that are at the top of the company or really involved in it like to say, ‘look, I’m not gonna sell. I believe in the long-term value of this company.’ Travis clearly doesn’t care about those appearances, whether he believes in the long-term viability of the company or not, he just sold a very large chunk of stock. This could worry investors he thinks the stock won’t go up in value.
What’s the road to profitability for Uber?
There are a few levers. They can stop burning cash on initiatives that are expensive, like Uber Eats, where they have to burn a lot of money to grow, in countries where they might not even be winning. Or at the end of the day, they increase how much they charge riders or how much they take out of the driver’s cuts. It’s unclear. I think they’ve slowly been taking more out of each ride from drivers. It’s been showing in their financial results. That’s only gonna do so much. Dara said last week we have a path to profitability in 2021, but it’s still hard for me to say what exactly that will look like, or how they will get there.
In short, your estimation is they’ll pass the buck to the driver or the consumer?
Yeah. Those are the big levers to making more money that they have. Either you pay more for the ride or the driver’s get less for the ride.
Terrible options either way. So why won’t Uber release their traffic data to cities? If Uber is doing good, why won’t they work with cities or counties to alleviate congestion?
I think they see it as competitive, proprietary data that other competitors of theirs in private markets could use. It’s a hard problem … At the same time, they can’t say they want to work with cities and pretend to be a friend to cities when they are changing the entire landscape of how transportation works. It’s a hard problem. I don’t envy them, but I wouldn’t say they are a friend to cities like they say they are.
What impact would you say they’ve had on the Bay Area, where you and they both reside?
Just in the last 10 years they’ve been here and I’ve lived here, traffic has increased a zillion fold. It totally changed how transportation works here and they haven’t really done anything to address that or handle it or mitigate some of the problems. I don’t know if they will. It’s probably fundamental to their business because San Francisco is one of the biggest markets they have in the U.S. They’ve reaped the benefits of changing transportation without dealing with the side effects that comes with traffic and people driving from all over the state just to work in San Francisco. So it’s a problem.
Will Uber keep growing or does it have to stop and deal with all the fires that have been surrounding it?
The speculation I keep hearing is they might need to pull out of some markets. They definitely pulled out of a few markets when Dara first came on board. Now they might pull out of some more, but it’s unclear. It depends on where they’re spending money and how much they’re spending and if they want to keep that up, or if they can’t do it anymore and have to retrench. Which is entirely possible.
Can women request female drivers yet?
I don’t believe that’s an option yet on the platform although that’s something many women have requested over time, concerning the sexual assaults on any ride sharing platform.
What’s the big takeaway from the years you put into this book?
What I really wanted to get at was that for years Silicon Valley has operated on culture that worships tech founders and tech companies blindly and gives them complete control of their companies. And that thesis is worth questioning. Uber, where you have the wrong person having outsized control and being worshipped by his staff, that can really have adverse effects. I’m not going to say they are a bad or evil company or [that Travis is] a bad person — it’s more of a book about the side effects of this crazy growth and unbridled capitalism in Silicon Valley, something we should be more aware of in a time where tech now has to be accountable for what they do.
So accountability is the big takeaway.
I think that’s right. | https://thesixfifty.com/expensive-rides-or-cash-strapped-drivers-ubers-dilemma-through-the-eyes-of-a-nytimes-reporter-6c3f09b64ac4 | ['D.A. Mission'] | 2019-11-18 21:17:22.339000+00:00 | ['San Francisco', 'Ridesharing', 'Technology News', 'Silicon Valley', 'Uber'] |
2,473 | BLOCKCHAIN BUILDING BLOCKS IN CLEVELAND WITH BERNIE MORENO | Bernie Moreno on the Speaking of Crypto podcast
“Blockland is a play on the name Cleveland and the idea is that by putting what we identified as the ten critical ingredients to make Cleveland one of the top five most relevant tech cities in the United States that we would do all these 10 things at the exact same time. So, we created ten nodes, everywhere from talent development retention to the entrepreneurial environment to the legal system, the political environment, philanthropy’s engaged, a node that we call place, which is to create the largest tech centre in the world, modelled after Station F in Paris right in downtown Cleveland. How do we build some business applications around what we’re doing already? How do we have thought leadership, so that’s where our conference comes from. And virtually every single day in Cleveland right now there’s a meeting somewhere in Cleveland around blockchain.”
President, Bernie Moreno Companies
From owning and operating 21 car dealerships to launching a city-wide epic-sized blockchain initiative, Bernie Moreno is putting Cleveland or ‘Blockland’ on the map.
https://blocklandcleveland.com/
https://twitter.com/berniemoreno
He wants to make Cleveland a blockchain tech hub, complete with the largest tech centre in the world that would be an innovation incubator with a K-12 school on campus built right in downtown Cleveland.
And this isn’t just one person’s big idea. There’s a plan. Leaders in business and government and members of the community have gathered to discuss the ten nodes in place to help make this Ohio city one of the top five tech cities in the United States.
And while some people looking in from the outside may ask why blockchain and why Cleveland. Well when someone has the kind of drive and enthusiasm for something like this, that Bernie clearly does about blockchain tech and Blockland, it’s contagious.
It started when his son asked him to invest in Bitcoin, which he didn’t, but he did get looking into blockchain technology. And after investing in Votem, he wanted to start his own blockchain companies. But when he realized that Cleveland didn’t have the ideal ingredients to do what he wanted to do, instead of going to another city that had all the tech he could ever want, he decided to stay and help bring all the tech he could ever want to Cleveland.
https://votem.com/
We also talked about the future of cars. Bernie said, “the car changed the world in the turn of the century and the car is going to change the world again.” Not only will they be dramatically safer, but it will change things like people young and old being able to hop in a car with no need for a licence. Insurance companies will have to insure something besides accidents because cars won’t hit each other, and whole city blocks will change because we won’t need huge parking lots anymore.
Bernie also believes that in 15 years there will be more self-driving cars than human driving cars, and in 25 years human driving cars won’t be found on public highways. They’ll be on race tracks for sport or other weekend events.
He talked about where we’re at with blockchain tech right now like the early days of the dot com boom when everyone was starting to get a website, but all you’d see when you got online was a page that said, welcome. And one of the ways he’s getting into the space and helping to legitimize it while also being on the leading edge is to accept Bitcoin at his car dealerships.
Blockland Cleveland is also hosting a conference focussed on government and business applications in order to explore real world use cases for blockchain technology.
Some of the use cases Bernie sees coming soon are birth certificates, land records, real estate transactions, car titles, mobile voting, providence, supply chain, medical records, drug tracking, and as he explains, just like in the early days of the internet, no one could predict where it would lead us, we’re in that same place with blockchain tech now. | https://medium.com/speaking-of-crypto/031-blockchain-building-blocks-in-cleveland-with-bernie-moreno-86fc27d5736 | ['Shannon Grinnell'] | 2018-11-11 21:09:53.502000+00:00 | ['Technology', 'Blockland', 'Blockchain Technology', 'Blockchain', 'Cleveland'] |
2,474 | Best Blockchain Platforms to Look For in 2021 | Blockchain Technology is a mainstream technology nowadays. Be it Logistics, Healthcare, Supply Chain, FinTech, or Legal; Blockchain has applications in every industry. Blockchain apps are perfect for making business processes more efficient and seamless. Other notable benefits:
Higher Transparency
Improved Traceability
Enhanced Security
Increased Reliability
The global blockchain market size will grow at a CAGR of 67.3% during 2020–2025
The survey clearly states that blockchain platforms are ever-growing and will continue to evolve. So, here is a list of the top 10 blockchain platforms to explore in 2021. These platforms will make organizations more efficient & transparent in their business ecosystem.
1. Tezos (XTZ)
Tezos is an open-source and decentralized blockchain network. It allows you to deploy smart contracts and execute peer-to-peer transactions. Its architecture is quite impressive, and the upgrade mechanism is also handy. You can use it to facilitate the formal verification process.
Being a smart contract and dApp platform, Tezos provides safety and code correctness.
What’s different is Tezos? Its self-amending cryptographic mechanism keeps evolving on its own.
Features of Tezos Platform
2. Stellar
Stellar is an open-source blockchain network and a payment protocol. You can use it to create, trade, and send a digital representation of all money forms. This public-owned platform can handle millions of transactions every day.
It enables fast, cross-border transactions between multiple pairs of currencies. Stellar depends entirely on the Blockchain to keep the network in sync. Over 69% of banks use blockchain technology to make their services secure, transparent, and seamless.
What’s different in Stellar? It is one of the most rapid and scalable blockchain platforms that build robust, secure, and reliable FinTech apps, tokens, and various other financial/digital assets. With a stellar platform, you can store and move your money quickly, reliably, and cost-effectively.
Features of Stellar Platform
3. Hyperledger Fabric
Hyperledger Fabric is a modular Blockchain framework. It is the foundation for building blockchain solutions, products, and apps using plug-and-play elements. Hyperledger Fabrics includes a broad range of flexible and adaptable designs that satisfies various industrial use cases.
Why is Hyperledger Fabric different? It has a Channels’ feature. This feature provides a secure and scalable platform that supports confidential contracts, private transactions, and other sensitive data. Another distinct feature is the enablement of a network of networks. It enables members of the Fabric network to work together.
Features of Hyperledger Fabric
4. Hyperledger Sawtooth
Hyperledger Sawtooth offers a modular and flexible architecture. Sawtooth separates the core system and application domain. It helps you create and operate distributed ledger apps and networks for a specific usage by enterprises. It aims to secure distributed ledgers and smart contracts.
Why is Hyperledger Sawtooth different? This platform streamlines the blockchain app development and related operations. It also supports both the permission and permissionless blockchain networks and different consensus algorithms, including Practical Byzantine Fault Tolerance (PBFT) and Proof of Elapsed Time (PoET).
Features of Hyperledger Sawtooth
5. EOS
EOS is a blockchain-based, decentralized platform. It enables the development, hosting, and execution of commercial-scale dApps. Further, it provides smart contracts capability, decentralized storage of business solutions. EOS also solves the scalability issues faced by Bitcoin and Ethereum.
Why is the EOS platform different? EOS is the most potent infrastructure of dApps. It eliminates the user fees and provides ownership of the network to the user. Also, EOS has a dedicated community, “EOS Forum,” for developers and investors.
Features of EOS Platform
6. OpenChain
For enterprises who are looking to manage and issue digital assets in a secure and scalable manner, OpenChain is your true mate. It is an open-source distributed ledger technology that hosts a permissible chain of transactions by modifying an internal key-value store. OpenChain offers mixed APIs for building secured crypto apps.
Why is OpenChain Platform different? Unlike other bitcoin enabled systems, OpenChain uses Partitioned Consensus. It means every OpenChain instance can have a single authority validating transactions. Also, it uses a client-server architecture that is more secure and reliable than peer-to-peer architecture.
Features of OpenChain Platform
7. Corda
Corda is an open-source blockchain platform that enables next-gen transparency, efficiency, and robustness in business. With Corda, organizations can transact directly and privately with smart contracts.
Why is Corda different? It allows you to build interoperable blockchain networks that transact in a strict private environment. Corda includes smart contract technology that empowers businesses to transact directly with great value. Also, it decreases the additional record-keeping and transactional costs.
Features of Corda Platform
8. Tron (TRX)
Tron is a decentralized blockchain platform. It aims to develop a free and global content entertainment system that enables seamless and reliable sharing of content digitally. Tron can manage 2000+ transactions per second at a zero fee. It relies on the Delegated-Proof-of-Stake consensus mechanism (DPoS) to secure the Blockchain.
Why is Tron different? Tron provides a medium for creators to share their content directly with the users, discarding the middlemen. Also, the original data always remains with the creators making the process extremely secure and protected.
Features of Tron Platform
9. Hedera Hashgraph
Hedera Hashgraph is a fair, fast, and highly secure public network. It doesn’t involve computation of a heavy proof of work algorithm. You can use it to design and develop salable & innovative decentralized apps. Hashgraph supports dApps. It depends on an asynchronous Byzantine-Fault Tolerance (aBFT) consensus mechanism.
Why is Hedera Hashgraph different? Hedera is the same as every other public blockchain network, but the difference lies in its fast, fair, secure, and unique mechanism. Hedera offers much better and reliable algorithms for higher transparency.
Features of Hedera Hashgraph
10. Ethereum
Ethereum is an open-source, decentralized, and leading Blockchain platform with native cryptocurrency with Ether or ETH. It enables the development and execution of Smart Contracts and Distributed Apps without any downtime, data threats, or third-party involvement. You can use it to eliminate internet third-parties who save & track data and financial instruments.
Ethereum is the developer’s favorite as they are used to make new and featured applications.
Why is Ethereum different? It is open to everyone; you require a wallet to take part. Also, Ethereum is a ledger technology that enterprises are using to build and run new programs.
Features of Ethereum Platform
These are the top 10 Blockchain-based platforms that will rule in 2021 and beyond. Next question you have: ‘How do I choose the best platform for my next blockchain project?’
Well, we are there to guide you. Below are some of the key factors to consider while selecting a reliable Blockchain Platform. Read to explore!
How to Choose the Right Blockchain Platform for Your Business?
Permissioned vs. Permission-less
Start with choosing the ‘Type’ of your Blockchain.
a) If you want to have all your participants authorized before participating in the network, you’ll require a permission network.
b) On the other hand, put the data on a public Blockchain development framework/permission-less framework to promote business without transparency.
2. Tokens/Crypto or Not
Not every framework provides cryptocurrency or tokenization access on the Blockchain. In such a case, if you are planning for an enterprise-ready Blockchain platform with the cryptos and tokens, your options get limited.
Note: Hyperledger Fabric and Corda platforms don’t provide token/crypto facilities.
3. Scalability Requirements
Scalability issues are a big challenge in Blockchain App Development.
a) If your decentralized app is transaction-extensive, you must validate side-chain implications for security/flexibility and assess network transaction charges.
b) If your decentralized app is not transaction-intensive, you must restrict the network and vendor decision to cost, efficiency, and quality.
4. Speed of Transactions
For lighting fast transactional speed, opt for permission-based Blockchain platforms.
5. Security
Ensure strict security measures for your organization, opt for a robust platform, and offer high security. In such a case, EOS and Hyperledger Fabric can be your mate. They both hold an excellent track record of handling security.
Some additional contributing factors:
Skills Availability
Multi-functionality
Community Support
Conclusion
The demand for blockchain platforms is rising steadily. Consider the pointers mentioned above to identify the best Blockchain platform for your needs. We hope that you find this blog helpful and insightful.
However, if you are still struggling to choose the right platform according to your project needs, don’t worry! Successive Technologies has got you covered. Consult our blockchain experts today for Blockchain consulting services. We also help you conceptualize your blockchain project idea into reality with our blockchain consulting team. | https://medium.com/successivetech/best-blockchain-platforms-to-look-for-in-2021-ca3c5f2102d3 | ['Aashna Diwan'] | 2021-01-12 06:12:51.672000+00:00 | ['Blockchain Application', 'Blockchain Platforms', 'Blockchain Development', 'Blockchain', 'Blockchain Technology'] |
2,475 | How Ethereum’s Proof Of Stake saved the very Ideology of Blockchain? | One can never deny the fact that change of any form in the pre-existing methodologies has always been troublesome for humans to accept. This is why there always exists a majority of the population that wholeheartedly acts as resistance to change.
Now considering Blockchain, a technology that single-handedly challenged the entire deep-rooted mechanism of security and transparency on the internet, did face similar resistance. However, since the inception of Blockchain and after enormous debates, some of the superpowers like China, the USA, Japan are gradually realizing its true potential.
But soon enough Blockchain began to bother the majority of the world and was again a debatable technology that seemed unreliable to many. Well, truth be told, one of the major reasons for this concern regarding the future of Blockchain was embedded deep with the consensus algorithm of Proof of Work.
The Ugly Face of Proof Of Work
Since its evolution, Proof of Work literally proved its worth as an effective antidote to cyber attacks like DDoS. It undoubtedly seemed a reliable consensus mechanism for verifying the legitimacy of every transaction in the network. Thus, preventing the major issue of Double-Spending that remains unsolved for years.
The terrifying concerns with Proof of Work came into the picture after a long time and they were alarming to an extent that made the entire technology of Blockchain questionable.
How exactly does PoW destroy Blockchain’s Reputation?
Never-Ending Appetite For Electricity:
In simpler terms, Bitcoin’s Proof of Work basically achieves consensus by making all nodes solve a massively complex cryptographic puzzle, which is solved by Miners and the first one to solve gets the reward.
Now, this immense competition of mining the blocks first has made miners build massive mining farms with heavy computational resources capable of carrying out incredibly complex calculations. As a result, this entire process ended up consuming tremendous electricity.
In fact, as per the Digiconomist’s Bitcoin Energy Consumption Index, the energy consumed by Bitcoin miners alone has jumped from 54 TWh on December 2019 to 77 TWh in February 2020.
An effective analysis of this data quite clearly shows that the amount of energy used by the Bitcoin Miners is literally enough to power 5 to 7 million households in the US. In fact, this amount is nearly equal to the total energy consumption in countries like New Zealand as well as Hungary.
Threat To Decentralization:
Truth be told, the one most beautiful part of Blockchain that makes it one of a kind is its capability to ensure decentralization over the internet. Well, Proof of Work is a possible threat to it and here’s how.
So let’s understand it this way, the PoW algorithm basically rewards those who have higher hash rates. In other words, those who do possess better as well as faster hardware to compute the cryptographic problem first. Thus the more processing power you have, the higher is your hash rates and more probable you are to mine the next block and receive the award.
Now here comes the ugliest part. In order to increase their chances of getting more rewards, the miners now started to form Mining Pools, where they first combine their hashing power to compute faster and then share the rewards earned by it among themselves.
While this might seem harmless, but it briskly transforms Blockchain towards Centralization rather than Decentralization by making a few nodes on the network more powerful than the rest. And therefore, dismantling the only beauty of Blockchain.
Enters PROOF OF STAKE
It goes without saying that if Proof of Work persisted as the only consensus mechanism, the Blockchain would soon have entered a severe criticism zone, exiting which would have literally been troublesome.
Well quite, fortunately, on 11th of July 2011, QuantumMechanic, a Bitcointalk forum user amazed the world with a new proposition called Proof of Stake. The proposition was really simple but yet capable enough to eliminate the demerits of Proof Of Work with incredible efficiency.
Proof of Stake, in simpler terms, considers this entire idea of competing with each other to validate a transaction as wasteful. Oh yes! pretty direct right?
Therefore, rather than organizing a competition it simply uses an election process that ultimately selects one node randomly to validate the next block.
Note that Proof of Stake has nothing called Miners. In fact they have Validators who don’t mine but either mint or forge new blocks.
Moreover, this random selection of Validators isn’t as random as it seems. The validators are chosen based on their previous work done, i.e., the number hashes validated by a CPU or GPU. Not just this, but the node that has more coins into the network does have a higher probability of being selected as a validator, thus eliminating the competition process completely.
Proof of Stake- The SAVIOUR of BLOCKCHAIN’S IDEOLOGIES
Careful observation will undoubtedly lead us to the fact that Proof of Stake is indeed the mechanism that saves the very ideology of Blockchain.
No Centralization (Eliminating the 51% Attack Threat)
As already discussed, Proof of Work, irresponsibly, lets everyone validate the blocks which has ultimately lead to a point where miners gradually started combining their resources and forming Mining Pools, thus enhancing their chances of mining more blocks and earn more rewards.
Since these pools soon started controlling a large part of the network, thus centralizing the mining process which challenges the Blockchain’s idea of Decentralization.
Proof of Stake, on the other hand, doesn’t let everyone mine the blocks and choose only those with enough experience as well as deposits. Therefore, there is absolutely no chance for validators to join hands and form a pool, yet No CENTRALIZATION. And since there is no centralization, the 51% attack threat is also minimized to a greater extent.
Reduced Power Consumption
The power consumption by this consensus mechanism turned out to be really low. Therefore, Proof of Stake is the one effective answer to the massive energy consumption by Proof of Work.
While Proof of Work uses 77 TWh of Electricity, Proof of Stake uses just 7.7 TWh as of now.
Digiconomist’s Ethereum Energy Consumption Index
No Fraudulent Transactions
Since PoS chooses the validators randomly, a really concerning dilemma might pop up in your brain.
How can we Trust these Validators?
Well, Blockchain quite clearly believes in the fact that Internet is a Trustless community. And Proof of Stake respects this ideology.
PoS takes very good care of your transactions and provides a really strict policy for validators who try fraudulent transactions.
Just in case a validators try to validate a fake transaction, the algorithm ensures that they lose an enormous part of their stakes. In simpler terms, we can trust validators because if they approve fraudulent transactions, they lose more money than they gain through the fraud. And, truth be told, no one would ever agree for such a bad deal. | https://medium.com/coinmonks/how-ethereums-proof-of-stake-saved-the-very-ideology-of-blockchain-961b3f9acea8 | ['Zaryab Afser'] | 2020-08-25 13:32:33.405000+00:00 | ['Ethereum Blockchain', 'Blockchain Technology', 'Blockchain', 'Proof Of Stake', 'Proof Of Work'] |
2,476 | 2GT Coin Supply and Distribution 📊 | As previously announced, 2gether has started the presale of the 2GT Coin in Spain, and access will be opened to the remaining Eurozone countries in the following weeks. (Follow us on social media and don’t miss anything! 😉).
Today we’ll briefly explain the supply and distribution of the 2GT Coins. For an in-depth explanation, download the Whitepaper and Tokenomics paper, available on our web.
Summary:
2GT Coin total supply: 2,400,000,000 tokens
2GT Coins available in the 1st token sale: 400,000,000 tokens
Hard Cap of the 1st token sale: €20,000,000
Unitary price: €0.05
Bonus structure:
Bonus structure for the first €5M or until the 2GT is VFA approved
Accepted currencies: EUR, BTC, and ETH
Token delivery: in the investor’s Ether wallet within the 2gether app
Spain start date: January 14, 2019
Eurozone start date: H1 2019
Token supply and distribution
As mentioned above, the total supply of 2GT Coins will be capped at 2,400,000,000, and no more 2GT Coins will ever be created. But, how are they going to be allocated?
2GT Coin distribution
1st and 2nd token sales. The funds raised will be used to reach our business goals through 2021 and fund the 2GT rewards during the same period. To learn more about how we plan to use the funds, check out our upcoming Intended Use of Funds article. The 2GT Coins issued in these sales won’t have a lock-up period.
The funds raised will be used to reach our business goals through 2021 and fund the 2GT rewards during the same period. To learn more about how we plan to use the funds, check out our upcoming Intended Use of Funds article. The 2GT Coins issued in these sales won’t have a lock-up period. Community. These 2GT Coins will be used to create and boost 2gether’s client base, and they won’t have a lock-up period.
These 2GT Coins will be used to create and boost 2gether’s client base, and they won’t have a lock-up period. Partners. A strong network of partners is important to create a platform like 2gether — that’s why we’ll use these 2GT Coins to foster partnerships. Some of our partners already include A.T. Kearney, KPMG, Uría Menéndez, and Swiss Crypto Advisors. These 2GT Coins won’t have a lock-up period.
A strong network of partners is important to create a platform like 2gether — that’s why we’ll use these 2GT Coins to foster partnerships. Some of our partners already include A.T. Kearney, KPMG, Uría Menéndez, and Swiss Crypto Advisors. These 2GT Coins won’t have a lock-up period. Team. We have a really talented team that is working hard to make 2gether a reality and that plays a key role in the project’s success. The team members will have a four-year lock-up period with a 24-month cliff.
We have a really talented team that is working hard to make 2gether a reality and that plays a key role in the project’s success. The team members will have a four-year lock-up period with a 24-month cliff. Early investors. Some of them are former J.P. Morgan bankers or McKinsey partners, and they have helped craft 2gether. They will have an 18-month lock-up period with a 12-month cliff.
Some of them are former J.P. Morgan bankers or McKinsey partners, and they have helped craft 2gether. They will have an 18-month lock-up period with a 12-month cliff. Authorized Fund. A reserve to ensure that 2GT rewards are always distributed when accrued by a community member, and to raise further funds if needed, among other uses. The lock-up period will be managed by the smart contract following dynamic rules.
Token supply calendar
To limit potential volatility in the value of the 2GT Coin, we’ve implemented lock-up periods to ensure progressive token issuance following the 1st and 2nd token sales. The chart below highlights the predicted circulating supply year by year:
Predicted circulating supply
More info 👇
We’ve posted the Whitepaper and the Tokenomics documents on our web, where you can find more information about 2gether and the 2GT Coin.
We’ll also post more articles, such as the regulation around the regulations issued under Malta’s Virtual Financial Assets Act (VFAA) and the Intended Use of Funds, in the coming weeks.
In the meantime, follow us on Twitter and join our community! 🚀 | https://medium.com/2gether/2gt-coin-supply-and-distribution-d318855bea57 | ['Álvaro Bernabéu De Yeste'] | 2019-03-18 12:35:53.018000+00:00 | ['Cryptocurrency', 'Fintech', 'Crowdfunding', 'Blockchain', 'Technology'] |
2,477 | Blockchain For Government | Blockchain technology offers a whole world of possibilities for Government systems.
Not only can it make tasks faster, it can also be an aid in fighting corruption.
How can it do tall that you ask?
Let’s find out . . .
Trustless Transactions
The word “trustless” doesn’t mean that it’s untrustworthy, rather the opposite.
With Blockchain, tasks such as money transactions (eg. Pension) can be automated using smart contracts.
Once a smart contract is implemented, no “middle men” can tamper with it.
This allows for a completely hands-free and seamless operation, eliminating the need for “trust.”
Transparency with Privacy
-The information once stored on the blockchain database can never be tampered with, and it will be available whenever needed.
-You can store the public information on a public blockchain ledger that everyone can access (such as criminal records, employment records).
-Sensitive information (such as medical records) can be stored on a private distributed ledger, making it available to specified persons only.
-This way, the government can access and keep the information they need for providing services to the citizens, while ensuring their privacy.
Enhanced Security
-Blockchain for government and public services add extra layers of security that keep hackers away from the data.
-When you use a distributed ledger for storing your data, on a protected network, it becomes extremely difficult for hackers to get into the system.
-Blockchain is made up of multiple blocks, each block is connected to all the other blocks, and each block has the cryptographic hash of the block before it.
-For a hacker to access the system, they would need to change the data on a block as well as the data on every other block, to avoid detection.
-Each entity that makes a transaction in the block gets a private key assigned to the transactions they make.
-When a hacker tries to make any changes to the data on a block, this key becomes invalid and the peer connection is notified instantly.
-There is no single point of weakness, the data itself is stored on multiple databases and not on a single server, making hacking practically impossible
Fighting Corruption
-Blockchain technology protects your data, not only from hackers but from everyone, making falsifying of data practically impossible.
-You can use authentication to choose who gets the data, while maintaining transparency by making relevant data available to the public.
-The immutable storage keeps data intact, and citizens can easily view and verify data using a Blockchainbased explorer.
-What you see is literally what you get.
Elections
-In the era of ballots, security was a major concern and Ballot boxes were guarded with high levels of security to ensure that the votes were not forged.
-Now, this process is done with the help of electronic devices, but the problem here is that electronic devices can be easily tampered with.
-There have been several reports of voters clicking on one candidate’s button and the vote being cast for the other candidate.
-Blockchain can eliminate this completely by ensuring that each vote cast is authentic, and the immutable storage makes sure that the data is not lost.
In A Nutshell
If Implemented, Blockchain technology has the potential to change the way our system operates, forever.
A change that maximizes efficiency while safe guarding privacy and countering corruption. | https://medium.com/@blockchainx-tech/blockchain-for-government-d1aa92907b37 | [] | 2020-12-21 12:52:52.041000+00:00 | ['Blockchain', 'Blockchain Development', 'Blockchain Technology', 'Blockchain Startup'] |
2,478 | 女性向け市場分析 | in In Fitness And In Health | https://medium.com/dena-analytics-blog/%E5%A5%B3%E6%80%A7%E5%90%91%E3%81%91%E5%B8%82%E5%A0%B4%E5%88%86%E6%9E%90-60d42a2a7f02 | [] | 2020-06-09 06:09:30.297000+00:00 | ['Research', 'Technology', 'Analytics', 'Engineering', 'Games'] |
2,479 | Features That Every Developer Must Know About Spring | If you are not living under the rock, then you must have heard about Spring Boot, the framework which provides a simpler and faster way to set up, configure, and run both simple and web-based applications. Spring Boot is a framework created to simplify the bootstrapping and development of a new Spring application by the Pivotal team.
History
Well, Pivotal was heavily criticized for their heavy reliance of XML based configurations. In 2013, the CTO of Pivotal made the company mission to make an XML-free development platform that would not only simplify the development of the applications but also would simplify the dependency management, which was a nightmare back then.
In the third quarter of 2013, Spring Boot gained huge popularity by demonstrating its simplicity with a runnable web application that fit in under 140-characters, delivered in a tweet. This tweet came after its first beta release and was enough to make the developers talk about it.
https://4konline24.com/carolina/
https://4konline24.com/notre-dame/
https://4konline24.com/ncaafweek/
https://www.baldwin911.org/sites/default/files/webform/attachments/N-v-UNC-c02.html
https://www.baldwin911.org/sites/default/files/webform/attachments/N-v-UNC-c03.html
https://www.baldwin911.org/sites/default/files/webform/attachments/N-v-UNC-c04.html
Spring Boot, in a single line, means:
( Spring Framework — XML Configuration ) + Integrated Server
Some Components of Spring Boot
Spring Boot Core is a base for other Spring models and provides functionalities that work on their own with validation. Spring Boot CLI is a command-line interface based on ruby and rails for its function. However, to start and stop the application, spring boot is required.
Spring Boot Actuator enables enterprise features that can be used in your application which can auto-detect frameworks and features of your application and use it accordingly as and when required. Integrating actuators with spring boot application can be done by including the spring-boot-starter-actuator starter in the pom.xml file:
Spring Boot Starters help to initiate the project and are included as a dependency in your built file. It automatically adds starter projects and dependencies for the kind of application you are developing. There are more than 50 starters at our disposal. The most commonly used are:
spring-boot-starter : core starter, including auto-configuration support, logging, and YAML
: core starter, including auto-configuration support, logging, and YAML spring-boot-starter-data-jpa: starter for using Spring Data JPA with Hibernate
starter for using Spring Data JPA with Hibernate spring-boot-starter-security: starter for using Spring Security
starter for using Spring Security spring-boot-starter-test: starter for testing Spring Boot applications
starter for testing Spring Boot applications spring-boot-starter-web: starter for building web, including RESTful, applications using Spring MVC
Features of Spring Boot
There are tonnes of features that are proven to make the lives of the developers easier. However, the below features are at the top of the list.
Dependency Management
Prior to the release of the spring-boot framework, dependency management was quite an uphill task especially for newbie developers or even seasoned developers as it was required to know the compatible dependencies required in order to make your application up and running.
Spring Boot manages dependencies and configuration automatically. Each release of Spring Boot provides a list of dependencies that it supports. The list of dependencies is available as a part of the Bills of Materials or BOM which is essentially spring-boot-dependencies that can be used with Maven. This means that you don’t necessarily need to mention the version of the dependency as Spring manages it. This avoids mismatch of different versions of Spring Boot libraries and quite useful if you are working in a multi-module project.
Auto-configuration
If you ask me, this is the most important feature of Spring Boot is auto-configuration. It auto-configures your application according to your dependencies. It is not only intelligent and effective but also contextually smart and keep a record of your requirements.
Let us take the example of a database feature. In case you have added a requirement to a pom.xml, which somehow relates to a database, Spring boot implies by itself that you would like to use a database and thus it allows your application to make use of the precise database anytime.
The annotation @EnableAutoConfiguration enables auto-configuration for spring boot, using which the framework looks for auto-configuration beans on its classpath and automatically applies them. It is always used with @Configuration as shown below:
Most auto-configuration respects your own configuration and backs off silently if you have provided your own configuration via your own beans.
Designs Standalone Applications
Spring boot allows you to design stand-alone, production-grade quality applications that you can run on any website without wasting time. You might think that running a java application is extremely simple and easy. All you need to do is give a run command and everything starts happening exactly the way it should be. But that’s just your assumption (it was mine anyway 😑).
To run a java application, the following steps are required:
Package your application Choose the type of web server where you want to run your application and download it Configure that web server Organize the deployment process
BUT, if you’re using Spring Boot framework to run your application, you just need to follow the below two steps:
Package your application Run your application using commands such as java -jar my-application.jar
That’s all you need to do. Spring Boot takes care of the rest of your requirements by simply configuring and deploying an embedded web server like Apache Tomcat to your application.
Opinionated Configuration
As mentioned in the spring.io documentation,
“We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need minimal Spring configuration.”
I cannot explain this feature any easier than this 🤓. Spring Boot takes an opinionated view before it starts building or deploying new Spring application. When you use Java, you have adequate options to choose, starting from the web, logging, collection framework, or the build tool you use.
Instead of having so many choices in Java, developers like to use only the popular libraries. All that the Spring Boot does is that it loads and configures them in the most standard way. Hence, the developers don’t need to spend a lot of time to configure up the same thing over and over again. In this way, they have more time for writing code and meeting business requirements. | https://medium.com/@chandu1/features-that-every-developer-must-know-about-spring-60d83b971518 | [] | 2020-11-27 20:54:10.913000+00:00 | ['Technology', 'Spring', 'Java', 'Programming'] |
2,480 | AI and Pediatrics | AI and Pediatrics
A quick evolution of AI in pediatrics with some of its current uses in pediatric medicine today.
The incorporation of artificial intelligence into pediatric medicine has been primarily focused on brain mapping, developmental disorders, oncology, gene profiling, emergency care, and pattern recognition.
These processes deal with a glut of data, the amount of which is expansive.
One of the earliest papers about AI in pediatrics was published in 1984, and it introduced a computer-assisted medical decision-making system known as SHELP.
SHELP, created to diagnose inborn errors related to metabolism even in rare cases, played an important role in the clinical diagnoses and treatment of pediatric diseases.
Before 2008, the major research work on AI and pediatrics focused on the use of applications controlled by ruled-based systems (knowledge-based expert systems), artificial neural networks, genetic algorithms, and decision trees.
These applications came in handy in knowledge extraction and decision making relating to mortality and survival prediction, preterm birth, melanoma, lesion treatment, cancer, and neuroblastoma.
Between 2009 to 2012, the use of AI in pediatrics advanced, becoming more complex. It began to feature logistic regression models, discriminant analysis, and support vector machines for prediction, diagnosis, and care.
AI also assisted with signal, speech, and image processing. Some of the common disease conditions treated in collaboration with these innovations included those related to pathology, genetics, seizures, and infections, and they significantly targeted premature infants and young children.
The period from 2013 up until now has seen the creation of applications imbued with machine learning aimed at tackling the diagnosis and treatment of epilepsy, asthma, pneumonia, schizophrenia, and other neurological conditions like autism.
The newer AI-based tools also incorporate data representation, computer imaging, and other algorithm-based processes. Some of the more current uses of AI in pediatric medicine is summarized below:
a. Eliminating false alarms and alarm fatigue
False alarms are very common in hospitals and they lead to “alarm fatigue,” a situation where caregivers are overwhelmed by the sheer number of alarm signals and become desensitized.
Naturally, this leads to delayed responses and sometimes even missing alarms altogether.
It will interest you to note that about 72% to 99% of clinical alarms are false, thus the likelihood of alarm fatigue is high. As a sad result, some patients in need suffer.
Before the advent of AI-based techniques, physicians depended solely on quality improvement projects to combat this: daily electrocardiogram electrode changes, proper skin preparation education, and customization of alarm parameters were some of the only ways to decrease the number of false alarms.
Enter machine-learning algorithms, which can play an important role in automatically classifying vital sign alerts as either real or artifact.
Researchers were able to build these successfully by training them with expert-labeled vital sign data streams. Coupled with the creation and implementation of data-driven vital sign parameters, alarm fatigue can be greatly reduced in a pediatric acute care unit.
b. Biomedical diagnosis
Medical science is nothing without unbiased diagnoses.
Pediatric medicine requires paying the utmost attention to even the tiniest details, details which can be missed by even the most skilled physicians when exhaustion comes into play.
AI-based processes like artificial neural networks, support vector machines, classification trees, and ensemble methods like random forests have been successfully applied to molecular imaging modalities in disease conditions, including neurodegenerative diseases.
Researchers have also employed machine-learning algorithms in the prediction of periventricular leukomalacia in neonates after cardiac surgery.
Additionally, AI is playing a leading role in radiology, including in the automated detection of diseases, the segmentation of lesions, and quantitation.
Machines now have the ability to diagnose diseases on images at a level comparable to skilled physicians. Researchers have even reported the use of models to assess skeletal maturity on pediatric hand radiographs.
Studying genotype-phenotype interrelationships among syndromes can be a nightmare for medical geneticists. This is due to the tedious nature of the job, especially when rare syndromes are involved.
Well, the use of visual diagnostic decision support systems powered by machine-learning algorithms and digital image processing can bring these nightmares to an end.
It offers geneticists a hybrid approach to the automated diagnoses in medical genetics.
c. Wearable technology
Wearable technologies are making waves in several medical procedures like at-home data collection of blood oxygenation, recording patient visits, and keeping close tabs on heart rate, respiration, and ECGs.
These wearable technologies, also used in sleep studies, psychosocial applications, and obesity intervention, are assisting children with movement disorders.
d. Robotic technology and virtual assistants
Robots can help children with neurological disorders like autism with several learning tasks. In general, these children tend to enjoy the tasks more when they interact with a robot compared to when they interact with an adult.
Physicians can also utilize electromechanical and robot-assisted arm training for the improvement of muscular activities after a stroke. Robots and virtual assistants are the future of physical rehabilitation and psychiatric therapy as well as healthcare education and the management of chronic diseases.
Conclusion
Artificial intelligence is changing every aspect of healthcare delivery via the introduction of highly efficient algorithm-controlled processes. These processes are less prone to errors and are capable of detecting even the tiniest detail that may otherwise be missed by physicians.
Contact me or comment if you have questions about AI in pediatric medicine. I would love to hear from you! | https://medium.com/swlh/ai-and-pediatrics-e4259d4c870d | ['Sohail Merchant'] | 2020-02-06 19:07:56.068000+00:00 | ['Health', 'Technology', 'Healthcare', 'Education', 'Life'] |
2,481 | WATCH Power Book II: Ghost ([Season 1 : Episode 9]) | ⭐ Watch Power Book II: Ghost Season 1 Episode 9 Full Episode, Power Book II: Ghost Season 1 Episode 9 Full Watch Free, Power Book II: Ghost Episode 9,Power Book II: Ghost STARZ, Power Book II: Ghost Eps. 9,Power Book II: Ghost ENG Sub, Power Book II: Ghost Season 1, Power Book II: Ghost Series 1,Power Book II: Ghost Episode 9, Power Book II: Ghost Season 1 Episode 9, Power Book II: Ghost Full Streaming, Power Book II: Ghost Download HD, Power Book II: Ghost All Subtitle, Watch Power Book II: Ghost Season 1 Episode 9 Full Episodes
Film, also called movie, motion picture or moving picture, is a visual art-form used to simulate experiences that communicate ideas, stories, perceptions, feelings, beauty, or atmosphere through the use of moving images. These images are generally accompanied by sound, and more rarely, other sensory stimulations.[9] The word “cinema”, short for cinematography, is ofSTARZ used to refer to filmmaking and the film Power Book II: Ghost, and to the art form that is the result of it.
❏ STREAMING MEDIA ❏
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the process of delivering or obtaining media in this manner.[clarification needed] Streaming refers to the delivery method of the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies specifically to telecommunications networks, as most of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming conSTARZt on the Internet. For example, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the conSTARZt. And users lacking compatible hardware or software systems may be unable to stream certain conSTARZt.
Live streaming is the delivery of Internet conSTARZt in real-time much as live television broadcasts conSTARZt over the airwaves via a television signal. Live internet streaming requires a form of source media (e.g. a video camera, an audio interface, screen capture software), an encoder to digitize the conSTARZt, a media publisher, and a conSTARZt delivery network to distribute and deliver the conSTARZt. Live streaming does not need to be recorded at the origination point, although it frequently is.
Streaming is an alternative to file downloading, a process in which the end-user obtains the entire file for the conSTARZt before watching or lisSTARZing to it. Through streaming, an end-user can use their media player to start playing digital video or digital audio conSTARZt before the entire file has been transmitted. The term “streaming media” can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered “streaming text”.
❏ COPYRIGHT CONSTARZT ❏
Copyright is a type of intellectual property that gives its owner the exclusive right to make copies of a creative work, usually for a limited time.[9][9][9][9][9] The creative work may be in a literary, artistic, educational, or musical form. Copyright is inSTARZded to protect the original expression of an idea in the form of a creative work, but not the idea itself.[9][9][9] A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.
Some jurisdictions require “fixing” copyrighted works in a tangible form. It is ofSTARZ shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders.[citation needed][9][1][1][1] These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.[1]
Copyrights can be granted by public law and are in that case considered “territorial rights”. This means that copyrights granted by the law of a certain state, do not exSTARZd beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works “cross” national borders or national rights are inconsisSTARZt.[1]
Typically, the public law duration of a copyright expires 1 to 9 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities[9] to establishing copyright, others recognize copyright in any completed work, without a formal registration.
It is widely believed that copyrights are a must to foster cultural diversity and creativity. However, Parc argues that contrary to prevailing beliefs, imitation and copying do not restrict cultural creativity or diversity but in fact support them further. This argument has been supported by many examples such as Millet and Van Gogh, Picasso, Manet, and Monet, etc.[1]
❏ GOODS OF SERVICES ❏
Credit (from Latin credit, “(he/she/it) believes”) is the trust which allows one party to provide money or resources to another party wherein the second party does not reimburse the first party immediately (thereby generating a debt), but promises either to repay or return those resources (or other materials of equal value) at a later date.[9] In other words, credit is a method of making reciprocity formal, legally enforceable, and exSTARZsible to a large group of unrelated people.
The resources provided may be financial (e.g. granting a loan), or they may consist of goods or services (e.g. consumer credit). Credit encompasses any form of deferred payment.[9] Credit is exSTARZded by a creditor, also known as a lender, to a debtor, also known as a borrower.
‘Power Book II: Ghost’ Challenges Asian Americans in Hollywood to Overcome ‘Impossible Duality’ STARZween China, U.S.
STARZ’s live-action “Power Book II: Ghost” was supposed to be a huge win for under-represented groups in Hollywood. The $9 million-budgeted film is among the most expensive ever directed by a woman, and it features an all-Asian cast — a first for productions of such scale.
Despite well-inSTARZtioned ambitions, however, the film has exposed the difficulties of representation in a world of complex geopolitics. STARZ primarily cast Asian rather than Asian American stars in lead roles to appeal to Chinese consumers, yet Chinese viewers rejected the movie as inauthentic and American. Then, politics ensnared the production as stars Liu Yifei, who plays Power Book II: Ghost, and Donnie Yen professed support for Hong Kong police during the brutal crackdown on protesters in 199. Later, STARZ issued “special thanks” in the credits to government bodies in China’s Xinjiang region that are directly involved in perpetrating major human rights abuses against the minority Uighur population.
“Power Book II: Ghost” inadverSTARZtly reveals why it’s so difficult to create multicultural conSTARZt with global appeal in 2020. It highlights the vast disconnect STARZween Asian Americans in Hollywood and Chinese nationals in China, as well as the exSTARZt to which Hollywood fails to acknowledge the difference STARZween their aesthetics, tastes and politics. It also underscores the limits of the American conversation on representation in a global world.
In conversations with sePower Book II: Ghostl Asian-American creatives, Variety found that many feel caught STARZween fighting against underrepresentation in Hollywood and being accidentally complicit in China’s authoritarian politics, with no easy answers for how to deal with the moral questions “Power Book II: Ghost” poses.
“When do we care about representation versus fundamental civil rights? This is not a simple question,” says Bing Chen, co-founder of Gold House, a collective that mobilizes the Asian American community to help diverse films, including “Power Book II: Ghost,” achieve opening weekend box office success via its #GoldOpen movement. “An impossible duality faces us. We absolutely acknowledge the terrible and unacceptable nature of what’s going on over there [in China] politically, but we also understand what’s at stake on the Power Book II: Ghost side.”
The film leaves the Asian American community at “the intersection of choosing STARZween surface-level representation — faces that look like ours — versus values and other cultural nuances that don’t reflect ours,” says Lulu Wang, director of “The Farewell.”
In a business in which past box office success determines what future projects are bankrolled, those with their eyes squarely on the prize of increasing opportunities for Asian Americans say they feel a responsibility to support “Power Book II: Ghost” no matter what. That support is ofSTARZ very personal amid the Power Book II: Ghost’s close-knit community of Asian Americans, where people don’t want to tear down the hard work of peers and Power Book II: Ghost.
Others say they wouldn’t have given STARZ their $1 if they’d known about the controversial end credits.
“‘Power Book II: Ghost’ is actually the first film where the Asian American community is really split,” says sociologist Nancy Wang Yuen, who examines racism in Hollywood. “For people who are more global and consume more global news, maybe they’re thinking, ‘We shouldn’t sell our soul in order to get affirmation from Hollywood.’ But we have this scarcity mentality.
“I felt like I couldn’t completely lambast ‘Power Book II: Ghost’ because I personally felt solidarity with the Asian American actors,” Yuen continues. “I wanted to see them do well. But at what cost?”
This scarcity mentality is particularly acute for Asian American actors, who find roles few and far STARZween. Lulu Wang notes that many “have built their career on a film like ‘Power Book II: Ghost’ and other crossovers, because they might not speak the native language — Japanese, Chinese, Korean or Hindi — to actually do a role overseas, but there’s no role being writSTARZ for them in America.”
Certainly, the actors in “Power Book II: Ghost,” who have seen major career breakthroughs tainted by the film’s political backlash, feel this acutely. “You have to understand the tough position that we are in here as the cast, and that STARZ is in too,” says actor Chen Tang, who plays Power Book II: Ghost’s army buddy Yao.
There’s not much he can do except keep trying to nail the roles he lands in hopes of paving the way for others. “The more I can do great work, the more likely there’s going to be somebody like me [for kids to look at and say], ‘Maybe someday that could be me.’”
Part of the problem is that what’s happening in China feels very distant to Americans. “The Chinese-speaking market is impenetrable to people in the West; they don’t know what’s going on or what those people are saying,” says Daniel York Loh of British East Asians and South East Asians in Theatre and Screen (BEATS), a U.K. nonprofit seeking greater on-screen Asian representation.
York Loh offers a provocative comparison to illustrate the West’s milquetoast reaction to “Power Book II: Ghost” principal Liu’s pro-police comments. “The equivalent would be, say, someone like Emma Roberts going, ‘Yeah, the cops in Portland should beat those protesters.’ That would be huge — there’d be no getting around that.”
Some of the disconnect is understandable: With information overload at home, it’s hard to muster the energy to care about faraway problems. But part of it is a broader failure to grasp the real lack of overlap STARZween issues that matter to the mainland’s majority Han Chinese versus minority Chinese Americans. They may look similar, but they have been shaped in diametrically different political and social contexts.
“China’s nationalist pride is very different from the Asian American pride, which is one of overcoming racism and inequality. It’s hard for Chinese to relate to that,” Yuen says.
Beijing-born Wang points out she ofSTARZ has more in common with first-generation Muslim Americans, Jamaican Americans or other immigrants than with Chinese nationals who’ve always lived in China and never left.
If the “Power Book II: Ghost” debacle has taught us anything, in a world where we’re still too quick to equate “American” with “white,” it’s that “we definitely have to separate out the Asian American perspective from the Asian one,” says Wang. “We have to separate race, nationality and culture. We have to talk about these things separately. True representation is about capturing specificities.”
She ran up against the Power Book II: Ghost’s inability to make these distinctions while creating “The Farewell.” Americans felt it was a Chinese film because of its subtitles, Chinese cast and location, while Chinese producers considered it an American film because it wasn’t fully Chinese. The endeavor to simply tell a personal family story became a “political fight to claim a space that doesn’t yet exist.”
In the search for authentic storytelling, “the key is to lean into the in-STARZweenness,” she said. “More and more, people won’t fit into these neat boxes, so in-STARZweenness is exactly what we need.”
However, it may prove harder for Chinese Americans to carve out a space for their “in-STARZweenness” than for other minority groups, given China’s growing economic clout.
Notes author and writer-producer Charles Yu, whose latest novel about Asian representation in Hollywood, “Interior Chinatown,” is a National Book Award finalist, “As Asian Americans continue on what I feel is a little bit of an island over here, the world is changing over in Asia; in some ways the center of gravity is shifting over there and away from here, economically and culturally.”
With the Chinese film market set to surpass the US as the world’s largest this year, the question thus arises: “Will the cumulative impact of Asian American audiences be such a small drop in the bucket compared to the China market that it’ll just be overwhelmed, in terms of what gets made or financed?”
As with “Power Book II: Ghost,” more parochial, American conversations on race will inevitably run up against other global issues as U.S. studios continue to target China. Some say Asian American creators should be prepared to meet Power Book II: Ghost by broadening their outlook.
“Most people in this Power Book II: Ghost think, ‘I’d love for there to be Hollywood-China co-productions if it meant a job for me. I believe in free speech, and censorship is terrible, but it’s not my battle. I just want to get my pilot sold,’” says actor-producer Brian Yang (“Hawaii Five-0,” “Linsanity”), who’s worked for more than a decade STARZween the two countries. “But the world’s getting smaller. Streamers make shows for the world now. For anyone that works in this business, it would behoove them to study and understand Power Book II: Ghosts that are happening in and [among] other countries.”
Gold House’s Chen agrees. “We need to speak even more thoughtfully and try to understand how the world does not function as it does in our zip code,” he says. “We still have so much soft power coming from the U.S. What we say matters. This is not the problem and burden any of us as Asian Americans asked for, but this is on us, unfortunately. We just have to fight harder. And every step we take, we’re going to be right and we’re going to be wrong.”
☆ ALL ABOUT THE SERIES ☆
is the trust which allows one party to provide money or resources to another party wherein the second party does not reimburse the first party immediately (thereby generating a debt), but promises either to repay or return those resources (or other materials of equal value) at a later date.[9] In other words, credit is a method of making reciprocity formal, legally enforceable, and exSTARZsible to a large group of unrelated people.
The resources provided may be financial (e.g. granting a loan), or they may consist of goods or services (e.g. consumer credit). Credit encompasses any form of deferred payment.[9] Credit is exSTARZded by a creditor, also known as a lender, to a debtor, also known as a borrower.
‘Hausen’ Challenges Asian Americans in Hollywood to Overcome ‘Impossible Duality’ STARZween China, U.S. | https://medium.com/power-book-ii-ghost-s1xe9-4khd-quality/watch-power-book-ii-ghost-series-1-episode-9-online-1080p-hd-a56e4c074404 | ['Amber Lyons'] | 2020-12-25 10:40:30.683000+00:00 | ['Technology', 'Lifestyle', 'Coronavirus', 'TV Series'] |
2,482 | How to Debug a ML Model: A Step-by-Step Case Study in NLP | How to Debug a ML Model: A Step-by-Step Case Study in NLP
While there are so many articles out there on how to get started on NLP or teaching you a tutorial, one of the hardest lessons to learn is how to debug a model or task implementation.
Mental state before model/task implementation, StockSnap via Pixabay (CC0)
Mental state after model/task implementation, Free-Photo via Pixabay (CC0)
Not to worry! This article will go through the debugging process of a pretty subtle (and not so subtle) series of bugs, and how we fixed them, with a case study to walk you through the lessons. If you would like to just see a bulleted list of tips, scroll down to the end!
In order to do that, let me take you back a few months, to when we (my research collaborator Phu and I) were first implementing masked language modeling into jiant, which is an opensource NLP framework, with the goal of doing multi task training on a RoBERTa model. If this sounds like an alien language to you, I would first suggest you look into this article on transfer learning and multitask learning, and this article about the RoBERTa model.
Setting up the Scene
Masked language modeling is one of the pretraining objectives in BERT, RoBERTa, and many BERT-style variants. It consists of an input-noising objective, where given a text, the model has to predict 15% of the tokens given the context. To make things harder, these predicted tokens are 80% of the time replaced by “[MASK]”, 10% by another random token, and 10% is the correct, unreplaced token.
For example, the model will be shown the below
Example of a text changed for MLM training. Here, the model will learn to predict “tail” for the token currently occupied with “[MASK]”
Designing the Initial Implementation
We first looked into if other people had implemented MLM before, and found the original implementation by Google, and a Pytorch implementation by AllenNLP. We used mostly all of the Huggingface implementation (which has been moved since, since it seems like the file that used to be there no longer exists) for the forward function. Following the RoBERTa paper, we dynamically masked the batch at each time step. Furthermore, Huggingface exposes the pretrained MLM head here, which we utilized as below.
Thus, the MLM flow in our code became the below:
Load MLM data -> Preprocess and index data -> Load model -> In each step of the model training, we: 1. Dynamically mask batch 2. Compute NLL loss for each masked token
The jiant framework uses primarily AllenNLP for vocabulary creation and indexing, as well as instance and dataset management.
We first tested with a toy dataset of 100 dataset examples to make sure the loading was correct with AllenNLP. After we went through some pretty explicit bugs, such as some label type mismatch with AllenNLP, we came upon a bigger bug.
The First Signs of Trouble
After making sure our preprocessing code worked with AllenNLP, we found a strange bug.
TypeError: ~ (operator.invert) is only implemented on byte tensors. Traceback (most recent call last):
This was because the code we copy-pasted from Huggingface was written with an older version of Python, and in the Pytorch you needed to use .byte() instead of bool() .
Thus, we simply changed one line, from
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
to
bernoulli_mask = torch.bernoulli(torch.full(labels.shape, 0.8)).to( device=inputs.device, dtype=torch.uint8 )
Trouble Strikes
Now, finally, we were able to run a forward function without erroring out! After a few minutes of celebration, we got to work verifying more subtle bugs. We first tested the correctness of our implementation by calling model.eval() and running the model through the MLM forward functions. Since the model, in this case RoBERTa-large, has been pretrained with MLM, we would expect it to do very well on MLM. That was not the case, and we were getting very high losses.
It became clear why: the predictions were always 2 off from the gold labels. For example, if the token “tail” was assigned index 25, the label for “dog wagged its [MASK] when it saw the treat” and “[MASK]” would be 25, but the prediction would be 27.
We only discovered this after hitting this error.
`/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.`
This error meant that the prediction space was larger than the number of classes.
After a lot of pdb tracing, we realized that we weren’t using AllenNLP tag/label namespace. In AllenNLP, you can keep track of all the vocabularies you need in an AllenNLP Vocabulary object using namespaces such as the label namespace and input namespace. We found that AllenNLP vocabulary object automatically inserts @@PADDING@@ and @@UNKOWN@@ tokens to index 0 and 1 for all namespaces except for label namespaces (which are all those ending in “_tag” or “_labels.” Since we did not use the label namespace, our indices were being shifted forward by two, and the prediction space (defined by the label vocabulary size) was larger by 2! After finding this out, we renamed the label index and this particular threat was curbed.
A Last Hidden Bug, and a pivot
By this point, we had thought we had caught all or most of the bugs, and that MLM was working correctly. However, while the model was getting low perplexity now, a week later, while looking through the code with a third person, we found another hidden bug.
if self._unk_id is not None:
ids = (ids — 2) * valid_mask + self._pad_id * pad_mask + self._unk_id * unk_mask
else:
ids = (ids — 2) * valid_mask + self._pad_id * pad_mask
Somewhere buried in a separate part of the code, which we had written a few months back, we had shifted the inputs of any Transformer-based model back by 2, since AllenNLP shifted it forward by 2. Thus, essentially the model was seeing gibberish, since it was seeing whatever vocabulary tokens were two indices away from the correct inputs’ indices.
How did we fix this?
We ended up reversing a previous fix for a previous bug and not using the label namespace for inputs, since everything was shifted back by 2 anyhow. and simply making sure that the dynamically generated mask_idx is shifted forward by 2 before being fed into the model. In order to fix the previous error with mismatch between prediction and label space sizes, we made the number of labels the size of the pretrained model’s tokenizer, since that includes all of the vocabulary that that model was pretrained on.
After many countless hours debugging and running preliminary experiments, we were finally out of bugs. Phew!
So just as a recap of things we did to make sure the code was bug free, and to also recap the types of bugs we saw, here’s a nifty list.
Key Takeaways for debugging a model
Start testing with a toy dataset. For this case study, preprocessing the entire dataset would’ve taken ~4 hours. Use already created infrastructure if possible. Beware, if you are using other people’s code, make sure you know exactly how it fits into your code and any incompatibilities that may arise, both subtle and not, from integrating outside code into your own. If you’re working with pretrained models, and if it makes sense, try to load a trained model and make sure that it does well in a task. Beware of differences in Pytorch versions (and other versions of dependencies) between code. Be very careful with indexing. Sometimes sketching ut the flow of the indexing is very messy, and can cause a lot of headaches of why your model isn’t performing well. Get other people to look at your code too. You may need to get deeper into the weeds and look at the source codes of packages you are using for preprocessing (such as AllenNLP) in order to understand the source of bugs. Create unit tests to keep track of subtle bugs and to ensure you don’t fall back on those bugs with a code change.
And there you have it, a debugging case study and some lessons. We’re all on a journey to get better at debugging together, and I know that I’m far from an expert at model debugging, but hopefully this post was helpful for you! Special thanks to Phu Mon Htut for editing this post.
If you’d like to see the final implementation, check it out here! | https://towardsdatascience.com/how-to-debug-an-ml-model-a-step-by-step-case-study-in-nlp-d79d384f7427 | ['Yada Pruksachatkun'] | 2020-06-21 01:49:52.022000+00:00 | ['AI', 'NLP', 'Data Science', 'Technology', 'Machine Learning'] |
2,483 | Friday Five: NHS Covid app gets ‘fix’ for false alerts | Zone’s Ross Basham handpicks and shares the five best new stories on digital trends, experiences and technologies…
1. NHS Covid app updated to ‘fix’ false alert
The NHS Covid-19 app has been updated to ‘fix’ an issue with a confusing alert that pops up suggesting exposure to the virus, but then disappears when the user clicks on it. I put fix in inverted commas because the alert will still appear, but now users will get a follow-up message telling them to ignore it (so not really fixed at all).
The messages are a default privacy notification from Apple or Google, which provide the contact-tracing technology. The team behind the app, downloaded 16 million times, is working on another update that could stop the notifications altogether and improve the way the app uses Bluetooth to measure distance between phones.
2. Facebook bans anti-vaccination advertising
Facebook has announced it is banning adverts that explicitly discourage people from getting vaccinated. The platform, which has 2.7 billion monthly active users, has been under pressure to crack down on anti-vaccine content and misinformation, which has become even more prevalent during the coronavirus pandemic.
Facebook’s previous rule prohibited ads containing vaccine misinformation or hoaxes but did allow ads opposing vaccines if they did not contain false claims. This new rule will curtail those ads, but will still allow ones that advocate for or against legislation or government policies around vaccines, including a Covid-19 vaccine.
3. Universities using surveillance software
Students have had a pretty rough start to their university lives — first there was the A-level algorithm debacle, then they excitedly turned up for freshers’ week, only to be promptly locked down in their accommodation, confined to remote learning. Now it transpires that their universities are using surveillance software to spy on them.
Analysis of three popular learning analytics tools shows that at least 27 unis are using them to keep tabs on what lectures they attend, what reading materials they download and what books they take out of the library. While this may seem innocuous, this Wired article looks at the potential for the misuse of all this data.
4. Google’s robot buggy is literal bean counter
Google’s parent company, Alphabet, has unveiled prototype robots that can inspect individual plants in a field. The robot buggies roll through fields on upright pillars so they can travel over plants without disturbing them, while collecting huge amounts of data about them in an effort to help farmers improve crop yields.
Called Project Mineral, it is part of Alphabet X, which aims to create world-changing tech from radical ‘moonshot’ ideas. As well as being an actual bean counter, the buggy can record information such as plant height and fruit size. That data goes into a machine-learning system to try to spot patterns and insights useful to farmers.
5. Inflatable e-scooter pumps up your journey
Given that we’re trying to avoid using public transport, there’s been a focus this year on other ways of getting around town, with bikes, folding bikes and electronic scooters the obvious choices. But how about an inflatable e-scooter? It sounds ridiculous (because it is), but it has been created at Tokyo University.
The scooter has seven inflatable sections including the wheels. You simply blow it up, add the rigid components like the motor and off you go. Then you can just deflate the scooter and pack it away in your backpack. Vehicles can even be customised to fit the specifications of the rider. It’s the future, people… | https://medium.com/@thisiszone/friday-five-nhs-covid-app-gets-fix-for-false-alerts-7892747b14cb | [] | 2020-10-16 10:13:07.042000+00:00 | ['Covid', 'Farming', 'Nhs', 'Technology News', 'Technology'] |
2,484 | Virtual Validation: A Scalable Solution to Test & Navigate the Autonomous Road Ahead | By: Robert Morgan, Engineering Director and Mason Lee, Technical Program Manager
To build an autonomous vehicle (AV), we need to be able to safely and efficiently modify and test its software and hardware stacks. While on-road testing may seem like an effective way to do this, it’s simply unrealistic. It’s estimated that it would take more than 10 billion miles to collect enough data to fully validate a self-driving vehicle — that’s 400,000 trips around the Earth.
Let’s put this into perspective by considering the level of engineering required for a self-driving vehicle to complete a frequent and intrinsic action for human drivers: lane changing.
Human drivers instinctively consider the variables that impact their decision to change lanes: their own speed, the speed and distance between other vehicles on the road, congestion, whether a lane change even makes sense given an upcoming turn in the route, and others. For an autonomous vehicle to make the same decision, models for each of these variables must be developed and tested over and over (and over) again, resulting in an astronomical amount of time and resources required if solely tested on-road.
Whether making lane changes or executing more complex scenarios, it’s essential to have a scalable, virtual option to effectively validate AVs as we accelerate toward a self-driving future.
The digital co-pilot to AV road testing
At Lyft, we supplement on-road testing with virtual validation to continuously test our self-driving vehicles’ software and hardware systems. This method involves running a high volume of virtual missions in a representative simulation while measuring performance along the way. We then evaluate the results to make necessary modifications to our software and hardware stacks, and feed the learnings back into the virtual validation system to improve autonomy performance, simulation, evaluation, and test coverage over time.
In addition to eliminating the need to complete miles of on-road missions to gather data for endless scenarios, virtual validation offers a number of other benefits:
Safety. It allows us to test our system in a safe, virtual setting without taking on the risks that come with physical testing. We can also virtually test against rare edge cases — like a car slamming on its brakes in front of the AV — allowing us to proactively improve the performance and safety of our stack.
It allows us to test our system in a safe, virtual setting without taking on the risks that come with physical testing. We can also virtually test against rare edge cases — like a car slamming on its brakes in front of the AV — allowing us to proactively improve the performance and safety of our stack. Focus. In simulation, we are in control of the scenarios that the AV encounters. This means we can skip uneventful miles and focus on challenging scenarios that offer us more critical learning opportunities to improve our systems. We can also easily alter and test specific variables within a scene, like the types of traffic agents around us or the speed of oncoming vehicles. This allows us to extract more value from our testing, as opposed to spending time collecting uneventful data.
In simulation, we are in control of the scenarios that the AV encounters. This means we can skip uneventful miles and focus on challenging scenarios that offer us more critical learning opportunities to improve our systems. We can also easily alter and test specific variables within a scene, like the types of traffic agents around us or the speed of oncoming vehicles. This allows us to extract more value from our testing, as opposed to spending time collecting uneventful data. Reproducibility. We have the resources to safely reproduce scenarios with vehicles on-road; however, they’re often time-consuming and would require a lot of manual setup. With virtual validation, reproducibility is built-in and we can easily study scenario variations in parallel. Disengagements that occur in simulation can be easily played back, altered, replayed, and shared.
Level 5’s approach to virtual validation
Virtual validation is not just a “final exam” we use to evaluate the performance of our software and hardware stacks for real-world deployment. It’s an approach we leverage and fuse with on-road vehicle testing throughout the AV development process.
Virtual validation is a rigorous, continuous process at Level 5. Every week, we collect our individual teams’ latest releases into a unified software update that we refer to as the “candidate.” Before we deploy the candidate on the AV fleet, it must run through a suite of virtual tests:
Planning and Integration Regression Tests. This collection of tests includes feature-specific scenarios as well as replications of previous events and issues. This ensures that what has been working continues to work.
This collection of tests includes feature-specific scenarios as well as replications of previous events and issues. This ensures that what has been working continues to work. Virtual Intervention Tests. At Level 5, we measure the number of human interventions per 1,000 miles to evaluate the high-level performance of our autonomy stack both on-road and in simulation. The Virtual Intervention Test consolidates scenarios seen in previous releases and replays the logs using the updated autonomy stack. The key goal in this stage is to detect potential issues in simulation using a high volume of real-world data.
At Level 5, we measure the number of human interventions per 1,000 miles to evaluate the high-level performance of our autonomy stack both on-road and in simulation. The Virtual Intervention Test consolidates scenarios seen in previous releases and replays the logs using the updated autonomy stack. The key goal in this stage is to detect potential issues in simulation using a high volume of real-world data. Full System Tests. At this stage, we evaluate all the software and hardware components of an AV, without the physical car itself. In our hardware-in-the-loop (HIL) testbed environment, we’re able to run full system tests for systemic functions separate from the actual behavior of the AV, including execution performance and timing, memory usage, and latency.
Once the stages of virtual testing are complete, the surfaced issues are sent to our Triage team for evaluation. They review the virtual disengagements, failing tests and diagnostics, and monitor the various automated metrics (like road law violations) for each candidate. Once the issues from virtual testing are addressed, we deploy the candidate onto our AVs for physical world testing. This allows us to not only validate the results from virtual testing, but also to feed these new real-world scenarios back into the virtual testing pipeline for future candidates.
Leveraging real-world data to focus on the right scenarios
Virtual validation enables us to simulate specific scenarios at scale and chip away at the target of 10 billion miles driven. But, how do we identify and address the challenging scenarios that require additional testing and virtual validation? The answer to this question lies with real-world datasets.
At Lyft, we have the opportunity to tap into data from our nationwide rideshare network to study real-world driving scenarios that can inform the development of our own self-driving technology. This data can help us focus on what and how we test, like interesting event discoveries such as an object falling out of a truck bed; or naturalistic maneuvers like whether the AV changes lanes in a way that’s comfortable for passengers.
Virtual validation is essential to the development of self-driving technology. It’s safer, faster, and produces more complete answers about the performance and safety of AVs in various scenarios. By combining virtual validation with on-road testing and real-world data, Level 5 can accelerate autonomous driving technology for years to come.
Follow this blog for updates and follow @LyftLevel5 on Twitter to continue to learn about Lyft’s road to autonomy. Visit our careers page to join Lyft in the self-driving movement. | https://medium.com/lyftself-driving/virtual-validation-a-scalable-solution-to-test-navigate-the-autonomous-road-ahead-e1a7d1fe1538 | ['Lyft Self-Driving'] | 2020-10-27 17:01:29.676000+00:00 | ['Autonomous Cars', 'Autonomous Vehicles', 'Technology', 'Self Driving Cars', 'Testing'] |
2,485 | 5 Trends Revolutionizing Employer Branding In 2020 | Source : Manage HR Magazine
Employer branding is taking new shape and therefore the got to build up employer branding efforts has become a necessity to recruit and retain top talents in a corporation
Fremont, CA: the aim of employer branding is to make a perception of an authentic, distinct, credible, attractive, and consistent employer brand. it’s important to secure a brand within the actual identity, value, and culture of the organization, which can help achieve a better level of internal and external engagement and make sure the better success of the business at large.
Effective Leadership for Successful Business GrowthHere are five trends revolutionizing employer branding in 2020":
Co-operation among Internal Stakeholders
Co-operation may be a must among employer branding stakeholders like recruitment/HR, marketing, and CEOs and that they should take accountability for employer branding efforts. Co-operation is required among internal teams also to bring out the simplest employer branding efforts to draw in and retain the simplest talent.
Redesigning Employer Branding Strategy
Employer branding objectives for a corporation will start that specialize in long-term thinking and show a greater interest in building a talent brand at the worldwide level. it’ll include researchers to know what employees are sharing, thinking, and feeling about their work, employers, and their efforts.
By increasing their spending on these talent branding efforts, companies will target potential candidates across different social media platforms.
Activating EVP Internally within the Organization
For improving employer branding efforts, the organization must activate its EVP throughout the interior workforce, enhancing engagement among current employees. When employees are reminded of the advantages and satisfaction of working with the corporate , they’re going to likely refer other top talents to the organization.
Measuring KPIs
Three internal KPIs that are getting used today include average retention, employee engagement level, and new hire quality. External indicators like ranking and brand perceptions also help assess external employer branding efforts.
Implementing Social Media to maximise Results
Social media platforms like Facebook, Instagram, LinkedIn, et al. have seen significant growth in recent years and presently are the foremost important employer branding channels for organizations followed by career sites.
Social media will still influence a company’s reputation as an employer. The more a corporation advocates themselves; the higher are going to be the probabilities of attracting top talents. Social media as a channel for employer branding is predicted to grow by 70 percent within the next five years.
News Source : 5 Trends Revolutionizing Employer Branding In 2020
3 Useful Strategies for Hiring Managers to Prioritize Skills over Resumes when Recruting
Though resumes offers a perception of a candidate’s ability to find out , it’s going to not be so effective at remarking the potential candidate’s skills. during this article we are enlisting 3 strategies which will help hiring managers pick the proper candidate from the lot
FREMONT, CA: Though resumes offers a perception of a candidate’s ability to find out , it’s going to not be so effective at remarking the potential candidate’s skills. Some hiring managers may have encounter instances where a powerful resume wasn’t enough to work out an interviewee’s suitability for the work .
Academic achievements can help understand whether a replacement employee has potential or not. Yet, often businesses have immediate requirements for workers who are already skilled, not someone who may have training to try to to the work they need been hired for.
The following 3 strategies can help hiring managers pick the proper candidate from the lot:
Assess Candidate’s Character, Not Resume’s Highlights
Years of experience during a certain field might not be enough to suit into a corporation as every company has different work cultures and traditions. rather than that specialize in the potential hire’s experience list, hiring managers can attempt to grasp the essence of a candidate’s character-their eagerness to require initiative, support team members, and solve problems, and cultural fit. Employees, who can identify with their companies and their work, have the potential to drive companies towards progress. Through candidate assessments, hiring managers can identify a candidate’s work ethic, character, and behavioral competence. they will design the assessment process consistent with their business and department needs.
Format Interviews with Case-Based Approach
Interviews formatted with a case-based approach can help hiring managers get justification to a candidate’s thought process. Open-book or open-internet assessment can bring out a candidate’s ability to seek out the required information. Questions associated with any previous larger project of a candidate can show how organized their thoughts are, their presentation skills, accomplishments, and also how open they’re to share their shortcomings.
Prefer Assignments or Tasks to Resumes
Another effective strategy to rent the proper candidate is to assign them a task or project at the start of the appliance process. this will help attract interested and knowledgeable candidates for the work . Strategy intrinsically offers a much better evaluation of a candidate’s potential than what hiring managers can gather from reading uncountable resumes.
News Source : 3 Useful Strategies for Hiring Managers to Prioritize Skills over Resumes when Recruting
Check Out : Twitter | LinkedIn | https://medium.com/@emmabrown8019/5-trends-revolutionizing-employer-branding-in-2020-11b62e176d1 | [] | 2020-03-03 10:43:22.978000+00:00 | ['Employees', 'Magazine', 'Hrtech', 'Technology News', 'HR'] |
2,486 | Google Stadia and Nvidia GeForce Now to Come to iOS Through Safari | On Thursday November 19th Google and Nvidia both revealed that their own cloud game streaming services will begin supporting iOS devices but not in the way you may suspect.
Previously Apple has made it very clear that they don't want any game streaming services available through the App Store. But in a recent revision Apple made to their App Store Guidelines, they opened up a window for cloud based streaming app to list games on the App Store as individual apps. Even with those revisions developers still aren’t happy. To bypass these rules however these companies have decided to take a new route. Safari Web Apps, Both Nvidia and Google have confirmed plans to release a web app to allow iOS users to be able to play their favorite games on their iOS devices from the cloud.
Google Stadia
Google hasn’t really said too much about how or when they plan to bring Stadia to iOS but they did tweet mid-day Thursday confirming their plans to do so.
What they did note in the article from Polygon, is that the roll out will begin “several weeks from now”
Nvidia GeForce Now
In a press release from Nvidia early Thursday, Nvidia announced that starting today a beta for streaming with GeForce Now on iOS devices will begin. Nvidia says “[The GeForce Now iOS Beta will allow] more than 5 million GeForce NOW members can now access the latest experience by launching Safari from iPhone or iPad”. According to the press release members can test out the iOS beta by simply going to play.geforcenow.com, but Nvidia also says that anyone can sign up and people can take advantage of the free version of the service “to test the waters”
With Nvidia GeForce Now being tested on iOS devices, Nvidia also announced that Fortnite will soon be available on the service which in turn will also bring the return of Fortnite to iOS devices after Apple removed Fortnite from the App Store after Epic Games wouldn’t comply with Apples In-app purchase policy. | https://medium.com/@maustandrew/google-stadia-and-nvidia-geforce-now-to-come-to-ios-through-safari-da81d1942d48 | ['Andrew Maust'] | 2020-11-19 18:04:06.132000+00:00 | ['Gaming', 'Technology', 'Technology News', 'Game Development', 'iOS'] |
2,487 | Will you have heart disease? A heart disease predictor using Machine Learning | Project Design
The problem that I am trying to solve is to predict/classify whether a person has heart disease or not, based on some of the personal info and medical test results.
That is my primary goal of my project, and the second goal of my project is to build a heart disease predictor — a demo app using Flask.
For model evaluation, I will use recall score as my primary metric. Because I want to take into account of those cases where I predict someone is healthy but they actually have heart disease. I also pay attention to precision score when I am evaluating a model, because I don’t want to tell someone that he/she has heart disease but they actually don’t. This will get people upset and we don’t want that.
Tools
I will use numpy, pandas, matplotlib, seaborn, sklearn and Flask for this project.
Pipeline
Data
The data that I worked with is from UCI repo, The dataset contains 303 entries and 14 features. (The dataset is a little bit old, it is from 1988.)
The dataset includes features such as age, gender, resting blood pressure, chest pain type and etc. Some of these test can only be performed in a clinic, and patient won’t be able to perform those tests at home by themselves. So my target audience will be doctors or nurses.
Cleaning and EDA
The target variable is the last column from the original dataset which is named as num. It is a categorical feature labeled as 0, 1, 2, 3, 4. While 0 means the patient has no heart disease, 1, 2, 3, 4 mean that the patient has some kind of heart disease. Since my problem is to predict whether the person has heart disease or not, I grouped 1, 2, 3, 4 all together as one group to label the person has heart disease.
After grouping, I identified there are 6 NaNs values in my dataset which is just less than 2% of my dataset, so I decided to drop those NaNs. Finally, my class 0, which is the normal patient is about 54% of my whole dataset, and class 1 is about 46% of my dataset. I think my dataset is a balance dataset.
After some EDAs ,data cleanings and using StandScaler to scale my data (because my target variable is between 0&1, so I need to scale my non-0&1 columns to between 0&1), I built a base model that includes all 14 variables, using KNN, Logistic regression, Decision tree, Random forest, SVM, Naive Bayes, and XGboost. The best classifier for my base model is Logistic regression with a cross-validation recall score of 0.81. Then I used grid search to try to find the best hyper-parameter of each models, and still Logistic regression has the best CV recall score.
Modeling and Feature Engineering
Next, I decided to do some feature engineering to try to find out which variables contribute significantly to my models. I used chi-square test from sklearn to find out significance level of each variable to my model, and I selected all the variables that has a score is positive and high.
Model Performance in Training Set
I used those selected variables to build more models using the algorithms that I used for the base model. Logistic regression still has the best CV recall score. After perform grid search again with the new selected variables, my Logistic regression CV recall score got improved to 0.8571 and it is the best among all the models.
ROC curve
Then I used logistic regression with tuned hyper-parameters to make prediction using my test set. The recall score of my test set is 0.87, which is even better than my training CV score, so I think my model doesn’t have overfitting problem.
Precision vs Recall Curve
Then I looked at my confusion matrix, precision vs. recall curve graph and ROC graph of my model. The best threshold that could maximize both precision and recall is about 0.5, which is the threshold that I am using for my model. Also I have a AUC score of 0.9087 which is pretty good as well. So I decided this will be my final model.
Web App
After having my final model ready, I need to prepare a few things before my web app prototype could actually function. I need to create a pipeline and use pickle to export my final model and data for later use in Flask. After defining a function to make prediction, I used HTML, CSS and Flask to create a simple demo web app — Heart Disease Predictor. Users can input their info and the predictor will tell you whether you have a heart disease or not with a confidence score. | https://medium.com/@kahousio/will-you-have-heart-disease-a-heart-disease-predictor-using-machine-learning-38df796f562d | ['Ka Hou Sio'] | 2019-02-19 18:53:29.776000+00:00 | ['Healthcare Technology', 'Metis', 'Data Science', 'Supervised Learning', 'Machine Learning'] |
2,488 | 8 Tricks for Keeping a Neat Desk | Is your desk a mess? You might be surprised at how much more productive you are if you follow our simple tips for cleaning up the clutter.
By Jill Duffy
Ask any highly organized person what they need to get work done, and they’ll invariably say, “I need a tidy space.” Keeping your physical work space neat is important to helping us focus and work productively. Clutter can be very distracting. Our environment greatly influences our behavior, mood, and state of mind, so if you want to feel ready to tackle the day, you need to keep your desk area organized.
Now that so many more of us are working from home than ever, there’s even more need to keep your home work space functional. Of course, funds and resources may be tight in these days of salary reductions and furloughs. We’ve limited our suggestions accordingly. All of them are simple to implement, and most of them cost very little or nothing at all. You’re likely to have at least some of the supplies you need on hand, and the rest can be found easily online or at most office supply or hardware stores.
If the tips in this article seem up your alley, you might also want to read up on 10 ways to improve your home office, as well as everything you need to set up an ergonomic home office.
1. Digitize Ruthlessly
The number one way to keep a desk clean and clear is to prevent clutter from piling up in the first place. So what piles up on your desk or table? For many people, it’s paper. When you get a piece of paper, digitize it immediately, or at least within a week of receiving it, and then file, shred, or recycle it. Many very good mobile scanning apps are free, so you can turn paper into digital documents with your phone.
2. Use Velcro to Mount Items Not in Use
We picked up this tip from prototype designer Zack Freedman: He put one piece of a strip of Velcro on the back of his Bluetooth keyboard and the other strip on the side of his desk. That way, when he needs to use his soldering iron, the electronics are neatly out of the way. Even if you aren’t melting metal, you can still use this trick with your keyboard, a trackpad, or other small devices and items. Be sure to put the soft side of the Velcro on the device or item because it won’t scratch your desk. Don’t use this trick if you have a varnished desk, as it could ruin the finish.
3. Keep a Microfiber Cloth on Hand (Preferably XL)
A microfiber cloth is a must-have item on any computer work desk. Use it to remove smudges from your monitor, webcam, phone, and glasses. At the end of the day, drape it over your keyboard to protect it from dust.
4. Bundle Wires and Cables With Velcro One-Wraps
Musicians, who know all too well the pain of keeping wires and cables organized, swear by Velcro One-Wraps. I do, too. These little multipurpose wonders cost a few dollars for a five-pack in various colors, and they make your workspace tidy, fast. Use them to keep charging cables bundled neatly, or to reign in excess cord length dangling behind your computer. If you buy extra long cord wraps, you can secure your wires around the leg of a desk to keep them firmly in place. PCMag has more tips on organizing cords and cables, not only in your workspace, but also around your home and office.
5. Upcycle Containers to Store Odds and Ends
You know those tins and decorative boxes you get with random gifts, or packaging that you keep because you think it’ll be useful someday? Today’s the day. Use boxes and containers to store odds and ends, like paper clips, charging cables, or whatever accumulates on your desk. Stackable boxes or containers make it even tidier. If you have larger items, try a shoebox instead. If you want to add a pop of color and design to your space, cover the shoebox in hefty gift wrap or self-stick wallpaper.
6. Label Folders, Chargers, and Other Items
Organized people love label makers. When you label things around your desk, office, and home, it’s easier to find what you need quickly, as well as put it in its place. Label folders or sections of an accordion binder to keep all the papers you can’t recycle or shred, such as birth and death certificates and wills. Label chargers and never argue again about whose phone charger is the one with the frayed end. To buy a good label maker, you can spend as little or as much as you want, but we like the inexpensive Brother P Touch Cube (about $50) for modest labeling needs. It’s also tiny enough to tuck into a drawer when not in use.
7. Hang or Guide Wires With Command Hooks
Command hooks help you keep your cords and cables organized by not letting them hang all around your workspace. They work wonders around home entertainment systems, too. You can stick these little hooks around the back of your desk or against the wall to guide the cords out of the way. Command hooks have a sticky backing that peels off easily when you pull down on the tab to remove them. They’re better than nail-in coaxial cable clips if you don’t want to hammer holes into your walls or furniture.
8. Use Drawers, Cabinets, and Bookshelves
People leave stuff on their desks when they can’t or don’t want to throw it away. If you want to have a neat desk space, you must find a permanent place for these items. All too often, the space we need is right under our noses; we’re simply not using it. If your desk has cabinets or drawers, are they full? If you organized all the stuff that’s cluttering your workspace neatly or put it into stackable boxes, would you be able to fit it all into your desk drawers, cabinets, or even neatly on a bookshelf nearby?
Again, shoeboxes are a great help for storing and organizing odd items, and they’re prettier looking if you encase them in gift wrap, self-stick wallpaper, a coat of paint, or whatever you have on hand. Of course, you might need to take a little time first to clear out the unneeded, unwanted, or obsolete stuff that tends to fill up unexamined drawers, cabinets, and shelves! | https://medium.com/pcmag-access/8-tricks-for-keeping-a-neat-desk-fd3888b76610 | [] | 2020-11-10 14:42:16.155000+00:00 | ['Productivity', 'Remote Working', 'Organization', 'Tips', 'Technology'] |
2,489 | Hard conversations, hard interviews… | When I first started as a recruiter, one of my worst fears was doing interviews. At that time, I had a “mental contradictory”: Why am I not enjoying interviews if they are one of the best parts of my job as a recruiter? Meeting new people, explaining our projects to them… what’s wrong with me!
Finally, I realized what was happening to me. I didn’t know how to deal with interviews when a candidate was shy, not too talkative, rude or if I could see that he or she wasn’t honest.
After some years of experience and, what’s more important, after learning from different colleagues, I’ve acquired some techniques that help to make these conversations more fluent and in case it’s not possible (it can happen), feel confident to finish the interview with some dignity hahaha.
Image by Gerd Altmann from Pixabay
Let me share these tips with you!
When we do Competency-based interviews, it usually means that we have already met the candidate in a shorter first interview, those we use to screen candidates and check there aren’t red flags (required language level, salary according to the range we can offer, career expectations…). After this short interview, we have an idea of how this person could be so we can prepare the interview specifically for each candidate. However, we have to be careful to avoid prejudices.
Shy Person
It is common that these candidates normally answer with yes/no answers or with short sentences. First of all, we should try to find out, why this person is giving us such short answers.
There can be several possibilities: shyness, nervousness, she or he is not a talkative person… and this would be okay. Try to break the ice with this person, make them feel comfortable by dialoguing, not only asking structured questions! … If those are the reasons for their lack of speech, they will talk a little bit more. It usually happens that when these professionals are in a good environment and can be themselves, they become more confident.
Chatterbox
On the contrary, we can meet the opposite personality. Someone that never stops talking! Bla bla bla bla bla bla… and you don’t know how to interrupt this person. Has it ever happened to you? “I just wanted to know when the developer started to code in Kotlin and now I know how many brothers the professional has!” Hahaha!
When you want to clarify something or ask other questions because you already know what you need about a specific topic you wanted to check, don’t be afraid and interrupt them. Always with absolute respect. You can use sentences like… “Sorry for interrupting you, but I wanted to clarify what you said about…” or “Excuse me, before moving to whatever, I wanted to know about whatever…” The interview should be a dialogue, a conversation… the communication is reciprocal. I like these profiles as they are usually very enthusiastic about what they do. We only need to know how to get all the information we need, in time.
Critic/sceptic person
These professionals have the feeling they are being questioned all the time. They will probably adopt a defensive attitude and posture, and they will counter-question you.
In my opinion, these are one of the most difficult interviews as I don’t feel comfortable when I see they feel attacked. What can work is being honest and try to calm them down explaining them that it’s our way to know them better and check if both parties (he/she and the company) can work comfortably together. We want them to feel good if they join our teams. Turn the interview into a conversation instead of something that looks very structured so they can feel it is a dialogue. Make some jokes to distend the moment.
Show-off
When we think of a Show-off person, maybe we think of someone very senior but… don’t take things for granted. I’ve seen very junior developers so, so presumptuous… This kind of people think they are THE experts in their subject and that others are less experienced. In those cases, be brave and let them know that our teams are experts too and the decisions are made taking into account different opinions (also from junior developers…).
Observe their reaction, if they disagree or they explain they enjoy working on their own because it is easier and they don’t feel comfortable discussing with other experts, it’s possible that these professionals won’t fit in our culture (at least in my company). If you need someone to work on his/her own, this professional would be perfect for your position!
***
For all the interviews, it doesn’t matter what kind of professional we have opposite us (or on the other side of our screen), a very useful technique is the STAR method (Situation, Task, Action, Result). It helps the candidate to describe a concrete situation that allows us to find out about specific competences you want to check (teamwork, feedback acceptance, etc.). It is very important to ask about concrete and past examples so you can get an idea about how they perform in different moments. Take a look at the different steps , from general to more concrete:
SITUATION: start asking them to describe a specific situation. Example: Can you give an example of a situation when you made a big error at work?
TASK: Once you know the situation, ask them what their responsibility was at that moment or the tasks they were doing. Sometimes when they explain the situation, they also explain what their mistake was. If that’s the case, there’s no need to ask it again (use your Active listening skills). Example: What did you have to do? What‘s your responsibility? What was your goal?
ACTION: Ask them to specify exactly what they did when they faced that situation. It can happen that they describe what the team did. In that moment, ask them what they did as an individual. We want to “imagine” them in action! What their role was in that situation. Example: When you saw that the webpage broke, what did you do?
RESULT: We want to know how everything turned out. If the decisions of the professional were useful, if not… In any case, a bad result means that the professional did it wrongly. Sometimes, a situation couldn’t be solved due to different external factors, although our candidate made all the effort she/he could. Example: Was the webpage fixed? How did everything end?
With this technique, we can go further in depth about how they acted in different moments, which can be similar to some situations that person could experience in our company in case they join any of our teams and therefore see if they are a cultural fit.
In conclusion, it’s not always easy to do a good interview. But, if we have some tools and techniques, we can improve the quality of our interviews and in consequence, the quality of the information we get from them and advise the hiring manager to make a proper decision. | https://medium.com/@newwork-es/hard-conversations-hard-interviews-89ef8237d34 | ['New Work Spain'] | 2020-11-25 09:39:54.463000+00:00 | ['Recruitment', 'Tech', 'Software Development', 'Technology', 'Engineering'] |
2,490 | New foldable phone to compete Samsung Galaxy Z Fold 2, 40% cheaper | New foldable phone to compete Samsung Galaxy Z Fold 2, 40% cheaper
The quad camera setup has been used in the phone, which has a 64MP primary camera. Apart from this, 3 more sensors of 16MP + 8MP + 32MP have been provided in the phone.
Royole, the world's first foldable smartphone maker, has launched its new foldable phone Royole FlexPai 2. This phone is a successor of FlexPai launched in the year 2018. This company was established in 2012. This is the second foldable phone of the company, which the company launched at an event today.
Price and availability
This phone has been launched for 9988 yuan i.e. around 1,08,305 rupees. This phone is about 40 percent cheaper than the Samsung Galaxy Z Fold 2. Talk about the availability of this phone, then this phone is now available for pre-booking in China.
Specifications of Royole FlexPai 2
Many improvements have been made to this phone compared to its predecessor. This phone uses the 3rd generation Cicada Wing Flexible OLED display which comes with stepless 3S hinge. This phone can be folded with zero gap. After folding the phone completely, the company has reduced the thickness of the phone by 40%.
This phone comes with a 7.8-inch unfolded display, which is the largest foldable smartphone display yet. Whose resolution is 1920 x 1,440 pixels. After folding, the primary display of the phone is 5.5 inches and the secondary display is 5.4 inches. This phone comes with Snapdragon 865 chipset which comes with 8GB / 12GB RAM and 256GB / 512GB internal storage. The phone comes with dual SIM card slot and dual 5G connectivity mode. The phone runs on Android 10 operating system.
The camera
The quad camera setup has been used in the phone, which has a 64MP primary camera. Apart from this, 3 more sensors of 16MP + 8MP + 32MP have been provided in the phone. This phone can be purchased in Sunrise Gold, Midnight Black and Cosmic Gray color options. This phone has a battery of 4,450mAh.
For more tech news | https://medium.com/@rowhit66/new-foldable-phone-to-compete-samsung-galaxy-z-fold-2-40-cheaper-eff865f04750 | ['Rowhit Sharma'] | 2020-09-23 03:03:08.842000+00:00 | ['Technews', 'Technology', 'Technology News', 'Samsung Galaxy Fold 2', 'Tech'] |
2,491 | TT Technology is offering a NordVPN discount — protect yourself online now | For people that invest a significant amount of time in tinkering with technology, having a set of reliable sources that review hardware and software is pivotal for being in the know of the hottest upcoming consumer tech out there. TT Technology YouTube channel is a great source of information when it comes to Product reviews. Now they offer a NordVPN discount — stay safe online for a good price.
How to get a NordVPN discount from TT Technology?
Click on the link below:
TT Technology 70% discount on NordVPN 3-year subscription
After clicking on the link, you will need to choose a 3-year subscription option and enter your payment details.
A discount code from TT Technology will be applied automatically like in the screenshot below:
What does NordVPN have to offer?
With a fat discount from TT Technology you will be able to look at the Internet from a new and unique point of view: After connecting to one of the 5000 servers located in 60+ countries you will find it easy to bypass almost any geographical restriction out there with just a few mouse clicks. AES military grade encryption standard will ensure that your traffic won’t be decrypted by anyone — it’s simply impossible, thus you can browse on any public-wifi network without a single worry in the world, your data will be safe with you.
Never heard of TT Technology?
According to their YouTube description, TT Technology YouTube channel provides technology reviews and tutorials.
Alongside their product reviews there’s also tons of tutorials that help you improve your “tech-tinkering” skills with Android & iOS devices.
TT Technology YouTube channel was created back in 2015 and in 4 years gained more than 250 000 subscribers.
TT Technology isn’t the only YouTuber who is using NordVPN — ADV China and NumberPhile use NordVPN too.
Why NordVPN?
Here are a few more reasons why TT Technology chose NordVPN:
NordVPN does not log your browsing data: This VPN provider is located in Panama, where no mandatory data retention laws exist — whenever you connect to a VPN server, you can be sure that nobody’s collecting your personal information.
Fast servers ensure that you can connect to a VPN server located thousands of miles away from you and still continue gaming/streaming or torrenting without encountering slow internet connectivity issues.
There are 5000 servers located in 60+ countries — this is a guarantee that you will never ever feel that there’s a shortage of servers and you can bypass almost any geo-restriction out there.
Want to know what the hell is UDP and TCP? Simply contact NordVPN Customer support and they will explain you every single tech related term quickly and in a professional manner. | https://medium.com/@gentelmang2/tt-technology-nordvpn-discount-offer-3f44798ee277 | ['George Space'] | 2019-09-30 09:55:35.014000+00:00 | ['Cybersecurity', 'YouTube', 'Privacy', 'Technology', 'VPN'] |
2,492 | The Cheapest EV Is Here. But, Who Will Buy It? | Electric vehicles (EVs) have gone beyond the exception and are fast becoming the norm. First, it was Tesla, but now every major car manufacturer is jumping the bandwagon. The long-term cost may be lower but upfront gas cars tend to be cheaper than electric. Now one company is going to change that. The Chinese firm Kandi is debuting its sensational low-cost electric car in the US. The cost? As low as $7500 after federal and state subsidies (price in California). Kandi created a lot of buzz in China when it launched its low-cost cars there this summer. Their entire inventory is pre-booked even before the launch. Will it create a similar buzz in the US? Or will it pale in comparison to Tesla and the upcoming avalanche of European EVs?
Who is Kandi?
Kandi Technologies is a Chinese battery and electric car manufacturer, headquartered in Jinhua, China. The company is launching its cars in the US in 2020 with its K23 and K27 models. Both of the models look like a tiny hatchback. They come fitted with all the modern gadgetry — a touchscreen, backup camera, and Bluetooth. It is not clear if advanced features like smart cruise control, rear traffic alert, and lane change monitor are available. The K27 model retails for $17,499 while the larger K23 would retail at $24,499. But, the prices come down heavily when federal ($7,500) and state subsidies ($2,500 in CA and TX) are included. The K27 comes down to $7,499 while the K23 to 14,499. But, even with all the price cuts, will the Kandi sell big in the US?
A comparison of various EVs. Sources — Kandi K23 and K27, Tesla, Nissan
Why Kandi may fail?
Several things run against the Kandi cars. They are tiny, to begin with. Compared to a Tesla Model S (lowest-priced tesla), the larger Kandi, the K23 is shorter and narrower. Then comes the power under the hood. With only a 17.6 kWh battery, the Kandi K27 can barely reach 63 mph top speed, making it a less desirable vehicle for highway driving. The K23 reaches 72 mph with a 41.4 kWh battery. The range is no better. Neither car can go a 150 mile in one charge; the K27’s range is 59 miles while K23’s is 111 miles. And that charge would take 6–7 hours on 240 V supply (level 2 charging). Fast charging is not available on either. So pretty much no highway driving at least for the K27.
Kandi K27. Image from Kandi America.
Forget comparison to Tesla, the Kandi products don’t even compare to the current cheapest EV in the US — Nissan Leaf (see table). Add to this the American love for bigger and powerful automobiles, the Kandi may seem dead on arrival. The smaller K27 resembles a Fiat 500 while the larger K23 looks like Honda Fit. Yet, if you see the company’s stock it's booming! Right after it received approval for state subsidies from Texas and California. Part of it is just the stock sentiment, but others could be the utility of Kandi as an urban car.
The utility of the car
Kandi is tiny, doesn’t drive very fast, and can only last for a 100 mile on a charge! Perfect for office commute and city driving. You can’t go above 40 in most cities. Even with traffic, the mile range should last a few days and the car can happily charge in the garage overnight. Gas cars tend to give poor mileage in city driving. Kandi can come in handy, especially for the nifty price. This can work as a second, city car.
The car can also work well as a cab. For those that do drive sharing, the car can be perfect. It can offset their gas bills which in states like California can still be a lot. Kandi may tie-up with Lyft or Uber for a car purchase program that allows it to sell its car more. It can also tie-up with car-share services like Zipcar to rent their car to urban day renters. The company ran a similar program in China called EV carshare.
Given the low initial price, Kandi should be able to sell their car to the above consumers as long as the subsidies last (only the first 2000 cars in TX). Perhaps that’s what’s guiding the investor sentiment. Even though the subsidies would phase out with time, the company would have sold quite a few of its inventory. If their product is good, perhaps they can expand to more competitive and traditional offerings. If Kandi creates a ripple in the market, more manufacturers would try to enter the low-cost EV segment.
While Kandi’s products may look unappealing at a first glance, it would be worthwhile to recall the humble Japanese brands — Honda and Toyota. Each of them came as a cheap alternative looked rather utilitarian but are consistently rank at the top of their segments, eventually taking over the auto market. The reason reliability, longevity, and performance. Honda Civic is one of the bestselling cars ever.
Maybe Kandi K23 or K27 would become the bestselling tiny EV! For now, Kandi is pre-booking their cars with a refundable $100 deposit. Interested?
Update: Recently Hindenberg Research, a short-seller, accused Kandi of falsifying its sales. After that, both the stock and the reputation of the company has taken a bit of a hit. | https://medium.com/@salhasan/the-cheapest-ev-is-here-but-who-will-buy-it-be3026dcac4c | ['Salman Hasan'] | 2020-12-10 21:57:35.183000+00:00 | ['Innovation', 'Electric Vehicles', 'Transportation', 'Technology', 'Cars'] |
2,493 | 100 Words On….. Education | Photo by Susan Yin on Unsplash
Cyber Security is a constant learning curve that changes daily. New threats are emerging while old threats continue to plague us. Entering the workforce after years of education, we are bombarded with reminders to be vigilant. I often think many have begun their cyber security awareness training too late. With nearly every child today not knowing a world without the Internet, smart phones, and millions of apps, are we finding more wilful ignorance or simply desensitisation? Experience and cyber smarts are best started from an early age to gain a crucial employment advantage, protect our valuable data, and remain secure. | https://medium.com/the-100-words-project/100-words-on-education-f3150e9ae478 | ['Digitally Vicarious'] | 2020-12-16 23:28:44.442000+00:00 | ['Information Technology', '100 Words Project', 'Cybersecurity', 'Information Security', 'Education'] |
2,494 | Quirky for Mobile Products? | I read about Quirky.com in the paper the other day. From my perusal it’s a site where people (you and me) can submit ideas for manufactured products.
For example, let’s say I have an idea for a dog leash with a built in flashlight. Clever eh? Well, you can describe it to the best of your ability and submit it to the Quirky.com site.
Ideas are socially curated (translation: visitors to the site can vote on how much they like the product.) Once they reach a certain threshold then the Quirky staff analyzes the idea from a design, marketing, manufacturing perspective, and decides whether it’s worth pursuing. If it is, they actually design or mock it up and put it up for “pre-sale” in their catalog. If enough people buy it then the product is actually created and you (the inventor) gets a small cut of the profits. Here’s their diagram of the process.
What if we created a similar service, but focused exclusively on products for app phones (e.g. iPhone and Android.) What do you think? | https://medium.com/pito-s-blog/quirky-for-mobile-products-df801c00d81c | ['Pito Salas'] | 2017-06-08 19:19:22.944000+00:00 | ['Technology', 'Programming', 'Mobile', 'Quirky'] |
2,495 | Watch - Family Guy 'Season 19' Episode 9 (s19e9) On FOX's | ➕Official Partners “TVs” TV Shows & Movies
● Watch Family Guy Season 19 Episode 9 Eng Sub ●
Family Guy Season 19 Episode 9 : Full Series
ஜ ۩۞۩ ஜ▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭ஜ ۩۞۩ ஜ
ஜ ۩۞۩ ஜ▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭ஜ ۩۞۩ ஜ
Family Guy — Season 19, Episode 9 || FULL EPISODES : When the family fails to help Lois with the Christmas shopping, she walks out on the family and the Griffins must try to save Christmas on their own. .
Family Guy 19x9 > Family Guy S19xE9 > Family Guy S19E9 > Family Guy TVs > Family Guy Cast > Family Guy Online > Family Guy Eps.19 > Family Guy Season 19 > Family Guy Episode 9 > Family Guy Premiere > Family Guy New Season > Family Guy Full Episodes > Family Guy Season 19 Episode 9 > Watch Family Guy Season 19 Episode 9 Online
Streaming Family Guy Season 19 :: Episode 9 S19E9 ► ((Episode 9 : Full Series)) Full Episodes ●Exclusively● On TVs, Online Free TV Shows & TV Family Guy ➤ Let’s go to watch the latest episodes of your favourite Family Guy.
Family Guy 19x9
Family Guy S19E9
Family Guy TVs
Family Guy Cast
Family Guy Online
Family Guy Eps.19
Family Guy Season 19
Family Guy Episode 9
Family Guy Premiere
Family Guy New Season
Family Guy Full Episodes
Family Guy Watch Online
Family Guy Season 19 Episode 9
Watch Family Guy Season 19 Episode 9 Online
⭐A Target Package is short for Target Package of Information. It is a more specialized case of Intel Package of Information or Intel Package.
✌ THE STORY ✌
Its and Jeremy Camp (K.J. Apa) is a and aspiring musician who like only to honor his God through the energy of music. Leaving his Indiana home for the warmer climate of California and a college or university education, Jeremy soon comes Bookmark this site across one Melissa Heing
(Britt Robertson), a fellow university student that he takes notices in the audience at an area concert. Bookmark this site Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship as she fears it`ll create an awkward situation between Jeremy and their mutual friend, Jean-Luc (Nathan Parson), a fellow musician and who also has feeling for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving dating relationship. However, their youthful courtship Bookmark this sitewith the other person comes to a halt when life-threating news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremey’s love on her behalf and the couple eventually marries shortly thereafter. Howsoever, they soon find themselves walking an excellent line between a life together and suffering by her Bookmark this siteillness; with Jeremy questioning his faith in music, himself, and with God himself.
✌ STREAMING MEDIA ✌
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the procedure of delivering or obtaining media this way.[clarification needed] Streaming identifies the delivery approach to the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies especially to telecommunications networks, as almost all of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio tracks CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of this content. And users lacking compatible hardware or software systems may be unable to stream certain content.
Streaming is an alternative to file downloading, an activity in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user may use their media player to get started on playing digital video or digital sound content before the complete file has been transmitted. The term “streaming media” can connect with media other than video and audio, such as for example live closed captioning, ticker tape, and real-time text, which are considered “streaming text”.
This brings me around to discussing us, a film release of the Christian religio us faith-based . As almost customary, Hollywood usually generates two (maybe three) films of this variety movies within their yearly theatrical release lineup, with the releases usually being around spring us and / or fall respectfully. I didn’t hear much when this movie was initially aounced (probably got buried underneath all of the popular movies news on the newsfeed). My first actual glimpse of the movie was when the film’s movie trailer premiered, which looked somewhat interesting if you ask me. Yes, it looked the movie was goa be the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I COULD Only Imagine (a film that I did so like). Plus, the trailer for I Still Believe premiered for quite some us, so I continued seeing it most of us when I visited my local cinema. You can sort of say that it was a bit “engrained in my brain”. Thus, I was a lttle bit keen on seeing it. Fortunately, I was able to see it before the COVID-9 outbreak closed the movie theaters down (saw it during its opening night), but, because of work scheduling, I haven’t had the us to do my review for it…. as yet. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, us is a little too preachy and unbalanced within its narrative execution and character developments. The religious message is plainly there, but takes way too many detours and not focusing on certain aspects that weigh the feature’s presentation.
✌ TELEVISION SHOW AND HISTORY ✌
A tv set show (often simply Television show) is any content prBookmark this siteoduced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set set, excluding breaking news, advertisements, or trailers that are usually placed between shows. Tv shows are most often scheduled well ahead of The War with Grandpa and appearance on electronic guides or other TV listings.
A television show may also be called a tv set program (British EnBookmark this siteglish: programme), especially if it lacks a narrative structure. A tv set Movies is The War with Grandpaually released in episodes that follow a narrative, and so are The War with Grandpaually split into seasons (The War with Grandpa and Canada) or Movies (UK) — yearly or semiaual sets of new episodes. A show with a restricted number of episodes could be called a miniMBookmark this siteovies, serial, or limited Movies. A one-The War with Grandpa show may be called a “special”. A television film (“made-for-TV movie” or “televisioBookmark this siten movie”) is a film that is initially broadcast on television set rather than released in theaters or direct-to-video.
Television shows may very well be Bookmark this sitehey are broadcast in real The War with Grandpa (live), be recorded on home video or an electronic video recorder for later viewing, or be looked at on demand via a set-top box or streameBookmark this sited on the internet.
The first television set shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower starting in the. Televised events such as the 2020 Summer OlyBookmark this sitempics in Germany, the 2020 coronation of King George VI in the UK, and David Sarnoff’s famoThe War with Grandpa introduction at the 9 New York World’s Fair in the The War with Grandpa spurreBookmark this sited a rise in the medium, but World War II put a halt to development until after the war. The 2020 World Movies inspired many Americans to buy their first tv set and in 2020, the favorite radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr Television” and demonstrating that the medium was a well balanced, modern form of entertainment which could attract advertisers. The firsBookmBookmark this siteark this sitet national live tv broadcast in the The War with Grandpa took place on September 19, 2020 when President Harry Truman’s speech at the Japanese Peace Treaty Conference in SAN FRAFamily Guy CO BAY AREA was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets.
✌ FINAL THOUGHTS ✌
The power of faith, love, and affinity for take center stage in Jeremy Camp’s life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life span and The War with Grandpas of Jeremy Camp’s life story; pin-pointing his early life along with his relationship Melissa Heing because they battle hardships and their enduring love for one another through difficult. While the movie’s intent and thematic message of a person’s faith through troublen is indeed palpable plus the likeable mThe War with Grandpaical performances, the film certainly strules to look for a cinematic footing in its execution, including a sluish pace, fragmented pieces, predicable plot beats, too preachy / cheesy dialogue moments, over utilized religion overtones, and mismanagement of many of its secondary /supporting characters. If you ask me, this movie was somewhere between okay and “meh”. It had been definitely a Christian faith-based movie endeavor Bookmark this web site (from begin to finish) and definitely had its moments, nonetheless it failed to resonate with me; struling to locate a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best as some should (nothing wrong with that), while others will not and dismiss it altogether. Whatever your stance on religion faith-based flicks, stands as more of a cautionary tale of sorts; demonstrating how a poignant and heartfelt story of real-life drama could be problematic when translating it to a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature.
FIND US:
✔️ https://www.ontvsflix.com/tv/1434-19-9/family-guy.html
✔️ Instagram: https://instagram.com
✔️ Twitter: https://twitter.com
✔️ Facebook: https://www.facebook.com | https://medium.com/family-guy-s19-episode-9/watch-family-guy-season19-episode-9-s19e9-on-fox-s-842b88c3790f | ["Rock D'Johnson"] | 2020-12-13 10:45:23.145000+00:00 | ['Animation', 'Covid 19', 'Technology', 'Cartoon', 'Anime'] |
2,496 | Programmable Logic Controller (PLC) Market Size, Share, Price, Trend, and Forecast by 2024 | Global Programmable Logic Controller (PLC) Market Report 2019 — Market Size, Share, Price, Trend, and Forecast is a professional and in-depth study on the current state of the global Programmable Logic Controller (PLC) industry.
The global market size of Programmable Logic Controller (PLC) is $$ million in 2018 with — % CAGR from 2014 to 2018, and it is expected to reach $$ million by the end of 2024 with a CAGR of — % from 2019 to 2024.
Key Points Covered in this Report:
The report provides key statistics on the market status of the Programmable Logic Controller (PLC) manufacturers and is a valuable source of guidance and direction for companies and individuals interested in the industry. The report provides a basic overview of the industry including its definition, applications and manufacturing technology. The report presents the company profile, product specifications, capacity, production value, and 2013–2018 market shares for key vendors. The total market is further divided by company, by country, and by application/type for the competitive landscape analysis. The report estimates the 2019–2024 market development trends of the Programmable Logic Controller (PLC) industry. Analysis of upstream raw materials, downstream demand and current market dynamics is also carried out The report makes some important proposals for a new project of the Programmable Logic Controller (PLC) Industry before evaluating its feasibility.
Get a Sample Copy of this Report @ https://www.planetmarketreports.com/report-sample/global-programmable-logic-controller-plc-market-report-2019
There are 4 key segments covered in this report:
Competitor segment
Product type segment
Application segment
Geographical segment
Competitor segment, At least 13 companies are included:
Rockwell Allen-Bradley US
Schneider Modicon US
GE Fanuc US
TI US
Idec US
Maxim US
Enquire Now to get a complete list of companies.
Product Type segment,
Nano
Micro
Medium
Large
Application segment,
Steel Industry
Petrochemical and Gas Industry
Power Industry
Automobile Industry
Others
Geographical segment,
North America
U.S.
Canada
Mexico
Europe
Germany
France
UK
Italy
Spain
Russia
Asia Pacific
China
Japan
India
South Korea
Australia
The Middle East and Africa
Saudi Arabia
UAE
South Africa
South America
Brazil
Argentina
Get More Information @ https://www.planetmarketreports.com/reports/global-programmable-logic-controller-plc-market-report-2019
Why Purchase this Report:
Analyzing the outlook of the market with the recent trends and SWOT analysis
Market dynamics scenario, along with growth opportunities of the market in the years to come
Market segmentation analysis including qualitative and quantitative research incorporating the impact of economic and non-economic aspects
Regional and country-level analysis integrating the demand and supply forces that are influencing the growth of the market.
Market value USD Million and volume Units Million data for each segment and sub-segment
Competitive landscape involving the market share of major players, along with the new projects and strategies adopted by players in the past five years
The information for each competitor includes:
Company Profile Main Business Information SWOT Analysis Sales, Revenue, Price and Gross Margin Market Share
Comprehensive company profiles covering the product offerings, key financial information, recent developments, SWOT analysis, and strategies employed by the major market players
Please let us know your requirements and we can provide custom report accordingly.
Contact Info:
Name: Jennifer Daniel
Email-Id: [email protected]
US: +1–716–2260907
UK: +447441952057
Organization: Planet Market Reports
Web | Facebook | Linkedin | Twitter | https://medium.com/@pressreleases/programmable-logic-controller-plc-market-size-share-price-trend-and-forecast-by-2024-728618f0e568 | ['Sameer Shah'] | 2019-12-17 10:15:26.617000+00:00 | ['Technology', 'Industry 4 0', 'Computers', 'Plc', 'Market Research Reports'] |
2,497 | Artificial Intelligence: Its types and Implications | Artificial Intelligence: Its types and Implications
In recent days the trending topic is Artificial Intelligence (AI). AI has brought a drastic change in human capital as well as in workforce. Read further to know about AI and its types and also its implications on the real world.
What is Artificial Intelligence?
Think of a machine or a program which acts as an intermediary between humans without direct contact. This is called Artificial Intelligence (AI). However, the response of people towards AI was a failure when it was introduced. But in recent days the people solely depend upon the Tech Giants like Google, Facebook, Apple, etc. and these companies use AI to improve their technologies in order to catch up people’s mind. Could you believe that a person has a lifetime dream to buy an iPhone? This situation was not there in the early 2000s. How is this possible?
AI Influences on People
You may wonder how a machine could replace a human but it’s happening. For example, in Google, if you type anything it gives out the best result for you. Probably you wouldn’t command it to give out the best results, then how is it possible? This is because of AI. Let me explain to you elaborately, Google sets up an AI program in the search bar where you search for your results where it will analyze your previous searches and optimizes the result and gives out the best for you. From this case, you can observe that AI reduces our workforce from typing in an elaborate manner. This may seem to be small scale but there are cases where AI is applied in a large scale too. For example, many companies are using highly advanced AI programs to manage their database environments and they also use for their software development. This influences people to produce more AI programs to manage works and other development stuff.
This graph represents the no.of startups developing AI programs and systems. It clearly represents the influence of AI on people over the years and it was at its peak on and after 2015. Many people were encouraged to do AI systems for enterprises to manage their work and today many companies relied on AI for workflow and development. The evolution of AI goes on and many new applications are found for effectiveness and capability as the expectations go on.
Applications of AI
Anyone can create their own AI program or system according to their own flexibility. As anyone can create their AI program it is called a User-Defined system. Being a user-defined system there are many applications of AI around the world. Some of the prominent applications are Machine Learning and Deep Learning.
Machine Learning
Machine Learning is an application of AI which deals with computer algorithms. You might have a question what is Computer Algorithms? It is a set of sequence which are pre-build in computers to solve cases and problems. Machine Learning provides a computer or software with the ability to learn automatically. The program for Machine Learning is not necessary to be in a precise manner which is considered as an advantage of Machine Learning. What can Machine Learning be used for? Machine Learning focuses on the development of computer programs and it is also used for identifying data patterns (tells us the way by which the data has been extracted). Being an automated learning program, Machine Learning identifies the Data Patterns and gives us the ideal decision to make and also used for predictions. Some of the well-known examples are Siri by Apple and Alexa by Amazon.
Deep Learning
Deep Learning is considered as a subset of Machine Learning where artificial neural networks( a series of algorithms which imitates the function of human brain )and algorithms which are inspired by the human brain provide the computer with a broader range of data. As the computer access a large amount of data Deep Learning helps the machine to solve complex problems even when it is using an unstructured data pattern. You can observe that I had related the human brain with Deep learning at places. Why is it so? There are many relations between our human brain and Deep Learning. Firstly, the functions of Deep Learning are well-related to the highly advanced functions of the human brain. Deep Learning has the presence of some large elements which are most similar to the neurons present in our human brain. Finally, Deep Learning consists of artificial neural networks which are designed way too similar to the networks present in the human brain. One of the main advantages of Deep Learning is the capacity to execute programs on their own and it combines data using networks to enable the program to be faster.
Types of Artificial Intelligence
There are many AI programs out there, but here we’re going to have a look at the three predominant types of AI. The three types are namely:
Artificial Narrow Intelligence (ANI)
Artificial General Intelligence (AGI)
Artificial Super Intelligence (ASI)
Artificial Narrow Intelligence (ANI):
ANI is artificial intelligence where it performs only one narrow or simple task at a time. It is used in areas such as weather forecasting, able to play chess, data analyzing and also helps to develop software like Siri and Alexa. This intelligence is also called Weak Intelligence due to its functions and capacity. ANI is considered as Stage-1 in this classification.
Artificial General Intelligence (AGI):
AGI is the hypothetical intelligence of a machine which has the capacity to understand or learn any task that a human can do. It matches its intelligence with human brain when compared to functions. The paramount target of Artificial General Intelligence is Future Studies. One of the best examples is IBM’s Watson, which is an AI program considered as one of the smartest programs in the world.
Artificial Super Intelligence (ASI):
ASI is considered as the future vision of AI. This vision is accomplished when the capability of computers and machines outclass humans. This intelligence when improvised will be the dominant form or position in major sectors of the world. However, Technological researchers and Statisticians disagree with this Theory of machines surpassing human intelligence. One of the main drawbacks of Artificial Super Intelligence is that they can’t match the cognitive senses of human and the human perception skills. These two play a vital role in decision-making situations. For example, imagine there is a popular company which has a CEO. Now the CEO of the company has to make an important decision which has impact on employees across the globe, we can’t do any analysis at this time. So, we can’t expect anything from a program at this point and here is where humans use their cognitive and perception skills to find a solution. The advantage of ASI is that they have the capability to produce instant reports and they are smart at every moment where humans are not.
Implications of AI in Finance
There are many implications of AI in the finance industry. AI is one of the best in data mining. This advantage can be used in sectors like Wealth Management and Investment Banking where they use data mining for sentiment analysis which means the status of a company like performing, outperforming, poor etc. AI helps the Wealth Management sectors to be efficient by enhancing the accuracy of trading plans and decisions. Nowadays, top wealth management companies like Goldman Sachs, J.P.Morgan, and Vanguard Group are using AI for better insight. As AI being an user-defined system financial companies are using trading algorithms which are built using sentiment analysis and other public data sources, helps in processing the data. Banks use Artificial Intelligence to identify fraudulent cases and also provides personalized plans and recommendations for customers. AI has been very helpful in Risk Management by providing optimized models and paradigms (methodologies). Therefore, AI has been a ‘game-changer’ in the Financial World by shifting it towards technological innovations, and still, there are many implications of AI in finance.
Can AI change the world?
The world is fast evolving and there has been a trend in getting anything at instant. For example, in cricket people don’t like to watch a test series instead they are interested in T20 matches. Being a rush-up world people don’t have the patience to get a job done within a long period and here is where AI gives up a hand to people to get the job done at an instant. AI has the capacity and it functions competitively to our human brain. For example, if we got our hand at fire we would quickly take off our hand from it and this activity would take place within fraction of seconds and this is how AI performs. This is the reason why AI has become more popular. But the question is Can AI change the world? My answer will be a ‘No’ because the globe’s most abundant and important resource is Human Capital and how can a machine or a program replace it. Human emotions play a vital role in this world. Can you ever imagine a world with a sequence of algorithmic programs? If you ask what is the future of AI it would be robotic devices, smart intelligence, human interaction, etc. but the future of Humans is their vision and nothing is capable of replacing it. Apart from this, there is also a bright side of AI. Artificial Intelligence plays a key role in productivity in the future. AI is fully based on automated processes which helps in analyzing and predictions. According to a report, the market spending of companies is major in cognitive technologies and AI. This reflects how the companies are relied on AI for productivity. AI is also considered a good investment for the future as it is one of the fastest-growing technological chains in the world. Therefore AI can’t change the world by replacing humans in various sectors but can change the world by shifting people towards a technological side.
Originally published at my website https://www.insightbig.com/. You are welcome to visit my website. | https://medium.com/datazen/artificial-intelligence-its-types-and-implications-b95d5a4b9959 | ['Nikhil Adithyan'] | 2020-07-14 04:01:46.610000+00:00 | ['AI', 'Tech', 'Finance', 'Artificial Intelligence', 'Technology'] |
2,498 | In the course of a tough year, air cargo digitalisation takes off | SADLY, it has taken a global health pandemic to inspire air cargo industry digitalisation.
COVID-19 has inadvertently helped the airfreight industry reach a major turning point that airline association IATA has been championing for years: dragging a reluctant airfreight industry kicking and screaming into the digital age.
Shippers’ unequivocal demands for greater accessibility, transparency, efficiency and speed — especially noticeable qualities amidst the health crisis and with the general switch to e-commerce trading — has forced a remarkable air cargo industry volte-face move away from antiquated, legacy, manual, paper-based processes.
With their eye on enhanced profitably, some airfreight companies have now shrewdly turned to a group of dynamic digital disruptors to transform their transactions. Cargo.One is typically such an innovator which is currently propelling the acceleration of digitalisation amongst major players operating in global airfreight supply chains.
Moritz Claussen, one of three co-founders and one of two managing directors of Cargo.One, explains how the company’s digitalisation products have been assisting airline cargo departments through the pandemic crisis. “We have seen that the need and demand for digital distribution has grown substantially, especially with COVID-19 hitting the market,” he observes. “With rates and available freight capacities being more volatile than ever, Cargo.One is now the perfect partner to distribute air cargo capacities effectively to freight forwarders,” he states.
“We are digitally connected to the core cargo systems of all our partner airlines, allowing us to reflect real-time information on [our] platform and to facilitate bookings with instant confirmations.”
This way, the disruptor is also helping to free up much-needed resources in airlines’ thinly-stretched sales departments. “Digitalisation helps them focus [instead] on value-adding tasks — while at the same time digitally processing sales of general, express and cooled cargo for them, with no need for manual interference.”
The air cargo industry’s emergence into the digital sunlight has led to two distinct key advantages in these difficult times: firstly, cost-savings through efficiency gains in sales transactions and, secondly, incremental revenue opportunities — “by enabling airlines to sell to a vast and fast-growing increasing number of freight forwarders, at the right price,” Claussen insists.
He cites a comment from Jonathan Celetaria, the European sales director at strategic partner AirBridgeCargo (ABC) Airlines. “Over the past year, we have been able to access new market segments through Cargo.One, all the while improving the quality of our customer experience,” is Celetaria’s endorsement.
If more proof were needed of the digital footprint that Cargo.One’s e-bookings platform has made in such a small space of time, Condor, TUI and Sunclass – all of which have sales departments outsourced to the ECS group — are the latest airlines to have entered into partnerships with the Berlin-based, data-driven digital distribution channel, in which Lufthansa Cargo is a shareholder.
“With the prospect of digitalisation, we are currently in the process of integrating more carriers to the platform.”
They have joined TAP Air Portugal, Japan’s largest airline All Nippon Airways (ANA), Etihad Cargo, Japanese freighter airline Nippon Cargo Airlines, EL AL Israel Airlines’ freight division, Russia’s AirbridgeCargo (ABC) Airlines and its UK affiliate CargoLogicAir (CLA), Finnair Cargo and Brussels Airlines.
After only three years of doing business, Cargo.One’s total number of carriers already stands at 15, whilst the number of forwarders its platform is able to reach, including the global players Agility, Hellmann Worldwide Logistics and Dachser, has risen to 2,000. This figure includes many small- and medium-sized companies that were previously difficult or impossible to serve because of their embedded, costly manual processes.
Claussen further outlines: “We are currently in the process of integrating more carriers to the platform, with a number of large players from other continents lined up to join. While we are still only serving a relatively small percentage of all global airlines on the platform, we are growing rapidly. Combining large global carriers such as Lufthansa Cargo, ANA Cargo or AirBridgeCargo, for example, with smaller local players such as Finnair Cargo or TAP Air Cargo on the platform, we have been very successful in creating a great network coverage.”
At the click of a button, the system allows freight forwarders to instantly find capacity and real-time prices to almost any destination airport in the world — a particularly valuable development particularly in volatile times. “This capability has led to a substantial increase in the number of freight forwarders that have registered to book on Cargo.One over the past nine months. Year-over-year (2019–2020) our user base has tripled,” he underscores.
“The beauty of what we have been able to build is that we can integrate with literally any legacy system.”
It doesn’t end there. Overall, Claussen sees a high sense of urgency and awareness amongst other airlines all around the globe to digitise their business and thereby save costs and reach more customers. At the end of July, the enterprise announced it had raised US$18.6million capital investment to expand into North America and Asia markets, which will facilitate its ambition to build a global e-transactions system for the entire airfreight industry.
Claussen explains how the cash injection has already assisted the business in the rapid expansion of its services. “We are constantly working on bringing new functionalities and features to both our airline partners and those freight forwarders using our platform. To support this mission, we have grown our team significantly over the last few months which is helping us to keep up with integrating new airlines and entering new markets while at the same time delivering new product features,” he reveals.
Amongst those new features are options that allow teams to easily collaborate remotely, or improve the monitoring of new special events such as passive temperature-controlled shipments, for example. Furthermore, the company has invested in an array of data manipulation products “to support our partners in making data-driven decisions and better monitoring of their own performance,” Claussen emphasises. “Lastly, we are eyeing expansion into new [geographical] market [areas] to support our airline partners with a more global offering.”
Not surprisingly, the digital disruptor has been growing “at crazy rates” over the last year. “Bookings have increased by more than 600 per cent on the platform year-on-year, whilst we are now serving almost 2,000 freight forwarding branch offices on Cargo.One,” Claussen reveals.
“The beauty of what we have been able to build is that we can integrate with literally any legacy system,” he points out.“In fact, we have formed strategic partnerships with a number of digitalisation technology partners in the industry, for example IBS Software, to create plug-and-play solutions.” The company has successfully integrated with carriers that run on versions of Accelya’s SkyChain, Champ’s Cargospot, IBS Software’s iCargo, Champ Cargosystems’ eChamp and many other self-developed systems. “And, while we prefer working with application programming interfaces (APIs) and web services, we know how to make much simpler technology work,” he underlines.
Looking ahead, Claussen notes: “Over the next months and the whole year of 2021, the air cargo industry will play a crucial role in distributing COVID-19 vaccines around the world and [already] we see many airlines working hard to be able to provide the right infrastructure to master this major global challenge, for example, by preparing their cool hubs.”
Not surprisingly, he predicts that digitalisation will have accelerated further within the airfreight industry within the next two or three years, as most airlines will have established or will be in the process of establishing an infrastructure that allows them to distribute their offering much more effectively via digital booking channels.
***
This story first appeared in aircargoeye.com on 11 December 2020 | https://medium.com/predict/in-the-course-of-a-tough-year-air-cargo-digitalisation-takes-off-1029ecaee4b9 | ['Thelma Etim'] | 2020-12-14 01:44:09.123000+00:00 | ['Transportation', 'Logistics', 'Digital Transformation', 'Technology', 'Air Cargo'] |
2,499 | More on the Importance of Precision in Language | “Many of us in the Deep Learning community know that the major models of Deep Learning, i.e. Convolutional Neural Nets, LSTM Recurrent Neural Nets or Neural networks in general have existed since the 90s. It is now that we have the data (thanks to the Internet) and the computational power that we are able to see Deep Learning making an impact on our daily lives.”
It is great that many of you recognize that so called “deep” learning models have been around for almost three decades now. Why is it so difficult then for you to accept the fact that machines can’t learn and ‘deep’ is nothing more than an important sounding but ultimately meaningless signifier term invented to make something which really is just a (slightly) different form of the actual thing, seem more important and special than it actually is. Alternatively it can be used as a distraction from the fact that the thing it is being used to modify is not actually a real thing as in the case of machine learning, the logically impossible, logical contradiction. It is interesting that all of the things you mention have existed since the 90s, because computers and computing have also. Today, computers still exist and they are still computing, albeit in different and more interesting and complicated ways. What they are most definitely not doing is learning. Something they have also not been doing since the 90s.
The so called “AI’ revolution is really nothing more than the evolution of modern computing. Like machines that “learn” machines that are “intelligent” also have not ever, and do not currently exist. This is to take nothing away from the incredible advances modern computing has made and the impact it has had, and will continue to have on all our lives. It is only to point out that all of those advances are trivialized and distracted from when language is constantly abused in the name of what? I used to think it was all about the hype these terms always engender. Writing an article about powerful computers doing such and such interesting new thing might get some attention, but if those boring old normal powerful computers are ‘AI’ instead watch the page views grow before your very eyes. Same could be said for that grant application you have been slaving away on for the past year. Of course your work is novel and could potentially pave the way for the next generation of computers, but if you work paves the way for the next generation of machines that learn all of a sudden your funding chances have gotten a whole lot brighter. Maybe at one time it was mostly about the hype but I think now it has become so commonplace to talk about machines that learn and are intelligent that the people who say those things have simply ‘forgotten’ what these words actually mean. Importantly, however, the vast majority of persons have not and still believe their ordinary everyday meanings continue to apply. This dichotomy of understanding and belief has important implications and may have grave and unforeseen consequences.
Language does evolve and grow and words that once meant one thing become words that mean another. That is a normal and natural thing and it is mostly healthy and productive. However, typically words evolve new meanings, or new words are invented in order to clarify confusions, to improve our understanding of the thing or things that are now to be described by the word(s) or term. In the case of ‘AI’, and ‘machine learning’, and ‘deep {insert tech thing}’ there is no clarification, only obfuscation and distraction. Either one accepts the current meanings of learning, and intelligence, in which case there is no such thing as AI or machine learning, as I do, or one believes these words now have new meanings. Of course, everyone is free to make this choice and it seems that most people for whom these things are important have chosen the second option. There is however a third position and this is the position that is held by the vast majority of persons currently alive on the planet. It is a confused and illogical position because the people who hold it still believe in the ordinary meanings of the terms ‘learning’ and ‘intelligence’ but they actually believe computers and machines are capable of these things. Moreover, they believe that computers and machines that are intelligent and can learn actually exist.
And so what is the harm in that, what does it really matter how we define these words, how we use them, or who believes what meaning applies when and to what? The problem is that although the technorati and other digitally literate among us understand fully what is meant when we say machines “learn” or a computer is “intelligent” the average, everyday person, is still operating under the assumption that the ordinary, everyday meanings still apply (my third position above). So they begin to fear these machines and the people who build them and work with them. They fear them because they feel they are on the cusp of becoming even more marginalized than they already are. Never forget that these people outnumber the techno-literate by many orders of magnitude. It is ironic that what began as a way of building up interest and excitement in technology and computers, and has succeeded wildly in doing so, is now the same thing that may one day spell their demise at the hands of people made afraid of things that don’t even exist. | https://everydayjunglist.medium.com/more-on-the-importance-of-precision-in-language-af569d29f11f | ['Daniel Demarco'] | 2018-02-05 01:25:48.641000+00:00 | ['Language', 'Machine Learning', 'Artificial Intelligence', 'Technology', 'Philosophy'] |