Discover the Best in Technology and Data

The world's leading brands post jobs and run content on icrunchdata's award-winning platform.

Job Seekers

Post your resume and get noticed
by industry-leading companies.
Add Resume

Employers

Advertise your job to reach the
best talent in Technology and Data.
Post a Job

Latest Insights

In Season 1 of the Crackle original series StartUp, one of the main characters creates a new cryptocurrency to rival Bitcoin called GenCoin. The premise in the show is to bring currency and banking to parts of the world that don’t have them and to take them out of government control. Nurucoin plans to address this in a very real way across Africa, a continent notorious for its volatile political and economic landscape.

Nurucoin is being developed by BlazeBay, which by way of comparison is something like Amazon and currently works with about 200 manufacturers. I spoke with CEO Isaac Muthui to get an idea of the problem he is facing and the solution he is proposing. What follows is distilled from that conversation, from the company’s white paper and from its websites.

What Muthui wants to do is nothing less than completely change the way business happens in Africa through cryptocurrency and blockchain technology. He is starting in East and Central Africa with a solution to the volatile currency situation, especially in countries like Zimbabwe and Congo where the currency is crippled. His objective is to eliminate as many as possible of the corrupt middlemen who always get involved, and to create something modular, scalable and simple that facilitates intra-African trade.

To that end, the company has already built an impressive suite of businesses. Many ICOs have only a white paper and a website, without any proof-of-concept code. Nurucoin, by contrast, already has a large business running and has built a range of financial products to support it; now it is adding blockchain and cryptocurrency functions. There doesn’t seem to be any question that the team can do it, and there does appear to be a large potential market with pent-up demand.

In addition to BlazeBay, the company runs a retail shopping system called Nunua254 with over 10,000 active users, owns a local payment gateway called NuruPay, and operates a financial and shopping services portal called Nunur. All of this gives Nurucoin the ready-made market and tools needed to adopt cryptocurrency and blockchain technology quickly and with a lot of users.

So what is Nurucoin doing specifically? With an enormous disparity in exchange rates between the various African currencies, and the very real possibility that purchases are surveilled for potentially nefarious ends, the wider markets are difficult to work with outside of a particular country. Nurucoin will be deployed first in a version of BlazeBay called BlazeBayBlock (B3), a B2C marketplace that gives African SMEs a simple channel to grow their businesses. Phase 2 will add B2B functionality to facilitate supply chains. Additionally, Nurucoin plans to address the “unbanked” with a mobile payment system; by some accounts, about 70% of the 1.26 billion people in Africa do not have access to banking. The opportunity to create a stable financial system across Africa, outside the control of ever-changing leadership and borders, is quite large.

The Nurucoin ICO will sell 100 million tokens at a start price of $0.10, with various discounts available in the pre-sale. The token will be tied to the price of the US dollar to a certain extent, although its value will float on the open exchanges. Initially, the token’s utility is as a currency within the BlazeBay ecosystem.
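For rough scale, selling all 100 million tokens at the $0.10 start price would correspond to about $10 million (100,000,000 × $0.10), before any pre-sale discounts.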
It isn’t yet clear whether more tokens can be mined later, but that would create potential inflation. This ICO has the potential to be life-changing for a lot of people, and the team brings a tremendous amount of passion and an established history in this market space.

Article written by Shawn Gordon. Image credit: Nurucoin.
Yesterday at AWS re:Invent, Amazon Web Services, Inc. (AWS) announced five new machine learning services and a deep learning-enabled wireless video camera for developers. Amazon SageMaker is a fully-managed service for developers and data scientists to build, train, deploy and manage their own machine learning models. The company also introduced AWS DeepLens, a deep learning-enabled wireless video camera that can run real-time computer vision models to give developers hands-on experience with machine learning. And AWS announced four new application services that allow developers to build applications that emulate human-like cognition.

Amazon SageMaker and AWS DeepLens aim to make machine learning accessible to all developers

Today, implementing machine learning is complex, involves a great deal of trial and error and requires specialized skills. Developers and data scientists must first visualize, transform and pre-process data to get it into a format that an algorithm can use to train a model. Even simple models can require massive amounts of compute power and time to train, and companies may need to hire dedicated teams to manage training environments that span multiple GPU-enabled servers. All of the phases of training a model, from choosing and optimizing an algorithm to tuning the millions of parameters that impact the model’s accuracy, involve manual effort and guesswork. Then, deploying a trained model within an application requires a different set of specialized skills in application design and distributed systems. As data sets and variables grow, customers have to repeat this process again and again as models become outdated and need to be continuously retrained to learn and evolve from new information. All of this takes a lot of specialized expertise, access to massive amounts of compute power and storage, and a great deal of time. To date, machine learning has been out of reach for most developers.

Amazon says SageMaker is a fully-managed service that removes the heavy lifting and guesswork from each step of the machine learning process. It makes model building and training easier by providing pre-built development notebooks, popular machine learning algorithms optimized for petabyte-scale datasets and automatic model tuning. Amazon claims SageMaker also simplifies and accelerates the training process, automatically provisioning and managing the infrastructure both to train models and to run inference to make predictions with those models.

AWS DeepLens was designed to help developers get hands-on experience in building, training and deploying models by pairing a physical device with a broad set of tutorials, examples, source code and integrations with familiar AWS services to support learning and experimentation.

“Our original vision for AWS was to enable any individual in his or her dorm room or garage to have access to the same technology, tools, scale and cost structure as the largest companies in the world. Our vision for machine learning is no different,” said Swami Sivasubramanian, VP of Machine Learning, AWS. “We want all developers to be able to use machine learning much more expansively and successfully, irrespective of their machine learning skill level.
Amazon SageMaker removes a lot of the muck and complexity involved in machine learning to allow developers to easily get started and become competent in building, training and deploying models.”

The company says that with Amazon SageMaker, developers can:

Easily build machine learning models with performance-optimized algorithms: SageMaker is a fully-managed machine learning notebook environment for developers to explore and visualize data they have stored in Amazon Simple Storage Service (Amazon S3) and transform it using all of the popular libraries, frameworks and interfaces. SageMaker includes 10 of the most common machine learning algorithms (e.g. k-means clustering, factorization machines, linear regression and principal component analysis). Developers choose an algorithm and specify their data source, and SageMaker installs and configures the underlying drivers and frameworks. SageMaker includes native integration with TensorFlow and Apache MXNet, with additional framework support reported as coming soon. Developers can also use any framework and algorithm they choose by uploading them in a container to the Amazon EC2 Container Registry.

Experience fast, fully-managed training: Developers select the type and quantity of Amazon EC2 instances and specify the location of their data. SageMaker sets up the distributed compute cluster, performs the training, outputs the result to Amazon S3 and tears down the cluster when complete. SageMaker can automatically tune models with hyperparameter optimization, adjusting thousands of different combinations of algorithm parameters to arrive at the most accurate predictions.

Deploy models into production: SageMaker takes care of launching instances, deploying the model and setting up a secure HTTPS endpoint for the application to achieve high-throughput, low-latency predictions, as well as auto-scaling Amazon EC2 instances across multiple availability zones (AZs). It also provides native support for A/B testing. Once in production, SageMaker aims to eliminate the heavy lifting involved in managing machine learning infrastructure, performing health checks, applying security patches and conducting other routine maintenance.

The company says that with AWS DeepLens, developers can:

Get machine learning experience: AWS DeepLens is a fully-programmable video camera designed to put deep learning into the hands of any developer. It includes an HD video camera with on-board compute capable of running deep learning computer vision models in real time. The hardware, capable of running over 100 billion deep learning operations per second, comes with sample projects, example code and pre-trained models, so developers with no machine learning experience can run their first deep learning model in less than 10 minutes. Developers can extend these tutorials to create their own custom, deep learning-powered projects with AWS Lambda functions. For example, AWS DeepLens could be programmed to recognize the numbers on a license plate and trigger a home automation system to open a garage door, or it could recognize when the dog is on the couch and send a text to its owner.

Train models in the cloud and deploy them to AWS DeepLens: Developers can train their models in the cloud with Amazon SageMaker and then deploy them to AWS DeepLens in the AWS Management Console. The camera then runs the models on the device in real time.
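For developers who want to see what that build, train and deploy flow looks like in practice, here is a minimal sketch using the SageMaker Python SDK. It is only a sketch: the container image, S3 paths, hyperparameters and instance types are placeholders rather than values from the announcement, and parameter names vary somewhat between SDK versions (older releases used train_instance_count and train_instance_type, for example).

```python
# Minimal sketch of the build / train / deploy flow described above,
# using the SageMaker Python SDK. Image URI, S3 paths and hyperparameters
# are placeholders, not values from the article.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# Point an estimator at an algorithm container (built-in or custom) in ECR.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algorithm:latest",  # placeholder
    role=role,
    instance_count=2,                  # SageMaker provisions and tears down the cluster
    instance_type="ml.p3.2xlarge",
    output_path="s3://my-bucket/model-artifacts/",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(epochs=10, learning_rate=0.1)

# Training data already staged in S3, as the article describes.
estimator.fit({"train": "s3://my-bucket/training-data/"})

# Deploy behind a managed HTTPS endpoint for real-time inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
result = predictor.predict(b"...")     # payload format depends on the algorithm
predictor.delete_endpoint()            # clean up when finished
```

The point of the sketch is that the cluster setup, teardown and endpoint management described above happen inside fit() and deploy() rather than in user code.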
Clients on board

“We’ve deepened our relationship with AWS, adding them as an Official Technology Provider of the NFL and are excited to use Amazon SageMaker for our next-generation stats initiative,” said Michelle McKenna-Doyle, SVP and CIO, National Football League.

DigitalGlobe, provider of high-resolution Earth imagery, data and analysis, works with enormous amounts of data every day. “[We are] making it easier for people to find, access and run compute against our 100PB image library which is stored in the AWS cloud in order to apply deep learning to satellite imagery,” said Dr. Walter Scott, Chief Technology Officer of Maxar Technologies and founder of DigitalGlobe. “We plan to use Amazon SageMaker to train models against petabytes of earth observation imagery datasets using hosted Jupyter notebooks, so DigitalGlobe's Geospatial Big Data Platform (GBDX) users can just push a button, create a model and deploy it all within one scalable distributed environment at scale,” said Scott.

Matt Fryer, VP and Chief Data Science Officer of Hotels.com and Expedia Affiliate Network, said, "At Hotels.com, we are always interested in ways to move faster, to leverage the latest technologies and stay innovative. With Amazon SageMaker, the distributed training, optimized algorithms and built-in hyperparameter features should allow my team to quickly build more accurate models on our largest data sets, reducing the considerable time it takes us to move a model to production. It is simply an API call. Amazon SageMaker will significantly reduce the complexity of machine learning, enabling us to create a better experience for our customers, fast.”

Khalid Al-Kofahi, who leads the Thomson Reuters Center for AI and Cognitive Computing, commented, “For over 25 years we have been developing advanced machine learning capabilities to mine, connect, enhance, organize and deliver information to our customers, successfully allowing them to simplify and derive more value from their work. Working with Amazon SageMaker enabled us to design a natural language processing capability in the context of a question-answering application. Our solution required several iterations of deep learning configurations at scale using the [SageMaker] capabilities.”

New speech, language and vision services to build intelligent applications

For those developers who are not experts in machine learning but are interested in using these technologies to build a new class of apps that exhibit human-like intelligence, Amazon Transcribe, Amazon Translate, Amazon Comprehend and Amazon Rekognition Video aim to provide high-quality, high-accuracy machine learning services that are scalable and cost-effective.

"Today, customers are storing more data than ever before, using Amazon Simple Storage Service (Amazon S3) as their scalable, reliable and secure data lake. These customers want to put this data to use for their organization and customers, and to do so they need easy-to-use tools and technologies to unlock the intelligence residing within this data,” said Swami Sivasubramanian, VP of Machine Learning, AWS. “We’re excited to deliver four new machine learning application services that will help developers immediately start creating a new generation of intelligent apps that can see, hear, speak and interact with the world around them.”

Transcribe (available in preview) converts speech to text, allowing developers to turn audio files stored in Amazon S3 into accurate, fully-punctuated text.
Transcribe can generate a time stamp for every word so that developers can precisely align the text with the source file. Today it supports English and Spanish, with more languages to follow. In the coming months, Amazon says Transcribe will be able to recognize multiple speakers in an audio file and will also allow developers to upload custom vocabularies for more accurate transcription of those words.

Translate (available in preview) uses neural machine translation techniques to translate text from one language to another. Translate can handle short or long-form text and supports translation between English and six other languages (Arabic, French, German, Portuguese, Simplified Chinese and Spanish), with more slated for 2018.

Comprehend (available today) can understand natural language text from documents, social network posts, articles or other textual data stored in AWS. Comprehend uses deep learning techniques to identify text entities (e.g. people, places, dates, organizations), the language the text is written in, the sentiment expressed in the text and key phrases with concepts and adjectives, such as ‘beautiful,’ ‘warm,’ or ‘sunny.’ Comprehend has been trained on a range of datasets, including product descriptions and customer reviews from Amazon.com, to build language models that extract key insights from text. It also has a topic modeling capability that helps applications extract common topics from a corpus of documents. Comprehend integrates with AWS Glue to enable analytics of text data stored in Amazon S3 and other popular Amazon data sources.

Rekognition Video (available today) can track people, detect activities and recognize objects, faces, celebrities and inappropriate content in millions of videos stored in Amazon S3. It also provides real-time facial recognition across millions of faces for live-stream videos. Rekognition Video’s API is powered by computer vision models that are trained to detect thousands of objects and activities and extract motion-based context from both live video streams and video content stored in Amazon S3. Rekognition Video can automatically tag specific sections of video with labels and locations (e.g. beach, sun, child), detect activities (e.g. running, jumping, swimming), detect, recognize and analyze faces, and track multiple people, even if they are partially hidden from view in the video. (A short code sketch of calling these services appears at the end of this article.)

Customers of these services include:

RingDNA, the communications platform for sales teams (using Transcribe)
Isentia, the media intelligence software company (using Translate)
The Washington Post, enterprise application and service provider Infor, and supply chain platform Elementum (each using Comprehend)
Motorola and the City of Orlando (each using Rekognition Video)

To learn more about AWS's machine learning services, visit: https://aws.amazon.com/machine-learning/

Article published by Anna Hill. Image credit: AWS.
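As a companion to the service descriptions above, here is a minimal sketch of calling two of them, Comprehend and Translate, through boto3. The sample text and region are placeholders, and the calls assume AWS credentials are already configured in the environment.

```python
# Minimal sketch calling Amazon Comprehend and Amazon Translate via boto3.
# The sample text and region are placeholders; credentials come from the
# standard AWS configuration on the machine running this.
import boto3

text = "The new stadium analytics dashboard is fast and easy to use."

# Amazon Comprehend: entities and sentiment from raw text.
comprehend = boto3.client("comprehend", region_name="us-east-1")
entities = comprehend.detect_entities(Text=text, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"], [e["Text"] for e in entities["Entities"]])

# Amazon Translate: English to Spanish.
translate = boto3.client("translate", region_name="us-east-1")
translated = translate.translate_text(
    Text=text, SourceLanguageCode="en", TargetLanguageCode="es"
)
print(translated["TranslatedText"])
```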
One of the basic human needs for the sustenance of life is food. What is the genesis of the food we consume on a daily basis? In a word: agriculture. Agriculture and its methods of planting, cultivating and harvesting have been around for centuries, from the dawn of mankind to the current age. Several schools of thought drive the agricultural industry, from environmental considerations to profitability to social concerns. The industry represents trillions of dollars in trade, 40% of the job market, 30% of greenhouse gas emissions and 10% of consumer spending within the global marketplace. Advances in technology have enhanced the industry by improving products from the farm to the table, increasing productivity and sustainability, and making more efficient use of the agricultural resources available for consumption.

The technology of farming – Drones and advances in IoT

We live in the age of drones, which can easily be deployed to farming communities to take better inventory and monitor daily life on the farm. Livestock can even be tagged with barcodes that a drone can scan to track the animals’ movement around the farm, their eating habits, growth and general health. Drones can relay messages or video to the landowner to track the growth of fruits and vegetables, gather data on the best growing conditions, plant seeds and harvest crops. Weather data can be collected to monitor for drought, excessive rain, wind and temperature, all natural elements that can make or break a farm if conditions are not optimal.

These technological advances will revolutionize the agricultural industry and drive substantial commerce. Precision agriculture, which uses information technology to optimize economic outcomes, will become prevalent, significantly improving this industry of daily consumption and expanding the global food supply. As productivity increases, scarcity of goods will decrease, resulting in lower costs for the consumer. In addition, costs will be controlled by reducing many of the challenges farms face with equipment maintenance, consultants, overhead and so on. Farms will be more connected to one another through real-time data exchange and monitoring of the industry. It is predicted that there will be 75 million IoT devices on farms by the year 2020, and that 4.1 million data points will be generated on a daily basis in 2050. Cognitive devices may prove challenging for the agricultural industry, however.

The ease of data collection through handheld devices such as iPhones and tablets, and the rapid exchange of that data, will benefit the agricultural community as well as the “farm to table” public enjoying the bountiful harvest from efficient, robust production of goods delivered directly by the providers.

Agriculture can reap many benefits from progressive technology and IoT. These technologies can be deployed to select the crops with the most growth potential for specific areas or climates. Weather patterns can be researched and soil samples taken to determine which seeds can be cultivated to their maximum potential on specific farms.
Also, the types of insects that target specific crops can be researched for patterns of infestation, allowing farms to take the appropriate precautions. And consumer trends, in tandem with marketplace demand, can help a farm calculate profitability.

Chatbots – Predicting a bountiful harvest through technology

Chatbots will become a trend in agricultural communities, serving as real-time assistants that provide instantaneous interaction with their users. These artificial intelligence components are personalized to a user’s specific needs. Chatbots can comprehend language and can be programmed with agricultural terminology, giving the user an assistant at the ready for immediate access to data. Currently, chatbots are found mostly in travel, media and retail, but they can readily be adapted to assist the agricultural community with specific farming needs and advice.

Utilization of drones

As mentioned earlier, drones will be used on a regular basis to collect farming data. The Watson IoT Platform integrates machine learning capabilities with the data derived from those drones. Everyday farm management systems will be modified and integrated into artificial intelligence systems. Data can be pulled from a variety of sources, be it historical or archived data, current weather reports, soil sample research or consumer data, to name a few. AI will change the modern agricultural community as we know it today into a progressive and profitable business that better serves the consumer.

In the last 50 years, as people have moved from farming communities to urban areas, the traditional farm of days gone by has declined significantly. It is predicted that in just a few decades, over half of the world’s population will reside in urban areas. If few people reside in rural farming areas, only a small share of the population will be working in the agricultural industry. To address this shortage of farm labor, cognitive systems will be implemented at just the right time to serve the needs of agriculture. Through this innovative technology, there will not even be a need for someone to be on the land being cultivated: actions and decisions can occur remotely, business decisions can be made instantaneously, and challenges can be recognized and addressed with better information, avoiding potential crises.

Moving forward with productivity and innovation

As an example, visual recognition APIs and the IBM Watson IoT Platform have been combined with Aerialtronics commercial drones to capture real-time data and images, providing agricultural analysis that monitors crops, the specific seeds used for planting, and the health and life cycles of specific plants. These AI tools will streamline the agricultural industry and, in turn, save crops from destruction by pestilence or weather conditions that can prove fatal. Most importantly, they will drive efficiency and profitability.

The agricultural industry is long overdue for a revolution in productivity and profitability. With progressive innovation in technology, specifically IoT, agriculture will grow like never before, with increases in marketable livestock and produce from which the entire world will benefit.
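The monitoring described above is sketched only at a high level. As a purely illustrative example, not code from any product mentioned here, a simple threshold check over incoming soil-moisture readings might look like the following; the field names, threshold value and reading format are invented.

```python
# Hypothetical sketch of the farm-monitoring idea described above:
# compare incoming soil-moisture readings against a threshold and flag
# fields that need irrigation. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class SoilReading:
    field_id: str
    moisture_pct: float   # volumetric soil moisture, percent
    temperature_c: float

IRRIGATION_THRESHOLD_PCT = 22.0  # illustrative cutoff

def fields_needing_irrigation(readings: list[SoilReading]) -> list[str]:
    """Return the fields whose latest reading is below the moisture threshold."""
    return [r.field_id for r in readings if r.moisture_pct < IRRIGATION_THRESHOLD_PCT]

if __name__ == "__main__":
    latest = [
        SoilReading("north-40", 18.5, 27.1),
        SoilReading("river-plot", 31.2, 24.8),
    ]
    for field in fields_needing_irrigation(latest):
        print(f"ALERT: irrigate {field}")
```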
Article written by Raj Kosaraju. Image credit: Getty Images, Moment, Miguel Sotomayor.
View All Insights