Discover the Best in Technology and Data

The world's leading brands post jobs and run content on icrunchdata's award-winning platform.

Job Seekers

Post your resume and get noticed
by industry leading companies.
Add Resume


Employers

Advertise your job to reach the
best talent in Technology and Data.
Post a Job

Latest Insights

Yesterday at AWS re:Invent, Amazon Web Services, Inc. (AWS) announced five new machine learning services and a deep learning-enabled wireless video camera for developers. Amazon SageMaker is a fully managed service for developers and data scientists to build, train, deploy and manage their own machine learning models. The company also introduced AWS DeepLens, a deep learning-enabled wireless video camera that can run real-time computer vision models to give developers hands-on experience with machine learning. And AWS announced four new application services that allow developers to build applications that emulate human-like cognition.

Amazon SageMaker and AWS DeepLens aim to make machine learning accessible to all developers

Today, implementing machine learning is complex, involves a great deal of trial and error and requires specialized skills. Developers and data scientists must first visualize, transform and pre-process data to get it into a format that an algorithm can use to train a model. Even simple models can require massive amounts of compute power and time to train, and companies may need to hire dedicated teams to manage training environments that span multiple GPU-enabled servers. All of the phases of training a model, from choosing and optimizing an algorithm to tuning the millions of parameters that affect the model’s accuracy, involve manual effort and guesswork. Then, deploying a trained model within an application requires a different set of specialized skills in application design and distributed systems. As data sets and variables grow, customers have to repeat this process again and again as models become outdated and need to be continuously retrained to learn and evolve from new information. All of this takes a lot of specialized expertise, access to massive amounts of compute power and storage, and a great deal of time. To date, machine learning has been out of reach for most developers.
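The paragraph above describes the manual steps SageMaker is meant to automate. As a purely illustrative sketch (plain Python, no AWS services involved, toy data invented here), this is the kind of "train a model" step a developer would otherwise hand-roll: fitting a one-variable linear regression by ordinary least squares.

```python
# Toy illustration of the manual "train a model" step described above:
# one-variable linear regression fit by ordinary least squares.
# This is NOT AWS code; it only shows the sort of work a managed
# service like SageMaker is said to take off developers' hands.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]   # roughly y = 2x
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # → 2.01 0.0
```

Even for this trivial case, the developer picks the algorithm, prepares the data and evaluates the result by hand; scaling those same chores to deep networks and petabyte datasets is the burden the article describes.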
Amazon says SageMaker is a fully managed service that removes the heavy lifting and guesswork from each step of the machine learning process. It makes model building and training easier by providing pre-built development notebooks, popular machine learning algorithms optimized for petabyte-scale datasets and automatic model tuning. Amazon claims SageMaker also simplifies and accelerates the training process, automatically provisioning and managing the infrastructure both to train models and to run inference to make predictions using those models. AWS DeepLens was designed to help developers get hands-on experience in building, training and deploying models by pairing a physical device with a broad set of tutorials, examples, source code and integration with familiar AWS services to support learning and experimentation.

“Our original vision for AWS was to enable any individual in his or her dorm room or garage to have access to the same technology, tools, scale and cost structure as the largest companies in the world. Our vision for machine learning is no different,” said Swami Sivasubramanian, VP of Machine Learning, AWS. “We want all developers to be able to use machine learning much more expansively and successfully, irrespective of their machine learning skill level. Amazon SageMaker removes a lot of the muck and complexity involved in machine learning to allow developers to easily get started and become competent in building, training, and deploying models.”

The company says Amazon SageMaker developers can:

Easily build machine learning models with performance-optimized algorithms: SageMaker is a fully managed machine learning notebook environment for developers to explore and visualize data they have stored in Amazon Simple Storage Service (Amazon S3) and transform it using all of the popular libraries, frameworks and interfaces. SageMaker includes 10 of the most common deep learning algorithms (e.g. k-means clustering, factorization machines, linear regression and principal component analysis). Developers choose an algorithm and specify their data source, and SageMaker installs and configures the underlying drivers and frameworks. SageMaker includes native integration with TensorFlow and Apache MXNet, with additional framework support reported as coming soon. Developers can also specify any framework and algorithm they choose by uploading them into a container on the Amazon EC2 Container Registry.

Experience fast, fully managed training: Developers select the type and quantity of Amazon EC2 instances and specify the location of their data. SageMaker sets up the distributed compute cluster, performs the training, outputs the result to Amazon S3 and tears down the cluster when complete. SageMaker can automatically tune models with hyperparameter optimization, adjusting thousands of different combinations of algorithm parameters to arrive at the most accurate predictions.

Deploy models into production: SageMaker takes care of launching instances, deploying the model and setting up a secure HTTPS endpoint for the application to achieve high-throughput, low-latency predictions, as well as auto-scaling Amazon EC2 instances across multiple Availability Zones (AZs). It also provides native support for A/B testing. Once in production, SageMaker aims to eliminate the heavy lifting involved in managing machine learning infrastructure, performing health checks, applying security patches and conducting other routine maintenance.

The company says that with AWS DeepLens, developers can:

Get machine learning experience: AWS DeepLens is a fully programmable video camera designed to put deep learning into the hands of any developer. It includes an HD video camera with on-board compute capable of running deep learning computer vision models in real time.
The hardware, capable of running over 100 billion deep learning operations per second, comes with sample projects, example code and pre-trained models so developers with no machine learning experience can run their first deep learning model in less than 10 minutes. Developers can extend these tutorials to create their own custom, deep learning-powered projects with AWS Lambda functions. For example, AWS DeepLens could be programmed to recognize the numbers on a license plate and trigger a home automation system to open a garage door, or it could recognize when the dog is on the couch and send a text to its owner.

Train models in the cloud and deploy them to AWS DeepLens: Developers can train their models in the cloud with Amazon SageMaker and then deploy them to AWS DeepLens in the AWS Management Console. The camera runs the models in real time, on the device.

Clients on board

“We’ve deepened our relationship with AWS, adding them as an Official Technology Provider of the NFL, and we are excited to use Amazon SageMaker for our next-generation stats initiative,” said Michelle McKenna-Doyle, SVP and CIO, National Football League.

DigitalGlobe, a provider of high-resolution Earth imagery, data and analysis, works with enormous amounts of data every day. “[We are] making it easier for people to find, access and run compute against our 100PB image library, which is stored in the AWS cloud, in order to apply deep learning to satellite imagery,” said Dr. Walter Scott, Chief Technology Officer of Maxar Technologies and founder of DigitalGlobe. “We plan to use Amazon SageMaker to train models against petabytes of Earth observation imagery datasets using hosted Jupyter notebooks, so DigitalGlobe's Geospatial Big Data Platform (GBDX) users can just push a button, create a model and deploy it all within one scalable distributed environment at scale,” said Scott.
Matt Fryer, VP and Chief Data Science Officer of Expedia Affiliate Network, said, “We are always interested in ways to move faster, to leverage the latest technologies and to stay innovative. With Amazon SageMaker, the distributed training, optimized algorithms and built-in hyperparameter features should allow my team to quickly build more accurate models on our largest data sets, reducing the considerable time it takes us to move a model to production. It is simply an API call. Amazon SageMaker will significantly reduce the complexity of machine learning, enabling us to create a better experience for our customers, fast.”

Khalid Al-Kofahi, who leads the Thomson Reuters Center for AI and Cognitive Computing, commented, “For over 25 years we have been developing advanced machine learning capabilities to mine, connect, enhance, organize and deliver information to our customers, successfully allowing them to simplify and derive more value from their work. Working with Amazon SageMaker enabled us to design a natural language processing capability in the context of a question-answering application. Our solution required several iterations of deep learning configurations at scale using the [SageMaker] capabilities.”

New speech, language and vision services to build intelligent applications

For developers who are not experts in machine learning but are interested in using these technologies to build a new class of apps that exhibit human-like intelligence, Amazon Transcribe, Amazon Translate, Amazon Comprehend and Amazon Rekognition Video aim to provide high-quality, high-accuracy machine learning services that are scalable and cost-effective.

“Today, customers are storing more data than ever before, using Amazon Simple Storage Service (Amazon S3) as their scalable, reliable and secure data lake.
These customers want to put this data to use for their organization and customers, and to do so they need easy-to-use tools and technologies to unlock the intelligence residing within this data,” said Swami Sivasubramanian, VP of Machine Learning, AWS. “We’re excited to deliver four new machine learning application services that will help developers immediately start creating a new generation of intelligent apps that can see, hear, speak and interact with the world around them.”

Amazon Transcribe (available in preview) converts speech to text, allowing developers to turn audio files stored in Amazon S3 into accurate, fully punctuated text. Transcribe can generate a time stamp for every word so that developers can precisely align the text with the source file. Today it supports English and Spanish, with more languages to follow. In the coming months, Amazon says Transcribe will be able to recognize multiple speakers in an audio file and will also allow developers to upload custom vocabularies for more accurate transcription of those words.

Amazon Translate (available in preview) uses neural machine translation techniques to provide translation of text from one language to another. Translate can handle short or long-form text and supports translation between English and six other languages (Arabic, French, German, Portuguese, Simplified Chinese and Spanish), with more slated for 2018.

Amazon Comprehend (available today) can understand natural language text from documents, social network posts, articles or other textual data stored in AWS. Comprehend uses deep learning techniques to identify text entities (e.g. people, places, dates, organizations), the language the text is written in, the sentiment expressed in the text and key phrases with concepts and adjectives, such as ‘beautiful,’ ‘warm,’ or ‘sunny.’ Comprehend has been trained on a range of datasets, including product descriptions and customer reviews, to build language models that extract key insights from text.
It also has a topic modeling capability that helps applications extract common topics from a corpus of documents. Comprehend integrates with AWS Glue to enable analytics of text data stored in Amazon S3 and other popular Amazon data sources.

Amazon Rekognition Video (available today) can track people, detect activities and recognize objects, faces, celebrities and inappropriate content in millions of videos stored in Amazon S3. It also provides real-time facial recognition across millions of faces for live-stream videos. Rekognition Video’s API is powered by computer vision models that are trained to detect thousands of objects and activities and extract motion-based context from both live video streams and video content stored in Amazon S3. Rekognition Video can automatically tag specific sections of video with labels and locations (e.g. beach, sun, child), detect activities (e.g. running, jumping, swimming), detect, recognize and analyze faces, and track multiple people, even if they are partially hidden from view in the video.

Customers of these services include: RingDNA, a communications platform for sales teams (using Transcribe); Isentia, a media intelligence software company (using Translate); The Washington Post, enterprise application and service provider Infor, and supply chain platform Elementum (each using Comprehend); and Motorola and The City of Orlando (each using Rekognition Video).

To learn more about AWS's machine learning services, visit:

Article published by Anna Hill
Image credit by AWS
Want more? For Job Seekers | For Employers | For Influencers
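To make the shape of these application services concrete, here is a deliberately naive sketch of the contract a service like Comprehend exposes: text in, a sentiment label and some scores out. The word lists and keyword counting below are invented for illustration only; Comprehend itself uses deep learning, not keyword matching, and the response field names here are simplified, not the real API's.

```python
# Toy sketch of a sentiment-labeling contract, loosely modeled on what
# a managed NLP service like Amazon Comprehend provides. The keyword
# lists and scoring are invented for illustration; this is not how the
# real service works internally, nor its real API response format.

POSITIVE = {"beautiful", "warm", "sunny", "great", "love"}
NEGATIVE = {"ugly", "cold", "gloomy", "bad", "hate"}

def toy_sentiment(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        label = "POSITIVE"
    elif neg > pos:
        label = "NEGATIVE"
    else:
        label = "NEUTRAL"
    return {"Sentiment": label, "PositiveHits": pos, "NegativeHits": neg}

print(toy_sentiment("What a beautiful, warm and sunny day"))
# → {'Sentiment': 'POSITIVE', 'PositiveHits': 3, 'NegativeHits': 0}
```

The point of the managed services above is that developers get this kind of structured result from a single API call, without building or training the underlying model.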
One of the basic human needs for the sustenance of life is food. What is the genesis of the food we consume on a daily basis? In a word: agriculture. Agriculture and its methods of planting, cultivating and harvesting have been around for centuries, from the dawn of mankind to the current age. Several schools of thought drive the agricultural industry, from environmental considerations to profitability to social concerns. The industry represents trillions of dollars in trade, 40% of the job market, 30% of greenhouse gas emissions and 10% of consumer spending within the global marketplace. Advances in technology have enhanced the agricultural industry by encouraging the development of improved products from farm to table, increasing productivity and sustainability, and providing better efficiency for the agricultural resources available for consumption.

The technology of farming – Drones and advances in IoT

We live in the age of drones, which can be deployed easily to farming communities to take better inventory and monitor daily life on the farm. Livestock can even be tagged with barcodes that a drone can scan to track the animals' movement around the farm, their eating habits, growth and overall health. Drones can relay messages or video to the landowner to track the growth of fruits and vegetables, to scan data regarding the best growing conditions, to plant seeds and to harvest crops. Weather data can be exchanged to monitor for drought conditions, excessive rain, wind and temperature – all natural elements which can make or break a farm if conditions are not optimal. These technological advances will revolutionize the agricultural industry, bringing progressive and substantial commerce.
Precision agriculture will become prevalent as information technology optimizes economic performance, significantly improving this industry of daily consumption and enabling growth and expansion of the global food supply. As productivity increases, the scarcity of goods will decrease, resulting in lower costs for the consumer. In addition, costs will be controlled by eliminating many of the challenges farms face with equipment maintenance, consultants, overhead and the like. Farms will become more connected with each other through real-time exchange of data and monitoring of the industry. It is predicted there will be 75 million IoT devices on farms by the year 2020, and that 4.1 million data points will be generated daily by 2050. Cognitive devices may prove challenging for the agricultural industry, however. The ease of data collection through handheld devices such as iPhones and tablets, and the rapid exchange of that data, will benefit the agricultural community as well as the “farm to table” public enjoying the bountiful harvest from efficient and robust production of goods directly from the providers.

Agriculture can reap many benefits from progressive IoT technology. These technologies can be deployed to select the crops with the most growth potential for specific areas or climates. Weather patterns can be researched and soil samples taken to determine the seeds that can be cultivated to their maximum potential on specific farms. The types of insects that target specific crops can be researched for patterns of infestation, allowing farms to take appropriate precautions with their crops. And consumer trends, in tandem with marketplace demands, can help a farm calculate profitability.
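The monitoring idea above can be sketched in a few lines of plain Python: aggregate daily soil-moisture readings from field sensors and flag fields at risk of drought. The field names, reading values and the 20% threshold are all hypothetical, chosen only to illustrate the pattern.

```python
# Hedged sketch of IoT-style farm monitoring: average each field's
# daily soil-moisture readings and flag fields below a drought
# threshold. Field names, readings and the threshold are hypothetical.

DROUGHT_THRESHOLD = 0.20  # fraction of soil saturation; assumed value

def fields_at_risk(readings):
    """readings: {field_name: [daily soil-moisture fractions]}"""
    at_risk = []
    for field, values in readings.items():
        avg = sum(values) / len(values)
        if avg < DROUGHT_THRESHOLD:
            at_risk.append(field)
    return sorted(at_risk)

readings = {
    "north_pasture": [0.31, 0.28, 0.25],
    "east_field":    [0.18, 0.15, 0.12],   # drying out
}
print(fields_at_risk(readings))  # → ['east_field']
```

In a real deployment the readings would stream in from sensors or drones rather than a hard-coded dictionary, but the aggregate-then-alert shape is the same.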
Chatbots – Predicting a bountiful harvest through technology

Chatbots will be the trend in agricultural communities, serving as real-time assistants that provide instantaneous interactions with their users. These artificial intelligence components are personalized to the user's specific needs. Chatbots are able to comprehend language and can be programmed with agricultural terminology, giving the user an assistant at the ready for accessing immediate data. Currently, you will find chatbots in the realms of travel, media and retail; however, they can readily be modified to assist the agricultural community with specific farming needs and advice.

Utilization of drones

As mentioned earlier, drones will be used on a regular basis to collect farming data. The Watson IoT platform will integrate machine learning capabilities with data derived from the drones. Everyday management systems will be modified and integrated into systems of artificial intelligence. Data can be pulled from various sources, whether historical or archived data, current weather reports, soil sample research or consumer data, to name a few. AI will change the modern agricultural community as we know it today into a progressive and profitable business that better serves the consumer.

In the last 50 years, as citizens have moved from farming communities to urban areas, there has been a significant decline in the traditional farm of days gone by. It is predicted that in just a few decades, over half of the world’s population will reside in urban areas. If few people remain in rural farming areas, only a small share of the population will work in the agricultural industry. To address this looming labor shortage on the farm, cognitive systems will be implemented at just the right time to serve the needs of agriculture.
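At its simplest, the agricultural chatbot idea described above reduces to mapping terms in a farmer's question to relevant answers. The sketch below is a toy under that assumption: the keywords and canned responses are invented, and a production assistant would use natural language processing services rather than exact keyword matching.

```python
# Minimal sketch of a keyword-triggered agricultural assistant.
# The vocabulary and responses are invented for illustration; a real
# chatbot would use NLP to understand free-form questions.

RESPONSES = {
    "frost":      "Frost warning tonight; cover sensitive seedlings.",
    "irrigation": "Soil moisture is adequate; next irrigation in 2 days.",
    "pest":       "Aphid activity reported nearby; inspect leaf undersides.",
}

def reply(question):
    q = question.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in q:
            return answer
    return "Sorry, I don't have data on that yet."

print(reply("When should I schedule irrigation?"))
# → Soil moisture is adequate; next irrigation in 2 days.
```

The value for a farmer is not the lookup itself but the "at the ready" access the article describes: the answers would be backed by live sensor and weather data rather than fixed strings.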
Through this innovative technology, there will not even be a need for someone to be on the land being cultivated. Actions and decisions could be taken remotely, business decisions could be made instantaneously, and challenges could be recognized and more informed decisions made and executed, thus avoiding potential crises.

Moving forward with productivity and innovation

As an example, visual recognition APIs and the IBM Watson IoT platform were joined with Aerialtronics to create commercial drones that capture real-time data and images for progressive agricultural analysis: monitoring crops, the specific seeds used for planting, and the health and life cycles of specific plants. These AI tools will streamline the agricultural industry and, in turn, save crops from destruction due to pestilence or weather conditions that can prove fatal. Most importantly, these AI tools will drive efficiency and profitability. The agricultural industry is long overdue for a revolution in productivity and profitability. With progressive innovations in technology, specifically IoT, agriculture will grow like never before, with increases in marketable livestock and produce from which the entire world will benefit.

Article written by Raj Kosaraju
Image credit by Getty Images, Moment, Miguel Sotomayor
(This article is a sponsored placement written by Capella University, and republished from the Capella blog.)

You’ll find a large selection of books available on the topic of data analytics and business intelligence, but how do you know which ones will best support your career? Capella University faculty members offer six books every data professional should read.

1. “Signal: Understanding What Matters in a World of Noise” By Stephen Few (2015)

Stephen Few is a data visualization guru who teaches practical techniques for analyzing and presenting quantitative information. In this age of big data, organizations are implementing new technologies to increase the amount of information they can collect and store. However, the vast amount of collected data makes it harder to find the important bits of information within. Few provides straightforward instruction on how to differentiate useful information (signals) from the noise. He teaches readers how to apply statistics and visual methods to gain a comprehensive understanding of data, and encourages professionals to look for ways to detect changes in the patterns that characterize data.

Additional Stephen Few books worth reading include “Now You See It: Simple Visualization Techniques for Quantitative Analysis” (2009), “Show Me the Numbers: Designing Tables and Graphs to Enlighten” (2012) and “Information Dashboard Design: Displaying Data for At-a-Glance Monitoring” (2013).

2. “The Visual Display of Quantitative Information” By Edward Tufte (2001)

This book covers the practices and theories behind some of the best and worst statistical graphics, charts and tables. The author discusses how to communicate statistical data through the simultaneous presentation of words, numbers and statistics in a precise manner optimal for quick analysis.

3. “Big Data, Data Mining, and Machine Learning: Value Creation for Business Leaders and Practitioners” By Jared Dean (2014)

Big data shapes critical decision-making processes in the business world. Dean provides an overview of the current state of data analytics and growing trends toward high-performance analytics tools. This book is a comprehensive resource for technology and marketing executives looking to drive efficiency and produce positive results.

4. “R for Everyone: Advanced Analytics and Graphics” By Jared P. Lander (2013)

Lander’s book introduces the open source R language for building statistical models, offering extensive hands-on practice and sample code. Readers will download and install R, navigate the environment, master basic program control, import and manipulate data, and practice several essential tests. Even non-statisticians can acquire the foundation necessary to construct several of their own models and use data mining techniques. Lander’s intuitive guide allows any data professional to understand and write R programs to tackle all manner of statistical problems.

5. “Principles of Data Integration” By AnHai Doan, Alon Halevy and Zachary Ives (2012)

This book provides a thorough introduction to the theory and concepts of today’s data integration techniques, including tips for application. Data integration addresses the challenge of extracting data from multiple sources, whether across a large enterprise, in query processing on the Web, in coordination between government agencies or in collaboration between scientists. Through a range of data integration exercises, readers will learn how to build new algorithms and implement effective data integration applications.

6. “An Introduction to SAS® Visual Analytics: How to Explore Numbers, Design Reports, and Gain Insight into Your Data” By Tricia Aanderud, Rob Collum and Ryan Kumpfmiller (2017)

This book provides detailed instructions for using the SAS Visual Analytics Designer and offers insight into the elements of data visualization. Readers will learn how to access, prepare and present data with SAS Visual Analytics, helping them go from accessing content to building a report and customizing data visualizations in no time.

Capella offers a variety of programs in data analytics and business intelligence, including:

Bachelor of Science in Information Technology, Data Analytics Minor
Bachelor of Science in Information Technology, Data Management Minor
Bachelor of Science in Business, Business Intelligence Minor
Master of Business Administration in Business Intelligence
Master of Science in Analytics
Doctor of Business Administration in Business Intelligence
Graduate Certificate in Business Intelligence
Graduate Certificate in Analytics Using SAS
Graduate Certificate in Advanced Analytics Using SAS

Article published by Anna Hill
Image credit by Unsplash
View All Insights