Friday, October 30, 2015

The promise of the blockchain

The trust machine

The technology behind bitcoin could transform how the economy works

BITCOIN has a bad reputation. The decentralised digital cryptocurrency, powered by a vast computer network, is notorious for the wild fluctuations in its value, the zeal of its supporters and its degenerate uses, such as extortion, buying drugs and hiring hitmen in the online bazaars of the “dark net”.
This is unfair. The value of a bitcoin has been pretty stable, at around $250, for most of this year. Among regulators and financial institutions, scepticism has given way to enthusiasm (the European Union recently recognised it as a currency). But most unfair of all is that bitcoin’s shady image causes people to overlook the extraordinary potential of the “blockchain”, the technology that underpins it. This innovation carries a significance stretching far beyond cryptocurrency. The blockchain lets people who have no particular confidence in each other collaborate without having to go through a neutral central authority. Simply put, it is a machine for creating trust.
The blockchain food chain
To understand the power of blockchain systems, and the things they can do, it is important to distinguish between three things that are commonly muddled up, namely the bitcoin currency, the specific blockchain that underpins it and the idea of blockchains in general. A helpful analogy is with Napster, the pioneering but illegal “peer-to-peer” file-sharing service that went online in 1999, providing free access to millions of music tracks. Napster itself was swiftly shut down, but it inspired a host of other peer-to-peer services. Many of these were also used for pirating music and films. Yet despite its dubious origins, peer-to-peer technology found legitimate uses, powering internet startups such as Skype (for telephony) and Spotify (for music streaming)—and also, as it happens, bitcoin.
The blockchain is an even more potent technology. In essence it is a shared, trusted, public ledger that everyone can inspect, but which no single user controls. The participants in a blockchain system collectively keep the ledger up to date: it can be amended only according to strict rules and by general agreement. Bitcoin’s blockchain ledger prevents double-spending and keeps track of transactions continuously. It is what makes possible a currency without a central bank.
Blockchains are also the latest example of the unexpected fruits of cryptography. Mathematical scrambling is used to boil down an original piece of information into a code, known as a hash. Any attempt to tamper with any part of the blockchain is apparent immediately—because the new hash will not match the old ones. In this way a science that keeps information secret (vital for encrypting messages and online shopping and banking) is, paradoxically, also a tool for open dealing.
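To make the idea concrete, here is a minimal sketch of a hash-chained ledger of the kind described above. It is a toy illustration of tamper-evidence only, not Bitcoin’s actual data structures or consensus rules, and the field names are invented:

```python
# A toy hash chain, sketching the tamper-evidence described above.
# This is a minimal illustration of the concept, not Bitcoin's actual
# data structures or consensus rules; the field names are invented.
import hashlib
import json

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"prev_hash": prev_hash, "data": data}).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data,
                  "hash": block_hash(prev, data)})

def verify(chain: list) -> bool:
    """Recompute every hash; editing any earlier block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if (block["prev_hash"] != prev
                or block_hash(prev, block["data"]) != block["hash"]):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
print(verify(chain))                     # True
chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(verify(chain))                     # False: the hashes no longer match
```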
Bitcoin itself may never be more than a curiosity. However, blockchains have a host of other uses because they meet the need for a trustworthy record, something vital for transactions of every sort. Dozens of startups now hope to capitalise on the blockchain technology, either by doing clever things with the bitcoin blockchain or by creating new blockchains of their own (see article).
One idea, for example, is to make cheap, tamper-proof public databases—land registries, say (Honduras and Greece are interested), or registers of the ownership of luxury goods or works of art. Documents can be notarised by embedding information about them into a public blockchain—and you will no longer need a notary to vouch for them. Financial-services firms are contemplating using blockchains as a record of who owns what instead of having a series of internal ledgers. A trusted private ledger removes the need for reconciling each transaction with a counterparty, is fast and minimises errors. Santander reckons that it could save banks up to $20 billion a year by 2022. Twenty-five banks have just joined a blockchain startup, called R3 CEV, to develop common standards, and NASDAQ is about to start using the technology to record trading in securities of private companies.
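A sketch of the notarisation idea follows: only a document’s cryptographic fingerprint is recorded, which later proves the file is unchanged. A plain dictionary stands in for the public blockchain here, a deliberate simplification of how a real proof-of-existence service works:

```python
# Sketch of blockchain notarisation: record only the document's
# fingerprint, then prove later that the file existed in this exact form.
# A plain dict stands in for the public blockchain (a simplification).
import hashlib

def fingerprint(document: bytes) -> str:
    return hashlib.sha256(document).hexdigest()

ledger = {}  # stand-in for a public blockchain

deed = b"Title deed: plot 42 belongs to Maria."
ledger[fingerprint(deed)] = "2015-10-30"  # timestamp recorded on-chain

# Years later, anyone can check that this exact document existed:
print(fingerprint(deed) in ledger)                          # True
print(fingerprint(b"Plot 42 belongs to Pedro.") in ledger)  # False
```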
These new blockchains need not work in exactly the way that bitcoin’s does. Many of them could tweak its model by, for example, finding alternatives to its energy-intensive “mining” process, which pays participants newly minted bitcoins in return for providing the computing power needed to maintain the ledger. A group of vetted participants within an industry might instead agree to join a private blockchain, say, that needs less security. Blockchains can also implement business rules, such as transactions that take place only if two or more parties endorse them, or if another transaction has been completed first. As with Napster and peer-to-peer technology, a clever idea is being modified and improved. In the process, it is fast throwing off its reputation for shadiness.
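A business rule like the multi-party endorsement mentioned above can be sketched in a few lines. Real blockchains would enforce it with digital signatures; the boolean flags here are an assumption made purely for illustration:

```python
# Sketch of an endorsement rule: a transaction takes effect only once
# at least two of the named parties have approved it. Real systems
# check digital signatures; boolean flags are used here for brevity.
REQUIRED_ENDORSEMENTS = 2

def transaction_valid(endorsements: dict) -> bool:
    """Business rule: at least two parties must endorse the transaction."""
    return sum(endorsements.values()) >= REQUIRED_ENDORSEMENTS

tx = {"buyer": True, "seller": True, "escrow_agent": False}
print(transaction_valid(tx))  # True: two of the three parties endorsed
```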
New chains on the block
The spread of blockchains is bad for anyone in the “trust business”—the centralised institutions and bureaucracies, such as banks, clearing houses and government authorities that are deemed sufficiently trustworthy to handle transactions. Even as some banks and governments explore the use of this new technology, others will surely fight it. But given the decline in trust in governments and banks in recent years, a way to create more scrutiny and transparency could be no bad thing.
Drawing up regulations for blockchains at this early stage would be a mistake: the history of peer-to-peer technology suggests that it is likely to be several years before the technology’s full potential becomes clear. In the meantime regulators should stay their hands, or find ways to accommodate new approaches within existing frameworks, rather than risk stifling a fast-evolving idea with overly prescriptive rules.
The notion of shared public ledgers may not sound revolutionary or sexy. Neither did double-entry book-keeping or joint-stock companies. Yet, like them, the blockchain is an apparently mundane process that has the potential to transform how people and businesses co-operate. Bitcoin fanatics are enthralled by the libertarian ideal of a pure, digital currency beyond the reach of any central bank. The real innovation is not the digital coins themselves, but the trust machine that mints them—and which promises much more besides.
BLOG

Operationalize Predictive Analytics for Significant Business Impact

One of the key findings in our latest benchmark research into predictive analytics is that companies are incorporating predictive analytics into their operational systems more often than was the case three years ago.
The research found that companies are less inclined to purchase stand-alone predictive analytics tools (29% vs 44% three years ago) and more inclined to purchase predictive analytics built into business intelligence systems (23% vs 20%), applications (12% vs 8%), databases (9% vs 7%) and middleware (9% vs 2%). This trend is not surprising since operationalizing predictive analytics – that is, building predictive analytics directly into business process workflows – improves companies’ ability to gain competitive advantage: those that deploy predictive analytics within business processes are more likely to say they gain competitive advantage and improve revenue through predictive analytics than those that don’t.
In order to understand the shift that is underway, it is important to understand how predictive analytics has historically been executed within organizations. The marketing organization provides a useful example since it is the functional area where organizations most often deploy predictive analytics today.
In a typical organization, those doing statistical analysis will export data from various sources into a flat file. (Often IT is responsible for pulling the data from the relational databases and passing it over to the statistician in a flat file format.) Data is cleansed, transformed, and merged so that the analytic data set is in a normalized format. It then is modeled with stand-alone tools and the model is applied to records to yield probability scores.
In the case of a churn model, such a probability score represents how likely someone is to defect. For a marketing campaign, a probability score tells the marketer how likely someone is to respond to an offer. These scores are produced for marketers on a periodic basis – usually monthly. Marketers then work on the campaigns informed by these static models and scores until the cycle repeats itself.
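A condensed sketch of that monthly batch workflow might look like the following, using pandas and scikit-learn; the file name and column names are invented for illustration, and a real project would substitute its own analytic data set:

```python
# Condensed sketch of the monthly batch scoring workflow described above.
# File and column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# 1. IT exports data from the relational databases into a flat file.
df = pd.read_csv("customers_export.csv")

# 2. Cleanse and transform into a normalised analytic data set.
features = ["monthly_spend", "support_calls", "tenure_months"]
df = df.dropna(subset=features + ["churned"])

# 3. Model with a stand-alone tool (a logistic regression, for example).
model = LogisticRegression()
model.fit(df[features], df["churned"])

# 4. Apply the model to yield probability scores for the marketers.
df["churn_score"] = model.predict_proba(df[features])[:, 1]
df[["customer_id", "churn_score"]].to_csv("monthly_scores.csv", index=False)
```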
The challenge presented by this traditional model is that a lot can happen in a month and the heavy reliance on process and people can hinder the organization’s ability to respond quickly to opportunities and threats. This is particularly true in fast-moving consumer categories such as telecommunications or retail.
For instance, if a person visits the company’s cancelation policy web page the instant before he or she picks up the phone to cancel the contract, this customer’s churn score will change dramatically and the action that the call center agent should take will need to change as well.
Perhaps, for example, that score change should mean that the person is now routed directly to an agent trained to deal with possible defections. But such operational integration requires that the analytic software be integrated with the call agent software and web tracking software in near-real time.
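A sketch of that event-driven integration is below. The rule-based scorer and the queue names are hypothetical stand-ins for the deployed churn model and the call-centre software:

```python
# Sketch of near-real-time integration: a web-tracking event triggers
# an immediate re-score, which changes how the next call is routed.
# The scorer and queue names are hypothetical stand-ins.
RETENTION_THRESHOLD = 0.8

def rescore(customer: dict) -> float:
    """Stand-in scorer; a real system would call the deployed churn model."""
    score = customer.get("base_churn_score", 0.2)
    if customer.get("visited_cancellation_page"):
        score = min(1.0, score + 0.6)  # weigh in the fresh behavioural signal
    return score

def route_call(customer: dict) -> str:
    """Choose a call-centre queue from the customer's current score."""
    if rescore(customer) > RETENTION_THRESHOLD:
        return "retention_specialists"
    return "general_queue"

customer = {"id": 1017, "base_churn_score": 0.3}
print(route_call(customer))                   # general_queue
customer["visited_cancellation_page"] = True  # the web event arrives
print(route_call(customer))                   # retention_specialists
```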
Similarly, the models themselves need to be constantly updated to deal with the fast pace of change. For instance, if a competing telecommunications carrier offers a large rebate to customers who switch service providers, an organization’s churn model can be rendered out of date and should be updated.
Our research shows that organizations that constantly update their models gain competitive advantage more often than those that only update them periodically (86% vs 60% average), more often show significant improvement in organizational activities and processes (73% vs 44%), and are more often very satisfied with their predictive analytics (57% vs 23%).
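One simple pattern for constant updating is to monitor live accuracy over a recent window and retrain when it degrades. The floor value and the retraining hook in this sketch are illustrative assumptions, not a recipe:

```python
# Track recent live accuracy and retrain when it falls below a floor.
# The floor value and retraining hook are illustrative assumptions.
ACCURACY_FLOOR = 0.75

def check_and_retrain(predictions, outcomes, retrain_fn) -> str:
    """Retrain whenever recent live accuracy drops below the agreed floor."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    if correct / len(outcomes) < ACCURACY_FLOOR:
        retrain_fn()  # e.g. refit the model on the latest labelled data
        return "retrained"
    return "model still healthy"

# A competitor's rebate campaign shifts behaviour and accuracy collapses:
print(check_and_retrain([1, 1, 0, 0], [0, 0, 0, 1], lambda: None))  # retrained
```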
Building predictive analytics into business processes is more easily discussed than done; complex business and technical challenges must be addressed. The skills gap that I recently wrote about is a significant barrier to implementing predictive analytics. Making predictive analytics operational requires not only statistical and business skills but technical skills as well. From a technical perspective, one of the biggest challenges for operationalizing predictive analytics is accessing and preparing data, which I have also written about.
Four out of ten companies say that this is the part of the predictive analytics process where they spend the most time.
Choosing the right software is another challenge that I wrote about. Making that choice includes identifying the specific integration points with business intelligence systems, applications, database systems, and middleware. These decisions will depend on how people use the various systems and what areas of the organization are looking to operationalize predictive analytics processes.
For those that are willing to take on the challenges of operationalizing predictive analytics the rewards can be significant, including significantly better competitive positioning and new revenue opportunities. Furthermore, once predictive analytics is initially deployed in the organization it snowballs, with more than nine in ten companies going on to increase their use of predictive analytics.
Once companies reach that stage, one third of them (32%) say predictive analytics has had a transformational impact and another half (49%) say it provides significant positive benefits.

Thursday, October 29, 2015


Thank goodness Gutenberg invented the printing press!

INFOGRAPHICS

INFOGRAPHIC: Comparing Data Science and Analytics

Source: smartdatacollective.com
Link: Comparing Data Science and Analytics

Does the 'e-conomy' really exist?

Is the independent-work economy changing the world of work, or is it just a storm in a teacup?

It is common to hear that traditional forms of employment have their days numbered. The topic has caught on in the United States and is also the subject of heated debate in Britain. More and more people make a living, or supplement their income, by selling goods on Etsy or eBay, offering taxi services through Uber (perhaps listing their car for hire on easyCar Club when they are not carrying passengers) or putting up tourists, via Airbnb, in the spare room at home (and maybe also offering, via JustPark, parking spaces in their garages). In short, the world of work seems to be changing. This is the so-called e-conomy (where the "e-" signals its electronic, entrepreneurial and possibly eclectic character), through which individual goods and services are traded independently and online.
The trouble is that the official data show few signs of this revolution. In the United States and Britain, the share of workers in steady jobs has changed little in recent decades, and the share of people holding more than one job has also remained stable. In America the percentage of self-employed workers is actually falling, and although it had been growing quickly in Britain, it seems to have levelled off last year.
A closer, deeper look at the data does not confirm the e-conomy's existence either. Freelancers, the most likely representatives of this brave new world of work, barely amount to 2% of the workforce, a figure that has not changed much in the past 15 years. Growth is considerably sharper among the self-employed, but there is no way to say with confidence whether the e-conomy or traditional forms of self-employment lie behind it.
What if the independent, casual work that characterises the e-conomy is something people do alongside traditional jobs, rather than their main source of income? The official data do not confirm that hypothesis either. Only a small minority of workers, the data suggest, earn supplementary income from self-employed activities.
Might traces of the e-conomy show up in the kind of work the self-employed do? The picture is ambiguous. Since 2009 the fastest-growing self-employed occupations have been hairdressers, cleaners and management consultants. It is true that these services can be offered through typical e-conomy channels, but it is equally true that self-employment in these activities was already rising before the e-conomy emerged. The fourth fastest-growing self-employed occupation in recent years is "property letting and management", which could reflect the rising number of people renting out their homes and garages online. On the other hand, taxi services shrank more than any other self-employed occupation, which perhaps undermines the idea that Uber is taking over the market.
The official data, then, lend no support to the notion that the e-conomy is already a reality. But that is not the same as proving the sceptics right. There are at least two factors that argue against them.
The first is that the revolution may only be beginning. For example, analysts say that the significant rise in the basic pay floor that British employers will be required to offer, under the new minimum-wage policy adopted by David Cameron's government, may push more people into self-employment. Significant losses in income-tax rebates next year could also lead many Britons to consider alternative sources of income to supplement the wages they earn in their jobs.
The second factor is that we may not be asking the right questions. Surveys by official statistics agencies have never been very good at capturing changes in the labour market. The controversy over the real extent of "zero-hours" contracts (a type of employment contract that became popular in Britain after the 2007-08 financial crisis, in which workers are paid only for the hours they actually work, from "zero hours" up to full time, and may receive nothing when the employer has no work to offer) illustrates the point well: the official figures may be underestimates, because people confuse this kind of employment relationship with ordinary temporary work.
E-conomy activities are particularly prone to this kind of confusion. For example, people tend not to realise that renting out their home or car counts as work, and so they omit it when talking to government statisticians. In the United States, preliminary evidence suggests this may indeed be happening. In Britain, a recent survey by the software developer Intuit, focused on e-conomy activities, indicates that 6% of Britons currently earn income through sharing-economy mechanisms, a somewhat higher proportion than the official statistics suggest.
Measuring the e-conomy matters. Reliable information about how people combine jobs, work and other activities to generate income gives a better picture of their standard of living, and thus of what really matters: whether the e-conomy's effects on workers are positive or negative. That is why it is essential for official statistics agencies to develop ways to measure these new kinds of economic activity.
In life, the ability to make choices is generally a good thing. And the lowering of barriers to entry brought about by today's technology tends to democratise people's opportunities to engage in entrepreneurship. But the fragmentation associated with the e-conomy may also bring new forms of vulnerability, and should at the very least require rethinking how the impact of public policy is assessed with regard to workers' (and consumers') rights, income stabilisation and pensions. To get a proper view of how freedom, security and the scope for policy intervention balance out, the first step is to create statistical tools that show more clearly and precisely who the actors in the e-conomy actually are.
© 2015 THE ECONOMIST NEWSPAPER LIMITED. ALL RIGHTS RESERVED. TRANSLATED BY ALEXANDRE HUBNER AND PUBLISHED UNDER LICENCE. THE ORIGINAL ENGLISH-LANGUAGE TEXT IS AVAILABLE AT WWW.ECONOMIST.COM.

Wednesday, October 28, 2015

Amazon Will Disrupt Business Intelligence, Analytics Markets

Get ready for AWS business intelligence (BI): it's real and it packs a punch!
Today’s BI market is like a perpetual motion machine — an unstoppable engine that never seems to run out of steam. Forrester currently tracks more than 50 BI vendors, and not a month goes by without a software vendor or startup with tangential BI capabilities trying to take advantage of the craze for BI, analytics, and big data. This month is no exception: On October 7, Amazon crashed the party by announcing QuickSight, a new BI and analytics data management platform. BI pros will need to pay close attention, because this new platform is inexpensive, highly scalable, and has the potential to disrupt the BI vendor landscape. QuickSight is based on AWS’s cloud infrastructure, so it shares AWS characteristics like elasticity, abstracted complexity, and a pay-per-use consumption model. Specifically, the new QuickSight platform provides:
  • New ways to get terabytes of data into AWS
  • Automatic enrichment of AWS metadata for more effective BI
  • An in-memory accelerator (SPICE) to speed up big data analytics
  • An industrial-grade data analysis and visualization platform (QuickSight), including mobile clients
  • Open APIs
But the best part is the price! QuickSight comes in two flavors: the standard edition of QuickSight is $9 per month per user with 10 GB of SPICE storage. The enterprise edition adds features like Active Directory integration, user access controls and encryption at double the throughput of the standard edition for $18 per month per user. Users of both editions can add storage for $0.25 per GB per month. Such low subscription costs present a formidable challenge to BI vendors that charge an order of magnitude more — and even to the similarly priced Microsoft PowerBI.
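As a worked example of the subscription maths, the sketch below uses only the prices quoted in this post; actual AWS pricing may differ and should be checked against the current price list:

```python
# Worked example of the QuickSight subscription maths, using the prices
# quoted in this post; verify against AWS's current price list.
PRICE_PER_USER = {"standard": 9.00, "enterprise": 18.00}
EXTRA_STORAGE_PER_GB = 0.25
INCLUDED_SPICE_GB = 10  # assumed per user, as quoted for the standard edition

def monthly_cost(edition: str, users: int, spice_gb_per_user: int) -> float:
    extra_gb = max(0, spice_gb_per_user - INCLUDED_SPICE_GB) * users
    return users * PRICE_PER_USER[edition] + extra_gb * EXTRA_STORAGE_PER_GB

# 100 analysts on the standard edition, each needing 20 GB of SPICE:
print(monthly_cost("standard", 100, 20))  # 1150.0 dollars per month
```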
 
But, alas, QuickSight is not a panacea; no BI platform is. A highly effective BI platform has to balance two often contradictory requirements: ease of use and comprehensiveness. While many leading BI vendors come close to addressing this challenge, for every BI feature ease of use may come at the expense of advanced functionality, or vice versa. BI pros who consider adding QuickSight to their existing arsenal of BI tools at this early stage of the game will have to deal with QuickSight’s limitations and concerns:
  • A lack of out-of-the-box connectors to popular enterprise applications
  • Reliance on database models and schemas
  • The lack of a report writer with pixel-perfect report generation capabilities
  • Unclear long-term total cost of ownership
Read the detailed Forrester report, which will:
  • Dig deeper into the details behind QuickSight’s strengths and concerns
  • Advise BI professionals on if and when to consider QuickSight as their BI platform
  • Explore how QuickSight may potentially change the BI vendor landscape as we know it

Tuesday, October 27, 2015

Data Science Test - How do you rank?

We created a Data Science position test. 
This short test includes a few multiple-choice questions to check your knowledge.
This test includes the following skills:
  • R
  • Hadoop
  • Python

So You Want to be a Data Scientist...

Posted by:
Jonathan Buckley
— Bio —


Jonathan Buckley is a Silicon Valley serial entrepreneur with a career focus on bringing highly disruptive B2B technologies to market in the enterprise data and security-related spaces. With a background in econometric modeling and business strategy from Arthur Andersen LLC, Jonathan has led award-winning marketing teams at many notable companies, ranging from a co-founded IoT startup (since acquired) to a NASDAQ 100 networking company (since acquired) where revenues for his product grew from $60M to $222M per year under his leadership. After launching one of the world’s first enterprise cloud storage companies in 2007, Jonathan founded his own consultancy in 2008, The Artesian Network, LLC, specializing in bringing disruptive technologies to market using lean startup techniques. Qubole joined the roster of Artesian clients in early 2015, and Jonathan and members of his team joined Qubole full-time later in the year to give their full attention to the company’s hypergrowth.


The field of data science is admittedly an attractive one for people who are already skilled with computers and statistics. With big data all the rage these days, the demand for qualified data scientists is higher than it has ever been. Not only does the average data scientist command a high salary (more than $110,000), but some of the skills associated with the profession were listed on LinkedIn as the top skills that got people hired last year. Everyone seems to want to use big data to improve their businesses, even companies that you normally wouldn’t associate with cutting-edge technology. With data scientists in so much demand, you are probably interested in pursuing this career path. Being a data scientist can be a rewarding and fulfilling experience, but it’s not the easiest goal to attain. If you’re still unsure how best to become a big data scientist, the following tips should prove useful in getting yourself noticed above the competition.
Any first step in getting to where you want in a career should include getting the right education. With big data science being such a new field, it may be tough to find programs at universities and other institutions specifically dedicated to this course of study. Luckily, many top-notch schools have responded to the growing need by developing their own courses designed to train a new generation of data scientists. These come in the form of Master’s and Ph.D. programs that give students the advanced skills they’ll need to land a job in a data science career. Some of these programs come from famed learning institutions like Stanford, NYU, Columbia, and Harvard.
Getting a Master’s or Ph.D. is basically a requirement now, since nearly 90 percent of data scientists have a Master’s degree, while nearly half have a Ph.D. If a degree in data science isn’t available, you can also show your expertise by getting a degree in computer science, machine learning, or mathematical statistics. Educational opportunities may also be available through free online courses, but most experts seem to agree that these classes don’t do enough. They act more as a learning template for the future and don’t teach the skills needed to secure a job in the near future. Online courses may be used more as a supplemental learning tool to go with your already extensive education, but they shouldn’t be looked at as a total replacement.
Of course, all these classes are there to teach you some of the technical skills employers will be looking for. The list of these skills can be a lengthy one, but a few of the most important include expertise in Python coding, familiarity with using big data tools like Hadoop and Apache Spark, and experience in working with unstructured data. The unstructured data skill is perhaps one of the more challenging to acquire since many organizations struggle with it even today, but proving you can manage and create real value from it will attract attention from employers of all types. Showing your skills with ad-hoc data analysis is a necessary step on the path to a full data science career.
While much of the focus may be placed on the technical aspects of what data scientists bring to the table, other skills are needed too. Data scientists need to have deep knowledge of the industries they are in. This helps them provide unique insights into how data can be used to tackle some of the problems businesses are trying to solve. A data scientist with extensive knowledge about the healthcare industry will have a better chance of landing a job there than someone with little healthcare experience, even if they display similar tech talents. You’ll also want to brush up on your communication skills. You’ll need to help those who may not be familiar with big data understand how it can be used and why it will benefit the business. With the right communication, you can become a valuable member of any team.
Of course, there are other skills that are hard to measure -- intangibles, if you will. Having a curious mind that’s always wanting to know more is a definite attribute you should cultivate. Practicing your creativity (the old “thinking outside the box” mindset) will also help you excel as a data scientist. These are just a few of the many skills and characteristics that businesses want to see from their top data scientists. With the right education and willingness to expand your skills, you can position yourself at the top of your field for a long time to come.