These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing their APIs, going beyond just monitoring to understand the details of each request and response.

HTTP as a Substrate

I am spending a significant amount of time reading RFCs lately. I find the documents to be very cumbersome to read, but the more you read, the more tolerant you become. When I browse through RFCs I’m always reminded of how little I actually know about the web. In an effort to push forward my education, and maybe yours along the way, I’m going to be cherry-picking specific sections of the interesting RFCs I’m digesting here on the blog. Today’s RFC is 3205, filed under “Best Current Practice”, and is on the use of HTTP as a Substrate.

_Recently there has been widespread interest in using Hypertext Transfer Protocol (HTTP) [1] as a substrate for other applications-level protocols. Various reasons cited for this interest have included:

  • familiarity and mindshare,
  • compatibility with widely deployed browsers,
  • ability to reuse existing servers and client libraries,
  • ease of prototyping servers using CGI scripts and similar extension mechanisms,
  • authentication and SSL or TLS,
  • the ability of HTTP to traverse firewalls, and
  • cases where a server often needs to support HTTP anyway.

The Internet community has a long tradition of protocol reuse, dating back to the use of Telnet as a substrate for FTP and SMTP. However, the recent interest in layering new protocols over HTTP has raised a number of questions regarding when such use is appropriate, and the proper way to use HTTP in contexts where it is appropriate. Specifically, for a given application that is layered on top of HTTP:

  • Should the application use a different port than the HTTP default of 80?
  • Should the application use traditional HTTP methods (GET, POST, etc.) or should it define new methods?
  • Should the application use http: URLs or define its own prefix?
  • Should the application define its own MIME-types, or use something that already exists (like registering a new type of MIME-directory structure)?

This memo recommends certain design decisions in answer to these questions.

This memo is intended as advice and recommendation for protocol designers, working groups, implementors, and IESG, rather than as a strict set of rules which must be adhered to in all cases. Accordingly, the capitalized key words defined in RFC 2119, which are intended to indicate conformance to a specification, are not used in this memo._
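To make the memo's advice concrete, here is a minimal sketch of "HTTP as a substrate": a toy protocol layered over plain HTTP, reusing the standard POST method on the standard port, and distinguishing itself with its own media type rather than new verbs. The protocol, media type, and handler here are all hypothetical, written as a WSGI application so the sketch stays self-contained:

```python
import json

# Hypothetical media type for a toy "ping" protocol layered over HTTP,
# following the RFC 3205 advice: reuse standard methods and identify the
# protocol with a registered media type instead of inventing new verbs.
PROTOCOL_TYPE = "application/vnd.example.ping+json"

def substrate_app(environ, start_response):
    # Only the standard POST method is used; anything else is refused
    # with a vanilla HTTP status code.
    if environ.get("REQUEST_METHOD") != "POST":
        start_response("405 Method Not Allowed", [("Allow", "POST")])
        return [b""]
    # The media type is what marks this request as "our" protocol.
    if environ.get("CONTENT_TYPE") != PROTOCOL_TYPE:
        start_response("415 Unsupported Media Type", [])
        return [b""]
    size = int(environ.get("CONTENT_LENGTH") or 0)
    message = json.loads(environ["wsgi.input"].read(size))
    body = json.dumps({"pong": message.get("ping")}).encode()
    start_response("200 OK", [("Content-Type", PROTOCOL_TYPE)])
    return [body]
```

Everything protocol-specific lives in the media type and the message body, while the methods, status codes, and port stay plain HTTP, which is roughly the design the memo recommends.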

I love the notion of HTTP as a substrate. The definition of substrate is “a substance or layer that underlies something, or on which some process occurs, in particular” or also, “the surface or material on or from which an organism lives, grows, or obtains its nourishment”. I also love the notion of providing guidance for this line of thought. There are many things contained in this document I have learned from my time in the space, and included in my storytelling, without much thought regarding where it originated, or how accurate it was. I particularly like the notion of HTTP as a material in which an organism lives, but maybe more of a digital organism, or a bot. It is a reminder that everything that takes seed, flourishes, and grows in this environment might not always be a good thing.

The Essential API Elements In My World

In 2017 there seems to be an API for just about everything. You can make products available via an API, as well as messaging, images, videos, and any of the digital bits that make up our lives. I still get excited by some new APIs, but APIs have to have real usage, and deliver real value before I’ll get too worked up about them. I’m regularly looking down the list of my digital bits thinking about which are the most important to me, which ones I’ll keep around, and the services I’ll adopt to help me define and manage these bits.

This process has got me thinking really deeply about what I’d consider to be the three most important types of APIs in my life:

  • Compute - In my world compute is all about AWS EC2 instances, but when I think about it, Github really handles the majority of the compute for my front-end, but EC2 is the scalable compute for the backend of my world that is driving my APIs.
  • Storage - Primarily storage is all about Amazon S3, but I also depend on Dropbox, Google Drive, and I also put Github into the storage bucket because I store quite a bit of JSON, YAML, and other data there.
  • DNS - My domains are very important in my world–they are how I make my living, and share my stories. CloudFlare is how I manage this frontline of my world, making DNS an extremely important element in my world.

I leverage compute, storage, and DNS APIs regularly throughout each day–making them very important APIs in my existence. However, these are also the essential ingredients of my APIs as well. I consume these APIs, but I also deploy my APIs with these three elements. Each API has a compute and storage layer, with DNS as the naming, addressing, and discovery for these valuable resources in my world. This makes these three aspects of operating online, the three most essential elements in my world–even beyond images, messaging, video, and other elements that are ubiquitous across my digital presence.

It is interesting for me to think about the importance of these elements in my world, as storage and compute were the first two APIs that turned on the light bulb in my head when it came to the importance of web APIs. When Amazon launched Amazon S3 and Amazon EC2, that is when I knew APIs were going to be bigger than Flickr or Twitter. You could deploy global infrastructure with APIs–you could deploy APIs with APIs! I really enjoy thinking deeply about all my digital bits, and the role APIs are playing–regularly reassessing the value of API-driven resources in my world. It helps me think through what is important, and what isn’t–showing that 98% of all of this tech doesn’t matter, but there is a 2% that does make an actual difference in my digital existence.

API Rate Limiting At The DNS Layer

I just got an email from my DNS provider CloudFlare about rate limiting and protecting my APIs. I am a big fan of CloudFlare, partly because I am a customer, and I use them to manage my own infrastructure, but also partly due to the way they understand APIs, and actively use them as part of their business, products, and services.

Their email spans a couple areas of my research that I find interesting, and extremely relevant: 1) DNS, 2) Security, 3) Management. They are offering me something that is traditionally done at the API management layer (rate limiting), but now offering to do it for me at the DNS layer, expanding the value of API rate limiting into the realm of security, and specifically in defense against DDoS attacks--a serious concern.

Talk about an easy way to add value to my world as an API provider. One that is frictionless, because I'm already depending on them for the DNS layer of my web and API operations. All I have to do is sign up for the new service, and begin dialing it in for all of my APIs, which span multiple domains--all conveniently managed using CloudFlare.

Another valuable thing CloudFlare's approach does, in my opinion, is to reintroduce the concept of rate limiting to the world of websites. This helps me in my argument that companies, organizations, institutions and government agencies should be considering having APIs to alleviate website scraping. Using CloudFlare they can now rate limit the website while pointing legitimate use cases to the API where their access can be measured, metered, and even monetized when it makes sense.
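CloudFlare doesn't publish the internals of its rate limiting, but whether it is enforced at the API management layer or at the DNS edge, the underlying mechanic is usually something like a token bucket. A minimal sketch, with illustrative rate and capacity values:

```python
import time

class TokenBucket:
    """Minimal token bucket: allow `rate` requests per second on average,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True     # request passes
        return False        # request should be throttled (HTTP 429)
```

In practice a provider would keep one bucket per client IP or API key, and it is the same logic whether it sits in front of a website or an API.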

I'm hoping that CloudFlare will be exposing all of these services via their API, so that I can automate the configuration of rate limiting for my APIs at the DNS level using APIs. As I design and deploy new API endpoints I want them automatically protected at the DNS layer using CloudFlare. I don't want to have to do extra work when it comes to securing and managing web or API access. I just want a baseline for all of my operations, and when I need I can customize per specific domains, or down to the individual API path level--the rest is automated as part of my continuous integration workflows.
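As a sketch of what that automation might look like, here is how a continuous integration step could construct and submit a rate-limit rule for a new endpoint through CloudFlare's v4 API. The endpoint shape and field names below are modeled on CloudFlare's rate limiting API, but treat them as illustrative and check their current documentation before relying on them:

```python
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def rate_limit_payload(url_pattern, threshold, period_seconds):
    """Build a rate-limit rule for one API path.

    Field names mirror CloudFlare's rate limiting API, but are
    illustrative here; verify against the live documentation.
    """
    return {
        "match": {"request": {"url": url_pattern}},
        "threshold": threshold,        # requests allowed per period
        "period": period_seconds,      # window length in seconds
        "action": {"mode": "simulate"},  # observe first, enforce later
    }

def create_rate_limit(zone_id, token, payload):
    """POST the rule to the zone's rate limit collection (untested sketch)."""
    req = urllib.request.Request(
        f"{API_BASE}/zones/{zone_id}/rate_limits",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST")
    return urllib.request.urlopen(req)
```

Starting in a simulate or log-only mode is a sensible default, so a baseline can be established across all domains before any traffic is actually blocked.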

Connecting My API Logging With My API DNS Using CloudFlare Page Rules API

I'm spending time learning more about what my DNS provider CloudFlare offers when it comes to securing my APIs. To facilitate this, I am playing around with how I can utilize my Apache log files to help me better drive the definition of DNS security using the CloudFlare API. I guess this is kind of a real-time reactive, but hopefully eventually also a proactive, solution to quantifying and defining the frontline of my API operations.

I originally embarked on this endeavor to help me manage some of the shift in the API Evangelist network, and help mitigate 404 errors across my network of API research. I had recently migrated what I call my API Stack research to a new domain, and I am anticipating quite a few broken links in stories over the years that reference this area of my work. I have been trying to attack this from the content level by rewriting links as I find them, but I'm thinking I could automate this using my Apache log files and setting up page rules using the CloudFlare API as well.

Once I started sifting through the Apache log files I began to see other traffic patterns that were more in the area of security than with the stability of my platform and its linkages. As with any type of log file, it is taking some time for all of this to come into focus for me. I will have to spend a great deal of time evaluating traffic from specific IP ranges, user agents, etc., but I know I should be able to quickly establish some rules at the DNS level that will better help me lock down the front line of my API traffic.

Right now I am just keeping my Apache log files backed up to Amazon S3 to help alleviate server load, and to keep them around for historical purposes. I have built a log file viewer for sifting through my API traffic, and at the moment I'm manually creating page rules in CloudFlare, but it is something I hope to automate via the CloudFlare API once I have established an awareness of the common types of rules I will be creating. Once I evolve to this point I will write about it again, and hopefully talk more about how API access to the logging of my API traffic, in conjunction with APIs at the DNS level, is helping me better define and secure the frontline of my API operations.
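The first step in that automation is pulling the broken paths out of the Apache logs themselves. A small sketch that counts 404s per path from combined-format log lines, which could then feed the creation of page rules or redirects:

```python
import re
from collections import Counter

# Matches the request and status portion of an Apache "combined" log line, e.g.:
# 1.2.3.4 - - [10/Oct/2017:13:55:36 -0700] "GET /old/path HTTP/1.1" 404 209 "-" "Mozilla/5.0"
LOG_PATTERN = re.compile(r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

def broken_paths(lines, status="404"):
    """Count requests per path that returned the given status code."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.search(line)
        if m and m.group("status") == status:
            counts[m.group("path")] += 1
    return counts
```

Sorting the resulting counts by frequency gives a prioritized list of the broken links worth redirecting first, and the same parse can be pointed at other status codes or filtered by IP range when looking for the security patterns instead.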

Messente API: Always Use A Backup DNS Solution

I found the DNS implementation over at the Messente SMS API interesting, and worthy of sharing for deeper evaluation. I've been heavily considering the various approaches API providers take when crafting their domains, or subdomains, for API access over the last couple of weeks.

During some research time today I stumbled across the Messente SMS API which opts to provide two domains for making HTTP(S) requests of their API:


Messente provides a little disclaimer to handle the developer side of manually load-balancing these API calls:

These two domains have the same final destination regarding the API functions. In order to ensure that your requests always reach Messente API services, please use one of them as primary and the second one as backup. Both API domains work as equal, but in case of any unexpected downtime with one of them (HTTP 5xx), the other one must be used on client side.

I'm not sure this manual approach to providing API endpoints is the optimal path when delivering on the stability of your API, let alone the location of your resources, but it does provide an interesting contrast on the perspectives that are available out there in API-land.
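For what it is worth, the client-side failover Messente describes is straightforward to implement. A sketch, with the HTTP call abstracted behind a callable and hypothetical hostnames, that tries each domain in order and falls back on connection errors or 5xx responses:

```python
def request_with_failover(fetch, hosts):
    """Try each host in order; fall back on 5xx status or connection errors.

    `fetch` is any callable taking a hostname and returning (status, body),
    so the failover logic stays independent of the HTTP client used.
    """
    if not hosts:
        raise ValueError("no hosts given")
    last_error = None
    for host in hosts:
        try:
            status, body = fetch(host)
        except OSError as exc:  # connection refused, DNS failure, timeout
            last_error = exc
            continue
        if status < 500:        # anything below 5xx counts as reachable
            return status, body
        last_error = RuntimeError(f"{host} returned {status}")
    raise last_error
```

This is exactly the burden the disclaimer pushes onto every client developer, which is part of why server-side approaches like DNS failover tend to be the more common pattern.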

Sometimes I feel like I should rebrand as API Anthropologist, as I find the approach of my fellow API providers more interesting than what I'd expect to find in a mature API landscape. This reflects the importance of showcasing what is going on, to help bridge to where we should be, rather than focusing exclusively on where we should be. (deep shit, man)

Increasing The Focus On APIs In Higher Education Is Important

Maybe I’m a little biased at the moment, after participating in a Reclaim Your Domain hackathon with some really smart folks from multiple universities, as well as working on my first white paper on APIs in higher education, but I feel pretty strongly that higher education institutions focusing on APIs will be extremely important in the next two years.

I’m constantly working to understand the big picture of the emerging API economy, the importance of the government API development phase, and working to understand what is next for the US government API strategy, while also acknowledging we need the enterprise to continue waking up to the potential of APIs. I think, right along with government, and the enterprise, another important piece of the overall API puzzle is increasing the focus on APIs in higher education.

The University Argument
If I am making a pitch to a university, I would tell my Amazon API story, and how APIs can open up access to institutional resources, making them more accessible across campus, and externally with partners and vendors. APIs are how startups, SMBs, the enterprise, and the government are increasing efficiency, agility, and delivering the web and mobile apps that are part of a larger, healthier digital strategy vision. Top universities like the University of Washington, UC Berkeley, and Brigham Young University are leading the way with modern API platforms that are changing the way they do business on campus—take a look at the 250+ APIs from BYU to get an idea.

The Student Argument
If I am making the pitch for why students should care about APIs, during the most formative years of their lives, I would point out that APIs are already touching every aspect of their lives, from the websites they visit, to the mobile phone in their pocket. If your college years are about preparing you for the world, APIs need to be front and center in your education, giving you the basics, but also allowing you to peel back the black curtain on the technology that is slowly taking over our world, and establish skills that will give you an edge in your career.

Web Literacy
Computers and the Internet are part of the higher education experience, and it is increasingly important that ALL students obtain at least a basic level of web literacy to be able to operate on the web. Understanding the workings of the Internet, like HTTP, SSL, URLs, DNS, Email, and fundamentals of privacy, security, and terms of service, are essential to the education of every individual. While they may not retain everything they learn, like the rest of their education, it will provide a fundamental base for them to work from the rest of their lives.

Domain Literacy
Students today are faced with understanding who they are in this big new world they are thrust into after being at home with their parents, and that includes understanding and expanding their digital identity. What is the difference between university, commercial, and government web and mobile applications? Students need to have a sense of what is theirs, and what is a company's or institution's, and understand when some information or content is something they should personally own. Domain literacy is not just about learning about online domains like .com and .org, it is about understanding your own domain on social networks like Facebook and Twitter, or your student information system account, class forums, and the possibilities that are opened up when you are in control of your own online domain.

Portfolio Ownership
Every classroom, project and program experience for a student should be considered a potential candidate for addition to a student's portfolio. Contributing to, and managing a portfolio in 2014 is done online, allowing for a portfolio to potentially be spread across campus, corporate and other 3rd party sites, platforms, and systems. Educating students about owning their own content, data and other information, and the opportunities around data portability and APIs, in helping them assert ownership and control over their portfolio is essential to education in the digital age. I'm not talking traditional e-portfolio, I'm talking about defining, understanding, and aggregating the best of what you do online, during your college years--in preparation for entering the real world.

Workplace Ready
Higher education is about preparing students for their role in society, and hopefully part of that is being a positive contributor to the workplace, and larger workforce. Modern web APIs are born out of the efforts of the most tech-savvy employees, developing the workarounds, and the access to resources, they need to get their jobs done, and solve the problems they face. Whether it is pulling Census Bureau data and populating a spreadsheet, or migrating the company's blog platform from Blogger to a dedicated WordPress instance, APIs are central to the skills that tomorrow's workforce will need. APIs aren’t always about developing a website, or mobile application, they can be as simple as migrating entries from an online form into a Google Spreadsheet using Zapier. If one of our goals is to make sure students are prepared for the workforce, APIs have to be a regular part of their educational diet, ensuring that when they hit the ground as part of the workforce, API literacy is the default.

Digital Citizen
APIs are already touching every aspect of life, from looking for a restaurant on Yelp, to paying your taxes using TurboTax. Not every individual needs to understand the inner workings of APIs and OAuth, but they need to have basic API literacy, so they know APIs exist, and that they can get their photos and other information out of a service they use. Every citizen needs to understand the apps on their mobile phone, and the relationship to their online accounts, and who has access to their personal information using OAuth and APIs, and how they can manage these settings. To interact with government, APIs are playing an ever-increasing role, allowing citizens to participate in the political process, access student aid, pay their taxes, and get access to their energy and healthcare data. Let's prepare our students for the future.

In a perfect world students need to be aware of APIs by the time they first set foot on campus. Ideally they are already exposed to them in their daily online interactions, or someday through the FAFSA process, but at the very least it should be up to the university to expose them to APIs when it comes to class registration, student information systems, or ideally as part of the school’s Domain of One’s Own program.

My argument isn't just about colleges and universities getting on board with APIs across all campus operations, this is about faculty and administrators becoming API literate, and exposing students to APIs as part of every interaction. You don't like the student information system or class schedule when you come in as a freshman? Ok, make it better. Need a list of students for a class? Here is the Google Spreadsheet to class schedule API connector. Want to bring your posts from Tumblr into the classroom? Use the Tumblr API to get your content out, and publish it where you want using Zapier. Let's teach them to solve the everyday problems they face by applying technology in sensible ways.

I’ve seen some amazing movement during my last four years of evangelizing APIs, across multiple industries, and within city and federal government, just by educating a handful of energetic entrepreneurs and civic activists, turning on the API light--resulting in incremental change in the way companies do business, and organizations and governments operate. Imagine if we turned on a whole generation of citizens, helping them understand that this is the way business is done, and how personal, corporate, organizational, and institutional resources are accessed, shared, and managed?

Similar to my efforts on APIs in the federal government over the last two years, I’m going to turn up the focus on APIs in higher education. By 2016, I want APIs to be ubiquitous at higher educational institutions around the globe.

API Virtual Stack Composition Like The Absolut Drinks Data API

If you read my blog regularly, you know I am constantly pushing the boundaries of how I see the API space, and sometimes my ramblings can be pretty out there, but API Evangelist is how I work through these thoughts out loud, and hopefully bring them down to a more sane, practical level that everyone can understand.

My crazy vision for the day centers around virtual API stack composition, as beautiful as the Absolut Drinks Database API. Ok, before you can even begin to get up to speed with my crazy rant, you need to be following some of my rants around using virtual cloud containers like we are seeing from Docker, AWS, and OpenShift, and you need to watch this video from API Strategy & Practice about the Absolut Drinks Database API deployment.

Ok, you up to speed? Are you with me now?

Today, as I was playing around with the deployment of granular API resources using AWS CloudFormation, I was using their CloudFormer tool, which allows me to browse through ALL of my AWS cloud resources (i.e. DNS, EC2 servers, S3 storage, RDS databases), and assemble them into a CloudFormation template, which is just a JSON definition of the stack I’m going to generate.
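For anyone who hasn't seen one, a CloudFormation template really is just a JSON document declaring resources. A stripped-down sketch with one EC2 instance and one S3 bucket, where the property values (instance type, AMI ID) are placeholders:

```python
import json

# A minimal CloudFormation-style template, built as a Python dict and
# serialized to the JSON that CloudFormation actually consumes.
# Property values below are illustrative placeholders only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal API stack: one server, one bucket",
    "Resources": {
        "ApiServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",     # placeholder size
                "ImageId": "ami-12345678",      # placeholder AMI ID
            },
        },
        "ApiStorage": {
            "Type": "AWS::S3::Bucket",
        },
    },
}

print(json.dumps(template, indent=2))
```

Because the whole stack is just data, templates like this can be versioned, diffed, and composed, which is exactly what makes the stack-composition idea below thinkable.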

Immediately I think of the presentation from Absolut, and how they spent years assembling the image and video parts and pieces that went into the 3500 drinks they wanted available via their API, for developers to use when developing interactive cocktail apps. They had 750 images, and video clips, with a combination of 30K mixing steps, that went into the generation of the 3500 drink combinations. * mind blown *

Now give me this same approach, but for composing virtual API stacks instead of cocktails. With this approach you could define individual API resources, such as a product API or a screen capture API. These are all independent API resources, with open source server and client side code, an openly licensed interface via API Commons, and virtual container definitions for both AWS CloudFormation and OpenShift.

Imagine if I have hundreds of high value, individual API resources available to me when composing a virtual stack. I could easily compose exactly the stack my developers need, composed of new and existing API resources. I wouldn’t be restricted to building directly on top of existing data stores or APIs, I could deploy external API deployments that depend on sync to stay up to date, providing the performance levels I demand from my API stack--I could mix virtual API stacks, like I would a cocktail. 

Anyhoooo, that is my rant for now. I’ll finish doing the work for deploying AWS CloudFormation and OpenShift containers for my screen capture API, rounding off all the architectural components I outlined in my API operational harness, and then rant some more.

Thanks for reading my rant. I hope it made some sense! ;-)

The APIs I Depend On To Run API Evangelist

I maintain an active list of online services I depend on for my business, using Evernote. Each month I spend an hour or two maintaining this list, to make sure it is complete and actively change my logins when appropriate. 

I saw the recent Heartbleed SSL situation as an opportunity to move forward some of my IT practices, including using 1Password to manage all of my accounts, and better profiling which APIs I'm consuming. This gave me an opportunity to update my list of APIs that I depend on, adding about 4 or 5 new ones.

First I depend on a couple of the core Google APIs:

Gmail - Integrate my daily emails, as well as email blasts with my administrative system
Google Contacts - Keep business and individual profiles in my admin system in sync with my daily Google Contacts activity.
Google Calendar - Publish hackathon calendars to Google Calendar as well as keep conferences, meetups and other events I pull through APIs and curate in sync
Google Docs - Publish copies of blog posts to Google Docs, as well as version of pages from my content management system to Google Docs
Google Sites - All of my research is in Google Sites. So I tend to publish lists of curated news, blog posts and other research to wiki pages under specific projects

Next, I would say Amazon Web Services delivers some pretty critical APIs I can't live without:

Amazon EC2 - I deploy and shutdown various EC2 instances for various jobs I run for API Evangelist. All APIs are managed on AWS EC2
Amazon S3 - All heavy objects in my systems are stored at Amazon S3 including photos, PDFs, presentations and video
Amazon Route 53 - I use AWS Route 53 to manage the underlying DNS for all my applications and sites across multiple domains

Then there are an assortment of other APIs I use throughout my web sites and applications:

3Scale - I depend on 3Scale API Infrastructure APIs to remotely manage different aspects of my API management workflow
AlchemyAPI - I use Alchemy for content, keyword and author extraction on articles and site pages that I curate as part of my daily routine
AngelList - I pull company profiles from AngelList and use in my research and profile for API Evangelist
Bitly - I manage most of my shortened URLs for tracking on link traffic across the API Evangelist network using Bitly
Crunchbase - I pull company profiles from Crunchbase and use in my research and profile for API Evangelist
EventBrite - I pull hackathons, meetups and conferences from EventBrite and use in my admin system
Evernote - I do all my note taking and recording of thoughts in Evernote, there are some folders I keep in sync with my admin system
Flickr - I've historically published a lot of public images to Flickr for SEO purposes, so images and video from many of my blog posts and events get stored at Flickr using the API in my admin system
Foursquare - I use Foursquare as a journal and pull the timeline into my admin system and apply as framework to my writing and traveling
Full Contact - I use FullContact for building out profiles of individuals in my company CRM system, which helps me understand the public profiles of people across Twitter, LinkedIn and Github
Github - All my stories use Gists to display code and some of my larger productions have full repositories that I access via command line and via the API
Nimble - I use Nimble to manage my CRM externally. It offers some features that simplify CRM management for specific projects or groups. Sometimes I setup CRM systems here for customers.
Paypal - I handle subscriptions and white paper purchases via Paypal
Pinboard - All my curation runs through Pinboard. Anything I bookmark while reading feeds or on the open web gets bookmarked with Pinboard, then with the API I pull into my admin system
ProgrammableWeb - I use ProgrammableWeb's API to pull new APIs into my curation system
Stack Exchange - I use Stack Exchange to monitor API activity on the forums and keep track of discussion counts for various APIs.
Tumblr - I assemble some curated posts and summaries and publish to Tumblr via the API
Twitter - Twitter is central to my API monitoring, ranking and curation system. I depend on the REST and Streaming APIs

I depend on these APIs to run API Evangelist, as well as support the other research and consulting that I do. Some of these services I pay for, some of them I use for free.  

I’m sure I depend on a lot of APIs, partly because this is my game, I’m an API Evangelist, but it is also because APIs provide me with the data and resources I need to operate.

Tracking on the APIs I depend on is a regular part of my IT strategy, and eventually I am going to publish this as a public page on my websites--showcasing what APIs have done for my business.

Common Building Blocks of Cloud APIs

I’ve been profiling the API management space for almost four years now, and one of the things I keep track of is what some of the common building blocks of API management are. Recently I’ve pushed into other areas like API design, integration and into payment APIs, trying to understand what the common elements providers are using to meet developer needs.

Usually I have to look through the sites of leading companies in the space, like the 38 payment API providers I’m tracking on, to find all the building blocks that make up the space, but when it came to cloud computing it was different. While there are several providers in the space, there is but a single undisputed leader—Amazon Web Services. I was browsing through AWS yesterday and I noticed their new products & solutions menu, which I think has a pretty telling breakdown of the building blocks of cloud APIs.

Compute & Networking

Compute - Virtual Servers in the Cloud (Amazon EC2)

Auto Scaling - Automatic scaling of EC2 capacity (Auto Scaling)

Load Balancing - Automatic load balancing service (Elastic Load Balancing)

Virtual Desktops - Virtual Desktops in the Cloud (Amazon WorkSpaces)

On-Premise - Isolated Cloud Resources (Amazon VPC)

DNS - Scalable Domain Name System (Amazon Route 53)

Network - Dedicated Network Connection to AWS (AWS Direct Connect)

Storage & CDN

Storage - Scalable Storage in the Cloud (Amazon S3)

Bulk Storage - Low-Cost Archive Storage in the Cloud (Amazon Glacier)

Storage Volumes - EC2 Block Storage Volumes (Amazon EBS)

Data Portability - Large Volume Data Transfer (AWS Import/Export)

On-Premise Storage - Integrates on-premises IT environments with Cloud storage (AWS Storage Gateway)

Content Delivery Network (CDN) - Global Content Delivery Network (Amazon CloudFront)

Database

Relational Database - Managed Relational Database Service for MySQL, Oracle, SQL Server, and PostgreSQL (Amazon RDS)

NoSQL Database - Fast, Predictable, Highly-scalable NoSQL data store (Amazon DynamoDB)

Data Caching - In-Memory Caching Service (Amazon ElastiCache)

Data Warehouse - Fast, Powerful, Fully Managed, Petabyte-scale Data Warehouse Service (Amazon Redshift)

Analytics

Hadoop - Hosted Hadoop Framework (Amazon EMR)

Real-Time - Real-Time Data Stream Processing (Amazon Kinesis)

Application Services

Application Streaming - Low-Latency Application Streaming (Amazon AppStream)

Search - Managed Search Service (Amazon CloudSearch)

Workflow - Workflow service for coordinating application components (Amazon SWF)

Messaging - Message Queue Service (Amazon SQS)

Email - Email Sending Service (Amazon SES)

Push Notifications - Push Notification Service (Amazon SNS)

Payments - API based payment service (Amazon FPS)

Media Transcoding - Easy-to-use scalable media transcoding (Amazon Elastic Transcoder)

Deployment & Management

Console - Web-Based User Interface (AWS Management Console)

Identity and Access - Configurable AWS Access Controls (AWS Identity and Access Management (IAM))

Change Tracking - User Activity and Change Tracking (AWS CloudTrail)

Monitoring - Resource and Application Monitoring (Amazon CloudWatch)

Containers - AWS Application Container (AWS Elastic Beanstalk)

Templates - Templates for AWS Resource Creation (AWS CloudFormation)

DevOps - DevOps Application Management Services (AWS OpsWorks)

Security - Hardware-based Key Storage for Regulatory Compliance (AWS CloudHSM)

The reason I look at these spaces in this way is to better understand the common services that API providers are delivering that are really making developers' lives easier. Assembling a list of the common building blocks allows me to look at the raw ingredients that make things work, and not get hung up on just companies and their products.

There is a lot to be learned from API pioneers like Amazon, and I think this list of building blocks provides a lot of insight into the API driven resources that are truly making the Internet operate in 2014.

What Is The Next Phase Of APIs?

I've been polishing my version of the history of web APIs since I started API Evangelist. Through my research it became clear that the world of web APIs had evolved through several key phases that have gotten us to where we are at, and were essential in making the API economy a viable opportunity. So far my history tracks on 5 key phases:

  • Commerce - The first wave of web APIs came from commerce pioneers like Salesforce, eBay and Amazon deploying APIs to make commerce more distributed.
  • Social - Early pioneers like Flickr, Delicious, Facebook and Twitter have made the Internet social by default using web APIs.
  • Business - As APIs evolved API management providers like Mashery, 3Scale and Apigee have standardized the business approach of leading APIs, delivering tools and services that other API providers can put to use.
  • Cloud - Amazon forever changed the way we compute using APIs, proving that we could deploy essential global infrastructure like compute, storage and DNS using web APIs.
  • Mobile - The final piece of the puzzle was the mobile computing device, ushered in by Apple with the iPhone and followed up by Google with Android. Mobile phones will forever change how we interact, with APIs delivering the essential resources we need to make mobile possible.

In my opinion, the 2014 API economy wouldn't be possible if APIs hadn’t developed and evolved through these stages. Commerce, social, business, cloud, mobile are all essential to a thriving API economy. Sure there are other APIs in other genres that fill in the cracks, but these five areas are the pillars that not only showed that APIs are viable, but will also be the pillars that the API economy rests on from here forward.

As I track on the API space, I’m trying to understand where we are at, and where we are going, hoping to identify the next phases of API history. I'm keeping an eye on trends like aggregation, realtime, data, BaaS, reciprocity, single page apps, Internet of Things (IoT), and other areas, trying to understand what is next. While I think IoT is definitely the most compelling and seems to be moving the fastest, I think we need to step back, and be careful not to look at this through a purely technological lens.

I identified early on that this world of APIs wasn't going to be all about the tech. From an API provider, consumer or analyst viewpoint, we should not only consider the technology of APIs, but also remember that the business of APIs is essential to everything that happens. While I think there will be an incredible amount of innovation from startups when it comes to API deployment in new areas like real time and reciprocity, I think one of the phases we are in the middle of right now is the enterprise phase.

In the last month I've talked with more Fortune 500 companies about their API strategy than any other group, well maybe the same as government (parallel phase?). I've talked with familiar enterprise players like AT&T, but have also had conversations with newer entrants like Adobe. There is always a place for startups to innovate with APIs, pushing us into new areas, but it is going to take the resources of the Fortune 500 to truly make the API economy a reality when it comes to the global economy.

As I see it, all the phases I've described don't happen one after another--they overlap and feed off each other, and much like we couldn't fully realize the potential of commerce without cloud computing and mobile, I don't think the API economy will fully be realized until major companies have a solid API program and working API strategy. As with other phases of API development, this won't all be good. I think there is a lot to be worked out at the city, state and federal government levels, and important issues around the politics of APIs are growing more critical every day, but I still think the current shift by the enterprise towards APIs will be seen as a significant phase in the history of APIs when we look back.

420% Growth In DNS API Usage Over At Dyn

The folks over at Dyn who provide traffic, message, remote access and domain services, including a suite of SOAP and REST based APIs, have released some interesting stats on their API usage.

Dyn has 500 managed DNS users and partners using their APIs, growing from 7.3 million monthly API requests in January 2012 to 38.1 million API requests in September 2013, that is a 420% growth over 20 months.
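A quick back-of-the-envelope check of that growth figure, using only the numbers reported above:

```python
# Dyn's reported monthly API request volumes.
start = 7_300_000   # January 2012
end = 38_100_000    # September 2013

# Percentage growth over the 20 month period.
growth_pct = (end - start) / start * 100
print(round(growth_pct))  # ~422, in line with the reported 420% figure
```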

In their data they split out who is using SOAP vs REST APIs, with only 3.3% of their total API requests being SOAP, and almost no growth in the same time period, compared to the 420% growth in REST usage.

I don't think the data itself is particularly noteworthy--it represents what we already know about the space and see reflected in other charts. What I think is noteworthy is Dyn sharing the data. You don't see many API publishers sharing their numbers, and it's something I'd like to see more of.

API Evangelist, and Hacker Storytelling

I've been slowly evolving API Evangelist from a single site into an interconnected network of individual API projects. API Evangelist started as a research project back in July 2010, so making the shift to a network of smaller, interconnected research projects is fitting.

While API Evangelist currently still runs on my home brew CMS, it will shortly finish the migration to run completely on Github, making it merely a "Hollywood front" for what is currently 37+ API related, living research projects of mine.

I call my evolving approach to projects Hacker Storytelling. I made up the name, but the approach is borrowed from several other philosophies, starting with the concept of data journalism, and evolving from conversations last year in Washington DC with very smart folks including Ben Balter (@BenBalter), Gray Brooks (@gbinal), and the very forward thinking work of the Development Seed team. Then of course I add my own style and approach to what I've learned.

As I move my own network of research projects to run on Github, using this new approach, I'm also seeing other positive signs coming out of Washington on the same front. First, the White House Open Data Policy released in May was created and published on Github, and then I just finished reading Code Developed by the People and for the People, Released Back to the People, by Alex Howard (@digiphile). His post outlines how the United States Department of Health and Human Services (HHS) launched a new website to support the Affordable Care Act -- AKA "Obamacare". The website was built iteratively, in public, over the last couple months, and it was completely done using Github, using a similar approach to my Hacker Storytelling. Seeing all of this really makes me hopeful for my next year in Washington.

Alex does an amazing job of telling the story behind it, and I highly recommend reading his post. After reading, I wanted to take a fresh walk through my approach, and talk about the importance of this new way of managing my projects, which I think will change the web, how we govern and how we conduct business.

My personal approach is derived from a need to quickly turn research into public stories, allowing people to take my work and put it to use in their worlds. Since my mission is to educate the masses about the benefits of APIs, and reach the largest audience possible, I needed a new approach that was fast, efficient and scalable--the result is Hacker Storytelling.

To manage my projects and tell my stories, I'm using a handful of building blocks:

  • Blog Posts
  • Static Pages
  • Widgets
  • Open Data
  • Presentations

The best part about these building blocks is that they use only lightweight, open formats:

  • HTML
  • CSS
  • JavaScript
  • JSON

Each project becomes an open source repository that I host on Github. Some projects start as private repositories, but if possible EVERYTHING becomes public. If you are unfamiliar with how Github and Git work, Github is a cloud service that provides version control for code. However, since code is usually just files, you can apply the same open source code process to websites or documents you build with HTML, CSS, JavaScript and JSON.

I don't know about you, but I can build some pretty fast websites, prototype applications and even full blown production apps in HTML, CSS, JavaScript and JSON. If you want to see the extreme version of what I'm doing, head over to Development Seed and see what they are up to. They are producing some mind blowing projects using this approach.

The really powerful thing about all of this is that it can run anywhere. You can run the same configuration of a site on Amazon S3, Dropbox, or anywhere you can set up hosting. This isn't just something for alpha geeks--look closer at the Amazon S3 example, that is the CTO of Amazon running his blog using this approach.

So why am I doing this? There are so many reasons, and to help me wrap my head further around them, I thought I would take a crack at listing as many as I could.

This approach to my research and storytelling has allowed me to decouple the individual pieces of my original API Evangelist work, which after three years has become very bloated. I have a lot of content and structured data about the API industry. This has allowed me to decouple one very big project into 35 smaller projects, with the potential for many more in the future.

When I kick off a new project, I start off the planning process with a new Github repository, with a fresh README file. Then using the native Github features I can make the project public or private and invite other people to join me in the planning process. The README quickly becomes an outline, giving a backbone to my project.

Once a project has been kicked off, I begin researching, publishing all notes, bookmarks and other relevant assets to Github after each session. Pretty soon there is a wealth of knowledge located within the repository, with every step of the way versioned, allowing me to manage additions, removals and potential conflicts.

Github has made the process of developing open source software a social adventure. You can create repositories within individual Github accounts or underneath the umbrella of an organization. You can invite any other Github user to participate in the process, using the open source software workflow built into Github. Once you make a project public, you can also add Disqus and solicit public comments, if so desired.

Git is the central core of Github. Git was developed by Linus Torvalds to help him manage the development of the open source operating system, Linux. Every document that is submitted to a Git(hub) repository is versioned, allowing you to manage changes, accept contributions and even roll back to earlier versions when necessary. Git is well suited to open source, collaborative software development, but works well for many other types of projects as well.

My approach, that of HHS, and Development Seed's all use Jekyll alongside each project deployment. Jekyll is a simple, blog-aware static site framework that runs very well on Github. Jekyll gives you a very simple, but powerful way to manage your pages as well as maintain a blog. This has changed my view of what a blog is for, making it as simple as four chronological journal entries for a single project, or powering the 800+ blog entries of API Evangelist. Jekyll was actually developed by Tom Preston-Werner, a founder of Github, but the framework is so universal it can run anywhere, such as Amazon S3 and Dropbox.

While this approach is not for everyone, I enjoy making projects open by default from birth to death. Hosting on Github, licensing openly, and allowing collaboration and public input makes for a healthier overall project. Transparency can let sunlight into any process, providing a sort of disinfectant. Something I feel is essential in all my work.

Living Projects
Most of the projects I embark on will be living, allowing me to keep them updated weekly, monthly or as often as necessary. Because I can open up projects to collaborators and public feedback, my work can even live beyond the attention I can give each project. I can even transfer ownership and administration of a project to someone else, or they can fork my work and take it in an entirely new direction, breathing life into my work in ways I could never imagine.

Portability
Each Github repository can be forked or downloaded as a zip file, allowing the entire project to be moved, unpacked and set up at a new location--not in hours, but often in minutes. This type of portability is essential in this crazy, cloud based world we've created for ourselves. It also allows me to easily deploy a project within the firewall of a company or government agency.

Syndication
Github possesses one of the most powerful syndication tools, called "forking". Any Github user can fork one of my projects and set to work making it their own--adding to it, cleaning it up and, when appropriate, making "pull requests" back to the original project, which allows me to accept their work back into the central copy. After I add common social sharing tools, and you consider the native social features built into Github, this approach offers unlimited potential for syndication.

Metrics
Each project I fire up gets Google Analytics added, allowing me to track all traffic and usage of my projects. Beyond the page views, visits and other common metrics, Github gives me a whole other layer of metrics, tracking favorites, forks, downloads, commits and other vital data about how projects are doing.

Common Formats
Every project I build uses HTML, CSS, JavaScript and JSON. All of these common formats can be opened by simple text editors and do not require any proprietary software to create, access or edit. HTML and CSS are very accessible to many, and depending on how tech savvy you are, JavaScript and JSON are pretty easy to wield, with a little training.

Open By Default
Everything is open by default. Publicly available, collaboration, open formats and open licenses go a long way in setting the right tone for a project. Open by default takes away a lot of stress for me, and opens projects up for the widest possible collaboration, re-use, distribution and ultimately attribution to me and my work.

Machine Readable By Default
All data is stored via simple, lightweight JSON files. Every listing, chart, graph within a project has a JSON data source. The entire contents of a project can have a simple JSON manifest, allowing programmatic indexing of a project's content and data sources. Machine readable by default, using JSON has changed the way I look at data management.
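As a sketch of what machine readable by default can look like, here is a hypothetical project manifest--the file names and fields below are illustrative, not my actual schema:

```python
import json

# A hypothetical manifest pointing at every data source in a project.
manifest = {
    "title": "Example API Research Project",
    "data": [
        {"name": "providers", "path": "data/providers.json"},
        {"name": "building-blocks", "path": "data/building-blocks.json"},
    ],
}

# Written to the repository as plain JSON, so any program can index the project.
serialized = json.dumps(manifest, indent=2)

# A consumer only needs the manifest to discover the project's data sources.
for source in json.loads(serialized)["data"]:
    print(source["name"], "->", source["path"])
```

Because the manifest is just another JSON file in the repository, it gets versioned, forked and hosted like everything else.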

Scalability
I do not have to scale the infrastructure for any of my projects. I've run IT infrastructure for years and am very capable of doing this for myself, but I don't have time. All projects are automatically scaled as needed, to meet not just my projects' demands, but those of everyone on the platform. For me, the backend has been reduced to my internal systems and a handful of APIs. Everything that is public is automatically scaled in the clouds.

Speed
The use of simple, lightweight open formats like HTML, CSS, JavaScript and JSON, plus all the benefits of the Github platform, equals a pretty sweet opportunity for fast loading web pages. Everything is easily cached, providing for very fast page loads in addition to scalability. If projects are hosted on Amazon S3, there are additional opportunities for caching and distribution of content to regions around the globe using CloudFront.

Security
Along with the need to run back-end infrastructure, much of the concern with securing sites and applications goes away. I'm pretty confident that Github, Amazon and Dropbox have fairly decent security teams, and with all projects being static sites and apps, using open formats, much of the opportunity for exploitation has been removed.

Low Cost
Github repositories are free if they are public--another incentive for being open by default. Even if you pay for repositories with Github, the costs are just dollars each month. Amazon and Dropbox are both extremely affordable, further evolving past models for web hosting or rolling out costly infrastructure for projects. The cloud has enabled entirely new approaches to deploying web sites and applications, as well as content and data oriented projects.

Project Lifecycle
The entire life cycle of my projects has changed for the positive. I can start new projects on a whim, fire up a new repository, generate an outline during the planning stages, invite all participants and have a public site up in minutes. Some projects I work on for hours each week, others I give just minutes a month to make sure they have my latest research published. I can easily walk away from projects, passing the torch and potentially keeping a project alive. Projects can be forked, downloaded and evolved, adding layers to the lifecycle that are totally out of my control. The potential for my research and stories to reach a larger audience has grown significantly, extending both the reach and the life of my work.

Web Literacy
When this approach is employed, each individual involved receives a healthy dose of web literacy. Introducing them to essential building blocks of our growing digital world, like DNS, HTML, CSS, JSON, Git and more. The portability of this approach allows you to truly own your projects, enabling you to deploy them wherever you choose. Web literacy is critical in this day and age, for everyone.

For me, Hacker Storytelling has empowered me to do more research, tell more stories and reach a wider audience, and I've only been doing it for six months. At first, all of this can seem daunting to learn, but once you get a grasp of all the building blocks at play, it can be very empowering. It is something non-developers can employ to solve the problems they face every day, in a way that encourages collaboration and even programmatic integration with other systems or projects. It has the potential to empower each of us to innovate and work together in new ways.

Hacker Storytelling is my version of this new way to build sites and apps. Development Seed and HHS are developing their own approaches as well. While there are a lot of common building blocks, each individual or organization can develop their own style and set of tools and building blocks that work best for them.

That is 21 reasons I'm moving my projects to this new approach to publishing sites and applications on the Internet. I'm choosing to do this because it makes me more efficient at my research and storytelling, which is essential to my career.

I'm hoping to share my approach with as many people as I can. I'm watching my girlfriend Audrey discover how easy it is to set up new projects and publish her work there, setting everything I listed above into motion in her world, and developing her own approach.

I don't think this methodology is for everyone, but if you are interested, I'm happy to share. I will be adding more widgets and tools to my Hacker Storytelling project, while also pointing you to other similar implementations, and people who are innovating with this approach like Development Seed.

The Resource Stack

I've been organizing much of my research around APIs into groupings that I call "stacks". The term allows me to loosely bundle common API resources into meaningful "stacks" for my readers to learn about.

I'm adding a new project to my list of 30+ stacks, intended to bring together the most commonly used API resources into a single, meaningful stack of resources any web or mobile developer can quickly put to use.

So far I have compiled the following APIs in 29 separate groups:

  • Compute
    • Amazon EC2
    • Google AppEngine
    • Heroku
  • Storage
    • Amazon S3
    • Dropbox
    • Rackspace Cloud Files
  • Database
    • Amazon RDS
    • Amazon SimpleDB
  • DNS
    • Amazon Route 53
    • Rackspace Cloud DNS
    • DNS Made Easy
    • DNSimple
  • Email
    • SendGrid
    • Amazon SES
    • Rackspace Email
  • SMS
    • Twilio
    • AT&T SMS
  • MMS
    • Mogreet
    • AT&T MMS
  • Push Notifications
    • Urban Airship
    • AT&T SMS
  • Chat
    • Skype
    • Facebook Chat
    • Google Talk
  • Social
    • Twitter
    • Facebook
    • Google+
    • LinkedIn
  • Location
    • Google Directions
    • Google Distance Matrix
    • Google Geocoding
    • Google Latitude
    • Geoloqi
  • Photos
    • Flickr
    • Facebook
    • Instagram
  • Documents
    • Box
    • Google Drive
  • Videos
    • YouTube
    • Flickr
    • Facebook
    • Viddler
    • Vimeo
    • Instagram
  • Audio
    • SoundCloud
    • Mixcloud
  • Music
    • Echo Nest
    • Rdio
    • Mixcloud
  • Notes
    • Evernote
  • Bookmarks
    • Delicious
    • Pinboard
  • Blog
    • Wordpress
    • Blogger
    • Tumblr
  • Content
    • ConvertAPI
    • AlchemyAPI
  • Contacts
    • Google
    • Facebook
    • LinkedIn
    • FullContact
  • Businesses / Places
    • Factual
    • Google Places
  • Checkins
    • Foursquare
    • Facebook
  • Calendar
    • Google
  • Payments
    • Dwolla
    • Stripe
    • Braintree
    • Paypal
    • Google Payments
  • Analytics
    • Google
    • Mixpanel
  • Advertising
    • Adsense
    • Adwords
    • Facebook
    • Twitter
    • AdMob
    • MobClix
    • InMobi
  • Real-time
    • Google Real-time
    • Firebase
    • Pusher
  • URL Shortener
    • Google URL Shortener

This is just a start. I will publish a full stack, complete with logos, descriptions and links. For now I'm just fleshing out my thoughts regarding some of the top resources that are currently available to developers.

I will be making another pass over the APIs I track on in the coming weeks, as well as add to the list each week as part of my monitoring.

If you see anything missing, that should be in there...let me know!

I Like Individually Priced API Resources That Flex and Scale

On a regular basis I review my API consumption to evaluate how I’m using various APIs, and what I’m paying for them. I depend on around 20 APIs to make API Evangelist work, and I need to make sure I’m using them to their fullest potential while also being mindful of budget.

As a part of my regular review, I am looking at the differences in pricing between three key services:

  • FullContact API - I use FullContact for all my company and individual contact intelligence. I go through phases of light or heavy use depending on the research projects I have going on. FullContact provides per API call rates depending on the endpoint and call volume, and limits me to four packages: Trying It Out (Free), Getting Started ($19/month), Gaining Traction ($99/month) and Rolling ($499/month)
  • Alchemy API - I use Alchemy API primarily to pull text content from blog posts, so I can use it internally for indexing. With Alchemy I get access to three packages: Free, Small Business ($250.00/month) and Basic ($800.00/month)
  • AWS APIs - I use AWS for all my compute, storage, database and DNS API services. With AWS I pay by the resource, bandwidth transfer and storage, and the other parts and pieces or specific actions in cloud computing. There are no packages, just modular API resources I use and get billed for.

I’m just one use case, but figured I’d share my thoughts on how I use these three API resources within my little world.

For FullContact, the jump from $19/month to $99/month isn't too bad. I'll lump all my processing together into a single month and get lots of work done, so I tend to toggle each month between these two tiers based upon my needs.
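That month-to-month toggle is easy enough to script. A minimal sketch, assuming hypothetical per-tier call quotas--FullContact's actual limits aren't listed above, only the prices:

```python
# Hypothetical tier quotas; the call volumes here are made up for illustration,
# only the tier names and prices come from FullContact's published packages.
TIERS = [
    ("Getting Started", 19, 6_000),    # (name, $/month, calls/month)
    ("Gaining Traction", 99, 60_000),
]

def cheapest_tier(planned_calls):
    """Pick the cheapest tier that covers this month's planned call volume."""
    for name, price, quota in TIERS:
        if planned_calls <= quota:
            return name, price
    # Over every quota: fall back to the largest tier.
    return TIERS[-1][0], TIERS[-1][1]

# Toggling month to month based on need:
print(cheapest_tier(2_500))   # light month -> ('Getting Started', 19)
print(cheapest_tier(40_000))  # heavy month -> ('Gaining Traction', 99)
```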

For Alchemy API I operate within the free tier and stick with the rate limit of 1,000 API calls per day. If a blog post doesn’t get pulled because I hit my limit, I queue it for another day when I have room within my daily limit. The jump from zero to $250.00 / month is really just too big of a jump for me to make.
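The queueing approach is simple to sketch. Here `extract_text` is a hypothetical stand-in for the actual Alchemy API call, and the logic just defers anything over the 1,000 call daily limit:

```python
from collections import deque

DAILY_LIMIT = 1000  # Alchemy API free tier limit

def extract_text(post):
    # Stand-in for the real Alchemy API text extraction call.
    pass

def process_posts(posts, calls_used_today=0):
    """Process as many posts as today's quota allows; queue the rest."""
    queued = deque()
    for post in posts:
        if calls_used_today < DAILY_LIMIT:
            extract_text(post)   # one API call per post
            calls_used_today += 1
        else:
            queued.append(post)  # retry on a day with room in the quota
    return queued

leftover = process_posts(range(1200))
print(len(leftover))  # 200 posts deferred to a later day's quota
```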

My Amazon Web Services bill runs between $250.00 and $1,000.00 per month, depending on how much traffic, harvesting, processing and other crazy stuff I'm doing. I have a buffet of compute, storage, database, IP address, monitoring, DNS and other cloud computing modules that I have developed and associated with pricing in my head, so that I can make decisions in the moment about whether I can afford to process a bunch of data, launch a new API or website, etc.

I have a few good friends in the API space who feel API rate limits stimulate creativity, innovation and work-arounds. I agree with that statement, and think every API provider has to understand its own consumers and make the decision it feels is best. But in the end, I personally like single API resource pricing based upon units that I can scale infinitely as needed in any moment. I am more likely to integrate services into my world when they are independent, bite size chunks, with pricing not restricted by other services or service tiers. Each module tends to have different value in my world, and I like to make independent decisions about how to use just that resource, disconnected from all the other resources.

The world is starting to look different when I depend on 10-20 APIs vs 1-2 APIs, and I can imagine that when I reach the point where I'm depending on 100-200 APIs, I will have an even greater need for API resources to be priced independent of other services or limiting pricing tiers.

Who Runs The Internet?

I came across a great infographic from ICANN titled Who Runs the Internet? I want to brush up on my own knowledge about all the key stakeholders in the Internet, so I typed up some of the text from the infographic, for my own benefit, as well as to make it a little more interactive.

Who Runs The Internet?

No One Person, Company, Organization or Government Runs the Internet

The Internet itself is a globally distributed computer network comprised of many voluntarily interconnected autonomous networks. Similarly, its governance is conducted by a decentralized and international multi-stakeholder network of interconnected autonomous groups drawing from civil society, the private sector, governments, academic and research communities, and national and international organizations. They work cooperatively from their respective roles to create shared policies and standards that maintain the Internet's global interoperability for the public good.

Here is how it works:

  • Operations & Services - Internet Operations span all aspects of hardware, software, and infrastructure required to make the Internet work. Services include education, access, web browsing, online commerce, social networking, etc.
  • Policies & Standards - Internet Policies are the shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet. Internet Standards enable interoperability of systems on the Internet by defining protocols, message formats, schemas, and languages
  • Open Debate - The formal and informal processes of debating policy and standard propositions in a multi-stakeholder model, using any variety of methods: in-person, Internet Drafts, public forums, publishing and many more
  • Multi-Stakeholders - Civil Society & Internet Users, the Private Sector, Governments, National & International Organizations, Research, Academic and Technical communities all have a say in how the Internet is run

Who is Involved:

  • IAB - Internet Architecture Board - Oversees the technical and engineering development of the IETF and IRTF
  • ICANN - Internet Corporation for Assigned Names and Numbers - Coordinates the Internet's systems of unique identifiers: IP addresses, protocol-parameter registries, and the top-level domain space (DNS root zone)
  • IETF - Internet Engineering Task Force - Develops and promotes a wide range of Internet standards dealing in particular with standards of the Internet protocol suite. Their technical documents influence the way people design, use and manage the Internet
  • IGF - Internet Governance Forum - A multi-stakeholder open forum for debate on issues related to Internet governance
  • IRTF - Internet Research Task Force - Promotes research on the evolution of the Internet by creating focused, long-term research groups working on topics related to Internet protocols, applications, architecture and technology
  • Governments and Inter-Governmental Organizations - Develop laws, regulations and policies applicable to the Internet within their jurisdictions; participate in multilateral and multi-stakeholder regional and international fora on Internet Governance
  • ISO 3166 MA - International Organization for Standardization, Maintenance Agency - Defines names and postal codes of countries, dependent territories, and special areas of geographic significance
  • ISOC - Internet Society - Assures the open development, evolution and use of the Internet for the benefit of all people throughout the world. Currently ISOC has over 90 chapters in around 80 countries
  • RIRs - 5 Regional Internet Registries - Manage the allocation and registration of Internet number resources, such as IP addresses, within geographic regions of the world: Africa (AFRINIC), Asia Pacific (APNIC), Canada & United States (ARIN), Latin America & Caribbean (LACNIC), and Europe, the Middle East & parts of Central Asia (RIPE NCC)
  • W3C - World Wide Web Consortium - Creates standards for the World Wide Web that enable an Open Web Platform, for example by focusing on issues of accessibility, internationalization, and mobile web solutions
  • Internet Network Operators Groups - Discuss and influence matters related to Internet operations and regulation within informal fora made up of Internet Service Providers (ISPs), Internet Exchange Points (IXPs) and others

Netflix API Is Much More Than A Public API

Netflix entered the final stages of shuttering its public API last week. It's been coming for a while now, starting in June of 2012, and is now official, with the platform no longer accepting new API registrations.

After reading about the changes to the Netflix Public API program on their blog, and hearing much of the news in response, everyone seems to file this away, along with the Twitter API--just another API platform screwing over its developers.

As I do, I wanted to take a step back, look at the bigger picture and try to understand what happened. On October 1st 2008, Netflix launched their public API, and they appear to have done everything right. They had a blog, solicited code samples from developers, accepted application submissions and even showcased developers' apps in a gallery. Netflix would even help promote your app to Netflix subscribers, and threw hackathons. The Netflix API team worked to improve API performance and communicated regularly, but really nothing that amazing happened.

There were applications like InstaWatcher and WhichFlicks (among others) developed on the API, but as Daniel Jacobson puts it, a thousand flowers didn't bloom. In these situations it's easy to blame the API provider, but developers didn't really step up and build anything that innovative and cool. So is this a failure of Netflix? A failure of developers to innovate? Or could it possibly be a third option: a failure of the API vision itself?

I would say the demise of the Netflix public API is equal parts Netflix, the developers, and the nature of the industry it exists in. It didn't take me long to look through the Netflix API blog, so I can tell they didn't put a lot into evangelizing the API. But I really can't find any innovation that occurred by developers as part of it, so I think us devs have to share some of the responsibility as well.

Several of the blog posts covering the news last week compared this to Twitter, which, for the untrained eye of the mainstream tech blogosphere, is easy to do. But Twitter is user generated content via one of the newest types of content platforms, while Netflix is heavily licensed and policed content from one of the oldest content platforms. I think expecting public API success from Netflix and / or developers was a lot to ask.

I love and believe in APIs, but I’m not delusional enough to think they will work magically everywhere they are applied. However, even with the closing of the public Netflix API, I consider Netflix an API success story. Look what they’ve done with their internal and partner APIs. They’ve managed to scale not just from the data center to the cloud, but globally and across 800+ devices--while also sharing this knowledge and wisdom with the public via their blog.

If that wasn't enough, they are also open sourcing much of the technology behind their approach:

  • eureka - AWS Service registry for resilient mid-tier load balancing and failover
  • RxJava - a library for composing asynchronous and event-based programs using observable sequences for the Java VM
  • Governator - A library of extensions and utilities that enhance Google Guice to provide: classpath scanning and automatic binding, lifecycle management, configuration to field mapping, field validation and parallelized object warmup
  • Priam - Co-Process for backup/recovery, Token Management, and Centralized Configuration management for Cassandra
  • edda - Service to track changes in your cloud
  • recipes-rss - RSS Reader recipes that use several of the Netflix OSS components
  • astyanax - Cassandra Java Client
  • karyon - The nucleus or the base container for Applications and Services built using the NetflixOSS ecosystem
  • netflix-graph - Compact in-memory representation of directed graph data
  • asgard - Web interface for application deployments and cloud management in Amazon Web Services (AWS)
  • Hystrix - A latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable
  • servo - Netflix Application Monitoring Library
  • frigga - Utilities for working with Asgard named objects
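Hystrix in particular is worth a closer look. It is a Java library, but the core idea it popularized--the circuit breaker--is simple enough to sketch in a few lines. The following is a minimal Python illustration of that pattern, not the Hystrix API itself; the class, thresholds and names are my own for demonstration:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: trips open after repeated failures,
    then allows a retry once a cooldown period has passed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, fallback):
        # While open, serve the fallback until the cooldown expires
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                return fallback()
            self.opened_at = None  # half-open: let one call through
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # trip the breaker open
            return fallback()
        self.failures = 0  # a success resets the failure count
        return result

# Example: a flaky remote call protected by the breaker
breaker = CircuitBreaker(max_failures=2, reset_timeout=60)

def flaky():
    raise IOError("remote service down")

def fallback():
    return "cached response"

print(breaker.call(flaky, fallback))  # "cached response", failure 1
print(breaker.call(flaky, fallback))  # "cached response", breaker trips open
```

The point of the pattern is exactly what Netflix describes: when a dependency starts failing, stop hammering it and degrade gracefully, rather than letting the failure cascade through the whole system.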

When measuring the success or failure of API initiatives, we can't use the same yardstick in all scenarios. When you look at the knowledge, wisdom and code that has come out of Netflix, there is no way you can say their API initiative is anything but a success. I don’t see Netflix as a case study in how to stream movies over the web via public APIs, but as a deeply important experiment in how to deliver licensed content to over 800 devices, via the next generation of APIs. That probably isn't an edge case--it actually represents where we all might be headed in the near future.

Let’s not get caught up in the recent deprecation of the Netflix public API. There is so much going on! Let's get studying some of the knowledge and technology coming out of Netflix. I know it's my motivation for writing this post, and doing this research.

MySQL, PostgreSQL and RDS to API With Emergent One

There are numerous companies, with existing IT infrastructure, who are looking to deploy APIs in 2013. These companies will be deploying APIs using their existing technology teams, or depending on one of the 17 API management service providers available.

This market is ripe for the 3Scales of the world to provide valuable services to. But for many companies, organizations and government agencies that need to deploy APIs this year, API deployment will be about taking an existing database, or multiple databases, and opening them up to the public, partners, 3rd party developers--or possibly just providing access to a remote department or branch of the company--in the easiest way possible.

Not all companies will have the resources, or the need to deploy full blown API programs.  They just need a dead simple database to API solution that will quickly expose their data over the web in a secure way. Until recently this solution wasn’t available in the cloud, but a new API service provider called Emergent One has stepped up to fill in the gap.

Emergent One is a cloud service that allows you to connect to your company's MySQL, PostgreSQL or Amazon RDS databases, then generate a REST API from your existing data stores.

Using Emergent One you can define a new API, connect to your database using an agent or direct connection, then define your API resources complete with metadata, sub-resources, in-line resources, fields and computed fields. Pretty much everything you will need to make a clean web API from a database.

Once you have your API resources defined, Emergent One provides you with a developer portal around these APIs, and the ability to provide developer registration, then issue keys they can use to access your data--providing the openness you desire, while keeping things secure and within your control.

The developer portal is complete with documentation, explorer console and code libraries for iOS, Android and in Ruby. Emergent One allows you to bind your own DNS to your endpoints, and also provide paid plans, complete with billing management.
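What a service like this automates is the glue code a development team would otherwise write by hand for every table. As a rough sketch of the underlying idea--the table, fields and route are made up for illustration, not anything Emergent One actually generates--here is what exposing one database row as a JSON resource looks like:

```python
import json
import sqlite3

def get_resource(conn, table, resource_id):
    """Return one row of `table` as a JSON document, the way a
    generated endpoint like GET /films/1 might. The table name is
    interpolated for brevity; a real generator would whitelist it."""
    conn.row_factory = sqlite3.Row
    row = conn.execute(
        "SELECT * FROM %s WHERE id = ?" % table, (resource_id,)
    ).fetchone()
    if row is None:
        return json.dumps({"error": "not found"}), 404
    return json.dumps(dict(row)), 200

# Hypothetical data standing in for an existing company database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE films (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO films VALUES (1, 'Example Film')")

body, status = get_resource(conn, "films", 1)
print(status, body)  # 200 {"id": 1, "title": "Example Film"}
```

Multiply that by every table, relationship, auth check and rate limit, and you can see why handing the whole job to a platform is attractive for teams without spare engineering resources.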

Emergent One is a perfect service for any company looking to develop and manage APIs from its local databases. The only thing I would say is missing is support tooling to help you interact with your developers after you launch your API. But I'm sure it's coming.

I’m happy to see the Emergent One platform has a freemium tier giving you up to 5,000 API requests a month, a business tier for up to 2.5M requests a month, as well as an enterprise tier with sales support--covering every level of need. The freemium tier will be critical for businesses to play with the platform, and get to know it better.

The Emergent One admin needs a little polishing--it could use a more dashboard-like feel for the home page--but overall the Emergent One team nails it, providing dead simple, yet robust API deployment from your MySQL, PostgreSQL and Amazon RDS databases, in a way that anyone can put to use.

If your company is looking to deploy an API using a database, and doesn't have the resources internally to make it happen, I definitely recommend taking a look at Emergent One.

The APIs That I Depend On For My Business

I maintain an active list, in Evernote, of the online services I depend on for my business. Each month I spend an hour or two maintaining this list, making sure it is complete, and changing my logins when appropriate. As a recovering IT guy I maintain infrastructure not just for myself, but also for Audrey Watters--so I keep good tabs on the various services I use.

While going through my services list this month, I added a new section to it, and started tracking whether I depend on each service for its API. I have enough automated jobs running on top of APIs that I need to keep good track of which APIs I depend on. Here are some of the APIs I depend on to keep my business operational.

First I depend on a couple of key Google APIs:

  • Gmail - Integrate my daily emails, as well as email blasts, with my administrative system
  • Google Contacts - Keep business and individual profiles in my admin system in sync with my daily Google Contacts activity
  • Google Calendar - Publish hackathon calendars to Google Calendar, as well as keep conferences, meetups and other events I pull through APIs and curate in sync
  • Google Docs - Publish copies of blog posts to Google Docs, as well as versions of pages from my content management system
  • Google Sites - All of my research is in Google Sites, so I tend to publish lists of curated news, blog posts and other research to wiki pages under specific projects

Next, I would say Amazon Web Services delivers some pretty critical APIs I can't live without:

  • Amazon EC2 - I deploy and shut down EC2 instances for the various jobs I run for API Evangelist. All APIs are managed on AWS EC2
  • Amazon S3 - All heavy objects in my systems are stored at Amazon S3, including photos, PDFs, presentations and video
  • Amazon Route 53 - I use AWS Route 53 to manage the underlying DNS for all my applications and sites across multiple domains

Then there are an assortment of other APIs I use throughout my web sites and applications:

  • AlchemyAPI - I use Alchemy for content, keyword and author extraction on articles and site pages that I curate as part of my daily routine
  • Crunchbase - I pull company profiles from Crunchbase and use them in my research and profiling for API Evangelist
  • EventBrite - I pull hackathons, meetups and conferences from EventBrite and use them in my admin system
  • Evernote - I do all my note taking and recording of thoughts in Evernote; there are some folders I keep in sync with my admin system
  • Flickr - I've historically published a lot of public images to Flickr for SEO purposes, so images and video from many of my blog posts and events get stored at Flickr via the API in my admin system
  • Foursquare - I use Foursquare as a journal, pulling the timeline into my admin system and applying it as a framework to my writing and traveling
  • Github - All my stories use Gists to display code, and some of my larger productions have full repositories that I access via the command line and via the API
  • Paypal - I handle subscriptions and white paper purchases via Paypal
  • Pinboard - All my curation runs through Pinboard. Anything I bookmark while reading feeds or on the open web gets bookmarked with Pinboard, then pulled into my admin system with the API
  • ProgrammableWeb - I use ProgrammableWeb's API to pull new APIs into my curation system
  • Stack Exchange - I use Stack Exchange to monitor API activity on the forums and keep track of discussion counts for various APIs
  • Tumblr - I assemble some curated posts and summaries and publish them to Tumblr via the API
  • Twitter - Twitter is central to my API monitoring, ranking and curation system. I depend on both the REST and Streaming APIs
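To give a flavor of how lightweight most of these integrations are, here is a sketch of building a request against Pinboard's v1 posts/recent endpoint, which is how bookmarks get pulled into an admin system like mine. The token value is a placeholder, and the helper function is my own--only the endpoint and parameter names come from Pinboard:

```python
from urllib.parse import urlencode

def pinboard_recent_url(auth_token, count=15):
    """Build the request URL for Pinboard's v1 posts/recent endpoint.
    auth_token takes the form "username:HEXTOKEN" from your settings page."""
    params = urlencode({
        "auth_token": auth_token,
        "format": "json",   # Pinboard defaults to XML otherwise
        "count": count,
    })
    return "https://api.pinboard.in/v1/posts/recent?" + params

# Fetching this URL (with a real token) returns recent bookmarks as JSON
url = pinboard_recent_url("username:XXXXXX")
print(url)
```

A nightly job that fetches this URL, parses the JSON, and writes each bookmark into a local database is all the "integration" most of the services on this list really require.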

I depend on these APIs to run API Evangelist, API Voice and The API Stack as well as support the other research and consulting that I do. Some of these services I pay for, some of them I use for free. Usually if a service sticks around in my world for more than 3 or 4 months I pay for some sort of premium account or access.

I’m sure I depend on a lot of APIs, partly because this is my game, I’m an API Evangelist. But it is also because APIs provide me with the data and resources I need to operate, and as a programmer I’m able to quickly put APIs to use for my business.

Tracking the APIs I depend on will be a regular part of my IT strategy, and I’m even going to publish this as a public page on my websites--showcasing what APIs have done for my business.

Helping Voters Register with the Cost of Freedom Project

Last week during the Hackathon for Social Good in New York City, I was fortunate enough to be connected with Faye Anderson (@andersonatlarge) of the Cost of Freedom Project. The Hackathon for Social Good was put on by WebVisions, using the hackathon model to further projects that are making a social impact in our lives.

The Cost of Freedom Project is centered around providing the information and resources U.S. citizens need to be able to vote in the 2012 elections, primarily targeting the 5 states that have strict laws requiring voters to show a government issued photo ID in order to vote.

When it comes to making a social impact, Faye’s project is a shining example, and I couldn’t ignore her need for a hacker to move her project forward. After hearing her pitch, I joined her project team, which included Lori Widelitz-Cavallucci (@lwcavallucci), a UX designer, and Jack Aboutboul (@jackfoundation), developer evangelist from Twilio.

As Lori and Faye got to work on the site layout and user experience I started setting up the back-end that would be necessary to run the app:

  • Amazon EC2 instance running Fedora Linux, the Apache web server and PHP 5.3
  • Twitter Bootstrap
  • DNS for Domain Setup

By the end of the Hackathon we had a site layout, with all pages set up with initial content. All the site content is editable from a Google spreadsheet, allowing Faye to maintain control over her content and crowdsource its management using the spreadsheet interface.
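The spreadsheet-as-CMS trick works because a published Google spreadsheet can be fetched as plain CSV over HTTP. As a rough sketch of the consuming side--the column names here are invented for illustration, not the actual sheet we built--the site just needs to index the rows so templates can look content up:

```python
import csv
import io

# A published Google spreadsheet can be exported as CSV from a URL like:
#   https://docs.google.com/spreadsheets/d/<SHEET_ID>/export?format=csv
# Here a literal string stands in for the fetched response body.
sheet_csv = """page,section,content
home,intro,Welcome to the Cost of Freedom Project
faq,voter-id,Five states require a government issued photo ID
"""

def load_content(csv_text):
    """Index the sheet rows by (page, section) so page templates can
    look up editable content the way a database-backed CMS would."""
    content = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        content[(row["page"], row["section"])] = row["content"]
    return content

content = load_content(sheet_csv)
print(content[("home", "intro")])  # Welcome to the Cost of Freedom Project
```

Anyone Faye shares the spreadsheet with can edit a cell, and the site picks up the change on the next fetch--no admin interface to build or maintain.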

The site uses CityGrid to pull vital record offices by state, county voter registration and local DMV offices when a user enters their city and zip code.

The Cost of Freedom Project is a great example of what you can pull together at a hackathon, but also the wide range of apps you can build using CityGrid data.  Sites do not have to be local directories, CityGrid places data can be used to build informational sites that add value to almost any process.

If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.