{"DNS API"}

DNS API Blog Posts

These are posts from the API Evangelist blog that are focused on DNS APIs, allowing for a filtered look at my analysis on the topic. I rely on these posts, along with the curated news, organizations, APIs, and tools, to help paint me a picture of what is going on.


Connecting My API Logging With My API DNS Using CloudFlare Page Rules API

I'm spending time learning more about what my DNS provider CloudFlare offers when it comes to securing my APIs. To facilitate this, I am playing around with how I can use my Apache log files to help drive the definition of DNS security using the CloudFlare API. I guess this is kind of a real-time, reactive approach, but hopefully it eventually becomes a proactive solution for quantifying and defining the frontline of my API operations.

I originally embarked on this endeavor to help me manage some of the shift in the API Evangelist network, and to help mitigate 404 errors across my network of API research. I recently migrated what I call my API Stack research to a new domain (stack.network), and I am anticipating quite a few broken links in stories over the years that reference this area of my work. I have been trying to attack this at the content level by rewriting links as I find them, but I'm thinking I could automate this using my Apache log files and by setting up Page Rules using the CloudFlare API as well.

Once I started sifting through the Apache log files, I began to see other traffic patterns that were more in the area of security than the stability of my platform and its linkages. As with any type of log file, it is taking some time for all of this to come into focus for me. I will have to spend a great deal of time evaluating traffic from specific IP ranges, user agents, etc., but I know I should be able to quickly establish some rules at the DNS level that will help me better lock down the frontline of my API traffic.

Right now I am just keeping my Apache log files backed up to Amazon S3, to help alleviate server load and to keep them around for historical purposes. I have built a log file viewer for sifting through my API traffic, and at the moment I'm manually creating page rules in CloudFlare, but it is something I hope to automate via the CloudFlare API once I have established an awareness of the common types of rules I will be creating. Once I evolve to that point I will write about it again, and hopefully talk more about how API access to the logging for my API traffic, in conjunction with API access at the DNS level, is helping me better define and secure the frontline of my API operations.
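To give a sense of what that automation might look like, here is a minimal sketch that scans an Apache access log for 404s and creates a forwarding page rule for each broken path, assuming CloudFlare's v4 page rules endpoint; the zone ID, API token, and domain values are placeholders you would swap for your own:

    # Sketch: turn 404s in an Apache access log into CloudFlare forwarding page rules.
    # The zone ID, API token, and domains are placeholders, not working values.
    import re
    import requests

    CF_API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "YOUR_ZONE_ID"
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN",
               "Content-Type": "application/json"}

    def broken_paths(log_file):
        """Yield unique request paths that returned a 404 in a common-format Apache log."""
        seen = set()
        pattern = re.compile(r'"GET (\S+) HTTP/1\.\d" 404 ')
        with open(log_file) as handle:
            for line in handle:
                match = pattern.search(line)
                if match and match.group(1) not in seen:
                    seen.add(match.group(1))
                    yield match.group(1)

    def create_forwarding_rule(old_path):
        """Create a page rule that 301s an old path over to its new home on stack.network."""
        rule = {
            "targets": [{"target": "url",
                         "constraint": {"operator": "matches",
                                        "value": "apievangelist.com" + old_path}}],
            "actions": [{"id": "forwarding_url",
                         "value": {"url": "http://stack.network" + old_path,
                                   "status_code": 301}}],
            "status": "active",
        }
        return requests.post(CF_API + "/zones/" + ZONE_ID + "/pagerules",
                             headers=HEADERS, json=rule)

    for path in broken_paths("access.log"):
        create_forwarding_rule(path)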


Messente API: Always Use A Backup DNS Solution

I found the DNS implementation over at the Messente SMS API interesting, and worth sharing for deeper evaluation. I've been thinking heavily over the last couple of weeks about the various approaches API providers take when crafting their domains, or subdomains, for API access.

During some research time today I stumbled across the Messente SMS API which opts to provide two domains for making HTTP(S) requests of their API:

  • api2.messente.com
  • api3.messente.com

Messente provides a little disclaimer to handle the developer side of manually load-balancing these API calls:

These two domains have the same final destination regarding the API functions. In order to ensure that your requests always reach Messente API services, please use one of them as primary and the second one as backup. Both API domains work as equal, but in case of any unexpected downtime with one of them (HTTP 5xx), the other one must be used on client side.
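In practice, that disclaimer just means the failover logic lives in your HTTP client. A minimal sketch of what that might look like in Python (the /send_sms/ path and payload fields below are hypothetical placeholders, not Messente's documented interface):

    # Client-side failover between Messente's two API domains.
    # The endpoint path and parameters are hypothetical placeholders.
    import requests

    API_HOSTS = ["https://api2.messente.com", "https://api3.messente.com"]

    def send_sms(payload):
        """Try the primary domain first, fall back to the backup on a 5xx or connection error."""
        last_error = None
        for host in API_HOSTS:
            try:
                response = requests.post(host + "/send_sms/", data=payload, timeout=10)
                if response.status_code < 500:
                    return response  # anything below 5xx means the API itself answered
                last_error = RuntimeError("HTTP %s from %s" % (response.status_code, host))
            except requests.RequestException as error:
                last_error = error  # network-level failure, move on to the backup domain
        raise last_error

    # send_sms({"to": "+15555550100", "text": "hello"})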

I'm not sure this manual approach to providing API endpoints is the optimal path when delivering on the stability of your API, let alone the location of your resources, but it does provide an interesting contrast in the perspectives that are available out there in API-land.

Sometimes I feel like I should rebrand as an API Anthropologist, as I find the approaches of my fellow API providers more interesting than what I'd expect to find in a mature API landscape. This reflects the importance of showcasing what is going on, to help bridge to where we should be, rather than focusing exclusively on where we should be. (deep shit, man)


Increasing The Focus On APIs In Higher Education Is Important

Maybe I'm a little biased at the moment, after participating in a Reclaim Your Domain hackathon with some really smart folks from multiple universities, as well as working on my first white paper on APIs in higher education, but I feel pretty strongly that higher education institutions focusing on APIs will be extremely important over the next two years.

I'm constantly working to understand the big picture of the emerging API economy, the importance of the government API development phase, and what is next for the US government API strategy, while also acknowledging that we need the enterprise to continue waking up to the potential of APIs. I think, right along with government and the enterprise, another important piece of the overall API puzzle is increasing the focus on APIs in higher education.

The University Argument
If I am making a pitch to a university, I would tell my Amazon API story, and how APIs can open up access to institutional resources, making them more accessible across campus, and externally with partners and vendors. APIs are how startups, SMBs, the enterprise, and the government are increasing efficiency and agility, and delivering the web and mobile apps that are part of a larger, healthier digital strategy vision. Top universities like the University of Washington, UC Berkeley, and Brigham Young University are leading the way with modern API platforms that are changing the way they do business on campus--take a look at the 250+ APIs from BYU to get an idea.

The Student Argument
If I am making the pitch for why students should care about APIs, during the most formative years of their lives, I would point out that APIs are already touching every aspect of their lives, from the websites they visit, to the mobile phone in their pocket. If your college years are about preparing you for the world, APIs need to be front and center in your education, giving you the basics, but also allowing you to peel back the black curtain on the technology that is slowly taking over our world, and establish skills that will give you an edge in your career.

Web Literacy
Computers and the Internet are part of the higher education experience, and it is increasingly important that ALL students obtain at least a basic level of web literacy to be able to operate on the web. Understanding the workings of the Internet, like HTTP, SSL, URLs, DNS, email, and the fundamentals of privacy, security, and terms of service, is essential to the education of every individual. While they may not retain everything they learn, like the rest of their education, it will provide a fundamental base for them to work from for the rest of their lives.

Domain Literacy
Students today are faced with understanding who they are in the big new world they are thrust into after being at home with their parents, and that includes understanding and expanding their digital identity. What is the difference between university, commercial, and government web and mobile applications? Students need to have a sense of what is theirs, and what belongs to a company or institution, and understand when some information or content is something they should personally own. Domain literacy is not just about learning about online domains like .com and .org, it is about understanding your own domain on social networks like Facebook and Twitter, or your student information system account and class forums, and the possibilities that are opened up when you are in control of your own online domain.

Portfolio Ownership
Every classroom, project, and program experience for a student should be considered a potential candidate for addition to a student's portfolio. Contributing to, and managing, a portfolio in 2014 is done online, allowing a portfolio to potentially be spread across campus, corporate, and other 3rd party sites, platforms, and systems. Educating students about owning their own content, data, and other information, and about the opportunities around data portability and APIs in helping them assert ownership and control over their portfolio, is essential to education in the digital age. I'm not talking about a traditional e-portfolio, I'm talking about defining, understanding, and aggregating the best of what you do online during your college years--in preparation for entering the real world.

Workplace Ready
Higher education is about preparing students for their role in society, and hopefully part of that is being a positive contributor to the workplace, and the larger workforce. Modern web APIs are born out of the most tech-savvy employees developing the workarounds, and the access to resources, they need to get their jobs done and solve the problems they face. Whether it is pulling Census Bureau data and populating a spreadsheet, or migrating the company's blog from Blogger to a dedicated WordPress instance, APIs are central to the skills that tomorrow's workforce will need. APIs aren't always about developing a website or mobile application; they can be as simple as migrating form entries from an online form and populating a Google Spreadsheet using Zapier. If one of our goals is to make sure students are prepared for the workforce, APIs have to be a regular part of their educational diet, ensuring that when they hit the ground as part of the workforce, API literacy is the default.

Digital Citizen
APIs are already touching every aspect of life, from looking for a restaurant on Yelp, to paying your taxes using TurboTax. Not every individual needs to understand the inner workings of APIs and OAuth, but they need to have basic API literacy, so they know APIs exist, and that they can get their photos and other information out of a service they use. Every citizen needs to understand the apps on their mobile phone, their relationship to their online accounts, who has access to their personal information via OAuth and APIs, and how they can manage these settings. APIs are playing an ever increasing role in how citizens interact with government, allowing them to participate in the political process, access student aid, pay their taxes, and get access to their energy and healthcare data. Let's prepare our students for the future.

In a perfect world, students would be aware of APIs by the time they first set foot on campus. Ideally they are already exposed to them in their daily online interactions, or someday through the FAFSA process, but at the very least it should be up to the university to expose them to APIs when it comes to class registration and student information systems, or ideally as part of the school's Domain of One's Own program.

My argument isn't just about colleges and universities getting on board with APIs across all campus operations; it is about faculty and administrations becoming API literate, and exposing students to APIs as part of every interaction. You don't like the student information system or class schedule when you come in as a freshman? OK, make it better. Need a list of students for a class? Here is the Google Spreadsheet to class schedule API connector. Want to bring your posts from Tumblr into the classroom? Use the Tumblr API to get your content out, and publish it where you want using Zapier. Let's teach them to solve the everyday problems they face by applying technology in sensible ways.

I've seen some amazing movement during my last four years of evangelizing APIs, across multiple industries, and within city and federal government, just by educating a handful of energetic entrepreneurs and civic activists and turning on the API light--resulting in incremental change in the way companies do business, and in how organizations and government operate. Imagine if we turned on a whole generation of citizens, helping them understand that this is the way business is done, and how personal, corporate, organizational, and institutional resources are accessed, shared, and managed?

Similar to my efforts on APIs in the federal government over the last two years, I’m going to turn up the focus on APIs in higher education. By 2016, I want APIs to be ubiquitous at higher educational institutions around the globe.


API Virtual Stack Composition Like The Absolut Drinks Data API

If you read my blog regularly, you know I am constantly pushing the boundaries of how I see the API space, and sometimes my ramblings can be pretty out there, but API Evangelist is how I work through these thoughts out loud, and hopefully bring them down to a more sane, practical level that everyone can understand.

My crazy vision for the day centers around virtual API stack composition, as beautiful as the Absolut Drinks Database API. OK, before you can even begin to get up to speed with my crazy rant, you need to be following some of my rants around using virtual cloud containers like we are seeing from Docker, AWS, and OpenShift, and you need to watch this video from APIStrategy & Practice about the Absolut Drinks Database API deployment.

Ok, you up to speed? Are you with me now?

Today, as I was playing around with the deployment of granular API resources using AWS CloudFormation, I was using their CloudFormer tool, which allows me to browse through ALL of my AWS cloud resources (i.e. DNS, EC2 servers, S3 storage, RDS databases), and assemble them into a CloudFormation template, which is just a JSON definition of the stack I'm going to generate.
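For anyone who hasn't seen one, a CloudFormation template really is just a JSON description of the resources in a stack. A stripped-down sketch of one, bundling a single server with the DNS record that points at it (the AMI ID and domain are placeholders):

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "Minimal stack: one EC2 server plus a Route 53 record pointing at it",
      "Resources": {
        "ApiServer": {
          "Type": "AWS::EC2::Instance",
          "Properties": { "ImageId": "ami-00000000", "InstanceType": "t1.micro" }
        },
        "ApiDns": {
          "Type": "AWS::Route53::RecordSet",
          "Properties": {
            "HostedZoneName": "example.com.",
            "Name": "api.example.com.",
            "Type": "CNAME",
            "TTL": "300",
            "ResourceRecords": [ { "Fn::GetAtt": [ "ApiServer", "PublicDnsName" ] } ]
          }
        }
      }
    }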

Immediately I think of the presentation from Absolut, and how they spent years assembling the image and video parts and pieces that went into the 3500 drinks they wanted available via their API, for developers to use when developing interactive cocktail apps. They had 750 images, and video clips, with a combination of 30K mixing steps, that went into the generation of the 3500 drink combinations. * mind blown *

Now give me this same approach, but for composing virtual API stacks instead of cocktails. With this approach you could define individual API resources, such as a product API or a screen capture API. These would all be independent API resources, with open source server and client-side code, an openly licensed interface via API Commons, and virtual container definitions for both AWS CloudFormation and OpenShift.

Imagine if I had hundreds of high value, individual API resources available to me when composing a virtual stack. I could easily compose exactly the stack my developers need, made up of new and existing API resources. I wouldn't be restricted to building directly on top of existing data stores or APIs; I could deploy external API deployments that depend on sync to stay up to date, providing the performance levels I demand from my API stack--I could mix virtual API stacks, like I would a cocktail.

Anyhoooo, that is my rant for now. I'll finish doing the work to deploy AWS CloudFormation and OpenShift containers for my screen capture API, rounding out all the architectural components I outlined in my API operational harness, and then rant some more.

Thanks for reading my rant. I hope it made some sense! ;-)


The APIs I Depend On To Run API Evangelist

I maintain an active list of the online services I depend on for my business, using Evernote. Each month I spend an hour or two maintaining this list, making sure it is complete and actively changing my logins when appropriate.

I saw the recent Heartbleed SSL situation as an opportunity to move forward some of my IT practices, including using 1Password to manage all of my accounts, and better profiling which APIs I'm consuming. This gave me an opportunity to update my list of APIs that I depend on, adding about 4 or 5 new ones.

First I depend on a couple of the core Google APIs:

Gmail - Integrate my daily emails, as well as email blasts with my administrative system
Google Contacts - Keep business and individual profiles in my admin system in sync with my daily Google Contacts activity.
Google Calendar - Publish hackathon calendars to Google Calendar as well as keep conferences, meetups and other events I pull through APIs and curate in sync
Google Docs - Publish copies of blog posts to Google Docs, as well as versions of pages from my content management system to Google Docs
Google Sites - All of my research is in Google Sites. So I tend to publish lists of curated news, blog posts and other research to wiki pages under specific projects

Next, I would say Amazon Web Services delivers some pretty critical APIs I can't live without:

Amazon EC2 - I deploy and shutdown various EC2 instances for various jobs I run for API Evangelist. All APIs are managed on AWS EC2
Amazon S3 - All heavy objects in my systems are stored at Amazon S3 including photos, PDFs, presentations and video
Amazon Route 53 - I use AWS Route 53 to manage the underlying DNS for all my applications and sites across multiple domains

Then there are an assortment of other APIs I use throughout my web sites and applications:

3Scale - I depend on 3Scale API infrastructure APIs to remotely manage different aspects of my API management workflow
AlchemyAPI - I use Alchemy for content, keyword and author extraction on articles and site pages that I curate as part of my daily routine
AngelList - I pull company profiles from AngelList and use them in my research and profiling for API Evangelist
Bitly - I manage most of my shortened URLs for tracking link traffic across the API Evangelist network using Bitly
Crunchbase - I pull company profiles from Crunchbase and use them in my research and profiling for API Evangelist
EventBrite - I pull hackathons, meetups and conferences from EventBrite and use them in my admin system
Evernote - I do all my note taking and recording of thoughts in Evernote, and there are some folders I keep in sync with my admin system
Flickr - I've historically published a lot of public images to Flickr for SEO purposes, so many of my blog posts, and events where I record a lot of images and video, get stored at Flickr using the API in my admin system
Foursquare - I use Foursquare as a journal, pulling the timeline into my admin system and applying it as a framework for my writing and traveling
Full Contact - I use FullContact for building out profiles of individuals in my company CRM system, which helps me understand the public profiles of people on networks like Twitter, LinkedIn and Github
Github - All my stories use Gists to display code, and some of my larger productions have full repositories that I access via the command line and via the API
Nimble - I use Nimble to manage my CRM externally. It offers some features that simplify CRM management for specific projects or groups. Sometimes I set up CRM systems here for customers.
Paypal - I handle subscriptions and white paper purchases via Paypal
Pinboard - All my curation runs through Pinboard. Anything I bookmark while reading feeds or on the open web gets bookmarked with Pinboard, and then I pull it into my admin system with the API
ProgrammableWeb - I use ProgrammableWeb's API to pull new APIs into my curation system
Stack Exchange - I use Stack Exchange to monitor API activity on the forums and keep track of discussion counts for various APIs.
Tumblr - I assemble some curated posts and summaries and publish to Tumblr via the API
Twitter - Twitter is central to my API monitoring, ranking and curation system. I depend on the REST and Streaming APIs

I depend on these APIs to run API Evangelist, as well as support the other research and consulting that I do. Some of these services I pay for, some of them I use for free.  

I’m sure I depend on a lot of APIs, partly because this is my game, I’m an API Evangelist, but it is also because APIs provide me with the data and resources I need to operate.

Tracking the APIs I depend on is a regular part of my IT strategy, and eventually I am going to publish this as a public page on my websites--showcasing what APIs have done for my business.


Common Building Blocks of Cloud APIs

I've been profiling the API management space for almost four years now, and one of the things I keep track of is what some of the common building blocks of API management are. Recently I've pushed into other areas like API design, integration, and payment APIs, trying to understand the common elements providers are using to meet developer needs.

Usually I have to look through the sites of leading companies in a space, like the 38 payment API providers I'm tracking on, to find all the building blocks that make it up, but when it came to cloud computing it was different. While there are several providers in the space, there is but a single undisputed leader--Amazon Web Services. I was browsing through AWS yesterday and I noticed their new products & solutions menu, which I think has a pretty telling breakdown of the building blocks of cloud APIs.

Compute & Networking

Compute - Virtual Servers in the Cloud (Amazon EC2)

Auto Scaling - Automatic scaling service (Auto Scaling)

Load Balancing - Automatic load balancing service (Elastic Load Balancing)

Virtual Desktops - Virtual Desktops in the Cloud (Amazon WorkSpaces)

On-Premise - Isolated Cloud Resources (Amazon VPC)

DNS - Scalable Domain Name System (Amazon Route 53)

Network - Dedicated Network Connection to AWS (AWS Direct Connect)

Storage & CDN

Storage - Scalable Storage in the Cloud (Amazon S3)

Bulk Storage - Low-Cost Archive Storage in the Cloud (Amazon Glacier)

Storage Volumes - EC2 Block Storage Volumes (Amazon EBS)

Data Portability - Large Volume Data Transfer (AWS Import/Export)

On-Premise Storage - Integrates on-premises IT environments with Cloud storage (AWS Storage Gateway)

Content Delivery Network (CDN) - Global Content Delivery Network (Amazon CloudFront)

Database

Relational Database - Managed Relational Database Service for MySQL, Oracle, SQL Server, and PostgreSQL (Amazon RDS)

NoSQL Database - Fast, Predictable, Highly-scalable NoSQL data store (Amazon DynamoDB)

Data Caching - In-Memory Caching Service (Amazon ElastiCache)

Data Warehouse - Fast, Powerful, Fully Managed, Petabyte-scale Data Warehouse Service (Amazon Redshift)

Analytics

Hadoop - Hosted Hadoop Framework (Amazon EMR)

Real-Time - Real-Time Data Stream Processing (Amazon Kinesis)

Application Services

Application Streaming - Low-Latency Application Streaming (Amazon AppStream)

Search - Managed Search Service (Amazon CloudSearch)

Workflow - Workflow service for coordinating application components (Amazon SWF)

Messaging - Message Queue Service (Amazon SQS)

Email - Email Sending Service (Amazon SES)

Push Notifications - Push Notification Service (Amazon SNS)

Payments - API based payment service (Amazon FPS)

Media Transcoding - Easy-to-use scalable media transcoding (Amazon Elastic Transcoder)

Deployment & Management

Console - Web-Based User Interface (AWS Management Console)

Identity and Access - Configurable AWS Access Controls (AWS Identity and Access Management (IAM))

Change Tracking - User Activity and Change Tracking (AWS CloudTrail)

Monitoring - Resource and Application Monitoring (Amazon CloudWatch)

Containers - AWS Application Container (AWS Elastic Beanstalk)

Templates - Templates for AWS Resource Creation (AWS CloudFormation)

DevOps - DevOps Application Management Services (AWS OpsWorks)

Security - Hardware-based Key Storage for Regulatory Compliance (AWS CloudHSM)
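Every one of these building blocks is ultimately just an API call away. A quick sketch using boto3, the current AWS SDK for Python (an anachronism for a 2014 list, but it makes the point), touching three of the services above:

    # Each AWS building block is reachable through the same style of simple API call.
    # Assumes AWS credentials are already configured in the environment.
    import boto3

    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")
    route53 = boto3.client("route53")

    # Storage: list every S3 bucket in the account.
    print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])

    # Compute: count the EC2 instances currently registered.
    reservations = ec2.describe_instances()["Reservations"]
    print(sum(len(r["Instances"]) for r in reservations))

    # DNS: list the Route 53 hosted zones.
    print([zone["Name"] for zone in route53.list_hosted_zones()["HostedZones"]])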

The reason I look at these spaces in this way is to better understand the common services API providers are offering that are really making developers' lives easier. Assembling a list of the common building blocks allows me to look at the raw ingredients that make things work, and not get hung up on just companies and their products.

There is a lot to be learned from API pioneers like Amazon, and I think this list of building blocks provides a lot of insight into the API driven resources that are truly making the Internet operate in 2014.


What Is The Next Phase Of APIs?

I've been polishing my version of the history of web APIs since I started API Evangelist. Through my research it became clear that the world of web APIs has evolved through several key phases that have gotten us to where we are, and that were essential in making the API economy a viable opportunity. So far my history tracks five key phases:

  • Commerce - The first wave of web APIs came from commerce pioneers like Salesforce, eBay and Amazon deploying APIs to make commerce more distributed.
  • Social - Early pioneers like Flickr, Delicious, Facebook and Twitter have made the Internet social by default using web APIs.
  • Business - As APIs evolved API management providers like Mashery, 3Scale and Apigee have standardized the business approach of leading APIs, delivering tools and services that other API providers can put to use.
  • Cloud - Amazon forever changed the way we compute using APIs, proving that we could deploy essential global infrastructure like compute, storage and DNS using web APIs.
  • Mobile - The final piece of the puzzle was the mobile computing device, ushered in by Apple with the iPhone and followed up by Google with Android. Mobile phones will forever change how we interact, with APIs delivering the essential resources we need to make mobile possible.

In my opinion, the 2014 API economy wouldn't be possible if APIs hadn’t developed and evolved through these stages. Commerce, social, business, cloud, mobile are all essential to a thriving API economy. Sure there are other APIs in other genres that fill in the cracks, but these five areas are the pillars that not only showed that APIs are viable, but will also be the pillars that the API economy rests on from here forward.

As I track on the API space, I'm trying to understand where we are at, and where we are going, hoping to identify the next phases of API history. I'm keeping an eye on trends like aggregation, realtime, data, BaaS, reciprocity, single page apps, the Internet of Things (IoT), and other areas, trying to understand what is next. While I think IoT is definitely the most compelling and seems to be moving the fastest, I think we need to step back and be careful not to look at this through a technological lens.

I identified early on that this world of APIs wasn't going to be all about the tech. From an API provider, consumer or analyst viewpoint, we should not only consider the technology of APIs, but also remember that the business of APIs is essential to everything that happens. While I think there will be an incredible amount of innovation from startups when it comes to API deployment in new areas like realtime and reciprocity, I think one of the phases we are in the middle of right now is the enterprise phase.

In the last month I've talked with more Fortune 500 companies about their API strategy than any other group--well, maybe the same as government (a parallel phase?). I've talked with familiar enterprise players like AT&T, but have also had conversations with newer entrants like Adobe. There is always a place for startups to innovate with APIs, pushing us into new areas, but it is going to take the resources of the Fortune 500 to truly make the API economy a reality when it comes to the global economy.

As I see it, all the phases I've described don't happen one after another; they overlap and feed off each other. Much like we couldn't fully realize the potential of commerce without cloud computing and mobile, I don't think the API economy will be fully realized until major companies have a solid API program and a working API strategy. As with other phases of API development, this won't all be good. I think there is a lot to be worked out at the city, state, and federal government levels, and important issues around the politics of APIs are growing more critical every day, but I still think the current shift by the enterprise towards APIs will be seen as a significant phase in the history of APIs when we look back.


420% Growth In DNS API Usage Over At Dyn

The folks over at Dyn, who provide traffic, message, remote access and domain services, including a suite of SOAP and REST based APIs, have released some interesting stats on their API usage.

Dyn has 500 managed DNS users and partners using their APIs, growing from 7.3 million monthly API requests in January 2012 to 38.1 million API requests in September 2013--an increase of (38.1 - 7.3) / 7.3, or roughly 420%, over 20 months.

In their data they split out who is using SOAP vs. REST APIs, with only 3.3% of their total API requests being SOAP, and almost no growth over the same time period, compared to the 420% growth in REST usage.

I don't think the data itself is particularly noteworthy; it represents what we already know about the space and see reflected in other charts. What I think is noteworthy is Dyn sharing the data. You don't see many API publishers sharing their numbers, and it's something I'd like to see more of.


API Evangelist, Healthcare.gov and Hacker Storytelling

I've been slowly evolving API Evangelist from a single site into an interconnected network of individual API projects. API Evangelist started as a research project back in July 2010, so its shift to being a network of smaller, inter-connected research projects is fitting.

While API Evangelist currently still runs on my home-brew CMS, it will shortly finish its migration to run completely on Github, making it merely a "Hollywood front" for what is currently 37+ API related, living research projects of mine.

I call my evolving approach to projects Hacker Storytelling. I made up the name, but the approach is borrowed from several other philosophies, starting with the concept of data journalism, and it has also evolved from conversations last year in Washington DC with some very smart folks, including Ben Balter (@BenBalter), Gray Brooks (@gbinal), and the very forward thinking work of the Development Seed team. Then of course I add my own style and approach to what I've learned.

As I move my own network of research projects to run on Github, using this new approach, I'm also seeing other positive signs coming out of Washington on the same front. First, the White House Open Data Policy released in May was created and published on Github, and then I just finished reading Healthcare.gov: Code Developed by the People and for the People, Released Back to the People, by Alex Howard (@digiphile). His post outlines how the United States Department of Health and Human Services (HHS) launched Healthcare.gov to support the Affordable Care Act--AKA "Obamacare". The website was built iteratively, in public, over the last couple of months, and it was done completely using Github, with a similar approach to my Hacker Storytelling. Seeing all of this really makes me hopeful for my next year in Washington.

Alex does an amazing job of telling the story behind Healthcare.gov, and I highly recommend reading his post. After reading it, I wanted to take a fresh walk through my own approach, and talk about the importance of this new way of managing my projects, which I think will change the web, how we govern, and how we conduct business.

My personal approach is derived from a need to quickly turn research into public stories, allowing people to take my work and put it to use in their worlds. Since my mission is to educate the masses of the benefits of APIs, and reach the largest audience possible, I needed a new approach that was fast, efficient and scaled--the result is Hacker Storytelling.

To manage my projects and tell my stories, I'm using a handful of building blocks:

  • Blog Posts
  • Static Pages
  • Widgets
  • Open Data
  • Presentations

The best part about these building blocks is that they only use lightweight, open formats:

  • HTML
  • CSS
  • JavaScript
  • JSON

Each project becomes an open source repository that I host at Github. Some projects start as private repositories, but if possible EVERYTHING becomes public. If you are unfamiliar with how Github and Git work, Github is a cloud service that provides version control for code. However, since code is usually just files, you can apply the same open source code process to web sites or documents you build with HTML, CSS, JavaScript and JSON.

I don't know about you, but I can build some pretty fast websites, prototype applications, and even full blown production apps in HTML, CSS, JavaScript and JSON. If you want to see the extreme version of what I'm doing, head over to Development Seed and see what they are up to. They are producing some mind blowing projects using this approach.

The really powerful thing about all of this is that it can run anywhere. You can run the same configuration of site on Amazon S3, Dropbox, or anywhere else you can set up hosting. This isn't just something for alpha geeks--look closer at the Amazon S3 example, that is the CTO of Amazon running his blog using this approach.

So why am I doing this? There are so many reasons, and to help me wrap my head further around them, I thought I would take a crack at listing as many as I could.

Decoupling
This approach to my research and storytelling has allowed me to decouple the individual pieces of my original API Evangelist work, which after three years has become very bloated. I have a lot of content and structured data about the API industry, and this approach has allowed me to decouple one very big project into 35 smaller projects, with the potential for many more in the future.

Planning
When I kick off a new project, I start the planning process with a new Github repository and a fresh README file. Then, using native Github features, I can make the project public or private and invite other people to join me in the planning process. The README quickly becomes an outline, giving a backbone to my project.

Research
Once a project has been kicked off, I kick off the research, publishing all notes, bookmarks and other relevant assets to Github after each session. Pretty soon there is a wealth of knowledge located within the repository, with every step of the way versioned, allowing me to manage additions, removals and potential conflicts.

Collaboration
Github has made the process of developing open source software a social adventure. You can create repositories within individual Github accounts or underneath the umbrella of an organization. You can invite any other Github user to participate in the process, using the open source software processes built into Github. Once you make a project public, you can also add Disqus and solicit public comments, if so desired.

Versioned
Git is the core of Github. Git was developed by Linus Torvalds to help him manage the development of the open source operating system, Linux. Every document that is submitted to a Git(hub) repository is versioned, allowing you to manage changes, accept contributions and even roll back to earlier versions when necessary. Git is well suited to open source, collaborative software development, but it also works well for many other types of projects.

Storytelling
My approach, that of Healthcare.gov, and that of Development Seed all use Jekyll alongside each project deployment. Jekyll is a simple, blog-aware static site framework that runs very well on Github. Jekyll gives you a very simple, but powerful way to manage your pages as well as maintain a blog. This has changed my view of what a blog is for, making it as simple as four chronological journal entries for a single project, or powering the 800+ blog entries of API Evangelist. Jekyll was actually developed by Tom Preston-Werner, the founder of Github, but the framework is so universal it can run anywhere, such as Amazon S3 or Dropbox.
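If you have never looked at a Jekyll project, each blog entry is just a text file with a small block of front matter at the top, dropped into a _posts folder. A typical, made-up example:

    ---
    layout: post
    title: "Kicking Off My API Stack Research"
    published: true
    ---
    The body of the post is written in plain Markdown or HTML below the front matter,
    and Jekyll turns the folder of these files into the blog you see on the site.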

Transparency
While this approach is not for everyone, I enjoy making projects open by default, from birth to death. Hosting on Github, licensing openly, and allowing collaboration and public input makes for a healthier overall project. Transparency lets sunlight into the process, providing a sort of disinfectant along the way--something I feel is essential in all my work.

Living Projects
Most of the projects I embark on will be living projects, allowing me to keep them updated weekly, monthly, or as often as necessary. Because I can open up projects to collaborators and public feedback, my work can even live beyond the attention I am able to give a project. I can transfer ownership and administration of a project to someone else, or they can fork my work and take it in an entirely new direction, breathing life into my work in ways I could never imagine.

Portable
Each Github repository can be forked or downloaded as a zip file, allowing the entire project to be moved, unpacked and set up at a new location--not in hours, but often in minutes. This type of portability is essential in this crazy, cloud based world we've created for ourselves. It also allows me to easily deploy a project within the firewall of a company or government agency.

Syndication
Github possesses one of the most powerful syndication tools, which is called "forking". Any Github user can fork one of my projects and set to work making it their own--adding to it, cleaning it up and, when appropriate, making "pull requests" back to the original project, which allows me to potentially accept their work back into the central copy. Once I add common social sharing tools, and you consider the native social features built into Github, this approach offers unlimited potential for syndication.

Analytics
Each project I fire up gets Google Analytics added, allowing me to track all traffic and usage of my projects. Beyond the page views, visits and other common metrics, Github gives me a whole other layer of metrics for tracking favorites, forks, downloads, commits and other vital data about how projects are doing.

Common Formats
Every project I build uses HTML, CSS, JavaScript and JSON. All of these common formats can be opened by simple text editors and do not require any proprietary software to create, access or edit. HTML and CSS are very accessible to many, and depending on how tech savvy you are, JavaScript and JSON are pretty easy to wield, with a little training.

Open By Default
Everything is open by default. Public availability, collaboration, open formats and open licenses go a long way in setting the right tone for a project. Open by default takes away a lot of stress for me, and opens projects up for the widest possible collaboration, re-use, distribution and, ultimately, attribution to me and my work.

Machine Readable By Default
All data is stored in simple, lightweight JSON files. Every listing, chart and graph within a project has a JSON data source. The entire contents of a project can have a simple JSON manifest, allowing programmatic indexing of a project's content and data sources. Machine readable by default, using JSON, has changed the way I look at data management.
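As an example, the manifest for a project can be as simple as a single JSON file that points at everything else; the file names here are hypothetical:

    {
      "title": "API Stack Research",
      "blog": "_posts/",
      "data": [
        { "name": "Companies", "source": "data/companies.json" },
        { "name": "APIs", "source": "data/apis.json" },
        { "name": "Tools", "source": "data/tools.json" }
      ]
    }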

Scalable
I do not have to scale the infrastructure for any of my projects. I've run IT infrastructure for years and am very capable of doing this for myself, but I don't have the time. All projects are automatically scaled as needed, to meet not just my projects' demands, but everyone's on the platform. For me, the backend has been reduced to my internal systems and a handful of APIs. Everything that is public is automatically scaled in the clouds.

Speed
The use of simple, lightweight open formats like HTML, CSS, JavaScript and JSON, plus all the benefits of the Github platform, equals a pretty sweet opportunity for fast loading web pages. Everything is easily cached, providing for very fast page loads in addition to scalability. If projects are hosted on Amazon S3, there are additional opportunities for caching and distribution of content to regions around the globe using CloudFront.

Security
Along with the need to run back-end infrastructure, much of the concern with securing sites and applications goes away. I'm pretty confident that Github, Amazon and Dropbox have fairly decent security teams, and with all projects being static sites and apps, using open formats, much of the opportunity for exploitation has been removed.

Low Cost
Github repositories are free if they are public--another incentive for being open by default. Even if you pay for repositories with Github, the costs are a few dollars each month. Amazon and Dropbox are both extremely affordable, further evolving past models for web hosting or rolling out costly infrastructure for projects. The cloud has enabled entirely new approaches to deploying web sites and applications, as well as content and data oriented projects.

Lifecycle
The entire life cycle of my projects has changed for the positive. I can start new projects on a whim--fire up a new repository, generate an outline during the planning stages, invite all participants, and have a public site up in minutes. Some projects I work on for hours each week, others I give just minutes a month, to make sure they have my latest research published. I can easily walk away from projects, passing the torch and potentially keeping a project alive. Projects can be forked, downloaded and evolved, adding layers to the lifecycle that are totally out of my control. The potential for my research and stories to reach a larger audience has grown significantly, extending both the reach and the life of my work.

Web Literacy
When this approach is employed, each individual involved receives a healthy dose of web literacy, introducing them to essential building blocks of our growing digital world, like DNS, HTML, CSS, JSON, Git and more. The portability of this approach allows you to truly own your projects, enabling you to deploy them wherever you choose. Web literacy is critical in this day and age, for everyone.

Empowering 
For me, Hacker Storytelling has empowered me to do more research, tell more stories and reach a wider audience, and I've only been doing it for six months. At first, all of this can seem daunting to learn, but once you get a grasp of all the building blocks at play, it can be very empowering. It is something non-developers can employ to solve the problems they face every day, in a way that encourages collaboration and even programmatic integration with other systems or projects. It has the potential to empower each of us to innovate and work together in new ways.

Conclusion
Hacker Storytelling is my version of this new way to build sites and apps. Development Seed and HHS are developing their own approaches as well. While there are a lot of common building blocks, each individual or organization can develop their own style and set of tools and building blocks that work best for them.

Those are 21 reasons I'm moving my projects to this new approach to publishing sites and applications on the Internet. I'm choosing to do this because it makes me more efficient at my research and storytelling, which is essential to my career.

I'm hoping to share my approach with as many people as I can. I'm watching my girlfriend Audrey discover how easy it is to set up new projects and publish her work there, setting everything I listed above into motion for her world, and developing her own approach.

I don't think this methodology is for everyone, but if you are interested, I'm happy to share. I will be adding more widgets and tools to my Hacker Storytelling project, while also pointing you to other similar implementations like Healthcare.gov, and people who are innovating with this approach, like Development Seed.


The Resource Stack

I've been organizing much of my research around APIs into groupings that I call "stacks". The term allows me to loosely bundle common API resources into meaningful "stacks" for my readers to learn about.

I'm adding a new project to my list of 30+ stacks, that is intended to bring together the most commonly used API resources, into a single, meaningful stack of resources any web or mobile developer can quickly put to use.

So far I have compiled the following APIs in 29 separate groups:

  • Compute
    • Amazon EC2
    • Google AppEngine
    • Heroku
  • Storage
    • Amazon S3
    • Dropbox
    • Rackspace Cloud Files
  • Database
    • Amazon RDS
    • Amazon SimpleDB
  • DNS
    • Amazon Route 53
    • Rackspace Cloud DNS
    • DNS Made Easy
    • DNSimple
  • Email
    • SendGrid
    • Amazon SES
    • Rackspace Email
  • SMS
    • Twilio
    • AT&T SMS
  • MMS
    • Mogreet
    • AT&T SMS
  • Push Notifications
    • Urban Airship
    • AT&T SMS
  • Chat
    • Skype
    • Facebook Chat
    • Google Talk
  • Social
    • Twitter
    • Facebook
    • Google+
    • LinkedIn
  • Location
    • Google Directions
    • Google Distance Matrix
    • Google Geocoding
    • Google Latitude
    • Geoloqi
  • Photos
    • Flickr
    • Facebook
    • Instagram
  • Documents
    • Box
    • Google Drive
  • Videos
    • YouTube
    • Flickr
    • Facebook
    • Viddler
    • Vimeo
    • Instagram
  • Audio
    • SoundCloud
    • Mixcloud
  • Music
    • Echo Nest
    • Rdio
    • Mixcloud
  • Notes
    • Evernote
  • Bookmarks
    • Delicious
    • Pinboard
  • Blog
    • Wordpress
    • Blogger
    • Tumblr
  • Content
    • ConvertAPI
    • AlchemyAPI
  • Contacts
    • Google
    • Facebook
    • LinkedIn
    • FullContact
  • Businesses / Places
    • Factual
    • Google Places
  • Checkins
    • Foursquare
    • Facebook
  • Calendar
    • Google
  • Payments
    • Dwolla
    • Stripe
    • Braintree
    • Paypal
    • Google Payments
  • Analytics
    • Google
    • Mixpanel
  • Advertising
    • Adsense
    • Adwords
    • Facebook
    • Twitter
    • AdMob
    • MobClix
    • InMobi
  • Real-time
    • Google Real-time
    • Firebase
    • Pusher
  • URL Shortener
    • Bit.ly
    • Google URL Shortener

This is just a start. I will publish a full stack, complete with logos, descriptions and links. For now I'm just fleshing out my thoughts regarding some of the top resources that are currently available to developers.

I will be making another pass over the APIs I track on in the coming weeks, as well as adding to the list each week as part of my monitoring.

If you see anything missing, that should be in there...let me know!


I Like Individually Priced API Resources That Flex and Scale

On a regular basis I review my API consumption to evaluate how I’m using various APIs, and what I’m paying for them. I depend on around 20 APIs to make API Evangelist work, and I need to make sure I’m using them to their fullest potential while also being mindful of budget.

As a part of my regular review, I am looking at the differences in pricing between three key services:

  • FullContact API - I use FullContact for all my company and individual contact intelligence. I go through phases of light or heavy use depending on the research projects I have going on. FullContact provides per API call rates depending on the endpoint and call volume, and limits me to four packages: Trying It Out (Free), Getting Started ($19/month), Gaining Traction ($99/month) and Rolling ($499/month)
  • Alchemy API - I use Alchemy API primarily to pull text content from blog posts, so I can use it internally for indexing. With Alchemy I get access to three packages: Free, Small Business ($250.00/month) and Basic ($800.00/month)
  • AWS APIs - I use AWS for all my compute, storage, database and DNS API services. With AWS I pay by the resource, bandwidth transfer, storage, and the other parts and pieces of specific actions in cloud computing. There are no packages, just modular API resources I use and get billed for.

I’m just one use case, but figured I’d share my thoughts on how I use these three API resources within my little world.

For FullContact, the jump from $19/month to $99/month isn't too bad. I'll lump all my processing together into a single month and get lots of work done, so I tend to toggle each month between these two tiers based upon my needs.

For Alchemy API, I operate within the free tier and stick with the rate limit of 1,000 API calls per day. If a blog post doesn't get pulled because I hit my limit, I queue it for another day when I have room within my daily limit. The jump from zero to $250.00/month is really just too big of a jump for me to make.

My Amazon Web Services bill runs between $250.00 and $1,000.00/month, depending on how much traffic, harvesting, processing and other crazy stuff I'm doing. I have a buffet of compute, storage, database, IP address, monitoring, DNS and other cloud computing modules that I have developed and associated with pricing in my head, so that I can make decisions in the moment about whether I can afford to process a bunch of data, launch a new API or website, etc.
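To make that mental math concrete, it is really nothing more than multiplying unit prices by expected usage; the numbers in this sketch are made up for illustration, not actual AWS rates:

    # Back-of-the-envelope check before kicking off a big harvesting or processing job.
    # All unit prices are hypothetical placeholders, not real AWS pricing.
    UNIT_PRICES = {
        "ec2_instance_hour": 0.10,  # dollars per server hour
        "s3_gb_month": 0.03,        # dollars per GB stored
        "bandwidth_gb": 0.09,       # dollars per GB transferred out
    }

    def estimate(usage):
        """Multiply expected usage by its unit price and total everything up."""
        return sum(UNIT_PRICES[item] * amount for item, amount in usage.items())

    # Example: a week-long job on two servers, 50 GB of new storage, 20 GB of transfer.
    job = {"ec2_instance_hour": 2 * 24 * 7, "s3_gb_month": 50, "bandwidth_gb": 20}
    print("Estimated cost: $%.2f" % estimate(job))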

I have a few good friends in the API space who feel API rate limits stimulate creativity, innovation and work-arounds. I agree with that statement, and think every API provider has to understand its own consumers and make the decision they feel is best. But in the end, I personally like single API resource pricing based upon units that I can scale infinitely, as needed, in any moment. I am more likely to integrate services into my world when they are independent, bite size chunks, with pricing not restricted by other services or service tiers. Each module tends to have a different value in my world, and I like to make independent decisions about how to use just that resource, disconnected from all the other resources.

The world starts to look different when I depend on 10-20 APIs vs. 1-2 APIs, and I can imagine that when I reach the point where I'm depending on 100-200 APIs, I will have an even greater need for API resources to be priced independently of other services or limiting pricing tiers.


Who Runs The Internet?

I came across a great infographic from ICANN titled Who Runs the Internet? I wanted to brush up on my own knowledge about all the key stakeholders in the Internet, so I typed up some of the text from the infographic, for my own benefit, as well as to make it a little more interactive.

Who Runs The Internet?

No One Person, Company, Organization or Government Runs the Internet

The Internet itself is a globally distributed computer network comprised of many voluntarily interconnected autonomous networks. Similarly, its governance is conducted by a decentralized and international multi-stakeholder network of interconnected autonomous groups drawing from civil society, the private sector, governments, academic and research communities, and national and international organizations. They work cooperatively from their respective roles to create shared policies and standards that maintain the Internet's global interoperability for the public good.

Here is how it works:

  • Operations & Services - Internet Operations span all aspects of the hardware, software, and infrastructure required to make the Internet work. Services include education, access, web browsing, online commerce, social networking, etc.
  • Policies & Standards - Internet Policies are the shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet. Internet Standards enable interoperability of systems on the Internet by defining protocols, message formats, schemas, and languages
  • Open Debate - The formal and informal processes of debating policy and standard propositions in a multi-stakeholder model, using any variety of methods: in-person, Internet Drafts, public forums, publishing and many more
  • Multi-Stakeholders - Civil Society & Internet Users, the Private Sector, Governments, National & International Organizations, and the Research, Academic and Technical communities all have a say in how the Internet is run

Who is Involved:

  • IAB - Internet Architecture Board - Oversees the technical and engineering development of the IETF and IRTF
  • ICANN - Internet Corporation for Assigned Names and Numbers - Coordinates the Internet's systems of unique identifiers: IP addresses, protocol-parameter registries, top-level domain space (DNS root zone)
  • IETF - Internet Engineering Task Force - Develops and promotes a wide range of Internet standards dealing in particular with standards of the Internet protocol suite. Their technical documents influence the way people design, use and manage the Internet
  • IGF - Internet Governance Forum - A multi-stakeholder open forum for debate on issues related to Internet governance
  • IRTF - Internet Research Task Force - Promotes research on the evolution of the Internet by creating focused, long-term research groups working on topics related to Internet protocols, applications, architecture and technology
  • Governments and Inter-Governmental Organizations - Develop laws, regulations and policies applicable to the Internet within their jurisdictions; participate in multilateral and multi-stakeholder regional and international fora on Internet governance
  • ISO 3166 MA - International Organization for Standardization, Maintenance Agency - Defines names and postal codes of countries, dependent territories, and special areas of geographic significance
  • ISOC - Internet Society - Assures the open development, evolution and use of the Internet for the benefit of all people throughout the world. Currently ISOC has over 90 chapters in around 80 countries
  • RIRs - 5 Regional Internet Registries - Manage the allocation and registration of Internet number resources, such as IP addresses, within geographic regions of the world - Africa - http://afrinic.net, Asia Pacific - http://apnic.net, Canada & United States - http://arin.net, Latin America & Caribbean - http://lacnic.net, Europe, the Middle East & parts of Central Asia - http://ripe.net
  • W3C - World Wide Web Consortium - Creates standards for the world wide web that enable an Open Web Platform, for example, by focusing on issues of accessibility, internationalization, and mobile web solutions
  • Internet Network Operators Groups - Discuss and influence matters related to Internet operations and regulation within informal fora made up of Internet Service Providers (ISPs), Internet Exchange Points (IXPs) and others


Netflix API Is Much More Than A Public API

Netflix entered the final stages of shuttering its public API last week. It's been coming for a while now, starting in June of 2012, and it is now official, with the platform no longer accepting new API registrations.

After reading about the changes to the Netflix Public API program on their blog, and hearing much of the news in response, everyone seems to file this away, along with the Twitter API--just another API platform screwing over its developers.

As I do, I wanted to take a step back, look at the bigger picture, and try to understand what happened. On October 1st, 2008, Netflix launched their public API, and they appear to have done everything right. They had a blog, solicited code samples from developers, accepted application submissions, and even showcased developers' apps in a gallery. Netflix would even help promote your app to Netflix subscribers, and they threw hackathons. The Netflix API team worked to improve API performance and communicate regularly, but really nothing that amazing happened.

There were applications like InstaWatcher and WhichFlicks (among others) developed on the API, but as Daniel Jacobson puts it, a thousand flowers didn't bloom. In these situations it's easy to blame the API provider, but developers didn't really step up and build anything that innovative and cool. So is this a failure of Netflix? A failure of developers to innovate? Or could it possibly be a third thing: a failure of the API vision?

I would say the demise of the Netflix public API is equal parts Netflix, the developers, and the nature of the industry it exists in. It didn't take me long to look through the Netflix API blog, so I can tell they didn't put a lot into evangelizing the API. But I really can't find any innovation that occurred by developers as part of it either, so I think us devs have to share some of the responsibility as well.

Several of the blog posts covering the news last week compared this to Twitter, which, for the untrained eye of the mainstream tech blogosphere, is an easy comparison to make. But Twitter is user generated content via one of the newest types of content platforms, while Netflix is heavily licensed and policed content from one of the oldest content platforms. I think expecting public API success from Netflix and / or developers was a lot to ask.

I love and believe in APIs, but I'm not delusional enough to think they will work magically everywhere they are applied. However, even with the closing of the public Netflix API, I consider Netflix an API success story. Look what they've done with their internal and partner APIs. They've managed to scale not just from the data center to the cloud, but globally and across 800+ devices--while also sharing this knowledge and wisdom with the public via their blog.

If that wasn't enough, they are also open sourcing much of the technology behind their approach:

  • eureka - AWS Service registry for resilient mid-tier load balancing and failover
  • RxJava - a library for composing asynchronous and event-based programs using observable sequences for the Java VM
  • Governator - A library of extensions and utilities that enhance Google Guice to provide: classpath scanning and automatic binding, lifecycle management, configuration to field mapping, field validation and parallelized object warmup
  • Priam - Co-Process for backup/recovery, Token Management, and Centralized Configuration management for Cassandra
  • edda - Service to track changes in your cloud
  • recipes-rss - RSS Reader Recipes that uses several of the Netflix OSS components
  • astyanax - Cassandra Java Client
  • karyon - The nucleus or the base container for Applications and Services built using the NetflixOSS ecosystem
  • netflix-graph - Compact in-memory representation of directed graph data
  • asgard - Web interface for application deployments and cloud management in Amazon Web Services (AWS)
  • Hystrix - Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable
  • servo - Netflix Application Monitoring Library
  • frigga - Utilities for working with Asgard named objects

When measuring the success or failure of API initiatives, we can't use the same yardstick in all scenarios. When you look at the knowledge, wisdom, and code that has come out of Netflix, there is no way you can say their API initiative is anything but a success. I don't see Netflix as a case study in how to stream movies over the web via public APIs, but as a deeply important experiment in how to deliver licensed content to over 800 devices via the next generation of APIs. That probably isn't an edge case; it actually represents where we all might be headed in the near future.

Let's not get caught up in the recent deprecation of the Netflix public API. There is so much going on! Let's get to studying some of the knowledge and technology coming out of Netflix. I know it's my motivation for writing this post and doing this research.


MySQL, PostgreSQL and RDS to API With Emergent One

There are numerous companies with existing IT infrastructure that are looking to deploy APIs in 2013. These companies will be deploying APIs using their existing technology teams, or depending on one of the 17 API management service providers available.

This market is ripe for the 3Scales of the world to provide valuable services to, but for many companies, organizations, and government agencies that need to deploy APIs this year, API deployment will be about taking an existing database (or multiple databases) and opening it up to the public, partners, 3rd party developers, or possibly just a remote department or branch of the company, in the easiest way possible.

Not all companies will have the resources, or the need, to deploy full-blown API programs. They just need a dead simple database-to-API solution that will quickly expose their data over the web in a secure way. Until recently this solution wasn't available in the cloud, but a new API service provider called Emergent One has stepped up to fill the gap.

Emergent One is a cloud service that allows you to connect to your company's MySQL, PostgreSQL or Amazon RDS databases, then generate a REST API from your existing data stores.

Using Emergent One you can define a new API, connect to your database using an agent or direct connection, then define your API resources complete with metadata, sub-resources, in-line resources, fields, and computed fields. That is pretty much everything you will need to make a clean web API from a database.
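To make the pattern concrete, here is a rough sketch of the general database-to-API idea in Python, using Flask and SQLAlchemy rather than Emergent One's actual product; the connection string and the products table are purely hypothetical:

    # A minimal sketch of the database-to-API pattern, NOT Emergent One's product.
    # Assumes Flask, SQLAlchemy, and PyMySQL are installed; the connection string
    # and the "products" table are placeholders.
    from flask import Flask, jsonify
    from sqlalchemy import create_engine, text

    app = Flask(__name__)
    engine = create_engine("mysql+pymysql://user:password@localhost/shop")

    @app.route("/products")
    def list_products():
        # Expose a single table as a read-only JSON resource
        with engine.connect() as conn:
            rows = conn.execute(text("SELECT id, name FROM products LIMIT 100"))
            return jsonify([dict(row) for row in rows.mappings()])

    if __name__ == "__main__":
        app.run()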

Once you have your API resources defined, Emergent One provides you with a developer portal around these APIs, and the ability to provide developer registration, then issue keys they can use to access your data--providing the openness you desire, while keeping things secure and within your control.

The developer portal is complete with documentation, an explorer console, and code libraries for iOS, Android, and Ruby. Emergent One allows you to bind your own DNS to your endpoints, and also lets you provide paid plans, complete with billing management.

Emergent One is a perfect service for any company looking to develop and manage APIs from their local databases. The only thing I would say is missing is support tooling to help you interact with your developers after you launch your API. But I'm sure it's coming.

I'm happy to see the Emergent One platform has a freemium tier giving you up to 5,000 API requests a month, a business tier for up to 2.5M requests a month, as well as an enterprise tier with sales support for all levels of needs. The freemium tier will be critical for letting businesses play with the platform and get to know it better.

The Emergent One admin needs a little polishing (it could use a more dashboard-like feel for the home page), but overall the Emergent One team nails it, providing dead simple yet robust API deployment from your MySQL, PostgreSQL, and Amazon RDS databases in a way that anyone can put to use.

If your company is looking to deploy an API using a database and doesn't have the resources internally to make it happen, I definitely recommend taking a look at Emergent One.


The APIs That I Depend On For My Business

I maintain an active list of the online services I depend on for my business in Evernote. Each month I spend an hour or two maintaining this list, making sure it is complete and changing my logins when appropriate. As a recovering IT guy who maintains infrastructure for myself, but also for Audrey Watters, I keep good tabs on the various services I use.

While going through my services list this month, I added a new section to it and started tracking whether I depend on each service for its API. I have enough automated jobs running on top of APIs that I needed to make sure I keep good track of which APIs I depend on. Here are some of the APIs I depend on to keep my business operational.

First I depend on a couple of key Google APIs:

  • Gmail - Integrate my daily emails, as well as email blasts, with my administrative system
  • Google Contacts - Keep business and individual profiles in my admin system in sync with my daily Google Contacts activity
  • Google Calendar - Publish hackathon calendars to Google Calendar, as well as keep conferences, meetups, and other events I pull through APIs and curate in sync
  • Google Docs - Publish copies of blog posts to Google Docs, as well as versions of pages from my content management system
  • Google Sites - All of my research is in Google Sites, so I tend to publish lists of curated news, blog posts, and other research to wiki pages under specific projects

Next, I would say Amazon Web Services delivers some pretty critical APIs I can't live without:

  • Amazon EC2 - I deploy and shut down various EC2 instances for the jobs I run for API Evangelist. All my APIs are managed on AWS EC2
  • Amazon S3 - All heavy objects in my systems are stored at Amazon S3, including photos, PDFs, presentations, and video
  • Amazon Route 53 - I use AWS Route 53 to manage the underlying DNS for all my applications and sites across multiple domains

Then there are an assortment of other APIs I use throughout my web sites and applications:

  • AlchemyAPI - I use Alchemy for content, keyword, and author extraction on the articles and site pages I curate as part of my daily routine
  • Crunchbase - I pull company profiles from Crunchbase and use them in my research and profiling for API Evangelist
  • EventBrite - I pull hackathons, meetups, and conferences from EventBrite and use them in my admin system
  • Evernote - I do all my note taking and recording of thoughts in Evernote, and there are some folders I keep in sync with my admin system
  • Flickr - I've historically published a lot of public images to Flickr for SEO purposes, so images and video from many of my blog posts and events get stored at Flickr via the API in my admin system
  • Foursquare - I use Foursquare as a journal, pulling the timeline into my admin system and applying it as a framework to my writing and traveling
  • Github - All my stories use Gists to display code, and some of my larger productions have full repositories that I access via the command line and via the API
  • Paypal - I handle subscriptions and white paper purchases via Paypal
  • Pinboard - All my curation runs through Pinboard. Anything I bookmark while reading feeds or on the open web gets bookmarked with Pinboard, then pulled into my admin system with the API (see the sketch after this list)
  • ProgrammableWeb - I use ProgrammableWeb's API to pull new APIs into my curation system
  • Stack Exchange - I use Stack Exchange to monitor API activity on the forums and keep track of discussion counts for various APIs
  • Tumblr - I assemble some curated posts and summaries and publish them to Tumblr via the API
  • Twitter - Twitter is central to my API monitoring, ranking, and curation system. I depend on the REST and Streaming APIs
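As an example of how one of these dependencies actually gets used, here is a rough sketch of the Pinboard pull, assuming Pinboard's v1 API, the Python requests library, and a placeholder auth token:

    # Rough sketch of the Pinboard pull referenced in the list above; assumes the
    # requests library and Pinboard's v1 API, with a placeholder auth token.
    import requests

    PINBOARD_TOKEN = "username:XXXXXXXXXXXXXXXX"  # placeholder credentials

    def recent_bookmarks(count=25):
        """Pull recent Pinboard bookmarks for loading into an admin system."""
        resp = requests.get(
            "https://api.pinboard.in/v1/posts/recent",
            params={"auth_token": PINBOARD_TOKEN, "format": "json", "count": count},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("posts", [])

    for post in recent_bookmarks():
        print(post["href"], post["description"])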

I depend on these APIs to run API Evangelist, API Voice and The API Stack as well as support the other research and consulting that I do. Some of these services I pay for, some of them I use for free. Usually if a service sticks around in my world for more than 3 or 4 months I pay for some sort of premium account or access.

I'm sure I depend on a lot of APIs partly because this is my game; I'm an API Evangelist. But it is also because APIs provide me with the data and resources I need to operate, and as a programmer I'm able to quickly put APIs to use for my business.

Tracking the APIs I depend on will be a regular part of my IT strategy, and I'm even going to publish this as a public page on my websites--showcasing what APIs have done for my business.


Helping Voters Register with the Cost of Freedom Project

Last week during the Hackathon for Social Good in New York City, I was fortunate enough to be connected with Faye Anderson (@andersonatlarge) of the Cost of Freedom Project. The Hackathon for Social Good was put on by WebVisions, using the hackathon model to further projects that are making a social impact in our lives.

The Cost of Freedom Project is centered around providing the information and resources U.S. citizens need to be able to vote in the 2012 elections, primarily targeting the 5 states that have strict laws requiring voters to show a government-issued photo ID in order to vote.

When it comes to making a social impact, Faye's project is a shining example, and I couldn't ignore her need for a hacker to move her project forward. After hearing her pitch, I joined her project team, which included Lori Widelitz-Cavallucci (@lwcavallucci), a UX designer, and Jack Aboutboul (@jackfoundation), a developer evangelist from Twilio.

As Lori and Faye got to work on the site layout and user experience I started setting up the back-end that would be necessary to run the app:

  • Amazon EC2 instance running Fedora Linux, Apache Web Server, and PHP 5.3
  • Twitter Bootstrap
  • DNS for Domain Setup

By the end of the hackathon we had a site layout, with all pages set up with initial content. All the site content is editable from a Google spreadsheet, allowing Faye to maintain control over her content and crowdsource its management using the spreadsheet interface.

The site uses CityGrid to pull vital record offices by state, county voter registration and local DMV offices when a user enters their city and zip code.

The Cost of Freedom Project is a great example of what you can pull together at a hackathon, but also of the wide range of apps you can build using CityGrid data. Sites do not have to be local directories; CityGrid Places data can be used to build informational sites that add value to almost any process.


Tumblr Releases API v2

Tumblr just released version 2.0 of its blogging platform API, in an effort to make developers' lives a little easier when integrating with the Tumblr platform.

The previous version of the API made distinctions between read and write operations and pushed different activity to two separate domains: www.tumblr.com and each blog's subdomain.

The new API version consolidates all API access to api.tumblr.com and exposes two major resources in the URI: /blog and /user. Consolidation under one domain will allow Tumblr to effectively measure and balance traffic using DNS.

The new URLs will follow a pattern, making them intuitive, allowing developers to easily discover and experiment with the API without having to rely exclusively on documentation.

Instead of adhering to strict RESTful practices, Tumblr is looking to create simple URLs that enable manipulation by the average human.

For example, to pull the avatar of my tumblr blog I use the following URL: http://api.tumblr.com/v2/blog/kinlane.tumblr.com/avatar

The API returns the default avatar image, and if I want a larger size, I just append the size to the URL: http://api.tumblr.com/v2/blog/kinlane.tumblr.com/avatar/512
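A quick sketch with the Python requests library shows how simple the pattern is; the blog name and size come straight from the URLs above:

    # Fetch a blog avatar using the Tumblr v2 URL pattern shown above; the size
    # segment is optional, and the API redirects to the raw image.
    import requests

    def tumblr_avatar(blog="kinlane.tumblr.com", size=None):
        url = f"http://api.tumblr.com/v2/blog/{blog}/avatar"
        if size:
            url += f"/{size}"
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.content  # raw image bytes after the redirect

    with open("avatar.png", "wb") as f:
        f.write(tumblr_avatar(size=512))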

In the first version of the Tumblr API, all data was available in XML, with JSON support added largely as an afterthought. Now Tumblr has decided to eliminate XML and focus on dialing in their JSON implementation, a common approach for API owners.

The new API implements OAuth 1.0a for authentication, and Tumblr is looking at upgrading to OAuth 2.0 in the near future.

I like how Tumblr has consolidated under a single domain, and I favor their approach to intuitive URLs over a strict RESTful implementation.

The Tumblr API v2 is definitely an improvement; I will see how it sizes up against the new Posterous API launched last month.


Where Cloud Computing Growth Will Slow

Cloud computing holds the promise of infinite scalability and capacity. You just pay for what you use, and whatever capacity you need is there.

This holds true on the server and back-end side of the cloud. Where this promise will begin to break down is on the front-end of the network.

The faces of the front-end of the cloud are companies like Comcast, AT&T, and Verizon. These Internet and mobile providers give everyone their connectivity to the cloud.

These providers are increasingly capping, slowing and filtering connectivity to the Internet in an effort to control and squeeze more profits out of their networks.

This approach is at odds with the vision and promise of cloud computing, and at some point cloud computing growth will slow because of this.

In 2008 I worked hard to redefine the back-end for SAP's North American events to scale on the Amazon cloud. I set up everything to run in the cloud, from the SQL Server database, web application servers, version control, development, email, and file storage to the DNS.

I could scale every component in the delivery chain to support large events such as SAPPHIRE. I had plans for horizontally scaling web servers and email, and for vertically scaling the databases as needed.

The cloud delivers on its promise!

That is, until we got on-site at the Orlando Convention Center, Moscone, or any other event venue. Because of the existing relationships between venues and Internet service providers, the Internet, well, sucks. It's no secret. It does at any event space.

So it didn't matter how much capacity, redundancy, or scalability I had in the cloud. It all went out the window because of the client side of my network.

The Internet and mobile providers will continue to work harder at controlling, limiting, and monetizing their networks than at rolling out the capacity necessary to scale along with cloud computing.

I've been one of the biggest fans of cloud computing. I pushed for deploying AWS at SAP events against the recommendation of SAP IT in Germany. I have fought many entrenched IT folks when it comes to cloud computing. I just believe in the cloud.

Except the vision and promise will fall short in the hands of the cable and telecom companies who control the Internet connectivity in our country.


API Tools & Service Providers

When I review a new API, I take a look at the technology they provide, but I tend to focus on the business of their API.

One area I look at is what tools they use to deliver the building blocks that make up their API. Did they build it themselves? Did they use Mashery, Apigee, or 3Scale? Are they using other open-source tools or a software-as-a-service (SaaS) provider?

I'm always on the lookout for new open-source tools or service providers that can be leveraged for APIs in this way. My goal is to find an open-source solution or a service provider for each one of the building blocks I have defined. I like to have an open-source tool available for each building block, but having a solid company that can deliver specialized services for an API, and have them up and running instantly, is even better.

Some characteristics I look for in a service provider are:
  • Self-service - Instant registration and activation of services.
  • Branding - The ability to brand and make look like your API area.
  • Tiered Pricing - A pay as you go model allowing small companies to try out first, then grow as needed.
  • DNS - Ability to point subdomain at service, to keep with a single company domain.
  • Support - Quality support from self-service forum to someone you can talk to.
  • Focus - Delivering a high quality service in a single area, and not trying to be everything to everyone.
  • Data Portability - The ability to take ALL of your data with you when you are ready to leave is critical.
These are just a few of the key strengths I look for when evaluating API tools and service providers.

As an example, take a look at AppStores.com; they have almost everything I look for in a building block service provider. They are still working on the self-service and tiered pricing aspects, but it's at least on their roadmap.

Do you have any open-source tools or software-as-a-service providers that you use to deliver your API?


Amazon Launches DNS Web Service

Amazon added a DNS service to their stack of web services today. The service is called Amazon Route 53.

Now you can host your domain's DNS with Amazon for $1.00 per month per zone and $0.50 per million queries.

You can use Route 53 DNS for your Amazon infrastructure, as well as to manage external resources.

For my personal domains I tend to use the DNS that comes with the domain at GoDaddy. I have set up DNS services using Amazon EC2 instances, and can now stop doing this and rely on their new Route 53 DNS service.
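For reference, creating a hosted zone through the Route 53 API looks something like the following. This is a sketch using today's boto3 SDK, which was not around when this post was written; it assumes AWS credentials are already configured, and example.com is a placeholder domain:

    # Sketch of creating a Route 53 hosted zone with the boto3 SDK; assumes AWS
    # credentials are configured and "example.com" is a placeholder domain.
    import time
    import boto3

    route53 = boto3.client("route53")

    zone = route53.create_hosted_zone(
        Name="example.com",
        CallerReference=str(time.time()),  # must be unique per request
    )

    # Name servers to configure at the domain registrar
    print(zone["DelegationSet"]["NameServers"])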


Computing Essentials in the Cloud with APIs

As we continue to move computing off the workstation and servers and into the cloud, we need to recreate all the essentials of computing we are used to. APIs are how the next generation of computing essentials like email, print, DNS, and file systems will be delivered in this environment.

As we distribute and virtualize our web sites, applications, and other tools, APIs are playing an ever-increasing role in connecting these applications for us. APIs allow developers to off-load specific computing tasks to third-party providers who specialize in those areas. APIs can deliver speedier development, support widely distributed systems by communicating with XML and JSON, and provide security using standards like OAuth.

We will see more computing essentials like print and email exposed online via open APIs for integration into mobile and web applications.

APIs provide the data and functionality needed to drive the next generation of computing that will occur primarily in the clouds via mobile interfaces.


Internet Service Provider (ISP) at Amazon Web Services (AWS)

I was just refining a wiki page of the various building blocks I use at Amazon Web Services. I noticed it would make a great Internet Service Provider (ISP) package for someone who wanted to start an ISP, or it could even be used as a model for an existing ISP looking to migrate to cloud computing.

These are a few of the components I have on my list:
  • Web Server on Amazon EC2
    • Linux / Windows
    • EBS Volumes for Storage
    • Machine Images
  • Amazon S3 Central File Storage + Jungle Disk
    • Server Backup
    • Central File Storage
    • Client Cloud Storage as a Service
    • FTP Access
  • Database
    • SQL Server 2008
    • MySQL
    • Amazon RDS
    • Other (Amazon SimpleDB, Cassandra, CouchDB, etc)
  • Email
    • POP Server
    • SMTP Server
  • DNS Server
  • FTP Server
    • Web File Access
    • Central File Storage Access
  • SVN Server
    • Version Repositories
    • Client Checkout
These are just a few of the building blocks I have on my list. There are many other possibilities and configurations. You slap on an ISP Cloud Management tool like Plesk from Parallels and you can manage your network without much headache.

You are going to see many Internet Service Providers (ISPs) make the jump to the cloud because of cost and ease of deployment. There are just too many benefits to ignore the cloud.


Multi-Site, Global Load Balancing, and Traffic Management in the Clouds

I was just reviewing the cloud tools offered by Zeus. I was researching Global Server Load Balancing (Cloud Balancing) software yesterday and they were on my list. Unfortunately I used an image from their software and neglected to mention them in my post. I just spent the last couple of hours reviewing their tools, and 3 specific products stand out:
  • Multi-Site Manager - Tools for managing applications that span multiple, geographically distributed, data-centers and cloud providers.
  • Global Load Balancer - DNS solutions for managing traffic on geographic locations and data center performance and availability.
  • Traffic Manager - Manage your application traffic across distributed locations.
These are the products they offer that fall under my "cloud application" label. They also offer firewalls, load balancers, and web servers. They have locations in:
  • Cambridge, UK
  • San Mateo, CA US
  • Rome, Italy
Interesting set of products. I'm working through their case studies and white papers to learn more. I just wanted to review their company and core products, and post to my cloud computing service provider wiki.


Securing Global Content With Amazon S3 Bucket Policies

I'm spending a lot of time lately looking at more efficient and secure ways of delivering web applications and content globally. I am refining my DNS strategy using Global Server Load Balancing (GSLB), and refining my file and content management policies now that Amazon Web Services is offering bucket policies.

Geographic Specific Sub-Domains (CNAME)

With Global Server Load Balancing I can set up geographically specific sub-domains like eu.domain.com and us-west.domain.com that are bound to specific Amazon S3 buckets and Amazon CloudFront distributions.

Secure Files and Content with Bucket Policies

Now with Amazon S3 bucket policies I can restrict read or write access to buckets to specific domain names, IP addresses, or IP ranges.

Not only can I distribute my files and content to Amazon edge locations in the US, Europe, Hong Kong, and Japan, but with the addition of Amazon S3 bucket policies I can also secure specific objects, and buckets of objects, to ensure they are only delivered within acceptable geographic regions. If there are legal or regulatory considerations for videos, audio, or other file types, I can enforce policies through my Global Server Load Balancing, Amazon S3 bucket policies, and Amazon CloudFront distributions.
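As a rough sketch of what such a policy looks like, the following restricts reads on a hypothetical bucket to a single IP range; the bucket name and CIDR are illustrative, and it uses today's boto3 SDK:

    # Hedged sketch: allow GetObject on a placeholder bucket only from one IP
    # range. Assumes boto3 and AWS credentials; names and CIDR are illustrative.
    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowReadsFromOneRange",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::eu-content-bucket/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "192.0.2.0/24"}},
        }],
    }

    boto3.client("s3").put_bucket_policy(
        Bucket="eu-content-bucket",
        Policy=json.dumps(policy),
    )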

You can really get granular about where your data is distributed and what geographic regions can access your data.


Cloud Balancing with Global Server Load Balancer (GSLB)

Last year I moved the web site, database, and email services for the SAP SAPPHIRE event to the Amazon Cloud. We needed to scale our infrastructure dramatically to support the event for about 4 months out of the year. This was our second year running the conference in the Amazon Cloud, and we were able to scale up nicely to handle the traffic. There were still issues.

Global Traffic and Latency

We ran systems to support SAPPHIRE Frankfurt as well as the Virtual SAPPHIRE, so our traffic came from all over the world, with a large portion from Europe. Availability and latency for the various web applications became critical. I am researching different methods for addressing this, not only for SAPPHIRE next year but for a year-round calendar of events for SAP, Google, and other clients.

I'm researching DNS-based Global Server Load Balancing or GSLB solutions such as Dynect. GSLB allows you to distribute application load between data centers or cloud providers in any zone or global region you choose. Traffic can be routed based upon:
  • User Geography
  • Server Capacity / Availability
  • Geographic Based Laws and Regulation
Based upon a user's location, when they load my application at registration.eventdomain.com and they are located in Europe, using DNS I can route them to registration-eu.domain.com, which points to server instances I have deployed in Europe using Amazon EC2. Using Global Server Load Balancing I can:
  • Balance server traffic more evenly across my cloud infrastructure
  • Create a better user experience by routing them to servers closest to them.
  • Avoid Internet outages and bottle necks
Overall this will increase application performance and better meet business objectives around delivering web applications to users. As I'm supporting events and conferences in more locations around the world, this type of web application delivery using Infrastructure as a Service (IaaS) and Global Server Load Balancing (GSLB) is becoming critical.
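The routing decision itself is simple to picture. Here is a conceptual sketch of the geography-based lookup, with placeholder regions and hostnames, keeping in mind that a real GSLB service makes this decision at the DNS layer rather than in application code:

    # Conceptual sketch of geography-based routing; regions and hostnames are
    # placeholders, and a real GSLB makes this decision at the DNS layer.
    REGIONAL_ENDPOINTS = {
        "eu": "registration-eu.domain.com",
        "us-west": "registration-us-west.domain.com",
        "us-east": "registration.eventdomain.com",  # default / origin
    }

    def endpoint_for(user_region):
        """Return the closest regional endpoint, falling back to the default."""
        return REGIONAL_ENDPOINTS.get(user_region, REGIONAL_ENDPOINTS["us-east"])

    print(endpoint_for("eu"))  # -> registration-eu.domain.com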

Update: I am also researching Zeus as part of our Global Server Load Balancer strategy, and they were so kind as to let us use their image.


Linode - VPS Hosting

Linode is a very focused and simple Virtual Private Server (VPS) hosting company that delivers some very affordable cloud hosting solutions.

I just stumbled across this company, yet they are a 7-year-old "cloud" platform with a pretty impressive set of features out of the box:
  • Multiple Linux distributions
  • Create Configuration Profiles which associate disk images and device nodes
  • Boot between configuration profiles
  • Share disk images between configuration profiles
  • Resize disk images
  • Network and CPU usage graphs
  • Multiple IP address support
  • Managed/hosted DNS service with slave support
  • Custom reverse DNS (rdns)
  • Out-of-band console access using Lish
  • Lish menu system to issue jobs to your Linode
  • Lish access via SSH keys
  • Support for booting into single user mode, init=/bin/bash
  • Support for booting with a custom "root=" kernel parameter
  • Support for booting with an initrd
  • Bootable recovery distribution (Finnix)
  • Add and remove extra resources to and from your Linode
  • Shutdown Watchdog will automatically reboot your Linode in case of a crash
  • Clone a Linode to another
  • Move IPs from one Linode to another
  • IP address fail over support for high availability setups
To make them even more competitive they provide data centers in:
  • London, UK
  • Newark, NJ, USA
  • Atlanta, GA, USA
  • Dallas, TX, USA
  • Fremont, CA, USA
They look like a pretty scrappy cloud service provider offering computing and storage services. In the Jack of all Clouds State of the Cloud report, they are shown as one of the fastest growing cloud service providers.

With their pricing model, simple dashboard, and straightforward approach, they look like a sensible small business solution for migrating to the cloud.


Email Infrastructure in the Amazon Cloud

I am playing with different ways of monetizing my skills and knowledge. I wrote a white paper on how to set up your core email infrastructure in the Amazon cloud. I haven't spent a lot of time polishing it, because I am not a writer, but I have spent hundreds of hours polishing the approach.

I am offering this white paper for $25.00. I will be spending more time in the next couple of weeks syndicating this white paper, and will probably increase the rate. Let me know if you're interested.

Email is a mission-critical business operation. It needs to work, and your business needs to be able to control all aspects of your email chain. This white paper offers a step-by-step guide to setting up your business email infrastructure on Amazon Web Services.

The Amazon cloud offers lots of advantages for email, but until recently wasn't a complete solution for all your email sending and receiving needs. Now the Amazon cloud is ready for your core email infrastructure, and this guide will help walk you through it.

This guide assumes:
  • You truly want to understand all aspects of your company's email infrastructure.
  • You wish to guarantee the sending and receipt of all email through your infrastructure.
  • You want to be able to diagnose, fix, and scale your email infrastructure as needed.
  • You have a basic understanding of Amazon Web Services.
  • You have full DNS control over your email domain(s).
This guide also addresses several critical parts of your email infrastructure and how it runs in the Amazon Cloud:
  • Your Domain
  • Send Email (SMTP)
  • Receiving Email
  • Scaling and Consolidation
  • Elastic IP Addresses (eIP)
  • Domain Name System
  • Email Infrastructure Health
  • Subscribe / Unsubscribe
  • Administrative Oversight
  • Backup
  • Maintenance
  • Scaling
Additionally, I have included links to key definitions and other vital tools for building your email infrastructure in the cloud. I have also included a basic worksheet to help walk you through setting up your infrastructure.

This email-in-the-cloud setup guide is in its alpha format and offered for $25.00 at this point. Let me know if you're looking for a guide to setting up your email in the Amazon cloud, as well as scaling it. I won't keep it at this rate for long.


Email in the Amazon Cloud Part 5 - Reverse DNS

I was really excited when I got this email response on my ticket from Amazon Web Services:
We've reached out to the Amazon EC2 team and here are the next steps. In order for us to proceed, we'll need to setup DNS PTR records for EIPs (incl xxx.xxx.xxx.xxx) under your AWS account. Hence could you provide us with the names you would like to use and the corresponding EIPs (being used for email purposes)?

I believe the DNS PTR names should match the DNS 'A' records you may currently have resolving to these addresses. Provide us with the name you would like to use and we'll start the process on our end.

This was not the response I had expected. I thought either Amazon would negotiate a deal with Trend Micro MAPS to de-list their IP blocks and allow email to be sent or Trend Micro MAPS would back down from the pressure of everyone harassing them.

I really didn't think Amazon would start designating reverse DNS for their IP addresses, even though this is the response that makes the most sense. I just thought they were too large to do this.

After some discussion, and convincing them that we were deserving of reverse DNS on 10 IP addresses, I got it! I got an email telling me it was done. I did a reverse DNS lookup on all my reserved IP addresses and they reflected my core network domain.
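Spot-checking the PTR records is easy to script. Here is a small sketch using only the Python standard library, with placeholder IP addresses:

    # Spot-check reverse DNS (PTR) records; the IP addresses are placeholders.
    import socket

    ELASTIC_IPS = ["192.0.2.10", "192.0.2.11"]

    for ip in ELASTIC_IPS:
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
            print(f"{ip} -> {hostname}")
        except socket.herror:
            print(f"{ip} -> no PTR record found")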

I immediately approached Trend Micro MAPS and asked them one more time to de-list my IP addresses. They immediately responded and said it was done.

I found my faith in the cloud coming back.

Email in the Amazon Cloud - Part 1 - Part 2 - Part 3 - Part 4 - Part 5 - Part 6 - Guide to Email in the Amazon Cloud


Email in the Amazon Cloud Part 3 - Maintaining IP Addresses and DNS Quality

Sending large amounts of email and ensuring it gets received requires making sure your mail framework is set up correctly and is healthy. This takes a lot of work and maintenance.

First off, I reserve 11 IP addresses with Amazon Web Services and never let them back into the general pool.

I set up DNS for my domain to use 10 of the IP addresses as smtp1.mydomain.com through smtp10.mydomain.com, and properly set up MX records accordingly.

Now, reverse DNS on IP addresses is nice, but in my experience it is not mission critical for sending email. All last year we were fine; I sent probably 10-20 large email blasts using this setup, with no major problems with emails being received.

Before doing any emailing I check all my IP addresses against the major blacklists to see if we've been listed. We have been listed on blacklists before and were easily able to get removed with no problem.
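That blacklist check is scriptable as well. Here is a hedged sketch of the standard DNSBL lookup, with Spamhaus ZEN used as an example zone and placeholder sending IPs:

    # DNSBL check: reverse the octets of each sending IP and look it up against
    # a blacklist zone. A successful lookup means the IP is listed. The zone and
    # IPs here are examples / placeholders.
    import socket

    DNSBL_ZONE = "zen.spamhaus.org"

    def is_blacklisted(ip, zone=DNSBL_ZONE):
        reversed_ip = ".".join(reversed(ip.split(".")))
        try:
            socket.gethostbyname(f"{reversed_ip}.{zone}")
            return True   # lookup resolved: the IP is listed
        except socket.gaierror:
            return False  # NXDOMAIN: not listed

    for ip in ["192.0.2.10", "192.0.2.11"]:
        print(ip, "listed" if is_blacklisted(ip) else "clean")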

Then in November we encountered Trend Micro MAPS.

Email in the Amazon Cloud - Part 1 - Part 2 - Part 3 - Part 4 - Part 5 - Part 6 - Guide to Email in the Amazon Cloud


Amazon DNS PTR Records for Email IP Addresses

I have been going round and round with Amazon Web Services EC2 and Trend Micro MAPS for about 2 weeks now about the entire Amazon IP address block being blacklisted.

Finally got some action out of Amazon. They are going to add PTR records for all of the Amazon EC2 IP addresses that we lease.

Great news for legitimizing my usage of Amazon Web Services EC2 for our email infrastructure.

I'll write more about this later.