These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing their APIs, going beyond just monitoring to understand the details of each request and response.

15 Feb 2018
I received a tweet from my friend Kelly Taylor with USDS, asking for any information regarding establishing an “approve access to production data” process for developers. He is working on an OAuth + FHIR implementation for the Centers for Medicare and Medicaid Services (CMS) Blue Button API. Establishing a standard approach for on-boarding developers into a production environment always makes sense, as you don't want to give access to sensitive information without making sure the company, developer, and application have been thoroughly vetted.
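For context, this kind of production-access conversation usually sits on top of a standard OAuth 2.0 authorization code flow. Here is a minimal sketch of the two client-side steps; the base URL and the FHIR-style scope are illustrative assumptions, not the actual Blue Button API details:

```python
from urllib.parse import urlencode

# Hypothetical base URL -- the real CMS Blue Button endpoints may differ.
AUTH_BASE = "https://sandbox.bluebutton.example/v1/o"

def build_authorize_url(client_id, redirect_uri, state):
    """Step 1: send the user to the authorization server to approve access."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "patient/Patient.read",  # FHIR-style scope, illustrative only
        "state": state,  # CSRF protection; verify it on the callback
    }
    return AUTH_BASE + "/authorize/?" + urlencode(params)

def build_token_request(client_id, client_secret, code, redirect_uri):
    """Step 2: exchange the authorization code for an access token.
    Returns the pieces an HTTP client would POST to the token endpoint."""
    return {
        "url": AUTH_BASE + "/token/",
        "data": {
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
        },
        "auth": (client_id, client_secret),  # HTTP Basic client credentials
    }
```

The production-approval question is really about which client credentials get issued against which environment's authorization server, which is why the vetting steps below matter.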
As I do with my work, I wanted to think through some of the approaches I've come across in my research, and share some tips and best practices. The Blue Button API team has published a section regarding how to get your application approved, but I wanted to see if I could expand on it, while also sharing this information with other readers. This is a relevant use case that I see come up regularly in healthcare, financial, education, and other mainstream industries.
Virtualization & Sandbox

The application approval conversation usually begins with ALL new developers being required to work with a sandboxed set of APIs, only providing production API access to approved developers. This requires having a complete set of virtualized APIs, mimicking exactly what would be used in production, but in a much safer, protected environment. One of the most important aspects of this virtualized environment is that there also needs to be a robust set of virtualized data, providing as much parity as possible with what developers will experience when they enter the production environment. The sandbox environment needs to be as robust and reliable as production. This is a mistake I see made over and over by providers, where the sandbox isn't as reliable or as functional, and developers are never able to reach production status in a consistent and reliable way.
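One way to think about sandbox parity is to run the exact same handler and serialization code in both environments, and only swap the data source. A minimal sketch, with invented record shapes and a hypothetical production helper:

```python
import json

# Synthetic records that mirror the production schema field-for-field.
SYNTHETIC_PATIENTS = [
    {"id": "p-001", "name": "Test Patient", "plan": "A"},
]

def fetch_from_production_db():
    # Hypothetical helper: in production this would query the real datastore.
    raise NotImplementedError("production access requires an approved application")

def get_patients(environment):
    """Serve the same endpoint from either environment; only the data
    source changes, so sandbox/production parity is structural."""
    if environment == "sandbox":
        records = SYNTHETIC_PATIENTS
    else:
        records = fetch_from_production_db()
    return json.dumps({"patients": records})
```

Because both environments share one code path, any schema drift shows up in the sandbox first, instead of surprising developers at production on-boarding.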
Doing a Background Check

Next, as reflected in the Blue Button team's approach, you should be profiling the company and organization, as well as the individual behind each application. You see companies like Best Buy refusing any API signup that doesn't have an official company domain that can be verified. In addition to requiring that developers provide a thorough amount of information about who they are, and who they work for, many API providers are using background and profiling services like Clearbit to obtain more details about a user based upon their email, IP address, and company domain, enabling different types of access to API resources depending on the level of scrutiny a developer is put under. I've seen this level of scrutiny go all the way up to requiring the scanning of a driver's license, and providing corporate documents, before production access is approved.
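Even before reaching for an enrichment service like Clearbit, some of this vetting can be automated with simple checks. A sketch of the Best Buy-style rule, rejecting free-mail addresses and requiring the signup email to match the claimed company domain (the free-mail list here is illustrative, not exhaustive):

```python
# Free-mail domains to reject; list is illustrative, not exhaustive.
FREE_MAIL = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def vet_signup(email, company_domain):
    """Basic pre-screen: no free-mail addresses, and the signup email
    must belong to the company domain the developer claims."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL:
        return False
    return domain == company_domain.lower()
```

A check like this only gates the first tier of access; deeper scrutiny (enrichment lookups, document review) would gate each subsequent tier.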
Purpose of Application

One of the most common filtering approaches I've seen centers around asking developers about the purpose of their application. The more detail the better. As we've seen from companies like Twitter, the API provider holds a lot of power when it comes to deciding what types of applications will get built, and it is up to the developer to pitch the platform, and convince them that their application will serve the mission of the organization, as well as any stakeholders and end-users who will be leveraging the application. This process can really be a great filter for making sure developers think through what they are building, requiring them to put forth a coherent proposal, otherwise they will not be able to get full access to resources. This part of the process should be conducted early on in the application submission process, reducing frustration for developers if their application is denied.
Syncing The Legal Department

Also reflected in the Blue Button team's approach is the syncing of the legal aspects of operating an API platform and its applications, making sure each application's terms of service, privacy, security, cookie, branding, and other policies are in alignment with the platform's. One good way of doing this is offering a white label edition of the platform's legal documents for use by each application, doing the heavy legal work for the application developers, while also making sure they are in sync when it comes to the legal details. Providing legal development kits (LDK) will grow in prominence in the future, just like providing software development kits (SDK), helping streamline the legalities of operating a safe and secure API platform, with a wealth of applications in its service.
Live or Virtual Presentation

Beyond the initial pitch selling an API provider on the concept of an application, I've seen many providers require an in-person or virtual demo of the working application before it can be added to a production environment, and included in the application gallery. It can be tough for platform providers to test drive each application, so making the application owners do the hard work of demonstrating what an application does, and walking through all of its features, is pretty common. I've participated in several judging panels that operate quarterly application reviews, as well as reviews conducted as part of specific events, hackathons, and application challenges. Making demos a regular part of the application lifecycle is easier to do when you have dedicated resources in place, with a process to govern how it will all work in recurring batches, or on a set schedule.
Getting Into The Code

As part of the application review process, many API providers require that you actually submit your code for review via Github, providing details on ALL dependencies, and performing code, dependency, and security scans before an application can be approved. I've also seen this go as far as requiring the use of specific SDKs or frameworks, including proxies within the client layer, and requiring that all HTTP calls be logged as part of production applications. This process can be extended to include all cloud and SaaS solutions involved, limiting where compute, storage, and other resources can be operated, and requiring that all 3rd party APIs in use be approved, or already on a white list of API providers, before they can be put to use. This is obviously the most costly part of the application review process, but depending on how high the bar is being set, it is one that many providers will decide to invest in, ensuring the quality of all applications that run in a production environment.
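The "approved list of 3rd party APIs" check in particular lends itself to automation. A toy sketch that scans submitted source for outbound HTTP calls and flags any host not on the platform's allowlist (the hosts shown are illustrative):

```python
import re

# Illustrative allowlist of already-approved API providers.
APPROVED_HOSTS = {"api.stripe.com", "api.github.com"}

def unapproved_hosts(source_code):
    """Scan submitted source for outbound HTTP(S) calls and return any
    host that is not on the platform's approved-provider list."""
    hosts = set(re.findall(r"https?://([A-Za-z0-9.-]+)", source_code))
    return hosts - APPROVED_HOSTS
```

A real review pipeline would combine a check like this with dependency and security scanners, but the principle is the same: make the allowlist machine-enforceable rather than a manual checklist item.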
Regular Review & Reporting

One important thing about the application review process is that it isn't a one-time process. Even once an application is accepted and added into the production environment, this process will need to be repeated for each version release of the application, along with any changes to the API. Of course the renewal process might be shorter than the initial approval workflow, but auditing and regular check-ins should be common, and not forgotten. This touches on the client-level SDK and API management logging needs of the platform, and regular reporting on application usage and activity should be available in real time, as well as part of each application renewal. API operations is always about taking advantage of the real-time awareness introduced at the API consumption layer, and staying in tune with the healthy, and not so healthy, patterns that emerge from logging everything an application is doing.
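The reporting side of this can be as simple as rolling up API management logs by application. A sketch, assuming a made-up log line format of 'app_id method path status':

```python
from collections import Counter

def usage_report(log_lines):
    """Aggregate per-application usage from gateway logs. Assumes a
    made-up line format: 'app_id method path status'."""
    calls, errors = Counter(), Counter()
    for line in log_lines:
        app_id, _method, _path, status = line.split()
        calls[app_id] += 1
        if status.startswith(("4", "5")):  # client + server errors
            errors[app_id] += 1
    return {
        app: {"calls": calls[app], "errors": errors[app]}
        for app in calls
    }
```

Feeding a rollup like this into each renewal review gives the approval team hard numbers instead of relying on what developers self-report.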
Business Model

It is common to ask application developers about their business model. The absence of a business model almost always reflects the underlying exploitation and sale of data being accessed or generated as part of an application's operation. Asking developers how they will make money and sustain their operations, along with regular check-ins to make sure it is truly in effect, is an easy way to ensure that applications are protecting the interests of the platform, its partners, and the applications' end-users.
There are many other approaches I've seen API providers require before accepting an application into production. However, I think we should also be working hard to keep the process simple and meaningful. Of course we want a high bar for quality, but as with everything in the API world, there will always be compromises in how we deliver on the ground. Depending on the industry you are operating in, the bar will be set higher, or possibly lowered a little to allow for more innovation. I've included a list of some of the application review processes I found across my research, showing a wide range of approaches across API providers we are all familiar with. Hopefully that helps you think through the application review process a little more. It is something I'll write about again in the future as I push forward my research, and distill down more of the common building blocks I'm seeing across the API landscape.
Some Leading Application Review Processes
Supporting SDKs in a variety of programming languages can be difficult for some API providers. Luckily there is tooling available that helps auto-generate SDKs from API definitions, helping make the SDK part of the conversation a little smoother. Of course, it depends on the scope and complexity of your APIs, but increasingly auto-generated SDKs and code as part of a CI/CD process are becoming the normal way of getting things done, whether you are just making them available to your API consumers, or you are actually doing the consuming yourself.
- Swagger Codegen - The leading open source effort for generating SDKs from OpenAPI.
- APIMATIC - The leading service for generating SDKs from OpenAPI, and for including SDK generation as part of existing CI/CD efforts.
- RESTUnited - The easiest way to generate SDKs (REST API libraries): PHP, Python, Ruby, ActionScript (Flash), C#, Android, Objective-C, Scala, Java
Depending on your versioning and build processes, SDK generation can be done alongside all the other stops along this life cycle. When you iterate on an API, you simply auto-generate documentation, tests, SDKs, and other aspects of supporting your services. Not all providers I talk with are easily able to jump into this aspect of producing code, as their build processes aren't as streamlined, and some of their APIs are too large to expect auto-generated code to perform as expected. However, it is something they are working towards, along with the other microservice and decoupling efforts going on across their teams.
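To make the auto-generation idea concrete, here is a toy, definition-driven generator that walks an OpenAPI-style paths object and emits one client method per operation. Real tools like Swagger Codegen and APIMATIC do far more (models, auth, packaging, multiple languages); this only illustrates the principle:

```python
# A minimal OpenAPI-style fragment; real specs carry far more detail.
SPEC = {
    "paths": {
        "/users": {"get": {"operationId": "listUsers"}},
        "/users/{id}": {"get": {"operationId": "getUser"}},
    }
}

METHOD_TEMPLATE = """    def {name}(self, **params):
        return self._request("{verb}", "{path}", params)
"""

def generate_client(spec):
    """Emit a Python client class with one method per operation,
    driven entirely by the API definition."""
    lines = ["class Client:"]
    for path, operations in spec["paths"].items():
        for verb, operation in operations.items():
            lines.append(METHOD_TEMPLATE.format(
                name=operation["operationId"], verb=verb.upper(), path=path))
    return "\n".join(lines)
```

Because the generator only reads the definition, regenerating the SDK on every API iteration is just another build step, which is exactly why it slots into CI/CD so naturally.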
Once you embrace an API definition-driven approach to delivering APIs, the line between deployment and SDKs blurs--it is all about generating code from your definitions. Sometimes the code is providing resources, and other times it is consuming them. It just comes down to whether you are deploying code server or client side. Another significant shift I'm seeing in the landscape with SDKs is things moving beyond just programming languages, and providing platform-specific libraries for managing SalesForce, AWS, Docker, and other common components of our operations--further evolving the notion of what an SDK is and does in 2018.
Stripe.js and its supporting elements provide a robust set of solutions for integrating the Stripe API into your website, or web or mobile application. You can choose a pre-made element, customize it as you see fit, or build your own using the Stripe.js SDK. It provides a great place to start when learning about Stripe, reverse engineering some existing solutions, and figuring out what integration will ultimately look like. In my scenario, the default Stripe element in their documentation works just fine for me, but I couldn't help playing with some of the others just to see what is possible.
You can tell Stripe has invested A LOT into their Stripe.js SDK, and the overall user experience around it. It provides a great example of how far you can go with embeddable API solutions. I like that they have the Stripe Elements published to Github, and available in six different languages. As I was learning and Googling, I came across other examples of Stripe.js deployments on 3rd party sites, making me think it would be nice if Stripe had a user-generated elements gallery as part of their offering, accepting pull requests from developers in the Stripe community. It wouldn't be that hard to come up with a template markdown page that developers could fill out and submit, sharing their unique approach to publishing Stripe Elements.
[The Slack team has published the most robust and honest story about using OpenAPI, providing a blueprint that other API providers should be following](https://medium.com/slack-developer-blog/standard-practice-slack-web-openapi-spec-daaad18c7f8). What I like most about Slack's approach to developing, publishing, and sharing their OpenAPI is the honesty behind why they are doing it: to help standardize around a single definition. [They publish and share the OpenAPI on Github](https://github.com/slackapi/slack-api-specs), which other API providers are doing, and which I think should be standard operating procedure for all API providers, but they also go into the realities regarding the messy history of their API documentation--an honesty that I feel ALL API providers should be embracing. My favorite part of the story from Slack is the opening paragraph that honestly portrays how they got here: _"The Slack Web API’s catalog of methods and operations now numbers nearly 150 reads, writes, rights, and wrongs. Its earliest documentation, much still preserved on api.slack.com today, often originated as hastily written notes left from one Slack engineer to another, in a kind of institutional shorthand. Still, it was enough to get by for a small team and a growing number of engaged developers."_ Even though we all wish we could do APIs correctly, and support API documentation perfectly from day one, this is never the reality of API operations. OpenAPI will not be a silver bullet for fixing all of this, but it can go a long way in helping standardize what is going on across teams, and within an API community. Slack cites SDK development, Postman client usage, alternative forms of documentation, and mock servers as the primary reasons for publishing the OpenAPI for their API. They also share some of the back story regarding how they crafted the spec, and their decision making process behind why they chose OpenAPI over other specifications.
They also share a bit of their road map regarding the API definition, and that they will be adopting OpenAPI v3.0, providing _"more expressive JSON schema response structures and superior authentication descriptors, specifications for incoming webhooks, interactive messages, slash commands, and the Events API tighter specification presentation within api.slack.com documentation, and example spec implementation in Slack’s own SDKs and tools"_. I've been covering leading API providers' move towards OpenAPI adoption for some time, writing about [the New York Times publishing their OpenAPI definition to Github](https://apievangelist.com/2017/03/01/new-york-times-manages-their-openapi-using-github/), and [Box doing the same, but providing even more detail behind the how and why of doing OpenAPI](https://apievangelist.com/2017/05/22/box-goes-all-in-on-openapi/). Slack continues this trend, but showcases more of the benefits it brings to the platform, as well as the community. All API providers should be publishing an up-to-date OpenAPI definition to Github by default, like Slack does. They should also be standardizing their documentation, mock and virtualized implementations, generating SDKs, and driving continuous integration and testing using this OpenAPI, just like Slack does. They should be this vocal about it too, encouraging the community to embrace and ingest the OpenAPI across the on-boarding and integration process. I know some folks are still skeptical about what OpenAPI brings to the table, but increasingly the benefits are outweighing the skepticism--making it hard to ignore OpenAPI. Another thing I want to highlight in this story is that Taylor Singletary ([@episod](https://twitter.com/episod)), reality technician, documentation & developer relations at Slack, brings an honest voice to this OpenAPI tale, which is something that is often missing from the platforms I cover.
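This is what a published OpenAPI unlocks for consumers: the definition is machine-readable, so tooling can enumerate every documented operation. A small sketch using an inline spec fragment (Slack's real definition lives in their slack-api-specs Github repository):

```python
import json

# Inline spec fragment for illustration; Slack's actual definition is
# published in the slackapi/slack-api-specs repository on Github.
SPEC_JSON = """{
  "swagger": "2.0",
  "paths": {
    "/chat.postMessage": {"post": {"operationId": "chat_postMessage"}},
    "/users.list": {"get": {"operationId": "users_list"}}
  }
}"""

def list_operations(spec_json):
    """Enumerate every documented operation in an OpenAPI/Swagger spec."""
    spec = json.loads(spec_json)
    ops = []
    for path, verbs in spec.get("paths", {}).items():
        for verb, details in verbs.items():
            ops.append((verb.upper(), path, details.get("operationId")))
    return sorted(ops)
```

The same traversal is what powers documentation generators, mock servers, and SDK generators, which is why publishing the definition benefits the whole community, not just the provider.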
This is how you make boring ass stories about mundane technical aspects of API operations like API specifications something that people will want to read. You tell an honest story, that helps folks understand the value being delivered. You make sure that you don't sugar coat things, and you talk about the good, as well as some of the gotchas like Taylor has, and connect with your readers. It isn't rocket science, it is just about caring about what you are doing, and the human beings your platform impacts. When done right you can move things forward in a more meaningful way, beyond what the technology is capable of doing all by itself.
I’ve written about VersionEye a couple of times. They help you monitor the 3rd party code you use, keeping an eye on dependencies, license violations, and security issues. I’ve written about the license portion of this equation, but they came up again while doing my API security research, and I wanted to make sure I revisited what they were up to in this aspect of the API lifecycle, floating them up on my radar.
VersionEye keeps an eye on multiple security databases and helps you monitor the SDKs you are using in your application. Inversely, if you are an API provider generating SDKs for your API consumers to put to use, it seems like you should be proactively leveraging VersionEye to keep an eye on the security aspects of your SDK management. They even help developers within their existing CI/CD workflows, which is something you should be considering as you plan, craft, and support your APIs--making it as easy as possible to leverage your API's SDKs in your own workflow, doing the same for your consumers, paying attention to security at each step, and breaking your CI/CD process when security is breached.
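A CI security gate in the spirit of what VersionEye enables can be sketched in a few lines: compare pinned dependency versions against an advisory list, and fail the build on any match. The package names and advisory data below are invented for illustration:

```python
# Invented advisory data: (package, vulnerable version) pairs. A real
# gate would pull these from a security database, not hard-code them.
ADVISORIES = {("examplelib", "1.2.0")}

def check_dependencies(pinned):
    """Return the pinned (package, version) pairs matching an advisory."""
    return sorted((p, v) for p, v in pinned.items() if (p, v) in ADVISORIES)

def ci_gate(pinned):
    """True when the build may proceed; False breaks the pipeline."""
    return not check_dependencies(pinned)
```

Wiring a check like this into the pipeline is what "breaking your CI/CD process when security is breached" looks like in practice: a vulnerable SDK version never ships.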
I also wrote a while back about how VersionEye has open sourced their APIs, highlighting how you can deploy them into any environment you desire. I'm fascinated by the model VersionEye provides for the API space. They are offering valuable services that help us manage our crazy worlds, with a viable commercial and open source offering that integrates with your existing CI/CD workflow. Next, I'm going to study the dependency portion of what VersionEye offers, then take some time to better understand their business model and pricing. VersionEye is pretty close to what I like to see in a service provider. They don't have all the shine of a brand new startup, but they have all the important elements that really matter.
I found myself looped into another API patent situation. I'm going to write this up as I would any other patent story, then I will go deeper because of my personal connection to this one, but I wanted to make sure I called this patent what it is, and what ALL API patents are--a bad idea. Today's patent is for an automatch process and system for software development kit for application programming interface:
Title: Automatch process and system for software development kit for application programming interface
Patent #: US 20170102925 A1
Abstract: A computer system and process is provided to generate computer programming code, such as in a Software Development Kit (SDK). The SDK generated allows an application to use a given API. An API description interface of the system is operable to receive API-description code describing one or more endpoints of the API. A template interface is operable to receive one or more templates of code defining classes and/or functions in a programming language which can be selected by the selection of a set of templates. A data store is operable to use a defined data structure to store records of API description code to provide a structured stored description of the API. A code generation module is operable to combine records of API with templates of code which are arranged in sets by the language of the code they contain. The combining of records and code from templates may use pointers to a data structure which is common to corresponding templates in different sets to allow templates of selected languages to be combined with any API description stored.
Original Assignee: Syed Adeel Ali, Zeeshan Bhatti, Parthasarathi Roop, APIMatic Limited
If you have been in the API space for as long as I have, you know that the generation of API SDKs using an API definition is not original or new; it is something that has been going on for quite some time, with many open source solutions available on the landscape. It is something everyone does, and there are many services and tooling available out there to deliver SDKs in a variety of languages for your API. It is something that just isn't patent-worthy (if there were such a thing anymore). It shouldn't exist, and the authors should know better, and the US Patent Office should know better, but in the current digital environment we exist in there isn't a lot of sensibility and logic when it comes to business in general, let alone intellectual property.
I haven't pulled any patents as part of my API patent research in some time, so this particular patent hadn't come across my radar. I was alerted to the patent via a Tweetstorm, shared with me in Slack by Adeel (one of the authors of the patent):
In which my response was, “oh. ouch. well, gonna have to think on this one sir. I’ll let you know my response, but I’m going to guess you might not like how I feel either. I’m not pro-patent. let me simmer on it for a couple days and I’ll share my thoughts.” As I was simmering I was pulled into the conversation again by Tony Tam:
Better products great for everyone. Undermining OSS efforts bad for all. And again, US patent system is hella busted this case in point— Tony Tam (@fehguy) August 26, 2017
In which my response was:
I do! But too many for the Twitterz!— Kin Lane (@kinlane) August 25, 2017
I was taking my usual time, gathering my thoughts in my notebook. Taking walks. Simmering. Then I wake up Saturday morning to this Tweet:
The comment did get me to Tweet a couple times, but ultimately I'm not going to engage, or deviate from my initial plan to gather my thoughts and write a more thoughtful post. The machine wants me to respond emotionally, fractionally, and in ways that can be taken out of context. This is how the technology space works, and how it keeps you serving its overlords--the money folks behind it.
I do have to admit, when I first responded I was going to take a much harder line against APIMATIC, but after seeing the personal attacks on Adeel, his attempts to defend himself, and then this no-presence Twitter account coming at me personally, bringing into question whether I was being paid to write articles, I changed the tone of this story significantly. The conversation around this patent shows what is wrong with the business and politics of APIs, more than anything API patents have done to the community to date.
First, Adeel doesn't understand the entire concept of what Tony meant when he said patent, any more than he grasps what Tony meant by douchebag. This isn't a jab at Adeel. It's the truth. Adeel is a Pakistani, living in New Zealand. I've spent the last couple of years engaging with him in person, and virtually via hangouts, Slack, and email. I'm regularly having to explain some very western concepts to him, and often find myself thinking deeply about what I'm saying to make sure I am bridging things properly. Not because Adeel isn't smart (he's exceptionally smart); it is just because of the cultural divide that exists between him and me.
Adeel didn't file his patent out of some competitive ambition, or stealing of open source ideas as referenced in the Tweetstorm. He did it because it is what the academic environment where APIMATIC was born encouraged, and it is something that is associated with smart ideas by the institution. This concept isn't unique to New Zealand; it is something that still flourishes in the United States. Where the cultural bridge was necessary is when it comes to why patents are bad. In these academic environments, if you have a good idea, you patent it. It is something that is historically acceptable, and encouraged. You see this across institutions and within enterprises around the globe, with patents as a measure of individual success. Adeel and his team had a good idea, so they patented it; he didn't think he was doing anything wrong.
Adeel isn't being any more aggressive or vindictive than Sal or Matt were with their hypermedia patents, or Jon Moore was with his patents. It is what their VCs, companies, and parent institutions encourage them to do with their good ideas (go after them and make your mark on the world). I fault all of these individuals just as much as I fault Adeel, or even Tony for that matter, when it comes to the name of an open source product suddenly becoming a trademarked product. Ironically, this conversation is going up right after a post regarding how much I respect Tony for his work, despite me being VERY upset about Swagger not remaining attached to an open source product we all had contributed so much to, and helped spread the word about. If we are worried about the sanctity of open source, we should be defending all dimensions of intellectual property. I know, not a perfect comparison, but I imagine Tony feels similar to what I did when this happened. Which I let him know via email, but I have never gone after anyone individually, or personally, siccing my dogs on him, although I'm sure it might have felt that way.
All of this makes me think deeper about the relationship between open source and APIs. What are the responsibilities of companies who wrap open source technology with an API and offer a commercial service? How do licensing, trademarks, patents, and other intellectual property concerns need to be respected? I'm guessing I could go through many of the API patents in my research and find that most filings did not consider or respect open source offerings that came before them. People had a good idea. They were in an environment that encourages patents, and they filed for a patent to show their idea was worthwhile--that USPTO stamp of approval is widely recognized as a way to acknowledge your idea is worthy.
I want to make this clear. I am not defending patents. I am defending Adeel. I am doing so strongly because of the no-name person who decided to chime in and question the credibility of my API storytelling. Otherwise I would have laid things out, and told Adeel he needed to learn the lesson and move on. I just saw everyone piling on Adeel like he was a bad person, and some tech bro just patenting things to get the upper hand. I have known Adeel for years, and know that is not the situation. Once I started getting piled on as well, it switched things into personal mode for me, and now I feel the need to defend my friend, but not his actions. The tech community loves to pile on, and I deal with wave after wave of tech bros who don't read my work and accuse me of things that people who DO read my work would NEVER accuse me of. Adeel is my friend. Adeel is an extremely intelligent, honorable, kind-hearted person, and it pissed me off when people started piling on him.
Now that you understand my personal relationship, let's address my business relationship. I am an advisor to APIMATIC. Not because I'm paid, or there are stock options; it is because he is my friend. In August (a couple of weeks ago), Adeel and his investors offered me 0.25% share capital in the company, but there has been no paperwork, nothing signed, and no deals made as of today. Honestly, after the $318 check I got for my latest advisory role, which I spent about 60 hours of work on, with travel costs out of my own pocket, I have little interest in pushing the conversations forward to the point where things ever get formalized. It just isn't a priority for me, and it damn sure has never influenced my writing about APIMATIC, let alone a story from June of 2015, like the Tweetstorm participant accused me of, without doing any due diligence on who I am. You are always welcome to ask ANY of my partners if I take money to write positive things about their products, or guest posts; you'll get the same answer from all of them--it is something I get asked regularly, and just don't give a shit about doing.
API patents are a bad idea. Patent #US 20170102925 A1, for an automatch process and system for software development kit for application programming interface, is a bad idea. API patents are a bad idea because people think they are a sign of being smart when they are not. API patents are a bad idea because corporations, institutions, and organizations keep telling people they are a sign of being smart, and people believe it. API patents are a bad idea because the entire US patent system is broken, an intellectual property relic of a physical age being leveraged in a digital realm where things are meant to be interoperable and reusable. API patents are a bad idea because y'all keep doing them thinking they are a good idea, when they just open you up to being a tool that allows corporations to lock up ideas, and provide a vehicle for others to use them as leverage in a court of law, and in behind-closed-doors negotiations. Ultimately, whether or not patents are bad will come down to who litigates with their patent portfolios, and the patents they acquire. The scariest part is that most of this leveraging and strong-arming won't happen in a public court; it will happen behind closed doors in arbitration, and in venture capital negotiations.
Which really brings me to the absurdity of this latest patent Tweetstorm. I'm all for showcasing that patents are a bad idea, and even shaming companies and individuals for doing them. However, I saw this one get personal a couple of times, and it even took a jab at me. Y'all really shouldn't do this shit in glass houses, because patents are just part of your intellectual property problems. The NDAs y'all are signing are a bad idea. Those deals you are making with VCs are mostly bad ideas. Y'all are just as ignorant, or maybe in denial, about the deals you are making as Adeel is about the negative impacts of the US patent system. Most of these deals you make will result in your startup being wielded in more damaging ways than Adeel's patent ever will. Yet you keep doing startups, signing deals, and attacking your competition, and people like me who are just investing in the community. Adeel is currently reading about patents, which I regret to say won't provide him with the answers he is looking for. There are few books on the subject. Maybe reading something from Lawrence Lessig might get you part of the way there. The real answers you are looking for come from experience, and that is what you just received. This was an easy lesson; just wait until APIMATIC is acquired by a bigco, and see your patents wielded in ways you never imagined--those will be some harder lessons.
I don't have a problem with calling out Adeel for APIMATIC's patent--it is what should happen. I have a problem with him being called a douchebag, and the other personal attacks and assumptions on his (and my) character that occurred within the Tweetstorm. By all means, call him out on Twitter. By all means, educate him about the damage done to the OSS and API community by doing patents. Don't make these things personal, assume malicious intent, and recklessly begin questioning the character of everyone involved. Also, if you are just spending your time calling out your competitors for this behavior, because it bothers you, and you don't call out your partners, the companies you work for, and the services you use for their abusive use of OSS and damaging intellectual property claims, you have significantly weakened your argument, and won't always be able to count on me to jump into the discussion and have your back. I do this shit full time, with no financial backing, and no corporate or institutional cover. I'm out here full time, not hiding behind a Twitter handle, holding folks accountable whenever and wherever I can. If you are doing a startup, learn from this story. Educate your institutions, companies, and investors about how API patents are damaging everything we are doing, and will never actually demonstrate you have a novel idea; they are just locking up other ideas.
In the end, everything we know as API is already patented. Look through the thousands of API patents in my research. Hell, Microsoft already has the patent on API definitions, so what does it matter if there is a patent on generating SDKs from API definitions? Also, my patent on API patents is pending, so it will all be game over then. Mwahahaha!! ;-) As Tony said, the patent system is broken. Let’s keep letting our companies, organizations, institutions, and most importantly the USPTO know that it is broken. Let’s keep calling out folks who still think API patents are a good idea (sorry Adeel), and schooling them on the why, but let’s not be dicks about it. Let’s not assume people have bad intentions. Let’s understand the history of patents, and that many people are still taught that they are a sign of having good ideas, and just part of regular business operations. Let’s also make sure we lead by building a better API service or tooling, as Adeel brought up in his defense. When he said this, he wasn’t making it about APIMATIC being better than Swagger Codegen (or others); he is just focused on making a better product in general. I’m sorry, but he has. Y’all can focus on the merits of Swagger Codegen vs. APIMATIC, but what he’s done with the SDK docs, portal, and continuous integration ARE great improvements. Much like the commercial service Swagger has become (ya know, the Swagger in Swagger Codegen), which evolved on top of the open source API specification and tooling Tony set into motion for ALL of us to build upon–thanks again Tony.
P.S. I wouldn’t have been so hard on Tony if I hadn’t been looped in to defend intellectual property and OSS like this. P.P.S. I wouldn’t have been so easy on Adeel if a no-name McTwitter account hadn’t accused me of selling out, which I don’t do. P.P.P.S. Adeel, patents are bad because people use them in bad ways, and the US Patent Office is underfunded, understaffed, and can’t tell good patents from bad–this is the way the bigcos want it. P.P.P.P.S. Sorry I’m such an asshole everyone, but I hope y’all are getting somewhat used to it after seven years.
**Disclosure:** I am an advisor to APIMATIC (very proud of it)!
I’m always on the hunt for healthy patterns that I would like to see API providers, and API service providers consider when crafting their own strategies. It’s what I do as the API Evangelist. Find common patterns. Understand the good ones, and the bad ones. Tell stories about both, helping folks understand the possibilities, and what they should be thinking about as they plan their operations.
VersionEye, a very useful service that notifies you about security vulnerabilities, license violations, and out-dated dependencies in your Git repositories, has a nice approach to delivering their API, as well as the other components of their stack. You can either use VersionEye in the cloud, or deploy it on-premise:
- versioneye-core - Models, Services & Mails for VersionEye
- crawl_r - VersionEye crawlers implemented in Ruby.
- versioneye-security - Security Crawler for VersionEye
- versioneye-api - JSON REST API for VersionEye
- versioneye-tasks - Thin wrapper around the versioneye-core.
- versioneye - VersionEye.com
VersionEye also has their entire stack available as Docker images, ready for deployment anywhere you need them. I wanted to have a single post that I can reference when talking about possible open source, on-premise, continuous integration approaches to delivering API solutions that actually have a sensible business model. VersionEye spans the areas that I think API providers should consider investing in, delivering SaaS or on-premise, while also delivering open source solutions, and generating sensible amounts of revenue.
Many APIs I come across do not have an open source version of their API. They may have open source SDKs, and other tooling on Github, but rarely does an API provider offer up an open source copy of their API, as well as Docker images. VersionEye’s approach to operating in the cloud, and on-premise, while leveraging open source and APIs, as well as dovetailing with existing continuous integration flows, is worth bookmarking. I am feeling like this is the future of API deployment and consumption, but don’t get nervous, there is still plenty of money to be made via the cloud services.
I have been watching VersionEye for a while now. If you aren’t familiar, they provide a service that will notify you of security vulnerabilities, license violations and out-dated dependencies in your Git repositories. I wanted to craft a story specifically about their licensing notification services, which can check all your open source dependencies against a license white list, then notify you of violations, and changes at the SDK licensing level.
The first thing I like here is the notion of an API SDK licensing whitelist. The idea that there is a service that could potentially let you know which API providers have SDKs that are licensed in a way that meets your integration requirements. I think it helps developers who are building applications on top of APIs understand which APIs they should or shouldn’t be using based upon SDK licensing, while also providing an incentive for API providers to get their SDKs organized, including the licensing–you’d be surprised at how many API providers do not have their SDK house in order.
VersionEye also provides CI/CD integration, so that you can stop a build based on a licensing violation–injecting the politics of API operations, from an API consumer’s perspective, into the application lifecycle. I’m interested in VersionEye’s CI/CD, as well as security vulnerabilities, but I wanted to make sure this approach to keeping an eye on SDK licensing was included in my SDK, monitoring, and licensing research, influencing my storytelling across these areas. Someday all API providers will have a wide variety of SDKs available, each complete with clear licensing, published on Github, and indexed as part of an APIs.json. We just aren’t quite there yet, and we need more services like VersionEye to help build awareness at the API SDK licensing level to get us closer to this reality.
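To make the whitelist idea concrete, here is a minimal sketch of what a license gate in a build pipeline might look like. This is not VersionEye's actual API; the function names, the approved-license set, and the dependency data are all illustrative assumptions.

```python
# Hypothetical sketch: fail a CI build when a dependency's license is not
# on an approved whitelist. All names and data here are illustrative.
APPROVED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def check_licenses(dependencies):
    """Return the (name, license) pairs that violate the whitelist."""
    return [(name, lic) for name, lic in dependencies.items()
            if lic not in APPROVED_LICENSES]

def ci_gate(dependencies):
    """Stop the build (non-zero exit) if any dependency violates the whitelist."""
    violations = check_licenses(dependencies)
    if violations:
        raise SystemExit(
            "License violations: "
            + ", ".join(f"{name} ({lic})" for name, lic in violations))
    return True
```

A real integration would pull the dependency list from your package manifest and the license data from a service like VersionEye, but the gate itself stays this simple.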
The Azure Marketplace provides the ability to test drive anything deployed within it. As someone who has to sign up for an endless number of new accounts to be able to play with APIs and API services, I’m a big fan of the concept of a test drive–not just for web applications, or backend infrastructure, but specifically for individual APIs and microservices.
From the Azure site: Test Drives are ready to go environments that allow you to experience a product for free without needing an Azure subscription. An additional benefit with a Test Drive is that it is pre-provisioned - you don’t have to download, set up or configure the product and can instead spend your time on evaluating the user experience, key features, and benefits of the product.
I like it. I want more of these. I want to be able to test drive, then deploy any API I want. I don’t want to sign up for an account, enter my credit card details, talk to sales, or sign up for a 30-day trial–I want to test drive. I want it to have data in it, and be pre-configured for a variety of use cases, helping me understand what is possible.
I want all the friction removed between finding an API (discovery via a marketplace), understanding what it does, test driving it, and then deploying it in any cloud I want. I think we are still a little bit off from this being as frictionless as I envision in my head, but I hope with enough nudging we will get there very soon.
I’ve added three OpenAPI extensions from APIMATIC to my OpenAPI Toolbox, adding to the number of extensions I’m tracking on that service providers and tooling developers are using as part of their API solutions. APIMATIC provides SDK code generation services, so their OpenAPI extensions are all about customizing how you deploy code as part of the integration process.
These are the three OpenAPI extensions I am adding from them:
- x-codegen-settings - These settings are globally applicable to all operations and schema definitions.
- x-operation-settings - These settings can be specified inside an "operation" object.
- x-additional-headers - These headers are in addition to any headers required for authentication or defined as parameters.
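To show how these extensions sit inside a definition, here is an illustrative fragment of an OpenAPI 2.0 document carrying all three, expressed as a Python dictionary. The extension names come from APIMATIC's documentation, but the specific setting values shown are my own assumptions, not their documented defaults.

```python
# Illustrative OpenAPI 2.0 fragment using APIMATIC's vendor extensions.
# The extension keys are real; the values inside them are assumptions.
definition = {
    "swagger": "2.0",
    "info": {"title": "Example API", "version": "1.0"},
    "x-codegen-settings": {          # global: applies to all operations
        "projectName": "example-sdk",
    },
    "paths": {
        "/items": {
            "get": {
                "x-operation-settings": {   # scoped to this one operation
                    "retryOnFailure": True,
                },
                "x-additional-headers": {   # sent beyond auth and parameters
                    "X-Client-Version": "1.0",
                },
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}
```

Because these are `x-` prefixed extensions, any OpenAPI tooling that doesn't understand them will simply ignore them, which is what makes this approach safe to bake into a shared definition.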
If you have ever used APIMATIC you know that you can do a lot more than just “SDK generation”, which often has a bad reputation. APIMATIC provides some interesting ways you can use OpenAPI to dial in your SDK, script, and code generation as part of any continuous integration lifecycle.
This provides another example of how you don’t have to live within the constraints of the current OpenAPI spec. Anyone can augment and extend the current specification to meet their unique needs. Then who knows, maybe it will become useful enough, and something that might eventually be added to the core specification. Which is part of the reason I’m aggregating these specifications, and including them in the OpenAPI Toolbox.
One layer of my API research is dedicated to keeping track on what is going on with API software development kits (SDK). I have been looking at trends in SDK evolution as part of continuous integration and deployment, increased analytics at the SDK layer, and SDKs getting more specialized in the last year. This is a conversation that Google is continuing to move forward by focusing on enhanced storage, logging, and analytics at the SDK level.
Google provides a nice example of how API providers are increasing the number of available resources at the SDK layer, beyond just handling API requests and responses, and authentication. I’ll try to carve out some time to paint a bigger picture of what Google is up to with SDKs. All their experience developing and supporting SDKs across hundreds of public APIs seems to be coming to a head with the Google Cloud SDK effort(s).
I’m optimistic about the changes at the SDK level, so much I’m even advising APIMATIC on their strategy, but I’m also nervous about some of the negative consequences that will come with expanding this layer of API integration–things like latency, surveillance, privacy, and security are top of mind. I’ll keep monitoring what is going on, and do deeper dives into SDK approaches when I have the time and $money$, and if nothing else I will just craft regular stories about how SDKs are evolving.
Disclosure: I am an advisor of APIMATIC.
Adding to a growing number of command line interfaces (CLI) being deployed side by side with APIs, SDK generation provider APIMATIC just released a CLI for their platform, continuing their march towards being a continuous integration provider. There was a great post the other day on Nordic APIs about CLIs, highlighting one way API providers seem to be investing in CLIs to help increase the chances that their services will get baked into applications and system integrations.
"APIMatic CLI is a command line tool written in Python which serves as a wrapper over our own Python SDK. It is available in the form of a small windows executable so you can easily plug it into your build cycle. You no longer have to write your own code or set up a development environment for the consumption of our APIs."
SDK generation, API validation, and API transformation baked into your workflow, all driven by API definitions available via any URL. This is a pretty important layer of your API lifecycle, something that isn't easily done if you are still battling the monolith system, but when you've gone full microservices, DevOps, Continuous Integration Kung Fu (tm), it provides you with a pretty easy way to define, deploy, and validate API endpoints, as well as the system integrations that consume those APIs--all driven and choreographed by your API definitions.
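The validation step in that workflow can be sketched simply. This is a toy check, not APIMATIC's actual validation logic; a real build step would run a full OpenAPI validator, but even a minimal gate like this catches the definitions that would break SDK generation downstream.

```python
# Minimal sketch of validating an OpenAPI definition as a build step.
# Real validation is far more thorough; this only checks the basics.
def validate_definition(definition):
    """Return a list of human-readable errors; empty means the build can proceed."""
    errors = []
    if "swagger" not in definition and "openapi" not in definition:
        errors.append("missing version field")
    if not definition.get("paths"):
        errors.append("no paths defined")
    for path, operations in definition.get("paths", {}).items():
        for verb, operation in operations.items():
            if "responses" not in operation:
                errors.append(f"{verb.upper()} {path}: no responses")
    return errors
```

Wired into CI, a non-empty error list would fail the build, the same way a failing unit test does.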
I'm still very API-centric in my workflows and use the APIMATIC API to generate SDKs, and API Transformer to translate API definitions from one format to the other, but I understand that a CLI is more practical for the way some teams operate. API service providers seem to be getting the message and responding to developers with fully functional CLIs, as Zapier did the other day with the release of their CLI--pushing the boundaries of what is a continuous integration platform as a service.
I asked Adeel of APIMATIC when they would have a CLI generator based upon API definitions. I mean, their CLI is a tool that wraps the APIMATIC SDK, which I assume is generated by APIMATIC, using an API definition. Why can't they just autogenerate a CLI for their customers using the same API definition being used to generate the SDK? He responded with a smile. ;-)
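To show why autogenerating a CLI from the same definition is plausible, here is a rough sketch deriving an `argparse` parser from an OpenAPI-style definition, with one subcommand per `operationId`. This is my own illustration of the idea, not anything APIMATIC ships, and the definition shape is simplified.

```python
import argparse

# Hypothetical sketch: derive a CLI from the same (simplified) OpenAPI
# definition used for SDK generation--one subcommand per operationId.
def build_cli(definition):
    parser = argparse.ArgumentParser(prog=definition["info"]["title"])
    subcommands = parser.add_subparsers(dest="command", required=True)
    for path, operations in definition["paths"].items():
        for verb, operation in operations.items():
            cmd = subcommands.add_parser(
                operation["operationId"], help=f"{verb.upper()} {path}")
            # Each operation parameter becomes a --flag on the subcommand.
            for param in operation.get("parameters", []):
                cmd.add_argument(f"--{param['name']}",
                                 required=param.get("required", False))
    return parser
```

The point is that the definition already carries everything a CLI needs: commands, flags, and help text all fall out of the same document that drives the SDK.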
Disclosure: I am an advisor to APIMATIC.
I've been an advocate for OpenAPI since its release, writing hundreds of stories about what is possible. I do not support OpenAPI because I think it is the perfect solution, I support it because I think it is the scaffolding for a bridge that will get us closer to a suitable solution for the world we have. I'm always studying how people see OpenAPI, both positive and negative, in hopes of better crafting examples of it being used, and stories about what is possible with the specification.
When you ask people what OpenAPI is for, the most common answer is documentation. The second most common answer is for generating code and SDKs. People often associate documentation and code generation with OpenAPI because these were the first two tools that were developed on top of the API specification. I do not see much pushback from the folks who don't like OpenAPI when it comes to documentation, but when it comes to generating code, I regularly see folks criticizing the concept of generating code using OpenAPI definitions.
When I think about OpenAPI I think about a life cycle full of tools and services that can be delivered, with documentation and code generation being just two of them. I feel it is shortsighted to dismiss OpenAPI because it falls short in any single stop along the API lifecycle as I feel the most important role for OpenAPI is when it comes to API literacy--helping us craft, share, and teach API best practices.
OpenAPI, API Blueprint, and other API definition formats are the best way we currently have to define, share, and articulate APIs in a single document. Sure, a well-designed hypermedia API allows you to navigate, explore, and understand the surface area of an API, but how do you summarize that in a shareable and collaborative document that can also be used for documentation, monitoring, testing, and other stops along the API life cycle?
I wish everybody could read the latest API design book and absorb the latest concepts for building the best API possible. Some folks learn this way, but in my experience, a significant segment of the community learns from seeing examples of API best practices in action. API definition formats allow us to describe the moving parts of an API request and response, which then provides an example that other API developers can follow when crafting their own APIs. I find that many people simply follow the API examples they are familiar with, and OpenAPI allows us to easily craft and share these examples for them to follow.
If we are going to do APIs at the scale we need, we have to help folks craft RESTful web APIs, as well as hypermedia, GraphQL, and gRPC APIs. We need a way to define our APIs, and articulate this definition to our consumers, as well as to other API providers. This helps me remember not to get hung up on any single use of OpenAPI, and other API specification formats, because first and foremost, these formats help us with API literacy, which has wider implications than any single API implementation, or stop along the API life cycle.
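To make the "single shareable document" point concrete, here is a minimal OpenAPI 2.0 definition for one hypothetical endpoint, expressed as a Python dictionary. The host, path, and field values are illustrative; the structure is what matters: request parameters and responses summarized in one artifact that can travel to documentation, testing, and code generation tools alike.

```python
import json

# A minimal, self-contained OpenAPI 2.0 document: one artifact that
# summarizes an API's surface area. Host and path are illustrative.
definition = {
    "swagger": "2.0",
    "info": {"title": "Example API", "version": "1.0"},
    "host": "api.example.com",
    "paths": {
        "/records/{id}": {
            "get": {
                "summary": "Retrieve a single record",
                "parameters": [{"name": "id", "in": "path",
                                "required": True, "type": "string"}],
                "responses": {"200": {"description": "A single record",
                                      "schema": {"type": "object"}}},
            }
        }
    },
}

# The whole contract travels as one shareable document.
shared = json.dumps(definition)
```

Hypermedia tells you this same story one link at a time; a definition like this tells it all at once, which is what makes it so useful for teaching and for tooling.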
I've been a big supporter of APIMATIC since they started, so I'm happy to see them continuing to evolve their approach to delivering SDKs using machine readable API definitions. I got a walkthrough of their new DX Kits the other day, something that feels like an evolutionary step for SDKs, and contributing to API providers making onboarding and integration as frictionless as possible for developers.
Let's walk through what APIMATIC already does, then I'll talk more about some of the evolutionary steps they are taking when auto-generating SDKs. It helps to see the big picture of where APIMATIC fits into the larger API lifecycle to assist you in getting beyond any notion of them simply being just an SDK generation service.
What makes APIMATIC such an important service, in my opinion, is that they don't just speak a single modern API definition format, they speak all of the API definition formats, allowing anyone to generate SDKs from the specification of their choice:
- API Blueprint
- Swagger 1.0 - 1.2
- Swagger 2.0 JSON
- Swagger 2.0 YAML
- WADL - W3C 2009
- Google Discovery
- RAML 0.8
- I/O Docs - Mashery
- HAR 1.2
- Postman Collection
- APIMATIC Format
As any serious API service provider should be doing, APIMATIC then opened up their API definition transformation solution as a standalone service and API. This allows these types of API transformations to be baked in, by anyone, at every stop along a modern API lifecycle.
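The core of a transformation like this is a structural mapping between formats. Here is a heavily simplified sketch converting a Postman-style collection into OpenAPI paths; the real Postman and OpenAPI schemas are much richer, and the shapes used here are trimmed-down assumptions for illustration.

```python
from urllib.parse import urlparse

# Simplified sketch of API definition transformation: mapping requests in a
# (simplified) Postman-style collection onto OpenAPI 2.0 paths.
def postman_to_openapi(collection):
    paths = {}
    for item in collection["item"]:
        request = item["request"]
        path = urlparse(request["url"]).path
        paths.setdefault(path, {})[request["method"].lower()] = {
            "summary": item["name"],
            # A real transformer would derive responses, parameters, and
            # schemas; we stub a 200 to keep the output valid.
            "responses": {"200": {"description": "OK"}},
        }
    return {"swagger": "2.0",
            "info": {"title": collection["info"]["name"], "version": "1.0"},
            "paths": paths}
```

Each supported input format needs its own mapping like this one, which is why supporting all of them, as APIMATIC does, is such a differentiator.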
Being so API definition driven, APIMATIC needed a practical way to manage API definitions, and allow their customers to add, edit, delete, and manipulate the definitions that would be driving the SDK auto generation process. APIMATIC provides one of the best API design interfaces I've found across the API service providers that I monitor, allowing customers to manage:
- Test Cases
Because APIMATIC is so heavily invested in having a complete API definition, one that will result in a successful SDK, they've had to bake healthy API design practices into their API design interface--helping developers craft the best API possible. #Winning
SDK Auto Generation
Now we get to the valuable, and time-saving portion of what APIMATIC does best--generate SDKs in ten separate programming language and platform environments. Once your API definition validates, you can choose to generate in your preferred language.
- Visual Studio - A class library project for Portable and Universal Windows Platform
- Eclipse - A compatible maven project for Java 5 and above
- Android Studio - A compatible Gradle project for Android Gingerbread and above
- XCode - A project based on CoCoaPods for iOS 6 and above
- PSR-4 - A compliant library with Composer dependency manager
- Python - A package compatible with Python 2 and 3 using PIP as the dependency manager
- Angular - A lightweight SDK containing injectable wrappers for your API
- Node.js - A client library project in Node.js as an NPM package
- Ruby - A project to create a gem library for your API based on Ruby>=2.0.0
- Go - A client library project for Go language (v1.4)
APIMATIC takes their SDKs seriously. They make sure they aren't just low-quality auto-generated code. I've seen the overtime they put into making sure the code they produce matches the styling and the reality on the ground for each language and environment being deployed.
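For readers who haven't seen code generation up close, here is a toy sketch of the basic mechanic: rendering one Python client method from one OpenAPI operation with string templating. This is my own illustration, nothing like APIMATIC's actual generators, which also handle authentication, serialization, error handling, and per-language idioms.

```python
# Toy sketch of SDK code generation: render one client method from an
# OpenAPI operation via templating. Real generators do far, far more.
TEMPLATE = '''def {name}({args}):
    """{summary}"""
    return client.request("{verb}", "{path}".format({fmt}))
'''

def render_method(path, verb, operation):
    params = [p["name"] for p in operation.get("parameters", [])]
    return TEMPLATE.format(
        name=operation["operationId"].replace("-", "_"),
        args=", ".join(["client"] + params),
        summary=operation.get("summary", ""),
        verb=verb.upper(),
        path=path,
        fmt=", ".join(f"{p}={p}" for p in params),
    )
```

The gap between this toy and a quality SDK is exactly the overtime mentioned above: naming conventions, dependency managers, and platform quirks for each of those ten environments.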
Deploying your API SDKs to Github is nothing new, but being able to autogenerate your SDK from a variety of API definition languages, and then publish to Github opens up a whole new world of possibilities. This is when Github can become a sort of API definition driven engine that can be installed into not just the API life cycle, but also every web, mobile, device, bot, voice, or any other client that puts an API to use.
This is where we start moving beyond SDK for me, into the realm of what APIMATIC is calling a DX Kit. APIMATIC isn't just dumping some auto-generated code into the Github repo of your choice. They are publishing the code, and now complete documentation for the SDK, to a Github README, so that any human can come along and learn what it does, and any other system can also come along and put the API definition driven, auto-generated code to work.
The evolution of the SDK continues with...well, continuous integration, and orchestration. If you go under the settings for your API in APIMATIC, you now also have the option to publish configuration files for four leading CI solutions.
APIMATIC had already opened up beyond just doing SDKs with the release of their API Transformer solution, and their introduction of detailed documentation for each kit (SDK) on Github. Now they are pushing into API testing and orchestration areas by allowing you to publish the required config files for the CI platform of your choosing.
I feel like their approach represents the expanding world of API consumption. Providing an API and SDK is not enough anymore. You have to provide and encourage healthy documentation, testing, and continuous integration practices as well. APIMATIC is aiming to "simplify API Consumption", with their DX Kits, which is a very positive thing for the API space, and worth highlighting as part of my API SDK research.
I am seeing more examples of analytics at the API client and SDK level, providing more access to what is going on at this layer of the API stack. I'm seeing API providers build them into the analytics they provide for API consumers, and more analytics services from providers for the web, mobile, and device endpoints. Many companies are selling these features in the name of awareness, but in most cases, I'm guessing it is about adding another point of data generation which can then be monetized (IoT is a gold rush!).
As I do, I wanted to step back from this movement and look at it from many different dimensions, broken down into two distinct buckets:

The positive:

- More information - More data that can be analyzed.
- More awareness - We will have visibility across integrations.
- Real-time insights - Data can be gathered on a real-time basis.
- More revenue - There will be more revenue opportunities here.
- More personalization - We can personalize the experience for each client.
- Fault Tolerance - There are opportunities for building in API fault tolerance.

The negative:

- More information - If it isn't used it can become a liability.
- More latency - This layer slows down the primary objective.
- More code complexity - Introduces added complexity for devs.
- More security considerations - We just created a new exploit opportunity.
- More privacy concerns - There are new privacy concerns facing end-users.
- More regulatory concerns - In some industries, it will be under scrutiny.
I can understand why we want to increase the analysis and awareness at this level of the API stack. I'm a big fan of building resiliency into our clients & SDKs, but I think we have to weigh the positives and negatives before jumping in. Sometimes I think we are too willing to introduce unnecessary code and data gathering, and potentially open up security and privacy holes, chasing new ways to make money.
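The trade-off is easier to see in code. Here is a minimal sketch of an SDK call wrapper that adds both fault tolerance (retries) and analytics (per-attempt timing records). It is purely illustrative; the telemetry shape and retry policy are assumptions, and it makes visible exactly the extra code, data gathering, and latency this layer introduces.

```python
import time

# Sketch of an instrumented SDK call: retries for fault tolerance, plus
# a telemetry record per attempt. Illustrative only--note how much extra
# machinery now sits between the app and the API.
def instrumented_call(fn, retries=2, telemetry=None):
    telemetry = telemetry if telemetry is not None else []
    for attempt in range(retries + 1):
        start = time.monotonic()
        try:
            result = fn()
            telemetry.append({"attempt": attempt, "ok": True,
                              "elapsed": time.monotonic() - start})
            return result
        except Exception:
            telemetry.append({"attempt": attempt, "ok": False,
                              "elapsed": time.monotonic() - start})
            if attempt == retries:
                raise
```

Every record appended here is also a privacy and security question: where does the telemetry go, who sees it, and how long is it kept?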
I'm guessing it will come down to each SDK, and the API resources that are being put to work. I'll be aggregating the different approaches I am seeing as part of my API SDK research and try to provide a more coherent glimpse at what providers are up to. By doing this, I'm hoping I can better understand some of the motivations behind this increased level of analytics being injected at the client and SDK level.
I have been doing a lot of thinking about the client and SDK areas of my research lately, considering how these areas overlap with the world of bots, as well as with voice, and iPaaS. I'm thinking about the hand-crafted, autogenerated, and even API client as a service like Postman, and Paw. I'm thinking about how APIs are being put to work, across not just web and mobile, but also systems to system integration, and the skills in voice platforms like Alexa, and the intents in bot platforms like Meya.
I'm considering how APIs can deliver the skills needed for the next generation of apps beyond just a mobile phone. I kicked off my SDK research over a year ago, where I track on the approaches of leading platforms who are offering up code samples, libraries, and SDKs in a variety of programming languages. While conducting my research, I've been seeing the definition of what is an SDK slowly expand and get more specialized, with most of the expansion in these areas:
- Mobile Development Kits - Providing code resources to help developers integrate their iPhone, Android, Windows, and other mobile applications with APIs.
- Platform Development Kits - Providing code resources for using APIs in other specific platforms like WordPress, SalesForce, and others.
In addition to mobile, and specific platform solutions, I am seeing API providers stepping up and providing iPaaS options, like ClearBit is doing with their Zapier solutions. As part of this brainstorm exercise, I feel like I should also add a layer dedicated to delivering via iPaaS:
- Integration Platform as a Service Development Kits - Delivering code resources for use in iPaaS services like Zapier, allowing for simpler system to system integration across many different platforms, with some having a specific industry focus.
Next, if I scroll down the home page of API Evangelist, I can easily spot other areas of my research that stand out as either areas where I'm seeing SDK movement, or areas I'd consider to be an SDK opportunity:
- Voice Development Kit - Code resources to support voice application and device integrations.
- Bot Development Kit - Code resources for delivering bot implementations on various platforms.
- Visualization Development Kit - Code resources and tooling for helping deliver visualizations derived from data, content, and algorithms via API.
- Virtualization Development Kit - Code resources to support the creation of sandbox environments, use of dummy data, and other virtualization scenarios.
- Command Line Development Kit - Code resources to support usage of API resources accessed via command line interfaces.
- Orchestration Development Kit - Code resources and schema for use in continuous integration and other delivery tooling and services.
- Real Time Development Kit - Code resources designed to augment API resources with real-time features and technology.
- Spreadsheet Development Kit - Code resources designed for using spreadsheets as API data source, as well as putting API resources to use in spreadsheet environments.
- Drone Development Kit - Code resources designed to support drones, and the hardware, application, and networks that support them.
- Auto Development Kit - Code resources designed specifically for integration with major manufacturer and 3rd party aftermarket platforms.
I am thinking about open source solutions in a variety of programming languages that put APIs to work delivering data, content, and algorithms to the specialized endpoints above. I've seen mobile development kits evolve out of the standard approach to providing API SDKs, driven by the need to deliver resources to mobile phones, over questionable networks, and to a constrained UI. This type of expansion will continue to grow, increasing the demand for specialized client solutions that all employ the same stack of web APIs.
This is just some housekeeping and brainstorming around the client and SDK areas of my research. I'm just working to understand how some of the other areas of my research potentially overlap with these layers. Seeing common patterns in iPaaS, voice, and bots got me thinking about other areas where I've seen similar patterns occur. Obviously, these aren't areas of SDK development that all API providers should be thinking about, but depending on your target audience, a handful of them may apply.
I go back and forth regarding the importance of SDKs to API integration. I enjoy watching the API client as a service providers like Postman and Paw, as well as straight up SDK solutions like APIMATIC. When I crack open tooling like the Swagger Editor, and services like APIMATIC, I'm impressed with the increasing number of platforms available for deployment--enabling both API deployment and consumption. As I watch API service providers like Restlet and Apiary evolve their design, deployment, and management solutions to cater to more stops along the API life cycle, I find myself more interested in what could be next.
Every API provider will have slightly different needs, but there are definitely some common patterns which providers should be considering as they are kicking off their API presence, or looking to expand an existing platform. While there are some dissenting opinions on this subject, many API providers provide a range of specific language, mobile, and platform SDKs for their developers to put to use when integrating with their platforms.
A common approach I see from API providers when it comes to managing their SDKs is to break out their mobile SDKs into their own section, of which the communications API platform Bandwidth has a good example. Bandwidth offers iOS and Android SDKs, along with a mobile SDK quick start guide, to help developers get up and going. This approach gives their mobile developers a dedicated page to get at available SDKs, as well as other mobile-focused resources that will make integration as frictionless as possible.
Unless you're anti-SDK, you should at least have a dedicated page for all of your available SDKs. I would also consider putting all of them on Github, where you will gain the network effect brought by managing your SDKs on the social coding platform. Then, when it makes sense, also consider breaking out a dedicated mobile SDK page like Bandwidth has--I will also work on a roundup of other providers who have similar pages, to help understand a wide variety of approaches when it comes to mobile SDK management.
In my work every day as the API Evangelist, I get to have some very interesting conversations, with a wide variety of folks, about how they are using APIs, as well as brainstorming other ways they can approach their API strategy to be more effective. One of the things that keeps me going in this space is this diversity. One day I’m looking at Developer.Trade.Gov for the Department of Commerce, the next talking to WordPress about APIs for 60 million websites, and then I’m talking with The Church of Jesus Christ of Latter-day Saints about the Family Search API, which is actively gathering, preserving, and sharing genealogical records from around the world.
I'm so lucky I get to speak with all of these folks about the benefits and perils of APIs, helping them think through their approach to opening up their valuable resources using APIs. The process is nourishing for me because I get to speak to such a diverse number of implementations, push my understanding of what is possible with APIs, while also sharpening my critical eye and my understanding of where APIs can help, or where they can possibly go wrong. Personally, I find a couple of things very intriguing about the Family Search API story:
- Mapping the world's genealogical history using a publicly available API — Going Big!!
- Potential from participation by not just big partners, but the long tail of genealogical geeks
- Transparency, openness, and collaboration shining through as the solution beyond just the technology
- The mission driven focus of the API overlapping with my obsession for API evangelism intrigues and scares me
- Has an existing developer area, APIs, and the seemingly necessary building blocks, but has failed to achieve platform level
I’m open to talking with anyone about their startup, SMB, enterprise, organizational, institutional, or government API, always leaving open a 15 minute slot to hear a good story, which turned into more than an hour of discussion with the Family Search team. See, Family Search already has an API, they have the technology in order, and they even have many of the essential business building blocks as well, but where they are falling short is when it comes to dialing in both the business and politics of their developer ecosystem to discover the right balance that will help them truly become a platform—which is my specialty. ;-)
This brings us to the million dollar question: How does one become a platform?
All of this makes Family Search an interesting API story. The scope of the API is massive, and to take something this big to the next level, Family Search has to become a platform, and not a superficial "platform" where they just cater to three partners, but one nourishing a vibrant long tail ecosystem of website, web application, single page application, mobile application, and widget developers. Family Search is at an important inflection point: they have all the parts and pieces of a platform, they just have to figure out exactly what changes need to be made to open up, and take things to the next level.
First, let's quantify the company: what is FamilySearch? "For over 100 years, FamilySearch has been actively gathering, preserving, and sharing genealogical records worldwide", believing that "learning about our ancestors helps us better understand who we are—creating a family bond, linking the present to the past, and building a bridge to the future".
FamilySearch holds 1.2 billion total records, with 108 million completed in 2014 so far, 24 million awaiting processing, and 386 active genealogical projects underway. Family Search provides the ability to manage photos, stories, documents, people, and albums, allowing people to be organized into a tree, so everyone knows which branch they belong to in the global family tree.
FamilySearch started out as the Genealogical Society of Utah, which was founded in 1894 and dedicated to preserving the records of the family of mankind, looking to "help people connect with their ancestors through easy access to historical records". FamilySearch is a mission-driven, non-profit organization, run by The Church of Jesus Christ of Latter-day Saints. All of this comes together to define an entity that possesses an image that will appeal to some, while leaving concern for others, making for a pretty unique formula for an API driven platform, one that doesn't quite have a model anywhere else.
FamilySearch considers what they deliver to be a set of record custodian services:
- Image Capture - Obtaining a preservation quality image is often the most costly and time-consuming step for records custodians. Microfilm has been the standard, but digital is emerging. Whether you opt to do it yourself or use one of our worldwide camera teams, we can help.
- Online Indexing - Once an image is digitized, key data needs to be transcribed in order to produce a searchable index that patrons around the world can access. Our online indexing application harnesses volunteers from around the world to quickly and accurately create indexes.
- Digital Conversion - For those records custodians who already have a substantial collection of microfilm, we can help digitize those images and even provide digital image storage.
- Online Access - Whether your goal is to make your records freely available to the public or to help supplement your budget needs, we can help you get your records online. To minimize your costs and increase access for your users, we can host your indexes and records on FamilySearch.org, or we can provide tools and expertise that enable you to create your own hosted access.
- Preservation - Preservation copies of microfilm, microfiche, and digital records from over 100 countries and spanning hundreds of years are safely stored in the Granite Mountain Records Vault—a long-term storage facility designed for preservation.
FamilySearch provides a proven set of services that users can take advantage of via web applications, as well as iPhone and Android mobile apps, resulting in the online community they have built today. FamilySearch also goes beyond these basic web and mobile application services, elevating to the software as a service (SaaS) level with a pretty robust developer center and API stack.
FamilySearch provides the required first impression when you land in the FamilySearch developer center, quickly explaining what you can do with the API, "FamilySearch offers developers a way to integrate web, desktop, and mobile apps with its collaborative Family Tree and vast digital archive of records”, and immediately provides you with a getting started guide, and other supporting tutorials.
FamilySearch provides access to over 100 API resources across twenty separate groups: Authorities, Change History, Discovery, Discussions, Memories, Notes, Ordinances, Parents and Children, Pedigree, Person, Places, Records, Search and Match, Source Box, Sources, Spouses, User, Utilities, and Vocabularies, connecting you to the core FamilySearch genealogical engine.
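To make those resource groups a bit more concrete, here is a minimal sketch of how a developer might prepare a call to the Person resource. The endpoint path, media type, and person ID here are illustrative assumptions based on the public developer documentation; check the FamilySearch developer center for the current details.

```python
# Hypothetical sketch of preparing a FamilySearch Person lookup.
# The base URL, path, and media type are assumptions for illustration.
def person_request(person_id, access_token,
                   base="https://api.familysearch.org"):
    """Build the URL and headers for a single Person resource call."""
    url = f"{base}/platform/tree/persons/{person_id}"
    headers = {
        # OAuth 2.0 bearer token obtained during app authorization
        "Authorization": f"Bearer {access_token}",
        # FamilySearch serves its own JSON media type
        "Accept": "application/x-fs-v1+json",
    }
    return url, headers

url, headers = person_request("KWQS-BBQ", "example-token")
```

A developer would then hand the URL and headers to whatever HTTP client they prefer, keeping authorization concerns in one place.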
The FamilySearch developer area provides all the common, and even some forward leaning technical building blocks:
To support developers, FamilySearch provides a fairly standard support setup:
To augment support efforts there are also some other interesting building blocks:
Setting the stage for FamilySearch evolving into a platform, they even possess some of the necessary partner level building blocks:
There is even an application gallery showcasing what web, mac & windows desktop, and mobile applications developers have built. FamilySearch even encourages developers to “donate your software skills by participating in community projects and collaborating through the FamilySearch Developer Network”.
Many of the ingredients of a platform exist within the current FamilySearch developer hub, at least the technical elements, and some of the common business and political building blocks, but what is missing? This is what makes FamilySearch a compelling story, because it emphasizes one of the core elements of API Evangelist—that all of this API stuff only works when the right blend of technical, business, and politics exists.
Establishing A Rich Partnership Environment
FamilySearch has some strong partnerships that have helped establish FamilySearch as the genealogy service it is today. FamilySearch knows they wouldn't exist without the partnerships they've established, but how do you take it to the next level, and grow into a much larger, organic, API driven ecosystem where a long tail of genealogy businesses, professionals, and enthusiasts can build on, and contribute to, the FamilySearch platform?
FamilySearch knows the time has come to make the shift to being an open platform, but is not entirely sure what needs to happen to engage not just the core FamilySearch partners, but also establish a vibrant long tail of developers. A developer portal is not just a place where geeky coders come to find what they need; it is a hub where business development occurs at all levels, in both synchronous and asynchronous ways, in a 24/7 global environment.
FamilySearch acknowledges they have some issues when it comes to investing in API driven partnerships:
- “Platform” means their core set of large partners
- Not treating all partners like first class citizens
- Competing with some of their partners
- Don’t use their own API, creating a gap in perspective
FamilySearch knows if they can work out the right configuration, they can evolve FamilySearch from a digital genealogical web and mobile service to a genealogical platform. If they do this they can scale beyond what they've been able to do with a core set of partners, and crowdsource the mapping of the global family tree, allowing individuals to map their own family trees while also contributing to the larger global tree. With a proper API driven platform this process doesn't have to occur via the FamilySearch website and mobile app, it can happen in any web, desktop, or mobile application anywhere.
FamilySearch already has a pretty solid development team taking care of the tech of the FamilySearch API, and they have 20 people working internally to support partners. They have a handle on the tech of their API; they just need to get a handle on the business and politics of their API, and invest in the resources needed to help scale the FamilySearch API from being just a developer area, to being a growing genealogical developer community, to a full blown ecosystem that spans not just the FamilySearch developer portal, but thousands of other sites and applications around the globe.
A Good Dose Of API Evangelism To Shift Culture A Bit
A healthy API evangelism strategy brings together a mix of business, marketing, sales, and technology disciplines into a new approach to doing business for FamilySearch. If done right, it can open up FamilySearch to outside ideas, and with the right framework allow the platform to move beyond just certification and partnering, to investment in, and acquisition of, data, content, talent, applications, and partners via the FamilySearch developer platform.
Think of evangelism as the grease in the gears of the platform, allowing it to grow, expand, and handle a larger volume of outreach and support. API evangelism works to lubricate all aspects of platform operation.
First, let's kick off by setting some objectives for why we are doing this, and what we are trying to accomplish:
- Increase Number of Records - Increase the number of overall records in the FamilySearch database, contributing the larger goals of mapping the global family tree.
- Growth in New Users - Grow the number of new users who are building on the FamilySearch API, increasing the overall headcount for the platform.
- Growth In Active Apps - Increase not just new users but the number of actual apps being built and used, not just counting people kicking the tires.
- Growth in Existing User API Usage - Increase how existing users are putting the FamilySearch APIs to use. Educate about new features, increase adoption.
- Brand Awareness - One of the top reasons for designing, deploying, and managing an active API is to increase awareness of the FamilySearch brand.
- What else?
What does developer engagement look like for the FamilySearch platform?
- Active User Engagement - How do we reach out to existing, active users and find out what they need, and how do we profile them and continue to understand who they are and what they need. Is there a direct line to the CRM?
- Fresh Engagement - How is FamilySearch contacting newly registered developers each week to see what their immediate needs are, while their registration is fresh in their minds?
- Historical Engagement - How are historical active and / or inactive developers being engaged to better understand what their needs are and would make them active or increase activity.
- Social Engagement - Is FamilySearch profiling the URL, Twitter, Facebook, LinkedIn, and Github profiles of developers, and then actively engaging via these channels?
Establish a Developer Focused Blog For Storytelling
- Projects - There are over 390 active projects on the FamilySearch platform, plus any number of active web, desktop, and mobile applications. All of this activity should be regularly profiled as part of platform evangelism. An editorial assembly line of technical projects that can feed blog stories, how-tos, samples, and Github code libraries should be taking place, establishing a large volume of exhaust via the FamilySearch platform.
- Stories - FamilySearch is great at writing public, and partner facing content, but there is a need to be writing, editing and posting of stories derived from the technically focused projects, with SEO and API support by design.
- Syndication - Syndication to Tumblr, Blogger, Medium and other relevant blogging sites on regular basis with the best of the content.
Mapping Out The Genealogy Landscape
- Competition Monitoring - Evaluation of regular activity of competitors via their blog, Twitter, Github and beyond.
- Alpha Players - Who are the vocal people in the genealogy space with active Twitter, blogs, and Github accounts.
- Top Apps - What are the top applications in the space, whether built on the FamilySearch platform or not, and what do they do?
- Social - Mapping the social landscape for genealogy, who is who, and who should the platform be working with.
- Keywords - Establish a list of keywords to use when searching for topics on search engines, QA sites, forums, social bookmarking, and social networks. (should already be done by marketing folks)
- Cities & Regions - Target specific markets in cities that make sense to your evangelism strategy, what are local tech meet ups, what are the local organizations, schools, and other gatherings. Who are the tech ambassadors for FamilySearch in these spaces?
Adding To Feedback Loop From Forum Operations
- Stories - Deriving of stories for blog derived from forum activity, and the actual needs of developers.
- FAQ Feed - Is the FAQ being updated regularly with new material?
- Streams - Are there other streams giving the platform a heartbeat?
Being Social About Platform Code and Operations With Github
- Setup Github Account - Set up a FamilySearch platform developer account, and bring the internal development team in under a team umbrella as part of it.
- Github Relationships - Managing of followers, forks, downloads and other potential relationships via Github, which has grown beyond just code, and is social.
- Github Repositories - Managing of code sample Gists, official code libraries and any samples, starter kits or other code samples generated through projects.
Adding To The Feedback Loop From The Bigger FAQ Picture
- Quora - Regular trawling of Quora, and responding to relevant FamilySearch or industry related questions.
- Stack Exchange - Regular trawling of Stack Exchange / Stack Overflow, and responding to relevant FamilySearch or industry related questions.
- FAQ - Add questions from the bigger FAQ picture to the local FamilySearch FAQ for referencing locally.
Leverage Social Engagement And Bring In Developers Too
- Facebook - Consider setting up a new API specific Facebook company page. Posting of all API evangelism activities and management of friends.
- Google Plus - Consider setting up a new API specific Google+ company page. Posting of all API evangelism activities and management of friends.
- LinkedIn - Consider setting up of new API specific LinkedIn profile page who will follow developers and other relevant users for engagement. Posting of all API evangelism activities.
- Twitter - Consider setting up of new API specific Twitter account. Tweeting of all API evangelism activity, relevant industry landscape activity, discover new followers and engage with followers.
Sharing Bookmarks With the Social Space
- Hacker News - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Hacker News, to keep a fair and balanced profile, as well as network and user engagement.
- Product Hunt - Product Hunt is a place to share the latest tech creations, providing an excellent format for API providers to share details about their new API offerings.
- Reddit - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Reddit, to keep a fair and balanced profile, as well as network and user engagement.
Communicate Where The Roadmap Is Going
- Roadmap - Provide regular roadmap feedback based upon developer outreach and feedback.
- Changelog - Make sure the change log always reflects the roadmap communication or there could be backlash.
Establish A Presence At Events
- Conferences - What are the top conferences occurring that we can participate in or attend--pay attention to call for papers of relevant industry events.
- Hackathons - What hackathons are coming up in 30, 90, 120 days? Which should be sponsored, attended, etc.?
- Meetups - What are the best meetups in target cities? Are there different formats that would best meet our goals? Are there any sponsorship or speaking opportunities?
- Family History Centers - Are there local opportunities for the platform to hold training, workshops and other events at Family History Centers?
- Learning Centers - Are there local opportunities for the platform to hold training, workshops and other events at Learning Centers?
Measuring All Platform Efforts
- Activity By Group - Summary and highlights of weekly activity within each area of the API evangelism strategy.
- New Registrations - Historical and weekly accounting of new developer registrations across APIs.
- Volume of Calls - Historical and weekly accounting of API calls per API.
- Number of Apps - How many applications are there?
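The metrics listed above can be rolled up with a small amount of code. This is a minimal sketch, assuming each API call is logged as a (date, API name, developer ID) tuple; the log format is a hypothetical, not FamilySearch's actual analytics pipeline.

```python
from collections import Counter
from datetime import date

def weekly_summary(call_log):
    """Summarize platform activity from a list of
    (iso_date_string, api_name, developer_id) call records."""
    # Volume of calls, broken down per API
    calls_per_api = Counter(api for _, api, _ in call_log)
    # Distinct developers seen in the period
    active_devs = {dev for _, _, dev in call_log}
    # ISO (year, week) pairs covered by the log
    weeks = {date.fromisoformat(d).isocalendar()[:2] for d, _, _ in call_log}
    return {
        "calls_per_api": dict(calls_per_api),
        "active_developers": len(active_devs),
        "weeks_covered": len(weeks),
    }

log = [
    ("2014-09-01", "Person", "dev1"),
    ("2014-09-02", "Person", "dev2"),
    ("2014-09-02", "Places", "dev1"),
]
summary = weekly_summary(log)
```

Even a rollup this simple gives the weekly heartbeat that evangelism reporting needs; a real implementation would read from the API management layer's logs.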
Essential Internal Evangelism Activities
- Storytelling - Telling stories of an API isn’t just something you do externally, what stories need to be told internally to make sure an API initiative is successful.
- Conversations - Incite internal conversations about the FamilySearch platform. Hold brown bag lunches if you need to, or internal hackathons to get them involved.
- Participation - It is very healthy to include other people from across the company in API operations. How can we include people from other teams in API evangelism efforts. Bring them to events, conferences and potentially expose them to local, platform focused events.
- Reporting - Sometimes providing regular numbers and reports to key players internally can help keep operations running smooth. What reports can we produce? Make them meaningful.
All of this evangelism starts with a very external focus, which is a hallmark of API and developer evangelism efforts, but if you notice, by the end we bring it home to the most important aspect of platform evangelism: the internal outreach. The number one reason APIs fail is a lack of internal evangelism: failing to educate top and mid-level management as well as lower level staff, to get buy-in and direct hands-on involvement with the platform, and to justify the budget costs for the resources needed to make a platform successful.
Top-Down Change At FamilySearch
The change FamilySearch is looking for already has top level management buy-in; the problem is that the vision is not in lockstep with actual platform operations. When regular projects developed via the FamilySearch platform are showcased to top level executives, and stories consistent with platform operations are told, management will echo what is actually happening via the FamilySearch platform. This will provide a much deeper, ongoing message for the rest of the company, and partners, about what the priorities of the platform are, making it not just a meaningless top down mandate.
An example of this in action is the recent mandate from President Obama that all federal agencies should go "machine readable by default", which includes using APIs and open data outputs like JSON, instead of document formats like PDF. This top down mandate makes for a good PR soundbite, but in reality has little effect on the ground at federal agencies. It has taken two years of hard work on the ground, at each agency, between agencies, and with the public, to even begin to make this mandate a truth at over 20 of the federal government agencies.
Top down change is a piece of the overall platform evolution at FamilySearch, but is only a piece. Without proper bottom-up, and outside-in change, FamilySearch will never evolve beyond just being a genealogical software as a service with an interesting API. It takes much more than leadership to make a platform.
Bottom-Up Change At FamilySearch
One of the most influential aspects of APIs I have seen at companies, institutions, and agencies is the change in culture brought about when APIs move beyond just a technical IT effort and become about making resources available across an organization, enabling people to do their jobs better. Without awareness, buy-in, and in some cases an evangelist conversion, a large organization will not be able to move from a service orientation to a platform way of thinking.
If a company as a whole is unaware of APIs, both within the organization and out in the larger world of popular platforms like Twitter, Instagram, and others, it is extremely unlikely they will endorse, let alone participate in, moving from being a digital service to a platform. Employees need to see the benefits of a platform to their everyday jobs, and their involvement cannot require what they would perceive as extra work to accomplish platform related duties. FamilySearch employees need to see the benefits the platform brings to the overall mission, and play a role in making this happen, even if it originates from a top-down mandate.
Top bookseller Amazon was already on the path to being a platform with their set of commerce APIs when, after a top down mandate from CEO Jeff Bezos, Amazon internalized APIs in such a way that the entire company interacted, and exchanged resources, using web APIs, resulting in one of the most successful API platforms: Amazon Web Services (AWS). Bezos mandated that if an Amazon department needed to procure a resource from another department, like server or storage space from IT, it needed to happen via APIs. This wasn't a meaningless top-down mandate; it made employees' lives easier, and ultimately made the entire company more nimble and agile, while also saving time and money. Without buy-in and execution from Amazon employees, what we know as the cloud would never have occurred.
Change at large enterprises, organizations, institutions, and agencies can be expedited with the right top-down leadership, and with the right platform evangelism strategy, one that includes internal stakeholders not just as targets of outreach efforts but also includes them in operations, it can result in sweeping, transformational changes. This type of change at a single organization can affect how an entire industry operates, similar to what we've seen from the ultimate API platform pioneer, Amazon.
Outside-In Change At FamilySearch
The final layer of change that needs to occur to bring FamilySearch from being just a service to a true platform is opening up the channels to outside influence, not just when it comes to platform operations, but organizational operations as well. The bar is high at FamilySearch. The quality of services, the expectation of the process, and adherence to the mission are strong, but if you are truly dedicated to providing a database of all mankind, you are going to have to let mankind in a little bit.
FamilySearch is still the keeper of knowledge, but to become a platform you have to let in the possibility that outside ideas, processes, and applications can bring value to the organization, as well as to the wider genealogical community. You have to evolve beyond the notion that the best ideas come from inside the organization, or just from the leading partners in the space. There are opportunities for innovation and transformation in the long-tail stream, but you have to have a platform set up to encourage, participate in, and be able to identify value in the long tail of an API platform.
Twitter is one of the best examples of how any platform will have to let in outside ideas, applications, companies, and individuals. Much of what we consider to be Twitter today was built in the platform ecosystem, from the iPhone and Android apps, to the desktop app TweetDeck, to terminology like the #hashtag. Over the last 5 years, Twitter has worked hard to find the optimal platform balance, regarding how they educate, communicate, invest, acquire, and incentivize their platform ecosystem. Letting in outside ideas goes well beyond the fact that Twitter is a publicly available social platform; with such a large base of API developers it is impossible to let in all ideas, but through a sophisticated evangelism strategy of in-person and online channels, in 2014 Twitter has managed to find a balance that is working well.
Having a public facing platform doesn't mean the floodgates are open for ideas and thoughts to just flow in; this is where service composition, and the certification and partner framework for FamilySearch, will come in. Through clear, transparent partner tiers, and open and transparent operations and communications, an optimal flow of outside ideas, applications, companies, and individuals can be established, enabling a healthy, sustainable amount of change from the outside world.
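Service composition of this kind is usually expressed as a set of access tiers enforced at the API management layer. Here is a minimal sketch; the tier names and limits are assumptions for illustration, not FamilySearch's actual plans.

```python
# Hypothetical partner tiers, expressing service composition as data.
# Names and limits are illustrative assumptions only.
TIERS = {
    "public":    {"rate_limit_per_hour": 500,   "write_access": False},
    "certified": {"rate_limit_per_hour": 5000,  "write_access": True},
    "partner":   {"rate_limit_per_hour": 50000, "write_access": True},
}

def allow_request(tier, calls_this_hour, is_write):
    """Gate an API request based on the consumer's tier."""
    plan = TIERS[tier]
    # Writes are reserved for certified and partner tiers
    if is_write and not plan["write_access"]:
        return False
    # All tiers are capped by an hourly rate limit
    return calls_this_hour < plan["rate_limit_per_hour"]
```

Keeping the tiers as plain data makes the composition transparent: the same table that gates requests can be published on the partner page, so developers know exactly what certification buys them.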
Knowing All Of Your Platform Partners
The hallmark of any mature online platform is a well established partner ecosystem. If you've made the transition from service to platform, you've established a pretty robust approach to not just certifying and onboarding your partners, you have also stepped it up in knowing and understanding who they are and what their needs are, and investing in them throughout the lifecycle.
First off, profile everyone who comes through the front door of the platform. If they sign up for a public API key, who are they, and where do they potentially fit into your overall strategy? Don't be pushy, but work to understand who they are and what they might be looking for, and make sure you have a well defined track for this type of user.
Next, qualify and certify as you have been doing. Make sure the process is well documented, but also transparent, allowing companies and individuals to quickly understand what it will take to get certified, what the benefits are, and see examples of other partners who have achieved this status. As a developer building a genealogical mobile app, I need to know what I can expect, and have some incentive for investing in the certification process.
Keep your friends close, and your competition closer. Open the door wide for your competition to become platform users, and potentially partners. 100+ year old technology company Johnson Controls (JCI) was concerned about what the competition might do if they opened up their building efficiency data resources to the public via the Panoptix API platform, but after it launched, they realized their competitors were now their customers, and partners in this new approach to doing business online for JCI.
When the Department of Energy decides what data and other resources it makes available via Data.gov or the agency's developer program, it has to deeply consider how this could affect U.S. industries. The resources the federal agency possesses can be pretty high value, and bring huge benefits to the private sector, but in some cases opening up APIs, or limiting access to APIs, could help or hurt the larger economy, as well as the Department of Energy developer ecosystem. There are lots of considerations when opening up API resources, and they vary from industry to industry.
There are no silver bullets when it comes to API design, deployment, management, and evangelism. It takes a lot of hard work, communication, and iterating before you strike the right balance of operations, and every business sector will be different. Without knowing who your platform users are, and being able to establish a clear and transparent road for them to follow to achieve partner status, FamilySearch will never elevate to a true platform. How can you scale the trusted layers of your platform, if your partner framework isn’t well documented, open, transparent, and well executed? It just can’t be done.
Meaningful Monetization For Platform
All of this will take money to make happen. Designing and executing on the technical and evangelism aspects I'm laying out will cost a lot of money, and on the consumer side, it will take money to design, develop, and manage desktop, web, and mobile applications built around the FamilySearch platform. How will both the FamilySearch platform and its participants make ends meet?
This conversation is a hard one for startups and established businesses, let alone when you are a non-profit, mission driven organization. Internal developers cost money; servers and bandwidth are getting cheaper, but are still a significant platform cost; and sustaining sales, bizdev, and evangelism will not be cheap either. It takes money to properly deliver resources via APIs, and even if the lowest tiers of access are free, at some point consumers are going to have to pay for access, resources, and advanced features.
The conversation around how you monetize API driven resources is going on across government, from cities up to the federal government, where the thought of charging for access to public data is unheard of. These are public assets, and they should be freely available. While this is true, think of the same situation when it comes to physical public assets that are owned by the government, like parks. You can freely enjoy many city, county, and federal parks, and there are sometimes small fees for usage, but if you want to actually sell something in a public park, you will need to buy permits, and often share revenue with the managing agency. We have to think critically about how we fund the publishing and refinement of publicly owned digital assets; as with physical assets, there will be much debate in coming years around what is acceptable, and what is not.
Woven into the tiers of partner access, there should always be provisions for applying costs, overhead, and even the generation of a little revenue to be applied in other ways. With great power comes great responsibility, and along with great access for FamilySearch partners, many will also be required to cover the costs of compute capacity, storage, and the other hard facts of delivering a scalable platform around any valuable digital asset, whether it is privately or publicly held.
Platform monetization doesn't end with covering the costs of platform operation. Consumers of FamilySearch APIs will need assistance in identifying the best ways to cover their own costs as well. Running a successful desktop, web, or mobile application takes discipline, structure, and the ability to manage overhead costs, while also being able to generate some revenue through a clear business model. As a platform, FamilySearch will have to bring some monetization opportunities to the table for consumers, providing guidance as part of the certification process on best practices for monetization, and even some direct opportunities for advertising, in-app purchases, and other common approaches to application monetization and sustainment.
Without revenue greasing the gears, no service can achieve platform status. As with all other aspects of platform operations, the conversation around monetization cannot be one-sided, and just about the needs of the platform provider. Proactive steps need to be taken to ensure both the platform provider and its consumers are monetized in the healthiest way possible, bringing as much benefit as possible to the overall platform community.
Open & Transparent Operations & Communications
How does all of this talk of platform and evangelism actually happen? It takes a whole lot of open, transparent communication across the board. Right now the only active part of the platform is the FamilySearch Developer Google Group; beyond that you don’t see any activity that is platform specific. There are active Twitter, Facebook, Google+, and mainstream and affiliate focused blogs, but nothing that serves the platform, or contributes to the feedback loop that will be necessary to take the service to the next level.
On a public platform, communications cannot all be private emails, phone calls, or face to face meetings. One of the things that allows an online service to expand to become a platform, then scale and grow into a robust, vibrant, and active community, is a stream of public communications, which includes blogs, forums, social streams, images, and video content. These communication channels cannot all be one way, meaning they need to include forum and social conversations, as well as showcase platform activity by API consumers.
Platform communications isn’t just about getting direct messages answered, it is about public conversation so everyone shares in the answer, and public storytelling to help guide and lead the platform. Together with support via multiple channels, this establishes a feedback loop that, when done right, will keep growing, expanding, and driving healthy growth. The transparent nature of platform feedback loops is essential to providing everything the consumers will need, while also bringing a fresh flow of ideas and insight within the FamilySearch firewall.
Truly Shifting The FamilySearch Culture
Top-down, bottom-up, outside-in, with a constant flow of oxygen via a vibrant, flowing feedback loop, and the nourishing, sanitizing sunlight of platform transparency, where week by week, month by month, some change can occur. It won’t all be good, there are plenty of problems that arise in ecosystem operations, but all of this has the potential to slowly shift culture when done right.
One thing that shows me the team over at FamilySearch has what it takes is that when I asked if I could write this up as a story, rather than just a proposal I email them, they said yes. This is a true test of whether or not an organization might have what it takes. If you are unwilling to be transparent about the problems you currently have, and the work that goes into your strategy, it is unlikely you will have what it takes to establish the amount of transparency required for a platform to be successful.
When internal staff, large external partners, and long tail genealogical app developers and enthusiasts are in sync via a FamilySearch platform driven ecosystem, I think we can consider that a shift to platform has occurred for FamilySearch. The real question is: how do we get there?
Executing On Evangelism
This is not a definitive proposal for executing on an API evangelism strategy, merely a blueprint for the seed that can be used to start a slow, seismic shift in how FamilySearch engages its API area, in a way that will slowly evolve it into a community, one that includes internal, partner, and public developers. Some day, with the right set of circumstances, FamilySearch could grow into a robust, social, genealogical ecosystem where everyone comes to access, and participate in, the mapping of mankind.
- Defining Current Platform - Where are we now? In detail.
- Mapping the Landscape - What does the world of genealogy look like?
- Identifying Projects - What are the existing projects being developed via the platform?
- Define an API Evangelist Strategy - Actually fleshing out a detailed strategy.
- External Public
- External Partner
- Internal Stakeholder
- Internal Company-Wide
- Identify Resources - What resource currently exist? What are needed?
- Content / Storytelling
- Execute - What does execution of an API evangelist strategy look like?
- Iterate - What does iteration look like for an API evangelism strategy?
As with many providers, you don’t want this to take 5 years, so how do you take a 3-5 year cycle, and execute in 12-18 months?
- Invest In Evangelist Resources - It takes a team of evangelists to build a platform
- External Facing
- Partner Facing
- Internal Facing
- Development Resources - We need to step up the number of resources available for platform integration.
- Code Samples & SDKs
- Embeddable Tools
- Content Resources - A steady stream of content should be flowing out of the platform, and syndicated everywhere.
- Short Form (Blog)
- Long Form (White Paper & Case Study)
- Event Budget - FamilySearch needs to be everywhere, so people know that it exists. It can’t just be online.
There is nothing easy about this. It takes time, and resources, and there are only so many elements you can automate when it comes to API evangelism. For something that is very programmatic, it takes more of the human variable to make the API driven platform algorithm work. With that said it is possible to scale some aspects, and increase the awareness, presence, and effectiveness of FamilySearch platform efforts, which is really what is currently missing.
While as the API Evangelist, I cannot personally execute on every aspect of an API evangelism strategy for FamilySearch, I can provide essential planning expertise for the overall FamilySearch API strategy, as well as provide regular check-ins with the team on how things are going, and help plan the roadmap. The two things I can bring to the table that are reflected in this proposal are an understanding of where the FamilySearch API effort currently is, and what is missing to help get FamilySearch to the next stage of its platform evolution.
When operating within the corporate or organizational silo, it can be very easy to lose sight of how other organizations and companies are approaching their API strategy, and miss important pieces of how you need to shift your own. This is one of the biggest inhibitors of API efforts at large organizations, and is one of the biggest imperatives for companies to invest in their API strategy, and begin the process of breaking operations out of their silo.
What FamilySearch is facing demonstrates that APIs are much more than the technical endpoint that most believe, it takes many other business, and political building blocks to truly go from API to platform.
One of the byproducts of the Oracle vs Google API copyright case was a realization that many API providers and consumers do not understand the layers of the API stack, let alone the potential licensing considerations for each layer of the API onion. I wouldn't just blame API providers and consumers; I’m still getting a grasp on all of this myself, which is why I'm blogging about the subject.
Let’s take a quick crack at defining the layers to the potential API licensing onion:
- Data - What is the licensing for the actual data returned and collected by an API? I’m still learning about the ways to license your data, and the Open Data Commons provides some guidance in this area, while others feel that your data can just as easily be licensed using Creative Commons licensing.
- Data Model - The separation between data and its data model can be hard to see, where in reality end-users may own their data, but the order and structure of it can be owned by the originating application. If this layer is a concern, Creative Commons, or other copyright options should be considered.
- Server - Server side API code is the core of any API operations, and should be licensed accordingly using common open source software licenses when appropriate. However I would add that this is the one layer in the API stack where a proprietary licensing could make sense, but the rest of the stack should always be licensed as open as possible.
- Interface - The separation between the API and its surface area is important, and is something that is just becoming relevant with the Oracle v Google copyright case. Understanding the importance of an openly licensed surface area is essential to the health of any API stack, and it should be kept as open as possible, even when associated with a proprietary API backend. This is why we started API Commons, to help API providers take a stance on the license of the surface area of their API stack.
- Clients - As with server side code, you should apply common open source software licensing to client code when appropriate, but your client code samples, libraries and SDKs should never be licensed in a proprietary way, encouraging implementation for any commercial integration.
In this scenario we are talking about a purely data API; if you are serving up some sort of programmatic resource, things might be different. My goal is just to try and understand the separation between the API layers, and apply some thoughts on how licenses can be applied to open, or possibly close, access and constrain an API's operation.
Much like other political building blocks of the API ecosystem, licensing in the API stack can be good, bad, or neutral. In the end, regardless of your stance, I think it is important to have an open conversation about how our API stack is licensed, so that your consumers can better understand what they are in for.
A free, open-source, API driven conference solution called Voice Chat API popped up on my API monitoring radar today, as I was going through my feeds. The Voice Chat API is a very cool, dead-simple conferencing solution. As a tool it provides clear value, and I really like the approach from Plivo to rollout out an open, API driven resource like this—a model that could be applied to other valuable resources.
What really stands out is the Voice Chat API does one thing and does it well—audio conferences. It is easy to tell what it does. We aren't having to convince users of a problem, then sell them on a solution. The problem is clear, the solution is simple.
The Voice Chat API is open source and available on Github, built using "Plivo WebSDK and APIs". I haven’t investigated the separation between what code is open source, and where the dependencies on Plivo are, but regardless, the approach is interesting.
Providing users with more ways to deploy a conference, the Voice Chat API provides a set of add-ons including a Hubot Plugin, that works in Campfire or Hipchat, a Chrome extension, and a bookmarklet for deploying a conference from any other browser.
The Voice Chat API centers around its API, which allows developers to create a unique audio conference, then call mobile & landline phone numbers (PSTN) into the bridge. The API deploys to your Heroku account, allowing you to manage your ad-hoc audio conference deployments in the cloud.
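As a rough illustration of what creating an ad-hoc conference might look like, here is a sketch in Python. The endpoint path, payload fields, and base URL here are my own assumptions for illustration, not the documented Voice Chat API interface, so check the project on Github before using anything like this:

```python
import json
import urllib.request

# Hypothetical base URL pointing at your own Heroku deployment of the app.
BASE_URL = "https://your-voice-chat-app.herokuapp.com"

def build_conference_request(name, phone_numbers):
    """Construct (but do not send) a request to create an audio conference."""
    payload = {
        "conference_name": name,      # unique name for the audio bridge
        "dial_out": phone_numbers,    # PSTN numbers to call into the bridge
    }
    return urllib.request.Request(
        url=BASE_URL + "/conference",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_conference_request("standup", ["+14155550100"])
print(req.full_url, req.method)
```

Sending the request (with `urllib.request.urlopen`) would then kick off the dial-out to each number in the list.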
I’m still playing around with the Voice Chat API, understanding how the application and the API work, as well as considering how this new open source approach dovetails with Plivo's business model, but I’m intrigued by the approach, and how they crafted this API driven resource.
Salesforce has a pretty cool Code Share area within the DeveloperForce ecosystem, which allows developers to share code snippets with the rest of the community.
It's a pretty cool way for anyone to share their techniques as code samples in a variety of languages, then let the community vet the code, fork larger projects, and collaborate to improve the code for the greater good.
Acknowledging that they don't have the internal resources to fully support the process, Salesforce has halted code sample submissions, announcing plans to migrate the Code Sharing program to Github.
Github is already the largest code sharing platform, providing social tools that developers are used to. Salesforce's public acknowledgement that they don't have the internal resources to operate the program, the fact that they recognize its importance, and the decision to migrate to Github are all very savvy moves by the API pioneer.
Github is one of the most important platforms you can use to support your API. Using Github to host all your API code samples and SDKs, whether they are generated internally or by your developer community, is no longer a novelty, but an essential part of API operations.
One great use of Github is as an API issue management tool. The Github issue management system allows you to easily accept issue reports from your API community and apply labels to organize them into appropriate categories, for processing by your development team.
To set up Github issue management for your API, just create a new repository, but you won't actually be pushing any code; you will just be using it as a container for running issue management. Think of it as a repository for your API itself.
Once setup, you can link the issue management page directly from your API area, allowing users to actively submit issues, comment and potentially be part of the API product development cycle.
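Issues can also be filed programmatically against that container repository, following the shape of the GitHub REST API. The owner, repo, token, and label names below are placeholders of my own, shown here as a sketch rather than a definitive integration:

```python
import json
import urllib.request

def build_issue_request(owner, repo, token, title, body, labels):
    """Construct (but do not send) a GitHub issue-creation request."""
    payload = {"title": title, "body": body, "labels": labels}
    return urllib.request.Request(
        url=f"https://api.github.com/repos/{owner}/{repo}/issues",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"token {token}",   # a personal access token
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

# Hypothetical repo used purely as an issue-management container for the API.
req = build_issue_request(
    "example-org", "api-issues", "YOUR_TOKEN",
    "GET /users returns 500 on empty query",
    "Steps to reproduce: ...",
    ["bug", "endpoint:users"],  # labels route the report to the right team
)
print(req.full_url)
```

The labels applied at creation time are what let your development team slice the incoming reports into the categories described above.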
There is a lot of discussion around the growth of APIs, and what the future will look like. How will we discover and make sense of the number of available APIs, and quickly get to work integrating with the APIs that bring the most value to our apps and businesses.
One technology that comes up in every conversation I’ve had is Swagger. What is Swagger?
Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services.
The goal of Swagger is to enable client and documentation systems to update at the same pace as the server. The documentation of methods, parameters and models are tightly integrated into the server code, allowing APIs to always stay in sync.
Swagger was born out of initiatives from Wordnik, developed for Wordnik’s own use during the development of developer.wordnik.com. Swagger development began in early 2010 and the framework being released is used by Wordnik’s APIs, which power both internal and external API clients.
Swagger provides a declarative resource specification, allowing users to understand and consume services without knowledge of server implementation, enabling both developers and non-developers to interact with the API, providing clear insight into how the API responds to parameters and options.
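To make that "declarative resource specification" concrete, here is a simplified, abbreviated resource declaration expressed as a Python dict. The field names follow the early Swagger spec, but this is an illustrative sketch of the general shape, not a complete or validated document; the example path and nickname are my own:

```python
import json

# A pared-down Swagger resource declaration describing one GET operation.
declaration = {
    "apiVersion": "1.0",
    "swaggerVersion": "1.1",
    "basePath": "https://api.wordnik.com/v4",
    "apis": [
        {
            "path": "/word.json/{word}",
            "operations": [
                {
                    "httpMethod": "GET",
                    "nickname": "getWord",
                    "summary": "Look up a word",
                    "parameters": [
                        {
                            "name": "word",
                            "paramType": "path",
                            "dataType": "string",
                            "required": True,
                        }
                    ],
                }
            ],
        }
    ],
}

print(json.dumps(declaration, indent=2))
```

Because the declaration is machine readable, the same document can drive interactive documentation, client code generation, and discovery, which is the sync between server and docs that Swagger is after.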
I’m familiarizing myself with the specification more and playing with the various tools they provide:
- Swagger Core - Defines Java annotations and required logic to generate a Swagger server or client.
- Swagger CodeGen - Contains a template-driven engine to generate client code in different languages by parsing your Swagger Resource Declaration.
- Swagger Scala Sample App - A fully-functioning, stand-alone Swagger server written in Scala which demonstrates how to enable Swagger in your API.
- Swagger Java Sample App - A fully-functioning, stand-alone Swagger server written in Java which demonstrates how to enable Swagger in your API.
I understand that Swagger is not the one specification to rule all APIs, and it won’t make all religious API fanatics happy. But I want to start somewhere. I see three main benefits for API owners coming from adopting Swagger:
- Automated, consistent generation of clean, beautiful, interactive API documentation
- Generation of client code and SDK in multiple languages
- Feeding into an industry wide API discovery language that both developers and non-developers can use
I believe strongly that consistent documentation and code samples ensure an API will get used, but as the number of APIs grows, a system like Google API Discovery Service, will be essential for API adoption across industries and around the globe. I’m hoping to learn more about Swagger, and see if it can help deliver on this vision.
If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.