Programmable Web

Published: 13/08/2017

7 Rules to Follow for REST API URI Design

REST APIs use URIs to address resources. While they’re known as opaque identifiers (meaning you shouldn’t read too much into them), there are better and worse ways to write URIs. Guy Levin over at RestCase has formulated a set of design rules for API URIs that you should keep in mind to make things easy for your API clients.

read more at Programmable Web
Programmable Web

Published: 11/08/2017

How Cybercriminals Take Advantage of Chat APIs and What To Do About It

Cybersecurity solution provider Trend Micro has issued a report that highlights how chat platform APIs can and are being used by cybercriminals to achieve their nefarious objectives. Because of the degree to which Webhook APIs are involved (an API attack vector not previously discussed on ProgrammableWeb), the warnings and incidents should serve as a wake-up call to API providers and developers when it comes to the sorts of best practices and ongoing vigilance it takes to fully secure their customers and systems.

read more at Programmable Web
Programmable Web

Published: 11/08/2017

Want to Take Bitcoin and other Cryptocurrencies as Payment? There's an API for that

Overstock has announced the integration of its online marketplace with ShapeShift to offer customers use of all major cryptocurrencies. ShapeShift is a cryptocurrency exchange platform that allows users to swap between leading blockchain assets.

read more at Programmable Web
Programmable Web

Published: 11/08/2017

CloudRail's Universal API Now Supports Xamarin

CloudRail, an API integration provider, has released a version of its unified interface product for Xamarin, a popular web and mobile app building platform. Xamarin pitches itself as a user-friendly platform for building cross-platform applications. CloudRail positions itself as a single API/SDK provider that enables users to access a multitude of other popular APIs (e.g.

read more at Programmable Web
Api Evangelist

Published: 11/08/2017

Link Relation Types for APIs

I have been reading through a number of specifications lately, trying to get more up to speed on what standards are available for me to choose from when designing APIs. Next up on my list is Link Relation Types for Web Services, by Erik Wilde. I wanted to take this informational specification and repost it here on my site, partially because I find it easier to read, and because the process of breaking things down and publishing them as posts helps me digest the specification and absorb more of what it contains.

I’m particularly interested in this one, because Erik captures what I’ve had in my head for APIs.json property types, but haven’t always been able to articulate as well as Erik does, let alone publish as an official specification. I think his argument captures the challenge we face with mapping out the structure we have, and how we can balance the web with the API, making sure as much of it becomes machine readable as possible. I’ve grabbed the meat of Link Relation Types for Web Services and pasted it here, so I can break it down and reference it across my storytelling.


  1. Introduction
    One of the defining aspects of the Web is that it is possible to interact with Web resources without any prior knowledge of the specifics of the resource. Following Web Architecture by using URIs, HTTP, and media types, the Web’s uniform interface allows interactions with resources without the more complex binding procedures of other approaches.

Many resources on the Web are provided as part of a set of resources that are referred to as a “Web Service” or a “Web API”. In many cases, these services or APIs are defined and managed as a whole, and it may be desirable for clients to be able to discover this service information.

Service information can be broadly separated into two categories: One category is primarily targeted for human users and often uses generic representations for human readable documents, such as HTML or PDF. The other category is structured information that follows some more formalized description model, and is primarily intended for consumption by machines, for example for tools and code libraries.

In the context of this memo, the human-oriented variant is referred to as “documentation”, and the machine-oriented variant is referred to as “description”.

These two categories are not necessarily mutually exclusive, as there are representations that have been proposed that are intended for both human consumption, and for interpretation by machine clients. In addition, a typical pattern for service documentation/description is that there is human-oriented high-level documentation that is intended to put a service in context and explain the general model, which is complemented by a machine-level description that is intended as a detailed technical description of the service. These two resources could be interlinked, but since they are intended for different audiences, it can make sense to provide entry points for both of them.

This memo places no constraints on the specific representations used for either of those two categories. It simply allows providers of a Web service to make the documentation and/or the description of their services discoverable, and defines two link relations that serve that purpose.

In addition, this memo defines a link relation that allows providers of a Web service to link to a resource that represents status information about the service. This information often represents operational information that allows service consumers to retrieve information about “service health” and related issues.

  2. Terminology
    The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 [RFC2119].

  3. Web Services
    “Web Services” or “Web APIs” (sometimes also referred to as “HTTP API” or “REST API”) are a way to expose information and services on the Web. Following the principles of Web architecture, they expose URI-identified resources, which are then accessed and transferred using a specific representation. Many services use representations that contain links, and often these links are typed links.

Using typed links, resources can identify relationship types to other resources. RFC 5988 [RFC5988] establishes a framework of registered link relation types, which are identified by simple strings and registered in an IANA registry. Any resource that supports typed links according to RFC 5988 can then use these identifiers to represent resource relationships on the Web without having to re-invent registered relation types.

In recent years, Web services as well as their documentation and description languages have gained popularity, due to the general popularity of the Web as a platform for providing information and services. However, the design of documentation and description languages varies with a number of factors, such as the general application domain, the preferred application data model, and the preferred approach for exposing services.

This specification allows service providers to use a unified way to link to service documentation and/or description. This link should not make any assumptions about the provided type of documentation and/or description, so that service providers can choose the ones that best fit their services and needs.

3.1. Documenting Web Services
In the context of this specification, “documentation” refers to information that is primarily intended for human consumption. Typical representations for this kind of documentation are HTML and PDF.

Documentation is often structured, but the exact kind of structure depends on the structure of the service that is documented, as well as on the specific way in which the documentation authors choose to document it.

3.2. Describing Web Services
In the context of this specification, “description” refers to information that is primarily intended for machine consumption. Typical representations for this are dictated by the technology underlying the service itself, which means that in today’s technology landscape, description formats exist that are based on XML, JSON, RDF, and a variety of other structured data models. Also, in each of those technologies, there may be a variety of languages that are defined to achieve the same general purpose of describing a Web service.

Descriptions are always structured, but the structuring principles depend on the nature of the described service. For example, one of the earlier service description approaches, the Web Services Description Language (WSDL), uses “operations” as its core concept, which are essentially identical to function calls, because the underlying model is based on that of the Remote Procedure Call (RPC) model. Other description languages for non-RPC approaches to services will use different structuring approaches.

3.3. Unified Documentation/Description
If service providers use an approach where there is no distinction between service documentation (Section 3.1) and service description (Section 3.2), then they may not feel the need to use two separate links. In such a case, an alternative approach is to use the “service” link relation type, which has no indication of whether it links to documentation or description, and thus may be a better fit if no such differentiation is required.

  4. Link Relations for Web Services
    In order to allow Web services to represent the relation of individual resources to service documentation or description, this specification introduces and registers two new link relation types.

4.1. The service-doc Link Relation Type
The “service-doc” link relation type is used to represent the fact that a resource is part of a bigger set of resources that are documented at a specific URI. The target resource is expected to provide documentation that is primarily intended for human consumption.

4.2. The service-desc Link Relation Type
The “service-desc” link relation type is used to represent the fact that a resource is part of a bigger set of resources that are described at a specific URI. The target resource is expected to provide a service description that is primarily intended for machine consumption. In many cases, it is provided in a representation that is consumed by tools, code libraries, or similar components.

  5. Web Service Status Resources
    Web services providing access to a set of resources often are hosted and operated in an environment for which status information may be available. This information may be as simple as confirming that a service is operational, or may provide additional information about different aspects of a service, and/or a history of status information, possibly listing incidents and their resolution.

The “status” link relation type can be used to link to such a status resource, allowing service consumers to retrieve status information about a Web service. Such a link may not be available from all resources provided by a Web service, but from key resources such as a Web service’s home resource.

This memo does not restrict the representation of a status resource in any way. It may be primarily focused on human or machine consumption, or a combination of both. It may be a simple “traffic light” indicator for service health, or a more sophisticated representation conveying more detailed information such as service subsystems and/or a status history.

  6. IANA Considerations
    The link relation types below have been registered by IANA per Section 6.2.1 of RFC 5988 [RFC5988]:

6.1. Link Relation Type: service-doc

Relation Name: service-doc
Description: Linking to service documentation that is primarily intended for human consumption.
Reference: [[ This document ]]

6.2. Link Relation Type: service-desc

Relation Name: service-desc
Description: Linking to service description that is primarily intended for consumption by machines.
Reference: [[ This document ]]

6.3. Link Relation Type: status

Relation Name: status
Description: Linking to a resource that represents the status of a Web service or API.
Reference: [[ This document ]]
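
To make the three relation types concrete, here is a minimal sketch in Python (the example.com URIs are hypothetical) of how a service’s home resource might advertise its documentation, description, and status resources in an RFC 5988 style Link header:

def build_link_header(doc_url, desc_url, status_url):
    """Assemble a Link header value advertising the three relation types above."""
    links = [
        f'<{doc_url}>; rel="service-doc"',    # human-oriented documentation
        f'<{desc_url}>; rel="service-desc"',  # machine-oriented description
        f'<{status_url}>; rel="status"',      # operational status resource
    ]
    return ", ".join(links)

# Hypothetical URIs, for illustration only.
print("Link: " + build_link_header(
    "https://api.example.com/docs",
    "https://api.example.com/openapi.json",
    "https://api.example.com/status"))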


Adding Some Of My Own Thoughts Beyond The Specification

This specification provides a more coherent service-doc and service-desc than I think we did with humanURL, and support for multiple API definition formats (swagger, api blueprint, raml) as properties for any API. This specification provides a clear solution for human consumption, as well as one intended for consumption by machines. Another interesting link relation it provides is status, helping articulate the current state of an API.

It makes me happy to see this specification pushing forward and formalizing the conversation. I see the evolution of link relations for APIs as an important part of the API discovery and definition conversations in coming years. Processing this specification has helped jumpstart some conversation around APIs.json, as well as other specifications like JSON Home and Pivio.

Thanks for letting me build on your work, Erik! I am looking forward to contributing.

read more at Api Evangelist
Api Evangelist

Published: 11/08/2017

About api.data.gov

I’m going to borrow, modify, and improve on the content from api.data.gov, because it is an important effort I want my readers to be aware of. I want more of them to help educate other federal agencies about why it is a good idea to bake api.data.gov into their API operations, and to help apply pressure until EVERY federal agency is up and running using a common API management layer.

Ok, so what is api.data.gov? api.data.gov is a free API management service for federal agencies. Our aim is to make it easier for you to release and manage your APIs. api.data.gov acts as a layer above your existing APIs. It transparently adds extra functionality to your APIs and helps deal with some of the repetitive parts of managing APIs.

Here are the features of api.data.gov:

  • You’re in control: You still have complete control of building and hosting your APIs however you like.
  • No changes required: No changes are required to your API, but when it’s accessed through api.data.gov, we’ll transparently add features and handle the boring stuff.
  • Focus on the APIs: You’re freed from worrying about things like API keys, rate limiting, and gathering usage stats, so you can focus on building the next great API.
  • Make it easy for your users: By providing a standard entry point to participating APIs, it’s easier for developers to explore and use APIs across the federal government.

api.data.gov handles the API keys for you:

  • API key signup: It’s quick and easy for users to signup for an API key and start using it immediately.
  • Shared across services: Users can reuse their API key across all participating api.data.gov APIs.
  • No coding required: No code changes are required to your API. If your API is being hit through api.data.gov, you can simply assume it’s from a valid user.
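
To make the key handling concrete, here is a minimal sketch of a client call in Python using the requests library; the agency path below is hypothetical, and the key is passed as an X-Api-Key header (api.data.gov also accepts an api_key query parameter):

import requests

API_KEY = "YOUR-API-DATA-GOV-KEY"  # one signup, reusable across participating APIs

# Hypothetical participating agency API, proxied through api.data.gov.
url = "https://api.data.gov/example-agency/v1/records"

response = requests.get(
    url,
    headers={"X-Api-Key": API_KEY},  # the proxy validates the key before traffic reaches the agency API
    params={"limit": 10},
)
response.raise_for_status()
print(response.json())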

api.data.gov tracks all the traffic to your API and gives you tools to easily analyze it:

  • Demonstrate value: Understand how your API is being used so you can gauge the value and success of your APIs.
  • Visualize usage and trends: View graphs of the overall usage trends for your APIs.
  • Flexible querying: Drill down into the stats based on any criteria. Find out how much traffic individual users are generating, or answer more complex questions about aggregate usage.
  • Monitor API performance: We gather metrics on the speed of your API, so you can keep an eye on how your API is performing.
  • No coding required: No code changes are required to your API. If your API is being hit through api.data.gov, we can take care of logging the necessary details.

api.data.gov helps with publishing documentation for your API:

  • Hosted or linked: We can host the documentation of your API, or, if you already have your own developer portal, we can simply link to it.
  • One stop shop: As more agencies add APIs to api.data.gov, users will be able to discover and explore more government APIs all at one destination.

api.data.gov helps you rate limit because you might not want to allow all users to have uncontrolled access to your APIs:

  • Prevent abuse: Your API servers won’t see traffic from users exceeding their limits, preventing additional load on your servers.
  • Per user limits: Individual users can be given higher or lower rate limits.
  • No coding required: No code changes are required to your API. If your API is being hit through api.data.gov, you can simply assume it’s from a user that hasn’t exceeded their rate limits.

api.data.gov is powered by the open source project API Umbrella. You can contribute to the development of this platform, or set up your own instance and run the entire stack yourself. If you’re interested in exploring any of this for your APIs, please contact us. In general, it’s easy to take any existing API your agency has (or is in the process of building) and put api.data.gov in front of it. This can be an easy way to get started and see what type of functionality api.data.gov might provide for your API.

api.data.gov is all about consistent API management across the federal government, which means developers will be able to get at government data, content, and algorithms more efficiently, and integrate them into web, mobile, and other applications. We need more government agencies to be doing this, taking advantage of api.data.gov and getting to work developing an awareness of who is consuming their API resources. Eventually API management will be how government agencies generate revenue on top of valuable API resources, charging commercial users enough so that each agency can cover the costs of operations, and hopefully make more of an investment in the resources they are making available.

read more at Api Evangelist
Programmable Web

Published: 10/08/2017

Twitter Kit 3 Expands to Unity

Earlier this year, Twitter introduced Twitter Kit 3 as the first standalone SDK version of Twitter Kit. The original launch was limited to Android and iOS. Twitter has now expanded Twitter Kit 3 to Unity. With the announcement, Twitter hopes to add a new social element to games of all kinds.

read more at Programmable Web
Api Evangelist

Published: 10/08/2017

Image Logging With Amazon S3 API

I have been slowly evolving my network of websites in 2017, overhauling the look of them, as well as how they function. I am investing cycles into pushing as much of my infrastructure towards being as static as possible, minimizing my usage of JavaScript wherever I can. I am still using a significant amount of JavaScript libraries across my sites for a variety of use cases, but whenever I can, I am looking to kill my JavaScript or backend dependencies, and reduce the opportunity for any tracking and surveillance.

While I still keep Google Analytics on my primary API Evangelist sites, as my revenue depends on it, whenever possible I keep personal projects without any JavaScript tracking mechanisms. Instead of JavaScript I am defaulting to image logging using Amazon S3. Most of my sites tend to have some sort of header image, which I store in a common public bucket on Amazon S3; all I have to do is turn on logging, and then get at the logging details via the Amazon S3 API. Of course, images get cached within a user’s browser, but the GET for my images still gives me a pretty good set of numbers to work with. I’m not concerned with too much detail, I just generally want to understand the scope of traffic a project is getting, and whether it is 5, 50, 500, 5,000, or 50,000 visitors.

My two primary CDNs are Amazon S3 and Github. I’m trying to pull together a base strategy for monitoring activity across my digital footprint. My business presence is very different than my personal presence, but with some of my personal writing, photography, and other creations I still like to keep a finger on the pulse of what is happening. I am just looking to minimize the data gathering and surveillance I am participating in these days. Keeping my personal and business websites static, and with a minimum footprint, is increasingly important to me. I find that a minimum viable static digital footprint protects my interests, maximizes my control over my work, and minimizes the impact to my readers and customers.
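
As a rough sketch of what pulling those numbers might look like, assuming boto3 and S3 server access logging already turned on for the image bucket (the bucket, prefix, and image names below are placeholders):

import boto3

s3 = boto3.client("s3")

LOG_BUCKET = "my-logging-bucket"   # placeholder: bucket where S3 access logs are delivered
LOG_PREFIX = "logs/"               # placeholder: log delivery prefix
IMAGE_KEY = "header.jpg"           # placeholder: the shared header image

hits = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=LOG_BUCKET, Prefix=LOG_PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=LOG_BUCKET, Key=obj["Key"])["Body"].read().decode("utf-8", "ignore")
        # Each access log line records the operation and key; count the GETs for the image.
        hits += sum(1 for line in body.splitlines()
                    if "REST.GET.OBJECT" in line and IMAGE_KEY in line)

print(f"Approximate logged image requests: {hits}")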

read more at Api Evangelist
Api Evangelist

Published: 10/08/2017

My Focus On Public APIs Also Applies Internally

A regular thing I hear from folks when we are having conversations about the API lifecycle is that I focus on public APIs, and they are more interested in private APIs. Each time I hear this I try to take time and assess which parts of my public API research wouldn’t apply to internal APIs. You wouldn’t publish your APIs to public API search engines like APIs.io or ProgrammableWeb, and maybe you wouldn’t evangelize your APIs at hackathons, but I’d say 90% of what I study is applicable to internal APIs, as well as publicly available APIs.

With internal APIs, or private network partner APIs, you still need a portal, documentation, SDKs, support mechanisms, and communication and feedback loops. Sure, how you use the common building blocks of API operations that I track on will vary between private and public APIs, but this shifts from industry to industry, and API to API as well–it isn’t just a public vs. private thing. I would say that 75% of my API industry research is derived from public API operations–it is just easier to access, and honestly more interesting to cover than private ones. The other 25% of conversations I’m having about internal APIs always benefit from thinking through the common API building blocks of public APIs, looking for ways they can be more successful with internal and partner APIs.

I’d say that a significant part of the mindshare for the microservices philosophy is internally focused. I think this is something that will come back to hurt some implementations, cutting out many of the mechanisms and common elements required in a successful API formula. Things like portals, documentation, SDKs, testing, monitoring, discovery, support, and communications all contribute to an API working, or not working. I’ve said it before, and I’ll say it again. I’m not convinced that there is any true separation in public vs private APIs, and there remains a great deal we can learn from public API providers, and put to work across internal operations, and with our most trusted partners.

read more at Api Evangelist
Programmable Web

Published: 09/08/2017

What is an API Fragment?

An API fragment is a portion of an API specification, which is why understanding it starts at the API specification level. An API spec is a plan for how your API should look structurally – like a blueprint of a house.

read more at Programmable Web
Api Evangelist

Published: 09/08/2017

An Open Source API Security Intelligence Gathering, Processing, And Distribution Framework

I was reading about GOSINT, the open source intelligence gathering and processing framework over at Cisco. “GOSINT allows a security analyst to collect and standardize structured and unstructured threat intelligence. Applying threat intelligence to security operations enriches alert data with additional confidence, context, and co-occurrence. This means that you are applying research from third parties to your event data to identify similar, or identical, indicators of malicious behavior.” The framework is written in Go, with a JavaScript front-end, and uses APIs as threat intelligence sources.

When you look at the configuration section of the README for GOSINT, you’ll see information for setting up threat intelligence feeds, including the Twitter API, AlienVault’s Open Threat Community API, the VirusTotal API, and Collaborative Research Into Threats (CRITs). GOSINT acts as an API aggregator for a variety of threat information, which then allows you to scour the information for threat indicators, which you can evolve over time, providing a pretty interesting model not just for threat information sharing, but also for API driven aggregation, curation, and sharing.

GOSINT also has the notion of behaving as a “transfer station”, where you can export refined data in CSV or CRITs format. Right here seems like an opportunity for some Github integration, adding continuous integration and deployment to open source intelligence and processing workflows, and making sure refined, relevant threat information is available where it is needed, via existing API deployment and integration workflows. It wouldn’t take much to publish CSV, YAML, and JSON files to Github, which could then be used to drive distributed dashboards, visualizations, and other awareness building tools. Plus, the refined threat information is now published as CSV/JSON/YAML on Github where it can be ingested by any system or application with access to the Github repository.

GOSINT is just one of the interesting tools I’m coming across as I turn up the volume on my API security research, thanks to the investment of ElasticBeam, my API security partner. They’ve invested in an API security guide, as well as a white paper, which is something that will generate a wealth of stories like this along the way, as I find interesting API security artifacts. I’m looking to map out the API security landscape, but I’m also interested in understanding open source API aggregation, analysis, and syndication platforms that integrate with existing CI/CD workflows, to help feed my existing human services API work, and other city, state, and federal government API projects I’m working on.

read more at Api Evangelist
Api Evangelist

Published: 09/08/2017

Open Sourcing Your API Like VersionEye

I’m always on the hunt for healthy patterns that I would like to see API providers, and API service providers consider when crafting their own strategies. It’s what I do as the API Evangelist. Find common patterns. Understand the good ones, and the bad ones. Tell stories about both, helping folks understand the possibilities, and what they should be thinking about as they plan their operations.

VersionEye, a very useful API that notifies you about security vulnerabilities, license violations, and out-dated dependencies in your Git repositories, has a nice approach to delivering their API, as well as the other components of their stack. You can either use VersionEye in the cloud, or you can deploy it on-premise:

VersionEye also has their entire stack available as Docker images, ready for deployment anywhere you need them. I wanted to have a single post that I can reference when talking about possible open source, on-premise, continuous integration approaches to delivering API solutions that actually have a sensible business model. VersionEye spans the areas that I think API providers should consider investing in, delivering SaaS or on-premise, while also delivering open source solutions, and generating sensible amounts of revenue.

Many API providers I come across do not have an open source version of their API. They may have open source SDKs, and other tooling on Github, but rarely does an API provider offer up an open source copy of their API, as well as Docker images. VersionEye’s approach to operating in the cloud, and on-premise, while leveraging open source and APIs, as well as dovetailing with existing continuous integration flows, is worth bookmarking. I am feeling like this is the future of API deployment and consumption, but don’t get nervous, there is still plenty of money to be made via the cloud services.

read more at Api Evangelist
Api Evangelist

Published: 09/08/2017

A Fresh Look At The Embeddable Tools Built On The Twitter API

Over the years I have regularly showcased Twitter as an example of API driven embeddable tools like buttons, badges, and widgets. In 2017, after spending some time in the Twitter developer portal, it is good to see Twitter still investing in their embeddable tools. The landing page for the Twitter embeddables still provides the best example out there of the value of using APIs to drive data and content across a large number of remote web sites.

Twitter has several distinct elements to their web embeddables:

  • Tweet Button - That classic tweet button, allowing users to quickly Tweet from any website.
  • Embedded Tweets - Taking any Tweet and embedding on a web page showing its full content.
  • Embedded Timeline - Showing curated timelines on any website using a Twitter embeddable widget.
  • Follow Button - Helping users quickly follow your Twitter account, or your company’s Twitter account.
  • Twitter Cards - Present link summaries, engaging images, product information, or inline video as embeddable cards in the timeline.

Account interactions, messaging, posting, and other API enabled functions are made portable using JavaScript, allowing them to be embedded and executed on any website. JavaScript widgets, buttons, and other embeddables are still a very tangible, useful example of APIs in action. Something I can talk to anyone about, helping them understand why you might want to do APIs, or at least know about APIs.

We bash on Twitter a lot in the API community. However, after a decade of operation, you have to give it to them. They are still doing it. They are still keeping it simple with embeddable tools like this. I can confidently say that APIs are automating some serious illness on the Twitter platform at the moment, and there are many things I’d like to be different with the Twitter API, but I am still pleased that I can keep finding examples from the Twitter platform to showcase on API Evangelist after seven years of writing about them.

read more at Api Evangelist
Programmable Web

Published: 08/08/2017

How Stalled Obamacare Debate Could Hinder Healthcare API Progress

As issues that dominate the mainstream headlines go, there is perhaps no single issue that is more dominant than the desire of Republican politicians, from President Trump to the US Congress, to enact some sort of sweeping change or repeal of the American Affordable Care Act (aka Obamacare). Perhaps immigration and North Korea run a close 2nd or 3rd.

read more at Programmable Web
Api Evangelist

Published: 08/08/2017

API Message Integrity with JSON Web Token (JWT)

I don’t have any production experience deploying JSON Web Tokens (JWT), but it has been something I’ve been reading up on, and staying in tune with for some time. I often reference JWT as the leading edge for API authentication, but there is one aspect of JWT I think is worth me referencing more often–message integrity. JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object.

JWT can not only be used for authentication of both message sender and receiver, it can ensure message integrity as well, leveraging a digital signature hash value of the message body to protect the message during transmission. It adds another interesting dimension to the API security conversation, and while it may not be applicable to all APIs, I know many where it would make a lot of sense. Many of the networks we use and applications we depend on today are proxied, creating an environment where message integrity should always come into question, and JWT gives us another tool in our toolbox to help us keep things secure.
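
A minimal sketch of the idea in Python, using the PyJWT library and a shared secret (the body_sha256 claim name is my own, not part of the standard): the sender hashes the request body and puts the digest in a signed token, and the receiver verifies the signature and recomputes the hash to confirm the body was not altered in transit.

import hashlib
import jwt  # PyJWT

SECRET = "shared-secret"  # illustration only; manage real keys properly

def sign_message(body: bytes) -> str:
    """Sender: embed a hash of the message body in a signed JWT."""
    claims = {"body_sha256": hashlib.sha256(body).hexdigest()}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_message(token: str, body: bytes) -> bool:
    """Receiver: verify the signature, then recompute and compare the body hash."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims["body_sha256"] == hashlib.sha256(body).hexdigest()

payload = b'{"order_id": 1234, "amount": "19.99"}'
token = sign_message(payload)
print(verify_message(token, payload))                 # True
print(verify_message(token, payload + b" tampered"))  # False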

I’m working my way through each layer of API operations, looking for aspects of API security that are often obscured, hidden, or just not discussed as they should be. I feel like JWT is definitely one track of API security that has evolved the conversation significantly over the last couple years, and is something that can make a significant impact on the space with just a little more storytelling and education. I’m going to make sure API request and response message integrity is a regular part of my API security storytelling, curriculum, and live talks that I develop.

read more at Api Evangelist
Api Evangelist

Published: 08/08/2017

Reducing Developers To A Transaction With APIs, Microservices, Serverless, Devops, and the Blockchain

A topic that keeps coming up in discussions with my partner in crime Audrey Watters (@audreywatters) about our podcast is the future of labor in an API world. I have not written anything about this, which means I’m still in the early stages of any research into this area, but it has come up in conversation, and is reflected regularly in my monitoring of the API space, so I need to begin working through my ideas in this area. A process that helps me better see what is coming down the API pipes, and fill the gaps in what I do not know.

Audrey has long joked about my API world using a simple phrase: “reducing everything to a transaction”. She says it mostly in jest, but other times I feel like she wields it as the Cassandra she channels. I actually bring up the phrase more than she does, because it is something I regularly find myself working in the service of as the API Evangelist. By taking a pro API stance I am actively working to break legacy business, institutional, and government processes down into a variety of individual tasks, or, if you see things through a commercial lens, transactions.

Microservices
A microservices philosophy is all about breaking down monoliths into small bite size chunks, so they can be transacted independently, scaled, evolved, and deprecated in isolation. Microservices should do one thing, and do it well (no backtalk). Microservices should do what they do as efficiently as possible, with as few dependencies as possible. Microservices are self-contained, self-sufficient, and have everything they need to get the job done under a single definition of a service (a real John Wayne of compute). And of course, everything has an API. Microservices aren’t just about decoupling the technology, they are about decoupling the business, and the politics of doing business within SMB, SME, enterprises, institutions, and government agencies–the philosophy for reducing everything to a transaction.

Containers
The microservice way of thinking about software was born in the clouds, a by-product of the virtualization and API-ization of IT resources like storage and compute. In the last decade, as IT services moved from the basement of companies into the cloud, a new approach to delivering the compute, storage, and scalability needed to drive this new microservices way of doing business emerged, called containers. In 2017 businesses are being containerized. The enterprise monolith is being reduced down to small transactions, putting the technology, business, and politics of each business transaction into a single container, for more efficient development, deployment, scaling, and management. Containers are the vehicle moving the microservices philosophy forward–the virtualized embodiment of reducing everything to a transaction.

Serverless
Alongside a microservice way of life, driven by containerization, is another technological trend (undertow) called serverless. With the entire IT backend being virtualized in the cloud, the notion of the server is disappearing, lightening the load for developers in their quest for containerizing everything, turning the business landscape into microservices that can be distilled down to a single, simple, executable, scalable function. Serverless is the codified conveyor belt of transactions rolling by each worker on the factory floor. Each slot on a containerized, serverless, microservices factory floor possesses a single script or function, allowing each transaction to be executed and replicated, applied over and over, scaled, and fixed as needed. Serverless is the big metal stamping station along a multidimensional digital factory assembly line.

DevOps
Living in microservices land, with everything neatly in containers, being assembled, developed, and wrenched on by developers, you are increasingly given more (or less) control over the conveyor belt that rolls by you on the factory floor. As a transaction developer you are given the ability to change the direction of your conveyor belt, speed things up, apply one or many metal stamp templates, and orchestrate as much, or as little, of the transaction supply chain as you can keep up with (meritocracy 5.3.4). Some transaction developers will be closer to the title of architect, understanding larger portions of the transaction supply chain, while most will be specialized, applying one or a handful of transaction templates, with no training or awareness of the bigger picture, simply pulling the DevOps knobs and levers within their reach.

Blockchain
Another trend (undertow) that has been building for some time, and that I have managed to ignore as much as I can (until recently), is the blockchain. Blockchain and the emergence of API driven smart contracts have brought the technology front and center for me, making it something I can no longer ignore, as I see signs that each API transaction will soon be put in the blockchain. The blockchain appears to be becoming the decentralized (ha!) and encrypted manifestation of what many of us have been calling the API contract for years. I am seeing movements from all the major cloud providers, and lesser known API providers, to ensure that all transactions are put into the blockchain, providing a record of everything that flows through API pipes, and has been decoupled, containerized, rendered serverless, and made available for DevOps orchestration.

Ignorance of Labor
I am not an expert in labor, unions, and markets. Hell, I still haven’t even finished my Marx and Engels Reader. But, I know enough to be able to see that us developers are fucking ourselves right now. Our quest to reduce everything to a transaction, decouple all the things, and containerize and render them serverless makes us the perfect tool(s) for some pretty dark working conditions. Sure, some of us will have the bigger picture, and make a decent living being architects. The rest of us will become digital assembly line workers, stamping, maintaining a handful of services that do one thing and do it well. We will be completely unaware of dependencies, or how things are orchestrated, barely able to stay afloat, pay the bills, leaving us thankful for any transactions sent our way.

Think of this frontline in terms of Amazon Mechanical Turk + API + Microservices + Containers + Serverless + Blockchain. There is a reason young developers make for good soldiers on this front line. Lack of awareness of history. Lack of awareness of labor. That makes for great digital factory floor workers, stamping transactions for reuse elsewhere in the digital assembly line process. This model will fit well with current Silicon Valley culture. There will still be enough opportunity in this environment for architects and cybersecurity theater conductors to make money, exploit, and generate wealth. Without the defense of unions, government, or institutions, us developers will find ourselves reduced to transactions, stamping out other transactions on the digital assembly line floor.

I know you think you’re savvy. I used to think this too. Then after having the rug pulled out from under me, and the game changed around me by business partners, investors, and other actors who were playing a game I’m not familiar with, I have become more critical. You can look around the landscape right now and see numerous ways in which power has set its sights on the web, completely distorting any notion of the web being a democratic, open, inclusive, or safe environment. Why do us developers think it will be any different with us? Oh yeah, privilege.

read more at Api Evangelist
Programmable Web

Published: 07/08/2017

Zynx Health Offers API That Supports FHIR Standard Data Format

This month, Zynx Health introduced the availability of an API whose data format conforms to the FHIR (Fast Healthcare Interoperability Resources) standard.

read more at Programmable Web
Programmable Web

Published: 07/08/2017

In Other API Economy News: Migrating From Flash on Facebook and More

We head into the weekend with a review of the stories we couldn’t cover, with a look at what was going on in the world of APIs. Leading off is news from edge cloud platform Fastly. This week they introduced a batch API for surrogate key purge. Surrogate keys let users of Fastly tag related assets such as images, audio, and copy and then purge them in a single request.

read more at Programmable Web
Api Evangelist

Published: 07/08/2017

API Industry Standards Negotiation By Media Type

I am trying to help push forward the conversation around the API definition for the Human Services Data Specification (HSDS) in a constructive way amidst a number of competing interests. I was handed a schema for sharing data about organizations, locations, and services in a CSV format. I took this schema and exposed it with a set of API paths, keeping the flat file structure intact, making no assumptions around how someone would need to access the data. I simply added the ability to get HSDS over the web as JSON–I would like to extend this to HTML, CSV, JSON, and XML, reaching as wide an audience as possible with the basic implementation.

As we move forward discussions around HSDS and HSDA, I’m looking to use media types to help separate the different types of access people are looking for. I don’t want to leave folks who only have basic CSV export or import capabilities behind, but I still want to provide guidance for exchanging HSDA over the web. To help organize higher levels of demand on the HSDS schema I’m going to break things out into some specialized media types as well as the default set:

  • Human Services Data Specification (HSDS) - text/csv - Keeping data package basic, spreadsheet friendly, yet portable and exchangeable.
  • Human Services Data API (HSDA) - application/json and text/xml, text/csv, and text/html - Governing access at the most basic level, keeping true to schema, but allowing for content negotiation over the web.
  • Human Services Data API (HSDA) Hypermedia - (application/hal+json and application/hal+xml) - Allowing for more comprehensive responses to HSDA requests, addressing searching, filtering, pagination, and relationship linking between any HSDS returned.
  • Human Services Data API (HSDA) Bulk - (application/vnd.hsda.bulk) - Focusing on heavy system to system bulk transfers, and eventually syncing, backups, archives, and migrations. Dealing with the industrial levels of HSDA operations.
  • Human Services Data API (HSDA) Federated - (application/vnd.hsda.federated) - Allowing for a federated HSDA implementation that allows for the moderation of all POST, PUT, and DELETE by a distributed set of partners. Might also accompany the bulk system where partners can enable sync or bulk extraction for use in their own implementations.

I am working to define an industry level API standard. I am not operating an individual API implementation (well, I do have several demos), so media types allow me to let each vendor or implementation negotiate the type of content they desire. If they are interested in developing single page applications or conversational interfaces they can offer up the hypermedia implementation. If they are system administrators looking to load up large datasets, or extract large datasets, they can work within the HSDA Bulk realm. In the end I can see any one of these APIs being deployed in isolation, as well as all four of them living side by side, driving a single HSDS/A compliant platform.
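
To illustrate how that negotiation might look from the client side, here is a minimal sketch in Python using the requests library; the base URL points at a hypothetical implementation, and the vnd.hsda media types are the proposed ones listed above:

import requests

BASE = "https://hsda.example.org"  # hypothetical HSDA implementation

# A basic consumer asks for the plain JSON representation of the schema.
basic = requests.get(f"{BASE}/organizations", headers={"Accept": "application/json"})

# A single page application asks for the hypermedia flavor, with paging and relationship links.
hypermedia = requests.get(f"{BASE}/organizations", headers={"Accept": "application/hal+json"})

# A system administrator kicks off a heavy extraction using the proposed bulk media type.
bulk = requests.get(f"{BASE}/organizations", headers={"Accept": "application/vnd.hsda.bulk"})

for response in (basic, hypermedia, bulk):
    print(response.request.headers["Accept"], "->", response.status_code)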

This is all preliminary thought. All I have currently is HSDS, and HSDA returning JSON. I’m just brainstorming about what possible paths forward there are, and I think the solution involves content negotiation at the vendor, implementation, and consumption levels. Content type negotiation seems to provide a great way to break up and separate concerns, keeping simple things simple, and some of the more robust integrations segregated, while still working in concert. I’m always impressed by the meaningful impact something like content type has had on the web API design conversation, and always surprised when I learn new approaches to leveraging the content type header as part of API operations.

read more at Api Evangelist
Api Evangelist

Published: 07/08/2017

Providing Code Citations In Machine Learning APIs

I was playing around with the Style Thief, an image style transfer API from Algorithmia, and I noticed the citation for the algorithm behind it. The API is an adaptation of Anish Athalye’s Neural Style Transfer, and I thought the algorithmic citation of where the work was derived from was an interesting thing to take note of for my machine learning API research.

I noticed on Algorithmia’s page there was a Bibtex citation, which referenced the author, and project Github repository:

@misc{athalye2015neuralstyle,
  author = {Anish Athalye},
  title = {Neural Style},
  year = {2015},
  howpublished = {\url{https://github.com/anishathalye/neural-style}},
  note = {commit xxxxxxx}
}

This provides an interesting way to address citation in not just machine learning, but with open source driving algorithmic APIs in general. It gives me food for thought when it comes to what licensing I should be considering when wrapping open source software with an API. I’ve been thinking about dependencies a lot lately when it comes to APIs and their definitions, and I’d consider citation or attribution to be in a similar category. I guess rather than a technical dependency, it is more in the business and legal dependency category.

Similar to how Pivio allows you to reference dependencies for your microservices, I’m thinking that API Commons, or some other format like BibTeX, could provide a machine readable citation that could be indexed as part of an APIs.json index. This would allow us API designers, architects, and providers to provide proper citation for where our work is derived from. These aren’t just technical dependencies, but also business and political dependencies, that we should ensure are present with each API we deploy, providing an observable provenance of where ideas come from, and an honest look at how we build on the work of each other.

read more at Api Evangelist
Programmable Web

Published: 06/08/2017

4 Lessons to Learn When Integrating With Third Party APIs

Working with numerous third party APIs can be a headache. Damon Swayn of InSite should know. The InSite team has integrated countless APIs for its clients’ CRMs into its platform. Damon over at his Medium blog gives you the top four lessons he and his team have learned from working with third party APIs so you can build APIs that delight rather than frustrate clients.

read more at Programmable Web
Programmable Web

Published: 05/08/2017

Overstock Has a Blockchain API and It’s Not for Bitcoin

This story was updated on Aug 7, 2017

read more at Programmable Web
Api Evangelist

Published: 04/08/2017

When You See API Rate Limiting As Security

I’m neck deep into my assessment of the world of API security this week, a process which always yields plenty of random thoughts, which end up becoming stories here on the blog. One aspect of API security I keep coming across in this research is the concept of API rate limiting as being security. This is something I’ve long attributed to API management service providers making their mark on the API landscape, but as I dig deeper I think there is more to this notion of what API security is (or isn’t). I think it has more to do with API providers than with the companies selling their warez to these API providers.

The API management service providers have definitely set the tone for the API security conversation (good), by standing up a gateway, and providing tools for limiting what access is available–but I think many data, content, and algorithmic stewards are very narrowly focused on security being ONLY about limiting access to their valuable resources. Many folks I come across see their resources as valuable; when they begin doing APIs they have a significant amount of concern around putting their resources on the Internet, and once you secure and begin rate limiting things, all security concerns appear to have been dealt with. Competitors, and others, just can’t get at your valuable resources, they have to come through the gate–API security done.

Many API providers I encounter have unrealistic views of the value of their data, content, and algorithms, and when you match this with their unrealistic views about how much others want access to this valuable content, you end up with a vacuum which allows for some very narrow views of what API security is. Supporting this type of thinking, I feel like the awareness generated from API management is often focused on generating revenue, and not always about understanding API abuse, and is also something that can create blind spots when it comes to the database, server, and DNS level logging and layers where security threats emerge. I’m assuming folks often feel comfortable that the API management layer is sufficiently securing things by rate limiting, and that we can see all traffic through the analytics dashboard. I’m feeling that this is one of the reasons folks aren’t looking up at the bigger API security picture.

From what I’m seeing, assumptions that the API management layer is securing things can leave blind spots in other areas like DNS, threat information gathering, aggregation, collaboration, and sharing. I’ve come across API providers who are focused in on API management, but don’t have visibility at the database, server, container, and web server logging levels, and are only paying attention to what their API management dashboard provides access to. I feel like API management opened up a newfound awareness for API providers, something that has evolved and spread to API monitoring, API testing, and API performance. I feel like the next wave of awareness will be in the area of API security. I’m just trying to explore ways that I can help my readers and clients better understand how to expand their vision of API security beyond their current field of vision.

read more at Api Evangelist
Programmable Web

Published: 03/08/2017

Domain Group Launches Public API for Access to Australian Property-Related Data

Australian property services company Domain Group has launched a public API allowing third-party developers access to a specific set of property-related data. Access to data via the Domain Group API v1 is governed by both API packages and API plans. There are currently three API packages available: Agencies and Listings, Properties and Locations, and Content. The two plan levels available at this time are Default and Business. Each plan provides a set amount of resources and rate limits.

read more at Programmable Web
Programmable Web

Published: 03/08/2017

Plume Labs Opens API for Air Quality Data

Plume Labs, an environmental tech company, has opened up its Plume API to third party businesses via its AI-powered platform: Plume.io. Plume.io delivers programmatic access to a global, crowdsourced, hardware-enabled air quality data platform.

read more at Programmable Web
Api Evangelist

Published: 03/08/2017

Plugin Infrastructure For Every Stop Along The API Lifecycle

I’m continuing my integration platform as a service (iPaaS) research, understanding how API providers are quickly integrating with other platforms, and I am also looking into how API service providers are opening up their services to the entire API lifecycle. I’m seeing API service providers offer up a default set of integrations with other platforms, and in some cases using Zapier by default–opening up 750+ other API driven platforms pretty quickly. Another dimension of this that I’m tracking on is when API service providers offer up plugin infrastructure, allowing other platforms to develop plug and play integrations that any platform user can take advantage of.

You can see this in action over at my partner Tyk, who has a nice set of plugins for their API management solution. They start with three language focused middleware plugins allowing you to write scripts in Java, Lua, and JavaScript. Then they offer two gRPC plugins, which might be another post all by itself. While very similar to the iPaaS, or custom integration solutions I’ve seen from other API providers, I’m going to be categorizing plugin approaches to integration like this separately, because it invites developers to come develop integrations as plugins–something that is very API in my book.

I’ve added a separate research area to tune into what types of integrations platforms are introducing via plugin infrastructure. I’m trying to understand how plugins are evolving from being more about the platform, browser, and other common forms and becoming more API lifecycle middleware (for lack of a better term), like Tyk. I want to be aware of each of their approaches, and how different stops along the API lifecycle are embedding scripting engines, injecting integrated features into operations, and making it part of any continuous integration and deployment workflow(s). Like the other areas of my API lifecycle research, I will stop in from time to time, and understand if plugin API infrastructure is evolving and becoming more of a thing.

read more at Api Evangelist
Programmable Web

Published: 02/08/2017

Stoplight Releases Scenarios v3.4 API Testing and Debugging Tool

Stoplight, a Techstars-graduated startup that provides a modular API toolkit, has announced the release of Scenarios v3.4 which includes a number of bug fixes and new features including (but not limited to) tagging and filtering, discussions, and shared environments.

read more at Programmable Web
Programmable Web

Published: 02/08/2017

Quovo Launches Self-Service API Checkout

Quovo, a fintech data aggregation and analytics company, has launched a new self-service API checkout service for its Quovo API. The new service allows users to easily access and manage API tokens directly through the Quovo site. Three-tiered pricing allows developers to test the API in the free Sandbox, and scale Quovo API use as needed through the Catalyst and Partner plan levels.

read more at Programmable Web
Programmable Web

Published: 01/08/2017

GraphCMS Launches API-First Content Management System

GraphCMS, an API-first content management system provider, has officially launched its flagship product. GraphCMS, the name of the company and its product, is a headless content management system that utilizes GraphQL. GraphQL, a data query language developed by Facebook, is now an open-source Facebook project. With open access to GraphQL, companies like GraphCMS can build product offerings with the future in mind.

read more at Programmable Web
Programmable Web

Published: 31/07/2017

LunchBadger Announces Open Source Express.js API Gateway

LunchBadger, an API lifecycle, orchestration, and optimization solution provider, has announced its new open source API gateway: Express Gateway. Express Gateway is one of the first open source gateways to utilize Express.js. The gateway delivers a solution to developers and businesses who desire to build their own Express.js-based microservices instead of utilizing an out-of-the-box solution.

read more at Programmable Web
Programmable Web

Published: 29/07/2017

How Google Versions Their APIs

APIs sometimes go through changes that are so big they require a new version to make sure API users don’t break their apps. Versioning APIs is, however, difficult, and some teams go out of their way to avoid it. Google can’t do that, and so they’ve developed some simple, consistent rules for versioning APIs. Dan Ciruli over at the Google Cloud Platform explains how the search giant versions their APIs.

read more at Programmable Web
Programmable Web

Published: 27/07/2017

FinFolio Launches Standalone Wealth Management API

FinFolio, an investment portfolio management software provider, has launched its REST API as a standalone product: wealthlab.io. The API targets wealth managers, and their tech teams, to assist with the creation of wealth management apps, client portals, and other tools. Portfolio management and trading software products are complicated to build, and traditionally, building such products required massive upfront resource investments.

read more at Programmable Web