
Welcome to Automatic, which is all about the building, sharing and use of intelligent services.

  • Building services and their components by using the Rockstar drag-and-drop editor
  • Sharing services and their components by deploying them to the Zoo
  • Using other services and components in the Zoo by calling their API or drag-and-dropping them into your services

Using services

  • Demos - Most services have a skeletal user interface that demonstrates the workings of the service
  • API - Deployed services have an API to do basic things like training and predictions

Any visitor to the website can view and run the service demonstrations. This ability is deliberately limited, however, so that every visitor gets a chance to see what A.I. (machine learning) is capable of. A visitor's capacity to run the demos is refreshed every 24 hours.

Members of Automatic can view, run and build services. Their ability to do so is limited only by how much CPU/GPU they use while training, testing and using services.

Premium and Pro members have greatly expanded capacity and control over the speed at which their services run.

Building services

The goals for the Rockstar IDE are:

  • a framework that makes functional components independent from other components allowing a developer to focus on building a component in isolation
  • a framework that makes it obvious where each component fits into the larger whole
  • a process for building components and services that requires as little low-level knowledge of the internal workings of the libraries and their underlying mathematics as possible
  • to observe the principle of maximum obviousness - that the process and the resultant design add minimal complexity to the representation of complex systems

Rockstar is designed so that new services are built largely by customizing existing services. Similarly, new components are built by cloning an existing component and editing it.

The reason for this is that starting from a blank screen imposes a higher cognitive load when writing machine learning code than when writing generic back-end or even front-end code, especially with a well-designed modern framework. Similarly, drag-and-drop machine learning tools that start with a blank screen beg the question: what do I drop where, and when, and how do I attach that to this?

Beyond the cognitive load of working in a domain that is still struggling to define clear divisions of responsibility and what constitutes core functionality, there is the added load of determining whether each requirement of a tool is an artifact of the tool itself (adding unanticipated complexity) or a core property of the domain the tool is trying to support.

The process

Some of these steps are optional. The Editor tries at all times to keep the service semantically correct, i.e. components that are required by a pipeline cannot be deleted, nor can they be moved or inserted into an invalid order.

  1. Find a service that is most similar to the service you want to build
  2. Clone the service by clicking the clone button
  3. Change the training dataset to the one you want to use by
    • drag and dropping the replacement on top of it or by
    • selecting it and opening the inspector and selecting the search tab
  4. Edit the pipelines by
    • dragging and dropping components into the service and deleting unwanted components, or by
    • selecting a component, opening the inspector and selecting the search tab, which will automatically show compatible components
  5. Edit the properties or source code of the components as needed by selecting them and opening the inspector
  6. Train, test and deploy the service by clicking train, test and deploy in the todo list
  7. Customize the demonstration UI to enhance the UX of the service available to the general public
  8. Incorporate the API into your external application to make the service available to your customer base

The Editor

The editor is opened and closed by clicking the colorful icon in the lower left of the main viewing area.

There are up to four panels in the Rockstar Editor. The editor affectionately refers to components as 'rocks'.

  1. The 'Rockpile', which lists popular components that you can drag and drop into a service. The rockpile is opened and closed by selecting the three-dot ellipsis on the left of the bottom bar.
  2. The editing area itself that shows the service and its components
  3. The 'Todo' list which is not a separate panel at this time
  4. The 'Inspector', which works like the inspector in Chrome and Firefox. It can be opened and closed using the right-click mouse menu or Ctrl-i


Right-click the editor background (or press and hold the left mouse button or your finger on the background for a second, then release) to pop up the background context menu. From here you can open the Rockstar inspector, similar to the Chrome and Firefox inspectors.


The API

The complete documentation is in the Automatic API documentation.

The API currently supports blocking predictions only (limited by AWS APIGateway to 30 seconds). An asynchronous prediction API is under consideration. Training and testing are both asynchronous and return a token allowing the caller to poll for status and results.

    const payload = { data: data };
    payload.command = "predict";
    payload.service_uri = service_uri;
    payload.api_token = api_token;
    return fetch(API, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(payload)
        }).then(response => {
            return response.json();
        }).catch(e => {
            return { error: "" + e };
        });
With data set to the contents of a CSV data stream (one or more rows, plus column headers), the call above will return

    { predictions: [results] }

where each result is a real number (in the case of a single-valued prediction, for example).

  • service_uri is the fully qualified name of the service, e.g. smith/service/my-awesome-service
  • api_token is your api key
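
Since training and testing are asynchronous and return a token, a typical client kicks off a run and then polls for results. The sketch below is an assumption-laden illustration: the "train" and "status" command names and the { token } / { status } response shapes are hypothetical, not confirmed API fields; only the envelope fields (command, service_uri, api_token) come from the predict example above.

```javascript
const API = "https://example.com/api";   // placeholder endpoint, not the real URL

// Build the shared request envelope used by every call.
function buildCommand(command, service_uri, api_token, extra = {}) {
    return { command, service_uri, api_token, ...extra };
}

// Hypothetical sketch: start a training run, then poll with the token
// until the service reports it is no longer running.
async function trainAndWait(service_uri, api_token, intervalMs = 5000) {
    const post = body => fetch(API, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body)
    }).then(r => r.json());

    // Kick off training; the API returns a token immediately.
    const { token } = await post(buildCommand("train", service_uri, api_token));

    // Poll until the run reports completion, then return the final status.
    for (;;) {
        const res = await post(buildCommand("status", service_uri, api_token, { token }));
        if (res.status !== "running") return res;
        await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
}
```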

The Core (OMG) Framework


Components

Components are arranged into convenient but to some degree arbitrary categories as specified by the Architecture component rules.


Datasets

Datasets are usually imported through the Got Data? button in the main menu. They are associated with a DataSource component that loads them into the service. Currently only CSV datasets are supported by DataSources, Components and Algorithms.


DataSources

DataSource components load data into the service. They are most like the 'extractors' in a typical ETL pipeline.

Augmentors, Filters, Transformers

These are generic components that take their input data, do something to it, then output it. They have been arbitrarily divided into these categories for convenience.

Augmentors expand the input data in various ways to improve the results of training a model. Filters are used to clean the input data. Transformers are used to transform the data into some kind of standard representation or value range.
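
As a concrete (if simplified) illustration of what a Transformer does, the sketch below scales numeric values into the standard [0, 1] range. This is not Automatic's component API, just the take-data, change-it, output-it contract described above.

```javascript
// Illustrative min-max Transformer: map every value into [0, 1].
// Not Automatic's internal code; names are invented for this example.
function minMaxTransform(rows) {
    const min = Math.min(...rows);
    const max = Math.max(...rows);
    // Guard against a constant column, where max === min.
    return rows.map(v => (max === min ? 0 : (v - min) / (max - min)));
}

// minMaxTransform([2, 4, 6]) → [0, 0.5, 1]
```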


Splitters

Splitters split the input data into training and testing datasets.
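
As an illustration of the contract only (Automatic's Splitters may shuffle or stratify, which is not shown here), a deterministic split looks like:

```javascript
// Illustrative deterministic splitter: the first (1 - testFraction) of the
// rows become the training set, the remainder the test set.
function trainTestSplit(rows, testFraction = 0.2) {
    const cut = Math.floor(rows.length * (1 - testFraction));
    return { train: rows.slice(0, cut), test: rows.slice(cut) };
}

// trainTestSplit([1, 2, 3, 4], 0.5) → { train: [1, 2], test: [3, 4] }
```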


Algorithms

Algorithms are divided into those used for training, testing and prediction / inference. For example, an algorithm can be a CNN or a Perceptron.

  • Training algorithms take input data and train a 'model', e.g. a neural network, assigning it 'weights'
  • Testing algorithms take input data and ask the trained model to 'predict' something and then compare the results with known results
  • Predicting algorithms take novel input data and ask the trained model to 'predict' something
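
To make the three roles concrete, here is a toy perceptron in plain JavaScript. This is illustrative only, not Automatic's internal code: training assigns weights, testing compares predictions with known labels, and prediction runs the trained model on input.

```javascript
// Training: take input data and assign the 'model' its weights.
function trainPerceptron(samples, labels, epochs = 20, lr = 0.1) {
    let w = [0, 0], b = 0;                         // the 'model': weights and bias
    for (let e = 0; e < epochs; e++) {
        samples.forEach((x, i) => {
            const out = w[0] * x[0] + w[1] * x[1] + b > 0 ? 1 : 0;
            const err = labels[i] - out;           // perceptron update rule
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]];
            b += lr * err;
        });
    }
    return { w, b };
}

// Predicting: run the trained model on (possibly novel) input.
const predict = (model, x) =>
    model.w[0] * x[0] + model.w[1] * x[1] + model.b > 0 ? 1 : 0;

// Testing: compare predictions against known labels.
const accuracy = (model, samples, labels) =>
    samples.filter((x, i) => predict(model, x) === labels[i]).length / samples.length;

// Train on the AND function, then test on the same four points.
const xs = [[0, 0], [0, 1], [1, 0], [1, 1]];
const ys = [0, 0, 0, 1];
const model = trainPerceptron(xs, ys);
```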


Models

Models are containers (with no associated source code) that hold their training pipeline and a potentially large data structure of trained weights (e.g. saved to AWS S3).


Pipelines

Pipelines define how data is ingested and used to train a model. The default pipeline architecture calls for exactly one DataSource at the start of the pipeline and exactly one algorithm at the end (either a training, testing or prediction / inference algorithm). In between, a number of components massage the data into a form that will optimize the efforts of the algorithm.
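
The default architecture can be thought of as function composition: one source, a chain of massaging steps, one algorithm. The sketch below is conceptual; the calling convention and all names are invented for illustration.

```javascript
// Conceptual pipeline: exactly one DataSource at the start, zero or more
// massaging components in between, exactly one algorithm at the end.
const runPipeline = (source, components, algorithm) =>
    algorithm(components.reduce((data, step) => step(data), source()));

// Example: load rows, drop nulls (a Filter), double them (a Transformer),
// then hand the result to a stand-in "algorithm".
const result = runPipeline(
    () => [1, null, 3],                       // DataSource
    [
        xs => xs.filter(x => x !== null),     // Filter
        xs => xs.map(x => x * 2)              // Transformer
    ],
    xs => xs.reduce((a, b) => a + b, 0)       // algorithm stand-in
);
// result === 8
```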

Customizing components

Components can be customized by:

  1. changing their properties using the Inspector
  2. editing their source code (after cloning them, if you are not yet their owner)
  3. swapping them in-place for another component using the Inspector's search tab


Architectures

Architectures are special structures that specify and control the container-part hierarchy of components. Each container has an architecture, and any component can be assigned an architecture, turning it into a container. The architecture specifies rules about which components are allowed in the container, and in what sequence they appear. Currently only linear architectures (pipelines) are supported.
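
A linear architecture rule might be checked along these lines. The category names and rule format below are invented for illustration; the actual Architecture component rules are not shown here.

```javascript
// Sketch of a linear (pipeline) architecture check: the sequence must start
// with a DataSource, end with an Algorithm, and contain only known
// massaging components in between. Category names are illustrative.
const MIDDLE = ["Augmentor", "Filter", "Transformer", "Splitter"];

function isValidPipeline(components) {
    if (components.length < 2) return false;
    const middleOk = components
        .slice(1, -1)
        .every(c => MIDDLE.includes(c.category));
    return components[0].category === "DataSource" &&
           components[components.length - 1].category === "Algorithm" &&
           middleOk;
}
```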

The Help System

The help system consists of the WTF, HelpMeChoose, Freakouts, and the App's 'Ninja' chat bot. All of these can be cloned, edited (improved) and offered for use by the community at large in the Zoo.

*** This editability feature has been removed from the first Beta ***

WTF (What's This for?)

WTF buttons are scattered throughout Automatic and provide a simple explanation and context for the various entities and terms used by the system, most often to do with the often opaque nomenclature of machine learning. The algorithm used by WTF is a simple dictionary lookup.
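
Since WTF is a simple dictionary lookup, the mechanism can be sketched in a few lines. The entries below are invented examples, not Automatic's actual glossary.

```javascript
// WTF as described: a plain dictionary from term to explanation,
// with a fallback for unknown terms. Entries are invented examples.
const wtfGlossary = {
    "epoch": "One full pass of the training algorithm over the dataset.",
    "weights": "The numbers a model learns during training."
};

const wtf = term =>
    wtfGlossary[term.toLowerCase()] || "No entry for this term yet.";
```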


HelpMeChoose

Help Me Choose options are offered when the user is asked to choose from a number of options. The algorithm used by the HelpMeChoose system is in flux and ranges from simple dictionary lookup (indexing on the types of things being chosen from) to custom pluggable algorithms with user-choosable goals.


Freakouts

Freakout buttons help relieve the stress and frustration that come with taking things way too seriously, or with not progressing quite as fast as we would personally wish. This idea comes from Kathy Sierra's work. The algorithm used by Freakouts to generate content is a simple random lookup.

The App (Ninja) Bot

This bot is currently stupid and just provides the list of page-specific tours and tutorials. The list of tours and tutorials will grow over time, but the goal is a fully context-aware bot with member-created pluggable intelligence modules that learns both

  1. what good machine learning work looks like, and
  2. how to explain what good machine learning looks like to users

all within a standard conversational interface. This API is still under development. Our approach, like that for Automatic itself, will be neither 100% rule-based (a la pre-2010) nor 100% ML (a la post-2014), but a mix of the two.

Organizations (Teams)

Plans at the 'Business' level support the formation of a corporate team. Members of teams can simply switch from their personal accounts to the corporate account, and back. When using the corporate account, a member can view, create and edit the corporate resources, e.g. services and components, and their work activities will draw from the corporate Compute capacity.

To form an organization:

  1. Log out of your Automatic account
  2. Register a new account using the username you want associated with your new corporate account
  3. Upgrade this new account to 'business' class
  4. Use the main menu (hamburger) to go to the 'Membership' page
  5. Add (and remove) members to/from your organization by entering their Automatic username

Members of an organization can switch back and forth between their organization(s) and their personal account. To do this:

  1. Use the main menu (hamburger) to go to the 'My Organizations' page
  2. Click the 'switch' button for the organization, or the personal account, that you want to work on next