Introduction

ProxyLambda is the entry point for running models. Its backend supports multiple execution platforms, currently AWS Lambda and AWS Fargate; the caller selects the platform with the `cmd` argument.

Because runtime data is recorded in the Coinfer web server during a model run, each request must include a `coinfer_auth_token` that represents the user's authorization on the web server. This token can be created and queried on the User > Profile page of the Coinfer system. The request must also supply the service address of the web server via `coinfer_server_endpoint`.

If the model to run was previously created in the Coinfer system, pass the `model_id` parameter; in that case the `model.tree` part of the model data can be omitted, and the system obtains the model data from `model_id` automatically. If the model is not registered in the Coinfer system, upload the model data through the full `model` parameter and do not specify `model_id`.

There are two ways to launch the model: either specify the model parameters and the sample-call parameters in `model.meta`, or pass a piece of startup code via `generated_script`. An SDK is provided to generate the content of `generated_script`. If the `parallel` parameter is specified, multiple backend instances are used to run the model.

API

  • URL: https://mf5yygimg5uefdcdekdhjvx7r40rynkt.lambda-url.us-west-2.on.aws/
  • METHOD: POST
  • Data Format: JSON
  • Data Fields:
| name | type | default | description |
| --- | --- | --- | --- |
| `cmd` | Literal: `run_in_lambda`, `run_in_fargate` | `run_in_lambda` | Where to run the model |
| `experiment_id` | str | `""` | The experiment ID. A new experiment is created if not provided |
| `parallel` | int | `1` | Number of Lambda or Fargate instances used to run the model |
| `coinfer_auth_token`* | str | | The authorization token for accessing the web server |
| `coinfer_server_endpoint` | PLMServerEndpoint | | Web server endpoint configuration |
| `model_id` | str | `""` | The model ID |
| `model` | PLMModelContent | | The model data |
| `generated_script` | str | `""` | Startup code generated by the SDK |
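As a sketch, the request body for the `model_id` variant might be assembled like this. Only the field names come from the table above; the helper function, token, model ID, and endpoint URLs are placeholders for illustration:

```python
import json


def build_run_request(auth_token, endpoints, model_id="", parallel=1,
                      cmd="run_in_lambda", experiment_id=""):
    """Build the JSON body for a ProxyLambda run request (illustrative helper).

    `endpoints` is a dict with the PLMServerEndpoint fields
    (`base`, `mcmc`, `turing`).
    """
    return {
        "cmd": cmd,                          # run_in_lambda or run_in_fargate
        "experiment_id": experiment_id,      # empty -> a new experiment is created
        "parallel": parallel,                # number of backend instances
        "coinfer_auth_token": auth_token,    # created on the User > Profile page
        "coinfer_server_endpoint": endpoints,
        "model_id": model_id,                # set for models already in Coinfer
    }


# Example: run a previously created model on Fargate with 4 instances.
# The token, model ID, and URLs below are made-up placeholders.
body = build_run_request(
    auth_token="TOKEN-FROM-PROFILE-PAGE",
    endpoints={"base": "https://example.com/api",
               "mcmc": "https://example.com/mcmc",
               "turing": "https://example.com/turing"},
    model_id="my-model-id",
    parallel=4,
    cmd="run_in_fargate",
)
print(json.dumps(body, indent=2))
```

The resulting JSON would then be POSTed to the URL above; any HTTP client works, since the API takes a plain JSON body.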

PLMServerEndpoint

This structure contains the endpoint configuration for the Coinfer HTTP API.
| name | type | default | description |
| --- | --- | --- | --- |
| `base`* | str | | Endpoint of the base API |
| `mcmc`* | str | | Endpoint of the MCMC API |
| `turing`* | str | | Endpoint of the Turing API |

PLMModelContent

| name | type | default | description |
| --- | --- | --- | --- |
| `meta` | PLMModelMeta | | Metadata of the model |
| `tree` | array[PLMModelTreeNode] | | The model data as a file tree |

PLMModelMeta

| name | type | default | description |
| --- | --- | --- | --- |
| `id` | str | `""` | The ID of the model |
| `project_file` | str | `"Project.toml"` | The project file of the model project |
| `entrance_file` | str | `"main.jl"` | The entrance file of the model project |
| `manifest` | str | `"Manifest.toml"` | The manifest file of the model project |
| `iteration_count` | int | `1000` | The number of sampling iterations |
| `input_id` | str | `""` | The ID of the input data for the model |
| `entrance_args` | array[Any] | `[]` | Positional arguments for the model entrance function |
| `entrance_kwargs` | dict | | Keyword arguments for the model entrance function |
| `sample_args` | array[Any] | `[]` | Positional arguments for the sample function |
| `sample_kwargs` | dict | | Keyword arguments for the sample function |
| `entrance_func` | str | `"model"` | The name of the model entrance function |
| `experiment_name` | str | `""` | The name of the experiment to create. A random name is chosen if left empty |
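A minimal sketch of a `PLMModelMeta` payload. The field names and defaults come from the table above; the concrete values (iteration count, experiment name, the `progress` sampler option) are invented for the example:

```python
# Illustrative PLMModelMeta payload. Field names are from the spec;
# all values are example choices, not required ones.
meta = {
    "id": "",                        # empty: the model is not yet registered
    "project_file": "Project.toml",
    "entrance_file": "main.jl",
    "manifest": "Manifest.toml",
    "entrance_func": "model",        # name of the entrance function
    "entrance_args": [],             # positional args for the entrance function
    "entrance_kwargs": {},           # keyword args for the entrance function
    "iteration_count": 2000,         # overrides the default of 1000
    "sample_args": [],
    "sample_kwargs": {"progress": False},  # hypothetical sampler option
    "input_id": "",
    "experiment_name": "demo-run",   # empty would pick a random name
}
```

This dict would be placed under `model.meta` in the request body when not using `generated_script`.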

PLMModelTreeNode

| name | type | default | description |
| --- | --- | --- | --- |
| `name`* | str | | File or folder name |
| `type` | Literal: `file`, `folder` | | Node type |
| `content` | str | | Base64-encoded file content |
| `children` | array[PLMModelTreeNode] | `[]` | Child subnodes |
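Given this schema, a `model.tree` payload can be built recursively from a local project directory. The sketch below follows the field table directly; the helper function itself is not part of the API, and the actual SDK may construct the tree differently:

```python
import base64
import os


def build_tree_node(path):
    """Recursively build a PLMModelTreeNode dict for a file or folder.

    Illustrative helper: folders carry their children, files carry
    base64-encoded content, matching the field table above.
    """
    name = os.path.basename(path)
    if os.path.isdir(path):
        return {
            "name": name,
            "type": "folder",
            "children": [build_tree_node(os.path.join(path, child))
                         for child in sorted(os.listdir(path))],
        }
    with open(path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    return {
        "name": name,
        "type": "file",
        "content": content,   # base64-encoded file bytes
        "children": [],
    }
```

Calling `build_tree_node` on the project root yields a nested structure that can be sent as the `tree` field of `PLMModelContent`.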