Introduction
ProxyLambda is the entry point for running a model. Its backend supports multiple execution platforms, currently AWS Lambda and AWS Fargate; the caller selects the platform with the `cmd` argument. Because runtime data is recorded in the Coinfer web server while the model runs, the request must include a `coinfer_auth_token` that represents the user's authorization on the web server. This token can be created and queried on the User > Profile page of the Coinfer system. The service address of the web server must also be provided, passed in through `coinfer_server_endpoint`.

If the model to run was previously created in the Coinfer system, you can pass the `model_id` parameter; in that case the `model.tree` part of the model data is not needed, and the system obtains the model data from `model_id` automatically.

If the model to run has not been created in the Coinfer system, the model data must be uploaded through the full `model` parameter. In this case, do not specify the `model_id` parameter.

There are two ways to run the model: specify the model parameters and the sample-call parameters in `model.meta`, or pass in startup code through `generated_script`. We will provide an SDK to generate the content of `generated_script`.

If the `parallel` parameter is specified, multiple backend instances will be used to run the model.
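For illustration, here is a minimal sketch of invoking ProxyLambda for a model that already exists in the Coinfer system. The token, model ID, and endpoint values are placeholders; the function URL is the one listed in the API section below.

```python
import json
import urllib.request

# ProxyLambda function URL (see the API section below).
PROXY_LAMBDA_URL = "https://mf5yygimg5uefdcdekdhjvx7r40rynkt.lambda-url.us-west-2.on.aws/"

payload = {
    "cmd": "run_in_lambda",                  # or "run_in_fargate"
    "coinfer_auth_token": "<token from the User > Profile page>",
    "coinfer_server_endpoint": {},           # fill in per PLMServerEndpoint below
    "model_id": "<existing model ID>",       # model already registered in Coinfer
}

req = urllib.request.Request(
    PROXY_LAMBDA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))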
API
- URL:
https://mf5yygimg5uefdcdekdhjvx7r40rynkt.lambda-url.us-west-2.on.aws/
- METHOD: POST
- Data Format: JSON
- Data Fields:
name | type | default | description
---|---|---|---
cmd | Literal: `run_in_lambda`, `run_in_fargate` | `run_in_lambda` | Where to run the model
experiment_id | str | "" | The experiment ID. A new experiment is created if not provided
parallel | int | 1 | Number of Lambda or Fargate instances used to run the model
coinfer_auth_token* | str | | The authorization token for accessing the web server
coinfer_server_endpoint | PLMServerEndpoint | | Web server endpoint config
model_id | str | "" | The ID of a model already created in Coinfer
model | PLMModelContent | | The model data
generated_script | str | "" | Startup code generated by the SDK
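For the case where the model is not yet registered in Coinfer, the request body might be shaped like the sketch below; the nested structures are defined in the following sections, and all values here are placeholders.

```python
payload = {
    "cmd": "run_in_fargate",
    "parallel": 4,                      # run on 4 Fargate instances
    "coinfer_auth_token": "<token>",
    "coinfer_server_endpoint": {},      # fill in per PLMServerEndpoint below
    "model": {
        "meta": {},                     # fill in per PLMModelMeta below
        "tree": [],                     # fill in per PLMModelTreeNode below
    },
    # Note: "model_id" is deliberately omitted when uploading the full model.
}
```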
PLMServerEndpoint
This structure contains the endpoint config for the Coinfer HTTP API.

name | type | default | description
---|---|---|---
base* | str | | Endpoint of the base API
mcmc* | str | | Endpoint of the mcmc API
turing* | str | | Endpoint of the turing API
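For example, if the Coinfer web server were reachable at a hypothetical host `coinfer.example.com`, the endpoint config could look like:

```python
coinfer_server_endpoint = {
    "base": "https://coinfer.example.com/api",           # base API
    "mcmc": "https://coinfer.example.com/api/mcmc",      # mcmc API
    "turing": "https://coinfer.example.com/api/turing",  # turing API
}
```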
PLMModelContent
name | type | default | description
---|---|---|---
meta | PLMModelMeta | | Meta data of the model
tree | array[PLMModelTreeNode] | | The model data
PLMModelMeta
name | type | default | description
---|---|---|---
id | str | "" | The ID of the model
project_file | str | "Project.toml" | The project file of the model project
entrance_file | str | "main.jl" | The entrance file of the model project
manifest | str | "Manifest.toml" | The manifest file of the model project
iteration_count | int | 1000 | The number of sampling iterations
input_id | str | "" | The ID of the input data to the model
entrance_args | array[Any] | [] | Positional arguments to the model entrance function
entrance_kwargs | dict | | Keyword arguments to the model entrance function
sample_args | array[Any] | [] | Positional arguments to the sample function
sample_kwargs | dict | | Keyword arguments to the sample function
entrance_func | str | "model" | The name of the model entrance function
experiment_name | str | "" | The name used when creating the experiment. A random name is chosen if left empty
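As an illustration, a `meta` block for a hypothetical Julia model project whose entrance function `model` is sampled for 2000 iterations might look like this (values beyond the documented defaults are placeholders):

```python
meta = {
    "project_file": "Project.toml",
    "entrance_file": "main.jl",
    "manifest": "Manifest.toml",
    "entrance_func": "model",        # name of the entrance function in main.jl
    "entrance_args": [],             # positional args for the entrance function
    "entrance_kwargs": {},
    "iteration_count": 2000,         # number of sampling iterations
    "sample_args": [],
    "sample_kwargs": {},
    "input_id": "<input data ID>",   # placeholder
    "experiment_name": "demo-run",   # placeholder
}
```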
PLMModelTreeNode
name | type | default | description
---|---|---|---
name* | str | | File or folder name
type | Literal: `file`, `folder` | | Node type
content | str | | Base64-encoded file content (for `file` nodes)
children | array[PLMModelTreeNode] | [] | Contents of subnodes (for `folder` nodes)
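Because file contents must be base64-encoded, it is convenient to build the `tree` array programmatically. Below is a minimal sketch; the helper `to_tree_node` and the directory name are ours for illustration, not part of the API.

```python
import base64
from pathlib import Path

def to_tree_node(path: Path) -> dict:
    """Build a PLMModelTreeNode-shaped dict for a file or folder."""
    if path.is_dir():
        return {
            "name": path.name,
            "type": "folder",
            "children": [to_tree_node(p) for p in sorted(path.iterdir())],
        }
    return {
        "name": path.name,
        "type": "file",
        "content": base64.b64encode(path.read_bytes()).decode("ascii"),
    }

# Package a local model project directory as the "tree" field.
tree = [to_tree_node(p) for p in sorted(Path("my_model_project").iterdir())]
```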