JavaScript Library

Integrate Runware's API with our JavaScript library. Get started with code examples and comprehensive usage instructions.

Introduction

The SDK is used to run image inference with the Runware API, powered by the Runware inference platform. It can be used to generate images with text-to-image and image-to-image. It also allows the use of an existing gallery of models, or selecting any model or LoRA from the CivitAI gallery. The API also supports upscaling, background removal, inpainting and outpainting, and a series of ControlNet models.

Source: https://github.com/Runware/sdk-js

Installation

To install and set up the library, run:

npm install @runware/sdk-js

Or if you prefer using Yarn:

yarn add @runware/sdk-js

Instantiating the SDK

Instantiating Synchronously

const runware = new Runware({ apiKey: "API_KEY" });

Instantiating Asynchronously

const runware = await Runware.initialize({ apiKey: "API_KEY" });
Parameter | Type | Use
url | string (optional) | URL to get images from.
apiKey | string | The environment API key.
shouldReconnect | boolean (default = true) | Handles reconnection when there is WebSocket inactivity.
globalMaxRetries | number (default = 2) | The number of retries to make before throwing an error. (NB: you can specify a retry parameter on every request that overrides the global retry.)
timeoutDuration | number, in milliseconds (default = 60000) | The timeout span per retry before timing out.
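
For example, a minimal sketch of passing the optional connection settings when instantiating (the values shown are illustrative, not required defaults):

const runware = new Runware({
	apiKey: "API_KEY",
	shouldReconnect: true, // reconnect automatically on WebSocket inactivity
	globalMaxRetries: 3, // retry each failing request up to 3 times
	timeoutDuration: 90000, // 90-second timeout per retry
});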

Methods

Ensure the connection is established before making a request

const runware = new RunwareServer({ apiKey: "API_KEY" });

await runware.ensureConnection();

Manually disconnect

const runware = new RunwareServer({ apiKey: "API_KEY" });

await runware.disconnect();
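
Putting the two methods together, a typical request lifecycle might look like the sketch below (the prompt and the AIR model ID are placeholders):

const runware = new RunwareServer({ apiKey: "API_KEY" });

try {
	await runware.ensureConnection();
	const images = await runware.requestImages({
		positivePrompt: "a watercolor lighthouse at dawn", // placeholder prompt
		width: 512,
		height: 512,
		model: "runware:100@1", // placeholder AIR model ID
	});
	console.log(images);
} catch (error) {
	console.error(error);
} finally {
	await runware.disconnect();
}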

API

Request Image

Read Documentation

NB: All errors can be caught in the catch block of each request

import { Runware } from "@runware/sdk-js";

const runware = new Runware({ apiKey: "API_KEY" });
const images = await runware.requestImages({
	positivePrompt: string;
	negativePrompt?: string;
	width: number;
	height: number;
	model: string;
	numberResults?: number;
	outputType?: "URL" | "base64Data" | "dataURI";
	outputFormat?: "JPG" | "PNG" | "WEBP";
	uploadEndpoint?: string;
	checkNSFW?: boolean
	seedImage?: File | string;
	maskImage?: File | string;
	strength?: number;
	steps?: number;
	scheduler?: string;
	seed?: number;
	CFGScale?: number;
	clipSkip?: number;
	refiner?: IRefiner;
	usePromptWeighting?: boolean;
	controlNet?: IControlNet[];
	lora?: ILora[];
	retry?: number;
	onPartialImages?: (images: IImage[], error: IError) => void;
})

return interface ITextToImage {
	taskType: ETaskType;
	imageUUID: string;
	inputImageUUID?: string;
	taskUUID: string;
	imageURL?: string;
	imageBase64Data?: string;
	imageDataURI?: string;
	NSFWContent?: boolean;
	cost: number;
	positivePrompt?: string;
 	negativePrompt?: string;
}[]
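
As a concrete illustration of the signature above, a minimal text-to-image call might look like this sketch (the prompt and the AIR model ID are placeholders to replace with your own values):

import { Runware } from "@runware/sdk-js";

const runware = new Runware({ apiKey: "API_KEY" });

try {
	const images = await runware.requestImages({
		positivePrompt: "a cozy cabin in a snowy forest, golden hour", // placeholder prompt
		negativePrompt: "blurry, low quality",
		width: 512,
		height: 512,
		model: "runware:100@1", // placeholder AIR model ID
		numberResults: 2,
		outputType: "URL",
		outputFormat: "JPG",
	});
	images.forEach((image) => console.log(image.imageURL));
} catch (error) {
	console.error(error);
}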

Parallel Requests (2 or more requests at the same time)

const runware = new Runware({ apiKey: "API_KEY" });

const [firstImagesRequest, secondImagesRequest] = await Promise.all([
	runware.requestImages({
		positivePrompt: string;
		width: number;
		height: number;
		numberResults: number;
		model: string;
		negativePrompt?: string;
		onPartialImages?: (images: IImage[], error: IError) => void;
	}),
	runware.requestImages({
		positivePrompt: string;
		width: number;
		height: number;
		numberResults: number;
		model: string;
		onPartialImages?: (images: IImage[], error: IError) => void;
	})
])

console.log({firstImagesRequest, secondImagesRequest})

return interface ITextToImage {
	taskType: ETaskType;
	imageUUID: string;
	inputImageUUID?: string;
	taskUUID: string;
	imageURL?: string;
	imageBase64Data?: string;
	imageDataURI?: string;
	NSFWContent?: boolean;
	cost: number;
	positivePrompt?: string;
 	negativePrompt?: string;
}[]
Parameter | Type | Use
positivePrompt | string | Defines the positive prompt description of the image.
negativePrompt | string | Defines the negative prompt description of the image.
width | number | Controls the image width.
height | number | Controls the image height.
model | string | The AIR system ID of the model to be used.
numberResults | number (Optional) (default = 1) | The number of images to be generated.
outputType | IOutputType (Optional) | Specifies the output type in which the image is returned.
outputFormat | IOutputFormat (Optional) | Specifies the format of the output image.
uploadEndpoint | string (Optional) | Specifies a URL to which the generated image will be uploaded as binary image data using the HTTP PUT method. For example, an S3 bucket URL can be used as the upload endpoint.
checkNSFW | boolean (Optional) | Enables or disables the NSFW check. When enabled, the API checks whether the image contains NSFW (not safe for work) content. This check is done using a pre-trained model that detects adult content in images.
seedImage | string or File (Optional) | Specifies the seed image to be used for the diffusion process. Required when doing Image-to-Image, Inpainting or Outpainting.
maskImage | string or File (Optional) | The image to be used as the mask image. It can be the UUID of a previously generated image, or an image from a file.
strength | number (Optional) | When doing Image-to-Image, Inpainting or Outpainting, determines the influence of the seedImage on the generated output. A higher value results in more influence from the original image, while a lower value allows more creative deviation.
steps | number (Optional) | The number of iterations the model will perform to generate the image. The higher the number of steps, the more detailed the image will be.
scheduler | string (Optional) | A scheduler is a component that manages the inference process. Different schedulers can be used to achieve different results, like more detailed images, faster inference, or more accurate results.
seed | number (Optional) | A seed is a value used to randomize the image generation. If you want to make images reproducible (generate the same image multiple times), you can use the same seed value.
CFGScale | number (Optional) | Guidance scale represents how closely the images will resemble the prompt or how much freedom the AI model has. Higher values are closer to the prompt. Low values may reduce the quality of the results.
clipSkip | number (Optional) | CLIP Skip is a feature that enables skipping layers of the CLIP embedding process, leading to quicker and more varied image generation.
usePromptWeighting | boolean (Optional) | Allows setting different weights per word or expression in prompts.
lora | ILora[] (Optional) | With LoRA (Low-Rank Adaptation), you can adapt a model to specific styles or features by emphasizing particular aspects of the data.
controlNet | IControlNet[] (Optional) | With ControlNet, you can provide a guide image to help the model generate images that align with the desired structure.
onPartialImages | function (Optional) | If you want to receive images as they are generated instead of waiting for the whole request to complete, this callback is invoked with each batch of partial results.
includeCost | boolean (Optional) | If set to true, the cost to perform the task will be included in the response object.
retry | number (default = globalMaxRetries) | The number of retries to make before throwing an error.
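
For instance, when requesting several results you can use onPartialImages to handle images as they stream in, instead of waiting for the awaited promise to resolve with the full batch. A minimal sketch (the prompt and the AIR model ID are placeholders):

const images = await runware.requestImages({
	positivePrompt: "an isometric city block, pastel colors", // placeholder prompt
	width: 512,
	height: 512,
	model: "runware:100@1", // placeholder AIR model ID
	numberResults: 4,
	onPartialImages: (partialImages, error) => {
		if (error) {
			console.error(error);
			return;
		}
		// Called with each batch of results as they are generated.
		partialImages.forEach((image) => console.log("partial:", image.imageURL));
	},
});
console.log("total results:", images.length);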

ControlNet Params

Parameter | Type | Use
model | string | Defines the model to use for the ControlNet.
guideImage | File or string (Optional) | The image to be used as the guide image. It can be the UUID of a previously generated image, or an image from a file.
weight | number (Optional) | Can have values between 0 and 1 and represents the weight of the ControlNet preprocessor in the image.
startStep | number (Optional) | Represents the moment at which the ControlNet preprocessor starts to control the inference. It can take values from 0 to the maximum number of steps in the image create request. It can also be replaced with startStepPercentage (float), which represents the same value as a percentage and takes values from 0 to 1.
startStepPercentage | number (Optional) | Represents the percentage of steps at which the ControlNet model starts to control the inference process.
endStep | number (Optional) | Similar to startStep, but represents the end of the preprocessor's control over the image inference. The equivalent percentage option is endStepPercentage (float).
endStepPercentage | number (Optional) | Represents the percentage of steps at which the ControlNet model stops controlling the inference process.
controlMode | string (Optional) | This parameter has 3 options: prompt, controlnet and balanced.
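
Tying the parameters together, a requestImages call with a single ControlNet guide might look like the sketch below (both AIR IDs and the guide image UUID are placeholders):

const images = await runware.requestImages({
	positivePrompt: "a modern kitchen interior, soft daylight", // placeholder prompt
	width: 512,
	height: 512,
	model: "runware:100@1", // placeholder AIR model ID
	controlNet: [
		{
			model: "civitai:38784@44716", // placeholder ControlNet AIR ID
			guideImage: "59a2edc2-45e6-429f-9e6b-dd0023d03fa2", // placeholder guide image UUID (a File also works)
			weight: 0.8,
			startStepPercentage: 0,
			endStepPercentage: 0.9,
			controlMode: "balanced",
		},
	],
});
console.log(images);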

Request Image To Text

Read Documentation

const runware = new Runware({ apiKey: "API_KEY" });
const imageToText = await runware.requestImageToText({
	inputImage: string | File
})
console.log(imageToText)

return interface IImageToText {
  taskType: string;
  taskUUID: string;
  text: string;
  cost?: number;
}
Parameter | Type | Use
inputImage | string or File | The image to be used as the input image. It can be the UUID of a previously generated image, or an image from a file.
includeCost | boolean (Optional) | If set to true, the cost to perform the task will be included in the response object.
retry | number (default = globalMaxRetries) | The number of retries to make before throwing an error.
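
A minimal sketch, assuming you already have the UUID (or a File) of the image you want captioned (the UUID below is a placeholder):

const imageToText = await runware.requestImageToText({
	inputImage: "59a2edc2-45e6-429f-9e6b-dd0023d03fa2", // placeholder image UUID
	includeCost: true,
});
console.log(imageToText.text, imageToText.cost);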

Remove Image Background

Read Documentation

const runware = new Runware({ apiKey: "API_KEY" });
const image = await runware.removeImageBackground({
	inputImage: string | File
	outputType?: IOutputType;
	outputFormat?: IOutputFormat;
	rgba?: number[];
	postProcessMask?: boolean;
	returnOnlyMask?: boolean;
	alphaMatting?: boolean;
	alphaMattingForegroundThreshold?: number;
	alphaMattingBackgroundThreshold?: number;
	alphaMattingErodeSize?: number;
})
console.log(image)
return interface IImage {
	taskType: ETaskType;
	taskUUID: string;
	imageUUID: string;
	inputImageUUID: string;
	imageURL?: string;
	imageBase64Data?: string;
	imageDataURI?: string;
	cost: number;
}
Parameter | Type | Use
inputImage | string or File | The image whose background is to be removed. It can be the UUID of a previously generated image, or an image from a file.
outputType | IOutputType (Optional) | Specifies the output type in which the image is returned.
outputFormat | IOutputFormat (Optional) | Specifies the format of the output image.
includeCost | boolean (Optional) | If set to true, the cost to perform the task will be included in the response object.
rgba | number[] (Optional) | An array representing the [red, green, blue, alpha] values that define the color of the removed background. The alpha channel controls transparency.
postProcessMask | boolean (Optional) | Flag indicating whether to post-process the mask. Controls whether the mask should undergo additional post-processing.
returnOnlyMask | boolean (Optional) | Flag indicating whether to return only the mask. The mask is the opposite of the image background removal.
alphaMatting | boolean (Optional) | Flag indicating whether to use alpha matting. Alpha matting is a post-processing technique that enhances the quality of the output by refining the edges of the foreground object.
alphaMattingForegroundThreshold | number (Optional) | Threshold value used in alpha matting to distinguish the foreground from the background. Adjusting this parameter affects the sharpness and accuracy of the foreground object edges.
alphaMattingBackgroundThreshold | number (Optional) | Threshold value used in alpha matting to refine the background areas. It influences how aggressively the algorithm removes the background while preserving image details.
alphaMattingErodeSize | number (Optional) | Specifies the size of the erosion operation used in alpha matting. Erosion helps in smoothing the edges of the foreground object for a cleaner removal of the background.
retry | number (default = globalMaxRetries) | The number of retries to make before throwing an error.
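
For example, a sketch that removes the background and returns a PNG with a fully transparent background (the input UUID is a placeholder):

const image = await runware.removeImageBackground({
	inputImage: "59a2edc2-45e6-429f-9e6b-dd0023d03fa2", // placeholder image UUID
	outputType: "URL",
	outputFormat: "PNG",
	rgba: [255, 255, 255, 0], // alpha 0 keeps the removed background transparent
	postProcessMask: true,
});
console.log(image);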

Upscale Image

Read Documentation

const runware = new Runware({ apiKey: "API_KEY" });
const image = await runware.upscaleGan({
	inputImage: File | string;
	upscaleFactor: number;
	outputType?: IOutputType;
	outputFormat?: IOutputFormat;
	includeCost?: boolean
})
console.log(image)
return interface IImage {
	taskType: ETaskType;
	imageUUID: string;
	inputImageUUID?: string;
	taskUUID: string;
	imageURL?: string;
	imageBase64Data?: string;
	imageDataURI?: string;
	NSFWContent?: boolean;
	cost: number;
}
Parameter | Type | Use
inputImage | string or File | The image to be upscaled. It can be the UUID of a previously generated image, or an image from a file.
upscaleFactor | number | The factor by which to upscale the image.
outputType | IOutputType (Optional) | Specifies the output type in which the image is returned.
outputFormat | IOutputFormat (Optional) | Specifies the format of the output image.
includeCost | boolean (Optional) | If set to true, the cost to perform the task will be included in the response object.
retry | number (default = globalMaxRetries) | The number of retries to make before throwing an error.
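
A minimal sketch that upscales a previously generated image by a factor of 2 (the input UUID is a placeholder):

const image = await runware.upscaleGan({
	inputImage: "59a2edc2-45e6-429f-9e6b-dd0023d03fa2", // placeholder image UUID
	upscaleFactor: 2,
	outputType: "URL",
	outputFormat: "PNG",
	includeCost: true,
});
console.log(image);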

Enhance Prompt

Read Documentation

const runware = new Runware({ apiKey: "API_KEY" });
const enhancedPrompt = await runware.enhancePrompt({
	prompt: string;
	promptMaxLength?: number;
	promptVersions?: number;
	includeCost?: boolean;
})
console.log(enhancedPrompt)
return interface IEnhancedPrompt {
	taskUUID: string;
	text: string;
}
Parameter | Type | Use
prompt | string | The prompt that you intend to enhance.
promptMaxLength | number (Optional) | Character count. Represents the maximum length of the prompt that you intend to receive. Can take values between 1 and 380.
promptVersions | number (Optional) | The number of prompt versions that will be received. Can take values between 1 and 5.
includeCost | boolean (Optional) | If set to true, the cost to perform the task will be included in the response object.
retry | number (default = globalMaxRetries) | The number of retries to make before throwing an error.
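
For instance, to request three enhanced variants of a short prompt, capped at 300 characters each (illustrative values):

const enhancedPrompt = await runware.enhancePrompt({
	prompt: "a futuristic city at night", // placeholder prompt
	promptMaxLength: 300,
	promptVersions: 3,
	includeCost: true,
});
console.log(enhancedPrompt);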

ControlNet Preprocess

Read Documentation

const runware = new Runware({ apiKey: "API_KEY" });
const controlNetPreProcessed = await runware.controlNetPreProcess({
	inputImage: string | File;
	preProcessorType: EPreProcessor;
	height?: number;
	width?: number;
	outputType?: IOutputType;
	outputFormat?: IOutputFormat;
	highThresholdCanny?: number;
	lowThresholdCanny?: number;
	includeHandsAndFaceOpenPose?: boolean;
})
console.log(controlNetPreProcessed)
return interface IControlNetImage {
	taskUUID: string;
	inputImageUUID: string;
	guideImageUUID: string;
	guideImageURL?: string;
	guideImageBase64Data?: string;
	guideImageDataURI?: string;
	cost: number;
}
Parameter | Type | Use
inputImage | string or File | Specifies the input image to be preprocessed to generate a guide image.
width | number | Controls the image width.
height | number | Controls the image height.
outputType | IOutputType (Optional) | Specifies the output type in which the image is returned.
outputFormat | IOutputFormat (Optional) | Specifies the format of the output image.
preProcessorType | string (Optional) | Specifies the preprocessor type to use.
includeCost | boolean (Optional) | If set to true, the cost to perform the task will be included in the response object.
lowThresholdCanny | number (Optional) | Defines the lower threshold when using the Canny edge detection preprocessor.
highThresholdCanny | number (Optional) | Defines the high threshold when using the Canny edge detection preprocessor.
includeHandsAndFaceOpenPose | boolean (Optional) | Includes the hands and face in the pose outline when using the OpenPose preprocessor.
retry | number (default = globalMaxRetries) | The number of retries to make before throwing an error.
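
As a sketch, preprocessing an image with the Canny edge detector to obtain a guide image (the input UUID and thresholds are placeholders, and the preprocessor name is illustrative; pass the appropriate EPreProcessor value):

const controlNetPreProcessed = await runware.controlNetPreProcess({
	inputImage: "59a2edc2-45e6-429f-9e6b-dd0023d03fa2", // placeholder image UUID
	preProcessorType: "canny", // illustrative; use the matching EPreProcessor value
	lowThresholdCanny: 100,
	highThresholdCanny: 200,
	outputType: "URL",
});
console.log(controlNetPreProcessed.guideImageURL);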

Model Upload

Read Documentation

const runware = new Runware({ apiKey: "API_KEY" });

const basePayload = {
	air: string;
	name: string;
	downloadUrl: string;
	uniqueIdentifier: string;
	version: string;
	format: EModelFormat;
	architecture: EModelArchitecture;
	heroImageUrl?: string;
	tags?: string[];
	shortDescription?: string;
	comment?: string;
	private: boolean;
	customTaskUUID?: string;
	retry?: number;
	onUploadStream?: (
		response?: IAddModelResponse,
		error?: IErrorResponse
	) => void;
}

const controlNetUpload = await runware.modelUpload({
	...basePayload,
 	category: "controlnet";
 	conditioning: EModelConditioning;
})
console.log(controlNetUpload)

const checkpointUpload = await runware.modelUpload({
	...basePayload,
 	category: "checkpoint";
 	positiveTriggerWords?: string;
	defaultCFGScale?: number;
	defaultStrength: number;
	defaultSteps?: number;
	defaultScheduler?: number;
	type?: EModelType;
})
console.log(checkpointUpload)

const loraUpload = await runware.modelUpload({
	...basePayload,
 	category: "lora";
 	defaultWeight: number;
 	positiveTriggerWords?: string;
})

console.log(loraUpload)

return interface IAddModelResponse {
  status: string;
  message: string;
  taskUUID: string;
  air: string;
  taskType: string;
}

export interface IErrorResponse {
  code: string;
  message: string;
  parameter: string;
  type: string;
  documentation: string;
  taskUUID: string;
}
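
As an illustration, a LoRA upload with concrete placeholder values might look like the sketch below (the AIR identifier, download URL, metadata, and the format and architecture values are all placeholders; use the appropriate EModelFormat and EModelArchitecture values from the SDK):

const loraUpload = await runware.modelUpload({
	air: "yourorg:12345@1", // placeholder AIR identifier
	name: "My Style LoRA", // placeholder name
	downloadUrl: "https://example.com/my-style-lora.safetensors", // placeholder URL
	uniqueIdentifier: "my-style-lora-v1", // placeholder
	version: "1.0",
	format: "safetensors", // illustrative; pass the matching EModelFormat value
	architecture: "sdxl", // illustrative; pass the matching EModelArchitecture value
	private: true,
	category: "lora",
	defaultWeight: 0.8,
	positiveTriggerWords: "my-style", // placeholder trigger words
	onUploadStream: (response, error) => {
		// Receives status updates while the upload task is processed.
		if (error) console.error(error);
		else console.log(response?.status);
	},
});
console.log(loraUpload);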

Photo Maker

Read Documentation

const runware = new Runware({ apiKey: "API_KEY" });

const photoMaker = await runware.photoMaker({
	positivePrompt: string;
	height: number;
	width: number;
	numberResults: number;
	steps?: number;
	inputImages: string[];
	style: EPhotoMakerEnum;
	strength?: number;
	outputFormat?: string;
	includeCost?: boolean;
	customTaskUUID?: string;
	retry?: number;
	onPartialImages?: (images: IImage[], error?: IError) => void
})
console.log(photoMaker)

export interface IImage {
	taskType: ETaskType;
	imageUUID: string;
	inputImageUUID?: string;
	taskUUID: string;
	imageURL?: string;
	imageBase64Data?: string;
	imageDataURI?: string;
	NSFWContent?: boolean;
	cost?: number;
	seed?: number;
}
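
A concrete sketch of the Photo Maker request above (the reference image UUIDs, the style value and strength are placeholders; style must be one of the EPhotoMakerEnum values):

const photoMaker = await runware.photoMaker({
	positivePrompt: "img of a person as a renaissance portrait", // placeholder prompt
	width: 512,
	height: 512,
	numberResults: 1,
	inputImages: [
		"59a2edc2-45e6-429f-9e6b-dd0023d03fa2", // placeholder reference image UUIDs
		"7a0cfc31-6ab1-4e7a-8a7b-1fd708f3bdb1",
	],
	style: "No style", // illustrative; pass the matching EPhotoMakerEnum value
	strength: 15, // illustrative strength
	includeCost: true,
});
console.log(photoMaker);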

Image Masking

Read Documentation

const runware = new Runware({ apiKey: "API_KEY" });

const imageMasking = await runware.imageMask({
  model: string;
  inputImage: string;
  confidence?: number;
  maskPadding?: number;
  maskBlur?: number;
  outputFormat?: string;
  outputType?: string;
  includeCost?: boolean;
  uploadEndpoint?: string;
  customTaskUUID?: string;
  retry?: number;
})
console.log(imageMasking)

export type TImageMaskingResponse = {
  taskType: string;
  taskUUID: string;
  imageUUID: string;

  detections: [
    {
      x_min: number;
      y_min: number;
      x_max: number;
      y_max: number;
    }
  ];
  maskImageURL: string;
  cost: number;
};
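
For example, a sketch that runs a detection model over an input image and returns a padded, blurred mask (the masking model AIR ID and the input UUID are placeholders):

const imageMasking = await runware.imageMask({
	model: "runware:35@1", // placeholder masking model AIR ID
	inputImage: "59a2edc2-45e6-429f-9e6b-dd0023d03fa2", // placeholder image UUID
	confidence: 0.25,
	maskPadding: 4,
	maskBlur: 4,
	outputFormat: "PNG",
	outputType: "URL",
	includeCost: true,
});
// detections holds the bounding boxes found; maskImageURL points at the generated mask.
console.log(imageMasking.detections, imageMasking.maskImageURL);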