The LandingLens JavaScript library contains the LandingLens client packages and examples that show how to integrate your app with LandingLens in a variety of scenarios. To make the examples easy to try, some of them are hosted in CodeSandbox.
| Example | Description | Type |
|---|---|---|
| Poker Card Suit Identification | This example shows how to use an Object Detection model from LandingLens to detect suits on playing cards. | CodeSandbox |
```shell
npm install landingai landingai-react
# OR
yarn add landingai landingai-react
```
This library needs to communicate with the LandingLens platform to perform certain functions (for example, the `getInferenceResult` API calls the HTTP endpoint of your deployed model). To enable communication with LandingLens, you will need the following information:

- The **endpoint ID** of your deployed model in LandingLens
- Your LandingLens **API key**
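Under the hood, an inference call is an HTTP POST of an image to your deployed endpoint. The sketch below shows roughly what such a request looks like; the multipart field name (`file`) and the `apikey` header are assumptions for illustration, not the library's documented internals, so prefer the packages' own APIs in real code.

```typescript
// Hypothetical sketch of the raw HTTP call that the library wraps.
const PREDICT_BASE = "https://predict.app.landing.ai/inference/v1/predict";

// Build the full prediction URL for a deployed endpoint.
export function buildPredictUrl(endpointId: string): string {
  return `${PREDICT_BASE}?endpoint_id=${encodeURIComponent(endpointId)}`;
}

// Send an image to the endpoint and return the parsed JSON response.
export async function predict(
  endpointId: string,
  apiKey: string,
  image: Blob
): Promise<unknown> {
  const form = new FormData();
  form.append("file", image); // assumed multipart field name
  const res = await fetch(buildPredictUrl(endpointId), {
    method: "POST",
    headers: { apikey: apiKey }, // assumed auth header name
    body: form,
  });
  if (!res.ok) throw new Error(`Inference failed: HTTP ${res.status}`);
  return res.json();
}
```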
Collect images and run inference using the endpoint you created in LandingLens:
Create an `apiInfo` object and pass it to `<InferenceContext.Provider>`:

```jsx
import React, { useState } from "react";
import { InferenceContext, InferenceResult, PhotoCollector } from "landingai-react";

const apiInfo = {
  endpoint: `https://predict.app.landing.ai/inference/v1/predict?endpoint_id=<FILL_YOUR_INFERENCE_ENDPOINT_ID>`,
  key: "<FILL_YOUR_API_KEY>",
};

export default function App() {
  const [image, setImage] = useState();
  return (
    <InferenceContext.Provider value={apiInfo}>
      <PhotoCollector setImage={setImage} />
      <InferenceResult image={image} />
    </InferenceContext.Provider>
  );
}
```
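Hardcoding the API key in source is easy to leak. One option is to build the `apiInfo` object from configuration at startup; this is a sketch under stated assumptions (the helper `makeApiInfo` and any environment-variable names are hypothetical, not part of the library):

```typescript
// Build the apiInfo object from configuration instead of hardcoding secrets.
// makeApiInfo is a hypothetical helper, not a landingai-react export.
export function makeApiInfo(endpointId: string, key: string) {
  if (!endpointId || !key) {
    throw new Error("Missing LandingLens endpoint ID or API key");
  }
  return {
    endpoint: `https://predict.app.landing.ai/inference/v1/predict?endpoint_id=${endpointId}`,
    key,
  };
}

// Example usage with bundler-exposed env values (names are hypothetical):
// const apiInfo = makeApiInfo(import.meta.env.VITE_ENDPOINT_ID, import.meta.env.VITE_API_KEY);
```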
See a working example here.