The upload client for https://web3.storage
The @web3-storage/upload-client package provides the "low level" client API for uploading data to web3.storage using the w3up platform. Most users will be better served by the higher-level @web3-storage/w3up-client package, which presents a simpler API and supports creating agents and registering spaces.
If you are using this package directly instead of w3up-client, you will also need to use the @web3-storage/access client for agent and space management. The @web3-storage/capabilities package referenced in the examples below is a transitive dependency of both @web3-storage/upload-client and @web3-storage/access, so you shouldn't need to install it explicitly.
Install the package using npm:
npm install @web3-storage/upload-client
An agent provides:
- The signing authority that issues the UCAN invocation(s) (issuer) - typically the user agent.
- The DID of the "space" the data should be uploaded to (with).
- Proof that the issuer has been delegated capabilities to store data and register uploads to the "space" (proofs).
Example:
import { Agent } from '@web3-storage/access'
import { store } from '@web3-storage/capabilities/store'
import { upload } from '@web3-storage/capabilities/upload'
const agent = await Agent.create()
// Note: you need to create and register a space 1st time:
// await agent.createSpace()
// await agent.registerSpace('you@youremail.com')
const conf = {
issuer: agent.issuer,
with: agent.currentSpace(),
proofs: await agent.proofs([store, upload]),
}
See the @web3-storage/access docs for more about creating and registering spaces.
Once you have the issuer, with and proofs, you can upload a file or directory of files by passing that invocation config to uploadFile or uploadDirectory along with the data to upload. You can get your list of Files from an <input type="file"> element in the browser or using files-from-path in Node.js:
import { uploadFile } from '@web3-storage/upload-client'
const cid = await uploadFile(conf, new Blob(['Hello World!']))
import { uploadDirectory } from '@web3-storage/upload-client'
const cid = await uploadDirectory(conf, [
new File(['doc0'], 'doc0.txt'),
new File(['doc1'], 'dir/doc1.txt'),
])
The buffering API loads all data into memory, so it is suitable only for small files. The root data CID is derived from the data before any transfer to the service takes place.
import { UnixFS, CAR, Store, Upload } from '@web3-storage/upload-client'
// Encode a file as a DAG, get back a root data CID and a set of blocks
const { cid, blocks } = await UnixFS.encodeFile(file)
// Encode the DAG as a CAR file
const car = await CAR.encode(blocks, cid)
// Store the CAR file to the service
const carCID = await Store.add(conf, car)
// Register an "upload" - a root CID contained within the passed CAR file(s)
await Upload.add(conf, cid, [carCID])
This API offers streaming DAG generation, allowing CAR "shards" to be sent to the service as the DAG is built. It allows files and directories of arbitrary size to be sent to the service while keeping within memory limits of the device. The last CAR file sent contains the root data CID.
import {
UnixFS,
ShardingStream,
Store,
Upload,
} from '@web3-storage/upload-client'
let rootCID
const carCIDs = []
// Encode a file as a DAG, get back a readable stream of blocks.
await UnixFS.createFileEncoderStream(file)
// Pipe blocks to a stream that yields CAR files - shards of the DAG.
.pipeThrough(new ShardingStream())
// Each chunk written is a CAR file - store it with the service and collect
// the CID of the CAR shard.
.pipeTo(
new WritableStream({
async write (car) {
const carCID = await Store.add(conf, car)
carCIDs.push(carCID)
rootCID = rootCID || car.roots[0]
},
})
)
// Register an "upload" - a root CID contained within the passed CAR file(s)
await Upload.add(conf, rootCID, carCIDs)
uploadDirectory
function uploadDirectory(
conf: InvocationConfig,
files: File[],
options: {
retries?: number
signal?: AbortSignal
onShardStored?: ShardStoredCallback
onDirectoryEntryLink?: DirectoryEntryLinkCallback
shardSize?: number
concurrentRequests?: number
} = {}
): Promise<CID>
Uploads a directory of files to the service and returns the root data CID for the generated DAG. All files are added to a container directory, with paths in file names preserved.
Required delegated capability proofs: store/add, upload/add
More information: InvocationConfig, ShardStoredCallback
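Example (a minimal sketch: conf is the invocation config created in the agent example above, and the shardSize value is assumed to be a byte count):
import { uploadDirectory } from '@web3-storage/upload-client'
const rootCID = await uploadDirectory(
  conf,
  [new File(['doc0'], 'doc0.txt'), new File(['doc1'], 'dir/doc1.txt')],
  {
    // assumed to be the maximum shard size in bytes
    shardSize: 10 * 1024 * 1024,
    // log every CAR shard as it is stored
    onShardStored: (meta) => console.log('stored shard', meta.cid.toString()),
    // log every encoded directory entry (the exact link shape is not documented here)
    onDirectoryEntryLink: (link) => console.log('directory entry', link),
  }
)
console.log('root data CID:', rootCID.toString())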
uploadFile
function uploadFile(
conf: InvocationConfig,
file: Blob,
options: {
retries?: number
signal?: AbortSignal
onShardStored?: ShardStoredCallback
shardSize?: number
concurrentRequests?: number
} = {}
): Promise<CID>
Uploads a file to the service and returns the root data CID for the generated DAG.
Required delegated capability proofs: store/add, upload/add
More information: InvocationConfig
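Example (a minimal sketch: conf is the invocation config created in the agent example above; the AbortController is only there to show how signal is wired up):
import { uploadFile } from '@web3-storage/upload-client'
const controller = new AbortController()
const rootCID = await uploadFile(conf, new Blob(['Hello World!']), {
  retries: 3,
  signal: controller.signal,
  // called once for every CAR shard stored by the service
  onShardStored: (meta) => console.log('stored shard', meta.cid.toString()),
})
console.log('root data CID:', rootCID.toString())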
uploadCAR
function uploadCAR(
conf: InvocationConfig,
car: Blob,
options: {
retries?: number
signal?: AbortSignal
onShardStored?: ShardStoredCallback
shardSize?: number
concurrentRequests?: number
rootCID?: CID
} = {}
): Promise<CID>
Uploads a CAR file to the service. The difference between this function and Store.add is that the CAR file is automatically sharded and an "upload" is registered (see Upload.add), linking the individual shards. Use the onShardStored callback to obtain the CIDs of the CAR file shards.
Required delegated capability proofs: store/add, upload/add
More information: InvocationConfig, ShardStoredCallback
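Example (a sketch that builds the CAR with the UnixFS and CAR helpers documented below; it assumes, as with the other upload functions, that the returned CID is the root data CID):
import { uploadCAR, UnixFS, CAR } from '@web3-storage/upload-client'
const { cid, blocks } = await UnixFS.encodeFile(new Blob(['Hello World!']))
const car = await CAR.encode(blocks, cid)
const shards = []
const rootCID = await uploadCAR(conf, car, {
  // collect the CIDs of the CAR shards as they are stored
  onShardStored: (meta) => shards.push(meta.cid),
})
console.log('root data CID:', rootCID.toString(), 'shards:', shards.length)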
CAR.BlockStream
class BlockStream extends ReadableStream<Block>
Creates a readable stream of blocks from a CAR file Blob.
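Example (a sketch that assumes the constructor accepts the CAR Blob directly and that each block exposes a cid property; neither is spelled out above):
import { UnixFS, CAR } from '@web3-storage/upload-client'
const { cid, blocks } = await UnixFS.encodeFile(new Blob(['data']))
const car = await CAR.encode(blocks, cid)
// Read the blocks back out of the CAR Blob.
await new CAR.BlockStream(car).pipeTo(
  new WritableStream({
    write: (block) => console.log('block', block.cid.toString()),
  })
)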
CAR.encode
function encode(blocks: Iterable<Block>, root?: CID): Promise<CARFile>
Encode a DAG as a CAR file.
More information: CARFile
Example:
const { cid, blocks } = await UnixFS.encodeFile(new Blob(['data']))
const car = await CAR.encode(blocks, cid)
ShardingStream
class ShardingStream extends TransformStream<Block, CARFile>
Shard a set of blocks into a set of CAR files. The last block written to the stream is assumed to be the DAG root and becomes the CAR root CID for the last CAR output.
More information: CARFile
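Example (a sketch that simply collects the CAR shards into an array; see the streaming upload example above for the full store-and-register flow):
import { UnixFS, ShardingStream } from '@web3-storage/upload-client'
const shards = []
await UnixFS.createFileEncoderStream(new Blob(['data']))
  .pipeThrough(new ShardingStream())
  .pipeTo(new WritableStream({ write: (car) => shards.push(car) }))
// Per the note above, only the last shard is expected to carry the root CID.
console.log(shards.length, 'shard(s), root:', shards.at(-1).roots[0].toString())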
Store.list
function list(
conf: InvocationConfig,
options: { retries?: number; signal?: AbortSignal } = {}
): Promise<ListResponse<StoreListResult>>
List CAR files stored by the issuer.
Required delegated capability proofs: store/list
More information: InvocationConfig
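Example (a sketch: the exact shape of ListResponse and StoreListResult is not documented here, so the results array and the link property of each entry are assumptions):
import { Store } from '@web3-storage/upload-client'
const page = await Store.list(conf)
for (const entry of page.results ?? []) {
  console.log('stored CAR:', entry.link.toString())
}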
Store.remove
function remove(
conf: InvocationConfig,
link: CID,
options: { retries?: number; signal?: AbortSignal } = {}
): Promise<void>
Remove a stored CAR file by CAR CID.
Required delegated capability proofs: store/remove
More information: InvocationConfig
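Example (a sketch: carCID is a CAR CID previously returned by Store.add, as in the buffering example above):
import { Store } from '@web3-storage/upload-client'
const controller = new AbortController()
await Store.remove(conf, carCID, { retries: 2, signal: controller.signal })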
UnixFS.createDirectoryEncoderStream
function createDirectoryEncoderStream(
files: Iterable<File>
): ReadableStream<Block>
Creates a ReadableStream that yields UnixFS DAG blocks. All files are added to a container directory, with paths in file names preserved.
Note: you can use https://npm.im/files-from-path to read files from the filesystem in Node.js.
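Example (a sketch mirroring the streaming file upload above, but for a directory of files):
import {
  UnixFS,
  ShardingStream,
  Store,
  Upload,
} from '@web3-storage/upload-client'
let rootCID
const carCIDs = []
await UnixFS.createDirectoryEncoderStream([
  new File(['doc0'], 'doc0.txt'),
  new File(['doc1'], 'dir/doc1.txt'),
])
  .pipeThrough(new ShardingStream())
  .pipeTo(
    new WritableStream({
      async write (car) {
        carCIDs.push(await Store.add(conf, car))
        rootCID = rootCID || car.roots[0]
      },
    })
  )
await Upload.add(conf, rootCID, carCIDs)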
UnixFS.createFileEncoderStream
function createFileEncoderStream(file: Blob): ReadableStream<Block>
Creates a ReadableStream that yields UnixFS DAG blocks.
UnixFS.encodeDirectory
function encodeDirectory(
files: Iterable<File>
): Promise<{ cid: CID; blocks: Block[] }>
Create a UnixFS DAG from the passed file data. All files are added to a container directory, with paths in file names preserved.
Note: you can use https://npm.im/files-from-path to read files from the filesystem in Node.js.
Example:
const { cid, blocks } = await encodeDirectory([
new File(['doc0'], 'doc0.txt'),
new File(['doc1'], 'dir/doc1.txt'),
])
// DAG structure will be:
// bafybei.../doc0.txt
// bafybei.../dir/doc1.txt
UnixFS.encodeFile
function encodeFile(file: Blob): Promise<{ cid: CID; blocks: Block[] }>
Create a UnixFS DAG from the passed file data.
Example:
const { cid, blocks } = await encodeFile(new File(['data'], 'doc.txt'))
// Note: file name is not preserved - use encodeDirectory if required.
Upload.add
function add(
conf: InvocationConfig,
root: CID,
shards: CID[],
options: { retries?: number; signal?: AbortSignal } = {}
): Promise<UploadAddResponse>
Register a set of stored CAR files as an "upload" in the system. A DAG can be split between multiple CAR files. Calling this function allows multiple stored CAR files to be considered as a single upload.
Required delegated capability proofs: upload/add
More information: InvocationConfig
Upload.list
function list(
conf: InvocationConfig,
options: { retries?: number; signal?: AbortSignal } = {}
): Promise<ListResponse<UploadListResult>>
List uploads created by the issuer.
Required delegated capability proofs: upload/list
More information: InvocationConfig
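Example (a sketch: as with Store.list, the results array and the root property of each entry are assumptions about the undocumented response shape):
import { Upload } from '@web3-storage/upload-client'
const page = await Upload.list(conf)
for (const entry of page.results ?? []) {
  console.log('upload root:', entry.root.toString())
}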
Upload.remove
function remove(
conf: InvocationConfig,
link: CID,
options: { retries?: number; signal?: AbortSignal } = {}
): Promise<void>
Remove an upload by root data CID.
Required delegated capability proofs: upload/remove
More information: InvocationConfig
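Example (a sketch: rootCID is a root data CID previously returned by uploadFile, uploadDirectory or uploadCAR):
import { Upload } from '@web3-storage/upload-client'
await Upload.remove(conf, rootCID)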
CARFile
A Blob with two extra properties:
type CARFile = Blob & { version: 1; roots: CID[] }
CARMetadata
Metadata pertaining to a CAR file.
export interface CARMetadata {
/**
* CAR version number.
*/
version: number
/**
* Root CIDs present in the CAR header.
*/
roots: CID[]
/**
* CID of the CAR file (not the data it contains).
*/
cid: CID
/**
* Piece CID of the CAR file.
*/
piece: CID
/**
* Size of the CAR file in bytes.
*/
size: number
}
DirectoryEntryLinkCallback
Callback for every DAG encoded directory entry, including the root. It includes the CID, name (full path) and DAG size in bytes.
type DirectoryEntryLinkCallback = (link: DirectoryEntryLink) => void
InvocationConfig
This is the configuration for the UCAN invocation. Its values can be obtained from an Agent. See Create an Agent for an example. It is an object with issuer, with and proofs:
- issuer is the signing authority that is issuing the UCAN invocation(s). It is typically the user agent.
- with is the resource the invocation acts on - the DID of the "space" the data is uploaded to.
- proofs are a set of capability delegations that prove the issuer has the capability to perform the action.
ShardStoredCallback
A function called after a DAG shard has been successfully stored by the service:
type ShardStoredCallback = (meta: CARMetadata) => void
More information: CARMetadata
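Example (a callback that logs the CARMetadata fields documented above, passed to uploadFile via the onShardStored option):
import { uploadFile } from '@web3-storage/upload-client'
const onShardStored = (meta) => {
  console.log('shard CAR CID:', meta.cid.toString())
  console.log('roots:', meta.roots.map((r) => r.toString()))
  console.log('size (bytes):', meta.size)
}
await uploadFile(conf, new Blob(['Hello World!']), { onShardStored })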
Feel free to join in. All welcome. Please open an issue!
Dual-licensed under MIT + Apache 2.0