@exodus/bytes

    Uint8Array conversion to and from base64, base32, base58, hex, utf8, utf16, bech32 and wif

    And a TextEncoder / TextDecoder polyfill

    See documentation.

    Performs proper input validation, ensuring no garbage in, garbage out

    Tested in CI with @exodus/test on:

    Node.js Deno Bun Electron workerd
    Chrome WebKit Firefox Servo
    Hermes V8 JavaScriptCore SpiderMonkey
    QuickJS XS GraalJS

    • 10-20x faster than Buffer polyfill
    • 2-10x faster than iconv-lite

    The numbers above are for the JS fallback.

    Speedups reach up to 100x when a native implementation is available,
    e.g. in utf8fromString on Hermes / React Native, or fromHex in Chrome

    Also:

    • 3-8x faster than bs58
    • 10-30x faster than @scure/base (or >100x on Node.js <25)
    • Faster in utf8toString / utf8fromString than Buffer or TextDecoder / TextEncoder on Node.js

    See Performance for more info

    import { TextDecoder, TextEncoder } from '@exodus/bytes/encoding.js'
    import { TextDecoderStream, TextEncoderStream } from '@exodus/bytes/encoding.js' // Requires Streams

    Less than half the bundle size of text-encoding, whatwg-encoding or iconv-lite (gzipped or not).
    Also much faster than all of those.

    Tip

    See also the lite version to get this down to 8 KiB gzipped.

    Spec compliant, passing WPT and covered with extra tests.
    Moreover, tests for this library uncovered bugs in all major implementations.
    Including all three major browser engines being wrong at UTF-8.
    See WPT pull request.

    It works correctly even in environments whose native implementations are broken (currently, that is all of them).
    Runs (and passes WPT) on Node.js built without ICU.

    Note

    Faster than the native implementation on Node.js.

    The JS multi-byte version is as fast as the native implementations in Node.js and browsers, but (unlike them) returns correct results.

    For encodings where native version is known to be fast and correct, it is automatically used.
    Some single-byte encodings are faster than native in all three major browser engines.

    See analysis table for more info.

    These are provided only as a compatibility layer; prefer hardened APIs in new code.

    • TextDecoder can (and should) be used with { fatal: true } option for all purposes demanding correctness / lossless transforms

    • TextEncoder does not support a fatal mode per spec, it always performs replacement.

      That is not suitable for hashing, cryptography or consensus applications:
      it would produce non-equal strings with equal signatures and hashes. The collision is caused by the lossy transform of a JS string to bytes, and such strings also survive e.g. JSON.stringify/JSON.parse or being sent over the network.

      Use strict APIs in new applications, see utf8fromString / utf16fromString below.
      Those throw on non-well-formed strings by default.
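    The collision described above can be reproduced with the standard TextEncoder, which silently maps distinct malformed strings to identical bytes:

```javascript
// Two different strings, each containing a lone (unpaired) surrogate
const a = '\ud800' // unpaired high surrogate
const b = '\udc00' // unpaired low surrogate

const enc = new TextEncoder()
// Both encode to the same replacement sequence for U+FFFD
const bytesA = enc.encode(a) // Uint8Array [0xef, 0xbf, 0xbd]
const bytesB = enc.encode(b) // Uint8Array [0xef, 0xbf, 0xbd]
// a !== b, yet bytesA and bytesB are byte-for-byte equal
```

    Strict APIs like utf8fromString reject such inputs instead of producing colliding outputs.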

    Alternate exports exist that can help reduce bundle size, see comparison:

    import size
    @exodus/bytes/encoding-browser.js
    @exodus/bytes/encoding-lite.js
    @exodus/bytes/encoding.js
    text-encoding
    iconv-lite
    whatwg-encoding

    Libraries are advised to use single-purpose hardened @exodus/bytes/utf8.js / @exodus/bytes/utf16.js APIs for Unicode.

    Applications (including React Native apps) are advised to load either @exodus/bytes/encoding-lite.js or @exodus/bytes/encoding.js (depending on whether legacy multi-byte support is needed) and use that as a global polyfill.

    Use this if you don't need support for legacy multi-byte encodings.

    Reduces the bundle size ~12x, while still keeping utf-8, utf-16le, utf-16be and all single-byte encodings specified by the spec. The only difference is support for legacy multi-byte encodings.

    See the list of encodings.

    This can be useful for example in React Native global TextDecoder polyfill, if you are sure that you don't need legacy multi-byte encodings support.

    Resolves to a tiny import in browser bundles, preferring native TextDecoder / TextEncoder.

    For non-browsers (Node.js, React Native), loads a full implementation.

    Note

    This is not the default behavior for @exodus/bytes/encoding.js because all major browser implementations have bugs, which @exodus/bytes/encoding.js fixes. Only use this import if you are OK with those bugs.

    UTF-8 encoding/decoding

    import { utf8fromString, utf8toString } from '@exodus/bytes/utf8.js'

    // loose
    import { utf8fromStringLoose, utf8toStringLoose } from '@exodus/bytes/utf8.js'

    These methods by design encode/decode BOM (codepoint U+FEFF Byte Order Mark) as-is.
    If you need BOM handling or detection, use @exodus/bytes/encoding.js

    Encode a string to UTF-8 bytes (strict mode)

    Throws on invalid Unicode (unpaired surrogates)

    This is similar to the following snippet (but works on all engines):

    // Strict encode, requiring Unicode codepoints to be valid
    if (typeof string !== 'string' || !string.isWellFormed()) throw new TypeError()
    return new TextEncoder().encode(string)

    Encode a string to UTF-8 bytes (loose mode)

    Replaces invalid Unicode (unpaired surrogates) with replacement codepoints U+FFFD per WHATWG Encoding specification.

    Such replacement is a non-injective function, is irreversible and causes collisions.
    Prefer using strict throwing methods for cryptography applications.

    This is similar to the following snippet (but works on all engines):

    // Loose encode, replacing invalid Unicode codepoints with U+FFFD
    if (typeof string !== 'string') throw new TypeError()
    return new TextEncoder().encode(string)

    Decode UTF-8 bytes to a string (strict mode)

    Throws on invalid UTF-8 byte sequences

    This is similar to new TextDecoder('utf-8', { fatal: true, ignoreBOM: true }).decode(arr), but works on all engines.
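    For example, with the native TextDecoder in fatal mode (on an engine where it behaves correctly):

```javascript
const strict = new TextDecoder('utf-8', { fatal: true, ignoreBOM: true })

const ok = strict.decode(Uint8Array.of(0x68, 0x69)) // 'hi'

// 0xff can never appear in valid UTF-8, so fatal mode throws a TypeError
let threw = false
try {
  strict.decode(Uint8Array.of(0xff))
} catch (e) {
  threw = e instanceof TypeError
}
```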

    Decode UTF-8 bytes to a string (loose mode)

    Replaces invalid UTF-8 byte sequences with replacement codepoints U+FFFD per WHATWG Encoding specification.

    Such replacement is a non-injective function, is irreversible and causes collisions.
    Prefer using strict throwing methods for cryptography applications.

    This is similar to new TextDecoder('utf-8', { ignoreBOM: true }).decode(arr), but works on all engines.
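    In loose mode each invalid byte sequence becomes U+FFFD instead of throwing:

```javascript
const loose = new TextDecoder('utf-8', { ignoreBOM: true })

// 0xff is invalid UTF-8 and is replaced with U+FFFD
const replaced = loose.decode(Uint8Array.of(0x68, 0xff, 0x69)) // 'h\ufffdi'
```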

    UTF-16 encoding/decoding

    import { utf16fromString, utf16toString } from '@exodus/bytes/utf16.js'

    // loose
    import { utf16fromStringLoose, utf16toStringLoose } from '@exodus/bytes/utf16.js'

    These methods by design encode/decode BOM (codepoint U+FEFF Byte Order Mark) as-is.
    If you need BOM handling or detection, use @exodus/bytes/encoding.js

    Encode a string to UTF-16 bytes (strict mode)

    Throws on invalid Unicode (unpaired surrogates)

    Encode a string to UTF-16 bytes (loose mode)

    Replaces invalid Unicode (unpaired surrogates) with replacement codepoints U+FFFD per WHATWG Encoding specification.

    Such replacement is a non-injective function, is irreversible and causes collisions.
    Prefer using strict throwing methods for cryptography applications.

    Decode UTF-16 bytes to a string (strict mode)

    Throws on invalid UTF-16 byte sequences

    Throws on non-even byte length.

    Decode UTF-16 bytes to a string (loose mode)

    Replaces invalid UTF-16 byte sequences with replacement codepoints U+FFFD per WHATWG Encoding specification.

    Such replacement is a non-injective function, is irreversible and causes collisions.
    Prefer using strict throwing methods for cryptography applications.

    Throws on non-even byte length.
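    For comparison, the native TextDecoder('utf-16le') maps pairs of bytes to code units; unlike the APIs above, in non-fatal mode it replaces a trailing odd byte instead of throwing:

```javascript
const d16 = new TextDecoder('utf-16le')

const even = d16.decode(Uint8Array.of(0x68, 0x00, 0x69, 0x00)) // 'hi'
// Trailing lone byte at end of input becomes U+FFFD in non-fatal mode
const odd = d16.decode(Uint8Array.of(0x68, 0x00, 0x69)) // 'h\ufffd'
```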

    Convert between BigInt and Uint8Array

    import { fromBigInt, toBigInt } from '@exodus/bytes/bigint.js'
    

    Convert a BigInt to a Uint8Array or Buffer

    The output bytes are in big-endian format.

    Throws if the BigInt is negative or cannot fit into the specified length.

    Convert a Uint8Array or Buffer to a BigInt

    The bytes are interpreted as a big-endian unsigned integer.
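    A minimal sketch of these big-endian conversion semantics (illustrative only, with hypothetical function names; not the library's implementation):

```javascript
// Big-endian, unsigned: most significant byte first
function bigintToBytesSketch(value, length) {
  if (value < 0n) throw new RangeError('negative BigInt not supported')
  const out = new Uint8Array(length)
  for (let i = length - 1; i >= 0; i--) {
    out[i] = Number(value & 0xffn)
    value >>= 8n
  }
  if (value !== 0n) throw new RangeError('BigInt does not fit into length')
  return out
}

function bytesToBigintSketch(bytes) {
  let value = 0n
  for (const byte of bytes) value = (value << 8n) | BigInt(byte)
  return value
}

bigintToBytesSketch(0x010203n, 4) // Uint8Array [0, 1, 2, 3]
bytesToBigintSketch(Uint8Array.of(0, 1, 2, 3)) // 66051n (= 0x010203n)
```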

    Implements Base16 from RFC4648 (no differences from RFC3548).

    import { fromHex, toHex } from '@exodus/bytes/hex.js'
    

    Decode a hex string to bytes

    Unlike Buffer.from(), throws on invalid input
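    To illustrate the difference: Node's Buffer truncates at the first invalid character instead of throwing:

```javascript
// Buffer silently drops everything from the first non-hex character on
const b = Buffer.from('12zz34', 'hex')
b.length // 1
b[0] // 0x12
```

    fromHex throws on such input, so corrupted hex cannot silently produce short output.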

    Encode a Uint8Array to a lowercase hex string

    Implements base64 and base64url from RFC4648 (no differences from RFC3548).

    import { fromBase64, toBase64 } from '@exodus/bytes/base64.js'
    import { fromBase64url, toBase64url } from '@exodus/bytes/base64.js'
    import { fromBase64any } from '@exodus/bytes/base64.js'

    Decode a base64 string to bytes

    Operates in strict mode for last chunk, does not allow whitespace

    Decode a base64url string to bytes

    Operates in strict mode for last chunk, does not allow whitespace

    Decode either base64 or base64url string to bytes

    Automatically detects the variant based on characters present

    Encode a Uint8Array to a base64 string (RFC 4648)

    Encode a Uint8Array to a base64url string (RFC 4648)
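    The two variants differ only in the characters used for values 62 and 63, which is what makes automatic detection possible. This can be seen with Node's Buffer:

```javascript
// These three bytes split into four sextets, each equal to 0b111110 = 62
const bytes = Uint8Array.of(0xfb, 0xef, 0xbe)

Buffer.from(bytes).toString('base64') // '++++' (base64 uses '+' and '/')
Buffer.from(bytes).toString('base64url') // '----' (base64url uses '-' and '_')
```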

    Implements base32 and base32hex from RFC4648 (no differences from RFC3548).

    import { fromBase32, toBase32 } from '@exodus/bytes/base32.js'
    import { fromBase32hex, toBase32hex } from '@exodus/bytes/base32.js'

    Decode a base32 string to bytes

    Operates in strict mode for last chunk, does not allow whitespace

    Decode a base32hex string to bytes

    Operates in strict mode for last chunk, does not allow whitespace

    Decode a Crockford base32 string to bytes

    Operates in strict mode for last chunk, does not allow whitespace

    Crockford base32 decoding follows the extra mapping per spec: I, L, i, l -> 1; O, o -> 0

    Encode a Uint8Array to a base32 string (RFC 4648)

    Encode a Uint8Array to a base32hex string (RFC 4648)

    Encode a Uint8Array to a Crockford base32 string

    Implements bech32 and bech32m from BIP-0173 and BIP-0350.

    import { fromBech32, toBech32 } from '@exodus/bytes/bech32.js'
    import { fromBech32m, toBech32m } from '@exodus/bytes/bech32.js'
    import { getPrefix } from '@exodus/bytes/bech32.js'

    Extract the prefix from a bech32 or bech32m string without full validation

    This is a quick check that skips most validation.
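    The idea can be sketched as follows (a hypothetical simplification; the real getPrefix also performs character and casing checks):

```javascript
// Bech32 strings have the form <prefix> '1' <data>; the data part's charset
// never contains '1', so the human-readable prefix is everything before the
// last '1' separator
const getPrefixSketch = (s) => {
  const sep = s.lastIndexOf('1')
  if (sep < 1) throw new Error('missing separator')
  return s.slice(0, sep)
}

getPrefixSketch('bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4') // 'bc'
```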

    Decode a bech32 string to bytes

    Encode bytes to a bech32 string

    Decode a bech32m string to bytes

    Encode bytes to a bech32m string

    Implements base58 encoding.

    Supports both standard base58 and XRP variant alphabets.

    import { fromBase58, toBase58 } from '@exodus/bytes/base58.js'
    import { fromBase58xrp, toBase58xrp } from '@exodus/bytes/base58.js'

    Decode a base58 string to bytes

    Uses the standard Bitcoin base58 alphabet

    Encode a Uint8Array to a base58 string

    Uses the standard Bitcoin base58 alphabet

    Decode a base58 string to bytes using XRP alphabet

    Uses the XRP variant base58 alphabet

    Encode a Uint8Array to a base58 string using XRP alphabet

    Uses the XRP variant base58 alphabet

    Implements base58check encoding.

    import { fromBase58check, toBase58check } from '@exodus/bytes/base58check.js'
    import { fromBase58checkSync, toBase58checkSync } from '@exodus/bytes/base58check.js'
    import { makeBase58check } from '@exodus/bytes/base58check.js'

    On non-Node.js, requires peer dependency @noble/hashes to be installed.

    Decode a base58check string to bytes asynchronously

    Validates the checksum using double SHA-256

    Encode bytes to base58check string asynchronously

    Uses double SHA-256 for checksum calculation

    Decode a base58check string to bytes synchronously

    Validates the checksum using double SHA-256

    Encode bytes to base58check string synchronously

    Uses double SHA-256 for checksum calculation

    Create a base58check encoder/decoder with custom hash functions

    Wallet Import Format (WIF) encoding and decoding.

    import { fromWifString, toWifString } from '@exodus/bytes/wif.js'
    import { fromWifStringSync, toWifStringSync } from '@exodus/bytes/wif.js'

    On non-Node.js, requires peer dependency @noble/hashes to be installed.

    Decode a WIF string to WIF data

    Returns a promise that resolves to an object with { version, privateKey, compressed }.

    The optional version parameter validates the version byte.

    Throws if the WIF string is invalid or version doesn't match.

    Decode a WIF string to WIF data (synchronous)

    Returns an object with { version, privateKey, compressed }.

    The optional version parameter validates the version byte.

    Throws if the WIF string is invalid or version doesn't match.

    Encode WIF data to a WIF string

    Encode WIF data to a WIF string (synchronous)

    TypedArray utils and conversions.

    import { typedCopyBytes, typedView } from '@exodus/bytes/array.js'
    

    Create a copy of TypedArray underlying bytes in the specified format ('uint8', 'buffer', or 'arraybuffer')

    This does not copy values, but copies the underlying bytes. The result is similar to that of typedView(), but this function provides a copy, not a view of the same memory.

    Warning

    Copying underlying bytes from Uint16Array (or other with BYTES_PER_ELEMENT > 1) is platform endianness-dependent.

    Note

    Buffer might be pooled. Uint8Array return values are not pooled and match their underlying ArrayBuffer.

    Create a view of a TypedArray in the specified format ('uint8' or 'buffer')

    Important

    Does not copy data, returns a view on the same underlying buffer

    Warning

    Viewing Uint16Array (or other with BYTES_PER_ELEMENT > 1) as bytes is platform endianness-dependent.
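    The endianness dependence can be seen directly with plain typed arrays:

```javascript
const u16 = Uint16Array.of(0x0102)

// A byte view of the same memory: [0x02, 0x01] on little-endian platforms,
// [0x01, 0x02] on big-endian ones
const bytes = new Uint8Array(u16.buffer, u16.byteOffset, u16.byteLength)
```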

    Implements the Encoding standard: TextDecoder, TextEncoder, TextDecoderStream, TextEncoderStream, some hooks.

    import { TextDecoder, TextEncoder } from '@exodus/bytes/encoding.js'
    import { TextDecoderStream, TextEncoderStream } from '@exodus/bytes/encoding.js' // Requires Streams
    import { isomorphicDecode, isomorphicEncode } from '@exodus/bytes/encoding.js'

    // Hooks for standards
    import { getBOMEncoding, legacyHookDecode, labelToName, normalizeEncoding } from '@exodus/bytes/encoding.js'

    TextDecoder implementation/polyfill.

    Decode bytes to strings according to WHATWG Encoding specification.

    TextEncoder implementation/polyfill.

    Encode strings to UTF-8 bytes according to WHATWG Encoding specification.

    TextDecoderStream implementation/polyfill.

    A Streams wrapper for TextDecoder.

    Requires Streams to be either supported by the platform or polyfilled.

    TextEncoderStream implementation/polyfill.

    A Streams wrapper for TextEncoder.

    Requires Streams to be either supported by the platform or polyfilled.

    Implements isomorphic decode.

    Given a TypedArray or an ArrayBuffer instance input, creates a string of the same length as input byteLength, using bytes from input as codepoints.

    E.g. for Uint8Array input, this is similar to String.fromCodePoint(...input).

    Wider TypedArray inputs, e.g. Uint16Array, are interpreted as underlying bytes.
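    For a Uint8Array input this behaves like mapping each byte to the code point with the same value:

```javascript
// 0x48 -> 'H', 0xff -> '\u00ff'
const decoded = String.fromCodePoint(...Uint8Array.of(0x48, 0xff)) // 'H\u00ff'
```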

    Implements isomorphic encode.

    Given a string, creates a Uint8Array of the same length with the string codepoints as byte values.

    Accepts only isomorphic string input and asserts that, throwing on any strings containing codepoints higher than U+00FF.
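    A minimal sketch of this behavior (illustrative, with a hypothetical function name; not the library's implementation):

```javascript
function isomorphicEncodeSketch(str) {
  const out = new Uint8Array(str.length)
  for (let i = 0; i < str.length; i++) {
    const code = str.charCodeAt(i)
    // Only isomorphic strings are accepted: every code unit must fit in a byte
    if (code > 0xff) throw new TypeError('string is not isomorphic')
    out[i] = code
  }
  return out
}

isomorphicEncodeSketch('H\u00ff') // Uint8Array [0x48, 0xff]
```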

    Implements get an encoding from a string label.

    Convert an encoding label to its name, as a case-sensitive string.

    If an encoding with that label does not exist, returns null.

    All encoding names are also valid labels for corresponding encodings.

    Convert an encoding label to its name, as an ASCII-lowercased string.

    If an encoding with that label does not exist, returns null.

    This is the same as decoder.encoding getter, except that it:

    1. Supports replacement encoding and its labels
    2. Does not throw for invalid labels and instead returns null

    It is identical to:

    labelToName(label)?.toLowerCase() ?? null
    

    All encoding names are also valid labels for corresponding encodings.

    Implements BOM sniff legacy hook.

    Given a TypedArray or an ArrayBuffer instance input, returns either of:

    • 'utf-8', if input starts with UTF-8 byte order mark.
    • 'utf-16le', if input starts with UTF-16LE byte order mark.
    • 'utf-16be', if input starts with UTF-16BE byte order mark.
    • null otherwise.
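    The three byte order marks are fixed sequences, so the hook can be sketched as (illustrative sketch; the function name is hypothetical):

```javascript
function sniffBOMSketch(bytes) {
  const [a, b, c] = bytes
  if (a === 0xef && b === 0xbb && c === 0xbf) return 'utf-8' // EF BB BF
  if (a === 0xff && b === 0xfe) return 'utf-16le' // FF FE
  if (a === 0xfe && b === 0xff) return 'utf-16be' // FE FF
  return null
}

sniffBOMSketch(Uint8Array.of(0xef, 0xbb, 0xbf, 0x68)) // 'utf-8'
```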

    Implements decode legacy hook.

    Given a TypedArray or an ArrayBuffer instance input and an optional fallbackEncoding encoding label, sniffs encoding from BOM with fallbackEncoding fallback and then decodes the input using that encoding, skipping BOM if it was present.

    Notes:

    • BOM-sniffed encoding takes precedence over fallbackEncoding option per spec. Use with care.
    • Always operates in non-fatal mode, aka replacement. It can convert different byte sequences to equal strings.

    This method is similar to the following code, except that it doesn't support encoding labels and only expects lowercased encoding name:

    new TextDecoder(getBOMEncoding(input) ?? fallbackEncoding).decode(input)
    

    The exact same exports as @exodus/bytes/encoding.js are also exported as @exodus/bytes/encoding-lite.js, with the difference that the lite version does not load multi-byte TextDecoder encodings by default to reduce bundle size ~12x.

    import { TextDecoder, TextEncoder } from '@exodus/bytes/encoding-lite.js'
    import { TextDecoderStream, TextEncoderStream } from '@exodus/bytes/encoding-lite.js' // Requires Streams
    import { isomorphicDecode, isomorphicEncode } from '@exodus/bytes/encoding-lite.js'

    // Hooks for standards
    import { getBOMEncoding, legacyHookDecode, labelToName, normalizeEncoding } from '@exodus/bytes/encoding-lite.js'

    The only affected encodings are: gbk, gb18030, big5, euc-jp, iso-2022-jp, shift_jis and their labels when used with TextDecoder.

    Legacy single-byte encodings are loaded by default in both cases.

    TextEncoder and hooks for standards (including labelToName / normalizeEncoding) do not have any behavior differences in the lite version and support the full range of inputs.

    To avoid inconsistencies, the exported classes and methods are exactly the same objects.

    > lite = require('@exodus/bytes/encoding-lite.js')
    [Module: null prototype] {
    TextDecoder: [class TextDecoder],
    TextDecoderStream: [class TextDecoderStream],
    TextEncoder: [class TextEncoder],
    TextEncoderStream: [class TextEncoderStream],
    getBOMEncoding: [Function: getBOMEncoding],
    labelToName: [Function: labelToName],
    legacyHookDecode: [Function: legacyHookDecode],
    normalizeEncoding: [Function: normalizeEncoding]
    }
    > new lite.TextDecoder('big5').decode(Uint8Array.of(0x25))
    Uncaught:
    Error: Legacy multi-byte encodings are disabled in /encoding-lite.js, use /encoding.js for full encodings range support

    > full = require('@exodus/bytes/encoding.js')
    [Module: null prototype] {
    TextDecoder: [class TextDecoder],
    TextDecoderStream: [class TextDecoderStream],
    TextEncoder: [class TextEncoder],
    TextEncoderStream: [class TextEncoderStream],
    getBOMEncoding: [Function: getBOMEncoding],
    labelToName: [Function: labelToName],
    legacyHookDecode: [Function: legacyHookDecode],
    normalizeEncoding: [Function: normalizeEncoding]
    }
    > full.TextDecoder === lite.TextDecoder
    true
    > new full.TextDecoder('big5').decode(Uint8Array.of(0x25))
    '%'
    > new lite.TextDecoder('big5').decode(Uint8Array.of(0x25))
    '%'

    Same as @exodus/bytes/encoding.js, but in browsers instead of polyfilling just uses whatever the browser provides, drastically reducing the bundle size (to less than 2 KiB gzipped).

    Does not provide isomorphicDecode and isomorphicEncode exports.

    import { TextDecoder, TextEncoder } from '@exodus/bytes/encoding-browser.js'
    import { TextDecoderStream, TextEncoderStream } from '@exodus/bytes/encoding-browser.js' // Requires Streams

    // Hooks for standards
    import { getBOMEncoding, legacyHookDecode, labelToName, normalizeEncoding } from '@exodus/bytes/encoding-browser.js'

    Under non-browser engines (Node.js, React Native, etc.) a full polyfill is used as those platforms do not provide sufficiently complete / non-buggy TextDecoder APIs.

    Note

    Implementations in browsers have bugs, but they are fixing them and the expected update window is short.
    If you want to circumvent browser bugs, use full @exodus/bytes/encoding.js import.

    WHATWG helpers

    import '@exodus/bytes/encoding.js' // For full legacy multi-byte encodings support
    import { percentEncodeAfterEncoding } from '@exodus/bytes/whatwg.js'

    Implements percent-encode after encoding per WHATWG URL specification.

    Important

    You must import @exodus/bytes/encoding.js for this API to accept legacy multi-byte encodings.

    Encodings utf-16le, utf-16be, and replacement are not accepted.

    C0 control percent-encode set is always percent-encoded.

    percentEncodeSet is an addition to that, and must be a string of unique increasing codepoints in range 0x20 - 0x7e, e.g. ' "#<>'.

    This method accepts DOMStrings and converts them to USVStrings. This is different from e.g. encodeURI and encodeURIComponent which throw on surrogates:

    > percentEncodeAfterEncoding('utf8', '\ud800', ' "#$%&+,/:;<=>?@[\\]^`{|}') // component
    '%EF%BF%BD'
    > encodeURIComponent('\ud800')
    Uncaught URIError: URI malformed

    See GitHub Releases tab

    MIT