main repo

Basilosaurusrex
2025-11-24 18:09:40 +01:00
parent b636ee5e70
commit f027651f9b
34146 changed files with 4436636 additions and 0 deletions

21 node_modules/meshoptimizer/LICENSE.md generated vendored Normal file

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2016-2024 Arseny Kapoulkine

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

232 node_modules/meshoptimizer/README.md generated vendored Normal file

@@ -0,0 +1,232 @@
# meshoptimizer.js
This folder contains JavaScript/WebAssembly modules that can be used to access parts of the functionality of the meshoptimizer library. While normally these would be used internally by glTF loaders, processors and other Web optimization tools, they can also be used directly if needed. The modules are available as an [NPM package](https://www.npmjs.com/package/meshoptimizer) but can also be redistributed individually on a file-by-file basis.
## Structure
Each component comes in two variants:
- `meshopt_component.js` uses a UMD-style module declaration and can be used by a wide variety of JavaScript module loaders, including Node.js `require()`, AMD and CommonJS; it can also be loaded into a web page directly via a `<script>` tag, which exposes the module as a global variable
- `meshopt_component.module.js` uses ES6 module exports and can be imported from another ES6 module
In either case the export name is `MeshoptComponent`; it is an object with two fields:
- `supported` is a boolean that can be checked to see if the component is supported by the current execution environment; it will generally be `false` when WebAssembly is not supported or enabled. To use these components in browsers without WebAssembly, a polyfill library is recommended.
- `ready` is a Promise that is resolved when WebAssembly compilation and initialization finish; it is unsafe to call any functions before that happens.
In addition to that, each component exposes a set of specific functions documented below.
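For illustration, a minimal initialization sketch (assuming the ES module variant and an environment with top-level `await`) could look like this:
```ts
import { MeshoptDecoder } from './meshopt_decoder.module.js';

if (!MeshoptDecoder.supported) {
	throw new Error('WebAssembly is unavailable; consider a polyfill');
}
await MeshoptDecoder.ready; // functions are unsafe to call before this resolves
```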
## Decoder
`MeshoptDecoder` (`meshopt_decoder.js`) implements high performance decompression of attribute and index buffers encoded using meshopt compression. This can be used to decompress glTF buffers encoded with `EXT_meshopt_compression` extension or for custom geometry compression pipelines. The module contains two implementations, scalar and SIMD, with the best performing implementation selected automatically. When SIMD is available, the decoders run at 1-3 GB/s on modern desktop computers.
To decode a buffer, one of the decoding functions should be called:
```ts
decodeVertexBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array, filter?: string) => void;
decodeIndexBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array) => void;
decodeIndexSequence: (target: Uint8Array, count: number, size: number, source: Uint8Array) => void;
```
The `source` should contain the data encoded using meshopt codecs; `count` represents the number of elements (attributes or indices); `size` represents the size of each element and should be divisible by 4 for `decodeVertexBuffer` and equal to 2 or 4 for the index decoders. `target` must be `count * size` bytes.
Given a valid encoded buffer and correct input parameters, these functions always succeed; they can fail only if the input data is malformed.
When decoding attribute (vertex) data, one of the decoding filters can additionally be applied to further post-process the decoded data. `filter` must be equal to `"OCTAHEDRAL"`, `"QUATERNION"` or `"EXPONENTIAL"` to activate this extra step. A description of the filters can be found in [the specification for EXT_meshopt_compression](https://github.com/KhronosGroup/glTF/blob/master/extensions/2.0/Vendor/EXT_meshopt_compression/README.md).
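For example, decoding a compressed buffer of 128 vertices, 16 bytes each, with the exponential filter applied, could look like this sketch (`encoded` is a hypothetical `Uint8Array` produced by the encoder):
```ts
const count = 128; // number of vertices
const size = 16; // bytes per vertex; must be divisible by 4
const target = new Uint8Array(count * size);
MeshoptDecoder.decodeVertexBuffer(target, count, size, encoded, 'EXPONENTIAL');
const positions = new Float32Array(target.buffer); // filtered output decodes to single-precision floats
```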
To simplify decoding further, a wrapper function is provided that automatically calls the correct decoding function based on `mode`, which should be `"ATTRIBUTES"`, `"TRIANGLES"` or `"INDICES"`. The terminology differs because the JavaScript API uses the terms established by the glTF extension, whereas the function names match those of the meshoptimizer C++ API.
```ts
decodeGltfBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array, mode: string, filter?: string) => void;
```
Note that all functions above run synchronously; decoding large buffers can take time, so this library also supports asynchronous decoding using WebWorkers via the following API. `useWorkers` must be called once at startup to create the desired number of workers:
```ts
useWorkers: (count: number) => void;
decodeGltfBufferAsync: (count: number, size: number, source: Uint8Array, mode: string, filter?: string) => Promise<Uint8Array>;
```
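A usage sketch, with hypothetical `count`, `size` and `encoded` as in the synchronous example above:
```ts
MeshoptDecoder.useWorkers(4); // create 4 workers; call once at startup
const decoded = await MeshoptDecoder.decodeGltfBufferAsync(count, size, encoded, 'ATTRIBUTES');
// decoded is a Uint8Array of count * size bytes
```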
## Encoder
`MeshoptEncoder` (`meshopt_encoder.js`) implements data preprocessing and compression of attribute and index buffers. It can be used to compress data so that it can later be decompressed by the decoder module; note that the encoding process is more involved and nuanced. It is typically split into three steps:
1. Pre-process the mesh to improve index and vertex locality which increases compression ratio
2. Quantize the data, either manually using integer or normalized integer format as a target, or using filter encoders
3. Encode the data
Step 1 is optional but highly recommended for triangle meshes; it can be omitted when compressing data with a predefined order such as animation keyframes.
Step 2 is the only lossy step in this process; without it, encoding retains all semantics of the input exactly, which can result in compressed data that is too large.
To reverse the process, the decoder is used to reverse step 3 and (optionally) step 2; the resulting data can typically be fed directly to the GPU. Note that the output of step 3 can also be further compressed in transport using a general-purpose compression algorithm such as Deflate.
To pre-process the mesh, the following function should be called with the input index buffer:
```ts
reorderMesh: (indices: Uint32Array, triangles: boolean, optsize: boolean) => [Uint32Array, number];
```
The function optimizes the input array for locality of reference (make sure to pass `triangles=true` for triangle lists, and `false` otherwise). `optsize` chooses whether the order should be optimal for transmission size (recommended for the Web) or for GPU rendering performance. The function changes the `indices` array in place and returns a remap array along with the total number of unique vertices.
After this function returns, to maintain correct rendering the application should reorder all vertex streams - including morph targets if applicable - according to the remap array. For each original index, the remap array contains the new location for that index (or `0xffffffff` if the value is unused), so the remapping pseudocode looks like this:
```ts
let newvertices = new VertexArray(unique); // unique is returned by reorderMesh
for (let i = 0; i < oldvertices.length; ++i)
if (remap[i] != 0xffffffff)
newvertices[remap[i]] = oldvertices[i];
```
When the input is a point cloud rather than a triangle mesh, it is recommended to reorder the points using a specialized function that performs spatial sorting, which can significantly improve the compression ratio achieved by subsequent processing:
```ts
reorderPoints: (positions: Float32Array, positions_stride: number) => Uint32Array;
```
This function returns a remap array just like `reorderMesh`, so the vertices need to be reordered accordingly for every vertex stream - the `positions` input is not modified. Note that it assumes no index buffer is provided, as it is redundant for point clouds.
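For illustration, a sketch of reordering a point cloud stored as tightly packed XYZ floats (assuming, per the above, that `remap[i]` is the new location of point `i`):
```ts
const remap = MeshoptEncoder.reorderPoints(positions, 3);
const sorted = new Float32Array(positions.length);
for (let i = 0; i < remap.length; ++i) {
	// copy point i to its new location
	sorted.set(positions.subarray(i * 3, i * 3 + 3), remap[i] * 3);
}
```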
To quantize the attribute data (whether it represents a mesh component or something else, like a rotation quaternion for a bone), some data-specific analysis should typically be performed to determine the optimal quantization strategy. For linear data such as positions or texture coordinates, remapping the input range to 0..1 and quantizing the result using fixed-point encoding with a given number of bits, stored in a 16-bit or 8-bit integer, is recommended; however, this is not always optimal for compression ratio for data with complex cross-component dependencies.
To that end, three filter encoders are provided: octahedral (optimal for normal or tangent data), quaternion (optimal for unit-length quaternions) and exponential (optimal for compressing floating-point vectors). The last two are recommended for animation data, and the exponential filter can additionally be used to quantize any floating-point vertex attribute for which integer quantization is not sufficiently precise.
```ts
encodeFilterOct: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
encodeFilterQuat: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
encodeFilterExp: (source: Float32Array, count: number, stride: number, bits: number, mode?: string) => Uint8Array;
```
All these functions take a source floating-point buffer as input and perform a complex transformation that, when reversed by the decoder, results in an optimally quantized decompressed output. Because of this, these functions assume a specific configuration of input and output data:
- `encodeFilterOct` takes each 4 floats from the source array (for a total of `count` 4-vectors), treats them as a unit vector (XYZ) plus a fourth component in -1..1 (W), and encodes them into `stride` bytes in a way that, when decoded, the result is stored as a normalized signed 4-vector. `stride` must be 4 (in which case the round-trip result is 4 8-bit normalized values) or 8 (in which case the round-trip result is 4 16-bit normalized values). This encoding is recommended for normals (with `stride=4` for medium quality and 8 for high quality output) and tangents (with `stride=4` providing enough quality in all cases; note that the fourth component is preserved in case it stores coordinate space winding). `bits` represents the desired precision of each component and must be in the `[1..8]` range if `stride=4` and the `[1..16]` range if `stride=8`.
- `encodeFilterQuat` takes each 4 floats from the source array (for a total of `count` 4-vectors), treats them as a unit quaternion, and encodes them into `stride` bytes in a way that, when decoded, the result is stored as a normalized signed 4-vector representing the same rotation as the source quaternion. `stride` must be 8 (the round-trip result is 4 16-bit normalized values). `bits` represents the desired precision of each component and must be in the `[4..16]` range, although using fewer than 9-10 bits is likely to lead to significant deviation in rotations.
- `encodeFilterExp` takes each K floats from the source array (where `K=stride/4`, for a total of `count` K-vectors), and encodes them into `stride` bytes in a way that, when decoded, the result is stored as K single-precision floating-point values. This may seem redundant, but it allows trading some precision for a higher compression ratio, via the reduced precision of stored components (controlled by `bits`, which must be in the `[1..24]` range) and a shared exponent encoding used by the function.
The `mode` parameter influences the exponent sharing and provides a tradeoff between compressed size and quality for various use cases; it can be one of `'Separate'`, `'SharedVector'`, `'SharedComponent'` and `'Clamped'` (defaulting to `'SharedVector'`).
Note that in all cases, using the highest `bits` value allowed by the output `stride` won't change the size of the output array (which is always `count * stride` bytes), but it *will* reduce compression efficiency; as such, using the lowest acceptable `bits` value is recommended. When different parts of the data require different levels of precision, the encode filters can be called multiple times, and the output of the same filter called with the same `stride` can be concatenated even if `bits` differ.
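For example, compressing unit normals at medium quality could look like this sketch, where `normals` is a hypothetical `Float32Array` holding 4 floats per normal (XYZ plus a W component):
```ts
const count = normals.length / 4;
// stride=4: each normal round-trips to 4 8-bit normalized values; bits must be in [1..8]
const filtered = MeshoptEncoder.encodeFilterOct(normals, count, 4, 8);
const encoded = MeshoptEncoder.encodeVertexBuffer(filtered, count, 4);
```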
After data is quantized using filter encoding or manual quantization, the result should be compressed using one of the following functions that mirror the interface of the decoding functions described above:
```ts
encodeVertexBuffer: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeIndexBuffer: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeIndexSequence: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeGltfBuffer: (source: Uint8Array, count: number, size: number, mode: string) => Uint8Array;
```
`size` is the size of each element in bytes; it must be divisible by 4 for attribute/vertex encoding and must be equal to 2 or 4 for index encoding; additionally, index buffer encoding assumes triangle lists as input, so `count` must be divisible by 3.
Note that the source is specified as a byte array; for example, to compress a position stream quantized to 16-bit integers with 5 vertices, `source` must be `5 * 8 = 40` bytes long (8 bytes for each position - 3\*2 bytes of data plus 2 bytes of padding to conform to alignment requirements), `count` must be 5 and `size` must be 8. When padding data to the alignment boundary, make sure to use 0 for the padding bytes for optimal compression.
When interleaved vertex data is compressed, `encodeVertexBuffer` can be called with the full size of a single interleaved vertex; however, when compressing deinterleaved data, note that `encodeVertexBuffer` should be called on each stream individually if the strides of the different streams differ.
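Putting this together, a sketch of the 5-vertex example above, assuming a hypothetical `qpos` (`Int16Array` with 3 quantized components per vertex):
```ts
const count = 5;
const size = 8; // 3 x 2 bytes of data + 2 bytes of padding per vertex
const source = new Uint8Array(count * size); // zero-initialized, so padding bytes are 0
const view = new Int16Array(source.buffer);
for (let i = 0; i < count; ++i) {
	view[i * 4 + 0] = qpos[i * 3 + 0];
	view[i * 4 + 1] = qpos[i * 3 + 1];
	view[i * 4 + 2] = qpos[i * 3 + 2];
	// view[i * 4 + 3] stays 0 (padding)
}
const encoded = MeshoptEncoder.encodeVertexBuffer(source, count, size);
```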
## Simplifier
`MeshoptSimplifier` (`meshopt_simplifier.js`) implements mesh simplification, producing a mesh with fewer triangles/points that resembles the original mesh in its appearance. The simplification algorithms are lossy and may result in significant changes in appearance, but can often be used without visible degradation on high-poly input meshes, or for distant level-of-detail variants.
To simplify the mesh, the following function needs to be called first:
```ts
simplify: (indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, target_index_count: number, target_error: number, flags?: Flags[]) => [Uint32Array, number];
```
Given an input triangle mesh represented by an index buffer and a position buffer, the algorithm tries to simplify the mesh down to the target index count while maintaining its appearance. For meshes with inconsistent topology or many seams, such as faceted meshes, this can result in the simplifier getting "stuck" and being unable to simplify the mesh fully. It is therefore critical that identical vertices are "welded" together, that is, that the input vertex buffer contains no duplicates. Additionally, it may be possible to preprocess the index buffer to discard any vertex attributes that aren't critical and can be rebuilt later.
Target error is an approximate measure of the deviation from the original mesh, using distances normalized to the `[0..1]` range (e.g. `1e-2` means the simplifier will try to keep the error below 1% of the mesh extents). Note that the simplifier attempts to produce the requested number of indices at minimal error, but because of topological restrictions and the error limit it is not guaranteed to reach the target index count and can stop earlier.
The algorithm uses position data stored in a strided array; `vertex_positions_stride` represents the distance between subsequent positions in `Float32` units and should typically be set to 3. If the input position data is quantized, it's necessary to dequantize it so that the algorithm can estimate the position error correctly. While the algorithm doesn't use other attributes like normals/texture coordinates, it automatically recognizes and preserves attribute discontinuities based on index data. Because of this, for the algorithm to function well, the mesh vertices should be unique (de-duplicated).
Upon completion, the function returns the new index buffer as well as the resulting appearance error. The index buffer can be used to render the simplified mesh with the same vertex buffer(s) as the original one, including non-positional attributes. For example, `simplify` can be called multiple times with different target counts/errors, and the application can select the appropriate index buffer to render for the mesh at runtime to implement level of detail.
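As a sketch, producing a ~50% level of detail (with `indices` and tightly packed XYZ `positions` as assumed inputs):
```ts
const targetCount = Math.floor(indices.length / 6) * 3; // ~half the triangles, divisible by 3
const [lod, error] = MeshoptSimplifier.simplify(indices, positions, 3, targetCount, /* target_error= */ 1e-2);
// lod can be rendered with the original vertex buffer(s); error is the resulting relative error
```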
To control behavior of the algorithm more precisely, `flags` may specify an array of strings that enable various additional options:
- `'LockBorder'` locks the vertices that lie on the topological border of the mesh in place such that they don't move during simplification. This can be valuable to simplify independent chunks of a mesh, for example terrain, to ensure that individual levels of detail can be stitched together later without gaps.
- `'ErrorAbsolute'` changes the error metric from relative to absolute both for the input error limit as well as for the resulting error. This can be used instead of `getScale`.
- `'Sparse'` improves simplification performance assuming input indices are a sparse subset of the mesh. This can be useful when simplifying small mesh subsets independently. For consistency, it is recommended to use absolute errors when sparse simplification is desired.
When the resulting mesh is stored, it might be desirable to remove the redundant vertices from the attribute buffers instead of simply using the original vertex data with the smaller index buffer. For that purpose, the simplifier module provides the `compactMesh` function; it is similar to the `reorderMesh` function that the encoder provides, but it doesn't perform extra optimizations and merely prepares a new vertex order that can be used to create new, smaller vertex buffers:
```ts
compactMesh: (indices: Uint32Array) => [Uint32Array, number];
```
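For instance, assuming `lod` is an index buffer returned by `simplify` (and assuming that, like `reorderMesh`, `compactMesh` updates the indices in place):
```ts
const [remap, unique] = MeshoptSimplifier.compactMesh(lod);
// reorder each vertex stream according to remap, as in the pseudocode shown for reorderMesh
```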
The simplification algorithm uses relative errors for input and output; to convert these errors to absolute units, they need to be multiplied by the scaling factor which depends on the mesh geometry and can be computed by calling the following function with the position data:
```ts
getScale: (vertex_positions: Float32Array, vertex_positions_stride: number) => number;
```
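For example, to convert the relative error returned by `simplify` into absolute units:
```ts
const scale = MeshoptSimplifier.getScale(positions, 3);
const absoluteError = error * scale; // same units as the position data
```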
## Clusterizer
`MeshoptClusterizer` (`meshopt_clusterizer.js`) implements meshlet generation and optimization.
To split a triangle mesh into clusters, call `buildMeshlets`, which tries to balance topological efficiency (by maximizing vertex reuse inside meshlets) with culling efficiency.
```ts
buildMeshlets(indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, max_vertices: number, max_triangles: number, cone_weight?: number) => MeshletBuffers;
```
The algorithm uses position data stored in a strided array; `vertex_positions_stride` represents the distance between subsequent positions in `Float32` units.
The maximum number of triangles and number of vertices per meshlet can be controlled via `max_triangles` and `max_vertices` parameters. However, `max_vertices` must not be greater than 255 and `max_triangles` must not be greater than 512.
Additionally, if cluster cone culling is to be used, `buildMeshlets` allows specifying a `cone_weight` as a value between 0 and 1 to balance culling efficiency with other forms of culling. By default, `cone_weight` is set to 0.
All meshlets are implicitly optimized for better triangle and vertex locality by `buildMeshlets`.
The algorithm returns the meshlet data as packed buffers:
```ts
const buffers = MeshoptClusterizer.buildMeshlets(indices, positions, stride, /* args */);
console.log(buffers.meshlets); // prints the raw packed Uint32Array containing the meshlet data, i.e., the indices into the vertices and triangles arrays
console.log(buffers.vertices); // prints the raw packed Uint32Array containing indices into the original mesh's vertices
console.log(buffers.triangles); // prints the raw packed Uint8Array containing indices into the vertices array
console.log(buffers.meshletCount); // prints the number of meshlets - this is not the same as buffers.meshlets.length, because each meshlet consists of 4 unsigned 32-bit integers
```
Individual meshlets can be extracted from the packed buffers using `extractMeshlet`. The memory of the returned `Meshlet` object's `vertices` and `triangles` arrays is backed by the `MeshletBuffers` object.
```ts
const buffers = MeshoptClusterizer.buildMeshlets(indices, positions, stride, /* args */);
const meshlet = MeshoptClusterizer.extractMeshlet(buffers, 0);
console.log(meshlet.vertices); // prints the packed Uint32Array of the first meshlet's vertex indices, i.e., indices into the original mesh's vertex buffer
console.log(meshlet.triangles); // prints the packed Uint8Array of the first meshlet's indices into its own vertices array
console.log(MeshoptClusterizer.extractMeshlet(buffers, 0).triangles[0] === meshlet.triangles[0]); // prints true
meshlet.triangles.set([123], 0);
console.log(MeshoptClusterizer.extractMeshlet(buffers, 0).triangles[0] === meshlet.triangles[0]); // still prints true, since the meshlet memory is backed by the MeshletBuffers object
```
After generating the meshlet data, it's also possible to generate extra culling data for one or more meshlets:
```ts
computeMeshletBounds(buffers: MeshletBuffers, vertex_positions: Float32Array, vertex_positions_stride: number) => Bounds | Bounds[];
```
If `buffers` contains more than one meshlet, `computeMeshletBounds` returns an array of `Bounds`. Otherwise, a single `Bounds` object is returned.
```ts
const buffers = MeshoptClusterizer.buildMeshlets(indices, positions, stride, /* args */);
const bounds = MeshoptClusterizer.computeMeshletBounds(buffers, positions, stride);
console.log(bounds[0].centerX, bounds[0].centerY, bounds[0].centerZ); // prints the center of the first meshlet's bounding sphere
console.log(bounds[0].radius); // prints the radius of the first meshlet's bounding sphere
console.log(bounds[0].coneApexX, bounds[0].coneApexY, bounds[0].coneApexZ); // prints the apex of the first meshlet's normal cone
console.log(bounds[0].coneAxisX, bounds[0].coneAxisY, bounds[0].coneAxisZ); // prints the axis of the first meshlet's normal cone
console.log(bounds[0].coneCutoff); // prints the cutoff angle of the first meshlet's normal cone
```
It is also possible to compute bounds of a vertex cluster that is not generated by `MeshoptClusterizer` using `computeClusterBounds`. Like `buildMeshlets`, this algorithm takes vertex indices and a strided vertex positions array with a vertex stride in `Float32` units as input.
```ts
computeClusterBounds(indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number) => Bounds;
```
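For example, treating the first triangles of a mesh as a single cluster (a sketch; `indices`, `positions` and `stride` as in the examples above):
```ts
const cluster = indices.subarray(0, 64 * 3); // first 64 triangles
const bounds = MeshoptClusterizer.computeClusterBounds(cluster, positions, stride);
console.log(bounds.radius); // bounding sphere radius of the cluster
```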
## License
This library is available to anybody free of charge, under the terms of the MIT License (see LICENSE.md).

139 node_modules/meshoptimizer/benchmark.js generated vendored Normal file

@@ -0,0 +1,139 @@
var encoder = require('./meshopt_encoder.js');
var decoder = require('./meshopt_decoder.js');
var { performance } = require('perf_hooks');
process.on('unhandledRejection', (error) => {
console.log('unhandledRejection', error);
process.exit(1);
});
function bytes(view) {
return new Uint8Array(view.buffer, view.byteOffset, view.byteLength);
}
var tests = {
roundtripVertexBuffer: function () {
var N = 1024 * 1024;
var data = new Uint8Array(N * 16);
for (var i = 0; i < N * 16; i += 4) {
data[i + 0] = 0;
data[i + 1] = (i % 16) * 1;
data[i + 2] = (i % 16) * 2;
data[i + 3] = (i % 16) * 8;
}
var decoded = new Uint8Array(N * 16);
var t0 = performance.now();
var encoded = encoder.encodeVertexBuffer(data, N, 16);
var t1 = performance.now();
decoder.decodeVertexBuffer(decoded, N, 16, encoded);
var t2 = performance.now();
return { encodeVertex: t1 - t0, decodeVertex: t2 - t1, bytes: N * 16 };
},
roundtripIndexBuffer: function () {
var N = 1024 * 1024;
var data = new Uint32Array(N * 3);
for (var i = 0; i < N * 3; i += 6) {
var v = i / 6;
data[i + 0] = v;
data[i + 1] = v + 1;
data[i + 2] = v + 2;
data[i + 3] = v + 2;
data[i + 4] = v + 1;
data[i + 5] = v + 3;
}
var decoded = new Uint32Array(data.length);
var t0 = performance.now();
var encoded = encoder.encodeIndexBuffer(bytes(data), data.length, 4);
var t1 = performance.now();
decoder.decodeIndexBuffer(bytes(decoded), data.length, 4, encoded);
var t2 = performance.now();
return { encodeIndex: t1 - t0, decodeIndex: t2 - t1, bytes: N * 12 };
},
decodeGltf: function () {
var N = 1024 * 1024;
var data = new Uint8Array(N * 16);
for (var i = 0; i < N * 16; i += 4) {
data[i + 0] = 0;
data[i + 1] = (i % 16) * 1;
data[i + 2] = (i % 16) * 2;
data[i + 3] = (i % 16) * 8;
}
var decoded = new Uint8Array(N * 16);
var filters = [
{ name: 'none', filter: 'NONE', stride: 16 },
{ name: 'oct4', filter: 'OCTAHEDRAL', stride: 4 },
{ name: 'oct12', filter: 'OCTAHEDRAL', stride: 8 },
{ name: 'quat12', filter: 'QUATERNION', stride: 8 },
{ name: 'exp', filter: 'EXPONENTIAL', stride: 16 },
];
var results = { bytes: N * 16 };
for (var i = 0; i < filters.length; ++i) {
var f = filters[i];
var encoded = encoder.encodeVertexBuffer(data, (N * 16) / f.stride, f.stride);
var t0 = performance.now();
decoder.decodeGltfBuffer(decoded, (N * 16) / f.stride, f.stride, encoded, 'ATTRIBUTES', f.filter);
var t1 = performance.now();
results[f.name] = t1 - t0;
}
return results;
},
};
Promise.all([encoder.ready, decoder.ready]).then(() => {
var reps = 10;
var data = {};
for (var key in tests) {
data[key] = tests[key]();
}
for (var i = 1; i < reps; ++i) {
for (var key in tests) {
var nd = tests[key]();
var od = data[key];
for (var idx in nd) {
od[idx] = Math.min(od[idx], nd[idx]);
}
}
}
for (var key in tests) {
var rep = key;
rep += ':\n';
for (var idx in data[key]) {
if (idx != 'bytes') {
rep += idx;
rep += ' ';
rep += data[key][idx];
rep += ' ms (';
rep += (data[key].bytes / 1024 / 1024 / 1024 / data[key][idx]) * 1000;
rep += ' GB/s)';
rep += '\n';
}
}
console.log(rep);
}
});

6 node_modules/meshoptimizer/index.js generated vendored Normal file

@@ -0,0 +1,6 @@
const MeshoptEncoder = require('./meshopt_encoder.js');
const MeshoptDecoder = require('./meshopt_decoder.js');
const MeshoptSimplifier = require('./meshopt_simplifier.js');
const MeshoptClusterizer = require('./meshopt_clusterizer.js');
module.exports = { MeshoptEncoder, MeshoptDecoder, MeshoptSimplifier, MeshoptClusterizer };

4 node_modules/meshoptimizer/index.module.d.ts generated vendored Normal file

@@ -0,0 +1,4 @@
export * from './meshopt_encoder.module';
export * from './meshopt_decoder.module';
export * from './meshopt_simplifier.module';
export * from './meshopt_clusterizer.module';

4 node_modules/meshoptimizer/index.module.js generated vendored Normal file

@@ -0,0 +1,4 @@
export * from './meshopt_encoder.module.js';
export * from './meshopt_decoder.module.js';
export * from './meshopt_simplifier.module.js';
export * from './meshopt_clusterizer.module.js';

280 node_modules/meshoptimizer/meshopt_clusterizer.js generated vendored Normal file

File diff suppressed because one or more lines are too long

45 node_modules/meshoptimizer/meshopt_clusterizer.module.d.ts generated vendored Normal file

@@ -0,0 +1,45 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2024, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
export class Bounds {
centerX: number;
centerY: number;
centerZ: number;
radius: number;
coneApexX: number;
coneApexY: number;
coneApexZ: number;
coneAxisX: number;
coneAxisY: number;
coneAxisZ: number;
coneCutoff: number;
}
export class MeshletBuffers {
meshlets: Uint32Array;
vertices: Uint32Array;
triangles: Uint8Array;
meshletCount: number;
}
export class Meshlet {
vertices: Uint32Array;
triangles: Uint8Array;
}
export const MeshoptClusterizer: {
supported: boolean;
ready: Promise<void>;
buildMeshlets: (
indices: Uint32Array,
vertex_positions: Float32Array,
vertex_positions_stride: number,
max_vertices: number,
max_triangles: number,
cone_weight?: number
) => MeshletBuffers;
computeClusterBounds: (indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number) => Bounds;
computeMeshletBounds: (buffers: MeshletBuffers, vertex_positions: Float32Array, vertex_positions_stride: number) => Bounds[];
extractMeshlet: (buffers: MeshletBuffers, index: number) => Meshlet;
};

node_modules/meshoptimizer/meshopt_clusterizer.module.js generated vendored Normal file

File diff suppressed because one or more lines are too long

125 node_modules/meshoptimizer/meshopt_clusterizer.test.js generated vendored Normal file

@@ -0,0 +1,125 @@
const assert = require('assert').strict;
const clusterizer = require('./meshopt_clusterizer.js');
process.on('unhandledRejection', (error) => {
console.log('unhandledRejection', error);
process.exit(1);
});
const cubeWithNormals = {
vertices: new Float32Array([
// n = (0, 0, 1)
-1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -1.0, 1.0, 1.0, 0.0, 0.0, 1.0,
// n = (0, 0, -1)
-1.0, 1.0, -1.0, 0.0, 0.0, -1.0, 1.0, 1.0, -1.0, 0.0, 0.0, -1.0, 1.0, -1.0, -1.0, 0.0, 0.0, -1.0, -1.0, -1.0, -1.0, 0.0, 0.0, -1.0,
// n = (1, 0, 0)
1.0, -1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -1.0, 1.0, 1.0, 0.0, 0.0,
// n = (-1, 0, 0)
-1.0, -1.0, 1.0, -1.0, 0.0, 0.0, -1.0, 1.0, 1.0, -1.0, 0.0, 0.0, -1.0, 1.0, -1.0, -1.0, 0.0, 0.0, -1.0, -1.0, -1.0, -1.0, 0.0, 0.0,
// n = (0, 1, 0)
1.0, 1.0, -1.0, 0.0, 1.0, 0.0, -1.0, 1.0, -1.0, 0.0, 1.0, 0.0, -1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0,
// n = (0, -1, 0)
1.0, -1.0, 1.0, 0.0, -1.0, 0.0, -1.0, -1.0, 1.0, 0.0, -1.0, 0.0, -1.0, -1.0, -1.0, 0.0, -1.0, 0.0, 1.0, -1.0, -1.0, 0.0, -1.0, 0.0,
]),
indices: new Uint32Array([
// n = (0, 0, 1)
0, 1, 2, 2, 3, 0,
// n = (0, 0, -1)
4, 5, 6, 6, 7, 4,
// n = (1, 0, 0)
8, 9, 10, 10, 11, 8,
// n = (-1, 0, 0)
12, 13, 14, 14, 15, 12,
// n = (0, 1, 0)
16, 17, 18, 18, 19, 16,
// n = (0, -1, 0)
20, 21, 22, 22, 23, 20,
]),
vertexStride: 6, // in floats
};
const tests = {
buildMeshlets: function () {
const maxVertices = 4;
const buffers = clusterizer.buildMeshlets(cubeWithNormals.indices, cubeWithNormals.vertices, cubeWithNormals.vertexStride, maxVertices, 512);
const expectedVertices = [
new Uint32Array([2, 3, 0, 1]),
new Uint32Array([12, 13, 14, 15]),
new Uint32Array([6, 7, 4, 5]),
new Uint32Array([16, 17, 18, 19]),
new Uint32Array([8, 9, 10, 11]),
new Uint32Array([22, 23, 20, 21]),
];
const expectedTriangles = new Uint8Array([0, 1, 2, 2, 3, 0]);
assert.equal(buffers.meshletCount, 6);
for (let i = 0; i < buffers.meshletCount; ++i) {
const m = clusterizer.extractMeshlet(buffers, i);
assert.deepStrictEqual(m.vertices, expectedVertices[i]);
assert.deepStrictEqual(m.triangles, expectedTriangles);
}
},
computeClusterBounds: function () {
for (let i = 0; i < 6; ++i) {
const indexOffset = i * 6;
const normalOffset = i * 4 * cubeWithNormals.vertexStride;
const bounds = clusterizer.computeClusterBounds(
cubeWithNormals.indices.subarray(indexOffset, 6 + indexOffset),
cubeWithNormals.vertices,
cubeWithNormals.vertexStride
);
assert.deepStrictEqual(
new Int32Array([bounds.coneAxisX, bounds.coneAxisY, bounds.coneAxisZ]),
new Int32Array(cubeWithNormals.vertices.subarray(3 + normalOffset, 6 + normalOffset))
);
}
},
computeMeshletBounds: function () {
const maxVertices = 4;
const buffers = clusterizer.buildMeshlets(cubeWithNormals.indices, cubeWithNormals.vertices, cubeWithNormals.vertexStride, maxVertices, 512);
const expectedNormals = [
new Int32Array([0, 0, 1]),
new Int32Array([-1, 0, 0]),
new Int32Array([0, 0, -1]),
new Int32Array([0, 1, 0]),
new Int32Array([1, 0, 0]),
new Int32Array([0, -1, 0]),
];
const bounds = clusterizer.computeMeshletBounds(buffers, cubeWithNormals.vertices, cubeWithNormals.vertexStride);
assert(bounds.length === 6);
assert(bounds.length === buffers.meshletCount);
bounds.forEach((b, i) => {
const normal = new Int32Array([b.coneAxisX, b.coneAxisY, b.coneAxisZ]);
assert.deepStrictEqual(normal, expectedNormals[i]);
});
},
};
clusterizer.ready.then((_) => {
let passed = 0;
let failed = 0;
for (const key in tests) {
try {
tests[key]();
++passed;
} catch (e) {
console.error(e);
++failed;
}
}
if (failed === 0) {
console.log(passed, 'tests passed');
} else {
console.log(passed, 'tests passed &', failed, 'tests failed');
}
});

203 node_modules/meshoptimizer/meshopt_decoder.js generated vendored Normal file

File diff suppressed because one or more lines are too long

15 node_modules/meshoptimizer/meshopt_decoder.module.d.ts generated vendored Normal file

@@ -0,0 +1,15 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2024, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
export const MeshoptDecoder: {
supported: boolean;
ready: Promise<void>;
decodeVertexBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array, filter?: string) => void;
decodeIndexBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array) => void;
decodeIndexSequence: (target: Uint8Array, count: number, size: number, source: Uint8Array) => void;
decodeGltfBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array, mode: string, filter?: string) => void;
useWorkers: (count: number) => void;
decodeGltfBufferAsync: (count: number, size: number, source: Uint8Array, mode: string, filter?: string) => Promise<Uint8Array>;
};

195 node_modules/meshoptimizer/meshopt_decoder.module.js generated vendored Normal file

File diff suppressed because one or more lines are too long

252 node_modules/meshoptimizer/meshopt_decoder.test.js generated vendored Normal file

@@ -0,0 +1,252 @@
var assert = require('assert').strict;
var decoder = require('./meshopt_decoder.js');
process.on('unhandledRejection', (error) => {
console.log('unhandledRejection', error);
process.exit(1);
});
var tests = {
decodeVertexBuffer: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x58, 0x57, 0x58, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00, 0x00, 0x58, 0x01, 0x08, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x17, 0x18, 0x17, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00,
0x00, 0x17, 0x01, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 44, 1, 44, 1, 0, 0, 0,
0, 244, 1, 244, 1,
]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(result, 4, 12, encoded);
assert.deepStrictEqual(result, expected);
},
decodeVertexBuffer_More: function () {
var encoded = new Uint8Array([
0xa0, 0x00, 0x01, 0x2a, 0xaa, 0xaa, 0xaa, 0x02, 0x04, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x03, 0x00, 0x10, 0x10, 0x10, 0x10, 0x10,
0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 1, 2, 8, 0, 2, 4, 16, 0, 3, 6, 24, 0, 4, 8, 32, 0, 5, 10, 40, 0, 6, 12, 48, 0, 7, 14, 56, 0, 8, 16, 64, 0, 9, 18, 72, 0,
10, 20, 80, 0, 11, 22, 88, 0, 12, 24, 96, 0, 13, 26, 104, 0, 14, 28, 112, 0, 15, 30, 120,
]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(result, 16, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeVertexBuffer_Mode2: function () {
var encoded = new Uint8Array([
0xa0, 0x02, 0x08, 0x88, 0x88, 0x88, 0x88, 0x88, 0x88, 0x88, 0x02, 0x0a, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0x02, 0x0c, 0xcc, 0xcc,
0xcc, 0xcc, 0xcc, 0xcc, 0xcc, 0x02, 0x0e, 0xee, 0xee, 0xee, 0xee, 0xee, 0xee, 0xee, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 4, 5, 6, 7, 8, 10, 12, 14, 12, 15, 18, 21, 16, 20, 24, 28, 20, 25, 30, 35, 24, 30, 36, 42, 28, 35, 42, 49, 32, 40, 48, 56, 36,
45, 54, 63, 40, 50, 60, 70, 44, 55, 66, 77, 48, 60, 72, 84, 52, 65, 78, 91, 56, 70, 84, 98, 60, 75, 90, 105,
]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(result, 16, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBuffer16: function () {
var encoded = new Uint8Array([
0xe0, 0xf0, 0x10, 0xfe, 0xff, 0xf0, 0x0c, 0xff, 0x02, 0x02, 0x02, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98,
0x01, 0x69, 0x00, 0x00,
]);
var expected = new Uint16Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var result = new Uint16Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 12, 2, encoded);
assert.deepEqual(result, expected);
},
decodeIndexBuffer32: function () {
var encoded = new Uint8Array([
0xe0, 0xf0, 0x10, 0xfe, 0xff, 0xf0, 0x0c, 0xff, 0x02, 0x02, 0x02, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98,
0x01, 0x69, 0x00, 0x00,
]);
var expected = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 12, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBufferV1: function () {
var encoded = new Uint8Array([
0xe1, 0xf0, 0x10, 0xfe, 0x1f, 0x3d, 0x00, 0x0a, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98, 0x01, 0x69, 0x00,
0x00,
]);
var expected = new Uint32Array([0, 1, 2, 2, 1, 3, 0, 1, 2, 2, 1, 5, 2, 1, 4]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 15, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBufferV1_More: function () {
var encoded = new Uint8Array([
0xe1, 0xf0, 0x10, 0xfe, 0xff, 0xf0, 0x0c, 0xff, 0x02, 0x02, 0x02, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98,
0x01, 0x69, 0x00, 0x00,
]);
var expected = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 12, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBufferV1_3Edges: function () {
var encoded = new Uint8Array([
0xe1, 0xf0, 0x20, 0x30, 0x40, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98, 0x01, 0x69, 0x00, 0x00,
]);
var expected = new Uint32Array([0, 1, 2, 1, 0, 3, 2, 1, 4, 0, 2, 5]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 12, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexSequence: function () {
var encoded = new Uint8Array([0xd1, 0x00, 0x04, 0xcd, 0x01, 0x04, 0x07, 0x98, 0x1f, 0x00, 0x00, 0x00, 0x00]);
var expected = new Uint32Array([0, 1, 51, 2, 49, 1000]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexSequence(new Uint8Array(result.buffer), 6, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeFilterOct8: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x07, 0x00, 0x00, 0x00, 0x1e, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x8b, 0x8c, 0xfd, 0x00, 0x01, 0x26, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x01, 0x7f, 0x00,
]);
var expected = new Uint8Array([0, 1, 127, 0, 0, 159, 82, 1, 255, 1, 127, 0, 1, 130, 241, 1]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 4, 4, encoded, /* filter= */ 'OCTAHEDRAL');
assert.deepStrictEqual(result, expected);
},
decodeFilterOct12: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x0f, 0x00, 0x00, 0x00, 0x3d, 0x5a, 0x01, 0x0f, 0x00, 0x00, 0x00, 0x0e, 0x0d, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x9a, 0x99, 0x26,
0x01, 0x3f, 0x00, 0x00, 0x00, 0x0e, 0x0d, 0x0a, 0x00, 0x00, 0x01, 0x26, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0xff, 0x07,
0x00, 0x00,
]);
var expected = new Uint16Array([0, 16, 32767, 0, 0, 32621, 3088, 1, 32764, 16, 471, 0, 307, 28541, 16093, 1]);
var result = new Uint16Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 4, 8, encoded, /* filter= */ 'OCTAHEDRAL');
assert.deepStrictEqual(result, expected);
},
decodeFilterQuat12: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x0f, 0x00, 0x00, 0x00, 0x3d, 0x5a, 0x01, 0x0f, 0x00, 0x00, 0x00, 0x0e, 0x0d, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x9a, 0x99, 0x26,
0x01, 0x3f, 0x00, 0x00, 0x00, 0x0e, 0x0d, 0x0a, 0x00, 0x00, 0x01, 0x2a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0xfc, 0x07,
]);
var expected = new Uint16Array([32767, 0, 11, 0, 0, 25013, 0, 21166, 11, 0, 23504, 22830, 158, 14715, 0, 29277]);
var result = new Uint16Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 4, 8, encoded, /* filter= */ 'QUATERNION');
assert.deepStrictEqual(result, expected);
},
decodeFilterExp: function () {
var encoded = new Uint8Array([
0xa0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0xff, 0xf7, 0xff, 0xff, 0x02, 0xff,
0xff, 0x7f, 0xfe,
]);
var expected = new Uint32Array([0, 0x3fc00000, 0xc2100000, 0x49fffffe]);
var result = new Uint32Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 1, 16, encoded, /* filter= */ 'EXPONENTIAL');
assert.deepStrictEqual(result, expected);
},
decodeGltfBuffer: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x58, 0x57, 0x58, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00, 0x00, 0x58, 0x01, 0x08, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x17, 0x18, 0x17, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00,
0x00, 0x17, 0x01, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 44, 1, 44, 1, 0, 0, 0,
0, 244, 1, 244, 1,
]);
var result = new Uint8Array(expected.length);
decoder.decodeGltfBuffer(result, 4, 12, encoded, /* mode= */ 'ATTRIBUTES');
assert.deepStrictEqual(result, expected);
},
decodeGltfBufferAsync: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x58, 0x57, 0x58, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00, 0x00, 0x58, 0x01, 0x08, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x17, 0x18, 0x17, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00,
0x00, 0x17, 0x01, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 44, 1, 44, 1, 0, 0, 0,
0, 244, 1, 244, 1,
]);
decoder.decodeGltfBufferAsync(4, 12, encoded, /* mode= */ 'ATTRIBUTES').then(function (result) {
assert.deepStrictEqual(result, expected);
});
},
};
decoder.ready.then(() => {
var count = 0;
for (var key in tests) {
tests[key]();
count++;
}
console.log(count, 'tests passed');
});

337 node_modules/meshoptimizer/meshopt_decoder_reference.js generated vendored Normal file

@@ -0,0 +1,337 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2024, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
// This is the reference decoder implementation by Jasper St. Pierre.
// It follows the decoder interface and should be a drop-in replacement for the actual decoder from meshopt_decoder.js
// It is provided for educational value and is not recommended for use in production because it's not performance-optimized.
const MeshoptDecoder = {};
MeshoptDecoder.supported = true;
MeshoptDecoder.ready = Promise.resolve();
function assert(cond) {
if (!cond) {
throw new Error('Assertion failed');
}
}
function dezig(v) {
return (v & 1) !== 0 ? ~(v >>> 1) : v >>> 1;
}
MeshoptDecoder.decodeVertexBuffer = (target, elementCount, byteStride, source, filter) => {
assert(source[0] === 0xa0);
const maxBlockElements = Math.min((0x2000 / byteStride) & ~0x000f, 0x100);
const deltas = new Uint8Array(0x10);
const tailDataOffs = source.length - byteStride;
// What deltas are stored relative to
const tempData = source.slice(tailDataOffs, tailDataOffs + byteStride);
let srcOffs = 0x01;
// Attribute Blocks
for (let dstElemBase = 0; dstElemBase < elementCount; dstElemBase += maxBlockElements) {
const attrBlockElementCount = Math.min(elementCount - dstElemBase, maxBlockElements);
const groupCount = ((attrBlockElementCount + 0x0f) & ~0x0f) >>> 4;
const headerByteCount = ((groupCount + 0x03) & ~0x03) >>> 2;
// Data blocks
for (let byte = 0; byte < byteStride; byte++) {
let headerBitsOffs = srcOffs;
srcOffs += headerByteCount;
for (let group = 0; group < groupCount; group++) {
const mode = (source[headerBitsOffs] >>> ((group & 0x03) << 1)) & 0x03;
// If this is the last group, move to the next byte of header bits.
if ((group & 0x03) === 0x03) headerBitsOffs++;
const dstElemGroup = dstElemBase + (group << 4);
if (mode === 0) {
// bits 0: All 16 byte deltas are 0; the size of the encoded block is 0 bytes
deltas.fill(0x00);
} else if (mode === 1) {
// bits 1: Deltas are using 2-bit sentinel encoding; the size of the encoded block is [4..20] bytes
const srcBase = srcOffs;
srcOffs += 0x04;
for (let m = 0; m < 0x10; m++) {
// 0 = >>> 6, 1 = >>> 4, 2 = >>> 2, 3 = >>> 0
const shift = 6 - ((m & 0x03) << 1);
let delta = (source[srcBase + (m >>> 2)] >>> shift) & 0x03;
if (delta === 3) delta = source[srcOffs++];
deltas[m] = delta;
}
} else if (mode === 2) {
// bits 2: Deltas are using 4-bit sentinel encoding; the size of the encoded block is [8..24] bytes
const srcBase = srcOffs;
srcOffs += 0x08;
for (let m = 0; m < 0x10; m++) {
// 0 = >>> 6, 1 = >>> 4, 2 = >>> 2, 3 = >>> 0
const shift = m & 0x01 ? 0 : 4;
let delta = (source[srcBase + (m >>> 1)] >>> shift) & 0x0f;
if (delta === 0xf) delta = source[srcOffs++];
deltas[m] = delta;
}
} else {
// bits 3: All 16 byte deltas are stored verbatim; the size of the encoded block is 16 bytes
deltas.set(source.subarray(srcOffs, srcOffs + 0x10));
srcOffs += 0x10;
}
// Go through and apply deltas to data
for (let m = 0; m < 0x10; m++) {
const dstElem = dstElemGroup + m;
if (dstElem >= elementCount) break;
const delta = dezig(deltas[m]);
const dstOffs = dstElem * byteStride + byte;
target[dstOffs] = tempData[byte] += delta;
}
}
}
}
// Filters - only applied if filter isn't undefined or NONE
if (filter === 'OCTAHEDRAL') {
assert(byteStride === 4 || byteStride === 8);
let dst, maxInt;
if (byteStride === 4) {
dst = new Int8Array(target.buffer);
maxInt = 127;
} else {
dst = new Int16Array(target.buffer);
maxInt = 32767;
}
for (let i = 0; i < 4 * elementCount; i += 4) {
let x = dst[i + 0],
y = dst[i + 1],
one = dst[i + 2];
x /= one;
y /= one;
const z = 1.0 - Math.abs(x) - Math.abs(y);
const t = Math.max(-z, 0.0);
x -= x >= 0 ? t : -t;
y -= y >= 0 ? t : -t;
const h = maxInt / Math.hypot(x, y, z);
dst[i + 0] = Math.round(x * h);
dst[i + 1] = Math.round(y * h);
dst[i + 2] = Math.round(z * h);
// keep dst[i + 3] as is
}
} else if (filter === 'QUATERNION') {
assert(byteStride === 8);
const dst = new Int16Array(target.buffer);
for (let i = 0; i < 4 * elementCount; i += 4) {
const inputW = dst[i + 3];
const maxComponent = inputW & 0x03;
const s = Math.SQRT1_2 / (inputW | 0x03);
let x = dst[i + 0] * s;
let y = dst[i + 1] * s;
let z = dst[i + 2] * s;
let w = Math.sqrt(Math.max(0.0, 1.0 - x ** 2 - y ** 2 - z ** 2));
dst[i + ((maxComponent + 1) % 4)] = Math.round(x * 32767);
dst[i + ((maxComponent + 2) % 4)] = Math.round(y * 32767);
dst[i + ((maxComponent + 3) % 4)] = Math.round(z * 32767);
dst[i + ((maxComponent + 0) % 4)] = Math.round(w * 32767);
}
} else if (filter === 'EXPONENTIAL') {
assert((byteStride & 0x03) === 0x00);
const src = new Int32Array(target.buffer);
const dst = new Float32Array(target.buffer);
for (let i = 0; i < (byteStride * elementCount) / 4; i++) {
const v = src[i],
exp = v >> 24,
mantissa = (v << 8) >> 8;
dst[i] = 2.0 ** exp * mantissa;
}
}
};
function pushfifo(fifo, n) {
for (let i = fifo.length - 1; i > 0; i--) fifo[i] = fifo[i - 1];
fifo[0] = n;
}
MeshoptDecoder.decodeIndexBuffer = (target, count, byteStride, source) => {
assert(source[0] === 0xe1);
assert(count % 3 === 0);
assert(byteStride === 2 || byteStride === 4);
let dst;
if (byteStride === 2) dst = new Uint16Array(target.buffer);
else dst = new Uint32Array(target.buffer);
const triCount = count / 3;
let codeOffs = 0x01;
let dataOffs = codeOffs + triCount;
let codeauxOffs = source.length - 0x10;
function readLEB128() {
let n = 0;
for (let i = 0; ; i += 7) {
const b = source[dataOffs++];
n |= (b & 0x7f) << i;
if (b < 0x80) return n;
}
}
let next = 0,
last = 0;
const edgefifo = new Uint32Array(32);
const vertexfifo = new Uint32Array(16);
function decodeIndex(v) {
return (last += dezig(v));
}
let dstOffs = 0;
for (let i = 0; i < triCount; i++) {
const code = source[codeOffs++];
const b0 = code >>> 4,
b1 = code & 0x0f;
if (b0 < 0x0f) {
const a = edgefifo[(b0 << 1) + 0],
b = edgefifo[(b0 << 1) + 1];
let c = -1;
if (b1 === 0x00) {
c = next++;
pushfifo(vertexfifo, c);
} else if (b1 < 0x0d) {
c = vertexfifo[b1];
} else if (b1 === 0x0d) {
c = --last;
pushfifo(vertexfifo, c);
} else if (b1 === 0x0e) {
c = ++last;
pushfifo(vertexfifo, c);
} else if (b1 === 0x0f) {
const v = readLEB128();
c = decodeIndex(v);
pushfifo(vertexfifo, c);
}
// fifo pushes happen backwards
pushfifo(edgefifo, b);
pushfifo(edgefifo, c);
pushfifo(edgefifo, c);
pushfifo(edgefifo, a);
dst[dstOffs++] = a;
dst[dstOffs++] = b;
dst[dstOffs++] = c;
} else {
// b0 === 0x0F
let a = -1,
b = -1,
c = -1;
if (b1 < 0x0e) {
const e = source[codeauxOffs + b1];
const z = e >>> 4,
w = e & 0x0f;
a = next++;
if (z === 0x00) b = next++;
else b = vertexfifo[z - 1];
if (w === 0x00) c = next++;
else c = vertexfifo[w - 1];
pushfifo(vertexfifo, a);
if (z === 0x00) pushfifo(vertexfifo, b);
if (w === 0x00) pushfifo(vertexfifo, c);
} else {
const e = source[dataOffs++];
if (e === 0x00) next = 0;
const z = e >>> 4,
w = e & 0x0f;
if (b1 === 0x0e) a = next++;
else a = decodeIndex(readLEB128());
if (z === 0x00) b = next++;
else if (z === 0x0f) b = decodeIndex(readLEB128());
else b = vertexfifo[z - 1];
if (w === 0x00) c = next++;
else if (w === 0x0f) c = decodeIndex(readLEB128());
else c = vertexfifo[w - 1];
pushfifo(vertexfifo, a);
if (z === 0x00 || z === 0x0f) pushfifo(vertexfifo, b);
if (w === 0x00 || w === 0x0f) pushfifo(vertexfifo, c);
}
pushfifo(edgefifo, a);
pushfifo(edgefifo, b);
pushfifo(edgefifo, b);
pushfifo(edgefifo, c);
pushfifo(edgefifo, c);
pushfifo(edgefifo, a);
dst[dstOffs++] = a;
dst[dstOffs++] = b;
dst[dstOffs++] = c;
}
}
};
MeshoptDecoder.decodeIndexSequence = (target, count, byteStride, source) => {
assert(source[0] === 0xd1);
assert(byteStride === 2 || byteStride === 4);
let dst;
if (byteStride === 2) dst = new Uint16Array(target.buffer);
else dst = new Uint32Array(target.buffer);
let dataOffs = 0x01;
function readLEB128() {
let n = 0;
for (let i = 0; ; i += 7) {
const b = source[dataOffs++];
n |= (b & 0x7f) << i;
if (b < 0x80) return n;
}
}
const last = new Uint32Array(2);
for (let i = 0; i < count; i++) {
const v = readLEB128();
const b = v & 0x01;
const delta = dezig(v >>> 1);
dst[i] = last[b] += delta;
}
};
MeshoptDecoder.decodeGltfBuffer = (target, count, size, source, mode, filter) => {
var table = {
ATTRIBUTES: MeshoptDecoder.decodeVertexBuffer,
TRIANGLES: MeshoptDecoder.decodeIndexBuffer,
INDICES: MeshoptDecoder.decodeIndexSequence,
};
assert(table[mode] !== undefined);
table[mode](target, count, size, source, filter);
};
// node.js interface:
// for (let k in MeshoptDecoder) { exports[k] = MeshoptDecoder[k]; }
export { MeshoptDecoder };

212 node_modules/meshoptimizer/meshopt_encoder.js generated vendored Normal file

File diff suppressed because one or more lines are too long

21 node_modules/meshoptimizer/meshopt_encoder.module.d.ts generated vendored Normal file

@@ -0,0 +1,21 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2024, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
export type ExpMode = 'Separate' | 'SharedVector' | 'SharedComponent' | 'Clamped';
export const MeshoptEncoder: {
supported: boolean;
ready: Promise<void>;
reorderMesh: (indices: Uint32Array, triangles: boolean, optsize: boolean) => [Uint32Array, number];
reorderPoints: (positions: Float32Array, positions_stride: number) => Uint32Array;
encodeVertexBuffer: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeIndexBuffer: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeIndexSequence: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeGltfBuffer: (source: Uint8Array, count: number, size: number, mode: string) => Uint8Array;
encodeFilterOct: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
encodeFilterQuat: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
encodeFilterExp: (source: Float32Array, count: number, stride: number, bits: number, mode?: ExpMode) => Uint8Array;
};

204 node_modules/meshoptimizer/meshopt_encoder.module.js generated vendored Normal file

File diff suppressed because one or more lines are too long

176 node_modules/meshoptimizer/meshopt_encoder.test.js generated vendored Normal file

@@ -0,0 +1,176 @@
var assert = require('assert').strict;
var encoder = require('./meshopt_encoder.js');
var decoder = require('./meshopt_decoder.js');

process.on('unhandledRejection', (error) => {
	console.log('unhandledRejection', error);
	process.exit(1);
});

function bytes(view) {
	return new Uint8Array(view.buffer, view.byteOffset, view.byteLength);
}

var tests = {
	reorderMesh: function () {
		var indices = new Uint32Array([4, 2, 5, 3, 1, 4, 0, 1, 3, 1, 2, 4]);
		var expected = new Uint32Array([0, 1, 2, 3, 1, 0, 4, 3, 0, 5, 3, 4]);
		var remap = new Uint32Array([5, 3, 1, 4, 0, 2]);
		var res = encoder.reorderMesh(indices, /* triangles= */ true, /* optsize= */ true);
		assert.deepEqual(indices, expected);
		assert.deepEqual(res[0], remap);
		assert.equal(res[1], 6); // unique
	},
	reorderPoints: function () {
		var points = new Float32Array([1, 1, 1, 11, 11, 11, 2, 2, 2, 12, 12, 12]);
		var expected = new Uint32Array([0, 2, 1, 3]);
		var remap = encoder.reorderPoints(points, 3);
		assert.deepEqual(remap, expected);
	},
	roundtripVertexBuffer: function () {
		var data = new Uint8Array(16 * 4);
		// this tests 0/2/4/8 bit groups in one stream
		for (var i = 0; i < 16; ++i) {
			data[i * 4 + 0] = 0;
			data[i * 4 + 1] = i * 1;
			data[i * 4 + 2] = i * 2;
			data[i * 4 + 3] = i * 8;
		}
		var encoded = encoder.encodeVertexBuffer(data, 16, 4);
		var decoded = new Uint8Array(16 * 4);
		decoder.decodeVertexBuffer(decoded, 16, 4, encoded);
		assert.deepEqual(decoded, data);
	},
	roundtripIndexBuffer: function () {
		var data = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
		var encoded = encoder.encodeIndexBuffer(bytes(data), data.length, 4);
		var decoded = new Uint32Array(data.length);
		decoder.decodeIndexBuffer(bytes(decoded), data.length, 4, encoded);
		assert.deepEqual(decoded, data);
	},
	roundtripIndexBuffer16: function () {
		var data = new Uint16Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
		var encoded = encoder.encodeIndexBuffer(bytes(data), data.length, 2);
		var decoded = new Uint16Array(data.length);
		decoder.decodeIndexBuffer(bytes(decoded), data.length, 2, encoded);
		assert.deepEqual(decoded, data);
	},
	roundtripIndexSequence: function () {
		var data = new Uint32Array([0, 1, 51, 2, 49, 1000]);
		var encoded = encoder.encodeIndexSequence(bytes(data), data.length, 4);
		var decoded = new Uint32Array(data.length);
		decoder.decodeIndexSequence(bytes(decoded), data.length, 4, encoded);
		assert.deepEqual(decoded, data);
	},
	roundtripIndexSequence16: function () {
		var data = new Uint16Array([0, 1, 51, 2, 49, 1000]);
		var encoded = encoder.encodeIndexSequence(bytes(data), data.length, 2);
		var decoded = new Uint16Array(data.length);
		decoder.decodeIndexSequence(bytes(decoded), data.length, 2, encoded);
		assert.deepEqual(decoded, data);
	},
	encodeFilterOct8: function () {
		var data = new Float32Array([1, 0, 0, 0, 0, -1, 0, 0, 0.7071068, 0, 0.707168, 1, -0.7071068, 0, -0.707168, 1]);
		var expected = new Uint8Array([0x7f, 0, 0x7f, 0, 0, 0x81, 0x7f, 0, 0x3f, 0, 0x7f, 0x7f, 0x81, 0x40, 0x7f, 0x7f]);
		// 4 vectors, encode each vector into 4 bytes with 8 bits of precision/component
		var encoded = encoder.encodeFilterOct(data, 4, 4, 8);
		assert.deepEqual(encoded, expected);
	},
	encodeFilterOct12: function () {
		var data = new Float32Array([1, 0, 0, 0, 0, -1, 0, 0, 0.7071068, 0, 0.707168, 1, -0.7071068, 0, -0.707168, 1]);
		var expected = new Uint16Array([0x7ff, 0, 0x7ff, 0, 0x0, 0xf801, 0x7ff, 0, 0x3ff, 0, 0x7ff, 0x7fff, 0xf801, 0x400, 0x7ff, 0x7fff]);
		// 4 vectors, encode each vector into 8 bytes with 12 bits of precision/component
		var encoded = encoder.encodeFilterOct(data, 4, 8, 12);
		assert.deepEqual(encoded, bytes(expected));
	},
	encodeFilterQuat12: function () {
		var data = new Float32Array([1, 0, 0, 0, 0, -1, 0, 0, 0.7071068, 0, 0, 0.707168, -0.7071068, 0, 0, -0.707168]);
		var expected = new Uint16Array([0, 0, 0, 0x7fc, 0, 0, 0, 0x7fd, 0x7ff, 0, 0, 0x7ff, 0x7ff, 0, 0, 0x7ff]);
		// 4 quaternions, encode each quaternion into 8 bytes with 12 bits of precision/component
		var encoded = encoder.encodeFilterQuat(data, 4, 8, 12);
		assert.deepEqual(encoded, bytes(expected));
	},
	encodeFilterExp: function () {
		var data = new Float32Array([1, -23.4, -0.1]);
		var expected = new Uint32Array([0xf7000200, 0xf7ffd133, 0xf7ffffcd]);
		// 1 vector with 3 components (12 bytes), encode each vector into 12 bytes with 15 bits of precision/component
		var encoded = encoder.encodeFilterExp(data, 1, 12, 15);
		assert.deepEqual(encoded, bytes(expected));
	},
	encodeFilterExpMode: function () {
		var data = new Float32Array([1, -23.4, -0.1, 11.0]);
		var expected = new Uint32Array([0xf3002000, 0xf7ffd133, 0xf3fffccd, 0xf7001600]);
		// 2 vectors with 2 components (8 bytes), encode each vector into 8 bytes with 15 bits of precision/component
		var encoded = encoder.encodeFilterExp(data, 2, 8, 15, 'SharedComponent');
		assert.deepEqual(encoded, bytes(expected));
	},
	encodeFilterExpClamp: function () {
		var data = new Float32Array([1, -23.4, -0.1]);
		var expected = new Uint32Array([0xf3002000, 0xf7ffd133, 0xf2fff99a]);
		// 1 vector with 3 components (12 bytes), encode each vector into 12 bytes with 15 bits of precision/component
		// exponents are separate but clamped to 0
		var encoded = encoder.encodeFilterExp(data, 1, 12, 15, 'Clamped');
		assert.deepEqual(encoded, bytes(expected));
	},
	encodeGltfBuffer: function () {
		var data = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
		var encoded = encoder.encodeGltfBuffer(bytes(data), data.length, 4, 'TRIANGLES');
		var decoded = new Uint32Array(data.length);
		decoder.decodeGltfBuffer(bytes(decoded), data.length, 4, encoded, 'TRIANGLES');
		assert.deepEqual(decoded, data);
	},
};

Promise.all([encoder.ready, decoder.ready]).then(() => {
	var count = 0;
	for (var key in tests) {
		tests[key]();
		count++;
	}
	console.log(count, 'tests passed');
});
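
The tests above check filter output against fixed byte patterns but never decode it. A sketch of the full octahedral round trip, assuming the decoder's 'OCTAHEDRAL' filter name as used by EXT_meshopt_compression (the normal values are illustrative):

```js
var encoder = require('./meshopt_encoder.js');
var decoder = require('./meshopt_decoder.js');

Promise.all([encoder.ready, decoder.ready]).then(() => {
	// two unit normals as x/y/z/w quadruples
	var normals = new Float32Array([0, 0, 1, 0, 0.7071068, 0.7071068, 0, 0]);
	// oct-encode to 4 bytes/vector at 8 bits/component, then compress the stream
	var filtered = encoder.encodeFilterOct(normals, 2, 4, 8);
	var encoded = encoder.encodeVertexBuffer(filtered, 2, 4);
	// decoding with the matching filter reverses the transform in one step
	var decoded = new Uint8Array(2 * 4);
	decoder.decodeVertexBuffer(decoded, 2, 4, encoded, 'OCTAHEDRAL');
	// an Int8Array view of decoded holds snorm8 x/y/z/w; divide by 127 to recover unit vectors
});
```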

376
node_modules/meshoptimizer/meshopt_simplifier.js generated vendored Normal file

File diff suppressed because one or more lines are too long

47
node_modules/meshoptimizer/meshopt_simplifier.module.d.ts generated vendored Normal file

@@ -0,0 +1,47 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2024, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
export type Flags = 'LockBorder' | 'Sparse' | 'ErrorAbsolute' | 'Prune';
export const MeshoptSimplifier: {
	supported: boolean;
	ready: Promise<void>;
	useExperimentalFeatures: boolean;
	compactMesh: (indices: Uint32Array) => [Uint32Array, number];
	simplify: (
		indices: Uint32Array,
		vertex_positions: Float32Array,
		vertex_positions_stride: number,
		target_index_count: number,
		target_error: number,
		flags?: Flags[]
	) => [Uint32Array, number];
	// Experimental; requires useExperimentalFeatures to be set to true
	simplifyWithAttributes: (
		indices: Uint32Array,
		vertex_positions: Float32Array,
		vertex_positions_stride: number,
		vertex_attributes: Float32Array,
		vertex_attributes_stride: number,
		attribute_weights: number[],
		vertex_lock: Uint8Array | null,
		target_index_count: number,
		target_error: number,
		flags?: Flags[]
	) => [Uint32Array, number];
	getScale: (vertex_positions: Float32Array, vertex_positions_stride: number) => number;
	// Experimental; requires useExperimentalFeatures to be set to true
	simplifyPoints: (
		vertex_positions: Float32Array,
		vertex_positions_stride: number,
		target_vertex_count: number,
		vertex_colors?: Float32Array,
		vertex_colors_stride?: number,
		color_weight?: number
	) => Uint32Array;
};
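
A minimal sketch of the core simplify call from the declarations above (geometry and thresholds are illustrative):

```js
var simplifier = require('./meshopt_simplifier.js');

simplifier.ready.then(() => {
	// a planar quad made of two triangles; ask for a single triangle at <=1% relative error
	var indices = new Uint32Array([0, 1, 2, 2, 1, 3]);
	var positions = new Float32Array([0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0]);
	var res = simplifier.simplify(indices, positions, 3, /* target_index_count= */ 3, /* target_error= */ 0.01);
	var simplified = res[0]; // new index buffer, still addressing the original vertices
	var error = res[1]; // relative error actually reached
});
```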

368
node_modules/meshoptimizer/meshopt_simplifier.module.js generated vendored Normal file

File diff suppressed because one or more lines are too long

169
node_modules/meshoptimizer/meshopt_simplifier.test.js generated vendored Normal file

@@ -0,0 +1,169 @@
var assert = require('assert').strict;
var simplifier = require('./meshopt_simplifier.js');

process.on('unhandledRejection', (error) => {
	console.log('unhandledRejection', error);
	process.exit(1);
});

simplifier.useExperimentalFeatures = true;

var tests = {
	compactMesh: function () {
		var indices = new Uint32Array([0, 1, 3, 3, 1, 5]);
		var expected = new Uint32Array([0, 1, 2, 2, 1, 3]);
		var missing = 2 ** 32 - 1;
		var remap = new Uint32Array([0, 1, missing, 2, missing, 3]);
		var res = simplifier.compactMesh(indices);
		assert.deepEqual(indices, expected);
		assert.deepEqual(res[0], remap);
		assert.equal(res[1], 4); // unique
	},
	simplify: function () {
		//    0
		//   1 2
		//  3 4 5
		var indices = new Uint32Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
		var positions = new Float32Array([0, 4, 0, 0, 1, 0, 2, 2, 0, 0, 0, 0, 1, 0, 0, 4, 0, 0]);
		var res = simplifier.simplify(indices, positions, 3, /* target indices */ 3, /* target error */ 0.01);
		var expected = new Uint32Array([0, 5, 3]);
		assert.deepEqual(res[0], expected);
		assert.equal(res[1], 0); // error
	},
	simplify16: function () {
		//    0
		//   1 2
		//  3 4 5
		var indices = new Uint16Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
		var positions = new Float32Array([0, 4, 0, 0, 1, 0, 2, 2, 0, 0, 0, 0, 1, 0, 0, 4, 0, 0]);
		var res = simplifier.simplify(indices, positions, 3, /* target indices */ 3, /* target error */ 0.01);
		var expected = new Uint16Array([0, 5, 3]);
		assert.deepEqual(res[0], expected);
		assert.equal(res[1], 0); // error
	},
	simplifyLockBorder: function () {
		//    0
		//   1 2
		//  3 4 5
		var indices = new Uint32Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
		var positions = new Float32Array([0, 2, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0]);
		var res = simplifier.simplify(indices, positions, 3, /* target indices */ 3, /* target error */ 0.01, ['LockBorder']);
		var expected = new Uint32Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
		assert.deepEqual(res[0], expected);
		assert.equal(res[1], 0); // error
	},
	simplifyAttr: function () {
		var vb_pos = new Float32Array(8 * 3 * 3);
		var vb_att = new Float32Array(8 * 3 * 3);
		for (var y = 0; y < 8; ++y) {
			// first four rows are a blue gradient, next four rows are a yellow gradient
			var r = y < 4 ? 0.8 + y * 0.05 : 0;
			var g = y < 4 ? 0.8 + y * 0.05 : 0;
			var b = y < 4 ? 0 : 0.8 + (7 - y) * 0.05;
			for (var x = 0; x < 3; ++x) {
				vb_pos[(y * 3 + x) * 3 + 0] = x;
				vb_pos[(y * 3 + x) * 3 + 1] = y;
				vb_pos[(y * 3 + x) * 3 + 2] = 0.03 * x;
				vb_att[(y * 3 + x) * 3 + 0] = r;
				vb_att[(y * 3 + x) * 3 + 1] = g;
				vb_att[(y * 3 + x) * 3 + 2] = b;
			}
		}
		var ib = new Uint32Array(7 * 2 * 6);
		for (var y = 0; y < 7; ++y) {
			for (var x = 0; x < 2; ++x) {
				ib[(y * 2 + x) * 6 + 0] = (y + 0) * 3 + (x + 0);
				ib[(y * 2 + x) * 6 + 1] = (y + 0) * 3 + (x + 1);
				ib[(y * 2 + x) * 6 + 2] = (y + 1) * 3 + (x + 0);
				ib[(y * 2 + x) * 6 + 3] = (y + 1) * 3 + (x + 0);
				ib[(y * 2 + x) * 6 + 4] = (y + 0) * 3 + (x + 1);
				ib[(y * 2 + x) * 6 + 5] = (y + 1) * 3 + (x + 1);
			}
		}
		var attr_weights = [0.01, 0.01, 0.01];
		var res = simplifier.simplifyWithAttributes(ib, vb_pos, 3, vb_att, 3, attr_weights, null, 6 * 3, 1e-2);
		var expected = new Uint32Array([0, 2, 9, 9, 2, 11, 9, 11, 12, 12, 11, 14, 12, 14, 21, 21, 14, 23]);
		assert.deepEqual(res[0], expected);
	},
	simplifyLockFlags: function () {
		//    0
		//   1 2
		//  3 4 5
		var indices = new Uint32Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
		var positions = new Float32Array([0, 2, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0]);
		var locks = new Uint8Array([1, 1, 1, 1, 0, 1]); // only vertex 4 can move
		var res = simplifier.simplifyWithAttributes(indices, positions, 3, new Float32Array(), 1, [], locks, 3, 0.01);
		var expected = new Uint32Array([0, 2, 1, 1, 2, 3, 2, 5, 3]);
		assert.deepEqual(res[0], expected);
		assert.equal(res[1], 0); // error
	},
	getScale: function () {
		var positions = new Float32Array([0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 3]);
		assert(simplifier.getScale(positions, 3) == 3.0);
	},
	simplifyPoints: function () {
		var positions = new Float32Array([0, 0, 0, 100, 0, 0, 100, 1, 1, 110, 0, 0]);
		var colors = new Float32Array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]);
		var expected = new Uint32Array([0, 1]);
		var expectedC = new Uint32Array([0, 2]);
		var res = simplifier.simplifyPoints(positions, 3, 2);
		assert.deepEqual(res, expected);
		// note: recommended value for color_weight is 1e-2 but here we push color weight to be very high to bias candidate selection for testing
		var resC1 = simplifier.simplifyPoints(positions, 3, 2, colors, 3, 1e-1);
		assert.deepEqual(resC1, expectedC);
		var resC2 = simplifier.simplifyPoints(positions, 3, 2, colors, 3, 1e-2);
		assert.deepEqual(resC2, expected);
	},
};

Promise.all([simplifier.ready]).then(() => {
	var count = 0;
	for (var key in tests) {
		tests[key]();
		count++;
	}
	console.log(count, 'tests passed');
});
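
Note that simplify leaves the vertex buffer untouched; a sketch of the usual follow-up with compactMesh (the index values are illustrative, matching the compactMesh test above):

```js
var simplifier = require('./meshopt_simplifier.js');

simplifier.ready.then(() => {
	// index buffer left sparse by simplification; renumber it in place
	var indices = new Uint32Array([0, 1, 3, 3, 1, 5]);
	var res = simplifier.compactMesh(indices);
	var remap = res[0]; // old vertex index -> new index, 0xffffffff for unused vertices
	var unique = res[1]; // number of vertices to retain after applying remap
});
```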

28
node_modules/meshoptimizer/package.json generated vendored Normal file

@@ -0,0 +1,28 @@
{
	"name": "meshoptimizer",
	"version": "0.22.0",
	"description": "Mesh optimization library that makes meshes smaller and faster to render",
	"author": "Arseny Kapoulkine",
	"license": "MIT",
	"bugs": "https://github.com/zeux/meshoptimizer/issues",
	"homepage": "https://github.com/zeux/meshoptimizer",
	"keywords": [
		"compression",
		"mesh"
	],
	"repository": {
		"type": "git",
		"url": "https://github.com/zeux/meshoptimizer"
	},
	"files": [
		"*.js",
		"*.ts"
	],
	"main": "index.js",
	"module": "index.module.js",
	"types": "index.module.d.ts",
	"scripts": {
		"test": "node meshopt_encoder.test.js && node meshopt_decoder.test.js && node meshopt_simplifier.test.js && node meshopt_clusterizer.test.js",
		"prepublishOnly": "npm test"
	}
}