[GH-ISSUE #53] Better error handling/user guidance for missing local models #27

Open
opened 2026-03-03 13:52:18 +03:00 by kerem · 1 comment
Owner

Originally created by @0xdevalias on GitHub (Aug 23, 2024).
Original GitHub issue: https://github.com/jehna/humanify/issues/53

Currently when trying to run with a local model that isn't downloaded, the app crashes with an error such as the following:

```
⇒ npx humanifyjs local --disableGpu foo.js
(node:96922) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
node:internal/fs/promises:638
  return new FileHandle(await PromisePrototypeThen(
                        ^

Error: ENOENT: no such file or directory, open '/Users/devalias/.humanifyjs/models/Phi-3.1-mini-4k-instruct-Q4_K_M.gguf'
    at async Object.open (node:internal/fs/promises:638:25)
    at async GgufFsFileReader._readByteRange (file:///Users/devalias/dev/foohumanify/node_modules/node-llama-cpp/dist/gguf/fileReaders/GgufFsFileReader.js:49:20)
    at async GgufFsFileReader.<anonymous> (file:///Users/devalias/dev/foohumanify/node_modules/node-llama-cpp/dist/gguf/fileReaders/GgufFsFileReader.js:44:40)
    at async withLock (file:///Users/devalias/dev/foohumanify/node_modules/lifecycle-utils/dist/withLock.js:36:16)
    at async GgufFsFileReader._readToExpandBufferUpToOffset (file:///Users/devalias/dev/foohumanify/node_modules/node-llama-cpp/dist/gguf/fileReaders/GgufFsFileReader.js:41:16)
    at async parseMagicAndVersion (file:///Users/devalias/dev/foohumanify/node_modules/node-llama-cpp/dist/gguf/parser/parseGguf.js:37:27)
    at async parseGguf (file:///Users/devalias/dev/foohumanify/node_modules/node-llama-cpp/dist/gguf/parser/parseGguf.js:11:29)
    at async readSingleFile (file:///Users/devalias/dev/foohumanify/node_modules/node-llama-cpp/dist/gguf/readGgufFileInfo.js:34:16)
    at async readGgufFileInfo (file:///Users/devalias/dev/foohumanify/node_modules/node-llama-cpp/dist/gguf/readGgufFileInfo.js:45:16)
    at async LlamaModel._create (file:///Users/devalias/dev/foohumanify/node_modules/node-llama-cpp/dist/evaluator/LlamaModel/LlamaModel.js:411:26) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '/Users/devalias/.humanifyjs/models/Phi-3.1-mini-4k-instruct-Q4_K_M.gguf'
}

Node.js v22.6.0
```

It would be useful to show a more user-friendly error that explains how to resolve the issue.
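One low-cost approach, sketched below: check that the model file exists before handing it to node-llama-cpp, and fail with actionable guidance instead of a raw `ENOENT`. The helper name and message wording here are assumptions for illustration, not the project's actual API.

```typescript
import { existsSync } from "node:fs";

// Hypothetical guard; humanify's real code paths may differ.
export function assertModelDownloaded(modelName: string, modelPath: string): void {
  if (!existsSync(modelPath)) {
    throw new Error(
      `Local model "${modelName}" not found at ${modelPath}.\n` +
        `Download it first with: humanify download ${modelName}`
    );
  }
}
```

Calling this before `LlamaModel` creation would turn the stack trace above into a one-line hint pointing at the download command.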

I ran into this while trying to test/replicate the following:

- https://github.com/jehna/humanify/issues/50

@0xdevalias commented on GitHub (Aug 23, 2024):

I can see that the instructions are in the README here:

- https://github.com/jehna/humanify#local-mode

Which suggests I need to run `humanify download 2b` first.

I wonder if it might make more sense to have the local model download as a sub-command of `humanify local`, since that's where I first looked for help on downloading models. It didn't even occur to me to check the root-level command, because local things seemed to be 'scoped' under the `local` command:

```
⇒ npx humanifyjs local -h
(node:97623) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Usage: humanify local [options] <input>

Use a local LLM to unminify code

Arguments:
  input                     The input minified Javascript file

Options:
  -m, --model <model>       The model to use (default: "2b")
  -o, --outputDir <output>  The output directory (default: "output")
  -s, --seed <seed>         Seed for the model to get reproduceable results (leave out for random seed)
  --disableGpu              Disable GPU acceleration
  --verbose                 Show verbose output
  -h, --help                display help for command
```

There also seems to be very little information output during the download. It would be nice to know a bit more: which model is being downloaded, from where, where it's being saved, how large it is, etc.:

```
⇒ npx humanifyjs download 2b
(node:97932) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Downloaded 1.63 GB
```
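A more informative progress line could carry the alias, source, destination, and sizes. A minimal sketch; the field names and message format here are assumptions, not the CLI's actual output:

```typescript
// Illustrative progress formatter; not humanify's real implementation.
export function formatDownloadProgress(opts: {
  alias: string;
  url: string;
  dest: string;
  receivedBytes: number;
  totalBytes: number;
}): string {
  const gb = (n: number) => (n / 1e9).toFixed(2) + " GB";
  const pct = ((opts.receivedBytes / opts.totalBytes) * 100).toFixed(1);
  return (
    `Downloading "${opts.alias}" from ${opts.url} to ${opts.dest}: ` +
    `${gb(opts.receivedBytes)} / ${gb(opts.totalBytes)} (${pct}%)`
  );
}
```

Printed on each progress tick, this would answer all of the "which model / from where / to where / how large" questions at a glance.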

I guess it does provide slightly more info when the download is completed:

```
⇒ npx humanifyjs download 2b
(node:97932) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Model "2b" downloaded to /Users/devalias/.humanifyjs/models/Phi-3.1-mini-4k-instruct-Q4_K_M.gguf
```

I can see it's downloaded here:

```
⇒ ls ~/.humanifyjs/models
Phi-3.1-mini-4k-instruct-Q4_K_M.gguf
```

And the code for that is here:

https://github.com/jehna/humanify/blob/85d17e73d6b760bf896d01dd99fa3a5f98ea2848/src/local-models.ts#L13-L25

I also notice that `MODEL_DIRECTORY` is currently hardcoded. I wonder if it would be useful to be able to specify/customize it via a CLI arg, env variable, etc.
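For illustration, resolving the directory from an environment variable with the current hardcoded location as the fallback could look like this. `HUMANIFY_MODEL_DIRECTORY` is a hypothetical variable name, not one the project currently supports:

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical resolution order: env override first, then the current default.
export function resolveModelDirectory(
  env: Record<string, string | undefined> = process.env
): string {
  return env.HUMANIFY_MODEL_DIRECTORY ?? join(homedir(), ".humanifyjs", "models");
}
```

A `--modelDirectory` CLI flag could layer on top of the same function, taking precedence over both.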

It seems the `humanify local` command uses `getModelPath`:

https://github.com/jehna/humanify/blob/85d17e73d6b760bf896d01dd99fa3a5f98ea2848/src/plugins/local-llm-rename/llama.ts#L19-L22

Which only seems to work for model aliases defined in `MODELS`:

https://github.com/jehna/humanify/blob/85d17e73d6b760bf896d01dd99fa3a5f98ea2848/src/local-models.ts#L69-L75

Even though the error text for `humanify download` sounds as though it would be capable of downloading any named model:

https://github.com/jehna/humanify/blob/85d17e73d6b760bf896d01dd99fa3a5f98ea2848/src/local-models.ts#L77-L85

And usually for LLM apps, the `--model` param lets you specify arbitrary models from Hugging Face or similar.
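Accepting either a known alias or an arbitrary model reference could be sketched as below. The `MODELS` table here is a stand-in for the project's real one, and the URL/path heuristics are assumptions, not existing behaviour:

```typescript
// Stand-in for the real alias table in src/local-models.ts.
const MODELS: Record<string, string> = {
  "2b": "Phi-3.1-mini-4k-instruct-Q4_K_M.gguf"
};

export type ModelRef =
  | { kind: "alias"; fileName: string }
  | { kind: "url"; url: string }
  | { kind: "file"; path: string };

// Classify a --model value: URL, local path, or known alias.
export function resolveModelArg(model: string): ModelRef {
  if (/^https?:\/\//.test(model)) return { kind: "url", url: model };
  if (model.includes("/") || model.endsWith(".gguf"))
    return { kind: "file", path: model };
  const fileName = MODELS[model];
  if (fileName !== undefined) return { kind: "alias", fileName };
  throw new Error(
    `Unknown model "${model}". Known aliases: ${Object.keys(MODELS).join(", ")}`
  );
}
```

With something like this, `--model 2b`, `--model ./custom.gguf`, and `--model https://huggingface.co/.../x.gguf` could all share one code path, and unknown aliases would fail with the list of valid ones.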


Edit: Created a new issue related to the download progress/etc:

- https://github.com/jehna/humanify/issues/58
