To disable ZSH’s annoying expansion when the path is not complete, add this zstyle to your ~/.zshrc:
zstyle ':completion:*' completer _complete _complete:-fuzzy _correct _approximate _ignored _expand
When running pnpm run dev on a Svelte + Vite + shadcn project, I received this error:
[dev:svelte] [error] No parser could be inferred for file
[dev:svelte] [error] No parser could be inferred for file
[dev:svelte] [error] No parser could be inferred for file
To solve this, create a .prettierrc file and declare a parser for the files Prettier cannot infer.
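A minimal sketch, assuming the files Prettier trips on are the .svelte components and that prettier-plugin-svelte is installed as a dev dependency; it routes *.svelte files to the svelte parser:
{
  "plugins": ["prettier-plugin-svelte"],
  "overrides": [
    {
      "files": "*.svelte",
      "options": { "parser": "svelte" }
    }
  ]
}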
When running a shadcn + Svelte + Vite project, I got this error:
▲ [WARNING] Cannot find base config file "./.svelte-kit/tsconfig.json" [tsconfig.json]
To solve this, edit package.json and add "prepare": "svelte-kit sync" to the scripts. For example:
"scripts": {
"dev": "vite dev",
"build": "vite build",
"build:registry": "tsx scripts/build-registry.ts",
"br": "pnpm build:registry",
"preview": "vite preview",
"test": "playwright test",
"prepare": "svelte-kit sync",
"sync": "svelte-kit sync",
"check": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json",
"check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch",
"test:unit": "vitest"
},
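After adding the script, reinstalling dependencies triggers the prepare hook, which runs svelte-kit sync and regenerates .svelte-kit/tsconfig.json; the commands below assume pnpm, as in the rest of this setup:
pnpm install    # runs the "prepare" lifecycle script (svelte-kit sync)
pnpm run sync   # or run svelte-kit sync directly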
Here are the quick steps to upgrade to the latest NVIDIA driver (which is needed for running NVIDIA NeMo in Docker).
1. Remove the existing driver
sudo apt purge "nvidia*" "libnvidia*"
2. Install the latest NVIDIA Driver
Add the PPA and check which driver versions are available to install:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo ubuntu-drivers list
Then install the version you want, for example:
sudo apt install nvidia-driver-565
If you get the error Failed to initialize NVML: Driver/library version mismatch, the solution is to reboot.
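After the reboot, verify that the new driver is active:
nvidia-smi   # should report the freshly installed driver version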
If you are using the NVIDIA Container Toolkit, reinstall it and reconfigure the Docker runtime:
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
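To confirm the toolkit is wired into Docker after the restart, a quick sanity check is to run nvidia-smi from a throwaway container (the ubuntu image here is just a convenient choice):
# should print the same GPU table as nvidia-smi on the host
sudo docker run --rm --gpus all ubuntu nvidia-smi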
There is a quick way to do remote desktop on Ubuntu 24.04: enable its built-in Desktop Sharing and connect using Remmina from the client. Here are the steps:
1. Enable Desktop Sharing on the host
Go to “System” -> “Desktop Sharing” and toggle both Desktop Sharing and Remote Control on. Under Login Details, fill in the RDP username and password (a terminal alternative is sketched after these steps).
2. Connect via Client
Open Remmina and click “+”. Choose RDP and enter the remote OS user’s username and password (not the RDP credentials yet). Once connected, fill in the RDP Login Details. Yes, there are two username/password pairs here, and you can set them to the same values.
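If you prefer to configure the host side from a terminal, GNOME’s grdctl tool can toggle the same Desktop Sharing settings; treat the exact subcommands and the sample credentials as assumptions about the gnome-remote-desktop version shipped with Ubuntu 24.04:
# enable RDP sharing and allow full remote control (not view-only)
grdctl rdp enable
grdctl rdp disable-view-only
# set the RDP login details (sample credentials, replace with your own)
grdctl rdp set-credentials rdpuser rdppassword
# check the current state
grdctl status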
When I tried to run nvidia-smi inside Docker with multiple GPUs, it gave errors. I am using the Docker Python SDK (the docker module) to start the container. Checking the NVIDIA GPUs showed only a single device instead of multiple:
ls /proc/driver/nvidia/gpus
The solution is to ensure that gpus=all (or gpus=2) is passed properly. Running the container manually first using
docker run --name caviar --detach --gpus all -it --privileged ghcr.io/ehfd/nvidia-dind:latest
shows that all the GPUs are loaded, so the culprit is the Docker API call. The proper way to do it is shown below.
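Here is a minimal sketch with the Docker SDK for Python, mirroring the --gpus all flag of the manual run above via a DeviceRequest (count=-1 means all GPUs; use count=2 for exactly two):
import docker
from docker.types import DeviceRequest

client = docker.from_env()

# Equivalent of `docker run --gpus all`: without an explicit DeviceRequest
# the container does not get all GPUs exposed.
container = client.containers.run(
    "ghcr.io/ehfd/nvidia-dind:latest",
    name="caviar",
    detach=True,
    privileged=True,
    tty=True,
    device_requests=[DeviceRequest(count=-1, capabilities=[["gpu"]])],
)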
When running docker compose up for a docker-compose.yaml, I got this error:
docker compose up
WARN[0000] /docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
project name must not be empty
The quick solution is to move docker-compose.yaml out of the filesystem root into a normal directory such as /home/USER (like /home/ubuntu in this case): Compose derives the project name from the directory name, and the root directory leaves it empty. Then execute docker compose up from there.
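For example (using /home/ubuntu as the target directory, as above):
mv /docker-compose.yaml /home/ubuntu/
cd /home/ubuntu
docker compose up   # the project name now defaults to "ubuntu"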
When running vLLM, I got the error “ValueError: Model architectures ['Qwen2ForCausalLM'] failed to be inspected” from this command:
vllm serve unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit --enable-reasoning --reasoning-parser deepseek_r1 --quantization bitsandbytes --load-format bitsandbytes --enable-chunked-prefill --max_model_len 6704
The solution is to set VLLM_USE_MODELSCOPE=True. For example:
VLLM_USE_MODELSCOPE=True vllm serve unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit --enable-reasoning --reasoning-parser deepseek_r1 --quantization bitsandbytes --load-format bitsandbytes --enable-chunked-prefill --max_model_len 6704
Fixing a problem running Vertex AI local-run with the GPU-based training Docker image asia-docker.pkg.dev/vertex-ai/training/pytorch-gpu.2-3.py310:latest, which produces an error with the Transformers Trainer():
gcloud ai custom-jobs local-run --gpu --executor-image-uri=asia-docker.pkg.dev/vertex-ai/training/pytorch-gpu.2-3.py310:latest --local-package-path=YOUR_PYTHON_PACKAGE --script=YOUR_SCRIPT_PYTHON_FILE
The error appears as:
/opt/conda/lib/python3.10/site-packages/transformers/training_args.py:1575: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
warnings.warn(
Setting up Trainer...
Starting training...
0%| | 0/3060 [00:00<?, ?it/s]terminate called after throwing an instance of 'std::runtime_error'
what(): torch_xla/csrc/runtime/runtime.cc:31 : $PJRT_DEVICE is not set.
exit status 139
ERROR: (gcloud.ai.custom-jobs.local-run)
Docker failed with error code 139.
Command: docker run --rm --runtime nvidia -v -e --ipc host
This what(): torch_xla/csrc/runtime/runtime.cc:31 : $PJRT_DEVICE is not set. problem is apparently caused by a PyTorch (torch_xla) issue.
Downloading the Vertex AI Docker image directly and running it locally with a plain docker run will trigger an `exit 1` error. The quick solution is to use
gcloud ai custom-jobs local-run
The details are at https://cloud.google.com/vertex-ai/docs/training/containerize-run-code-local