Set up 2025 (nick)
Using Ubuntu 24.04 LTS which already has python3.12
1. install tree
sudo apt-get install tree
sudo apt install python3.12-venv
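With python3.12-venv installed, creating an environment looks like this. A minimal sketch: the /tmp path is a throwaway example, and the python3 fallback is an assumption for portability, not from these notes.

```shell
# Sketch: create a throwaway virtual environment with the venv module.
# Falls back to plain python3 if python3.12 isn't on PATH (assumption).
py=$(command -v python3.12 || command -v python3)
"$py" -m venv /tmp/example_venv          # create the environment
/tmp/example_venv/bin/python --version   # confirm it works
rm -rf /tmp/example_venv                 # clean up the example
```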
2. Install Oh My Zsh
sudo apt install zsh -y
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
modify .zshrc
sudo nano ~/.zshrc
- make these changes:
# Nick added
alias python=python3.12
echo ".zshrc loaded ✅"
3. node
Download with
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
- open a new shell (or source ~/.zshrc) so nvm is available, then:
nvm install 24
4. Install pm2
npm install -g pm2
5. Install the404-API and GitHub
- from terminal:
ssh-keygen -t ed25519 -C "nrodrig1@gmail.com"
- start the ssh-agent:
eval "$(ssh-agent -s)"
- create or modify the ~/.ssh/config file:
touch ~/.ssh/config
Host github.com
AddKeysToAgent yes
IdentityFile ~/.ssh/id_ed25519
- place your ssh key in the Github page: https://github.com/settings/keys
- copy SSH public key from terminal command:
cat ~/.ssh/id_ed25519.pub
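The key flow above can be sketched end to end with a throwaway key. The temp-dir path and example email are placeholders; the real key lives at ~/.ssh/id_ed25519.

```shell
# Sketch: generate a throwaway ed25519 key pair and print the public half --
# the public key text is what gets pasted into https://github.com/settings/keys.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -C "example@example.com" -f "$tmp/id_ed25519" -N "" -q
cat "$tmp/id_ed25519.pub"   # line starts with "ssh-ed25519 ..."
rm -rf "$tmp"
```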
6. Create folder structure
mkdir applications
mkdir databases
mkdir environments
mkdir project_resources
mkdir _config_files
7. Clone the404-API
git clone git@github.com:costa-rica/The404-API.git
Set up 2025 (OBE - shared)
1. copy datastore/NWS-Avatar03 or start new using the Ubuntu 24.04 LTS image
2. use /home/shared/
Make home/ shared with all development users
sudo groupadd developers
sudo usermod -aG developers nick
sudo chown -R root:developers /home/shared
sudo chmod -R 775 /home/shared
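A quick way to sanity-check the 775 mode, shown on a temp directory so it is safe to run anywhere; substitute /home/shared on the actual server.

```shell
# Sketch: apply and verify the group-writable 775 mode on a directory.
dir=$(mktemp -d)
chmod 775 "$dir"
stat -c '%a' "$dir"   # prints 775 (owner rwx, group rwx, others r-x)
rmdir "$dir"
```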
3. install tree
sudo apt-get install tree
4. Install Oh My Zsh
sudo apt install zsh -y
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
5. Update .zshrc
prepare for node.js
export NVM_DIR="/home/shared/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
# For pm2 to work
export PM2_HOME="/home/shared/.pm2"
# Set NPM_CONFIG_PREFIX only if nvm isn't already loaded
if [ -z "$NVM_DIR" ]; then
export NPM_CONFIG_PREFIX="/home/shared/.npm"
export PATH="$NPM_CONFIG_PREFIX/bin:$PATH"
fi
alias python=python3.12
echo ".zshrc loaded ✅"
6. node
- create /home/shared/.nvm directory
- Download with
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
# Download and install Node.js:
nvm install 24
#corepack enable yarn
7. Install the404-API and GitHub
- from terminal:
ssh-keygen -t ed25519 -C "nrodrig1@gmail.com"
- start the ssh-agent:
eval "$(ssh-agent -s)"
- create or modify the ~/.ssh/config file:
touch ~/.ssh/config
Host github.com
AddKeysToAgent yes
IdentityFile ~/.ssh/id_ed25519
- place your ssh key in the Github page: https://github.com/settings/keys
- copy SSH public key from terminal command:
cat ~/.ssh/id_ed25519.pub
git clone git@github.com:costa-rica/The404-API.git
8. Install pm2
npm install -g pm2
- note (2025-05-23): I didn't need sudo last time
- note (old ?): it doesn't seem that yarn global with pm2 works ?
OBE: Set up 2025 (prior to November 2025)
1. use /home/shared/
Make home/ shared with all development users
sudo groupadd developers
sudo usermod -aG developers nick
sudo chown -R root:developers /home/shared
sudo chmod -R 775 /home/shared
- check that the group is correct:
getent group developers
- then log out and back in to activate the group privileges
- in case: change back to root/sudo:
sudo chown -R root:root
- list directories and owners:
ls -l [optional path]
2. Install Oh My Zsh
sudo apt install zsh -y
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
install tree
sudo apt-get install tree
3. Update .zshrc
prepare for node.js
export NVM_DIR="/home/shared/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
# For pm2 to work
export PM2_HOME="/home/shared/.pm2"
## No more yarn
# export PATH="/home/shared/.yarn/bin:$PATH"
# export PATH="/home/shared/.config/yarn/global/node_modules/.bin:$PATH"
# Set NPM_CONFIG_PREFIX only if nvm isn't already loaded
if [ -z "$NVM_DIR" ]; then
export NPM_CONFIG_PREFIX="/home/shared/.npm"
export PATH="$NPM_CONFIG_PREFIX/bin:$PATH"
fi
echo ".zshrc loaded ✅"
4. node
- create /home/shared/.nvm directory
- Download with
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
# Download and install Node.js:
nvm install 24
#corepack enable yarn
Assign privileges
sudo chown -R :developers /home/shared/.nvm
sudo chmod -R 775 /home/shared/.nvm # This one might not be necessary ?
5. Install the404Back and GitHub
- from terminal:
ssh-keygen -t ed25519 -C "nrodrig1@gmail.com"
- start the ssh-agent:
eval "$(ssh-agent -s)"
- create or modify the ~/.ssh/config file:
touch ~/.ssh/config
Host github.com
AddKeysToAgent yes
IdentityFile ~/.ssh/id_ed25519
- place your ssh key in the Github page: https://github.com/settings/keys
- copy SSH public key from terminal command:
cat ~/.ssh/id_ed25519.pub
git clone git@github.com:costa-rica/The404-API.git
6. Install pm2
npm install -g pm2
- note (2025-05-23): I didn't need sudo last time
- note (old ?): it doesn't seem that yarn global with pm2 works ?
Implement logrotate
Step 1 Install pm2-logrotate:
pm2 install pm2-logrotate
Step 2 Rotate logs when they grow bigger than 10MB:
pm2 set pm2-logrotate:max_size 10M
Step 3 Keep the 7 most recent rotated logs:
pm2 set pm2-logrotate:retain 7
Step 4 Compress rotated logs:
pm2 set pm2-logrotate:compress true
Step 5 Use timestamp format in rotated filenames:
pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
7. move all the .yarn .yarnrc .nvm, .pm2, etc. to /home/shared/
mv /home/nick/.yarn /home/shared/
mv /home/nick/.yarnrc /home/shared/
mv /home/nick/.nvm /home/shared/
mv /home/nick/.pm2 /home/shared/
8. nginx
sudo apt install nginx -y
sudo apt install ufw -y
sudo ufw allow ssh
sudo ufw allow "Nginx full"
sudo ufw allow from 192.168.100.166 to any port 8000
sudo ufw allow 8000
sudo ufw allow from 192.168.1.134 # allows to all ports on this machine
sudo ufw enable
- change ownership
sudo chown -R nick:nick /etc/nginx/conf.d/
- !important for the404Back
Troubleshooting
- move files and safely overwrite
rsync -av --progress ~/.yarn/ /home/shared/.yarn/
Set up 2024 and prior
This is used to set up machines that deploy web applications on Ubuntu 20.04.4 LTS (focal). The current process focuses on Python Flask websites.
- Hardware: GPU (not a CPU)
Update and install Python
sudo apt update && sudo apt upgrade -y
sudo apt install software-properties-common -y
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt-get install python3.11-full -y
#See all python installed in machine
ls /usr/bin/python*
Install Oh My Zsh
sudo apt install zsh -y
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
if no option to change default shell user:
chsh -s $(which zsh)
.profile or .zshrc files
alias python=python3.11
install nginx and firewalls
sudo apt install nginx -y
sudo apt install ufw -y
sudo ufw allow ssh
sudo ufw allow "Nginx full"
sudo ufw allow from 192.168.100.166 to any port 8000
sudo ufw allow 8000
sudo ufw enable
https certify
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d what-sticks.com -d www.what-sticks.com -d what-sticks-health.com -d www.what-sticks-health.com -d api.what-sticks.com -d api.what-sticks-health.com
sudo certbot --nginx -d venturer.dashanddata.com -d dev.what-sticks.com -d dev.api10.what-sticks.com -d pioneer02.dashanddata.com
sudo certbot --nginx -d nhtsa-dash.kineticmetrics.com -d demo.kmdashboard.dashboardsanddatabases.com
# certbot auto renewal
sudo systemctl status certbot.timer
# check certbot auto renewal
sudo certbot renew --dry-run
# starts service on boot
sudo systemctl enable ws08web
number of workers
1- get number of cores
nproc --all
2- calc number of workers based on gunicorn documentation (CoreyS): 2 x num_cores
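The two steps above as a one-liner sketch. Note the gunicorn docs actually suggest (2 x num_cores) + 1; both values are printed here for comparison.

```shell
# Sketch: derive a gunicorn worker count from the core count.
cores=$(nproc --all)
echo "cores=$cores"
echo "workers (2 x cores)     = $((2 * cores))"
echo "workers (2 x cores + 1) = $((2 * cores + 1))"
```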
change computer (host name)
sudo hostnamectl set-hostname newNameHere
## edit this file with all the old names:
sudo nano /etc/hosts
sudo reboot
## This worked even in a VMware VM
Fix "sudo: unable to resolve host" / "Name or service not known"
su - root
cat /etc/hosts
nano /etc/hosts
- add name of machine like this 127.0.0.1 dev0
- source: https://www.globo.tech/learning-center/sudo-unable-to-resolve-host-explained/
Multipathd warnings on server sys.log file
- For NWS in Rochester - to stop Multipathd warnings from clogging up syslog
- Commands to turn this off:
sudo systemctl stop multipathd
sudo systemctl disable multipathd
Install specific version of Python
- note this did not work well as I recall
Install prerequisites:
sudo apt install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev wget
download source code
wget https://www.python.org/ftp/python/3.11.7/Python-3.11.7.tgz
extract:
tar -xf Python-3.11.7.tgz
configure:
cd Python-3.11.7
./configure --enable-optimizations
### Build Python from source and install it
make -j$(nproc)
### make -j$(nproc) <--- takes 20 minutes
sudo make altinstall
### Using make altinstall instead of make install prevents the new Python version from overwriting the system's default python3 binary
### check it worked:
python3.11 --version
### After installation you can delete the Python-3.11.7 directory and the Python-3.11.7.tgz file
VMWare
Reverse Proxy Server:
- CPU: 1
- Memory: 2GB
- Hard Disk: 40GB
VM Options > Configuration Parameters to add:
- disk.EnableUUID: True
Change password
- terminal
passwd
To allow short and simple password must edit file
- terminal:
sudo nano /etc/pam.d/common-password
- To allow a short and simple password, replace this line:
password [success=1 default=ignore] pam_unix.so obscure sha512
with:
password [success=1 default=ignore] pam_unix.so sha512 minlen=4
install latest version of node
Method from nodejs.org
# Download and install nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
# Download and install Node.js:
nvm install 22
# Verify the Node.js version:
node -v # Should print "v22.12.0".
nvm current # Should print "v22.12.0".
# Download and install Yarn:
corepack enable yarn
# Verify Yarn version:
yarn -v
- if using npm to install yarn you might need to add yarn to .profile, or .zshrc or .bashrc
export PATH="$PATH:/home/dashanddata_user/.yarn/bin"
.zshrc file example with Node.js:
- Use this. Here is the bottom of a clean and well organized .zshrc file from 2025-02-11
# 🛠️ Ensure Language Environment
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
# 🛠️ Node.js & NVM Setup
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # Load nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # Load nvm bash_completion
# 🛠️ Ensure Yarn Global Packages Are in PATH
export PATH="$(yarn global bin):$PATH"
# 🛠️ Source .profile (if it exists) for additional settings
if [ -f ~/.profile ]; then
source ~/.profile
fi
# ✅ Debugging Message (Optional)
echo ".zshrc loaded successfully ✅ "
pm2
pm2 cron_restart
cron_restart: "42 6 * * *" # restarts at 6:42 AM every day
- params: minute hour day_of_month month day_of_week
- If you start the app with this parameter, it will start automatically at the specified time, run until it ends, and then restart again at the next scheduled time.
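For reference, a sketch that spells out the five cron fields of "42 6 * * *" (standard cron syntax, not pm2-specific):

```shell
# Sketch: split the cron expression into its five positional fields.
spec="42 6 * * *"
set -f                 # disable globbing so "*" stays literal
set -- $spec           # split on whitespace into $1..$5
set +f
echo "minute=$1 hour=$2 day_of_month=$3 month=$4 day_of_week=$5"
```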
- If you stop the app manually with
pm2 stop NewsNexusGNewsRequestor
or
pm2 stop 16
it will still restart on the cron schedule - you can verify by seeing this:
➜ ~ pm2 stop 16
[PM2] Applying action stopProcessId on app [16](ids: [ '16' ])
[PM2] [NewsNexusGNewRequester](16) ✓
[PM2][WARN] App NewsNexusGNewRequester stopped but CRON RESTART is still UP 42 6 * * *
pm2 Commands
- run only:
pm2 start ecosystem.config.js --only DevelopmentWebApp
- delete (after stopping the app):
pm2 delete DevelopmentWebApp
- clear all logs:
pm2 flush
install pm2
install global
yarn global add pm2
or
npm install -g pm2
install through yarn
yarn global add pm2
- check that
pm2 --version
works; otherwise edit PATH in .bashrc
- get the binary path with
yarn global bin
- in the .bashrc file (via nano ~/.bashrc) add
export PATH="$PATH:/home/your-username/.yarn/bin"
- then
source ~/.bashrc
- check the pm2 version again
run app using pm2
pm2 start app.js
or
pm2 start server.js
- give it a name:
pm2 start server.js --name ExpressApi01
pm2 ecosystem.config.js file that works
- placement of the file doesn't seem to matter, but you'll need to be in the directory where it exists to tell pm2 to use it
- I placed it in /home/nick/
- This example runs a js app and two Python apps. The second Python app has the port in the env, and its run.py reads and uses the PORT from the env.
module.exports = {
  apps: [
    {
      name: "404Manager",
      script: "server.js",
      cwd: "/home/dashanddata_user/applications/ServerManagerBackend/",
      watch: true,
      env: {
        NODE_ENV: "production",
        API_KEY: "node-app-api-key",
        PORT: 8000, // Add the port number here
      },
    },
    {
      name: "DevelopmentWebApp",
      script: "/home/dashanddata_user/environments/my_venv/bin/gunicorn",
      args: "-w 3 -b 0.0.0.0:8001 --timeout 600 run:app",
      cwd: "/home/dashanddata_user/applications/DevelopmentWebApp", // Working directory
      interpreter: "none", // Prevents pm2 from using Node.js for this script
      env: {
        FLASK_CONFIG_TYPE: "dev",
        PATH: "/home/dashanddata_user/environments/my_venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin",
      },
    },
    {
      name: "DevelopmentWebApp02", // Name of your app
      script: "/home/dashanddata_user/environments/my_venv/bin/gunicorn",
      args: "-w 3 --timeout 600 run:app",
      cwd: "/home/dashanddata_user/applications/DevelopmentWebApp02", // Working directory
      interpreter: "none", // Prevents pm2 from using Node.js for this script
      env: {
        FLASK_CONFIG_TYPE: "dev",
        PATH: "/home/dashanddata_user/environments/my_venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin",
        PORT: 8002,
      },
    },
    // Add more apps here as needed
  ],
};
pm2 start on boot, reboot, or whenever the server is off and back on again
- This process assumes you have an ecosystem.config.js file
- run: pm2 startup
- pm2 will display a line of code to copy and paste like this one:
sudo env PATH=$PATH:/usr/bin /home/dashanddata_user/.config/yarn/global/node_modules/pm2/bin/pm2 startup systemd -u dashanddata_user --hp /home/dashanddata_user
- using the shared case (this worked on nn-dev machine):
sudo env \
PM2_HOME=/home/shared/.pm2 \
PATH=/home/shared/.nvm/versions/node/v22.14.0/bin:$PATH \
/home/shared/.nvm/versions/node/v22.14.0/lib/node_modules/pm2/bin/pm2 \
startup systemd -u nick --hp /home/shared
- run: pm2 save
additional steps for the /home/shared/ case (2025-08-16)
- Update the service to point at the shared PM2 home: from terminal do
sudo systemctl edit pm2-nick
and enter:
[Service]
Environment=PM2_HOME=/home/shared/.pm2
Environment=PATH=%h/.nvm/versions/node/v22.14.0/bin:/home/shared/.nvm/versions/node/v22.14.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
- Reload the service:
sudo systemctl daemon-reload
- Might need the following as well:
sudo systemctl enable pm2-nick
sudo systemctl restart pm2-nick
undo pm2 start on boot
pm2 unstartup systemd
- This will remove the script added by step 2 of the pm2 start on boot section.
Run Express App on server
- move files to server
- install yarn
- use nginx .conf
- if
sudo nginx -t
shows a problem, search for duplicates of the directed route by using:
grep -r "server_name expressapi01.dashanddata.com" /etc/nginx/sites-available/
or some variation. If this is the case, you probably need to replace the default file in the /etc/nginx/sites-available/ directory.
- open ports that app is running on
Run Express App behind reverse proxy server
- create nginx file for reverse proxy machine
server {
    listen 80;
    server_name [your_domain.com];
    client_max_body_size 1G;
    location / {
        proxy_pass http://192.168.1.18:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 600s; # Sets the timeout to 600 seconds
    }
    location /static {
        proxy_pass http://192.168.1.18:8000/static;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 600s; # Sets the timeout to 600 seconds
    }
}
- move file to /etc/nginx/sites-available
- create link to /etc/nginx/sites-enabled/ using:
sudo ln -s /etc/nginx/sites-available/[your_domain.com] /etc/nginx/sites-enabled/
- check nginx syntax and that the files are ok:
sudo nginx -t
- reload nginx:
sudo systemctl reload nginx
- open ufw port to server:
sudo ufw allow from [reverse_proxy_server_local_ip] to any port [port_app_is_deployed_on]
Troubleshooting
Node install issue
- After installing node and still getting:
➜ The404Back git:(main) pm2 status
zsh: command not found: pm2
- assuming you have .zshrc
- add to .zshrc
# Nick added:
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
# Source .profile if it exists
if [ -f ~/.profile ]; then
source ~/.profile
fi
# Load NVM if it exists
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && source "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && source "$NVM_DIR/bash_completion"
echo ".zshrc loaded"
- add to .profile
export PATH="$PATH:/home/dashanddata_user/.yarn/bin"
echo ".profile loaded"
Terminal warning: "manpath: can't set the locale; make sure $LC_* and $LANG are correct"
- if running zsh, there is a line
export MANPATH="/usr/local/man:$MANPATH"
in the .zshrc file. Make sure it is not commented; if it is, uncomment it.
Node for all Ubuntu users
- make the shared node folder
sudo mkdir -p /usr/local/nvm
sudo cp -r ~/.nvm/* /usr/local/nvm/
- install node (or copy it from /home/nick/) to /usr/local/nvm
- add the following to the /etc/profile.d/nvm.sh file:
sudo nano /etc/profile.d/nvm.sh
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # Load nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # Enable nvm auto-completion
- any user can then run
source /etc/profile.d/nvm.sh
to activate nvm.
Make files that were created by individual users accessible to all
- create a group:
sudo groupadd developers
- add users to the group (do this for each user):
sudo usermod -aG developers <username>
- change the group ownership of all folders:
sudo chown -R :developers /home/applications/
- to allow all users in the group to create, delete, and modify files:
sudo chmod -R 775 /home/applications/
- users might need to exit and re-enter the terminal
Create Jump User
- create jumpuser
sudo adduser jumpuser
on both machine01 (reverse proxy) and kv09 (destination)
- add a rule in machine01's sshd_config file (/etc/ssh/sshd_config); after adding the instructions below you must:
sudo systemctl restart ssh
Match User jumpuser
ForceCommand ssh jumpuser@192.168.1.72
PermitTTY yes
AllowTcpForwarding yes
- on jumpuser's workstation, create an ssh key like:
ssh-keygen -t ed25519 -C "jumpuser_to_machine01"
- on jumpuser's workstation, copy the public key to machine01 like:
ssh-copy-id jumpuser@82.66.246.192
where the IP address is the public IP that gets routed to port 22 of the reverse proxy server.
- jumpuser should then be able to:
ssh jumpuser@82.66.246.192
OBE Jumpuser
- Machine01 Proxy Server
- DevelopmentServer access
create jumpuser in Machine01:
sudo adduser jumpuser
Make a public key from the user's computer (i.e. the computer that will access DeveloperServer through Machine01):
ssh-keygen -t rsa -b 4096
Add the developer's public key to jumpuser
- using the contents of their id_rsa.pub file
- navigate into Machine01's /home/jumpuser/.ssh/authorized_keys (NOTE: this is the jumpuser dir)
- if /home/jumpuser/.ssh/authorized_keys doesn't exist, just create it.
- paste in the contents of the id_rsa.pub on the first open line.
- this will immediately give the remote computer with the id_rsa.pub file access to Machine01
from Machine01 edit sshd_config
- from terminal (Machine01) enter:
sudo nano /etc/ssh/sshd_config - modify file by adding to the bottom:
Match User jumpuser
ForceCommand ssh dashanddata@192.168.1.136
PermitTTY yes
AllowTcpForwarding yes
- restart ssh:
sudo systemctl restart ssh
Helpful resource: https://www.youtube.com/watch?v=KIeBC7NIzj4
Other stuff that seemed to kind of work - tunneling one terminal:
- both work locally to access machine01
ssh -L 2222:192.168.1.134:22 jumpuser@82.66.246.192
ssh -L 0.0.0.0:2222:192.168.1.136:22 jumpuser@82.66.246.192
but this does not work to access Machine02 from Machine01
ssh dashanddata_user@localhost -p 2222
ssh dashanddata_user@192.168.1.134 -p 2222
END OBE Jumpuser
VMWare Cloning VM
Step 1: copy vmdk and vmx files to new folder with new VM name
- original VM should be turned off
- copy vmdk and vmx files to new folder with new VM name
Step 2: ☝️ [After copy is finished] turn on old VM
- after finished copy turn ON the old VM
- This will help because when you get back to the Virtual Machines page / list you'll see two VMs with the same name
Step 3: register the cloned VM
- From datastore browser navigate to the new VM folder and right click on .vmx file and select "Register VM"
- From the Virtual Machines page / list rename new VM (which will be the one with the same name but turned off)
- Turn on the new VM
- rename the new VM from VMWare
- hostname from inside terminal of new machine
sudo hostnamectl set-hostname <new_hostname>
Step 4 (Ubuntu 24.04 LTS): update IP address
Regenerate the machine ID (this is the key part)
sudo rm -f /etc/machine-id
sudo rm -f /var/lib/dbus/machine-id
sudo systemd-machine-id-setup
sudo systemctl restart systemd-networkd
sudo reboot
Step 4 (Ubuntu 20.04 LTS): apply new static IP address
- turn on new vm
- if the IP address is not different, create a new static IP address
sudo nano /etc/netplan/00-installer-config.yaml, enter:
network:
  ethernets:
    ens160:
      dhcp4: false
      addresses:
        - 192.168.100.<unused_ip>/24
      gateway4: 192.168.100.1
      nameservers:
        addresses:
          - 8.8.8.8
  version: 2
Step 5: shutdown and restart server
- on boot up the new / used IP address should be applied to this VM
Running Python Flask App on Ubuntu / PM2
- as of 2025-08-11
the pm2 ecosystem.config.js file
- this one started working as of 2025-08-11
{
  name: "Samurai02APIRag",
  script: "/home/shared/applications/Samurai02APIRag/run.py",
  interpreter: "/home/shared/environments/samurai02/bin/python",
  cwd: "/home/shared/applications/Samurai02APIRag",
  env: {
    FLASK_CONFIG_TYPE: "prod",
  },
},
OBE?
- for some reason this stopped working 2025-08-11 on Samurai02
- I think this worked on other servers
{
  name: "Samurai02APIRag",
  script: "/home/shared/environments/samurai02/bin/gunicorn",
  args: "-w 3 -b 0.0.0.0:8003 --timeout 600 run:app",
  cwd: "/home/shared/applications/Samurai02APIRag",
  interpreter: "none", // prevents pm2 from using Node.js for this script
  autorestart: true, // restart on crash
  watch: false, // no need to watch files
  env: {
    FLASK_CONFIG_TYPE: "prod",
    PATH: "/home/shared/environments/samurai02/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin",
  },
},
service file
- /etc/systemd/system/Samurai02APIRag.service
[Unit]
Description=Gunicorn instance to serve Samurai02 API Production on Samurai02.
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/home/nick/applications/Samurai02APIRag
Environment="PATH=/home/nick/environments/samurai02/bin"
Environment="FLASK_CONFIG_TYPE=prod"
ExecStart=/home/nick/environments/samurai02/bin/gunicorn -w 3 -b 0.0.0.0:8003 --timeout 600 run:app
[Install]
WantedBy=multi-user.target
run.py
import os
from dotenv import load_dotenv
load_dotenv()
from app_package import create_app
app = create_app()
if __name__ == '__main__':
    port = int(os.environ.get("FLASK_RUN_PORT"))
    host = os.environ.get("FLASK_RUN_HOST")
    app.run(host=host, port=port)
set up .env
- see .env.example
- the key variables for running the Flask portion of the app are:
FLASK_RUN_HOST="0.0.0.0"
FLASK_RUN_PORT=8003
set up .flaskenv
The server has not been responding well to placing variables in the .flaskenv file.
- these are the only ones that seem to matter, and even then I'm not sure.
FLASK_APP=run
FLASK_DEBUG=1
Neosmay Server Device
- press esc only once or twice as soon as you press the power button (i.e. immediately)
- if you get to a screen with
grub>
you pressed too many times
- you should get an option that has "Firmware" in it