Hacker Public Radio
Hacker Public Radio is a podcast that releases shows every weekday, Monday through Friday. Our shows are produced by the community (you) and can be on any topic that is of interest to hackers and hobbyists.
860 episodes
All episodes
This show has been flagged as Clean by the host.

The PineTab2 is PINE64's successor to the original PineTab Linux tablet computer, featuring a faster processor and better availability. The tablet is available in two configurations: 4GB of RAM with 64GB of internal storage, or 8GB of RAM with 128GB of internal storage. The tablet ships with a detachable keyboard that doubles as a protective cover. The tablet is designed around the Rockchip RK3566 processor, which features 4 energy-efficient Cortex-A55 64-bit ARM cores and enjoys good mainline Linux support. A similarly packaged RISC-V tablet is the PineTab-V.

Pre-orders started on the 13th of April 2023, with pricing starting at USD 159 for the 4GB/64GB version and USD 209 for the 8GB/128GB version. The PineTab2 began shipping on June 2, 2023.

Taken from https://wiki.pine64.org/wiki/PineTab2

Provide feedback on this episode.
This show has been flagged as Clean by the host.

A collection of tips and tricks that operat0r uses to make a standard Android phone more custom. The secret block extension is "11335506" - tell 'em Ken sent ya.

Links:
- UserLAnd - Linux on Android: an open-source app which allows you to run several Linux distributions like Ubuntu, Debian, and Kali.
- Widgify - DIY Live Wallpaper: a well-designed beautification tool offering a wide variety of screen widgets to build a personalized home screen.
- Nova Launcher Prime: a powerful, customizable, and versatile home screen replacement.
- Firefox Nightly for Developers: Nightly is built for testers. Help us make Firefox the best browser it can be.
- Expanded extension support in Firefox for Android Nightly
- How to use collections on addons.mozilla.org
- SponsorBlock: an open-source crowdsourced browser extension and open API for skipping sponsor segments in YouTube videos.
- WireGuard (VPN): the official app for managing WireGuard VPN tunnels.
- DNS66: a DNS-based host blocker for Android. (Requires root)
- Hacker's Keyboard: four- or five-row soft keyboard.
- TidyPanel Notification Cleaner: tidy up your notification panel with a simple, minimal, beautiful and intuitive UI.

Provide feedback on this episode.
This show has been flagged as Explicit by the host.

NYE 2025 4

Links:
- Jimmy Carter and the Governor of Texas: https://www.cbsnews.com/texas/news/texas-governor-greg-abbott-sends-condolences-to-rosalynn-carter-who-died-in-2023-following-jimmy-carters-death/
- Finger Cot: https://en.wikipedia.org/wiki/Finger_cot
- Filk Music: https://en.wikipedia.org/wiki/Filk_music
- Moss Bliss: https://mordewis.bandcamp.com/
- Georgia Filk Convention: https://www.gafilk.org/
- Liquid Callus: https://www.amazon.com/Rock-Tips-Liquid-Formula-Stringed-Instruments/dp/B008MY3VU2
- Enya NEXG Guitar: https://www.enya-music.com/collections/guitar
- Guitar Gloves: https://www.amazon.com/guitar-glove/s?k=guitar+glove
- Soju: https://en.wikipedia.org/wiki/Soju
- Bird Dog Whiskey: https://birddogwhiskey.com/
- Delta 8 vs Delta 9: https://jcannabisresearch.biomedcentral.com/articles/10.1186/s42238-021-00115-8
- Bodhi Linux: https://www.bodhilinux.com/
- Internet Archive: https://archive.org/
- Trump buys Greenland: https://www.foxnews.com/politics/make-greenland-great-again-trumps-house-gop-allies-unveil-bill-authorize-countrys-purchase
- Pierre Poilievre: https://en.wikipedia.org/wiki/Pierre_Poilievre
- Chrystia Freeland: https://en.wikipedia.org/wiki/Chrystia_Freeland
- Justin Trudeau: https://en.wikipedia.org/wiki/Justin_Trudeau
- New Democratic Party: https://en.wikipedia.org/wiki/New_Democratic_Party
- Trump Bankruptcies: https://www.abi.org/feed-item/examining-donald-trump%E2%80%99s-chapter-11-bankruptcies
- Elmer's Glue: https://www.elmers.com/
- Pentagon Federal Credit Union: https://www.penfed.org/
- US Draft: https://en.wikipedia.org/wiki/Conscription_in_the_United_States
- Vienna Sausages: https://en.wikipedia.org/wiki/Vienna_sausage
- Vegan vs Vegetarian: https://www.healthline.com/nutrition/vegan-vs-vegetarian
- Beyond Meat sausage: https://www.beyondmeat.com/en-US/products/beyond-sausage
- Raspberry Pi 5: https://www.raspberrypi.com/products/raspberry-pi-5/
- MIT OpenCourseWare: https://ocw.mit.edu/
- Ham Radio License: http://www.arrl.org/getting-licensed
- '89 Corolla: https://en.wikipedia.org/wiki/Toyota_Corolla_(E90)#North_America
- Autism: https://en.wikipedia.org/wiki/Autism
- Asperger syndrome: https://en.wikipedia.org/wiki/Asperger_syndrome
- Narcissistic personality disorder: https://www.helpguide.org/mental-health/personality-disorders/narcissistic-personality-disorder
- Thermal Paste: https://en.wikipedia.org/wiki/Thermal_paste
- 7-Eleven: https://www.7-eleven.com/
- MIT: https://www.mit.edu/
- Wild Pie: https://www.wildpie.com/
- Follow Your Heart Cheese: https://followyourheart.com/product_category/dairy-free-cheese/
- MorningStar Farms: https://www.morningstarfarms.com/en_US/products/veggie-burgers.html
- Boca Burger: https://www.kraftheinz.com/boca
- Nip/Tuck: https://www.imdb.com/title/tt0361217/
- American Cheese: https://en.wikipedia.org/wiki/American_cheese
- Boxing Day: https://en.wikipedia.org/wiki/Boxing_Day
- Mumble: https://www.mumble.info/
- VPN: https://en.wikipedia.org/wiki/Virtual_private_network
- pfSense: https://www.pfsense.org/
- OpenWrt: https://openwrt.org/
- 802.11ac Wi-Fi protocol: https://en.wikipedia.org/wiki/IEEE_802.11ac-2013
- OPNsense: https://opnsense.org/
- Linux: https://www.linux.org/
- Windows 7: https://en.wikipedia.org/wiki/Windows_7
- VAX system: https://en.wikipedia.org/wiki/VAX
- Novell: https://en.wikipedia.org/wiki/Novell
- PDP-11: https://en.wikipedia.org/wiki/PDP-11
- Lotus Notes: https://en.wikipedia.org/wiki/Lotus_Software
- Red Hat Linux: https://www.redhat.com/en
- Debian Linux: https://www.debian.org/
- Ubuntu Linux: https://ubuntu.com/
- Linux Mint: https://linuxmint.com/
- openSUSE: https://www.opensuse.org/

Provide feedback on this episode.
This show has been flagged as Explicit by the host.

Interview with Andrew, one of the founders of the Redot Engine. Redot Engine is a fork of the famous free and open source project Godot Engine. NOTE: This is my first time interviewing someone for a podcast, so feel free to point out any improvements and critiques I can learn from.

After an introduction covering the reasons the project was created, we focus on other engines, on the video game console situation, on a FOSS licensing debate, on Redot's future, and on C language interoperability.

Official links:
- Redot engine website

Projects and links we talked about:
- Redot: why we forked
- Defold engine
- Redot proposal for homebrew console support
- Sonic Colors Ultimate
- UPBGE: fork of the Blender game engine
- GPL vs LGPL license
- ABI (Application Binary Interface)
- Proposal for a defer operator in C; example of usage in Go

Redot slogan: "Your game, your rules"

Provide feedback on this episode.
This show has been flagged as Clean by the host.

First, I create a Git repository some place on the server. This is the Git repo that's going to be populated with your content, but it doesn't have to be in a world-viewable location on your server. Instead, you can place this anywhere, and then use a Git hook or a cron job to copy files from it to a world-viewable directory. I don't cover that here. I refer to this location as the staging directory.

Next, create a bare repository on your server. In its hooks directory, create a shell script called post-receive:

    #!/usr/bin/bash
    #
    while read oldrev newrev refname
    do
        BR=`git rev-parse --symbolic --abbrev-ref $refname`
        if [ "$BR" == "master" ]; then
            WEB_DIR="/my/staging/dir"
            export GIT_DIR="$WEB_DIR/.git"
            pushd $WEB_DIR > /dev/null
            git pull
            popd > /dev/null
        fi
    done

Now when you push to your bare repository, you trigger the post-receive script, which in turn triggers a git pull in your staging directory. Once your staging directory contains the content you want to distribute, you can copy it to live directories, or you could make your staging directory live (remember to exclude the .git directory though), or whatever you want.

For gopher, I create a file listing by date using a shell script:

    #!/usr/bin/bash

    SED=/usr/bin/sed
    DIR_BASE=/my/live/dir
    DIR_LIVE=blog
    DIR_STAGING=staging
    DATE=${DATE:-`date --rfc-3339=date`}

    for POST in `find "$DIR_BASE"/"$DIR_STAGING" \
        -type f -name "item.md" -exec grep -Hl "$DATE" {} \;`; do
        POSTDIR=`dirname "$POST"`
        cp "$POST" "$DIR_BASE"/"$DIR_LIVE"/`basename $POSTDIR`.txt
        echo -e 0Latest'\t'../"$DIR_LIVE"/`basename $POSTDIR`.txt > /tmp/updater.tmp
        echo -e 0"$DATE" `basename $POSTDIR`'\t'../"$DIR_LIVE"/`basename $POSTDIR`.txt \
            >> /tmp/updater.tmp
        "${SED}" -i "/0Latest/ r /tmp/updater.tmp" "$DIR_BASE"/date/gophermap
        "${SED}" -i '0,/0Latest/{/0Latest/d;}' "$DIR_BASE"/date/gophermap
        /usr/bin/rm /tmp/updater.tmp
    done

Provide feedback on this episode.
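To make the moving parts above concrete, here is a minimal sketch of the server-side setup; the repository path, host name, and remote name are illustrative assumptions, not details from the episode:

    # On the server: create the bare repository and install the hook
    # (paths assumed for illustration)
    git init --bare /srv/git/site.git
    cp post-receive /srv/git/site.git/hooks/post-receive   # the script shown above
    chmod +x /srv/git/site.git/hooks/post-receive

    # Still on the server: create the staging clone that the hook will update
    git clone /srv/git/site.git /my/staging/dir

    # On your workstation: point a remote at the server and push to trigger the hook
    git remote add server user@example.com:/srv/git/site.git
    git push server master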
This show has been flagged as Clean by the host.

Transferring Large Data Sets

Very large data sets present their own problems. Not everyone has directories with hundreds of gigabytes of project files, but I do, and I assume I'm not the only one. For instance, I have a directory with over 700 radio shows; many of these directories also have a podcast, and they also have pictures and text files. Doing a properties check on the directory I see 450 gigabytes of data.

When I started envisioning Libre Indie Archive I wanted to move the directories into archival storage using optical drives. My first attempt at this didn't work because I lost metadata when I wrote the optical discs, since burned discs are read-only. After further work and study I learned that tar files can preserve metadata if they are created and extracted as root. In fact, if you are running tar as root, preserving file ownership and permissions is the default. So this means that optical discs are an option if you write tar archives onto them.

I have better success rates with 25 GB Blu-ray Discs than with the 50 GB discs. So, if your directory breaks up into projects that fit on 25 GB discs, that's great. My data did not do this easily, but tar does have an option to write a data set to multiple tar files, each with a maximum size, labelling them -0, -1, etc. When using this multi-volume feature you cannot use compression, so you will get tar files, not tar.gz files.

It's better to break the file sets up into reasonable sizes, so I decided to divide the shows up alphabetically by title: all the shows starting with the letter a would be one data set, and then down the alphabet, one letter at a time. Most of the letters would result in a single tar file labeled -0 that would fit on the 25 GB disc. Many letters, however, took two or even three tar files that would have to be written to different discs and then concatenated on the primary system before they are extracted to the correct location in primaryfiles. There is a companion program to tar, called tarcat, that I used to combine 2 or 3 tar files split by length into a single tar file that could be extracted. I ran engrampa as root to extract the files.

So, I used a tar command on the working system where my Something Blue radio shows are stored. Then I used K3b to burn these files onto a 25 GB Blu-ray Disc, carefully labeling the discs and writing a text file that I used to keep up with which files I had already copied to disc. Then on the Libre Indie Archive primary system I copied the file or files for that data set from the Blu-ray to the boot drive. Then I would use tarcat to combine the files if there was more than one file for that data set. And finally I would extract the files to primaryfiles by running engrampa as root.

Now I'm going to go into details on each of these steps.

First make sure that the Libre Indie Archive program, prep.sh, is in your home directory on your workstation. Then from the data directory to be archived, in my case the something_blue directory, run prep.sh like this:

    ~/prep.sh

This will create a file named IA_Origin.txt that lists the date, the computer and directory being archived, and the users and userids on that system. All very helpful information to have if at some time in the future you need to do a restore.

Next create a tar data set for each letter of the alphabet. (You may want to divide your data set in a different way.)
Open a terminal in the same directory as the data directory, my something_blue directory, so that ls displays something_blue (your data directory). I keep the Something Blue shows and podcasts in subdirectories in the something_blue directory. Here's the tar command.

Example a:

    sudo tar -cv --tape-length=20000000 \
        --file=somethingblue-a-{0..50}.tar \
        /home/larry/delta/something_blue/a*

This is for the letter a, so the --file parameter includes the letter a. The numbers 0..50 in the curly brackets are the sequence numbers for the files. I only had one file for the letter a, somethingblue-a-0.tar. The last parameter is the source for the tar files, in this case /home/larry/delta/something_blue/a*, which matches all of the files and directories in the something_blue directory that start with the letter a.

You may want to change the --tape-length parameter. As listed it stores up to 19.1 GB. The maximum capacity of a 25 GB Blu-ray is 23.3 GB for data storage.

Example b:

For the letter b, I ended up with three tar files:

    somethingblue-b-0.tar
    somethingblue-b-1.tar
    somethingblue-b-2.tar

I will use these files in the example below, using tarcat to combine the files.

I use K3b to burn Blu-ray data discs. Besides installing K3b you have to install some other programs, and then there is a particular setup that needs to be done, including selecting cdrecord and no multisession. Here's an excellent article that goes step by step through the installation and setup:

How to burn Blu-ray discs on Ubuntu and derivatives using K3b?
https://en.ubunlog.com/how-to-burn-blu-ray-discs-on-ubuntu-and-derivatives-using-k3b/

I also always check Verify data, and I use the Linux/Unix file system, not Windows, which will rename your files if the filenames are too long.

I installed a Blu-ray reader into the primary system and I used thunar to copy the files from the Blu-ray Disc to the boot drive. In the primaryfiles directory I make a subdirectory, something_blue, to hold the archived shows.

If there is only one file, as in example a above, you can skip the concatenation step. If there is more than one file, as in example b above, you use tarcat to concatenate these files into one tar file. You have to do this: if you try to extract from just one of the numbered files when there is more than one, you will get an error. So if I try to extract from somethingblue-b-0.tar and I get an error, it doesn't mean that there's anything wrong with that file. It just has to be concatenated with the other b files before it can be extracted.

There is a companion program to tar called tarcat that should be used to concatenate the tar files. Here's the command I used for example b, above:

    tarcat somethingblue-b-0.tar somethingblue-b-1.tar somethingblue-b-2.tar > sb-b.tar

This will concatenate the three smaller tar files into one bigger tar file named sb-b.tar.

In order to preserve the metadata you have to extract the files as root. In order to make it easier to select the files to be extracted and where to store them, I use the GUI archive manager, engrampa. To run engrampa as root, open a terminal with CTRL-ALT-t and use this command:

    sudo -H engrampa

Click Open and select the tar file to extract. Then follow the path until you are in the something_blue directory and you are seeing the folders and files you want to extract. Type Ctrl-a to select them all. (Instead of the something_blue directory you will go to your data directory.) Then click Extract at the top of the window. Open the directory where you want the files to go.
In my case, primaryfiles/something_blue. Then click Extract again in the lower right.

After the files are extracted, go to your data directory in primaryfiles and check that the directories and files are where you expect them to be. You can also open a terminal in that directory and type ls -l to review the metadata.

When dealing with data chunks sized 20 GB or more, each one of these steps takes time. The reason I like using an optical disc backup to transfer the files from the working system to Libre Indie Archive is because it gives me an easy-to-store backup that is not on a spinning drive and that cannot be overwritten. Still, optical disc storage is not perfect either. It's just another belt to go with your suspenders.

Another way to transfer directories into the primaryfiles directory is with ssh over the network. This is not as safe as using optical discs and it also does not provide the extra snapshot backup. It also takes a long time, but it is not as labor intensive. After I spend some more time thinking about this and testing, I will do a podcast about transferring large data sets with ssh.

Although I am transferring large data sets to move them into archival storage using Libre Indie Archive, there are many other situations where you might want to move a large data set while preserving the metadata. So what I have written about tar files, optical discs, and running thunar and engrampa as root is generally applicable.

As always, comments are appreciated. You can comment on Hacker Public Radio or on Mastodon. Visit my blog at home.gamerplus.org where I will post the show notes and embed the Mastodon thread for comments about this podcast. Thanks.

Provide feedback on this episode.
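For the ssh route mentioned in these notes, a common idiom is to stream tar over the network. This is only a hedged sketch, not the author's tested procedure; the host name, destination path, and passwordless sudo on the archive machine are assumptions:

    # Quick sanity check of a combined archive before extracting anything
    tar -tvf sb-b.tar | head

    # Stream a directory tree to the archive machine, preserving permissions (-p);
    # running the receiving tar as root also preserves file ownership
    tar -cpf - something_blue | \
        ssh archive.example.com "sudo tar -xpf - -C /path/to/primaryfiles"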
This show has been flagged as Clean by the host.

Civilization IV added some new victory types, and I decided to illustrate one of them, the Culture victory, by going through an example of achieving it.

Links:
- https://civilization.fandom.com/wiki/Speed_(Civ4)
- https://civilization.fandom.com/wiki/Cottage_(Civ4)
- https://www.palain.com/gaming/civilization-iv/playing-civilization-iv-part-7/

Provide feedback on this episode.
This show has been flagged as Clean by the host.

This episode gives a mini-review of the Yamiry YR01 Fingerprint Smart Knob. This keyless entry system replaces your door handle and latch with a handle-and-latch system that offers multiple ways to unlock your door without a key: fingerprint, PIN codes, Bluetooth fobs, your phone's Bluetooth, or your phone's wifi.

References:
- Yamiry Fingerprint Smart Knob - Keyless Entry Digital Lock for Front Door: https://www.amazon.com/Smart-Door-Handle-Lock-Keypad/dp/B0C66NCTXX
- NICE Digi: https://nice-digi.com/

Provide feedback on this episode.
This show has been flagged as Clean by the host.

Review of the book "The Arduino Controlled by eForth" by Dr. Chen-Hanson Ting, published in 2018.

The late Dr. Ting was a chemist turned engineer. He earned a PhD in chemistry at the University of Chicago in 1965, taught chemistry in Taiwan until 1975, then became a firmware engineer until his retirement in 2000. He was a Forth advocate for more than 50 years, especially for a Forth called eForth that has been ported to many devices, including the Microchip ATmega328 found on the Arduino Uno board.

I found this book while searching for Forths for the Arduino Uno boards. The source code and documentation for eForth are available in a lot of places; I will put a few links in the show notes. I believe I mentioned this Forth in an earlier HPR episode where I talked about choosing a Forth.

Links:
- Forth Interest Group: https://forth.org
- https://wiki.forth-ev.de
- https://chochain.github.io (pdf)
- Jones Forth port: https://ratfactor.com/nasmjf

When I first encountered Dr. Ting's Forth for the Arduino I was interested for one reason: it was easily assembled using avra, an open source port of the Atmel assembler. This was nice because using Atmel's (now Microchip's) assemblers on Linux required installing Wine, and installing Wine, in the past, on 64-bit Slackware meant installing 32-bit libraries to have a multilib Slackware. (That's not an issue now.) Assembling the Forth code in avra is quick; the result is only a little over 5k in size.

After playing with eForth for a while I became frustrated because I could create new words in the dictionary and the examples ran fine, but nothing persisted across reboots. So I dropped eForth and ended up using FlashForth, which is a great, robust, full-featured Forth. I still recommend FlashForth if you're starting out with Forth on a microcontroller; it's solid software with good documentation.

At the end of last year I thought it would be fun to write my own Forth, and after looking into doing that I revisited 328eForth and thought: no, how about I fix the problems with eForth on the Arduino. So I dug out the book and began reading.

The book has 6 parts. Part 1 is Dr. Ting's musings on how he ended up creating 328eForth. Part 2 explains installing eForth. Part 3 begins exercising the Arduino board using Forth in the interactive interpreter. Part 4 explains 328eForth's implementation and design decisions. Part 5 is the full commented source code of 328eForth and, this is the best part, Dr. Ting's explanation of what is going on in the code, broken down by functional sections. A gold mine of information! Part 6 is his conclusions and examples to learn Forth.

This is a great free software project. Nothing is hidden. It is accessible to anybody who would take the time to read and dig into the code. It makes assembly language much less dark and foreboding.

I'll finish by reading a couple of paragraphs from Dr. Ting's book. Dr. Ting concludes:

"People using computers are trained to be slaves. You are taught to push certain buttons, and you are taught to push certain keys. Then, you get employed to push buttons and keys to work as slaves. Computers, programming languages, and operating systems are made complicated to enslave people. Computers are not complicated beyond comprehension. Programming languages and operating systems do not have to be complicated. If you get a sharp knife, you can be the master of your destination. 328eforth is a sharp knife. Go use it."

The hacker ethos.
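As a rough sketch of the avra workflow mentioned above: the source filename, programmer type, and flashing details here are assumptions for illustration, not details from the book.

    # Assemble the eForth source; avra produces an Intel hex file next to the .asm
    # (filename assumed)
    avra 328eforth.asm

    # Flash the hex image to the ATmega328 with avrdude; if the image replaces the
    # stock serial bootloader, a hardware ISP programmer (here a USBasp) is needed
    # rather than the Arduino's normal serial upload path
    avrdude -c usbasp -p m328p -U flash:w:328eforth.hex:i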
The next podcast I produce will cover installing eForth on an Arduino board and solving that pesky loss-of-words-between-boots problem.

Provide feedback on this episode.
This show has been flagged as Explicit by the host.

OpenWebUI notes...

Links:
- Open WebUI installer: https://github.com/freeload101/SCRIPTS/blob/master/Bash/OpenWebUI_Fast.bash
- Older Professor Synapse prompt you can use: https://raw.githubusercontent.com/freeload101/SCRIPTS/refs/heads/master/Prof%20Synapse%20Old.txt
- Fabric prompts you can import into Open WebUI (https://github.com/danielmiessler/fabric/tree/main/patterns): https://github.com/freeload101/SCRIPTS/blob/master/MISC/Fabric_Prompts_Open_WebUI_OpenWebUI_20241112.json
- Example Windows AT startup task to make it start and not die on boot: https://github.com/freeload101/SCRIPTS/blob/master/MISC/StartKokoro.xml
- Open WebUI RAG fail sauce...: https://youtu.be/CfnLrTcnPtY

Open registration.

Model list / order:

    NAME                                                 ID            SIZE    MODIFIED
    hf.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF:Q4_K_S  017d7a278e7e  4.7 GB  2 days ago
    qwen2.5:32b                                          9f13ba1299af  19 GB   3 days ago
    deepsex:latest                                       c83a52741a8a  20 GB   3 days ago
    HammerAI/openhermes-2.5-mistral:latest               d98003b83e17  4.4 GB  2 weeks ago
    Sweaterdog/Andy-3.5:latest                           d3d9dc04b65a  4.7 GB  2 weeks ago
    nomic-embed-text:latest                              0a109f422b47  274 MB  2 weeks ago
    deepseek-r1:32b                                      38056bbcbb2d  19 GB   4 weeks ago
    psyfighter2:latest                                   c1b3d5e5be73  7.9 GB  2 months ago
    CognitiveComputations/dolphin-llama3.1:latest        ed9503dedda9  4.7 GB  2 months ago

Disable Arena models.

Documents (WIP): RAG is not good. Discord notes: https://discord.com/channels/1170866489302188073/1340112218808909875

Abhi Chaturvedi: @(Operat0r) try this. To reduce latency and improve accuracy, modify the .env file:

    # Enable RAG
    ENABLE_RAG=true
    # Use hybrid mode (retrieval + reranking for better context)
    RAG_MODE=hybrid
    # Reduce the number of retrieved documents (default: 5)
    RETRIEVAL_TOP_K=3
    # Use a fast embedding model (instead of OpenAI's Ada-002); faster and lightweight
    EMBEDDING_MODEL=all-MiniLM-L6-v2
    # Optimize the vector database
    VECTOR_DB_TYPE=chroma
    CHROMA_DB_IMPL=hnsw   # faster search
    CHROMA_DB_PATH=/root/open-webui/backend/data/vector_db
    # Optimize backend performance: increase Uvicorn worker count (improves concurrency)
    UVICORN_WORKERS=4
    # Increase FastAPI request timeout (prevents RAG failures)
    FASTAPI_TIMEOUT=60
    # Optimize database connection pool (for better query performance)
    SQLALCHEMY_POOL_SIZE=10

JamesK: So probably the first thing to do is increase the top K value in admin -> settings -> documents, or you could try the new "full context mode" for RAG documents. You may also need to increase the context size on the model, but it will make it slower, so you probably don't want to do that unless you start seeing the "truncating input" warnings.

JamesK: Ah, I see. The RAG didn't work great for you in this prompt. There are three hits and the first two are duplicates, so there isn't much data for the model to work with.

JamesK: I see a message warning that you are using the default 2048 context length, but not the message saying you've hit that limit. From my logs the warning looks like:

    level=WARN source=runner.go:126 msg="truncating input prompt" limit=32768 prompt=33434 numKeep=5

JamesK: If you set the env var OLLAMA_DEBUG=1 before running ollama serve it will dump the full prompt being sent to the model; that should let you confirm what the RAG has put in the prompt.

JamesK: Watch the console output from ollama and check for warnings about overflowing the context. If you have the default 2k context you may need to increase it until the warnings go away.

JamesK: But also, if you're using the default RAG, it chunks the input into small fragments, then matches the fragments against your prompt and only inserts a few fragments into the context, not the entire document. So it's easily possible for the information you want to not be present.

Auto updates:

    # /etc/crontab entries need a user field (here root) between schedule and command
    echo '0,12 */4 * * * root docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui' >> /etc/crontab

Search red note for API keys.

Google PSE setup:
1. Go to Google Developers, use Programmable Search Engine, and log on or create an account.
2. Go to the control panel and click the Add button.
3. Enter a search engine name, set the other properties to suit your needs, verify you're not a robot, and click the Create button.
4. Generate the API key and get the Search engine ID. (Available after the engine is created.)
5. With the API key and Search engine ID, open the Open WebUI Admin panel, click the Settings tab, and then click Web Search.
6. Enable Web search and set Web Search Engine to google_pse.
7. Fill Google PSE API Key with the API key and Google PSE Engine Id (from step 4).
8. Click Save.

Note: You have to enable Web search in the prompt field, using the plus (+) button. Search the web ;-)

Kokoro / Open WebUI:
- https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
- https://github.com/remsky/Kokoro-FastAPI?tab=readme-ov-file

    apt update
    apt upgrade
    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
        sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && \
        curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
        sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
        sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt-get update
    sudo apt-get install -y nvidia-container-toolkit
    apt install docker.io -y
    docker run --gpus all -p 8880:8880 ghcr.io/remsky/kokoro-fastapi-gpu:v0.2.2

API base URL: http://localhost:8880/v1, voice: af_bella

Import fabric prompts: https://raw.githubusercontent.com/freeload101/Python/46317dee34ebb83b01c800ce70b0506352ae2f3c/Fabric_Prompts_Open_WebUI_OpenWebUI.py

Provide feedback on this episode.
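A hedged aside on the Kokoro container above: Kokoro-FastAPI advertises an OpenAI-compatible speech endpoint, so once the container is running, a test call might look like the following. The request body fields follow the OpenAI audio API shape and the output filename is an assumption; only the base URL and voice name come from the notes.

    # Ask the local Kokoro server to synthesize speech and save the audio
    curl -s http://localhost:8880/v1/audio/speech \
        -H "Content-Type: application/json" \
        -d '{"model": "kokoro", "input": "Hello from Hacker Public Radio", "voice": "af_bella"}' \
        -o speech.mp3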