Pakistan's First Oracle Blog

Blog by Fahd Mirza Chughtai

Easiest RAG Tutorial for Beginners on Free Google Colab

Wed, 2024-09-18 16:29

This video is a step-by-step tutorial on learning RAG the easy way with LlamaIndex on your own data in a free Google Colab notebook.



Code:



!pip install llama-index faiss-cpu pandas python-dotenv openai transformers numpy
!pip install llama-index-agent-openai llama-index-cli llama-index-core llama-index-embeddings-openai
!pip install llama-index-llms-openai llama-index-program-openai llama-index-question-gen-openai llama-index-readers-file
!pip install llama-index-readers-llama-parse llama-index-vector-stores-faiss llama-parse llama-index-indices-managed-llama-cloud

from llama_index.core.readers import SimpleDirectoryReader
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.readers.file import PagedCSVReader
from llama_index.vector_stores.faiss import FaissVectorStore
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core import VectorStoreIndex
import faiss
import os
import pandas as pd

from google.colab import userdata
os.environ['OPENAI_API_KEY']=userdata.get('OPENAI_API_KEY')

EMBED_DIMENSION=512
Settings.llm = OpenAI(model="gpt-3.5-turbo")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small", dimensions=EMBED_DIMENSION)

file_path = '/content/minifinance.csv'
data = pd.read_csv(file_path)
data.head()

faiss_index = faiss.IndexFlatL2(EMBED_DIMENSION)
vector_store = FaissVectorStore(faiss_index=faiss_index)

csv_reader = PagedCSVReader()

reader = SimpleDirectoryReader(
    input_files=[file_path],
    file_extractor= {".csv": csv_reader}
    )

docs = reader.load_data()

print(docs[0].text)

pipeline = IngestionPipeline(
    vector_store=vector_store,
    documents=docs
)

nodes = pipeline.run()

vector_store_index = VectorStoreIndex(nodes)
query_engine = vector_store_index.as_query_engine(similarity_top_k=2)

response = query_engine.query("which products are sold in Canada?")
response.response
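
To see which rows were retrieved to ground the answer, you can also inspect the source nodes attached to the response object. A small optional sketch, assuming the query_engine and response from above:

# Optional: list the retrieved chunks and their similarity scores
for node_with_score in response.source_nodes:
    print(f"{node_with_score.score}: {node_with_score.node.get_content()[:200]}")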
Categories: DBA Blogs

Live Face Swapping in Call with AI - Easy Installation on Windows for Free - Rope Pearl Live

Tue, 2024-09-03 04:07

This video shows how to locally install Rope Pearl Live on Windows for live face swapping with a webcam and for streaming the swapped video through a virtual camera.



Code:

Clone the repository (or download and extract its zip file):

git clone https://github.com/argenspin/Rope-Live.git
cd Rope-Live


conda create -n Rope python=3.10.13
conda activate Rope
conda install cuda-runtime=11.8.0 cudnn=8.9.2.26 gputil=1.4.0 -c pytorch -c nvidia -c conda-forge
python -m pip install -r requirements.txt

Download the required models

To get access to all the features of Rope, you need to download the models from here. You need all of the files.

Place the downloaded model files in the Rope-Live/models folder

Set up OBS Virtual Camera

Start OBS.
Click "Start Virtual Camera" (bottom right), then "Stop Virtual Camera".
Close OBS.

Start the application by running Rope.bat file
Categories: DBA Blogs

How To Fine-Tune AI Model with Online Direct Preference Optimization Locally on Own Dataset

Mon, 2024-09-02 17:08

This video is a step-by-step tutorial on using Online DPO to fine-tune a model locally on a custom dataset. Online DPO is a new alignment method from DeepMind that boosts LLM performance.



Code:

conda create -n dpo python=3.11 -y && conda activate dpo

pip install torch
pip install datasets dataclasses
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/accelerate

git clone https://github.com/huggingface/trl.git && cd trl

git checkout d57e4b726561e5ae58fdc335f34029052944a4a3

pip install -e .

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

from datasets import Dataset
from trl import OnlineDPOConfig, OnlineDPOTrainer
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)
NUM_DUMMY_SAMPLES = 100

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
# The model to optimise
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
# The reference model to calculate the KL divergence against
ref_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
# The model to score completions with. In practice, you will need a reward model.
reward_model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct", num_labels=1)

train_dataset = Dataset.from_dict(
    {"prompt": ["Q: Hi how are you? A:"] * NUM_DUMMY_SAMPLES})
eval_dataset = Dataset.from_dict(
    {"prompt": ["Q: What do you like to eat A:"] * NUM_DUMMY_SAMPLES})

args = OnlineDPOConfig(output_dir="online-dpo-model")
trainer = OnlineDPOTrainer(
    model=model,
    ref_model=ref_model,
    reward_model=reward_model,
    args=args,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
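
Once training completes, you will usually want to persist the tuned model so it can be reloaded later for inference. A minimal sketch, reusing the output directory set in OnlineDPOConfig above (the prompt is only an illustrative example):

# Save the fine-tuned model and tokenizer
trainer.save_model("online-dpo-model")
tokenizer.save_pretrained("online-dpo-model")

# Quick sanity check: reload the tuned model and generate a reply
tuned_model = AutoModelForCausalLM.from_pretrained("online-dpo-model")
inputs = tokenizer("Q: Hi how are you? A:", return_tensors="pt")
output_ids = tuned_model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))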
Categories: DBA Blogs

Install MiniG Locally - Long Context Model for Novel and Story Writing with Images

Sun, 2024-09-01 01:19

This video shows how to locally install the MiniG model, which is trained on a synthesized dataset of over 120 million entries and has a 1M-token context window. It handles both text and images.




Code:

conda create -n lm python=3.11 -y && conda activate lm

pip install torch
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/accelerate
pip install --upgrade sentencepiece


conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

pip install tiktoken torchvision

import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)

device = "cuda"

tokenizer = AutoTokenizer.from_pretrained("CausalLM/miniG",trust_remote_code=True)

query = "What is Happiness?"

inputs = tokenizer.apply_chat_template([{"role": "user", "content": query}],
                                       add_generation_prompt=True,
                                       tokenize=True,
                                       return_tensors="pt",
                                       return_dict=True
                                       )

inputs = inputs.to(device)
model = AutoModelForCausalLM.from_pretrained(
    "CausalLM/miniG",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to(device).eval()

gen_kwargs = {"max_length": 2500, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
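
Because miniG is aimed at long outputs such as novels and stories, you may prefer to stream tokens as they are generated instead of waiting for the whole completion. A small optional sketch using transformers' TextIteratorStreamer, assuming the same tokenizer, model, and inputs as above:

from threading import Thread
from transformers import TextIteratorStreamer

# Stream the reply token by token while generation runs in a background thread
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
generation_kwargs = dict(**inputs, streamer=streamer, max_length=2500, do_sample=True, top_k=1)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
for new_text in streamer:
    print(new_text, end="", flush=True)
thread.join()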



#===================

#For Images:

#==================

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

tokenizer = AutoTokenizer.from_pretrained("CausalLM/miniG", trust_remote_code=True)

query = 'Which lane should I drive in this image?'
image = Image.open("/home/Ubuntu/images/lane.png").convert('RGB')
inputs = tokenizer.apply_chat_template([{"role": "user", "image": image, "content": query}],
                                       add_generation_prompt=True, tokenize=True, return_tensors="pt",
                                       return_dict=True)  # chat mode

inputs = inputs.to(device)
model = AutoModelForCausalLM.from_pretrained(
    "CausalLM/miniG",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to(device).eval()

gen_kwargs = {"max_length": 2500, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print(tokenizer.decode(outputs[0]))
Categories: DBA Blogs

Install RAG Me Up with Ollama Locally - Free RAG with Any Dataset

Sat, 2024-08-31 19:23

This video shows how to install and use RAG Me Up, a generic framework (server + UIs) that enables you to do RAG on your own dataset.


Code:

conda create -n rag python=3.11 -y && conda activate rag


sudo apt update
sudo apt install openjdk-17-jre
sudo apt install openjdk-17-jdk

java --version

Install Scala: it is recommended to use cs setup, the Scala installer powered by Coursier, which installs everything needed to use the latest Scala release from the command line:

curl -fL https://github.com/coursier/coursier/releases/latest/download/cs-x86_64-pc-linux.gz | gzip -d > cs && chmod +x cs && ./cs setup
source ~/.profile

Install sbt, the simple build tool for Scala:

cs setup
sbt --script-version

This should install the latest stable version of sbt.


git clone https://github.com/UnderstandLingBV/RAGMeUp.git && cd RAGMeUp/server
pip install -r requirements.txt
python3 server.py

For the Scala UI, run sbt run from the server/scala directory.
Categories: DBA Blogs

Rope Pearl Installation - Easy Tutorial for Deepfake Face Swap - Free and Private Face Fusion

Thu, 2024-08-29 06:24

This video shows how to locally install Rope Pearl on Windows for free, private, one-click face swap in any video. You can easily fuse multiple faces in a deepfake.



Code:

1- Install Choco

Open PowerShell as Administrator and run the following:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

2- Install following pre-reqs

choco install python --version=3.10.0
choco install git
choco install ffmpeg

3- Install Visual Studio from
https://visualstudio.microsoft.com/visual-cpp-build-tools/

4- git clone https://github.com/Hillobar/Rope

5-  cd Rope

6- create the virtual environment
python -m venv venv

7- activate the local venv
.\venv\Scripts\activate

8- check if you have installed the correct python version (Python 3.10.X)
python --version

9- install the dependencies for Rope
.\venv\Scripts\pip.exe install -r .\requirements.txt

10- Copy all the files under Assets from https://github.com/Hillobar/Rope/releases/tag/Sapphire and save them in the Rope\Models folder

11- Run .\Rope.bat


Categories: DBA Blogs

Top Free High-Quality AI Datasets Sites

Thu, 2024-08-29 00:23

 

This video shows where to easily find free, high-quality datasets for LLMs.



ARCS Data Sets

https://csr.lanl.gov/data/


Microsoft Datasets

https://www.microsoft.com/en-us/research/tools/?facet%5Bdate%5D%5Bfixed%5D=any&facet%5Btax%5D%5Bmsr-product-type%5D[]=243083&filter_queries%5B%5D=open&pg=1&sort_by=most-relevant


AWS Open Datasets 

https://registry.opendata.aws/


Google Datasets

https://datasetsearch.research.google.com/


PEW Research

https://www.pewresearch.org/search/dataset


Azure Open Datasets

https://azure.microsoft.com/en-us/products/open-datasets

Categories: DBA Blogs

Easy Tutorial to Build Full Free RAG Pipeline from Scratch with Your Own Data

Fri, 2024-08-23 20:08

This video shows how to install Haystack with Ollama locally for a free end-to-end RAG pipeline over your own documents.



Code:


conda create -n hay python=3.11 -y && conda activate hay

pip install torch
pip install haystack-ai==2.2.4
pip install haystack-experimental==0.1.0
pip install sentence-transformers==3.0.1
pip install transformers==4.42.3
pip install ollama-haystack

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

import transformers
import torch

from haystack_integrations.components.generators.ollama import OllamaGenerator

generator = OllamaGenerator(model="llama3.1",
                            url="http://localhost:11434/api/generate",
                            generation_kwargs={
                                "num_predict": 100,
                                "temperature": 0.9,
                            })

print(generator.run("Who is the best American actor?"))



========



from haystack_integrations.components.generators.ollama import OllamaGenerator

from haystack import Pipeline, Document
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.document_stores.in_memory import InMemoryDocumentStore

template = """
Given the following information, answer the question.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{ query }}?
"""

docstore = InMemoryDocumentStore()
docstore.write_documents([Document(content="I really like summer"),
                          Document(content="My favorite sport is soccer"),
                          Document(content="I don't like reading sci-fi books"),
                          Document(content="I don't like crowded places")])

generator = OllamaGenerator(model="llama3.1",
                            url="http://localhost:11434/api/generate",
                            generation_kwargs={
                                "num_predict": 100,
                                "temperature": 0.9,
                            })

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=docstore))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("llm", generator)
pipe.connect("retriever", "prompt_builder.documents")
pipe.connect("prompt_builder", "llm")

# The original snippet used `query` without defining it; define the question to ask
query = "Which sport do I like?"

result = pipe.run({"prompt_builder": {"query": query}, "retriever": {"query": query}})
print(result)
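
The pipeline result is a nested dict keyed by component name. To pull out just the generated answer, you can index into the llm component's replies (a small sketch, assuming the Haystack 2.x output shape used above):

# The generator's output sits under the "llm" component as a list of replies
answer = result["llm"]["replies"][0]
print(answer)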


Categories: DBA Blogs

Roop - One-Click Face Swap in Video with AI - Step by Step Tutorial

Fri, 2024-08-23 01:03

This video shows how to locally install Roop, which lets you take a video and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset, no training. It is an alternative to the Deep Live Cam tool for AI face swap.



Code:


conda create -n roop python=3.11 -y && conda activate roop

git clone https://github.com/s0md3v/roop.git && cd roop

pip install -r requirements.txt

python3 run.py --execution-provider cuda
Categories: DBA Blogs

Install MinerU Locally to Create LLM Dataset from PDF Files

Thu, 2024-08-22 16:41

This video shows how to install MinerU, an LLM-powered tool that converts PDFs into machine-readable formats (e.g., Markdown, JSON), allowing easy extraction into any format to create datasets.


Code:

git clone https://github.com/opendatalab/MinerU.git && cd MinerU

conda create -n MinerU python=3.10 && conda activate MinerU

pip install magic-pdf[full]==0.7.0b1 --extra-index-url https://wheels.myhloli.com

magic-pdf --version

git lfs install

mkdir model
cd model
git lfs clone https://huggingface.co/wanderkid/PDF-Extract-Kit

Edit magic-pdf.json to point models-dir at the downloaded models and set the device mode to cuda.

wget https://github.com/opendatalab/MinerU/raw/master/demo/small_ocr.pdf

magic-pdf -p small_ocr.pdf
Categories: DBA Blogs

Adobe Magic Fixup - Edit Images with Simple Cut and Paste - Install Locally

Thu, 2024-08-22 01:27

This video shows how to install Magic Fixup locally. It lets users edit images with a simple cut-and-paste-like approach and fixes up those edits automatically.


Code:

git clone https://github.com/adobe-research/MagicFixup.git && cd MagicFixup

conda env create -f environment.yaml -v

conda activate MagicFixup

Download the pretrained checkpoint (save it as magic_fu_open_source_full_model.pt) from:
https://drive.google.com/file/d/1zOcDcJzCijbGr9I9adC0Cv6yzW60U9TQ/view?usp=share_link

python3 magicfu_gradio.py  --checkpoint magic_fu_open_source_full_model.pt
Categories: DBA Blogs

Install Phi 3.5 Vision Locally for OCR and Image Chat

Tue, 2024-08-20 20:43

This video shows how to locally install Phi-3.5-vision, a lightweight, state-of-the-art open multimodal model trained with a focus on very high-quality, reasoning-dense data across both text and vision.


Code:

pip install torch
pip install --upgrade transformers
pip install accelerate huggingface_hub
pip install numpy Pillow Requests torchvision

jupyter notebook

from IPython.display import Markdown, display
from PIL import Image
import requests
from transformers import AutoModelForCausalLM
from transformers import AutoProcessor

model_id = "microsoft/Phi-3.5-vision-instruct"

# Note: set _attn_implementation='eager' if you don't have flash_attn installed
model = AutoModelForCausalLM.from_pretrained(
  model_id,
  device_map="cuda",
  trust_remote_code=True,
  torch_dtype="auto",
  _attn_implementation='flash_attention_2'    
)

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Load the local image
image = Image.open("/home/Ubuntu/images/1.png")

# Prepare the input
messages = [
    {"role": "user", "content": "<|image_1|> Describe this image.",}
]

prompt = processor.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = processor(prompt, [image], return_tensors="pt").to("cuda:0")

# Generate the response
generation_args = {
    "max_new_tokens": 1000,
    "temperature": 0.0,
    "do_sample": False,
}

generate_ids = model.generate(**inputs,
                              eos_token_id=processor.tokenizer.eos_token_id,
                              **generation_args)

# Remove input tokens
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(generate_ids,
                                  skip_special_tokens=True,
                                  clean_up_tokenization_spaces=False)[0]

print(response)
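
Phi-3.5-vision can also take several images in a single prompt by numbering the image placeholders. A small sketch with two illustrative local screenshots (the file paths are placeholders), reusing the processor and model loaded above:

# Hypothetical multi-image prompt: <|image_1|>, <|image_2|>, ... map to the image list order
images = [Image.open("/home/Ubuntu/images/1.png"), Image.open("/home/Ubuntu/images/2.png")]
placeholders = "".join(f"<|image_{i+1}|>\n" for i in range(len(images)))
messages = [{"role": "user", "content": placeholders + "Extract all the text from these images."}]

prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(prompt, images, return_tensors="pt").to("cuda:0")
generate_ids = model.generate(**inputs, eos_token_id=processor.tokenizer.eos_token_id, max_new_tokens=1000)
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])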
Categories: DBA Blogs

CogVideoX-2B - Install Locally to Create Videos from Text

Mon, 2024-08-19 03:37

This video shows how to locally install CogVideoX-2B, an open-source video-generation model.


Code:

conda create -n cog python=3.11 -y && conda activate cog

git clone https://github.com/THUDM/CogVideo.git && cd CogVideo

pip install -r requirements.txt
pip install --upgrade opencv-python transformers diffusers

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene."

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
)

pipe.enable_model_cpu_offload()

prompt_embeds, _ = pipe.encode_prompt(
    prompt=prompt,
    do_classifier_free_guidance=True,
    num_videos_per_prompt=1,
    max_sequence_length=226,
    device="cuda",
    dtype=torch.float16,
)

video = pipe(
    num_inference_steps=50,
    guidance_scale=6,
    prompt_embeds=prompt_embeds,
).frames[0]

export_to_video(video, "output.mp4", fps=8)
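
If the pipeline still runs out of GPU memory on your card, diffusers exposes VAE slicing and tiling on the CogVideoX autoencoder; these are optional calls to make before invoking the pipeline above (a hedged sketch based on the diffusers API):

# Optional memory savers: decode the video latents in slices/tiles instead of all at once
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()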
Categories: DBA Blogs

Free LLM Dataset Creation with Ollama Locally - Easy Tutorial

Sat, 2024-08-17 21:34

This video is a step-by-step tutorial on creating your own custom dataset from your database schema locally with a free model from Ollama.



Code:

import json
import ollama

def make_llama_3_prompt(user, system="", assistant=""):
    system_prompt = ""
    if system:
        system_prompt = (
            f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        )
   
    user_prompt = f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
    assistant_prompt = f"<|start_header_id|>assistant<|end_header_id|>\n\n{assistant}<|eot_id|>" if assistant else "<|start_header_id|>assistant<|end_header_id|>\n\n"
   
    return f"<|begin_of_text|>{system_prompt}{user_prompt}{assistant_prompt}"

def get_movie_schema():
    return """\
    0|Title|TEXT eg. "Inception"
    1|Director|TEXT eg. "Christopher Nolan"
    2|Year|INT eg. "2010"
    3|Rating|TEXT eg. "PG-13"
    4|Runtime|TEXT eg. "148 min" castable to int
    5|Genre|TEXT eg. "Sci-Fi"
    6|Box_Office|TEXT eg. "$829,895,144" and when null has a value "N/A"
    """

def generate_question_and_query():
    system = "You are a data analyst with 10 years of experience writing complex SQL queries.\n"
    system += (
        "Consider a table called 'movies' with the following schema (columns)\n"
    )
    system += get_movie_schema()
    system += "Consider the following questions, and queries used to answer them:\n"

    question = """What is the highest-grossing movie of all time?"""
    sql = "SELECT Title, Box_Office FROM movies WHERE Box_Office != 'N/A' ORDER BY CAST(REPLACE(Box_Office, ',', '') AS INTEGER) DESC LIMIT 1;"

    system += "Question: " + question + "\n"
    system += "Query: " + sql + "\n"

    user = "Write a question and a query that are similar but different to those above.\n"
    user += "Format the question and query as a JSON object, i.e.\n"
    user += '{"question" : str, "sql_query": str }.\n'

    user += "Make sure to only return me valid sqlite SQL query generated as response to the question. Don't give me any comments. Just return question and query as JSON objects. Make sure query is relevant to the question. Make sure each query is complete and ends with a ;\n"

    prompt = make_llama_3_prompt(user, system)

    # Generate the result from the model
    result = ollama.generate(model='llama3.1', prompt=prompt)

    # Inspect and parse the result['response']
    response_str = result['response']
    try:
        response_dict = json.loads(response_str)
    except json.JSONDecodeError as e:
        print("Failed to parse response as JSON:", e)
        response_dict = {}

    return response_dict

def save_to_jsonl(data, file_path):
    with open(file_path, 'a') as f:
        for entry in data:
            f.write(json.dumps(entry) + '\n')

def main():
    output_file_path = 'questions_queries.jsonl'
    num_iterations = 10  # Define how many questions and queries you want to generate
    all_questions_queries = []

    for _ in range(num_iterations):
        question_and_query = generate_question_and_query()
        all_questions_queries.append(question_and_query)

    save_to_jsonl(all_questions_queries, output_file_path)
    print(f"Saved {num_iterations} questions and queries to {output_file_path}")

# Execute the main function
if __name__ == "__main__":
    main()
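
Once the JSONL file is written, you can load it straight into a Hugging Face dataset for inspection or later fine-tuning. A short sketch, assuming the questions_queries.jsonl produced above:

# Load the generated question/query pairs as a Hugging Face dataset
from datasets import load_dataset

pairs = load_dataset("json", data_files="questions_queries.jsonl", split="train")
print(pairs)
print(pairs[0])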
Categories: DBA Blogs

How to Install Flux AI Models Locally for Image Generation Easily

Thu, 2024-08-15 15:04

This video shows how to install the Flux.1-Dev and Flux.1-Schnell models locally in ComfyUI and how to generate Midjourney-like images.



Code:

conda create -n comfy python=3.11 -y && conda activate comfy

git clone https://github.com/comfyanonymous/ComfyUI.git && cd ComfyUI

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121

pip install -r requirements.txt

python3 main.py

http://localhost:8188

Copy clip_l.safetensors and t5xxl_fp16.safetensors from https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main into ComfyUI/models/clip/

Copy ae.safetensors from https://huggingface.co/black-forest-labs/FLUX.1-dev into ComfyUI/models/vae/

Copy flux1-dev.safetensors from https://huggingface.co/black-forest-labs/FLUX.1-dev into ComfyUI/models/unet/

Copy flux_realism_lora.safetensors from https://huggingface.co/comfyanonymous/flux_RealismLora_converted_comfyui/tree/main into ComfyUI/models/loras/

Go to https://comfyanonymous.github.io/ComfyUI_examples/flux/ for the example Flux workflows.
Categories: DBA Blogs

Deep Live Cam Local Installation Easy Guide for Face Swap and Deepfake Video on Webcam

Fri, 2024-08-09 20:16

This is a step-by-step, easy tutorial to install Deep Live Cam locally on Windows for real-time face swap and one-click video deepfake with only a single image (uncensored).


Code:


1- Install Choco

Open PowerShell as Administrator and run the following:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

2- Install following pre-reqs

choco install python --version=3.10.0
choco install git
choco install ffmpeg

3- Install Visual Studio from https://visualstudio.microsoft.com/visual-cpp-build-tools/

4- git clone https://github.com/hacksider/Deep-Live-Cam.git and cd Deep-Live-Cam

5- Download the 2 models from https://huggingface.co/hacksider/deep-live-cam/tree/main and put them in the Deep-Live-Cam\models folder

6- cd Deep-Live-Cam and pip install -r requirements.txt

7- If on CPU, run python run.py

For GPU:

8- Install CUDA Toolkit 11.8 from https://developer.nvidia.com/cuda-11-8-0-download-archive

9- Install dependencies:

pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3

10- python run.py --execution-provider cuda

Enjoy
Categories: DBA Blogs

Mem0 with Ollama Locally - Memory Layer for Personalized AI

Mon, 2024-08-05 03:57

This video is a step-by-step, easy tutorial to install Mem0 locally and integrate it with a local Ollama model.


Code:

conda create -n mem python=3.11 -y && conda activate mem

pip install torch
pip install -U transformers sentencepiece accelerate
pip install sentence_transformers
pip install ollama
pip install mem0ai

import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = ""

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})



# Get all memories
all_memories = m.get_all()
print(all_memories)

# Get a single memory by ID
specific_memory = m.get("59565340-c742-4e09-8128-702e810cb4fd")
print(specific_memory)

related_memories = m.search(query="alice hobbies?", user_id="alice")
print(related_memories)

result = m.update(memory_id="59565340-c742-4e09-8128-702e810cb4fd", data="Visited Brisbane in Winter")
print(result)

m.delete(memory_id="59565340-c742-4e09-8128-702e810cb4fd") # Delete a memory

m.delete_all(user_id="alice") # Delete all memories

all_memories = m.get_all()
print(all_memories)
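
To actually use the memory layer in a conversation, you can retrieve the memories relevant to a question and prepend them to the model's context before calling Ollama. A small sketch with an illustrative question, assuming the Memory instance m from above still holds some memories for alice:

import ollama

question = "What should Alice do this weekend?"  # illustrative question

# Pull the most relevant memories and inject them into the chat prompt
memories = m.search(query=question, user_id="alice")
memory_text = "\n".join(str(mem) for mem in memories)

reply = ollama.chat(model="llama3.1:latest", messages=[
    {"role": "system", "content": f"Known facts about the user:\n{memory_text}"},
    {"role": "user", "content": question},
])
print(reply["message"]["content"])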

Categories: DBA Blogs

Workflows in LlamaIndex - Tutorial to Build Complex AI Applications with Events

Sat, 2024-08-03 02:47

This video shows how to install and use LlamaIndex Workflows, a mechanism for orchestrating actions in increasingly complex AI applications.


Code:

conda create -n workflow python=3.11 -y && conda activate workflow

pip install llama-index
pip install llama-index-llms-openai

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

from llama_index.core.workflow import (
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)

from llama_index.llms.openai import OpenAI

class WeatherEvent(Event):
    location: str
    forecast: str | None

class WeatherFlow(Workflow):
    llm = OpenAI()

    @step()
    async def get_location(self, ev: StartEvent) -> WeatherEvent:
        location = "Sydney"
        forecast = ""  # or some default value
        return WeatherEvent(location=location, forecast=forecast)

    @step()
    async def get_forecast(self, ev: WeatherEvent) -> WeatherEvent:
        location = ev.location
        prompt = f"Get the current weather forecast for {location}."
        response = await self.llm.acomplete(prompt)
        return WeatherEvent(location=location, forecast=str(response))

    @step()
    async def format_forecast(self, ev: WeatherEvent) -> StopEvent:
        location = ev.location
        forecast = ev.forecast
        formatted_forecast = f"Weather in {location}: {forecast}"
        return StopEvent(result=formatted_forecast)

w = WeatherFlow(timeout=60, verbose=False)
result = await w.run()
print(str(result))
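
The bare await on the last line works inside a Jupyter notebook, which already runs an event loop. To run the same workflow from a plain Python script, wrap it with asyncio, as in this small sketch:

import asyncio

async def main():
    w = WeatherFlow(timeout=60, verbose=False)
    result = await w.run()
    print(str(result))

if __name__ == "__main__":
    asyncio.run(main())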
Categories: DBA Blogs

Install Perplexica with SearXNG and Ollama and Llama 3.1 for Local AI Search Engine for Free

Wed, 2024-07-31 02:53

This video shows how to locally install Perplexica with SearXNG and Ollama's Llama 3.1 model and do AI-powered search.



Code:
conda create -n px python=3.11 -y && conda activate px

pip install torch transformers accelerate huggingface_hub sentencepiece

SearXNG:

git clone https://github.com/searxng/searxng && cd searxng

In the settings.yml file under the searx directory, change the following:

search:
  formats:
    - html
    - json

sudo chmod 666 /var/run/docker.sock
make docker.build  

docker run --rm -d -p 32768:8080 -v "${PWD}/searxng:/etc/searxng" -e "BASE_URL=http://localhost:$PORT/" -e "INSTANCE_NAME=my-instance" searxng/searxng

http://localhost:32768

Ollama:

curl -fsSL https://ollama.com/install.sh | sh

ollama pull llama3
ollama pull bgesmall
             
             
perplexica :

git clone https://github.com/ItzCrazyKns/Perplexica.git && cd Perplexica

cp sample.config.toml config.toml
vi config.toml and change the following:

[API_ENDPOINTS]
SEARXNG = "http://localhost:32768"
OLLAMA = "http://localhost:11434"

sudo chmod 666 /var/run/docker.sock
docker compose up -d

http://localhost:3000
Categories: DBA Blogs

Get Llama 3.1 70B-Level AI Quality from 8B with Ollama Locally for Free

Tue, 2024-07-30 05:35

This video is a step-by-step, easy tutorial to get Llama 3.1 70B-level quality from Llama 3.1 8B with Ollama locally. It's inspired by Matt Shumer's GPT Prompt Engineer.


Code:

import os
import re
import json
import sys

from ollama import Client
client = Client(host='http://localhost:11434')

# Define model names
small_model = "llama3.1"
big_model = "llama3.1:70b"

def generate_candidate_prompts(task, prompt_example, response_example):
    system_prompt = """Given an example training sample, create seven additional samples for the same task that are even better.
    Each example should contain:
    1. Ensure the new examples are diverse and unique from one another.
    2. They should all be perfect. If you make a mistake, this system won't work.

    Respond in this format:
    PUT_PROMPT_HERE
    PUT_RESPONSE_HERE

    PUT_PROMPT_HERE
    PUT_RESPONSE_HERE
    ...
    """
    user_content = f"""{task}
    {prompt_example}
    {response_example}
    """

    response = client.chat(
        model=big_model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content}
        ],
        options={
            "max_tokens": 4000,
            "temperature": 0.5
        }
    )
    response_text = response['message']['content']

    # Parse out the prompts and responses
    prompts_and_responses = []
    # Split examples by the delimiter
    examples = response_text.split('PUT_PROMPT_HERE')[1:]

    for example in examples:
        parts = example.split('PUT_RESPONSE_HERE')
        if len(parts) == 2:
            prompt, response = parts
            prompts_and_responses.append({'prompt': prompt.strip(), 'response': response.strip()})

    return prompts_and_responses

def generate_system_prompt(task, prompt_examples):
    system_prompt = """Given a user-description of their task and a set of prompt / response pairs (it'll be in JSON for easy reading)
                    for the types of outputs we want to generate given inputs, write a fantastic system prompt that describes
                    the task to be done perfectly.
                    1. Do this perfectly.
                    2. Respond only with the system prompt, and nothing else. No other text will be allowed.
                    Respond in this format:
                    WRITE_SYSTEM_PROMPT_HERE
                    """
    user_content = f"""{task}
    {json.dumps(prompt_examples, indent=2)}
    """

    response = client.chat(
        model=big_model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content}
        ],
        options={
            "max_tokens": 4000,
            "temperature": 0.5
        }
    )

    response_text = response['message']['content']

    # Directly use the response text since the prompt specifies it should be the only content
    generated_system_prompt = response_text.strip()

    return generated_system_prompt

def test_small_model(generated_examples, prompt_example, system_prompt):
    messages = [{"role": "system", "content": system_prompt}]

    for example in generated_examples:
        messages.append({"role": "user", "content": example['prompt']})
        messages.append({"role": "assistant", "content": example['response']})

    messages.append({"role": "user", "content": prompt_example.strip()})

    response = client.chat(
        model=small_model,
        messages=messages,
        options={
            "max_tokens": 2000,
            "temperature": 0.5
        }
    )

    response_text = response['message']['content']

    return response_text

def run_conversion_process(task, prompt_example, response_example):
    print('Generating the prompts / responses...')
    # Generate candidate prompts
    generated_examples = generate_candidate_prompts(task, prompt_example, response_example)

    print('Prompts / responses generated. Now generating system prompt...')

    # Generate the system prompt
    system_prompt = generate_system_prompt(task, generated_examples)

    print('System prompt generated:', system_prompt)

    print(f'\n\nTesting the new prompt on {small_model}, using your input example...')
    # Test the generated examples and system prompt with the small model
    small_model_response = test_small_model(generated_examples, prompt_example, system_prompt)

    print(f'{small_model} responded with:')
    print(small_model_response)

    print('\n\n!! CHECK THE FILE DIRECTORY, THE PROMPT IS NOW SAVED THERE !!')

    # Create a dictionary with all the relevant information
    result = {
        "task": task,
        "initial_prompt_example": prompt_example,
        "initial_response_example": response_example,
        "generated_examples": generated_examples,
        "system_prompt": system_prompt,
        "small_model_response": small_model_response
    }

    # Save the small model prompt to a Python file
    with open("small_model_prompt.py", "w") as file:
        file.write('system_prompt = """' + system_prompt + '"""\n\n')

        file.write('messages = [\n')
        for example in generated_examples:
            file.write('    {"role": "user", "content": """' + example['prompt'] + '"""},\n')
            file.write('    {"role": "assistant", "content": """' + example['response'] + '"""},\n')

        file.write('    {"role": "user", "content": """' + prompt_example.strip() + '"""}\n')
        file.write(']\n')

    return result

task = "refactoring code"

prompt_example = """def hello():
                    total = 0
                    total = total + 1
                    return total"""

response_example = """def hello():
                   total = 1
                   return total
                 """

result = run_conversion_process(task, prompt_example, response_example)
print(result)
Categories: DBA Blogs
