Prompt Engineering
Creating J.A.R.V.I.S.
A sneak peek at a voice-to-voice chat assistant; a rough sketch of such a pipeline follows after the links below.
🦾 Discord: discord.com/invite/t4eYQRUcXB
☕ Buy me a Coffee: ko-fi.com/promptengineering
|🔴 Patreon: www.patreon.com/PromptEngineering
💼Consulting: calendly.com/engineerprompt/consulting-call
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Signup for Advanced RAG:
tally.so/r/3y9bb0
All Interesting Videos:
Everything LangChain: ua-cam.com/play/PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr.html
Everything LLM: ua-cam.com/play/PLVEEucA9MYhNF5-zeb4Iw2Nl1OKTH-Txw.html
Everything Midjourney: ua-cam.com/play/PLVEEucA9MYhMdrdHZtFeEebl20LPkaSmw.html
AI Image Generation: ua-cam.com/play/PLVEEucA9MYhPVgYazU5hx6emMXtargd4z.html
Views: 2,537
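For readers wondering how such a voice-to-voice loop can be wired together, here is a minimal sketch, not the author's actual code, using only the OpenAI API: Whisper for speech-to-text, GPT-4o for the reply, and tts-1 for speech output (matching what the replies in the comments below describe). Audio recording and playback are assumed to be handled elsewhere, and the file names are placeholders.

    # Minimal sketch of a single voice-to-voice turn via the OpenAI API (assumptions noted above).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def voice_turn(input_wav: str, output_mp3: str) -> str:
        # 1) Transcribe the user's recorded audio with Whisper.
        with open(input_wav, "rb") as f:
            transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

        # 2) Generate the assistant's reply with GPT-4o.
        chat = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a concise voice assistant."},
                {"role": "user", "content": transcript.text},
            ],
        )
        reply = chat.choices[0].message.content

        # 3) Convert the reply to speech and save it for playback.
        speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
        speech.stream_to_file(output_mp3)
        return reply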

Videos

First Impressions of Gemini Flash 1.5 - The Fastest 1 Million Token Model
Views: 5K · 4 hours ago
Just checked out Google's new Gemini Flash at Google I/O. It's a super-fast AI model designed for handling big tasks - think processing video, audio, or huge codebases, all while keeping costs low. I put it through its paces against giants like GPT-3.5 and GPT-4o, looking at performance, costs, and how it handles real-world tasks. I even tried confusing it with tricky questions and coding ch...
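As a rough illustration of how a long input can be pushed through Gemini 1.5 Flash, here is a minimal sketch using the google-generativeai Python package; the model id "gemini-1.5-flash", the input file, and the prompt are assumptions for illustration, not taken from the video.

    # Minimal sketch: long-context summarization with Gemini 1.5 Flash.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # key from Google AI Studio
    model = genai.GenerativeModel("gemini-1.5-flash")       # assumed model id

    # Long-context use case: summarize a large text dump cheaply and quickly.
    with open("big_codebase_dump.txt") as f:                # hypothetical input file
        context = f.read()

    response = model.generate_content(
        ["Summarize the main components of this codebase:", context]
    )
    print(response.text)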
Google IO: Agents Are the Future - Demos
Views: 2.9K · 7 hours ago
Google IO was all about agents. Here are some of the demos that were shown. 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEngineering 💼Consulting: calendly.com/engineerprompt/consulting-call 📧 Business Contact: engineerprompt@gmail.com Become Member: tinyurl.com/y5h28s6h 💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: P...
Getting Started with GPT-4o API, Image Understanding, Function Calling and MORE
Views: 6K · 7 hours ago
Getting Started with GPT-4o: A Comprehensive Tutorial. This video tutorial guides you through the basics of getting started with the GPT-4o API, including comparisons with GPT-4 Turbo, exploring capabilities like text generation, image understanding, and function calling. 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/Prompt...
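A minimal sketch of the kind of GPT-4o call the tutorial walks through, combining an image input with a tool definition via the official openai Python client; the image URL and the get_weather tool are illustrative placeholders, not the exact examples from the video.

    # Minimal sketch: GPT-4o with image understanding and function calling.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image, and what's the weather in Paris?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }],
        tools=tools,
    )

    message = response.choices[0].message
    print(message.content or message.tool_calls)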
GPT-4o: OpenAI's NEW OMNI-MODEL Can DO it ALL
Views: 4K · 9 hours ago
In this video we look at GPT-4o, OpenAI's new omni model: a groundbreaking AI model capable of processing and responding to audio, vision, and text in real time. Demonstrating its versatility, the video showcases various scenarios including customer support, language translation, and educational tutoring, highlighting the omni model's ability to understand and interact at near-human response times. 🦾 Discord: d...
Yi-1.5: True Apache 2.0 Competitor to LLAMA-3
Views: 6K · 9 hours ago
In this video, we will look at the Yi-1.5 series of models, which were just released by 01-AI. This update includes 3 different models with sizes ranging from 6 billion to 34 billion parameters, trained on up to 4.1 trillion tokens. All models are released under the Apache 2.0 license. 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEn...
NVIDIA ChatRTX: Private Chatbot for Your Files, Image Search via Voice | How to get started
Views: 7K · 14 hours ago
This video provides an in-depth review and tutorial of NVIDIA's ChatRTX, a new tool designed for users with RTX GPUs on Windows PCs. The tool leverages Retrieval-Augmented Generation (RAG) and TensorRT-LLM alongside RTX acceleration to chat with documents and use voice interaction. It now supports local photo and image search with improvements in its features. The application requires spe...
Free LOCAL Copilot to Take Your Coding to the NEXT LEVEL
Views: 5K · 21 hours ago
Setting up Local AI Models for Code Generation in VS Code. This video tutorial covers how to set up local AI models as a copilot for code generation within VS Code, transitioning from the Groq API to LM Studio and Ollama, both of which enable running local models. #copilot #llm #vscode #llama3 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: w...
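The general idea can be sketched as follows (an illustration, not the exact extension settings from the video): both Ollama and LM Studio expose OpenAI-compatible HTTP endpoints, so any OpenAI-style client, or a VS Code extension that accepts a custom base URL, can talk to a local model. The default Ollama port (11434) and the model name "llama3" are assumptions; the model must already be pulled locally.

    # Minimal sketch: pointing an OpenAI-style client at a local Ollama server.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        api_key="ollama",                      # placeholder; local servers ignore the key
    )

    completion = client.chat.completions.create(
        model="llama3",  # must already be pulled, e.g. with `ollama pull llama3`
        messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    )
    print(completion.choices[0].message.content)

    # LM Studio works the same way; its local server defaults to http://localhost:1234/v1.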
Free Copilot to Take Your Coding to the NEXT LEVEL
Views: 12K · 1 day ago
Get started with llama3 as your coding copilot. We will use the codeGPT extension in VSCode powered by llama3 via Groq API to create our copilot. #copilot #llm #vscode #llama3 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEngineering 💼Consulting: calendly.com/engineerprompt/consulting-call 📧 Business Contact: engineerpr...
Llama-3 🦙 with LocalGPT: Chat with YOUR Documents in Private
Views: 7K · 1 day ago
In this video we will look at how to start using Llama-3 with localGPT to chat with your documents locally and privately. 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEngineering 💼Consulting: calendly.com/engineerprompt/consulting-call 📧 Business Contact: engineerprompt@gmail.com Become Member: tinyurl.com/y5h28s6h 💻 Pr...
Extending Llama-3 to 1M+ Tokens - Does it Impact the Performance?
Views: 10K · 14 days ago
In this video we will look at the 1M-token context version of Llama-3, the best open LLM, built by Gradient AI. 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEngineering 💼Consulting: calendly.com/engineerprompt/consulting-call 📧 Business Contact: engineerprompt@gmail.com Become Member: tinyurl.com/y5h28s6h 💻 Pre-configured loca...
Get your own custom Phi-3-mini for your use cases
Views: 10K · 14 days ago
Here is how to get started with training your own version of Phi-3-mini on your own dataset. We will use Unsloth to train our own version on a custom dataset. #llm #finetuning #phi3 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEngineering 💼Consulting: calendly.com/engineerprompt/consulting-call 📧 Business Contact: engine...
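A rough sketch of the Unsloth fine-tuning recipe described above; the Unsloth model id, the dataset path, and the hyperparameters are assumptions for illustration, not the exact values used in the video.

    # Minimal sketch: LoRA fine-tuning of Phi-3-mini with Unsloth + TRL.
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Phi-3-mini-4k-instruct",  # assumed Unsloth model id
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Attach LoRA adapters so only a small set of weights is trained.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )

    # Assumes a JSONL file where each row has a pre-formatted "text" field.
    dataset = load_dataset("json", data_files="my_dataset.jsonl", split="train")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            num_train_epochs=1,
            output_dir="phi3-mini-custom",
        ),
    )
    trainer.train()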
How Good is LLAMA-3 for RAG, Routing, and Function Calling
Views: 7K · 14 days ago
How good is Llama-3 for RAG, query routing, and function calling? We compare the capabilities of the 8B and 70B models for these tasks. We will be using the Groq API for accessing these models. 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEngineering 💼Consulting: calendly.com/engineerprompt/consulting-call 📧 Business Cont...
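As a hedged illustration of the function-calling part of this comparison, here is a minimal sketch using the groq Python client; the multiply tool and the model id "llama3-70b-8192" are assumptions, not taken from the video.

    # Minimal sketch: function calling with Llama-3 70B on the Groq API.
    import json
    from groq import Groq

    client = Groq()  # reads GROQ_API_KEY from the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "multiply",  # hypothetical tool for illustration
            "description": "Multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
                "required": ["a", "b"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="llama3-70b-8192",
        messages=[{"role": "user", "content": "What is 1234 times 5678? Use the tool."}],
        tools=tools,
        tool_choice="auto",
    )

    tool_calls = response.choices[0].message.tool_calls
    if tool_calls:
        args = json.loads(tool_calls[0].function.arguments)
        print("Model requested:", tool_calls[0].function.name, args)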
How Good is Phi-3-Mini for RAG, Routing, Agents
Views: 9K · 21 days ago
Microsoft just released their Phi-3 family of models that are SOTA for their weight class. But are they good for RAG and agent use-cases? 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEngineering 💼Consulting: calendly.com/engineerprompt/consulting-call 📧 Business Contact: engineerprompt@gmail.com Become Member: tinyurl....
Does Size Matter? Phi-3-Mini Punching Above its Size on "BENCHMARKS"
Views: 5K · 21 days ago
Microsoft just released their Phi-3 family of models that are SOTA for their weight class. The best part, the weights are publicly available and can be used locally. 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Patreon: www.patreon.com/PromptEngineering 💼Consulting: calendly.com/engineerprompt/consulting-call 📧 Business Contact: engineerprompt@gmail...
Llama-3 Is Not Really THAT Censored
Views: 7K · 21 days ago
MIXTRAL 8x22B: The BEST MoE Just got Better | RAG and Function Calling
Views: 4K · 21 days ago
Insanely Fast LLAMA-3 on Groq Playground and API for FREE
Views: 22K · 21 days ago
LLAMA-3 🦙: EASIEST WAY To FINE-TUNE ON YOUR DATA 🙌
Views: 38K · 28 days ago
LLAMA 3 Released - All You Need to Know
Views: 11K · 28 days ago
WizardLM 2 - First Open Model Outperforming GPT-4
Views: 16K · 28 days ago
Create Financial Agents with Vision 👀 - Powered by Claude 3 Haiku & Opus
Views: 6K · 1 month ago
Grok Vision - First Multimodal Model from XAi
Views: 3.6K · 1 month ago
Mixtral 8x22B MoE - The New Best Open LLM? Fully-Tested
Views: 10K · 1 month ago
Taking Function Calling to the NEXT Level with Groq API 🚀 🚀 🚀
Views: 6K · 1 month ago
Claude 3 Introduces Function Calling and Tool Usage
Views: 6K · 1 month ago
Cohere's Command-R+ Specialized Model for RAG and Tools
Views: 5K · 1 month ago
SWE-Agent: The New Open Source Software Engineering Agent Takes on DEVIN
Views: 13K · 1 month ago
Advanced RAG with ColBERT in LangChain and LlamaIndex
Views: 8K · 1 month ago
OpenAI Voice Engine - Realistic Voice Cloning
Views: 8K · 1 month ago

COMMENTS

  • @user-hn7cq5kk5y
    @user-hn7cq5kk5y 44 minutes ago

    Don't share trash

  • @themax2go
    @themax2go 4 hours ago

    not local. not the jarvis voice. misleading title. disappointed

  • @themax2go
    @themax2go 4 hours ago

    should edit title to add "using openai"

  • @matthiasandreas6549
    @matthiasandreas6549 4 hours ago

    Yes please more. Thanks

  • @Thorin632
    @Thorin632 8 hours ago

    Please make beginner friendly tutorial, step by step guide on how to integrate this with localgpt 🙏🙏

  • @RedVRCC
    @RedVRCC 13 hours ago

    This is kinda cool even if its a bit limited rn. I really like the idea of being able to run a powerful AI model locally on my PC, making it literally mine to do whatever with and also not sharing any of my data with a big server. I'd love to see how this progresses. I am downloading it as we speak but I'm not too sure whether or not my lowly 3060 will run it well or not.

  • @chaiwajustine8532
    @chaiwajustine8532 16 hours ago

    This is Amazing

  • @chaiwajustine8532
    @chaiwajustine8532 16 hours ago

    I really need this please

  • @3choff
    @3choff 18 hours ago

    Very interesting project! Do you use any VAD to detect the end of the request?

  • @borisrusev9474
    @borisrusev9474 19 hours ago

    I don't get it, how's that different from GPT-4o?

    • @engineerprompt
      @engineerprompt 12 hours ago

      You are right, very similar in functionality. In fact, this version is using GPT-4o for text generation. But the voice functionality is not available in GPT-4o yet.

  • @eointolster
    @eointolster 23 hours ago

    I made my own version that sounds like vegeta Attitude assistant, might have an argument #angry #funny #artificialintelligence ua-cam.com/users/livexFZ1R2rUjxk?feature=share the delay in response is killing me though

  • @comfyuiadrian
    @comfyuiadrian 23 hours ago

    Wahooo..really looking forward to your new project!

  • @temp911Luke
    @temp911Luke 1 day ago

    Nice but would be great without that annoying 2-3 sec delay.

    • @engineerprompt
      @engineerprompt 1 day ago

      I agree, I just got access to Groq Whisper. Will be interesting to see how that works.

    • @fontende
      @fontende 17 hours ago

      @@engineerprompt George Hotz called Groq a scam on stream...

  • @aa-xn5hc
    @aa-xn5hc 1 day ago

    Great looking forward

  • @Techonsapevole
    @Techonsapevole 1 day ago

    It's fast. Which TTS and STT did you use?

  • @sofianeben2490
    @sofianeben2490 1 day ago

    Mistral Large is dead 😅

  • @joepropertykey3612
    @joepropertykey3612 1 day ago

    Right on Bro, RIGHT ON. ......... but we need the voice of Cortana for this, for when we are sitting around in our Mark V Armor and coding...:)

  • @immortal2.036
    @immortal2.036 1 day ago

    Idk why there's been a folder on my desktop named Jarvis-v6 for 5 months, and surprisingly it's also doing the same job 😮

    • @engineerprompt
      @engineerprompt 1 day ago

      Would love to see what's in the folder :D I am v0 now

    • @immortal2.036
      @immortal2.036 1 day ago

      @@engineerprompt it's gonna become interesting. I thought I was the one who was able to crack speech while streaming to reduce the latency.

  • @Soniboy84
    @Soniboy84 1 day ago

    how it's different than gpt4o voice?

  • @barackobama4552
    @barackobama4552 1 day ago

    Impressive, thanks!

  • @saadamiens
    @saadamiens 1 day ago

    Is Gemini Advanced the same as Google AI Studio? Sorry, I don't know how to access Google AI Studio.

    • @engineerprompt
      @engineerprompt 1 day ago

      AI Studio doesn't have the Advanced version. That is only for paid customers.

    • @saadamiens
      @saadamiens 1 day ago

      @@engineerprompt Understood. Is the 1.5 Pro in Gemini Advanced going to have the ability to upload folders like you presented in the video? Thanks again. I assume what we have in Gemini Advanced today is not 1.5 Pro, as I am not able to upload videos or folders.

  • @GetzAI
    @GetzAI 1 day ago

    EXCITED!

  • @brianpereira7757
    @brianpereira7757 1 day ago

    That doesn't sound like Jarvis, I want the real Jarvis voice!!!

    • @engineerprompt
      @engineerprompt 1 day ago

      Good point, I think ElevenLabs has that. Will try to integrate it :)

    • @sayantandas7544
      @sayantandas7544 15 hours ago

      @@engineerprompt How about you add a little UI also? And maybe add a button to take continuous screenshots at a regular interval as well. That way, you will be releasing OpenAI's demo app before OpenAI.

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 day ago

    Wooohooo!! Yeah, can‘t wait for it! ⭐️

  • @botondvasvari5758
    @botondvasvari5758 1 day ago

    And how can I use big models from Hugging Face? I can't load them into memory because many of them are bigger than 15 GB; some of them are 130 GB+. Any thoughts?

  • @RickySupriyadi
    @RickySupriyadi 1 day ago

    also i request a video about this vs gpt-4o

  • @RickySupriyadi
    @RickySupriyadi 1 day ago

    yes please is it going open source?

  • @smoofwah3552
    @smoofwah3552 1 day ago

    Is there a way to speed it up?

    • @engineerprompt
      @engineerprompt 1 day ago

      Yes, Groq has whisper support now. Going with that but the issue is the rate limit!

  • @danieldjinishiandebriquez1858

    What apis are being used?

    • @engineerprompt
      @engineerprompt 1 day ago

      Currently everything is OpenAI. Just got access to Whisper from Groq; will update it and hopefully it will be much faster!

    • @danieldjinishiandebriquez1858
      @danieldjinishiandebriquez1858 1 day ago

      @@engineerprompt great! Looking forward the tutorial or git repo. Literally yesterday I was searching about Jarvis haha

  • @user-jq1gc8lt7s
    @user-jq1gc8lt7s 1 day ago

    I LIKE IT GREAT JOB

  • @KiyotokaAyanakoji-ss1gn
    @KiyotokaAyanakoji-ss1gn 1 day ago

    What TTS are you using and is it running locally

    • @engineerprompt
      @engineerprompt 1 day ago

      Whisper, but via the API. Nothing is running locally in this video. A local version will be coming soon.

    • @KiyotokaAyanakoji-ss1gn
      @KiyotokaAyanakoji-ss1gn 1 day ago

      @@engineerprompt loved it 👍

    • @Gun_ForFun
      @Gun_ForFun 1 day ago

      @@engineerprompt but Whisper is ASR, not TTS??

    • @snapman218
      @snapman218 1 day ago

      Gross.

    • @themax2go
      @themax2go 4 hours ago

      someone already made a fully local version and works w/ little latency and with voice training. there already exist projects on github for continuous speech using a keyword to trigger recording, and a version with a ptt implementation instead of keyword

  • @ksanjaykumar4412
    @ksanjaykumar4412 1 day ago

    Thank you for your video! Quick question: I'm trying to use the Gemini 1.5 Pro API and host it in a Streamlit app. Is it possible to host the Gemini 1.5 Pro API on Streamlit so it can receive video files as inputs from users and produce the desired output (a summary or whatever)? Basically just like what we have in Google AI Studio, but using the API in the Streamlit app. Is it possible?

    • @engineerprompt
      @engineerprompt 1 day ago

      I think it's not possible yet, but you can have a workaround. You can generate frames and then use those in sequence to get a response. Here is one way of doing it: tinyurl.com/5n8ywwpt
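      A hedged sketch of that workaround (not the linked example): sample frames from the uploaded video with OpenCV and pass them to Gemini as images. The sampling rate, the frame cap, and the model id "gemini-1.5-pro-latest" are assumptions for illustration.

      # Minimal sketch: frame sampling + Gemini image input.
      import cv2
      from PIL import Image
      import google.generativeai as genai

      genai.configure(api_key="YOUR_GOOGLE_API_KEY")  # placeholder key
      model = genai.GenerativeModel("gemini-1.5-pro-latest")

      def sample_frames(video_path, every_n=30, max_frames=20):
          # Grab every Nth frame from the video as a PIL image.
          cap = cv2.VideoCapture(video_path)
          frames, i = [], 0
          while len(frames) < max_frames:
              ok, frame = cap.read()
              if not ok:
                  break
              if i % every_n == 0:
                  frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
              i += 1
          cap.release()
          return frames

      frames = sample_frames("uploaded_video.mp4")  # hypothetical uploaded file
      response = model.generate_content(["Summarize what happens in these frames:", *frames])
      print(response.text)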

  • @bastabey2652
    @bastabey2652 1 day ago

    Looking for a better ChatGPT 3.5 in the same price range; I'm impressed with Gemini Flash..

  • @TookieStanleyWilliam
    @TookieStanleyWilliam 1 day ago

    I'm getting an error on the run.bat portion. All the files are different from what I see in the video.

  • @unclecode
    @unclecode 1 day ago

    Interesting, it seems both OpenAI & Google are stepping up their game, improving performance, speed, and benchmarks a bit. But no groundbreaking intelligence yet. Is this a plateau for transformer models? Or are these companies slowing down for steady competition? Either way, it's cool that the tech behind these models isn't a secret anymore. KV Cache, RoPE and ... Watching your video got me thinking, how cool would it be to invest in something like Llama3-Flash with a 200k context window! 🚀

    • @engineerprompt
      @engineerprompt 1 day ago

      That's the beauty of open source :) I still think the main limiting factor would be the availability of compute! But the way I think about it, for most use cases you don't even need SOTA models. A specialized Llama3-Flash-200k will be more than enough :)

    • @unclecode
      @unclecode 1 day ago

      That is exactly what I believe. Giant companies aim to dominate the AGI market for future valuation, but we need to focus on “topic-specific language models” and orchestrate dozens of them, from 100M to 1B. This is the way to build a distributed “intelligent” system, similar to blockchain where each block is a small model. I think this is the way to AGI, making it more consumer-device friendly and energy-efficient. As Mandalorians say, this is the way. @@engineerprompt

  • @NLPprompter
    @NLPprompter 1 day ago

    it is amaze me when transformers does that... I remember when stability AI never be able to logic the human fingers even it's already "seen" them... I wonders does it think human hands have so many fingers because everytime prompt come in it's instant whole tokens. and since "it" limited by cpu bottle neck which makes "it" think all being have 1 hands to hold a thing and only able to out put something one at a time (please forget this talk I'm hallucinating... too much time with them made me thinks to hallucinate too, it... it... some what fun to hallucinate,.... words )

  • @kapilpai4779
    @kapilpai4779 1 day ago

    Which backend technology do these people use to create realistic human AI avatars?

  • @moviedisk6486
    @moviedisk6486 1 day ago

    This doesn't work for .c and .cpp files. LangChain load() function works with python and javascript unfortunately

  • @bgNinjaart
    @bgNinjaart 2 days ago

    You forgot to hide your email

  • @Matlockization
    @Matlockization 2 days ago

    It's a Zuckerberg free AI........that makes me wonder. And you have to agree to hand over contact info and what else, I wonder ?

  • @caseyhoward8261
    @caseyhoward8261 2 days ago

    Could the Gemini Flash 1.5 model handle most of the heavy lifting, allowing 4.0 to manage more complex, value-oriented tasks, such as image recognition? For instance, could Gemini handle routine tasks and delegate more complex ones to 4.0 when necessary? In an agent-based problem-solving chatbot with a RAG implementation, wouldn't this approach be more cost-effective? I'm working on a grow buddy chatbot for plant health and environmental monitoring, which includes image recognition. The backend is being developed in Python, running on a cloud server, with communication to an Android front-end via RESTful APIs. I'm relatively new to coding, having started just six months ago. Any advice or insights that anyone can provide would be greatly appreciated! P.S. It's the implementation of vision recognition where I'm completely stuck. 😔

  • @caseyhoward8261
    @caseyhoward8261 2 days ago

    I'm currently working on an agent-based chatbot Android app with image recognition capabilities. The app, a plant growing buddy, is designed to recognize and understand plant health and its growing environment through image recognition. I plan to develop most of the backend functionality using Python before transitioning to frontend development in Android Studio. Regarding the Gemini Flash 1.5, I was wondering if this model would be suitable for my application's image recognition tasks. Do you have any advice on integrating the Gemini Flash 1.5 model for image processing in a mobile app? Additionally, are there any specific tools or libraries you would recommend for efficiently implementing real-time data communication between the chatbot and the app? Thank you for your time and assistance! Ur channel is AWESOME!! ❤❤ Best regards, Casey

  • @akshathreddy158
    @akshathreddy158 2 days ago

    It is for python code base. I tried for Java code base but it doesn't work. Can you suggest on this

  • @Priming-ING
    @Priming-ING 2 days ago

    Gemini is shit.. GPT4o the best !

    • @caseyhoward8261
      @caseyhoward8261 2 days ago

      In production, wouldn't using 4.0 be super expensive, especially for a personal assistant with vision recognition capabilities? Could the Gemini Flash 1.5 model handle most of the heavy lifting, allowing 4.0 to manage more complex, value-oriented tasks, such as image recognition? For instance, could Gemini handle routine tasks and delegate more complex ones to 4.0 when necessary? In an agent-based problem-solving chatbot with a RAG implementation, wouldn't this approach be more cost-effective? I'm working on a grow buddy chatbot for plant health and environmental monitoring, which includes image recognition. The backend is being developed in Python, running on a cloud server, with communication to an Android front-end via RESTful APIs. I'm relatively new to coding, having started just six months ago. Any advice or insights you can provide would be greatly appreciated! Thank you for your time!

    • @jakeboardman5212
      @jakeboardman5212 2 days ago

      @caseyhoward8261 Yes, it would be more expensive, and absolutely, it would be more cost-effective. Ironically, Gemini has a larger token capacity than GPT-4o: GPT-4o has an overall capacity of 128,000 tokens, while Gemini exceeds this with a capacity of 1,000,000, making it far superior on that front.

    • @caseyhoward8261
      @caseyhoward8261 1 day ago

      ​@@jakeboardman5212Thank you.

    • @sofianeben2490
      @sofianeben2490 1 day ago

      Not the same price. Flash seems better than Haiku and better than Mistral.

  • @TheReferrer72
    @TheReferrer72 2 days ago

    nice model.

  • @chhabiacharya307
    @chhabiacharya307 2 days ago

    In case you're having difficulty with storing pickle files:

        store_name = pdf.name[:-4]
        index_folder = f'src/faiss_store/{store_name}'
        if os.path.exists(index_folder):
            try:
                vectorstores = FAISS.load_local(index_folder, OpenAIEmbeddings(), allow_dangerous_deserialization=True)
                st.write("Loaded vectorstores from local storage")
            except Exception as e:
                st.write(f"Failed to load local storage: {e}")
        else:
            try:
                embeddings = OpenAIEmbeddings(model="text-embedding-3-large", dimensions=1024)
                faiss_index = faiss.IndexFlatL2(embeddings.dimensions)
                # Initialize the docstore
                docstore = InMemoryDocstore()
                # Initialize the index_to_docstore_id
                index_to_docstore_id = {}
                # vectorstores = FAISS(chunks, embeddings)
                vectorstores = FAISS(embedding_function=embeddings, index=faiss_index, docstore=docstore, index_to_docstore_id=index_to_docstore_id)
                vectorstores.save_local(index_folder)
                st.write("Saved vectorstores to local storage")
            except Exception as e:
                st.write(f"Failed to save local storage: {e}")

  • @LTBLTBLTBLTB
    @LTBLTBLTBLTB 2 days ago

    Is it possible to use GEMINI API in visual studio code?

    • @engineerprompt
      @engineerprompt 2 days ago

      Yes, the code segment you get from AI Studio shows how to set that up.

  • @contentfreeGPT5-py6uv
    @contentfreeGPT5-py6uv 2 days ago

    yes, in my project i use is so fast, really like this future, like your video

  • @studentsmotivation5876
    @studentsmotivation5876 2 days ago

    Sir, in Devika, the select buttons for the search engine and model are not working. Initially they were working, but for the last 4-5 days they haven't been. Please help me.

  • @TraveleroftheSoul7674
    @TraveleroftheSoul7674 2 days ago

    There is a problem in the code. Even when I ingest new files, it still gives answers and makes a mess with the last file I deleted. How do I handle this? I tried different prompts but it's not working for me.