Feed aggregator

LlamaIndex Multimodal with Ollama [Local LLM]

Andrejus Baranovski - 10 hours 17 min ago
I describe how to run LlamaIndex Multimodal with a local LLaVA LLM through Ollama. The advantage of this approach is that you can process image documents with the LLM directly, without first running them through OCR, which should lead to better results. This functionality is integrated into Sparrow as a separate LLM agent.

I want to compare Table A with Table B, store the differences in C, and update A to B

Tom Kyte - Fri, 2024-03-01 09:06
I need to compare 2 tables, A and B. I want to store the differences in C, and afterwards I want to update B to A. Table B could be updated every second, so I am wondering what the right approach is to ensure that, when I update B to A, I am using the same values as when I compared B and A. Hope this is clear enough!
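
One hedged way to keep the comparison and the subsequent update consistent while B keeps changing is to pin a single SCN and read B AS OF that SCN in both statements, so both see the same snapshot (the direction of the copy and all table/column names below are assumptions):

-- Capture one point in time for both statements.
VARIABLE scn NUMBER
EXEC :scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;

-- 1) Store the differences in C, comparing A to the snapshot of B.
INSERT INTO c (id, a_val, b_val)
SELECT a.id, a.val, b2.val
FROM   a JOIN b AS OF SCN :scn b2 ON b2.id = a.id
WHERE  DECODE(a.val, b2.val, 0, 1) = 1;   -- null-safe inequality

-- 2) Update A from the very same snapshot of B.
MERGE INTO a
USING (SELECT id, val FROM b AS OF SCN :scn) s
ON (a.id = s.id)
WHEN MATCHED THEN UPDATE SET a.val = s.val;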
Categories: DBA Blogs

Optimal ways to Capture Data Changes in Oracle 19c

Tom Kyte - Fri, 2024-03-01 09:06
Hi Tom, greetings. We have a requirement to capture data changes for multiple tables and push them to a Kafka topic in a format similar to what GoldenGate provides: "op_type": "Insert or Update", "op_ts": "timestamp", "current_ts": "timestamp", "before": "row data before change", "after": "row data after change". Since we are not able to use GoldenGate at this point in time, we are exploring a combination of PL/SQL and a Java-based solution to push the data to the Kafka topic. Please suggest an optimal solution to capture data changes in Oracle 19c. Thanks in advance!
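
A hedged sketch of a trigger-based capture into a staging table, which a separate PL/SQL or Java job would then drain and publish to Kafka (the ORDERS table, its columns, and CHANGE_LOG are hypothetical; note GoldenGate's op_ts is the commit timestamp, which a trigger cannot see, so SYSTIMESTAMP is used for both):

CREATE TABLE change_log (
  op_type     VARCHAR2(10),
  op_ts       TIMESTAMP,
  current_ts  TIMESTAMP,
  before_row  CLOB,
  after_row   CLOB
);

CREATE OR REPLACE TRIGGER trg_orders_cdc
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW
BEGIN
  INSERT INTO change_log (op_type, op_ts, current_ts, before_row, after_row)
  VALUES (CASE WHEN INSERTING THEN 'Insert' ELSE 'Update' END,
          SYSTIMESTAMP, SYSTIMESTAMP,
          JSON_OBJECT('order_id' VALUE :OLD.order_id, 'status' VALUE :OLD.status),
          JSON_OBJECT('order_id' VALUE :NEW.order_id, 'status' VALUE :NEW.status));
END;
/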
Categories: DBA Blogs

Fetch and PGA

Tom Kyte - Fri, 2024-03-01 09:06
I run a query with a huge result set in SQL*Plus; it has to return millions of records. I set arraysize to 5000. Monitoring v$session, v$process, and v$sqlstats while the query is running, I can see that after some minutes EXECUTIONS changes from 0 to 1. Then the fetch count begins to be incremented at each stats read cycle, and so does the rows-processed count (the row counter is incremented in multiples of 5000). At this point I expected fetched rows to be unloaded, step by step, from the process PGA, but I am surprised to see that the process PGA grows continuously. So why is the process still keeping in memory rows that have already been fetched? Is there no way to free PGA step by step after each fetch?
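
For reference, a monitoring query of the kind described, joining v$session to v$process via PADDR to watch the PGA figures while the fetch runs (the username filter is hypothetical):

SELECT s.sid, s.status, p.pga_used_mem, p.pga_alloc_mem, p.pga_max_mem
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr
WHERE  s.username = 'SCOTT';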
Categories: DBA Blogs

How AI Transcription Can Make Your Next Job Interview More Effective

Pakistan's First Oracle Blog - Thu, 2024-02-29 01:47

If you are looking to use AI transcription during your job interview to get help from an LLM, then this video is going to help you immensely.

In the rapidly evolving landscape of recruitment, the integration of Artificial Intelligence (AI) tools has been a game-changer, especially AI transcription services. These services, which convert speech into a written or electronic text document, are now being increasingly utilized during candidate interviews. This innovative approach is not just about efficiency; it's about transforming the interview process into a more insightful, fair, and strategic component of talent acquisition. Let's delve into the merits of using AI transcription in candidate interviews and how it is reshaping the future of hiring.

Enhanced Accuracy and Objectivity

One of the paramount benefits of using AI transcription is the enhanced accuracy and objectivity it brings to the recruitment process. Traditional note-taking methods during interviews are inherently flawed, prone to human error, and can be biased. AI transcription, on the other hand, captures every word as it is spoken, ensuring that nothing is missed or misinterpreted. This verbatim record allows hiring managers to review the interview with a fine-tooth comb, ensuring that decisions are made based on what was actually said, rather than on recollection or interpretation.

Improved Candidate Experience

The candidate experience is crucial in attracting top talent. AI transcription contributes to a more engaging and less intimidating interview environment. Candidates can speak freely, knowing that their responses are being accurately recorded. This level of precision in capturing responses allows candidates to feel heard and valued, improving their overall experience and perception of the company. Furthermore, the efficiency of AI transcription speeds up the interview process, enabling quicker feedback and follow-ups, which candidates appreciate.

Accessibility and Inclusivity

AI transcription makes interviews more accessible and inclusive, breaking down barriers for candidates who may be deaf or hard of hearing. By providing real-time transcription, these candidates can participate fully in the interview process, ensuring equal opportunities for all. This commitment to inclusivity not only enhances the company's reputation but also widens the talent pool by making it more diverse.

Streamlined Analysis and Decision Making

The comprehensive data captured by AI transcription allows for a more streamlined analysis of candidates' responses. Hiring teams can utilize this data to identify patterns, evaluate key competencies, and make more informed decisions. AI tools can also highlight specific skills and qualifications mentioned during the interview, making it easier to assess candidates against job requirements. This data-driven approach reduces the likelihood of cognitive biases affecting hiring decisions and helps in selecting the best fit for the role.

Enhanced Collaboration Among Hiring Teams

AI transcription facilitates better collaboration among hiring team members, especially in scenarios where not all members can attend every interview. The transcribed interviews can be easily shared, allowing team members to review and provide input at their convenience. This collaborative approach ensures a more comprehensive evaluation of candidates, drawing on diverse perspectives within the team.

Legal Compliance and Record-Keeping

In today's litigious environment, maintaining accurate records of candidate interviews can be crucial for legal compliance. AI transcription provides an exact record of the conversation, which can be invaluable in defending against claims of discrimination or bias. This level of documentation ensures that the hiring process is transparent and defensible, safeguarding the company against potential legal challenges.

Conclusion

The adoption of AI transcription in candidate interviews is more than just a nod to technological advancement; it's a strategic move towards making the recruitment process more accurate, fair, and inclusive. By enhancing the accuracy of interviews, improving candidate experience, and streamlining decision-making, AI transcription is setting a new standard in talent acquisition. As companies continue to seek competitive advantages in attracting top talent, the use of AI in recruitment processes, especially transcription, is likely to become not just common but essential.

Categories: DBA Blogs

What is lobsegment, lobindex?

Tom Kyte - Wed, 2024-02-28 02:26
I queried the USER_SEGMENTS view:

select segment_name, segment_type, tablespace_name, bytes, max_extents
from user_segments
where segment_type like 'LOB%';

The result is:

SEGMENT_NAME               SEGMENT_TYPE  TABLESPACE_NAME
SYS_IL0000012099C00002$$   LOBINDEX      CPOCKET_DATA
SYS_LOB0000012099C00002$$  LOBSEGMENT    CPOCKET_DATA

I don't know what a lobsegment or a lobindex is. What are they, and why are they needed? Can't I delete them? Please explain lobsegment and lobindex in detail. Thank you.
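
A hedged way to map those SYS_LOB/SYS_IL segments back to their owning table and column is USER_LOBS (the segment name is taken from the question):

SELECT table_name, column_name, segment_name, index_name, tablespace_name
FROM   user_lobs
WHERE  segment_name = 'SYS_LOB0000012099C00002$$';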
Categories: DBA Blogs

How to update (replace values) in a table containing 50+ million records?

Tom Kyte - Wed, 2024-02-28 02:26
Hi, I have a table containing 50+ million records, and I am writing a procedure to replace bad data with the correct values (about 1,500 records). K_V is an array mapping bad data to the target correct value, like K_V('bad data1') := 'correct value1'. When I loop over K_V, I run: update table set xx = replace(xx, bad_data, correct_value); This procedure ran the whole night but still could not finish. How can I deal with this problem? It seems I cannot write the procedure that way. Thanks.
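
A hedged set-based alternative to the row-by-row loop: load the ~1,500 bad/correct pairs into a lookup table and fix the big table in one pass (names are hypothetical, and this assumes the bad values are complete column values; if they are substrings, the REPLACE semantics would need a different join):

CREATE TABLE fix_map (
  bad_val  VARCHAR2(4000) PRIMARY KEY,
  good_val VARCHAR2(4000)
);

UPDATE big_table t
SET    t.xx = (SELECT m.good_val FROM fix_map m WHERE m.bad_val = t.xx)
WHERE  t.xx IN (SELECT bad_val FROM fix_map);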
Categories: DBA Blogs

"alter session sync with primary" with Maximum Performance Protection Mode

Tom Kyte - Wed, 2024-02-28 02:26
"alter session sync with primary" raises ORA-03173 for us.

SQL> select database_role, open_mode, db_unique_name from v$database;

DATABASE_ROLE    OPEN_MODE            DB_UNIQUE_NAME
---------------- -------------------- ------------------------------
PHYSICAL STANDBY READ ONLY WITH APPLY mdpams

SQL> alter session sync with primary;
ERROR:
ORA-03173: Standby may not be synced with primary

Is this expected behaviour in protection mode "Maximum Performance", or have we maybe hit a bug? dgmgrl shows nothing suspicious:

DGMGRL> show configuration

Configuration - fsc
  Protection Mode: MaxPerformance
  Members:
    mdpfra - Primary database
    mdpams - Physical standby database
    mdpdev - Snapshot standby database
  Fast-Start Failover: Disabled

Configuration Status:
SUCCESS (status updated 61 seconds ago)

DGMGRL> show database mdpams

Database - mdpams
  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 4.94 MByte/s
  Real Time Query:    ON
  Instance(s):
    mdpams1 (apply instance)
    mdpams2

Database Status:
SUCCESS
Categories: DBA Blogs

Similarity Search with Oracle’s Vector Datatype

DBASolved - Tue, 2024-02-27 09:36

In my last two posts I showed you what the Oracle Vector Datatype is and how to update existing data […]

The post Similarity Search with Oracle’s Vector Datatype appeared first on DBASolved.

Categories: DBA Blogs

View with pivot and group by grouping sets works in 12c but not in 19. Error ORA-56903

Tom Kyte - Tue, 2024-02-27 08:06
Views with GROUP BY GROUPING SETS and PIVOT (directly, or in a referenced view), which work in Oracle 12c, fail with error ORA-56903 ("sys_op_pivot function is not allowed here") in Oracle 19. But the view (and the referenced view) contains no explicit call to sys_op_pivot; maybe Oracle uses it internally during execution of the views. Thanks in advance. Best regards. Following a suggestion I ran: alter session set optimizer_features_enable = '12.2.0.1', but the error persists. It appears in every view that combines grouping sets with a pivot clause, whether the pivot is in a referenced subview or directly in the current view. If the view with the pivot clause is queried on its own as a subview, it works correctly.
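
For what it's worth, a hedged minimal shape of the failing construct, a pivoted subview referenced by a grouping-sets view (EMP-style names are hypothetical):

CREATE OR REPLACE VIEW v_pivot AS
SELECT *
FROM   (SELECT deptno, job, sal FROM emp)
PIVOT  (SUM(sal) FOR job IN ('CLERK' AS clerk, 'MANAGER' AS mgr));

CREATE OR REPLACE VIEW v_gs AS
SELECT deptno, SUM(clerk) AS clerk_sal
FROM   v_pivot
GROUP  BY GROUPING SETS ((deptno), ());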
Categories: DBA Blogs

Record / Check Login Information for Standby DBs

Tom Kyte - Tue, 2024-02-27 08:06
Hello,

We want to housekeep our user accounts and remove unused and locked accounts. As far as I understand, the information in dba_users comes from the primary DB. Users are not allowed to log on to the primary to query data; they must log on to the read-only standby (regulated by a trigger). When I look in dba_users on the standbys I can see several users that have not, or never, logged on:

select username, account_status,
       nvl(to_char(last_login),'never logged on') "Last Login"
from dba_users
where oracle_maintained = 'N'
and username not in ('AAAAAAAAAAA','BBBBBB','CCCCCCC')
and username not like '%READ%'
and username not like '%Exxx%'
order by "Last Login" desc;

USERNAME                       ACCOUNT_STATUS  Last Login
------------------------------ --------------- ----------------------------------------
Pxxxxxxx                       OPEN            never logged on
Pxxxxxxx_03                    LOCKED          never logged on
Pxxxxxxx_05                    LOCKED          never logged on
Pxxxxxxx_04                    LOCKED          never logged on
Pxxxxxxx_01                    LOCKED          never logged on
BRxxxxxxx                      OPEN            never logged on
Pxxxxxxx_02                    LOCKED          never logged on
Sxxxxxxx                       EXPIRED         never logged on
Jxxxxxxx                       EXPIRED         never logged on
Mxxxxxxx                       OPEN            2020-09-05:19:48:06 GMT+01:00
Bxxxxx                         OPEN            2020-09-05:19:19:52 GMT+01:00
Axxxxxx                        OPEN            2016-05-20:09:17:33 GMT+01:00
Pxxxxxxxxxx_01                 OPEN            2016-04-21:10:48:34 GMT+01:00
Kxxxxx                         OPEN            2016-04-19:13:50:33 GMT+01:00
Pxxxxxxxxxx_01                 OPEN            2016-04-13:14:18:17 GMT+01:00

However, this information from dba_users is identical on the primary and standby DBs, and the users told me that they have logged on to the standby recently. As far as I understand, the information in dba_users on the standby has been inherited from the primary, as normal catalogue tables are not updated on the standby. Is this correct? How can I see last logins on the standby, preferably without using auditing, which could cause performance degradation on this production system where performance is key? We are using Active Data Guard, and our idea at the moment is to record logins to the standby using a trigger which checks whether it is running on the standby or the primary and then writes logon data across a DB link into a table on the primary. Many thanks, Alison
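
A hedged sketch of that trigger idea (the DB link, audit table, and names are hypothetical, and whether remote DML is permitted from a read-only Active Data Guard session in your version is worth verifying first):

CREATE OR REPLACE TRIGGER trg_standby_logon
AFTER LOGON ON DATABASE
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  IF SYS_CONTEXT('USERENV','DATABASE_ROLE') = 'PHYSICAL STANDBY' THEN
    INSERT INTO logon_audit@to_primary (username, logon_ts)
    VALUES (SYS_CONTEXT('USERENV','SESSION_USER'), SYSTIMESTAMP);
    COMMIT;
  END IF;
END;
/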
Categories: DBA Blogs

move table to new tablespace

Tom Kyte - Tue, 2024-02-27 08:06
Hi Tom, I have an Oracle cluster database (EE, 2 nodes) and a big table with large LOB rows. After compressing the LOB files I moved them to a new tablespace, so the principal table came down to about 300 MB, but it is still taking up 1.2 TB of space. My concern is why I can't reclaim this space. I created a new tablespace and did a table move, but it didn't work and took a very long time; I also did a shrink, but that didn't work either. I think there may be a problem with the high-water mark. What do I have to do to regain this space? Thanks. The LOBs were saved in the same tablespace as the other data, called DATA (1.2 TB); after that, I moved them to a new tablespace I created, called LOB_DATA. The problem is that the shrink did nothing for tablespace DATA, and the table move also doesn't work, so how do I reclaim the free extents in the 1.2 TB DATA tablespace?
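
A hedged sequence for this kind of reclaim (the table, index, and datafile names are hypothetical; DATA and LOB_DATA are from the question):

-- Move the table out of its old extents; its indexes go UNUSABLE and need a rebuild.
ALTER TABLE big_tab MOVE TABLESPACE data;
ALTER INDEX big_tab_pk REBUILD;

-- Or, if the tablespace uses ASSM, shrink in place below the high-water mark.
ALTER TABLE big_tab ENABLE ROW MOVEMENT;
ALTER TABLE big_tab SHRINK SPACE CASCADE;

-- Either way, the datafiles only give space back to the OS if the freed
-- extents sit at their tail end.
ALTER DATABASE DATAFILE '/u01/oradata/data01.dbf' RESIZE 100G;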
Categories: DBA Blogs

Capacity Planning

Tom Kyte - Tue, 2024-02-27 08:06
Hi Tom, I have some questions regarding capacity planning. Thanks in advance.

1. Is there any way we can match LIOs and PIOs to the number of CPUs and the number of disks?
2. Is there any place I can find documents on capacity planning for an Oracle database / Sun Solaris environment?
3. I am very confused about sort_area_size. My understanding is that sort_area_size is the maximum threshold for doing a sort in memory, there is only one sort_area_size per session, and it is allocated from the UGA; sort_area_retained_size stores the result set from the sort, there can be many per session at a time, and it is allocated from the PGA. Is that correct? When we do a first sort which is smaller than sort_area_size, is the memory allocated from the PGA or the UGA, and does it count against sort_area_size or sort_area_retained_size?

Regards, Jeyaseelan.M
Categories: DBA Blogs

Installing and Running Oracle AHF ORACHK on a 12.2 DB Server

Hemant K Chitale - Tue, 2024-02-27 00:20

 The Oracle Autonomous Health Framework is described in Support Document "Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAchk (Doc ID 2550798.1)"

In a recent video I have demonstrated running 24.1 orachk (with "-b" for "Best Practices Check") against a 21.3 RAC Cluster.

Here I demonstrate the installation and execution against a 12.2 non-RAC database.

When you download the 24.1 release of AHF (AHF-LINUX_v24.1.0.zip, approximately 410MB), you have to unzip it and then run ahf_setup.  It is preferable to use the default location /opt/oracle.ahf  (and precreate a "data" subfolder if it doesn't exist).

If your first attempt at installation returns an error :

[ERROR] : AHF-00074: Required Perl Modules not found :  Data::Dumper

you can check the perl version and download and install this module (Note: in the listings below, "AHF_Installer" is the location where I have extracted the installation zip file).


[root@vbgeneric AHF_Installer]# /bin/perl -v

This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
(with 34 registered patches, see perl -V for more detail)

Copyright 1987-2012, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.

[root@vbgeneric AHF_Installer]# yum install perl-Data-Dumper
Loaded plugins: langpacks, ulninfo
ol7_UEKR4                                                           | 3.0 kB  00:00:00     
ol7_latest                                                          | 3.6 kB  00:00:00     
(1/5): ol7_latest/x86_64/group_gz                                   | 136 kB  00:00:00     
(2/5): ol7_UEKR4/x86_64/updateinfo                                  | 130 kB  00:00:00     
(3/5): ol7_latest/x86_64/updateinfo                                 | 3.6 MB  00:00:00     
(4/5): ol7_latest/x86_64/primary_db                                 |  50 MB  00:00:02     
(5/5): ol7_UEKR4/x86_64/primary_db                                  |  37 MB  00:00:04     
Resolving Dependencies
--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================
 Package                   Arch            Version               Repository           Size
===========================================================================================
Installing:
 perl-Data-Dumper          x86_64          2.145-3.el7           ol7_latest           47 k

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 47 k
Installed size: 97 k
Is this ok [y/d/N]: y
Downloading packages:
perl-Data-Dumper-2.145-3.el7.x86_64.rpm                             |  47 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : perl-Data-Dumper-2.145-3.el7.x86_64                                     1/1 
  Verifying  : perl-Data-Dumper-2.145-3.el7.x86_64                                     1/1 

Installed:
  perl-Data-Dumper.x86_64 0:2.145-3.el7                                                    

Complete!
[root@vbgeneric AHF_Installer]#


Then resume the installation (precreate the "data" folder if it doesn't exist)

[root@vbgeneric AHF_Installer]# mkdir /opt/oracle.ahf/data
[root@vbgeneric AHF_Installer]# ./ahf_setup

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_241000_6588_2024_02_27-13_48_51.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 24.1.0 Build Date: 202402051317

Default AHF Location : /opt/oracle.ahf

Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : Y

AHF Location : /opt/oracle.ahf

AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.

Please Enter AHF Data Directory : /opt/oracle.ahf/data

AHF Data Directory : /opt/oracle.ahf/data

Do you want to add AHF Notification Email IDs ? [Y]|N : N

Extracting AHF to /opt/oracle.ahf

Setting up AHF CLI and SDK

Configuring TFA Services

Discovering Nodes and Oracle Resources

Successfully generated certificates.

Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.-------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port  | Version    | Build ID              |
+-----------+---------------+------+-------+------------+-----------------------+
| vbgeneric | RUNNING       | 8540 | 39049 | 24.1.0.0.0 | 240100020240205131724 |
'-----------+---------------+------+-------+------------+-----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.-------------------------------------------------------.
|              Summary of AHF Configuration             |
+-----------------+-------------------------------------+
| Parameter       | Value                               |
+-----------------+-------------------------------------+
| AHF Location    | /opt/oracle.ahf                     |
| TFA Location    | /opt/oracle.ahf/tfa                 |
| Orachk Location | /opt/oracle.ahf/orachk              |
| Data Directory  | /opt/oracle.ahf/data                |
| Repository      | /opt/oracle.ahf/data/repository     |
| Diag Directory  | /opt/oracle.ahf/data/vbgeneric/diag |
'-----------------+-------------------------------------'

Starting ORAchk Scheduler from AHF

AHF binaries are available in /opt/oracle.ahf/bin

AHF is successfully Installed

Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : N

Moving /tmp/ahf_install_241000_6588_2024_02_27-13_48_51.log to /opt/oracle.ahf/data/vbgeneric/diag/ahf/

[root@vbgeneric AHF_Installer]# 


orachk can then be executed. This execution checks against "Best Practices":



[root@vbgeneric AHF_Installer]# orachk -b

List of running databases

1. orcl12c
2. None of above

Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1]. 1
.  .
.  .  

Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

.  .  . . . .  
.  .  .  .  .  .  .  .  .  
-------------------------------------------------------------------------------------------------------
                                                 Oracle Stack Status                          
-------------------------------------------------------------------------------------------------------
  Host Name       CRS Installed       ASM HOME  RDBMS Installed    CRS UP    ASM UP  RDBMS UP    DB Instance Name
-------------------------------------------------------------------------------------------------------
  vbgeneric                  No           No          Yes           No       No      Yes             orcl12c
-------------------------------------------------------------------------------------------------------


Copying plug-ins

. .
.  .  .  .  .  .  

*** Checking Best Practice Recommendations ( Pass / Warning / Fail ) ***

.  

============================================================
              Node name - vbgeneric
============================================================
. . . . . . 
 Collecting - Database Parameters for orcl12c database
 Collecting - Database Undocumented Parameters for orcl12c database
 Collecting - List of active logon and logoff triggers for orcl12c database
 Collecting - CPU Information
 Collecting - Disk I/O Scheduler on Linux
 Collecting - DiskMount Information
 Collecting - Kernel parameters
 Collecting - Maximum number of semaphore sets on system
 Collecting - Maximum number of semaphores on system
 Collecting - Maximum number of semaphores per semaphore set
 Collecting - Memory Information
 Collecting - OS Packages
 Collecting - Operating system release information and kernel version
 Collecting - Patches for RDBMS Home
 Collecting - Patches xml for RDBMS Home
 Collecting - RDBMS patch inventory
 Collecting - Table of file system defaults
 Collecting - number of semaphore operations per semop system call
 Collecting - Database Server Infrastructure Software and Configuration
 Collecting - Disk Information
 Collecting - Root user limits
 Collecting - Verify ORAchk scheduler configuration
 Collecting - Verify TCP Selective Acknowledgement is enabled
 Collecting - Verify no database server kernel out of memory errors
 Collecting - Verify the vm.min_free_kbytes configuration

Data collections completed. Checking best practices on vbgeneric.
------------------------------------------------------------

 INFO =>     Traditional auditing is enabled in database for orcl12c
 WARNING =>  Linux swap configuration does not meet recommendation
 WARNING =>  Hidden database initialization parameters should not be set per best practice recommendations for orcl12c
 FAIL =>     loopback interface MTU value needs to be set to 16436
 INFO =>     Most recent ADR incidents for /u01/app/oracle/product/12.2/db_1
 FAIL =>     Verify Database Memory Allocation
 INFO =>     Oracle GoldenGate failure prevention best practices
 FAIL =>     The vm.min_free_kbytes configuration is not set as recommended
 INFO =>     user_dump_dest has trace files older than 30 days for orcl12c
 INFO =>     At some times checkpoints are not being completed for orcl12c
 WARNING =>  One or more redo log groups are not multiplexed for orcl12c
 WARNING =>  Primary database is not protected with Data Guard (standby database) for real-time data protection and availability for orcl12c
 INFO =>     Important Storage Minimum Requirements for Grid & Database Homes
 CRITICAL => Operating system hugepages count does not satisfy total SGA requirements
 FAIL =>     Table AUD$[FGA_LOG$] should use Automatic Segment Space Management for orcl12c
 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcl12c instance
 INFO =>     umask for RDBMS owner is not set to 0022
 FAIL =>     Database parameter DB_BLOCK_CHECKING on primary is not set to the recommended value. for orcl12c
 INFO =>     Operational Best Practices
 INFO =>     Database Consolidation Best Practices
 INFO =>     Computer failure prevention best practices
 INFO =>     Data corruption prevention best practices
 INFO =>     Logical corruption prevention best practices
 INFO =>     Database/Cluster/Site failure prevention best practices
 INFO =>     Client failover operational best practices
 WARNING =>  Oracle patch 30712670 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 29867728 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 31142749 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 26749785 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 29302565 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 29259068 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle clusterware is not being used
 WARNING =>  RAC Application Cluster is not being used for database high availability on orcl12c instance
 WARNING =>  DISK_ASYNCH_IO is NOT set to recommended value for orcl12c
 WARNING =>  Flashback on PRIMARY is not configured for orcl12c
 INFO =>     Database failure prevention best practices
 WARNING =>  fast_start_mttr_target has NOT been changed from default on orcl12c instance
 FAIL =>     Active Data Guard is not configured for orcl12c
 WARNING =>  Perl Patch 31858212 is not found in 12.2.0.1 RDBMS_HOME. /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 31602782 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 33121934 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 31211220 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 INFO =>     Software maintenance best practices
 INFO =>     Oracle recovery manager(rman) best practices
 INFO =>     Database feature usage statistics for orcl12c
 WARNING =>  Consider investigating changes to the schema objects such as DDLs or new object creation for orcl12c
 WARNING =>  Consider investigating the frequency of SGA resize operations and take corrective action for orcl12c
------------------------------------------------------------

UPLOAD [if required] - /opt/oracle.ahf/data/vbgeneric/orachk/user_root/output/orachk_vbgeneric_orcl12c_022724_140315.zip

[root@vbgeneric AHF_Installer]# 


Thus, you can actually run the 24.1 orachk against even a 12.2 non-RAC (single instance) database.

The complete report is in HTML format in the final ZIP file.  

Categories: DBA Blogs

Tutorial to Implement RAG with Gemma Model Locally

Pakistan's First Oracle Blog - Mon, 2024-02-26 20:20

 Unlock the power of Retrieval-Augmented Generation (RAG) locally using the Gemma model with our detailed step-by-step tutorial. Learn how to enhance your projects by integrating RAG for insightful document processing and AI-driven content generation. Perfect for developers, data scientists, and AI enthusiasts eager to leverage advanced NLP techniques on their own documents. No prior RAG experience required!

Commands Used:

# Install the dependencies (versions as used in the video):
pip install "transformers==4.38.1" --upgrade
pip install -q pypdf python-dotenv gradio
pip install llama-index==0.10.12
pip install einops accelerate fastembed
pip install llama-index-llms-huggingface llama-index-embeddings-fastembed

import logging
import sys

import torch
import gradio as gr
from huggingface_hub.hf_api import HfFolder
from llama_index.core import (
    PromptTemplate,
    Settings,
    SimpleDirectoryReader,
    VectorStoreIndex,
)
from llama_index.embeddings.fastembed import FastEmbedEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

# Send LlamaIndex logging to stdout so each step is visible.
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

# Load the PDF documents to be indexed.
documents = SimpleDirectoryReader("/home/ubuntu/pdfs").load_data()

# Use a local FastEmbed embedding model; no remote API calls needed.
embed_model = FastEmbedEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.embed_model = embed_model
Settings.chunk_size = 512

system_prompt = "You are a Q&A assistant. Your goal is to answer questions as accurately as possible based on the instructions and context provided."

query_wrapper_prompt = PromptTemplate("<|USER|>{query_str}<|ASSISTANT|>")

# Authenticate to Hugging Face; Gemma is a gated model.
HfFolder.save_token('<your huggingface token from huggingface.co>')

# Run the instruction-tuned Gemma 7B model locally through LlamaIndex.
llm = HuggingFaceLLM(
    context_window=8192,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.7, "do_sample": False},
    system_prompt=system_prompt,
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="google/gemma-7b-it",
    model_name="google/gemma-7b-it",
    device_map="auto",
    tokenizer_kwargs={"max_length": 4096},
    model_kwargs={"torch_dtype": torch.float16},
)

Settings.llm = llm

# Build the vector index over the documents and expose a query engine.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Gradio chat callback: each user message is answered from the index.
def predict(input, history):
    response = query_engine.query(input)
    return str(response)

gr.ChatInterface(predict).launch(share=True)
Categories: DBA Blogs

Oracle Vector Datatype – Updating table data

DBASolved - Mon, 2024-02-26 14:36

  In my last blog post on Oracle’s Vector data type, I simply showed you how the datatype is used […]

The post Oracle Vector Datatype – Updating table data appeared first on DBASolved.

Categories: DBA Blogs

Different lists of dependencies

Tom Kyte - Mon, 2024-02-26 13:46
As part of a migration effort, I'm researching dependencies and am confused by the different results displayed by SQL Developer's Dependencies tab versus running something like the following:

SELECT * FROM user_dependencies WHERE name = 'USP_COMPANYIMPORT';

The former displays 19 rows, whereas the latter displays only 15 rows, including two where the REFERENCED_OWNER is SYS. Q1: Why the difference? Q2: Is it possible to view the code SQL Developer runs to obtain its results? Thank you.
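
A hedged guess at the difference: the Dependencies tab shows both directions, objects the procedure depends on and objects that depend on it, which the dictionary splits across two queries (only the object name is taken from the question):

-- What USP_COMPANYIMPORT depends on:
SELECT referenced_owner, referenced_name, referenced_type
FROM   user_dependencies
WHERE  name = 'USP_COMPANYIMPORT';

-- What depends on USP_COMPANYIMPORT:
SELECT owner, name, type
FROM   all_dependencies
WHERE  referenced_name = 'USP_COMPANYIMPORT';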
Categories: DBA Blogs

UTL_HTTP

Tom Kyte - Mon, 2024-02-26 13:46
Is Oracle working on the Oracle Database PL/SQL package UTL_HTTP to add support for http_versions: HTTP/2 and HTTP/3?
Categories: DBA Blogs

Dropping and purging table does not release space back to the tablespace

Tom Kyte - Mon, 2024-02-26 13:46
Dear Tom,

Oracle 4-node RAC, version 19c. In my tablespace I have a total of 570 partitioned tables that have zero rows. Their initial extent is 8M for each partition, so collectively the empty tables occupy 2286.03 GB. As they are not needed, I have started to drop them. After dropping some 300 tables, I wanted to check the space released, but this query shows the occupied space has not been released. I always thought that if I drop a table with purge, the space would immediately be released back to the tablespace. What am I doing wrong?

select round(sum(bytes/1024/1024/1024),2) GB
from dba_segments
where tablespace_name = 'TOPREP_DAT'
and owner = 'SAMSUNGLTE';

        GB
----------
   2286.03
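
A hedged follow-up check, grouping what still occupies the tablespace by segment type (names are from the question); tables dropped without PURGE keep their space as BIN$ recycle-bin segments, which this query would still count:

SELECT segment_type, COUNT(*) AS segments,
       ROUND(SUM(bytes)/1024/1024/1024, 2) AS gb
FROM   dba_segments
WHERE  tablespace_name = 'TOPREP_DAT'
AND    owner = 'SAMSUNGLTE'
GROUP  BY segment_type
ORDER  BY gb DESC;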
Categories: DBA Blogs
