Search Made Easy


How do I drop a logfile group that is the current logfile group?
I have 4 groups. I want to drop one group, and that group is the active one.
If I drop that group, it tells me it is the current group.
If I try to switch the logfile, it tells me the database is not open.

----->
Have you used the SYS user to start the database?

SQL> conn sys/ as sysdba
connected to an idle instance
SQL> startup

----->
I want to drop it in the mount stage.

----->
In the mount stage you can drop a non-current log group, but you cannot drop or switch the current log group.

----->
1. You must have at least two online log groups.
2. You cannot drop an active (current) online redo log group.
3. If it is active, switch it with ALTER SYSTEM SWITCH LOGFILE before dropping it.
4. Make sure the online redo log group is archived (if archiving is enabled).
Syntax: ALTER DATABASE DROP LOGFILE GROUP <n>;
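A minimal sketch of that sequence in an open database (the group number here is just an example):

CODE

SQL> SELECT group#, status FROM v$log;      -- find which group is CURRENT
SQL> ALTER SYSTEM SWITCH LOGFILE;           -- make it non-current
SQL> ALTER SYSTEM CHECKPOINT;               -- so the old group is no longer ACTIVE
SQL> ALTER DATABASE DROP LOGFILE GROUP 2;   -- assuming group 2 is the one to drop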

----->
Open the database,
switch the logfile,
then drop the logfile.

Why do you want to drop the logfile only in the mount stage?

----->
There was some problem with the logfile while opening the database, so to rectify that I need to drop that group.
The database is not starting. If it had started, I would have used the switch logfile command and then dropped the group.
Since it is not starting, I am forced to drop it in the mount stage.

Since the log file is the current log, switch logfile does not work in the mount stage.
So what I did was recreate the control file. Now it is working.

----->
Well, if the problem has been solved, no issues.
But the appropriate solution would have been:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <n>;

----->
My database is in noarchivelog mode. Will it work?

Moreover, suppose it is in archivelog mode:

If we clear the logfile group, will it clear the data, or will it move the data to the logfiles? And what effect will there be if the database is in archivelog mode, like data loss?

----->
This statement is to be used only when the current logfile has become corrupt and cannot be dropped. It simply clears the contents and does not copy them to any other log group.

ALTER DATABASE CLEAR LOGFILE GROUP <n>;
Use this for noarchivelog mode.

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <n>;

Use this version of clearing a log file if the corrupt log file has not been archived.

----->
I have a question: how does creating a new control file solve the problem, as the new control file will still be pointing to the current logfile which has the problem?

----->
In the new control file script which I created from the trace file, I edited out the logfile group that was giving the problem, and then assigned the new control file to the database.
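For reference, a rough sketch of that trace-file approach (the database name and file paths here are hypothetical):

CODE

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;  -- writes a CREATE CONTROLFILE script to udump
-- Edit the trace script: remove the bad group from the LOGFILE clause, then:
SQL> STARTUP NOMOUNT
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
  2    LOGFILE GROUP 1 ('/u01/oradata/orcl/redo01.log') SIZE 100M,
  3            GROUP 2 ('/u01/oradata/orcl/redo02.log') SIZE 100M
  4    DATAFILE '/u01/oradata/orcl/system01.dbf';
SQL> ALTER DATABASE OPEN RESETLOGS;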


Can someone explain to me how a range scan is better than a skip scan in Oracle 9i?
Is it an advantage when tuning queries to take the RANGE SCAN access path rather than the SKIP SCAN path?

In index range scans, the documentation says:

QUOTE:
Multiple rows with identical values are sorted in ascending order by rowid.

In skip scans:

QUOTE:
Skip scanning lets a composite index be split logically into smaller subindexes. In skip scanning, the initial column of the composite index is not specified in the query. In other words, it is skipped.

The number of logical subindexes is determined by the number of distinct values in the initial column. Skip scanning is advantageous if there are few distinct values in the leading column of the composite index and many distinct values in the nonleading key of the index.

My understanding is:
Skip scanning is good only if the cardinality of the leading column is low (very few distinct values).
If the cardinality of the remaining columns is also too low, the CBO will eventually ignore the index.
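To make that concrete, a small hypothetical example (the table and index are made up):

CODE

-- GENDER has only 2 distinct values; EMP_ID has many.
CREATE INDEX emp_gender_id_ix ON employees (gender, emp_id);

-- The leading column is omitted from the predicate, so the optimizer can use an
-- INDEX SKIP SCAN, probing one logical subindex per distinct GENDER value:
SELECT * FROM employees WHERE emp_id = 101;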


I am using Oracle 9i.
Now I want to copy its whole data (tables, cursors, functions, procedures and triggers) to another system which has no link with my first system.

The Oracle export (EXP) and import (IMP) utilities are used to perform logical database backup and recovery; export/import is the logical backup mechanism in Oracle.

Just type exp (for export) or imp (for import) at the prompt; you will get online help from Oracle, and once you supply the parameters it will serve your purpose.

Otherwise, the command is:

Log in as the SYS user and give the command...

exp file=<dumpfile> full=y statistics=none   (exports the whole database)

for example,

exp scott/tiger file=emp.dmp log=emp.log tables=emp rows=yes indexes=no
(table level export)
exp scott/tiger file=emp.dmp tables=(emp,dept) (exporting 2 tables)

Similarly, the import command will be like...

imp scott/tiger file=emp.dmp full=yes
imp scott/tiger file=emp.dmp fromuser=scott touser=scott tables=dept


Please help me find the solution for this.

I'll explain my problem:

1. I have a Unix server running Oracle 7.2.
2. I have a user called 'Namankan' on the Oracle DB.
3. I have to shift all the tables of user 'Namankan' from this system to a new server running Windows XP (or NT) with Oracle 8 (not 8i).

If you are going to suggest that I spool the entire data into text files and import them by running SQL*Loader on the new system, please don't, because:
i) the data is too large, and ii) table-level validations in the database will not allow the data to go in.
I have tried making a .dmp file using the exp command on the Unix machine with default parameters, and have exported the entire data into a file called Intecons.dmp, which is 175 MB (zipped to 28 MB).

Then I took this file onto the Windows system with Oracle 8 and tried to import it through the imp80 command.

It says:
* Invalid header information, OS different (some such 2-3 lines)

As suggested by someone in a Yahoo chat room, I tried to change the header information in the .dmp file using a hex editor; I matched it to a sample .dmp I made using the exp80 command on my Windows machine. Sometimes it asks 2-3 questions and, after the replies, starts dumping junk; sometimes it hangs the system.

I also came across a data extraction program, NXTract; however, it is similar to spooling files even if it extracts data from a .dmp file, so it is useless for me.


IMP/EXP IS NOT PLATFORM DEPENDENT!!!!!!!!

Gawd, hex-editor hacking on binary .dmp files... that's weird!
The guy who gave you that advice is either brilliant
or
...let me leave it there.

Firstly:
With the Oracle EXPORT and IMPORT utilities you can transfer data objects between Oracle databases, even if they reside on platforms with different hardware and software configurations.
That is, dump files are not OS-specific; as a result, you can use export/import to transfer data between databases on different platforms.

Secondly:
Exporting data from a lower version of Oracle into a higher version === no problem.
The other way around ==== you have to do it tactically!!

In your case, Puneeth:
From 7.3.3 to 8 => use the 7.3.3 EXPORT to export the data from the 7.3.3 database and the version 8 IMPORT to import the data into the 8 database. This should work just fine without any dependency on the OS.

Your PROBLEM could be as simple as this:
Did you use ftp to transfer the .dmp file from the Unix system to your target machine?
IF SO: did you type
ftp> bin
before you did the transfer?
When transferring an export dump file over a network, the file has to be transmitted in binary mode.
Transmitting export files in character mode (ASCII) causes errors when the file is imported.
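For example, the transfer session would include something like this (the file name is just taken from the post above):

CODE

ftp> bin
200 Type set to I.
ftp> put Intecons.dmp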

If the problem is still not solved, post the errors that are being logged so that we can actually understand the problem and come up with a clear-cut answer!!!

I see that you have posted a similar question elsewhere where you offered to pay the person who helps you get through this... so is that offer still valid here :+)


As per Oracle Corp:
When creating an export dump file for an import into a higher-release database (e.g. from Oracle8i to Oracle9i), use a version of the EXPORT utility that is equal to the version of the source database (= lowest version = Oracle8i in this case).
The export FAILS if you use a HIGHER-release export version.
For example, if you use the 9.0.1 export utility to export data from an 8.1.7 database, you will get the errors:
EXP-56: Oracle error 942 encountered
ORA-942: table or view does not exist
EXP-0: Export terminated unsuccessfully
Solution: use the lowest-release export utility (8.1.7 in this case).

Hence I don't think you can use the Oracle 8 exp on the Oracle 7.3 database and get things right...!!

And I totally agree: a hex editor on .dmp files == bad idea.


I have written a trigger whose PL/SQL block contains a simple select statement among many other statements.
Now I find that if the select statement returns no rows, the trigger does not continue its operation any further and aborts right there. If the select statement returns some rows, then it works fine.

I tried to execute a simplified PL/SQL block of the trigger in SQL*Plus, and the following were the results:

************************
declare
tempdate date;
begin
select trdt into tempdate from inv_trans;
if sql%notfound then
null;
end if;
end;
/
************************

When no data is present in inv_trans table, the result was:
************************
declare
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 4
************************

And when the table inv_trans had data, the result was:
************************
PL/SQL procedure successfully completed.
************************

Why is this piece of code flashing an error when I have already given a treatment for when no data is found?
Why is it taking "no data in the table" as an abnormal condition and not a normal one?

-------------------------

Your problem lies in the fact that you haven't actually given a treatment for when no data is found, as you said you had.

sql%notfound is never reached in the case where no record is returned.
The exception is raised in the SELECT INTO statement itself.
That's the default Oracle behaviour.

A SELECT INTO statement always raises the ORA-01403 error
if no record is returned and no value is set into the variable.

You'll have to handle the exception.

Here is code that implements the same logic, without any problem:

declare
  tempdate date;
begin
  select trdt into tempdate from inv_trans;
exception
  when no_data_found then
    null;
end;
/


Table Name: INSTRUCTOR
INSTRUCTOR_ID INSTRUCTOR_NAME SALARY COMMISSION MENTOR_ID DATE_HIRE
------------- --------------- --------- ---------- --------- ---------
700 WAYNE 4500 300 16-MAY-81
628 MONROE 3000 700 16-JUN-84
790 NEWMAN 3100 300 700 16-DEC-82
795 BOGART 3200 700 16-DEC-83
515 SHELLEY 3500 200 700 20-JAN-91
222 CAINE 5500 350 02-NOV-76
978 STEEL 5000 250 222 16-JAN-80
560 LAUREL 5200 200 978 16-DEC-83
835 SPARKS 4000 200 978 16-DEC-84
243 TUCKER 2000 835 18-DEC-90
263 JOHNSON 4000 835 18-JUL-92
453 LODGE 2500 100 835 14-SEP-88

Table Name: AUTH
INSTRUCTOR_ID COURSE_ID AUTH_DATE
------------- --------- ---------
222 315 14-MAR-90
222 415 13-JAN-92
978 815 14-MAR-89
700 315 14-FEB-91
835 515 12-MAR-92
515 915 22-APR-91
263 215 14-JUN-92
263 315 14-FEB-92
453 715 14-AUG-93
790 315 14-AUG-91
560 915 14-AUG-91
453 615 14-JUL-93
628 315 13-JUL-91

My question/requirement is:
1) From the INSTRUCTOR table, if the salary of that instructor is >= 3750, then he/she can insert a new record into the AUTH table, BUT if the salary is < 3750 then he/she cannot.

SO, I SOLVED this question using the method below; can someone help me check it for any logic error?
Because even when I insert a new record where the instructor's salary is already < 3750, the insert still goes through.
******************************************************
INSERT INTO INSTRUCTOR(INSTRUCTOR_ID,
INSTRUCTOR_NAME,
SALARY,
COMMISSION,
MENTOR_ID,
DATE_HIRED)
VALUES (100, 'JOSH', 2000, 500, 700, '11-AUG-05');

INSERT INTO AUTH(INSTRUCTOR_ID,
COURSE_ID,
AUTH_DATE)
VALUES(100, 515, '11-AUG-05');
******************************************************
=============================================================================
CREATE OR REPLACE TRIGGER CHECK_SALARY_TR
BEFORE INSERT
ON AUTH
DECLARE
SALARY INSTRUCTOR.SALARY%TYPE;
BEGIN
IF (SALARY >= 3750) THEN
RAISE_APPLICATION_ERROR(-20101,'THANKS, YOU HAVE AN AUTHORIZATION TO OFFER COURSES');
ELSIF (SALARY < 3750) THEN
RAISE_APPLICATION_ERROR(-20102,'SORRY, YOU DO NOT HAVE AUTHORIZATION TO OFFER COURSES');
END IF;
END;
=============================================================================
CREATE OR REPLACE PROCEDURE INSERT_AUTH_PROC(P_INSTRUCTOR_ID AUTH.INSTRUCTOR_ID%TYPE,
P_COURSE_ID AUTH.COURSE_ID%TYPE,
P_AUTH_DATE AUTH.AUTH_DATE%TYPE ) IS
BEGIN
INSERT INTO AUTH(INSTRUCTOR_ID, COURSE_ID, AUTH_DATE)
VALUES (P_INSTRUCTOR_ID, P_COURSE_ID, P_AUTH_DATE);
END INSERT_AUTH_PROC;
=============================================================================

--------------------------------------->

In your trigger you have defined a variable SALARY and used it in the IF..ELSIF. Since the value of SALARY is NULL (you haven't initialized it) both the IF and ELSIF will evaluate to FALSE. You need to populate the variable, probably by querying the INSTRUCTOR table with the ID of the instructor from the insert. Use the NEW record supplied by the trigger (:new.instructor_id).

Also, I don't think you want to use the RAISE_APPLICATION_ERROR if the insert is allowed. Doing so will cause the insert to fail. Perhaps you are just doing that for testing.

--------------------------------------->

=============================================================================
CREATE OR REPLACE TRIGGER CHECK_SALARY_TR
BEFORE INSERT
ON AUTH
DECLARE
SALARY INSTRUCTOR.SALARY%TYPE;
BEGIN
IF (SALARY < 3750) THEN
RAISE_APPLICATION_ERROR(-20101,'SORRY, YOU DO NOT HAVE AUTHORIZATION TO OFFER COURSES');
END IF;
END;
=============================================================================
CREATE OR REPLACE PROCEDURE INSERT_AUTH_PROC(P_INSTRUCTOR_ID AUTH.INSTRUCTOR_ID%TYPE,
P_COURSE_ID AUTH.COURSE_ID%TYPE,
P_AUTH_DATE AUTH.AUTH_DATE%TYPE ) IS
BEGIN
INSERT INTO AUTH(INSTRUCTOR_ID, COURSE_ID, AUTH_DATE)
VALUES (P_INSTRUCTOR_ID, P_COURSE_ID, P_AUTH_DATE);
END INSERT_AUTH_PROC;
=============================================================================

How do I modify the trigger code so that SALARY refers to the most recently inserted new row?

Example: I insert a new record into the INSTRUCTOR table where the new instructor has a SALARY < 3750.

It runs smoothly if I change the...

SALARY INSTRUCTOR.SALARY%TYPE;

TO BECOME

SALARY INSTRUCTOR.SALARY%TYPE := 2000; (this will not allow me to insert into the AUTH table)
SALARY INSTRUCTOR.SALARY%TYPE := 5000; (this will allow me to insert into the AUTH table)

BUT I don't want it to be a fixed value! I want the trigger to check the SALARY of the new instructor that I just inserted.

PROBLEM: I don't really know how to use :new and :old like you told me. If I'm not mistaken, :new should only be available on "BEFORE INSERT or UPDATE", right?

--------------------------------------->

You're correct: NEW and OLD are only available to row triggers. Sorry about that; I didn't catch that your trigger was not a row trigger. Could you make this a row trigger instead? Or add the validation code to the procedure, prior to the insert.

--------------------------------------->

Thanks again for your help one more time. I tried to implement the technique you suggested and now it works perfectly. Thanks.
Below is the code that runs successfully, making use of the :NEW record.
=============================================================================
CREATE OR REPLACE TRIGGER CHECK_SALARY_TR
BEFORE INSERT
ON AUTH
FOR EACH ROW
DECLARE
V_SALARY INSTRUCTOR.SALARY%TYPE;
BEGIN
SELECT SALARY
INTO V_SALARY
FROM INSTRUCTOR
WHERE INSTRUCTOR_ID = :NEW.INSTRUCTOR_ID;
IF (V_SALARY < 3750) THEN
RAISE_APPLICATION_ERROR(-20101,'SORRY, YOU DO NOT HAVE AUTHORIZATION TO OFFER COURSES');
END IF;
END;


What are PL/SQL tables? When and why are they used?

A PL/SQL table is a virtual (single-dimension) array defined at runtime. These tables help you store and manipulate data within a block, removing the need to go to the database for the information except the first time; the changes can later be written back to the database.

e.g.
Assume I have a table emp:

emp
------
empno
empname
empsal
saldate
empcomm

Now I would like to give the employees an increment/commission and will create a procedure with some logic.

create or replace procedure empinc
as
  type emp_tab is table of emp%rowtype index by binary_integer;
  v_empsal emp_tab;
  cursor my_empsal is select * from emp;
begin
  open my_empsal;
  fetch my_empsal bulk collect into v_empsal;  -- load all rows into the array
  close my_empsal;

  for i in 1 .. v_empsal.count
  loop
    if v_empsal(i).empsal is not null then
      -- 10% of the salary as commission, added onto the salary
      v_empsal(i).empcomm := v_empsal(i).empsal / 100 * 10;
      v_empsal(i).empsal  := v_empsal(i).empsal + v_empsal(i).empcomm;
    end if;
  end loop;

  -- write the changed values back to the table
  for k in 1 .. v_empsal.count
  loop
    update emp
       set empsal  = v_empsal(k).empsal,
           empcomm = v_empsal(k).empcomm
     where empno = v_empsal(k).empno;
  end loop;
end;
/

In the above example, as you can see, I didn't need to use any extra variables or define any extra data types; both were as defined in the database, and the array takes care of storing the values. Writing the details back to the table was done by declaring the table type and the array of that type.

Kiran Patil

Anyway, you are using a cursor, so you could simply loop across the records and process them one by one; the use of the table-type variable there is not significant.

When you open the cursor, during the first fetch use BULK COLLECT and take the records into the declared table variables. After this, close the cursor. YOU NEED NOT LOOP across the cursor.

A table-type variable is very useful when you need to select into an array.

declare
  type empno_list is table of emp.empno%type;
  empno_list_var empno_list;
begin
  select empno BULK COLLECT into empno_list_var
    from scott.emp
   where empno > 100;

  for i in empno_list_var.first .. empno_list_var.last
  loop
    -- process empno_list_var(i) here
    null;
  end loop;
end;
/


I've recently installed my Oracle 9i database and Developer Suite. I've created the repository owner and I've created my tablespaces to use in Oracle 9i Designer, but I'm having problems granting roles to the repository owner.

I can log in to sqlplus.exe, but I can't seem to run the 'ckrorole.sql' file, which as far as I know is used to grant roles to the repository owner. I keep getting the following message whenever I try to run it.

By the way, I'm using Windows XP.

SQL*Plus: Release 9.2.0.1.0 - Production on Fri Oct 22 19:05:07 2004

Copyright © 1982, 2002, Oracle Corporation. All rights reserved.

Enter user-name: sys@orcl as sysdba
Enter password:

Connected to:
Personal Oracle9i Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> @orahome1\repadm61\utl\ckrorole.sql
SP2-0310: unable to open file "orahome1\repadm61\utl\ckrorole.sql"
SQL>

The following happens whenever I try to run it the traditional way.

SQL> run d:\orahome1\repadm61\ckrorole.sql
SP2-0103: Nothing in SQL buffer to run.
SQL>

What do I need to do to get the 'ckrorole.sql' script to run?

Make sure this: orahome1\repadm61\utl\ is a valid directory, and try specifying the FULL path.

I did check; it is a valid directory. I found the file by searching for it with the Windows XP search companion. The full path of the file is 'D:\OraHome1\REPADM61\UTL\CKROROLE.SQL'. This is exactly how Windows recognises the file ('D:\' being the 2nd partition of my 40 GB hard drive, 'C:\' being the 1st partition).

What do you mean by 'full path' exactly? Add the 'D:\' onto the statement?

I found the problem. The command wasn't typed in properly. This is the correct command:

@"D:\OraHome1\REPADM61\UTL\CKROROLE.SQL"
(where 'D:' is the letter of the drive or hard disk partition that the database is installed on)


I'm running Oracle 8.1.7 on Windows 2000 Server.

I need to write one PL/SQL procedure to housekeep a table, on a daily basis.

There are 2 methods I can think of:
1. Use DBMS_SQL
...............
stmt_char := 'DELETE FROM TABLE_A WHERE TRUNC(DT) <= TRUNC(SYSDATE) - 3'; -- always keep 3 days
................
................

2. Use cursor
.............
CURSOR lv_delete_cur IS
SELECT column_A FROM TABLE_A -- where column_A is primary key
WHERE TRUNC(dt) <= TRUNC(SYSDATE) - 3;
.............
.............

..............

My questions are:
(a) Between approaches 1 and 2, which is faster / has better performance?
(b) In terms of performance, is there any difference if I commit every 500 or every 5000 records?
(c) Actually, I am currently using approach 1, which is considerably slow!
[It needs 5-7 hours to delete approx. 250,000 records from a table of approx. 800,000 records.]

You can also go for native dynamic SQL, which will probably help you improve the performance.

The performance of native dynamic SQL in PL/SQL is comparable to the performance of static SQL because the PL/SQL interpreter has built-in support for native dynamic SQL. Therefore, the performance of programs that use native dynamic SQL is much better than that of programs that use the DBMS_SQL package. Typically, native dynamic SQL statements perform 1.5 to 3 times better than equivalent statements that use the DBMS_SQL package. Of course, your performance gains may vary depending on your application.

The DBMS_SQL package is based on a procedural API and, as a result, incurs high procedure call and data copy overhead. For example, every time you bind a variable, the DBMS_SQL package copies the PL/SQL bind variable into its space for later use during execution. Similarly, every time you execute a fetch, first the data is copied into the space managed by the DBMS_SQL package and then the fetched data is copied, one column at a time, into the appropriate PL/SQL variables, resulting in substantial overhead resulting from data copying. In contrast, native dynamic SQL bundles the statement preparation, binding, and execution steps into a single operation, which minimizes the data copying and procedure call overhead and improves performance.

For example:
BEGIN
  EXECUTE IMMEDIATE 'TRUNCATE TABLE my_table';  -- note: no semicolon inside the string
END;
/

Please let me know if there are any issues.

I want to share a few ideas.
First of all, you said:

I need to write one PL/SQL procedure to housekeep a table on a daily basis.

Here the question is: why a PL/SQL procedure? As Tom Kyte would often say, we should use SQL instead of PL/SQL wherever possible. Here I think you can definitely use a simple SQL statement like this:

DELETE FROM TABLE_A WHERE TRUNC(DT) <= TRUNC(SYSDATE) - 3

The thing to watch here is whether any indexes are being used or not. Is there an index on the column DT?
Please post the execution plan of the query.
I guess the index might not get used, and you may need a function-based index, since you are using the function TRUNC on the column DT.
Alternatively, just as an experiment, try the following SQL statement (please roll back the results, as this is just an experiment)
(please index the DT column if it is not already indexed):
DELETE FROM TABLE_A WHERE DT <= SYSDATE - 4

and see if there is some performance gain.

--------------------------------------------------------------------------------------------------------------------------------
Basically I agree with what you are saying, but if I use dynamic SQL, how do I commit for every 1000 records deleted, as I explained in my first post?
--------------------------------------------------------------------------------------------------------------------------------
Perhaps. I have tried with and without the TRUNC function, but the performance seems almost the same. FYI, the DT column is indexed. But I guess you should not use:

DELETE FROM TABLE_A WHERE DT <= SYSDATE - 4

because it is not accurate. If you really want to get rid of the TRUNC, I suppose you should use:

lv_sysdate_date := TRUNC(SYSDATE);

DELETE FROM TABLE_A WHERE DT >= lv_sysdate_date - 3 AND DT <>

Please correct me if I am wrong.

And to your question about why I am using PL/SQL instead of a SQL statement. This is because:
(a) The "3" in "TRUNC(dt) <= TRUNC(SYSDATE) - 3" is not hardcoded; I actually need to query it from another table. (b) I need to commit after every 1000 records deleted, due to a rollback segment issue.

--------------------------------------------------------------------------------------------------------------------------------

Can you please post the execution plan of the query? What I would like to see is whether the index is being used or not. In fact, I think the index may not be used if it is not a function-based index, so could you please post the execution plan of the query? As for committing, if I understand correctly, it really does not matter whether you commit after 1,000 records or 10,000 records, so don't expect any big gain in performance from that area.

That's what I feel. The key factor is ensuring and verifying that the index is being used. Also try NOT using the index and check the performance, then try USING the index and check the performance, and post both explain plans; I would like to compare the two. (PLEASE ROLL BACK if you are just experimenting.)


Can we use user-defined functions in SQL statements, the way we can use built-in Oracle functions like the date functions etc.?

Yes, of course, provided there is no DML in your function.

Thanks for the reply. Please tell me why we can't use functions with DML in them. What will happen if one contains DML?

We can use a function which has only select statements in it, but it should not contain other DML statements like INSERT, UPDATE and DELETE; Oracle does not allow that, because the function's changes could conflict with the statement that is calling it.

Add a Function to a User-Defined Object

DATABASE: Oracle 9i
TOOL: OEM 9.2.0.1.0
Can I add a function to a user-defined object type?
The user-defined object has dependencies.
I tried adding the function using OEM, but an error was reported saying the definition cannot be altered.
I tried Oracle Change Manager, but that does not display user-defined types.

This is well explained in the Oracle manuals. You cannot simply modify an already existing object type: in order to modify object attributes or methods, an object type must be dropped and re-created. An attempt to drop or replace a type with type or table dependents will result in an ORA-02303. Here is the relevant part of the manual:

QUOTE:
The object types can be instantiated as types (nested tables or VARRAYs) or object tables. If data preservation is required, data depending on the type must be manually moved into temporary tables.
Per the SQL Reference guide, the DROP TYPE FORCE option is not recommended since this operation is not recoverable and could cause the data in the dependent tables to become inaccessible.

The following 3 queries can be run to determine dependencies:

-- Find nested tables
select owner, parent_table_name, parent_table_column
from dba_nested_tables
where (table_type_owner, table_type_name) in
(select owner, type_name
from dba_coll_types
where elem_type_owner = '<TYPE_OWNER>'
and elem_type_name = '<TYPE_NAME>');

-- Find VARRAYs
select owner, parent_table_name, parent_table_column
from dba_varrays
where (type_owner, type_name) in
(select owner, type_name
from dba_coll_types
where elem_type_owner = '<TYPE_OWNER>'
and elem_type_name = '<TYPE_NAME>');

-- Find object tables
select owner, table_name
from dba_object_tables
where table_type_owner = '<TYPE_OWNER>'
and table_type = '<TYPE_NAME>'
and nested = 'NO';

So: first find the object dependencies on the object type, make sure you have a means to back up the dependent objects (nested tables, VARRAYs, tables), drop the dependent objects, modify the object type, then re-create (restore) the objects.

Hope that works for you; no OEM involved, so it would be good practice.


I would like to know how one can identify that the extent allocation for a segment has reached the maximum allocated for it within the tablespace.

In plain words: how do I find out if the extents have reached MAXEXTENTS?

----->
You can use the views dba_extents and dba_tables for that:

CODE

select b.table_name, a.used, b.max_extents - a.used remaining
from (select segment_name, count(extent_id) used
      from dba_extents group by segment_name) a,
     (select table_name, max_extents from dba_tables) b
where a.segment_name = b.table_name;

----->
Are you sure extent_id identifies the used extents?

I thought it was just an identifier.

----->
extent_id identifies the extents that are allocated to the table, so max_extents - count(extent_id) gives the number of extents yet to be allocated.

That is what you wanted to know, isn't it?


I need to find out how long a query will take to run, or, while the query is running, where I can check to see that X% of my query is complete.

I know there is some place in Oracle Enterprise Manager, or something like that, which shows a progress bar once the query is started, but I don't exactly recollect where. Can somebody help me with this?

Also, when do we run ANALYZE ... ESTIMATE STATISTICS, and how does this help if I have to run a complex query? Does estimating statistics on a table increase query performance, and if so, how?

You can use this query to find out the information you want to know:

CODE

REM Displays the long running operations

SET LINESIZE 200

COLUMN operation FORMAT a15
COLUMN username FORMAT a15
COLUMN object FORMAT a25

SELECT a.sid,
a.serial#,
b.username ,
opname OPERATION,
target OBJECT,
TRUNC(elapsed_seconds, 5) "ET (s)",
TO_CHAR(start_time, 'HH24:MI:SS') start_time,
ROUND((sofar/totalwork)*100, 2) "COMPLETE (%)"
FROM v$session_longops a,
v$session b
WHERE a.sid = b.sid AND
b.username not IN ('SYS', 'SYSTEM') AND
totalwork > 0
ORDER BY elapsed_seconds;

I needed some clarification regarding this:

"When do we run ANALYZE ... ESTIMATE STATISTICS, and how does this help if I have to run a complex query? Does estimating statistics on a table increase query performance, and if so, how?"

Your last post contains three questions:
1) When I am running a query, where can I check to see that X% of my query is complete?
2) Can one predict the response time of a query?
3) Does estimating statistics on a table increase query performance, and if so, how?

My point of view is:
1) Answers to the first question have been given above.

2) The second and third questions require understanding what the CBO (cost-based optimizer) and the RBO (rule-based optimizer) are. Are you familiar with those? The RBO might not be used any more by Oracle (that is another story). You cannot predict the response time of a SQL request; there are too many factors.

3) Regarding the third question, ANALYZE is useful if the SQL request is handled by the CBO engine, and it should be.
Yes, analyzing the table lets Oracle know the volume and the distribution of the data in the tables, so it can perform the appropriate steps to process the data. But sometimes complex SQL requests are not really well written; in that case you need to analyze the plan table.
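For reference, a minimal example of estimating statistics with ANALYZE (the table name is just an example; on later versions DBMS_STATS is the preferred interface):

CODE

ANALYZE TABLE emp ESTIMATE STATISTICS SAMPLE 20 PERCENT;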


We are getting a java.sql.SQLException: ORA-12154: TNS:could not resolve service name exception. It comes all of a sudden; it was working fine before. The issue got resolved once we recycled our Sun ONE Application Server.

Any idea? Is it something to do with a momentary network issue, after which the Sun application server is not able to connect to the database? At the time we got this exception, we tried connecting to the database using sqlplus and it worked fine.

Oracle Troubleshooting
--------------------------------------------------------------------------------
ORA-12154: TNS:could not resolve service name
--------------------------------------------------------------------------------
Symptom:
When trying to connect to Oracle the following error is generated:

ORA-12154: TNS:could not resolve service name

Cause:
This error indicates that the service (or TNS alias) specified when trying to connect does not exist.

TNS aliases or service names are defined locally to each workstation. Thus different workstations could have a completely different alias to refer to the same database.

Remedy:
Check what aliases are defined on the workstation, either use one that is defined or add an appropriate one.

The following methods can be used to view the TNS names:

Look in the file ORANT\Network\Admin\Tnsnames.ora (for versions of Oracle prior to 8.1) or Oracle\Ora81\Network\Admin\Tnsnames.ora (for versions of Oracle from 8.1 onwards). In this file each defined name will normally have .world appended to it.

For Oracle 8, run Oracle Net8 Easy Config. This will list the available service names.

For Oracle 7, run SQL Net Easy Configuration. This provides the option to list the available service names.

For Oracle 8.1 onwards: Try specifying the full service name, i.e. as it appears in the tnsnames file.

If a connection alias is not being explicitly provided, then try to identify what alias Oracle thinks you are using. This will typically be contained in the sqlnet.log file that the Oracle client produces.
A possible place where Oracle could be getting this (default) alias from is the environment variable LOCAL.

Chaz S

1. What version of Oracle?
-------------------------------------------------------
You stated:
java.sql.SQLException: ORA-12154: TNS:could not resolve service name
------------------------------------------------------------------
Most probable case:
This may happen when a Java client requests a database connection but the database server has not started or is not listening.

Solution 1. Make sure the service name in the host string is correct, e.g.:

jdbc:oracle:oci8:@kesdee.world

Solution 2. Start the Oracle server.

Solution 3. Start the listener. Under MS-DOS:
lsnrctl.exe start

Solution 4. The tnsname specified in the JDBC URL is incorrect; check that the names match in ORACLE_HOME/network/admin/tnsnames.ora.

Your problem may be solved by now; if not, go down further!!!!
--------------------------------------------------------------------
If it is not solved yet, RUN THE FOLLOWING CHECKS:

1) You must have missed some vital info in SQLNET.ORA:
for example, the NAMES.DEFAULT_DOMAIN entry is missing, due to which you cannot start the application and the JRE runs into exception errors!

2) Verify that a TNSNAMES.ORA file exists, is in the proper directory, and is accessible.
- Check that the net service name used as the connect identifier exists in the TNSNAMES.ORA file.
- Make sure there are no syntax errors anywhere in the TNSNAMES.ORA file. Look for unmatched parentheses or stray characters. Errors in a TNSNAMES.ORA file may make it unusable.

3) Verify that the host or database name is correct and that it is listed in the TNSNAMES.ORA file. This file keeps a list of available databases and the related information required to connect to them.

4) If you are using directory naming:
- Verify that "LDAP" is listed as one of the values of the NAMES.DIRECTORY_PATH parameter in the Oracle Net profile (SQLNET.ORA).
- Verify that the LDAP directory server is up and that it is accessible.
- Verify that the net service name or database name used as the connect identifier is configured in the directory.
- Verify that the default context being used is correct by specifying a fully qualified net service name or a full LDAP DN as the connect identifier.

If you are using easy connect naming:
- Verify that "EZCONNECT" is listed as one of the values of the NAMES.DIRECTORY_PATH parameter in the Oracle Net profile (SQLNET.ORA).
- Make sure the host, port and service name specified are correct.
- Try enclosing the connect identifier in quote marks.
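For reference, a minimal tnsnames.ora entry matching the JDBC example above (host name and SID are hypothetical):

CODE

KESDEE.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SID = kesdee))
  )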


I have several Oracle databases running on older Sun servers which I will have to upgrade to new hardware in the near future. I haven't decided yet whether to stay with the Sun/Solaris platform or to make the move to Linux. The decisive factor in this case will have to be price and performance.

Does anyone have information about the performance of Oracle servers on Sun SPARC/Solaris 64-bit as compared to Intel Xeon/Linux 32-bit? How do they both perform in a multi-user environment?

I would very much appreciate a wise word of advice.

-------------------------

I prefer Solaris, mostly because I have found it to be remarkably stable. Linux is fairly stable, but I pretty quickly found ways to make it crash. On the other hand, you can expect the Linux community to be more responsive than a big, monolithic behemoth of a HW/SW/OS vendor who tends to hew more to their OWN agenda than their USERS' needs.
While Sun might not release driver updates to support the latest widget available for the x86 platform, someone might be developing it for Linux at the same time that you are looking for it.

-------------------------

Anytime you run an OS on a stable hardware platform such as the Sun computers, the system will be stable. PCs are inherently non-stable platforms. Linux seems to be changing rapidly to keep up with PC changes. Linux is similar enough to Solaris in layout and structure to be easy to work with. I support Solaris systems at work and I just recently built a Linux box at home. I love the configurability of UNIX. Linux is cheap and Solaris is expensive.

Make your own decision based on your needs. Good luck.

-------------------------

How I do enjoy the Unix techie sense of humor. I use both Solaris 7 x86 and Linux kernel 2.2.13. They both work reasonably well. Of course I also use BeOS 4.5.2, Windows NT (not very often though), and am thinking about FreeBSD. I have an affinity for operating systems that I can't quite explain. I am always interested in seeing how features are implemented among the different systems.

For a stable platform without any frills that won't be changing much, Solaris is the way to go, especially if you expect to be running any mission-critical stuff on it. For example, we would not even consider running the database servers that track the history of all our legal cases on a Linux box. It would be much too risky, the equivalent of playing with a cigarette lighter while waist-deep in napalm. However, the personal workstation in my office runs primarily Linux, which I find to be stable enough and which provides much better hardware support than Solaris x86. Relative to Solaris x86, Linux supports an extraordinary number of hardware peripherals.


How do I create a DBMS job? Actually, I want a job that runs at a specified time and does a specific task for me, such as picking up data from a remote database and then dumping it into my database.
We are using Windows as the operating system and Oracle 8i.
Can you give me a specific example, as I'm new to this? That would be of great help to me.

Thanks in anticipation.

Per your request. Example:

Quote:
This submits a new job to the job queue. The job calls the procedure DBMS_DDL.ANALYZE_OBJECT to generate optimizer statistics for the table DQUON.ACCOUNTS. The statistics are based on a sample of half the rows of the ACCOUNTS table. The job is run every 24 hours:

VARIABLE jobno number;
BEGIN
  DBMS_JOB.SUBMIT(:jobno,
    'dbms_ddl.analyze_object(''TABLE'',
     ''DQUON'', ''ACCOUNTS'',
     ''ESTIMATE'', NULL, 50);',
    SYSDATE, 'SYSDATE + 1');
  commit;
END;
/

Hope that helps,


I'm using Oracle 9i R2 installed on Windows XP Professional as a standard version.

I made a password file like the one below:

orapwd file="d:\oracle\amr\dbs\orapwd" password=admin entries=10

and then granted the SYSDBA privilege to user "HR" (GRANT SYSDBA TO HR;).

The value of "remote_login_passwordfile" in my PFile was "EXCLUSIVE", and also I was not connecting from a remote client.

So I could login to SQL *Plus like this:
connect HR/admin@TST as sysdba;

and everything was fine.

But after that I removed the password file, revoked the SYSDBA privilege from HR, and changed "remote_login_passwordfile" to SHARED, but I could still connect to the DB using HR as sysdba. (Don't forget that I'm not connecting from a remote client.)

I have even restarted the Instance.

----->
If the OS user you are logged in as is a member of the ORA_DBA group, then you will be allowed to connect from the local machine as SYSDBA. That is the way Oracle trusts OS users.

If you comment out this line from the sqlnet.ora file, you will get the desired effect:

SQLNET.AUTHENTICATION_SERVICES = (NTS)

----->
Also, you may want to include the Oracle SID in the password file name for the correct syntax:

orapwd file="d:\oracle\amr\dbs\orapwORCL" password=admin entries=10


How do I change the port number after the installation of the Oracle software?

Ports 1521/1526 are the default port numbers Oracle will listen on for SQL*Net (Oracle 7) or Net8 (Oracle 8) out of the box.

There is a file in the directory

%ORACLE_HOME%\network\admin (Oracle 7) or %ORACLE_HOME%\net8\admin (Oracle 8), called listener.ora, that specifies the ports to listen on.

(If you change it, you need to change the tnsnames.ora file on all clients that connect to it, including the server itself; that file usually resides in the same location. If you don't, they will not know which port to try to connect to.)
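For example, the relevant part of listener.ora looks something like this (the host name here is hypothetical); change the PORT value, restart or reload the listener, and make the matching change in each client's tnsnames.ora:

CODE

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1522))
  )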


We have an amazing situation: we don't have the last days' backup (as the backup was not being taken properly).
The database gets mounted but does not open, as one of the data files is missing.
Is there any way to recover the datafile and make the Oracle server live once again?

Good question! There are ways to bring the DB up.
*********
SVRMGR> alter database datafile '/u01/oradata/ORCL/data/users01.dbf' offline drop;
SVRMGR> alter database open;
SVRMGR> drop tablespace users including contents;
*********
In the first statement you can either use DROP or just use OFFLINE. It totally depends on what kind of data is in that datafile. If it is unnecessary data, then just drop it; if you happen to have valuable data, then you can take it offline and try to recover the DB.
Also, you haven't mentioned whether you are running a COLD or HOT backup.
Do you at least have an export .dmp of the tablespace?
If you have, then just import the tablespace.
I am not sure this will work out for your scenario, but it may bring the DB up for now.

Thanks a lot. By taking the datafiles offline, the server could be opened and is now live.
But all the files taken offline show a file size of 0 KB.
Was it due to a virus or some other reason?
These files now require recovery.
Can these files be repaired now?
However, thanks for the advice.

I ran the following command and the output came out like this.
Can you explain the reason for the following output, and also how the files can be recovered?

SQL> set autorecovery on;
SQL> recover automatic datafile 11;
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 11 - see DBWR trace file
ORA-01110: data file 11: 'E:\ORACLE\ORADATA\HHML\D_HH_ATEND.ORA'

It seems like the only option in your scenario would be dropping the tablespace and re-creating a new one.

If you happen to have an export dump of the tablespace or of the objects in that tablespace, just import them after creating the new tablespace.


What is the difference between shutting down an Oracle database (8i and up) on Windows by logging into the instance and issuing the SHUTDOWN command, versus stopping the Oracle service (OracleService<SID>)?

I think there's no difference, but you can see much more information by logging in with sqlplus/svrmgrl and shutting down the DB with the SHUTDOWN command.
Using the MMC and stopping the service shows only a status bar.

It depends on the version of Oracle and how the instance is configured in the Windows registry. For 9i (not sure about 8i; I don't have an 8i instance anymore), each instance has a key in the registry called ORA_<SID>_SHUTDOWN_TYPE that determines the type of shutdown to initiate when the service is stopped (i = immediate, for example). The key ORA_<SID>_SHUTDOWN_TIMEOUT specifies the amount of time to wait (seconds, I think, but I never got a clear answer on this) before the instance is terminated.

That being said, in my opinion it is always best to shut down the database manually through SQL*Plus. That way you know what is going on.

There is a difference.

The service must be running in order to have the database running.
The service can be running without the database running.

It's simple to see: start the service, start the database (if it is not set to start automatically), then connect as sysdba and do a shutdown. Then go look at the services: OracleService<SID> still shows as started.

Stop the service, and then start a SQL*Plus session. Try to connect or start the instance, and you get an "Oracle not available" error.

You can shut down the database either through the SQL prompt or by stopping the associated Oracle service; in both cases the database will shut down the same way, if your purpose is to shut down the database. A service is a continuously running program which helps other programs run or execute.

But when you stop the Oracle service, you close the door on everything associated with the service, so if you stop the service you cannot start the database unless and until you start the service again.

If you look at the Oracle database from a programmer's perspective, you can see that the Oracle database is not one program; it is a number of programs running together, and you can think of the Oracle service as the link between all these programs.

On Windows you have "services" which can be started or stopped. You can have the Oracle DB up only if the service is started.

You can also configure the Oracle DB instance to shut down and start up together with the service. On *NIX you don't have services, and so no such restrictions.


I created a user as below:
create user paymaster identified by paymaster
default tablespace users
temporary tablespace temp;

This created user paymaster.
However, when I queried
select owner, tablespace_name from dba_tables;
the output was as below:

OWNER     TABLESPACE_NAME
PAYMASTER SYSTEM

The paymaster tables should be in tablespace 'PAYDATA';
this tablespace is already created.
How do I get user PAYMASTER to use PAYDATA?

I had earlier dropped user paymtr from the database, as the client wanted paymtr to be changed to paymaster.
I created the user as above and granted the roles.
However, I imported from a *.dmp file that was exported from paymaster on another machine (where it was in the SYSTEM tablespace), and when I queried dba_tables I found all the tables are in the SYSTEM tablespace.
How can I rectify this?

Change the tablespace from SYSTEM to PAYDATA without dropping the tables:
sql> ALTER USER paymaster DEFAULT TABLESPACE paydata
     TEMPORARY TABLESPACE temp;

Then, for each table, move it to the PAYDATA tablespace; try to create a script that runs the statement below for all the tables (see the sketch after this post):

sql> ALTER TABLE tablename MOVE TABLESPACE paydata;

Also check the indexes. If they happen to be in the SYSTEM tablespace, rebuild them into PAYDATA, or any other tablespace you wish:

sql> ALTER INDEX indexname REBUILD TABLESPACE paydata;
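A quick way to generate those statements from the dictionary (a sketch; run it while connected as PAYMASTER and spool the output into a script):

CODE

SELECT 'ALTER TABLE ' || table_name || ' MOVE TABLESPACE paydata;'
  FROM user_tables
 WHERE tablespace_name = 'SYSTEM';

SELECT 'ALTER INDEX ' || index_name || ' REBUILD TABLESPACE paydata;'
  FROM user_indexes
 WHERE tablespace_name = 'SYSTEM';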

I have managed to move the tables from SYSTEM to PAYDATA. However, there is one table with a LONG datatype column which I am unable to move.
Any suggestions on how to solve this?

In order to move a table with a LONG column, you can generate the table creation script manually and import the data into that table.

If you have a problem generating the table script, use TOAD to generate the table script and also the INSERT statement script, and run the scripts from sqlplus. You should be good.


What is the best way to compare two databases, especially tables+columns, indexes+columns, constraints, views, stored procedures, and triggers?
It is not about the data.
It is not possible to make a network connection between the two databases, so a dblink is not a possible solution.

-------------------------

select table_name from all_tables where owner = 'S1'
minus
select table_name from all_tables where owner = 'S2'

and vice versa, and write similar SQL for all the other objects in question;

or use dbms_metadata for both schemas and then use your favourite editor to compare the files with all that DDL included (see the sketch below),
or take an exp of both schemas:

imp /@sid file=your.dmp fromuser=s1 show=yes log=s1.log
imp /@sid file=your.dmp fromuser=s2 show=yes log=s2.log

then compare the logs and retrieve the differences.
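For the dbms_metadata route, a minimal sketch (9i+; run it in each database as the schema owner, then diff the spooled files):

CODE

SET LONG 100000 PAGESIZE 0 FEEDBACK OFF
SPOOL schema_ddl.txt
SELECT dbms_metadata.get_ddl(object_type, object_name)
  FROM user_objects
 WHERE object_type IN ('TABLE', 'INDEX', 'VIEW', 'PROCEDURE', 'TRIGGER');
SPOOL OFF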

-------------------------

Use OEM Change Management to take the diff between the two databases. Of course, you would need OMS if you want to take snapshots, but for a simple comparison it's not necessary.


The Oracle9i database admin guide says a fragmented database should be reorganized to reclaim space. The solution given is to make a full DB export, delete the DB, recreate the DB, and do a full DB import. Is there another, easier option?
OS: Win2000 Server
Oracle 9.2.0.1.0

----->
Yes, there is an easier option. Use locally managed tablespaces. Do you have a fragmentation problem? Are you currently using locally managed or dictionary managed tablespaces?

----->
Yes, we are using locally managed tablespaces. Does that mean I don't have to worry about fragmentation?

----->
It means you don't have to worry about it as much. Are you using uniform extent sizes or system-managed extent sizes? If you are using uniform extent sizes, then you won't have any fragmentation issues: all extent sizes are the same. If you are using system-managed extent sizes, then fragmentation is greatly reduced, to the point that you shouldn't have to be concerned with it. Have you identified that you have a fragmentation issue?

----->
No, we haven't; that's another problem. I don't know how to identify whether we have one... I will look it up.


System config:
OS: Win 2000 A.S.
Oracle Applications 11.5.8
RDBMS: 8.1.7

Usually my Oracle works very well, but sometimes, once in a blue moon, when I restart my database after performing a nightly backup, it starts up quickly, but when I look in the Task Manager it shows the Oracle.exe process consuming a lot of CPU (70-80%) even when no user is connected, which it usually doesn't.

Because of this behavior my Oracle Applications services take a long time to come up, and after some time users start complaining that the application is very slow. I don't understand what background process is running that is consuming so much CPU.

I was told to create a statspack report, but since I am new to DBA work, can you please guide me on how to create one?

Not getting it up quickly enough is a real problem.

It could be a couple of things. First, it takes time for your cache to "warm up"; initial database queries will take longer to run because of this. Second, maybe your database is not being brought down cleanly and it takes some time for SMON to clean things up.

Either way, I'd look at not using a cold backup.

Creating STATSPACK:

1. Create a tablespace for this purpose, called perfstat or any good name.

2. Run catdbsyn.sql and dbmsspool.sql.

Now you must understand the data collection level. Each collection is at one of these levels:
a. Level 0
b. Level 5
c. Level 6
d. Level 7
e. Level 10

- Study the STATSPACK architecture. Very important!

3. Take a snapshot to capture the in-memory statistics:

SQL> exec statspack.snap;

A statspack report is an easy way to see the current database situation:

1) Create a tablespace of 100 MB size.

2) Run the script oracle_home\rdbms\admin\spcreate.sql.

It will prompt for the default tablespace; give the above tablespace name. After that, give a temporary tablespace name other than SYSTEM.

3) Take some snapshots as the PERFSTAT user.
The command is:
exec perfstat.statspack.snap

Take as many snapshots as you want within a time interval.

4) Now run the script
oracle_home\rdbms\admin\spreport.sql
It will ask for the destination and name of the output report file.

Provide those, along with the two snapshot numbers between which you want to see the report.

Now open the report file and enjoy.
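Put together, a typical session might look like this (a sketch; the PERFSTAT password is whatever was set when spcreate.sql was run):

CODE

SQL> connect perfstat/<password>
SQL> exec statspack.snap;          -- snapshot 1
-- wait through the period you want to analyze, then:
SQL> exec statspack.snap;          -- snapshot 2
SQL> @?/rdbms/admin/spreport.sql   -- prompts for the begin/end snap ids and a report name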


I am new to Oracle. I want to know how to start Server Manager and where I should give the command. What is the password for a newly installed Oracle? Please help me out.

If you want to run SVRMGRL on UNIX, just log on to UNIX with your user name and password.
1. Type svrmgrl
2. Type connect internal
The message will say "Connected". Now you are connected to Oracle. If you are new to Oracle, be sure of your commands; SVRMGRL has a lot of power.
If you are connecting on an NT machine, you need to start a command prompt.
1. Type svrmgrl
2. It will ask you for a password; this is the same password that you created when you did the fresh install of Oracle. Once you give the password you should be in.

Thanks for the reply, but I am working on Windows NT, not Unix, so let me know how to start svrmgrl on it. Your help will be appreciated.

For NT, do the following, assuming you have Oracle installed on the C: drive; if you have Oracle on D:, E: or any other drive, just replace c:> with your drive.
c:> set ORACLE_SID=ORCL            (whatever DB name you have)
c:> svrmgrl
svrmgrl> connect internal
svrmgrl> select * from v$database;   -- if you get "Oracle not available", do this:
svrmgrl> startup
svrmgrl> select * from v$database;

You need to start the listener too, for sqlplus connectivity:
c:> lsnrctl
lsnrctl> status    -- if it shows errors, then you need to start the listener
lsnrctl> start
If you happen to have a named listener, then you need to specify the name:
lsnrctl> start ORCL   (whatever listener name you have; check the listener.ora file for that)
You should be all set.

But remember, Server Manager (svrmgrl) is unsupported from 9i onwards. You have to use sqlplus to connect.

The person who posted this query didn't mention the version

I am new to Oracle. I tried to connect to the Oracle 8i server (8.1.7) from Windows 2000 and got an error message: no listener. Then I tried to start the listener with "start orcl" and got another error message:
TNS-01151: Missing listener name, orcl, in LISTENER.ORA

This is my listener.ora:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 1521))
)
)
(DESCRIPTION =
(PROTOCOL_STACK =
(PRESENTATION = GIOP)
(SESSION = RAW)
)
(ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 2481))
)

)

What do I need to do? Please help.

According to your listener.ora file you don't have a named listener (it uses the default name, LISTENER), so you must start it as follows:

lsnrctl start        -- correct

lsnrctl start orcl   -- incorrect

NOTE: only for a named listener do you need to start it like: lsnrctl start orcl


I have the following code :

CREATE OR REPLACE TYPE ObjB IS OBJECT (
B1 varchar2(10)
,B2 varchar2(10)
);

CREATE OR REPLACE TYPE ObjC IS OBJECT (
C1 varchar2(10)
,C2 varchar2(10)
);

CREATE OR REPLACE TYPE ObjCArr IS VARRAY(100) OF ObjC;

CREATE OR REPLACE TYPE ObjA IS OBJECT (
A1 varchar2(10)
,A2 varchar2(10)
,Ab ObjB
,Ac ObjCArr
);

a ObjA;

I've tried to initialize a, but each time I get an ORA-06530 error.

Any idea how to initialize it using Oracle 8.1.7?

Nusa

If you want to initialize it, use:

DECLARE a ObjA := ObjA(NULL, NULL, ObjB(NULL, NULL), ObjCArr());

Note: since element a.Ac is a VARRAY of ObjC objects, you would have to use

a.Ac.EXTEND;
a.Ac(1) := ObjC('X','Y');

to add elements to the VARRAY. You cannot add one using:

a.Ac.EXTEND;
a.Ac(1).C1 := 'X';
a.Ac(1).C2 := 'Y';

You can use a.Ac(1).C1 or a.Ac(1).C2 only for elements that already exist.
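Putting the two notes together, a minimal anonymous block that initializes the object and adds one VARRAY element (just a sketch; SET SERVEROUTPUT ON is needed to see the output):

CODE

SET SERVEROUTPUT ON
DECLARE
  a ObjA := ObjA(NULL, NULL, ObjB(NULL, NULL), ObjCArr());
BEGIN
  a.Ac.EXTEND;
  a.Ac(1) := ObjC('X', 'Y');
  DBMS_OUTPUT.PUT_LINE(a.Ac(1).C1);  -- prints X
END;
/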


I created a new undo tablespace; now I want to drop the original undo tablespace and recreate it with a smaller size.

However, after switching I am not always able to take the original tablespace offline.

I get an ORA-30042: Unable to offline the undo tablespace.

This error is always returned when I use the procedure. This is in a test DB, where I'm the only one logged in (sqlplus /nolog).

Looking in the alert.log, it states:
Undo tablespace 1 moved to Pending Switch-Out state.

What does this mean? How can I make sure the undo tablespace is not being used anymore, so I can take it offline?

Use this command to switch the instance to the new undo tablespace, and then try to drop the old one:

ALTER SYSTEM SET UNDO_TABLESPACE = ....

A note of caution:

An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails. However, since DROP TABLESPACE drops an undo tablespace even if it contains unexpired undo information (within the retention period), you must be careful not to drop an undo tablespace if its undo information is needed by existing queries.
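A minimal sketch of the whole switch (the tablespace names and datafile path here are hypothetical; UNDOTBS2 is the new, smaller one):

CODE

CREATE UNDO TABLESPACE undotbs2 DATAFILE '/u01/oradata/undotbs2.dbf' SIZE 100M;
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2;
-- wait until outstanding transactions in the old tablespace have completed
-- or been recovered (the 'Pending Switch-Out' state clears), then:
DROP TABLESPACE undotbs1;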


I am attempting to create a new instance on my Sun Solaris 8 server. I have set the environment and prepared the scripts. I do not have a GUI to perform the DB creation; therefore I need to use the scripts and run them from sqlplus.
It's a 9.2.0.1 database. I have added it to oratab. However, when I run the script in sqlplus I get the following error:
SQL> @/oracle/ora01/app/oracle/admin/xmbdb/scripts/CreateDB.sql
Connected to an idle instance.
SQL> spool /oracle/ora01/app/oracle/product/9.2.0/assistants/dbca/logs/CreateDB.log
SQL> startup nomount pfile="/oracle/ora01/app/oracle/admin/xmbdb/pfile/init.ora";
LRM-00109: could not open parameter file '/oracle/ora01/app/oracle/admin/xmbdb/pfile/init.ora'
ORA-01078: failure in processing system parameters
SQL> CREATE DATABASE xmbdb
  2  MAXINSTANCES 1
  3  MAXLOGHISTORY 1
  4  MAXLOGFILES 5
  5  MAXLOGMEMBERS 5
  6  MAXDATAFILES 100
  7  DATAFILE '/oracle/ora02/oradata/xmbdb/system01.dbf' SIZE 325M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
  8  UNDO TABLESPACE "UNDOTBS" DATAFILE '/oracle/ora02/oradata/xmbdb/undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
  9  CHARACTER SET US7ASCII
 10  NATIONAL CHARACTER SET AL16UTF16
 11  LOGFILE GROUP 1 ('/oracle/ora02/oradata/xmbdb/redo01.log') SIZE 100M,
 12  GROUP 2 ('/oracle/ora02/oradata/xmbdb/redo02.log') SIZE 100M,
 13  GROUP 3 ('/oracle/ora02/oradata/xmbdb/redo03.log') SIZE 100M;
CREATE DATABASE xmbdb
*
ERROR at line 1:
ORA-01034: ORACLE not available

SQL> spool off
SQL> exit;
Disconnected

How can I rectify this problem? Please advise.

LRM-00109 means SQL*Plus could not open the init.ora at that path at all, so first confirm the file actually exists there and is readable. If it does exist, you have a permission issue: check that all the files and folders you created are owned by the oracle user and dba group. It looks like some folders were created under a different user, and when Oracle tries to access a folder it has no permissions on, it fails. Make sure all folder and file permissions are consistent with oracle:dba.

I have checked the permissions for the folders - they all belong to the oracle user. Do I need to create the udump, bdump, cdump and pfile folders and put the init.ora file into the pfile folder before running the scripts?

The dump folders are just paths referenced from the init.ora file, but the init.ora your startup command points to does have to exist, so create the pfile folder and put a parameter file there. It's always better to take an existing init.ora from another db and modify it accordingly than to write one from scratch. Try that method and run the script again.
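
If you do end up writing one from scratch, a bare-bones 9i init.ora could look roughly like this (a sketch only; every path, name and size below is a placeholder to adapt, and the dump directories named in it must already exist):

db_name = xmbdb
control_files = ('/oracle/ora02/oradata/xmbdb/control01.ctl', '/oracle/ora02/oradata/xmbdb/control02.ctl')
db_block_size = 8192
undo_management = AUTO
undo_tablespace = UNDOTBS
shared_pool_size = 100M
background_dump_dest = /oracle/ora01/app/oracle/admin/xmbdb/bdump
user_dump_dest = /oracle/ora01/app/oracle/admin/xmbdb/udump
core_dump_dest = /oracle/ora01/app/oracle/admin/xmbdb/cdump
compatible = 9.2.0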

Labels:

We are running SAP 4.6c with ORACLE 9.2.0.5 on AIX. Database is configured with Dynamic SGA, and we have set SGA_MAX_SIZE also.
When we look at the memory management at OS level it shows high utilization of memory.
We don't know whether the memory allocated to the SGA is completely used or not.
How can we find out how much memory is being utilized by the SGA, and whether it needs more memory or can be reduced?

v$sga will give you the current SGA stats.

Run Statspack to analyze whether you require more SGA or not.
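
For a quick first look, two short queries show the configured SGA components and how much of each pool is still unused (consistently large free memory suggests the SGA could be trimmed):

-- overall SGA component sizes
SELECT * FROM v$sga;

-- unused space inside each pool
SELECT pool, name, bytes/1024/1024 mb
FROM v$sgastat
WHERE name = 'free memory';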

You can also look at the V$SGASTAT view. Try running the following script (note the part about free memory):

CODE

set verify off
set pagesize 23
set linesize 80
set pause on
set pause 'Hit enter to continue'
set feedback off
set showmode off
set echo off


col num format 999 heading "Nbr"
col name format a20 heading "Name"
col type format 999 heading "Type"
col value format a10 heading "Value"
col meg format 99.99 heading "Size|Meg"
col isdefault format a10 heading "IsDefault"
TTITLE "SHARED POOL PARAMETERS (DICTIONARY AND LIBRARY CACHE)"
select num, name, type, value, (to_number(value) / 1024) / 1024 meg,
isdefault
from v$parameter
where name = 'shared_pool_size';


col AA format 99.99 heading "Size MB"
col BB format 99.99 heading "Free MB"
col CC format 99.99 heading "% Free"
col DD format 99999999 heading "Executions"
col EE format 999999 heading "Reloads"
col FF format 999.99 heading "% Reload"
col GG format a20 heading "Parameter"
col HH format 99,999,999 heading "Count|(entries)"
col II format 99,999,999 heading "Usage|(valid entries)"
col JJ format 99,999,999 heading "Gets|(memory)"
col KK format 9,999,999 heading "Misses|(disk)"
col LL format 99.99 heading "% Misses"


TTITLE CENTER 'V$PARAMETER (SHARED POOL SIZE) AND V$SGASTAT (FREE MEMORY) REPORT'
select (to_number(VP.value) / 1024) / 1024 AA,
VS.bytes / 1048576 BB,
(VS.bytes / to_number(VP.value)) * 100 CC
from v$parameter VP, v$sgastat VS
where VS.name = 'free memory'
and VP.name = 'shared_pool_size';


PROMPT
PROMPT More than 5% Free = lower shared_pool_size parameter


TTITLE CENTER 'V$LIBRARYCACHE (SHARED SQL AREA) SUMMARY REPORT'
select sum(VL.pins) DD,
sum(VL.reloads) EE,
(sum(VL.reloads) / sum(VL.pins)) * 100 FF
from v$librarycache VL;


PROMPT
PROMPT More than 1% Reloads = raise shared_pool_size parameter


TTITLE CENTER 'V$ROWCACHE ENTRIES DETAIL REPORT (DICTIONARY)'
select parameter GG,
/* count HH, */
/* usage II, */
gets JJ,
getmisses KK,
(getmisses / (gets + getmisses + 1)) * 100 LL
from v$rowcache
order by parameter;


PROMPT
PROMPT Not tunable at this level of detail, provided for information only.


TTITLE CENTER 'V$ROWCACHE ENTRIES SUMMARY REPORT (DICTIONARY)'
select sum(gets) JJ,
sum(getmisses) KK,
(sum(getmisses) / (sum(gets) + sum(getmisses) + 1)) * 100 LL
from v$rowcache;


PROMPT
PROMPT More than 5% Misses (summary) = raise shared_pool_size parameter


exit;

Labels:

I am working on a need to clone databases, and for that I need to write a shell script which will dynamically generate the init.ora file for the target database from the source database's init.ora plus a temp file holding the changed parameter values.

For example, the temp file has a parameter like:
db_name=targetdb

and the source init.ora file has db_name=db1; the shell script should read the temp file and create a new init.ora with the changed value,
i.e. db_name=targetdb (all parameters not listed in the temp file stay the same, while all listed ones are changed).

Can you suggest some similar scripts for such work?

--------------------------------------------/

Try something like this:
CODE
set pages 0 term off feed off trims on lin 132 ver off
def fdb='FromDB'
def tdb='ToDB'
spo /tmp/init&&tdb..ora
Select NAME||'='||replace(VALUE,'&&fdb','&&tdb')
From V$PARAMETER
Where ISDEFAULT = 'FALSE';
spo off

Labels:

I have a situation where I would like to multiplex the redo logs, but the other drive on which to multiplex is a RAID. I have heard that putting redo logs on RAID is bad. Is it better in this situation not to multiplex at all? Or should I just go ahead and put the redo logs on the RAID?


RAID 1 --- reading and writing fast
RAID 5 --- reading fast but writing slow

If your log groups are not on RAID you must multiplex them. If they are on RAID 1 there is no need to multiplex, as they are already mirrored by RAID 1.


How about on RAID 5?


On RAID 5 there is redundancy too, but it comes from parity rather than a full mirror, so it is better to keep a multiplexed log.


RAID 0 - Striping. Fast access to disks.
RAID 1 - Mirror; you have only half of the total space, because the other half is the mirror.

These are the two RAID levels (0+1) recommended for use with Oracle. RAID 5 is slower. Remember that redo logs and archive logs are two components that typically see heavy I/O contention. So it is recommended to put those types of files on fast and separate storage devices.


What's the difference, or the relationship, between the Oracle undo tablespace and the redo logs?


The UNDO tablespace stores the before images of the data blocks, so the rest of the users can see consistent data before the user modifying the data commits the changes.

The redo logs, on the other hand, record the changes themselves and are used for instance recovery. When instance recovery is required, Oracle first rolls the data forward (re-applying the committed transactions from the redo logs), then rolls back the uncommitted transactions, so the database is left in a consistent state.


It's in the Concepts book. A simple answer could be:

Rollback segments are used to roll back or "undo" transactions. They're also used to provide read consistency. They contain the before images of all changes made during a transaction. They also can be overwritten after the transaction completes (commit or rollback).

Redo logs are used for recovery. They're essentially a sequential, chronological history of all changes made to the database. For example, if you need to restore an old version of a datafile, the redo logs (both archived and online) are used to bring the restored datafile back in sync with the database.


It might help to think of rollback as "undo". Rollback remembers all the changes made to a database during a transaction. If the transaction fails, or if you decide it wasn't what you wanted to do, you (or Oracle itself) can "undo" the changes and revert to the original data in the database.

Redo, as its name suggests, allows Oracle to roll forward and re-apply changes that may have been lost, for instance in the event of a database crash. When the instance starts up, Oracle checks the "age" of the datafiles against the control file and, if it finds a datafile is out of date, applies the redo (if available!) to bring it in line with the control file. If necessary it can then roll back any transactions which were incomplete at the time of the crash.

I'm sure you'll get lots more technical explanations, but that's the gist of it. Have a look at the Server Administrator's Guide for more information.


The redo logs are used to redo, or do over, things that have been entered into the database, and the rollback segments are used to undo things that have been entered into the database.

Let's say you have a transaction that affects 1 million rows; all of the changes will be stored as both redo and undo. Now let's say you type commit: everything in the log buffer is written to your online redo log files, and your rollback segments no longer hold the extents for this transaction. Now you have a disk failure: you will apply the redo to redo (do over) that transaction.

Next example:
Same transaction, but right before you type commit there is a power surge and your session just dies, or you type rollback. Now all the changes you made (before committing) are rolled back and undone.

Labels:

Just want to know the best way to delete all the objects (tables, indexes, sequences, etc.) from a tablespace.
Drop the tablespace and recreate it?

Just drop the objects of the user(s) that exist in that tablespace; this way you can be sure everything is removed. You don't need to drop the tablespace. If you are using Oracle Enterprise Manager (OEM) it's very simple: just click through database > schema > tables, select all the tables for that user and remove them from the schema, and do the same for the views/synonyms/sequences.
(OR)
If you do not have OEM, you can write a SQL statement to drop the tables/views/synonyms/... for each user individually, as sketched below.
(OR)
If you are not worried about the users and the data in them, simply drop the user, which will remove everything.
First list out all the users in that tablespace and proceed.
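
If you go the SQL route, you can have the dictionary generate the drop statements for you; a sketch along these lines (MYTBS is a placeholder tablespace name, and the spooled file should be reviewed before you run it):

set pagesize 0 feedback off verify off
spool drop_objects.sql
SELECT 'DROP TABLE ' || owner || '.' || segment_name || ' CASCADE CONSTRAINTS;'
FROM dba_segments
WHERE tablespace_name = 'MYTBS'
AND segment_type = 'TABLE';
spool off
-- indexes are dropped with their tables; sequences and views belong to
-- no tablespace and have to be dropped per schema as described above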

I do have OEM installed. When I try to run it, it will not let me connect. I'm leaving the Management Server box blank (the database is local) and using all the different username/password combos I can think of that Oracle came with. Am I missing something here?

For logging on to OEM you might be using the right password, but the TNSNAMES.ora file may have some issues. Check your sqlnet.ora file and all the TNS aliases in the tnsnames.ora file, and make sure you are using .world in your connect string, or just the plain string:
Ex: system/manager@orcl.world -- if sqlnet.ora has names.default_domain=world
Ex: system/manager@orcl -- if sqlnet.ora has no entry (or a commented-out one) for names.default_domain=world
The thing is, OEM configures itself from TNSNAMES.ora once installed, so while connecting it only expects you to connect with username/password, as the host name is already picked up by it. In the worst case create a new database tree and try again.

Labels:

When I run the procedure below, I run into the following error:
The error is ORA-00936: missing expression
I added DBMS_OUTPUT.PUT_LINE statements to see where it is failing, and received the following output:

SQL> exec proc_load_user_privileges
The error is 6
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 1
The error is 3
The error is 4
The error is 6
The error is ORA-00936: missing expression

It looks like the error happens right after the loop is done. How can I zero in on the statement that is causing the problem? I tried running all the SQL statements by themselves in SQL*Plus and they work fine. I am not sure where the syntax error is. Any help would be greatly appreciated.

Here is the procedure text

CREATE OR REPLACE PROCEDURE proc_load_user_privileges
AS

CURSOR cur_list_of_cols
IS
SELECT column_name FROM user_tab_columns WHERE table_name = 'TEMP_NSDA_USER_PRIVILEGE' AND column_name NOT IN ('SUN_ID', 'NODE_ID', 'OPERATION', 'FIRST_NM', 'LAST_NM', 'FUNCTIONAL_AREA_NM');

CURSOR cur_temp_nsda_user_privilege
IS
SELECT SUN_ID,NODE_ID,OPERATION,FIRST_NM,LAST_NM,FUNCTIONAL_AREA_NM FROM TEMP_NSDA_USER_PRIVILEGE;

v_current_sun_id VARCHAR2(30);
v_current_node_id NUMBER(20);
v_current_privilege_nm VARCHAR2(30);
v_current_user_privilege_val VARCHAR2(30);
v_current_column_name_in VARCHAR2(30);
v_block_str VARCHAR2(500);
v_dynamic_query_handle INTEGER;
feedback INTEGER;
v_curr VARCHAR2(30);
v_operation VARCHAR2(30);
v_first_nm VARCHAR2(30);
v_last_nm VARCHAR2(30);
v_functional_area VARCHAR2(30);

BEGIN
FOR rec_temp_nsda_user_priv IN cur_temp_nsda_user_privilege
LOOP
v_current_sun_id := rec_temp_nsda_user_priv.sun_id;
v_current_node_id := rec_temp_nsda_user_priv.node_id;
v_operation := rec_temp_nsda_user_priv.operation;
v_first_nm := rec_temp_nsda_user_priv.first_nm;
v_last_nm := rec_temp_nsda_user_priv.last_nm;
v_functional_area := rec_temp_nsda_user_priv.functional_area_nm;



IF v_operation = 'UPDATE' THEN
DELETE FROM NSDA_USER_PRIVILEGE_T
WHERE sun_id = v_current_sun_id;
END IF;
DBMS_OUTPUT.PUT_LINE('The error is 6');
FOR current_col IN cur_list_of_cols
LOOP
v_current_privilege_nm:= current_col.column_name;


v_block_str:=
'SELECT ' || v_current_privilege_nm || ' FROM TEMP_NSDA_USER_PRIVILEGE WHERE sun_id = ' || v_current_sun_id || ' AND node_id = ' || v_current_node_id ;


EXECUTE IMMEDIATE v_block_str INTO v_current_user_privilege_val;

BEGIN
SELECT
REPLACE(v_current_privilege_nm, '_', ' ')
INTO v_current_privilege_nm
FROM DUAL;

END;
DBMS_OUTPUT.PUT_LINE('The error is 1');


IF (v_current_user_privilege_val = 'Y')
THEN

INSERT INTO NSDA_USER_PRIVILEGE_T (sun_id, node_id, privilege_nm) VALUES (v_current_sun_id, v_current_node_id, v_current_privilege_nm);

ELSIF (v_current_user_privilege_val IS NOT NULL) AND (v_current_user_privilege_val != 'N')
THEN
v_current_user_privilege_val:= TO_NUMBER(v_current_user_privilege_val);
DBMS_OUTPUT.PUT_LINE('The error is 2');


INSERT INTO NSDA_USER_PRIVILEGE_T
(sun_id, node_id, privilege_nm, user_privilege_value)
VALUES
(v_current_sun_id,v_current_node_id,v_current_privilege_nm,v_current_user_privilege_val);

END IF;
DBMS_OUTPUT.PUT_LINE('The error is 3');



END LOOP;
DBMS_OUTPUT.PUT_LINE('The error is 4');

END LOOP;
DBMS_OUTPUT.PUT_LINE('The error is 5');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('The error is ' || SQLERRM);
END proc_load_user_privileges;
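
One thing worth checking: in the dynamic statement, v_current_sun_id and v_current_node_id are concatenated straight into the SQL text. If sun_id is NULL for a row, the generated text collapses to "WHERE sun_id = AND node_id = ...", which is exactly an ORA-00936; and since sun_id is a VARCHAR2, any non-numeric value would also break, because it is not enclosed in quotes. A hedged rework of that one statement using bind variables avoids both problems:

v_block_str := 'SELECT ' || v_current_privilege_nm ||
' FROM TEMP_NSDA_USER_PRIVILEGE WHERE sun_id = :b1 AND node_id = :b2';

-- bind variables keep NULLs and quoting from mangling the statement text
EXECUTE IMMEDIATE v_block_str
INTO v_current_user_privilege_val
USING v_current_sun_id, v_current_node_id;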

Labels:

I have more than one Oracle 8 database server. I want the SQL prompt to show my connect string and username whenever I log in to SQL*Plus. The connect string is the same as the server name.

Example:

Server 1 = aaaaa
Server 2 = bbbbb
Username = Naaa

Prompt should be

aaaaa.Naaa> or bbbbb.Naaa>

>>>>>
I use the following SQL script to change my prompt. Modify it according to your requirements.
Copy and paste the script below (between the dotted lines) into a file named login.sql.
Put this login.sql file in the working directory of your SQL*Plus (like c:\oracle\ora81\bin).
Oracle looks for a file named login.sql in its working directory; if it finds one, it executes it.

..................

set lines 1000
set pages 50
set serverout on size 500000
set head off
set pages 0
set termout off
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS'
/
spool z1.sql
select 'set sqlprompt '''||lower(global_name)||':'||user||'>''' from sys.global_name
/
spool off
@z1
set head on
set pages 60
set termout on

...................

>>>>>
I take a slightly different approach. I use the "new_value" clause of the column statement.

Here is the script:

-- setPrompt.sql
-- -------------
set term off

column prompt_col new_value prompt_var

define prompt_var = "not connected> "

set prompt off
select lower(user) || '@' || ' (' || global_name || ')' || chr(10) || ' > ' prompt_col
from global_name
/

set sqlprompt "&prompt_var"

set term on

Labels:

I'm fairly new to PL/SQL and I could use some help. I'm trying to create a cursor and build the table name based on an input parameter. Currently I'm using DBMS_SQL, which I find cumbersome. I'd like to find an easier way if possible. Here is an example of what I'd like to do:

Cursor DYNAMIC is
SELECT column1 from I_PARAM||restoftablename

In the above example I_PARAM is an input parameter.

You can try using the "EXECUTE IMMEDIATE" statement to run SQL built at runtime (the example below creates a table dynamically); also do get a second opinion.

*****************
declare
my_table varchar2(400);
begin
my_table := 'create table dynamic_table (my_name varchar2(20))';
EXECUTE IMMEDIATE my_table;
end;
/
*****************
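
For the original question, though, a weak REF CURSOR opened over a concatenated string (native dynamic SQL, available from 8i on) is closer to what you want than DBMS_SQL; a sketch, where i_param and restoftablename stand in for your real naming scheme and column1 is assumed to be character data:

CREATE OR REPLACE PROCEDURE show_dynamic (i_param IN VARCHAR2)
IS
TYPE t_rc IS REF CURSOR; -- weak ref cursor type
c_dyn t_rc;
v_col1 VARCHAR2(100); -- adjust to column1's real datatype
BEGIN
-- the table name is assembled from the parameter at run time
OPEN c_dyn FOR 'SELECT column1 FROM ' || i_param || 'restoftablename';
LOOP
FETCH c_dyn INTO v_col1;
EXIT WHEN c_dyn%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(v_col1);
END LOOP;
CLOSE c_dyn;
END;
/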

In Oracle RDBMS 7.3, how can I move a table from one tablespace to another without losing the permissions, indexes, etc. for this table?

Hmmm, good question... I have used different methods, but nothing like this in 7.x. Still, I'll try:
* First I would take an export of that table (say mytable.dmp)
* Open mytable.dmp in Notepad.
* Edit the tablespace to the new tablespace name (whichever tablespace you want to move to).
* Save the file.
* Try to import the dump file (mytable.dmp) into the user schema you want.

I can't guarantee that this procedure will work, but it's worth a try on a test table first.

Labels:

I'm new to Oracle 9i, using R2. I'm currently logged in as SYSDBA and tried creating a new user:

SQL> create user john identified by oracle
2 default tablespace users;

User created.

SQL> grant connect to john;

Grant succeeded.

SQL> grant create any table to john;

Grant succeeded.

SQL> grant create table to john;

Grant succeeded.

SQL> grant unlimited tablespace to john;

Grant succeeded.


Why is it that if I don't grant unlimited tablespace, john is not able to create a table? It gives this error:

ERROR at line 1:
ORA-01950: no privileges on tablespace 'USERS'

But I already stated he should use tablespace users when I created the user in the first place?

---------->

You have to assign a quota.

Assign Tablespace Quotas to Users

Grant to users who will be creating tables, clusters, materialized views, indexes, and other objects the privilege to create the object and a quota (space allowance or limit) in the tablespace intended to hold the object's segment. The security administrator is responsible for granting the required privileges to create objects to database users and for assigning tablespace quotas, as necessary, to database users.

---------->

Just drop the user john

and try this command:

create user john identified by oracle
default tablespace users
quota 5m on users;

After running this command, give john the required privileges to create a table,
then try whether it works without granting unlimited tablespace.

---------->

You don't have to drop the user. Just alter it:

alter user john quota unlimited on users;

---------->

When you create the user and grant him the Connect option, by default the user gets allocated UNLIMITED TABLESPACE; all you need to do is either revoke that option or, as bobanjayan said, just alter the user and add a quota.

---------->

The CONNECT role doesn't include the UNLIMITED TABLESPACE privilege; UNLIMITED TABLESPACE comes with the RESOURCE role.

If you grant the RESOURCE role to a user, he will also get an unlimited quota on the SYSTEM tablespace.
Be aware.
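
To see what a user actually holds, you can query the dictionary (JOHN is the example user from above; a MAX_BYTES of -1 means an unlimited quota on that tablespace, while the UNLIMITED TABLESPACE system privilege itself shows up in DBA_SYS_PRIVS):

SELECT tablespace_name, bytes, max_bytes
FROM dba_ts_quotas
WHERE username = 'JOHN';

SELECT privilege
FROM dba_sys_privs
WHERE grantee = 'JOHN'
AND privilege = 'UNLIMITED TABLESPACE';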

Labels:

Which is more efficient: adding more data files, or increasing the size of the data files of a tablespace?
Running Oracle 9.2 on Windows 2000 Server.

It's mostly a personal preference. One thing to consider is recovery: if you have fewer, larger files, then restoring and recovering a file with a bad block will take longer than if you had more, smaller files.

If you are not using RAID, then it is better to spread the data over multiple files on multiple disks.

It depends on what kind of storage you have: local disks or RAID.

For local disks, you want many datafiles spread over as many local disks as possible.

For RAID, it really doesn't matter: the datafiles are striped anyway. It's more a matter of convenience not to have huge datafiles, since if you have to move them later it's harder to find 20 GB of disk space than 5 chunks of 4 GB.

If you are going to be using RAID-5 (which is common), I cannot see how chopping up your database will help a lot. Unless you have multiple ranks of RAID-5 storage, all your data is stored in one big blob over which you have little physical control, if any.

I usually separate tables and indexes into different tablespaces, with their own datafiles, more as a matter of management than of performance, since I am not a big fan of self-extending Oracle storage.

RAID-5 automatically scatters your data when it is stored; that is part of its reliability function and also what can help your query performance. If I were you, I would separate the database logically, and not arbitrarily separate the storage on physical boundaries, given that they are meaningless in a RAID-5 environment unless you have multiple controllers and RAID ranks.

Labels:

A helpful statement to end the backup for tablespaces that are still in online (hot) backup mode:

1- Generate a script with the following commands:
svrmgrl> spool endhotbackup.sql
svrmgrl> select 'ALTER DATABASE DATAFILE '''||name||''' END BACKUP;' FROM V$DATAFILE;

2- Clean up endhotbackup.sql with vi

3- svrmgrl> @endhotbackup.sql

4- Start up the database
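
A variant that targets only the files actually left in backup mode; V$BACKUP is readable with the database mounted, and its STATUS stays ACTIVE while a file is in hot backup:

svrmgrl> spool endhotbackup.sql
svrmgrl> select 'ALTER DATABASE DATAFILE '''||d.name||''' END BACKUP;' from v$datafile d, v$backup b where d.file# = b.file# and b.status = 'ACTIVE';
svrmgrl> spool off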

Labels:

The decimals on your screens and layout sets are affected by:

OY01 - Country Global Parameters -> double-click on the country code

SU01 - Maintain Users -> click the Defaults button

Look in the Decimal format section.

Labels:

How do you solve time zone definition problems?

The time zone is defined in table TTZCU (refer to Note 91667), both system-wide and client-wide.

If you want it for an individual user, go to SU01, under the Defaults tab, Personal Time Zone section.

Settings for individual users are used when you have global users in different time zones.

For local users only, go to SM30 and change the table TTZCU, e.g. UTC+8.

This is done when your spool time is incorrect.

Labels:

In "USER MAINTENANCE- SU01" --> in the "logon tab" there are 5 different "user type"

1. dialog
2. system
3. communication
4. service
5. reference

Kindly mention the function and role of all the above mentioned user types specifically and how is one user type different from another.

::Dialog (A)::

User type for exactly one interactive user (all logon types including Internet users):

During a dialog logon, the system checks whether the password has expired or is initial. The user can change his or her own password.

Multiple dialog logons are checked and, where appropriate, logged.

::System (B)::

User type for background processing and communication within a system (internal RFC calls).

A dialog logon is not possible.

The system does not check whether the password has expired or is initial.

Due to a lack of interaction, no request for a change of password occurs. (Only the user administrator can change the password.)

Multiple logons are permissible.

::Communication (C)::

User type for dialog-free communication between systems (such as RFC users for ALE, Workflow, TMS, and CUA):

A dialog logon is not possible.

Whether the system checks for expired or initial passwords depends on the logon method (interactive or not interactive). Due to a lack of interaction, no request for a change of password occurs.

::Service (S)::

User type that is a dialog user available to a larger, anonymous group of users. Assign only very restricted authorizations for this user type:

During logon, the system does not check whether the password has expired or is initial. Only the user administrator can change the password (transaction SU01, Goto → Change Password).

Multiple logons are permissible.

Service users are used, for example, for anonymous system accesses through an ITS service. After an individual authentication, an anonymous session begun with a service user can be continued as a person-related session with a dialog user.

::Reference (L)::

User type for general, non-person-related users that allows the assignment of additional identical authorizations, such as for Internet users created with transaction SU01. You cannot log on to the system with a reference user.

To assign a reference user to a dialog user, specify it when maintaining the dialog user on the Roles tab page. In general, the application controls the assignment of reference users. This assignment is valid for all systems in a Central User Administration (CUA) landscape. If the assigned reference user does not exist in a CUA child system, the assignment is ignored.

You should be very cautious when creating reference users.

If you do not implement the reference user concept, you can deactivate this field in accordance with SAP Note 330067.

We also recommend that you set the value for the Customizing switch REF_USER_CHECK in table PRGN_CUST to "E". This means that only users of type REFERENCE can then be assigned. Changing the Customizing switch affects only new assignments of reference users. Existing assignments are retained.

We further recommend that you place all reference users in one particularly secure user group to protect them from changes to assigned authorizations and deletion.

Labels:

Purpose

The user administrator performs all tasks that are relevant to user management and role assignments. In the portal, all user management functions related to users and groups are provided by the user management engine (UME). The UME is integrated in the SAP NetWeaver Application Server (AS) Java.

For more information about the administration functions of the UME, see Administration of Users and Roles.

In this section you can find information about concepts that need additional clarification in a SAP NetWeaver Portal context. These are:

● UME Actions in the Portal – a brief description of how UME actions are integrated in the portal.

● UME Roles and Portal Roles – an explanation of the difference between these two types of roles and how they are both used in the portal.

In addition, you can find information about administration functions that are specific to the portal. These are:

● Assigning roles to users and groups

● Mapping users – for Single Sign-On purposes

Features

● In a portal installation, the UME provides you with tools for performing user management tasks in a set of iViews and worksets integrated in the User Administration role in the portal.

Labels:

Use

The user management engine (UME) uses UME actions to enforce authorizations. An action is a collection of Java permissions that define which activities a user can perform. UME actions can be assigned to UME roles or portal roles. If a role with a UME action is assigned to a user, the user gains the authorizations provided by the action. The UME verifies that users have the appropriate UME actions assigned to them before granting them access to UME iViews and functions. Other applications can also define or check for actions.

The following table lists the UME actions assigned to portal roles by default.

Portal Roles with Default UME Actions:

Delegated User Administrator: UME.Manage_Users, UME.Manage_Role_Assignments

Every User Core Role: UME.Manage_My_Profile

Standard User Role: UME.Manage_My_Profile

Super Administrator: UME.AclSuperUser, UME.Manage_All

System Admin: UME.System_Admin

User Administrator: UME.Manage_All


Some UME actions are defined specifically for the portal environment:

· UME.AclSuperUser

· UME.Manage_Role_Assignments

· UME.Remote_Producer_Read_Access

· UME.Remote_Producer_Write_Access

Integration

In the portal, you can assign UME actions to portal roles with the Role Editor. Each UME action is listed as a property in the Property Editor for roles. Set an action to Yes to assign it to the portal role and change the role's authorizations. This information is recorded in the Portal Content Directory (PCD), which is why you cannot use the delete function of identity management to remove actions from a portal role. When you try to delete the role with identity management, the UME only removes the user and group assignments. You must edit the role manually, either in identity management or in the Role Editor.

Labels:

In the portal, you can manage both user management engine (UME) roles and portal roles. Both types of roles determine what users can do, but each with a different focus. The following table lists the main differences between these two types of roles.

Comparison of UME and Portal Roles

UME Roles:

· Are a container for UME actions (actions are sets of Java permissions).

· Define a set of authorizations. By assigning a UME role, you define what authorizations a user has to run applications on the J2EE Engine. The authorizations are defined by the UME actions in the role.

· Are stored in the user management tables of the J2EE database.

· Are created with identity management.

· Protect access to applications on the J2EE Engine.

Portal Roles:

· Are a container for portal content (iViews, worksets, folders, and so on).

· Define how content is grouped together and how it is displayed in the portal. By assigning a portal role, you define which content a user sees in the portal.

· Like UME roles, can have UME actions assigned to them.

· Are stored in the Portal Content Directory (PCD) tables of the J2EE database.

· Are created in the Role Editor of the Portal Content Studio.

· Constitute a small part of the authorization concept of the portal. When you assign a portal role to a user or group, they get end user permission on the role. You can also define role assigner permission on a portal role: users or groups granted role assigner permission on a portal role can assign that portal role to users or groups.



Tools

The tools needed to manage UME and portal roles are identity management and the Portal Content Studio. The following table lists the main differences in the use of these tools.

Comparison of Identity Management and Portal Content Studio

Create and edit roles:
Identity Management: UME roles
Portal Content Studio: Portal roles

Assign UME actions:
Identity Management: UME roles and portal roles
Portal Content Studio: Portal roles

Assign roles to users and groups:
Identity Management: UME roles and portal roles
Portal Content Studio: None (can assign portal permissions for PCD objects to users and groups)


To perform these activities you need the required permissions.

More Information:

● Managing Users, Groups, and Roles

● Role Assignment

Example

Carmen Fernandez is assigned to the UME role Administrator and no other role. She has full administrator authorizations on the J2EE Engine, but does not see any content in the portal. In contrast, Oleg Semenov is assigned to the portal Super Administrator role. He can see all the administrator functions when he logs on to the portal, and he has the corresponding authorizations on the J2EE Engine.
