
If you’ve landed here, you’ve probably followed one of my tweets with the URL url.oradba.ch/30hnNDL. I used the MarsEdit functionality to download and update the site template. MarsEdit creates a new post, which is deleted immediately. The problem is that new posts are automatically published directly on Twitter. Of course, the tweet is not deleted automatically, but points to a dead end.

Anyway, if you’re already here, why don’t you take a look at some of my other “real” blog posts.

Audit Trail cleanup in Oracle Multitenant environments

A crucial aspect of any database audit concept is the management and maintenance of audit trails. Depending on the defined audit policies and the database activity, an audit trail can grow relatively quickly. Oracle Multitenant environments increase the operational effort because the root container and each PDB use their own audit trail. Ok, for a simple CDB with 2-3 PDBs this effort is manageable, but what about CDBs with 40 or more PDBs?

Let’s look into the different possibilities.

Common Purge Job for all PDBs

Oracle allows you to initiate an audit trail cleanup for all PDBs with the dbms_audit_mgmt.clean_audit_trail procedure by specifying dbms_audit_mgmt.container_all. The following example initiates a cleanup in the root container for all PDBs without considering the last archive timestamp.

ALTER SESSION SET CONTAINER=cdb$root;
BEGIN
   dbms_audit_mgmt.clean_audit_trail(
      audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
      container => dbms_audit_mgmt.container_all,
      use_last_arch_timestamp => false);
END;
/

The problem with this method is closed PDBs. First, let’s check how many audit records we have per PDB.

COLUMN con_id FORMAT 999999
COLUMN count FORMAT 99999
COLUMN name FORMAT a8
SELECT
   v.con_id, v.name, v.open_mode, COUNT(u.event_timestamp) count
FROM
   cdb_unified_audit_trail u
FULL OUTER JOIN v$containers v ON u.con_id = v.con_id
GROUP BY
   v.con_id, v.name, v.open_mode
ORDER BY
   v.con_id;

CON_ID NAME     OPEN_MODE   COUNT
====== ======== =========== =====
     1 CDB$ROOT READ WRITE    644
     2 PDB$SEED READ ONLY       0
     3 PDB1     MOUNTED         0
     4 PDB2     READ WRITE     36
     5 PDB3     MOUNTED         0
     6 PDBSEC   READ WRITE     27

Running the cleanup on this container database ends with an ORA-46273: DBMS_AUDIT_MGMT operation failed in one of the PDB.

BEGIN
   dbms_audit_mgmt.clean_audit_trail(
      audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
      container => dbms_audit_mgmt.container_all,
      use_last_arch_timestamp => false);
END;
/

BEGIN
*
ERROR at line 1:
ORA-46273: DBMS_AUDIT_MGMT operation failed in one of the PDB
ORA-06512: at "AUDSYS.DBMS_AUDIT_MGMT", line 204
ORA-06512: at "AUDSYS.DBMS_AUDIT_MGMT", line 2137
ORA-06512: at "AUDSYS.DBMS_AUDIT_MGMT", line 1409
ORA-06512: at line 2

The error ORA-46273 is confusing. When we check the audit records, we see that the entries have effectively been deleted. It would be more understandable if Oracle simply issued a warning rather than an error.

COLUMN con_id FORMAT 999999
COLUMN count FORMAT 99999
COLUMN name FORMAT a8
SELECT
   v.con_id, v.name, v.open_mode, COUNT(u.event_timestamp) count
FROM
   cdb_unified_audit_trail u
FULL OUTER JOIN v$containers v ON u.con_id = v.con_id
GROUP BY
   v.con_id, v.name, v.open_mode
ORDER BY
   v.con_id;

CON_ID NAME     OPEN_MODE   COUNT
====== ======== =========== =====
     1 CDB$ROOT READ WRITE    5
     2 PDB$SEED READ ONLY     0
     3 PDB1     MOUNTED       0
     4 PDB2     READ WRITE    1
     5 PDB3     MOUNTED       0
     6 PDBSEC   READ WRITE    1

The same applies when we run the cleanup task as a job. The job will always fail if one PDB is in MOUNTED state. This is annoying when monitoring scheduler jobs.
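
For reference, a purge job like the AUDSYS.TVD_TEST job used below could have been created roughly as follows. This is only a sketch; the job name TVD_TEST and the 12-hour interval are assumptions, the parameters correspond to the create_purge_job call shown later in this post.

BEGIN
   -- Sketch: purge job for all containers (job name and interval are assumptions)
   dbms_audit_mgmt.create_purge_job(
      audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
      audit_trail_purge_interval => 12,
      audit_trail_purge_name => 'TVD_TEST',
      container => dbms_audit_mgmt.container_all,
      use_last_arch_timestamp => false);
END;
/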

SQL> BEGIN
2 DBMS_SCHEDULER.RUN_JOB(job_name => '"AUDSYS"."TVD_TEST"');
3 END;
4 /
BEGIN
*
ERROR at line 1:
ORA-46273: DBMS_AUDIT_MGMT operation failed in one of the PDB
ORA-06512: at "AUDSYS.DBMS_AUDIT_MGMT", line 204
ORA-06512: at "AUDSYS.DBMS_AUDIT_MGMT", line 2137
ORA-06512: at "AUDSYS.DBMS_AUDIT_MGMT", line 1409
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_ISCHED", line 242
ORA-06512: at "SYS.DBMS_SCHEDULER", line 566
ORA-06512: at line 2

There is another issue related to dbms_audit_mgmt.create_purge_job in Oracle 12.1 to 18c. The procedure creates a wrong scheduler job: container_all is not set for the scheduler job, which results in the job running only in the root container. The issue is addressed in Oracle bug 27527173 – ISSUE WITH CREATE PURGE JOB FOR UNIFIED_AUDIT_TRAIL WITH CONTAINER_ALL. A bugfix is available for Oracle 12.1.0.2, 12.2.0.1 and 18.0.0.0. Alternatively, you can work around this issue by manually creating a scheduler job to purge the audit trail rather than using dbms_audit_mgmt.create_purge_job. The issue is permanently fixed in Oracle 19.0.0.0.

dbms_audit_mgmt or Scheduler Job

Considering the above problems, it seems better to directly create a regular scheduler job for deleting audit trails. All right, this does work as expected, but these kinds of jobs are never visible in the cdb_audit_mgmt_cleanup_jobs view. Technically this is not a problem, but it is good practice for audit-related cleanup jobs to be visible where one would expect them.

The following example creates a regular scheduler job.

BEGIN
   dbms_scheduler.create_job(
      job_name => 'TVD_AUDIT_PURGE_JOB',
      job_type => 'PLSQL_BLOCK',
      job_action => 'BEGIN dbms_audit_mgmt.clean_audit_trail(
         audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
         container => dbms_audit_mgmt.container_current,
         use_last_arch_timestamp => false); END;',
      start_date => trunc(sysdate)+(2/24),
      repeat_interval => 'FREQ=HOURLY;INTERVAL=12',
      enabled => true,
      comments => 'Create an audit purge job for unified audit');
END;
/
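
Because this is a plain scheduler job, it has to be monitored via the regular scheduler views. The following check is a minimal sketch, assuming the usual dba_scheduler_jobs columns:

COLUMN owner FORMAT a10
COLUMN job_name FORMAT a25
COLUMN repeat_interval FORMAT a30
SELECT owner, job_name, enabled, repeat_interval
FROM dba_scheduler_jobs
WHERE job_name = 'TVD_AUDIT_PURGE_JOB';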

Insights when creating audit purge job via dbms_scheduler:

  • Job is not visible in cdb_audit_mgmt_cleanup_jobs
  • Start date can be defined when the job is created
  • Job can be created in any schema that has the appropriate rights.

The following example creates an audit trail cleanup job for the unified audit trail using dbms_audit_mgmt and adjusts the start time to 01:00 with dbms_scheduler.

BEGIN
   -- Create regular purge job
   dbms_audit_mgmt.create_purge_job(
      audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
      audit_trail_purge_interval => 12,
      audit_trail_purge_name=> 'tvd_unified_audit_purge_job',
      container => dbms_audit_mgmt.container_current,
      use_last_arch_timestamp => true);
   -- Adapt start date of dbms_audit_mgmt purge job
   dbms_scheduler.set_attribute (
      name => 'tvd_unified_audit_purge_job',
      attribute => 'start_date',
      value => trunc(sysdate) + ( 1 / 24 ) );
END;
/
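
After creating the job, it should show up in the corresponding cleanup-job view. The following check is a minimal sketch, assuming the usual columns of cdb_audit_mgmt_cleanup_jobs:

COLUMN job_name FORMAT a30
COLUMN audit_trail FORMAT a20
SELECT con_id, job_name, audit_trail, job_status
FROM cdb_audit_mgmt_cleanup_jobs;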

Insights when creating audit purge job via dbms_audit_mgmt:

  • Job is visible in cdb_audit_mgmt_cleanup_jobs
  • Start date cannot be defined when the job is created with dbms_audit_mgmt. The start date has to be changed later using dbms_scheduler.set_attribute. This also applies to other adjustments to the job.
  • For Oracle 18c and 19c the job will be created in the AUDSYS schema. In Oracle 12c and earlier it will be created in the SYS schema.

Audit Window

In the dbms_audit_mgmt example above I set use_last_arch_timestamp=>true. This means that only audit records older than the archive timestamp will be deleted. The archive timestamp is usually set by the Oracle Audit Vault agent when it reads the audit records. But what happens if a third-party tool like Splunk collects the audit data? These tools generally just read the data without setting a corresponding archive timestamp, and Oracle will then never delete old audit records. Therefore it is crucial that the archive timestamp is set by the tool as soon as the data has been collected. In addition, it is good practice to define an audit window. This is nothing more than setting up a job which regularly sets the archive timestamp, e.g. to sysdate-30. This way all audit records older than 30 days will be removed and the database is always nicely tidied up. 🙂 The following example creates such an archive timestamp job. Every 12 hours the archive timestamp is set to sysdate-30.

BEGIN
   dbms_scheduler.create_job(
   job_name =>'tvd_unified_audit_timestamp_job',
   job_type => 'PLSQL_BLOCK',
   job_action => 'BEGIN dbms_audit_mgmt.set_last_archive_timestamp(
      audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
      container => dbms_audit_mgmt.container_current,
      last_archive_time => sysdate - 30); END;',
   start_date => trunc(sysdate)+(2/24),
   repeat_interval => 'FREQ=HOURLY;INTERVAL=12',
   enabled => true,
   comments => 'Create an regularly archive timestamp for unified audit');
END;
/
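
To verify that the archive timestamps are actually being set, they can be queried afterwards. The following query is a minimal sketch, assuming the columns audit_trail and last_archive_ts of the dba_audit_mgmt_last_arch_ts view:

COLUMN audit_trail FORMAT a20
SELECT audit_trail, last_archive_ts
FROM dba_audit_mgmt_last_arch_ts;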

Conclusion

The administration of audit records also works well in Oracle Multitenant environments. dbms_audit_mgmt helps with the automatic and manual management of audit trails. Nevertheless, there are a few issues and best practices to consider:

  • Audit data should always be stored in a separate tablespace. However, depending on the Oracle version, there may be issues related to the relocation of the audit trail. For example, not all partitions are moved correctly; LOB and index partitions will continue to be created in the old tablespace. See also bug 27576342 – dbms_audit_mgmt.set_audit_trail_location does not move lob and index partitions or MOS note 2428624.1 Dbms_Audit_Mgmt.Set_Audit_Trail_Location Does Not Move Lob And Index Partitions
  • The partition size, or the interval at which a new audit partition is created, has to be adjusted according to the audit volume and database activity.
  • The different jobs for the cleanup as well as for setting the archive timestamp should run in the same schema. This increases transparency. As of Oracle 18c, dbms_audit_mgmt creates the jobs under the AUDSYS user; older versions usually use the SYS user.
  • In Oracle Multitenant environments a central cleanup job is the easiest option, but it makes little sense because, depending on the state of the PDBs, it will always finish with an error. Here it is recommended to define one job for each CDB / PDB (see the sketch after this list). In this way, the frequency of the deletion can be individually adapted to the database activity and compliance requirements of the particular PDB.
  • So far we have only discussed unified audit. If the database runs in pure unified mode, there is no need to define housekeeping for the legacy audit trails (AUD$, FGA$ etc.), although it is good practice to define minimal housekeeping for the legacy audit trail anyway.
  • Defining a time window in which audit records are kept in the database is usually an easy method to retain a certain level of audit information in the database. Third-party tools like Splunk then have enough time to collect the relevant information.
  • For a successful database audit concept it is crucial to define the different users and roles that can access or maintain audit information.
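
A minimal sketch of such a per-PDB purge job, to be repeated in every PDB; the PDB name pdb1, the job name and the interval are assumptions:

ALTER SESSION SET CONTAINER=pdb1;
BEGIN
   -- Sketch: purge job for the current PDB only (name and interval are assumptions)
   dbms_audit_mgmt.create_purge_job(
      audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
      audit_trail_purge_interval => 12,
      audit_trail_purge_name => 'tvd_pdb1_audit_purge_job',
      container => dbms_audit_mgmt.container_current,
      use_last_arch_timestamp => true);
END;
/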

References

Some links related to this blog post:

  • Oracle® Database PL/SQL Packages and Types Reference 19c DBMS_AUDIT_MGMT
  • Master Note For Oracle Database Auditing [1299033.1]
  • Dbms_Audit_Mgmt.Set_Audit_Trail_Location Does Not Move Lob And Index Partitions [2428624.1]
  • Bug 27576342 – dbms_audit_mgmt.set_audit_trail_location does not move lob and index partitions
  • Bug 22859443 – CONTAINER_ALL IS IGNORED IN CREATE_PURGE_JOB
  • Patch 27527173 – ISSUE WITH CREATE PURGE JOB FOR UNIFIED_AUDIT_TRAIL WITH CONTAINER_ALL

Conferences, Workshops and Events 2019

I am looking forward to a busy second half of 2019. I will be presenting at various conferences and events.

The comprehensive list is available under public appearances. I’ll update the events as soon as more information is available. Slides of the presentations will usually be posted on this site.

Oracle CPU / PSU Advisory July 2019

Recently, right in the middle of the summer holidays, Oracle released the third Critical Patch Advisory of the year for its products. It seems there’s a lot of work going on in Redwood Shores. Oracle has fixed about 319 security vulnerabilities across its products. The Oracle database is relatively prominently represented with 9 security vulnerabilities and a maximum CVSS rating of 9.8. The problem CVE-2018-11058 with such a high CVSS rating is related to Core RDBMS and affects all Oracle releases on various platforms. In addition, this vulnerability can also be exploited remotely over the network. 3 of the security bug fixes are for client-only installations, so you have to patch your database servers as well as the clients.

Oracle Unified Directory itself is not mentioned in the Oracle Critical Patch Update Advisory. But the MOS note 2385785.1 Information And Bug Listing of Oracle Unified Directory Bundle Patches: 12.2.1.3.x (12cR2PS3) Version does provide information on the latest bundle patch for OUD. Besides this patch, there are updates for Oracle WebLogic and Oracle Java as well (see links below).

The highest CVSS Base Score of vulnerabilities affecting Oracle Database Server is 9.8. The following components are affected:

  • Oracle 11.2 Core RDBMS, Java VM, Oracle Text
  • Oracle 12.1 Core RDBMS, Java VM, Oracle Text
  • Oracle 12.2 Core RDBMS, Java VM, Oracle Text, Spatial
  • Oracle 18c Core RDBMS, Java VM, Oracle Text, Spatial
  • Oracle 19c Core RDBMS, Java VM

Oracle Java VM is not installed by default. It is therefore recommended that you check your database environment to see if it is necessary to apply this critical patch update.
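
To check whether the Java VM is installed in a particular database, you can query the component registry. The following query is a minimal sketch, assuming the usual dba_registry columns:

COLUMN comp_id FORMAT a10
COLUMN comp_name FORMAT a40
COLUMN status FORMAT a10
SELECT comp_id, comp_name, version, status
FROM dba_registry
WHERE comp_id IN ('JAVAVM', 'CATJAVA');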

For Oracle Fusion Middleware the situation looks somewhat different. The Critical Patch Update includes no fewer than 33 fixes for vulnerabilities. Several of the vulnerabilities may be remotely exploitable without authentication and are rated with the highest CVSS rating of 9.8.

By the way, I’ve just updated my Docker build scripts for Oracle Databases as well as Oracle Unified Directory on GitHub to use the latest release updates. Ok, I still haven’t improved the documentation, but at least the build scripts are up to date. 🙂

A few links related to this Critical Patch Update.

SQL Developer 19.1 unable to use connection type ldap with OUD

Due to a tip from a work colleague, I came across a changed behaviour of the latest SQL Developer release. It affects the LDAP connection type, i.e. the use of an LDAP directory for database name resolution. After specifying one or more LDAP servers, it should actually be possible to select the corresponding context in the drop-down list. But as of SQL Developer 19.1 this drop-down list remains empty. As far as I can tell, this problem only occurs when using Oracle Unified Directory together with the latest release of SQL Developer.

You can enable debugging in SQL Developer to search for errors, but it is far easier to have a quick look at the OUD access log. There you will instantly notice an error, as you can see in the following excerpt.

[02/Jul/2019:04:34:51 +0000] CONNECT conn=276 from=172.20.0.1:55224 to=172.20.0.2:1389 protocol=LDAP
[02/Jul/2019:04:34:51 +0000] BIND REQ conn=276 op=0 msgID=1 type=SIMPLE dn="" version=3
[02/Jul/2019:04:34:51 +0000] BIND RES conn=276 op=0 msgID=1 result=0 authDN="" etime=0
[02/Jul/2019:04:34:51 +0000] SEARCH REQ conn=276 op=1 msgID=2 base="" scope=sub filter="(objectClass=orclContext)" attrs="cn,DN"
[02/Jul/2019:04:34:51 +0000] SEARCH RES conn=276 op=1 msgID=2 result=50 message="The request control with Object Identifier (OID) "1.2.840.113556.1.4.319" cannot be used due to insufficient access rights" nentries=0 etime=3
[02/Jul/2019:04:34:51 +0000] SEARCH REQ conn=276 op=2 msgID=3 base="" scope=one filter="(objectClass=orclContext)" attrs="cn,DN"
[02/Jul/2019:04:34:51 +0000] SEARCH RES conn=276 op=2 msgID=3 result=50 message="The request control with Object Identifier (OID) "1.2.840.113556.1.4.319" cannot be used due to insufficient access rights" nentries=0 etime=1
[02/Jul/2019:04:34:51 +0000] SEARCH REQ conn=276 op=3 msgID=4 base="" scope=base filter="(objectClass=orclContext)" attrs="cn,DN"
[02/Jul/2019:04:34:51 +0000] SEARCH RES conn=276 op=3 msgID=4 result=50 message="The request control with Object Identifier (OID) "1.2.840.113556.1.4.319" cannot be used due to insufficient access rights" nentries=0 etime=1
[02/Jul/2019:04:34:51 +0000] UNBIND REQ conn=276 op=4 msgID=5
[02/Jul/2019:04:34:51 +0000] DISCONNECT conn=276 reason="Client Disconnect"

There is an issue with an LDAP control, as stated in the error message: The request control with Object Identifier (OID) “1.2.840.113556.1.4.319” cannot be used due to insufficient access rights. The control 1.2.840.113556.1.4.319, better known as the Paged Results Control, allows the client to control the search results. In particular, it is used to iterate through the search results one page at a time. And why is this of interest in connection with SQL Developer?

When using the LDAP connection type, SQL Developer uses an anonymous LDAP query to look up the Oracle context and database names. Starting with SQL Developer 19.1, Oracle seems to use the Paged Results Control, which makes sense if you have a large number of LDAP entries. By default, Oracle Unified Directory does not allow the Paged Results Control for anonymous connections. The simplest solution to this problem is to allow the Paged Results Control for anonymous connections as well. For this you have to change the access control handler and the global ACI, either with dsconfig or via OUDSM. MOS note 1932191.1 provides a step-by-step solution for modifying the global ACI via dsconfig, OUDSM or by modifying config.ldif. The latter is not recommended, because it requires a restart of the OUD server. Furthermore, the direct modification of config.ldif is only necessary in a few cases, e.g. when resetting the root user or directory manager password.

Procedure to modify the global aci via dsconfig:

  • Start dsconfig and directly modify the ACI of the Access Control Handler
dsconfig -h oud -p 4444 -D "cn=Directory Manager" -j $PWD_FILE -X set-access-control-handler-prop
  • Select the property global-aci
  • Select remove one or more values
  • Select the # that has Authenticated users control access, which contains the 1.2.840.113556.1.4.319 OID
  • Select use these values
  • Select finish and apply any changes to the Dsee Compat Access Control Handler
  • Start again dsconfig to add the new aci for the Access Control Handler
dsconfig -h oud -p 4444 -D "cn=Directory Manager" -j $PWD_FILE -X set-access-control-handler-prop
  • Select the property global-aci
  • Select add one or more values
  • When prompted with Enter another value for the global-aci property [continue]:, enter
    (targetcontrol="1.3.6.1.1.12|| 1.3.6.1.1.13.1 ||1.3.6.1.1.13.2 ||
    1.2.840.113556.1.4.319 ||1.2.826.0.1.3344810.2.3 ||
    2.16.840.1.113730.3.4.18 ||2.16.840.1.113730.3.4.9 ||
    1.2.840.113556.1.4.473 ||1.3.6.1.4.1.42.2.27.9.5.9")
    (version 3.0; acl "Authenticated users control access"; allow(read) userdn="ldap:///anyone";)
    
  • Select use these values
  • Select finish and apply any changes to the Dsee Compat Access Control Handler

The entire adjustment can also be done directly with two dsconfig commands.

First remove the global-aci for userdn all (make sure to remove the line wraps when copying/pasting the aci):

dsconfig -h oud -p 4444 -D "cn=Directory Manager" -j $PWD_FILE -X -n set-access-control-handler-prop \
--remove global-aci:"(targetcontrol=\"1.3.6.1.1.12 || 1.3.6.1.1.13.1 || 1.3.6.1.1.13.2 || 1.2.840.113556.1.4.319 || 1.2.826.0.1.3344810.2.3 || 2.16.840.1.113730.3.4.18 || 2.16.840.1.113730.3.4.9 || 1.2.840.113556.1.4.473 || 1.3.6.1.4.1.42.2.27.9.5.9\") (version 3.0; acl \"Authenticated users control access\"; allow(read) userdn=\"ldap:///all\";)"

Add the new global-aci for userdn anyone:

dsconfig -h oud -p 4444 -D "cn=Directory Manager" -j $PWD_FILE -X -n set-access-control-handler-prop \
--add global-aci:"(targetcontrol=\"1.3.6.1.1.12|| 1.3.6.1.1.13.1 ||1.3.6.1.1.13.2 ||1.2.840.113556.1.4.319 ||1.2.826.0.1.3344810.2.3 ||2.16.840.1.113730.3.4.18 ||2.16.840.1.113730.3.4.9 ||1.2.840.113556.1.4.473 ||1.3.6.1.4.1.42.2.27.9.5.9\") (version 3.0; acl \"Authenticated users control access\"; allow(read) userdn=\"ldap:///anyone\";)"

Now the SQL Developer can load the LDAP context and database name.

The question now is whether this change is a bug or a feature. In any case, it would make sense if SQL Developer could alternatively allow entering the Oracle context directly and thus avoid the anonymous query.

Some links related to this blog post:

PDB_OS_CREDENTIAL with external table pre-processor

As part of a customer project I am currently enhancing PDB security and isolation. Since OS interaction is necessary, I cannot just use lockdown profiles to block OS access. The idea is to isolate the PDB with lockdown profiles and allow dedicated OS access. However, the OS access should be performed by a user other than oracle. This is where PDB_OS_CREDENTIAL comes in. According to the Oracle documentation, PDB_OS_CREDENTIAL allows you to specify the credentials of a dedicated operating system user, which is used for OS interactions of the PDB. See PDB_OS_CREDENTIAL in Oracle® Database Reference 19c or Managing Security for a Multitenant Environment in Oracle® Multitenant Administrator’s Guide 19c.

The following OS interactions are covered by PDB_OS_CREDENTIAL:

  • External table pre-processors
  • PL/SQL library executions
  • External jobs that do not already have an operating system credential specified

OS Configuration

First a dedicated operating system user is required. I use one OS user per PDB and one generic OS user for the CDB. Let’s create them using useradd. Create a dedicated OS group and the OS user for the CDB, and set its password to manager, which in fact is not a secure password but is sufficient for this simple test. 🙂

groupadd restricted
useradd --create-home --gid restricted \
--shell /bin/bash oracdb

echo "manager" | passwd oracdb --stdin

Create OS users for PDB1 and PDB2 and set the passwords to manager.

useradd --create-home --gid restricted \
--shell /bin/bash orapdb1

useradd --create-home --gid restricted \
--shell /bin/bash orapdb2

echo "manager" | passwd orapdb1 --stdin
echo "manager" | passwd orapdb2 --stdin

In addition to the users, we need a test script which can be executed as an external table pre-processor. I’ll create a dummy script executing the OS command id. This shows the user under which the script is executed.

echo "/bin/id" >/u01/eng/run_id.sh
chmod 755 /u01/eng/run_id.sh

Create Credentials (DBMS_CREDENTIAL)

Create the credentials for the different OS users using DBMS_CREDENTIAL. The credentials will be created in the root container (cdb$root).

ALTER SESSION SET CONTAINER=cdb$root;

BEGIN
dbms_credential.create_credential(
credential_name =>'GENERIC_PDB_OS_USER',
username => 'oracdb',
password => 'manager');
END;
/

BEGIN
dbms_credential.create_credential(
credential_name => 'PDB1_OS_USER',
username => 'orapdb1',
password => 'manager');
END;
/

Show the credentials:

COLUMN con_id FORMAT 999999
COLUMN owner FORMAT A10
COLUMN credential_name FORMAT A20

SELECT con_id, owner, credential_name
FROM cdb_credentials;

CON_ID OWNER CREDENTIAL_NAME
====== ===== ===============
     1 SYS   PDB1_OS_USER
     1 SYS   GENERIC_PDB_OS_USER

Assign Credentials (PDB_OS_CREDENTIAL)

The init.ora parameter PDB_OS_CREDENTIAL is now set to the credentials created above. By setting PDB_OS_CREDENTIAL in the CDB, we define a default credential for all PDBs. Although the documentation shows that the parameter can be set directly with an ALTER SYSTEM command, this does not work (see Multitenant : Pluggable Database (PDB) Operating System (OS) Credentials in Oracle Database 12c Release 2 (12.2) by Tim Hall).

ALTER SYSTEM SET PDB_OS_CREDENTIAL=generic_pdb_os_user SCOPE=SPFILE;

The ALTER SYSTEM command fails with the following error.

ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65046: operation not allowed from outside a pluggable database

Setting PDB_OS_CREDENTIAL via parameter file does work.

CONNECT / AS SYSDBA
SHOW PARAMETER PDB_OS_CREDENTIAL
SHUTDOWN IMMEDIATE;
CREATE PFILE='/tmp/pfile.txt' FROM SPFILE;
HOST echo "*.pdb_os_credential=GENERIC_PDB_OS_USER" >> /tmp/pfile.txt
CREATE SPFILE FROM PFILE='/tmp/pfile.txt';
STARTUP;
SHOW PARAMETER PDB_OS_CREDENTIAL

NAME              TYPE   VALUE
================= ====== ===================
pdb_os_credential string GENERIC_PDB_OS_USER

Setting a PDB-specific credential for PDB1.

-- set the container to PDB1
ALTER SESSION SET CONTAINER=pdb1;
ALTER SYSTEM SET PDB_OS_CREDENTIAL=PDB1_OS_USER SCOPE=SPFILE;
STARTUP FORCE;
SHOW PARAMETER PDB_OS_CREDENTIAL

NAME              TYPE   VALUE
================= ====== ===================
pdb_os_credential string PDB1_OS_USER

Test the credential

To test the credential we create an external table using the simple script created above as a pre-processor.

Create the directory:

CONNECT tvd_hr/tvd_hr@ol7db18.trivadislabs.com:1521/PDB1
CREATE OR REPLACE DIRECTORY exec_dir AS '/u01/eng';

Create the external table:

CREATE TABLE id (id VARCHAR2(2000))
ORGANIZATION EXTERNAL(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY exec_dir
ACCESS PARAMETERS(
RECORDS DELIMITED BY NEWLINE
PREPROCESSOR exec_dir:'run_id.sh')
LOCATION(exec_dir:'run_id.sh')
);

Select from the external table:

SELECT * FROM id;

ID
==========================================================
uid=1001(oracle) gid=1010(oinstall) groups=1010(oinstall)

As you can see, the user returned by the external table pre-processor is still oracle. The log file of the external table pre-processor is also created as user oracle.

host ls -al /u01/eng
total 8
drwxr-xr-x. 2 oracle oinstall 43 Jun 17 04:47 .
drwxrwxr-x. 5 oracle oinstall 47 Jun 14 12:20 ..
-rw-r--r--. 1 oracle oinstall 582 Jun 17 04:47 ID_30158.log
-rwxr-xr-x. 1 oracle oinstall 8 Jun 17 04:24 run_id.sh

Conclusion

The parameter PDB_OS_CREDENTIAL is a promising feature to enhance PDB isolation and security. However, it seems to be broken for this particular use case. I have opened corresponding service requests with Oracle, but the issue is still being investigated. So far there are several bugs assigned to this issue, but unfortunately none of them is public.

  • Bug 18814778 CDB: Tracking bug: PDB_OS_CREDENTIAL does not influence os environment
  • Bug 25820082 PDB_OS_CREDENTIAL parameter not working
  • Bug 29791472 12.2 Lockdown Profile not working as desired
  • Bug 29791380 12.2 Lockdown Profile not working as desired

I’ll provide an update as soon as there is a workaround or solution for this issue.

SOUG Day 2019 – Oracle Database in Docker

Today I had the opportunity to give a presentation on Oracle Database in Docker at the SOUG day in Olten. It was a great opportunity to discuss how Oracle database engineering can be simplified using Docker.

Besides the demo the following topics were discussed:

  • Docker images, container and volumes
  • Requirements to setup Oracle database image
  • Build an Oracle database image
  • Discuss the Dockerfile and build scripts
  • Create database containers
  • Use cases for Oracle database in Docker
  • Demo setup Oracle database with Enterprise User Security and Oracle Unified Directory

Combining docker-compose and custom initialisation scripts allows a wide variety of use cases for the Oracle database in Docker, e.g. rapid deployment of Oracle databases for engineering and testing. But is it also suitable for production environments?

The presentation and information related to the event:

Some references and links related to this blog post and the presentation:

Configure Oracle EUSM to use LDAPS

With the introduction of Oracle 18c, eusm is officially designated as an Enterprise User Security utility. It is now officially documented in the Enterprise User Security Administrator’s Guide. Before, we had to be content with the somewhat sparse MOS note 1085065.1 EUSM, Command Line Tool For EUS Administration and Some EUS Good to Knows. In addition, the tool has been improved with the latest release. Up to and including Oracle 12c Release 2 it was not possible to establish a secure connection to the LDAP server using eusm. The tool does use SASL authentication but always required an unencrypted LDAP connection to the directory server. For sensitive environments with enhanced security requirements like banks, insurance companies etc., the use of unencrypted network traffic is a no-go. But the new documentation for eusm starts with a short paragraph “About SSL Port Connectivity through EUSM to OID”, which made me confident.

So there are additional parameters to support SSL:

  • ldap_ssl_port: the SSL port of the directory server.
  • keystore: the path to the keystore in PKCS12 format (for example, an ewallet.p12 file).
  • key_pass: controls how the keystore password is provided, e.g. interactively or via the command line.

Initially I got confused by the example. A file named ewallet.p12 is usually an Oracle wallet. That’s why I started to use an Oracle wallet as the keystore for eusm. But this was complete rubbish. Leaning back and thinking again helped. eusm is written in Java and the parameter is named keystore. Java and keystore results in a Java keystore, doesn’t it? So I was a bit more successful with my second attempt.

Configure the keystore

As soon as one realises that the required keystore file is a Java keystore of type PKCS12, it is straightforward. eusm just requires the root certificate to validate the OUD certificate during the initialisation of the LDAPS connection. In an enterprise environment this certificate can be obtained from the internal certification authority. Alternatively, it may also be exported from another keystore. In my EUS test environment I do not have an enterprise CA. Therefore I have to get the corresponding certificate directly from the Oracle directory server.

Log in to the directory server to export the certificate.

keytool -export -noprompt -rfc \
-alias server-cert \
-keystore ${OUD_INSTANCE_HOME}/OUD/config/keystore \
-storepass $(cat ${OUD_INSTANCE_HOME}/OUD/config/keystore.pin) \
-file /u01/config/oud_trusted_cert.txt

Certificate stored in file

Copy the file to the database server and import it into a Java keystore. The Java keytool will create a new keystore if you specify a keystore file which does not yet exist. Do not forget to specify PKCS12 as the store type. You can either specify the keystore password interactively or use -storepass to provide the password via the command line. I use the password from the keystore pin file $ORACLE_BASE/network/admin/keystore.pin.

$ORACLE_HOME/jdk/bin/keytool -import -trustcacerts \
-alias oud_root_certificate \
-storetype pkcs12 \
-keystore $ORACLE_BASE/network/admin/keystore.jks \
-storepass $(cat $ORACLE_BASE/network/admin/keystore.pin) \
-file /u01/oud/oud_trusted_cert.txt

Owner: CN=oud, O=Oracle Unified Directory Self-Signed Certificate
Issuer: CN=oud, O=Oracle Unified Directory Self-Signed Certificate
Serial number: c8cff33
Valid from: Thu Feb 28 06:39:40 UTC 2019 until: Sat Feb 27 06:39:40 UTC 2021
Certificate fingerprints:
MD5: E2:C2:43:8B:CD:EB:95:9E:F1:FC:D8:C3:FF:A7:91:AF
SHA1: 80:0D:9E:89:1B:BC:69:99:02:6A:E7:B5:A6:D2:63:E9:59:5A:C3:BF
SHA256: C7:14:54:1A:C3:FE:28:72:6E:B0:16:82:42:C9:6E:3B:43:BE:D6:C7:3A:31:60:1B:
60:1D:8D:5E:7F:66:D9:7B
Signature algorithm name: SHA1withRSA
Subject Public Key Algorithm: 1024-bit RSA key
Version: 3
Trust this certificate? [no]: yes
Certificate was added to keystore

List the content of your java keystore file.

$ORACLE_HOME/jdk/bin/keytool -list \
-keystore $ORACLE_BASE/network/admin/keystore.jks \
-storepass $(cat $ORACLE_BASE/network/admin/keystore.pin)

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

oud_root_certificate, Mar 1, 2019, trustedCertEntry,
Certificate fingerprint (SHA1): 80:0D:9E:89:1B:BC:69:99:02:6A:E7:B5:A6:D2:63:E9:
59:5A:C3:BF

The method is the same if you use an enterprise certificate. You just have to use the root certificate provided by the CA.

Usage of eusm

To establish a connection via SSL, you have to specify the Java keystore and the keystore password, or -K, when invoking eusm. The following command lists the EUS domains. The keystore password is omitted on the command line and has to be entered interactively at the prompt (-K).

eusm listDomains realm_dn="dc=trivadislabs,dc=com" \
ldap_host=oud \
ldap_ssl_port=1636 \
ldap_user_dn="cn=eusadmin,cn=oraclecontext" \
ldap_user_password=eusadmin \
keystore=$ORACLE_BASE/network/admin/keystore.jks -K

Enter keystore password (key_pass):
LIST OF DOMAINS IN REALM: dc=trivadislabs,dc=com

OracleDefaultDomain

The following command lists all the domain mappings. The keystore password is now specified via the key_pass parameter.

eusm listMappings domain_name="OracleDefaultDomain" \
realm_dn="dc=trivadislabs,dc=com" \
ldap_host=oud \
ldap_ssl_port=1636 \
ldap_user_dn="cn=eusadmin,cn=oraclecontext" \
ldap_user_password=eusadmin \
keystore=/u00/app/oracle/network/admin/keystore.jks \
key_pass=$(cat $ORACLE_BASE/network/admin/keystore.pin)

LIST OF DATABASE SCHEMA MAPPINGS::

Mapping Name: MAPPING0
Mapping Type: SUBTREE
Mapping DN: ou=People,dc=trivadislabs,dc=com
Mapping schema:EUS_USERS
Mapping Level :DOMAIN

Below you see an excerpt of the OUD access log file. The log entry for the CONNECT command shows the LDAPS protocol.

[01/Mar/2019:14:49:12 +0000] CONNECT conn=1111 from=172.18.0.3:34126 to=172.18.0.2:1636 protocol=LDAPS
[01/Mar/2019:14:49:13 +0000] BIND REQ conn=1111 op=0 msgID=1 type=SIMPLE dn="cn=eusadmin,cn=oraclecontext" version=3
[01/Mar/2019:14:49:13 +0000] BIND RES conn=1111 op=0 msgID=1 result=0 authDN="cn=eusadmin,cn=oraclecontext" etime=0
[01/Mar/2019:14:49:13 +0000] SEARCH REQ conn=1111 op=1 msgID=2 base="dc=trivadislabs,dc=com" scope=base filter="(objectclass=*)" attrs="orclversion"
[01/Mar/2019:14:49:13 +0000] SEARCH RES conn=1111 op=1 msgID=2 result=0 nentries=1 etime=1
[01/Mar/2019:14:49:13 +0000] SEARCH REQ conn=1111 op=2 msgID=3 base="cn=OracleDefaultDomain,cn=OracleDBSecurity,cn=Products,cn=OracleContext,dc=trivadislabs,dc=com" scope=one filter="(|(objectClass=orclDBEntryLevelMapping)(objectclass=orclDBSubtreeLevelMapping))" attrs="cn,orcldbdistinguishedname,orcldbnativeuser,objectclass"
[01/Mar/2019:14:49:13 +0000] SEARCH RES conn=1111 op=2 msgID=3 result=0 nentries=1 etime=1
[01/Mar/2019:14:49:13 +0000] DISCONNECT conn=1111 reason="Client Disconnect"

Conclusion

It took way too long until eusm became officially available, since it has been part of the binaries since Oracle 11g. The fact that LDAPS is finally supported is a significant step towards a general improvement of the security of databases as well as directory servers. It is getting much easier to harden directory servers and limit access to the LDAPS protocol. A little unfortunate, in my opinion, is the Oracle documentation regarding the configuration of the Java keystore. A simple example would have simplified the setup.

Some links related to this blog post:

OUD 12c – SSLHandshakeException with “no cipher suites in common”

Recently I updated the Java installation of my Oracle Unified Directory (OUD) 12.2.1.0.3 environment to the latest release, Java 1.8.0 update 202 to be exact (p28916775_180202_Linux-x86-64.zip). Actually a piece of cake, I’ve done this a few times in the past. My Enterprise User Security (EUS) test environment runs in Docker: one container for the database and another one for the directory server. Updates are usually straightforward: stop the containers, rebuild the images with the latest software / patches and recreate the containers. But not this time. After restarting OUD, my EUS authentication seemed to be broken. When trying to log in, I got a friendly ORA-01017 error.

SQL> connect blofeld/********
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
A check of the OUD access log file showed a cipher error.
[21/Feb/2019:06:21:27] CONNECT conn=5 from=172.20.0.3:50376 to=172.20.0.2:1636 protocol=LDAPS
[21/Feb/2019:06:21:27] DISCONNECT conn=5 reason="I/O Error" msg="no cipher suites in common"

Groundhog Day? Endless loop? I knew I had fixed this before. So I checked the solutions in MOS notes 2397791.1 and 2304757.1 again. According to my understanding, the java.security file looked ok. The required legacy ciphers had been enabled by removing 3DES_EDE_CBC from the list of jdk.tls.disabledAlgorithms.
I finally did several tests with different Java versions (1.8.0 update 192 and 1.8.0 update 202) and different java.security files. In the third attempt, database authentication with EUS and OUD in combination with Java 1.8.0 update 202 also worked. The solution was rather simple: I used the java.security file from Java 1.8.0 update 192 rather than taking the new version and enabling 3DES_EDE_CBC. Running diff on both files uncovered the culprits.

diff java.security java.security_202_default
645c645
< EC keySize < 224
---
> EC keySize < 224, 3DES_EDE_CBC, anon, NULL
700c700,701
< RC4_128, RC4_40, DES_CBC, DES40_CBC
---
> RC4_128, RC4_40, DES_CBC, DES40_CBC, \
> 3DES_EDE_CBC
Or just the lines with jdk.tls.disabledAlgorithms.
jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, \
    EC keySize < 224, 3DES_EDE_CBC, anon, NULL
A difference due to 3DES_EDE_CBC was to be expected, since I made the comparison against the default java.security file, where this algorithm had not yet been removed. But anon, NULL is new. The list of disabled algorithms in jdk.tls.disabledAlgorithms has been altered in Java 1.8.0 update 202. I could have seen this myself if I had looked through the release notes before installing the software 🙂 . There is a Java bug related to this, see JDK-8211883 Disable anon and NULL cipher suites. The problem now is that my EUS is working again, but it uses insecure and legacy algorithms. A proper fix of this issue has to be implemented in the LDAP / EUS stack of the Oracle database binaries.

Conclusion

First of all, do read the release notes before updating production environments 🙂 . As always in IT, a little change on one side can unexpectedly break something on the other side. The solution presented here can only be a workaround, because we endanger security with legacy algorithms. Oracle should soon update the LDAP / EUS stack in the Oracle binaries.

  • Fix for Java 1.8.0 update 192 and older: use the solution described in MOS note 2304757.1, update java.security and remove 3DES_EDE_CBC from jdk.tls.disabledAlgorithms
  • Fix for Java 1.8.0 update 201 and newer: either use an old java.security which works for your EUS environment or remove 3DES_EDE_CBC, anon and NULL from jdk.tls.disabledAlgorithms in your java.security

Links

A few links related to this post:

  • OUD 12c – EUS Integration Failing with Message “no cipher suites in common” [2397791.1]
  • OUD 11g – EUS Authentication Fails with Error Message “no cipher suites in common” [2304757.1]
  • Java 1.8.0 update 201 release notes
  • Java bug JDK-8211883 Disable anon and NULL cipher suites
  • Preview of my Docker compose files to setup an Oracle Enterprise User Security Environment on Docker GitHub oehrlis/docker

DOAG Red Stack Magazin – Oracle Unified Directory in Docker

In mid-June I wrote an article for the DOAG Red Stack Magazin about my work on Oracle Unified Directory in Docker. At about the same time I gave my DOAG SIG Security presentation on the same topic. In the meantime the article has been published in the latest issue of the DOAG Red Stack Magazin. For this reason I am using the opportunity to make the PDF version of the article available on oradba.ch. The article is written in German and available as a Trivadis version as well as a Red Stack version, although the two versions differ only in the number of typos and the layout.

None of the articles is currently available in English. On request, I will also write articles about Oracle Unified Directory in English in the future. However, I currently still have a lot of ideas for more blog posts about database security, Oracle Enterprise User Security and Oracle Unified Directory on my to-do list. And blog posts I usually write in English… 🙂