
DP Managed control file restore failed


[Normal] From: RSM@cellmanager.com ""  Time: 1/28/2016 4:24:47 AM

      Restore session 2016/01/28-21 started.

 

[Normal] From: ob2rman@xyxyxyxy.com "databasename"  Time: 01/28/2016 04:24:48 AM

      Starting restore of target database.

 

      Net service name: DATABASENAME.

      Instance status: MOUNTED.

      Instance name: instancename.

      Database DBID = 3561953013.

      Database control file type: CURRENT.

      Database log mode: ARCHIVELOG.

 

 

[Normal] From: ob2rman@xyxyxyxy.com "databasename"  Time: 01/28/2016 04:24:48 AM

      Starting restore of Data Protector managed control file backup.

 

[Critical] From: RSM@cellmanager.com ""  Time: 1/28/2016 4:38:38 AM

      Error accessing the Media Management daemon, in line 737, file ..\rcsmutil.c.

      Media Management daemon subsystem reports:

            "Details unknown."

 

[Major] From: OB2BAR_DMA@xyxyxyxy.com "databasename"  Time: 1/28/2016 4:39:15 AM

      Received ABORT request from RSM (ERR: Media Agent aborted.)

 

[Major] From: ob2rman@xyxyxyxy.com "databasename"  Time: 01/28/2016 04:39:15 AM

      [12:8010] Internal error.

 

[Major] From: ob2rman@xyxyxyxy.com "databasename"  Time: 01/28/2016 04:39:15 AM

      Restore of Data Protector managed control file backup failed.

 

[Major] From: ob2rman@xyxyxyxy.com "databasename"  Time: 01/28/2016 04:39:15 AM

      Restore of target database failed.

1. The Omni services are up and running.

2. The device is working fine; an ad-hoc backup with this device succeeds.

3. Full permissions (7777) are set on /var/opt/omni/tmp.

I am not sure why I am getting a media error. I even tried a different device and a different backup version, but no luck.

 

 

 


Exclude files without trees


Hello.

Is it possible to exclude a specific file type without a defined tree?
For example, we want to back up a whole server but want to exclude some files.

In the "backup object summary" only / is listed. When we define our excludes there, it doesn't work.
I guess that's because we have to define the excludes for the specific directories.
But that would be unpleasant.


So is it possible to do it without defining trees, or recursively for the root directory /?

Thanks in advance!

(DP) Support Tip: Limits on the number of streams to a StoreOnce server and store.


Hi,


There are quite a few factors that limit the number of streams to a StoreOnce server and store. Here's the list.



1. Physical server limit

An overall limit for the number of connections to one server is hardcoded to 100 per connection type (either data or command). This is for software servers. For hardware devices, this is typically 192 connections per node.



2. Device settings: Max. Number of Connections per Store

Device_Max_Connect.PNG


The device settings allow you to set a maximum number of connections per store.



3. Gateway settings (Advanced): Max. Number of Parallel Streams per Client System

GW_Max_Connect.PNG


The advanced settings on a gateway allow you to set a maximum number of parallel streams per client system.



4. Load balancing

The maximum value of the load balancing setting in the backup specification is another limiting factor.



5. Number of source objects

Obviously, there will never be more data streams than source objects backed up in parallel.
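
Putting these together (a rough illustration only, with purely hypothetical numbers), the effective number of parallel streams to one store is approximately the minimum of the factors above:

    effective streams ~ min( server connection limit,
                             Max. Number of Connections per Store,
                             gateway limits actually in use,
                             load balancing MAX,
                             number of source objects )

    e.g.  min( 192, 48, 2 gateways x 10 = 20, MAX = 32, 25 objects ) = 20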



Koen

Adding drives to an existing library . . .


I have one virtual library with one robot and 50 drives.  I need more tape drives so from the VTL GUI, I assigned 100 more drives.  From the DP GUI, I went to "Devices & Media," selected "Devices," and clicked on "Actions" and selected "Autoconfigure Devices."  When the Device Autoconfiguration Wizard comes up, I see my robot and 150 drives.  I select the top of the tree and click on "Finish."  What I was expecting to see when I went back to "Devices" was my existing robot and 150 drives.  What I see instead is my existing robot with 50 drives and a 2nd robot with (2) added to the name of the existing robot, and all 150 drives.

It would be easy enough to delete the original robot and 50 drives and use the new robot and 150 drives but I was wondering if this is expected behavior in DP.  Will it create another robot every time I want to add more drives?

Thanks,

Randy

remove ADR from clients


Hello,

We have some clients that we need to remove this module from. Is it possible to remove just the Automatic Disaster Recovery module from the clients, or do you have to completely uninstall everything from the client and reinstall without the ADR option?

Thanks!

importing client fails


I am having some problems importing an Ubuntu 15.10 client into Data Protector to be backed up. The error I'm receiving is as follows:

 

[Normal]  Getting list of clients for installation...

[Normal]  Connecting to client spongebob.test.com...

[Normal] <spongebob.test.com>  Checking for response from system spongebob.test.com.

[Critical]  spongebob.test.com: Unsupported architecture/OS type

      (Linux mayhem 4.2.0-25-generic #30-Ubuntu SMP Mon Jan 18 12:31:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)

      Skipping client!

It looks like it's due to the kernel being too new for DP to recognize. Is there a workaround for this?

 

Thanks!

(DP) Support Tip: Parallel restore consumes User Objects on Windows


A parallel restore allows you to restore data concurrently from multiple objects to multiple disks or filesystems while reading the media only once, thus improving the speed of the restore.

When multiple objects are selected in the Data Protector (DP) Graphical User Interface (GUI), the number of User Objects increases.
If it reaches the limit, the DP GUI cannot create a new tab page to select other objects.

In such a case, please consider increasing the value for User Objects.


HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\USERProcessHandleQuota
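
As a hedged sketch of how this could be checked and changed from an elevated command prompt (the value 15000 is purely illustrative and not a recommendation; the default and permissible range depend on the Windows version, and a restart is typically required for the change to take effect):

    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows" /v USERProcessHandleQuota

    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows" /v USERProcessHandleQuota /t REG_DWORD /d 15000 /f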

 

Regarding any risk of increasing User Objects, please contact Microsoft Support.

(DP) Support Tip: Storage Space needed for ExchangeGRE


Error condition: 

ExchangeGRE console returns error: 

System.Management.Automation.RemoteException: Database 'DBNAME' is offline.

+ CategoryInfo: ResourceUnavailable: (0:Int32) [Restore-Mailbox], RecipientTaskException

+ FullyQualifiedErrorId: C023C151,Microsoft.Exchange.Management.RecipientTasks.RestoreMailbox

Cause: 

The database ran out of disk space and was dismounted.

During the search process in recovery, a temporary mailbox is created in the production database and items are recovered to it using the restore-mailbox cmdlet. Depending on the mailbox search options specified, it can potentially move the entire content of a mailbox from the recovery database to this temporary mailbox, and this can require up to twice the source mailbox size in storage space. So if the mailbox was 10 GB at backup time, in the worst-case scenario of broad search criteria and an .edb file with no whitespace, the process would need 10 GB for transaction logs and 10 GB for .edb growth.


DP 9.05 File server backup size


When I start the backup of our Windows Server 2012 R2 file server (hosted on Hyper-V), the disks all show up at roughly double their size.

In this example, one drive of the server:

On the OS side, used space on disk:  12.2 GB  (screenshot)

In HP Data Protector (backup size):  35,390,604 KB  (screenshot)

Another one:

OS side: 2.2 TB  -->  Data Protector: 4.5 TB

Does anybody have an idea what's happening?

 

(DP) Support Tip: AntiVirus Programs, Firewalls and PostgreSQL

Should you encounter the IDB service refusing to start, or notice the IDB service down, and find the following entries in the PG log (Omniback/Service/DB80/PG/PG_Log/ on Windows, /var/opt/omni/DB80/PG/PG_log/ on UNIX):
 
...WARNING C-00000000005E4B20: hpdpidb/hpdpidb_app@127.0.0.1:50911 Pooler Error: pgbouncer cannot connect to server
...WARNING C-00000000005E4B20: hpdpidb/hpdpidb_app@127.0.0.1:50914 Pooler Error: pgbouncer cannot connect to server
 
it is recommended to check the exclusions of the antivirus software.
 
These files must be excluded:
 
C:\Program Files\PostgreSQL\8.x\bin\pg_ctl.exe
C:\Program Files\PostgreSQL\8.x\bin\postgres.exe
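 
After adding the exclusions, a quick way to confirm that the IDB services come back up is to restart and check the Data Protector services on the Cell Manager (omnisv is the standard DP service CLI; exact paths and behavior may vary slightly by DP version):
 
    omnisv -stop
    omnisv -start
    omnisv -status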
 
** (PostgreSQL recommendations on firewall and antivirus)
 
Antivirus software
If you have any antivirus software installed, you must exclude the data directories that are to be used by PostgreSQL and must exclude postgresql.exe process. If that still does not help, it may be required to completely uninstall the antivirus software from the machine.
Antivirus software can interfere with PostgreSQL's operation, because PostgreSQL requires file access commands in Windows to behave exactly as documented by Microsoft, and many antivirus programs contain errors or accidental behavior changes that cause these commands to misbehave subtly. Most programs do not care because they access files in fairly simple ways. Because PostgreSQL is continuously reading from and writing to the same set of files from multiple processes, it tends to trigger programming and design mistakes in antivirus software, particularly problems related to concurrency. Such problems can cause random and unpredictable errors, or even data corruption.
Antivirus software is also likely to dramatically slow down PostgreSQL's operation. For that reason, you should at least exclude postgres.exe and the data directories so the scanner ignores them.
What Anti-Virus software is compatible?
The systems used to build the Windows installers all run either Sophos AV or AVG Free Edition, and those systems pass a full set of PostgreSQL regression tests running those programs. Microsoft Security Essentials is also known to work.
Specific issues have been reported with the nod32 antivirus product. If you are using this product, add "postmaster.exe" to the list of excluded processes (available under advanced options). This has been reported to fix the problem.
Specific issues have also been reported with McAfee and Panda anti-virus software and NetLimiter network monitoring software. While some people do have PostgreSQL working with these software packages, there are no specific or even recommended solutions that have worked in all cases, so the issues appear to be installation-specific, sometimes even requiring uninstallation.
Software firewalls
If you have any 3rd-party firewall software installed on your machine, try either disabling it or uninstalling it. There's really no need for 3rd party firewalls on Windows XP and above, as the built-in firewall provided by Microsoft does an excellent job already. Some badly-written 3rd party firewalls do not uninstall correctly, so after uninstallation you might have to tell Windows to repair its network settings.
If you had a 3rd-party firewall and have now uninstalled it, make sure to turn Windows Firewall back on, as many products turn it off during installation and fail to turn it back on during uninstallation.

Max backup sessions in parallel

We are able to run only 4 sessions at a time; we need to run at least 8.
We have changed MaxBSessions to 8 and restarted DP, but we still cannot run more than 4.
Windows 2008 CM, DP 9.04

(DP) Support Tip: DP INET cluster aware


When creating a filesystem backup of a clustered node, it is important to note that: (1) the backup should be created against the virtual hostname imported into the cell; (2) if an expansion of the physical hostname displays clustered disks along with local disks, then DP INET is not running in cluster mode.

IPC Read Error - Lost connection to BMA - DP9.05


Hello everybody,

Since the upgrade from 7.03 to 9.05, I get the following error message whenever the tape library has to load a new tape because the one in use is full.

[Normal] From: BMA@backupserver.domain.org "IBM:ULTRIUM-TD5_2"  Time: 1/28/2016 7:41:29 PM
	Tape1:0:0:0C
	Medium header verification completed, 0 errors found.

[Normal] From: BMA@backupserver.domain.org "IBM:ULTRIUM-TD5_2"  Time: 1/28/2016 7:41:44 PM
	Ejecting medium '5'.

[Normal] From: BMA@backupserver.domain.org "IBM:ULTRIUM-TD5_2"  Time: 1/28/2016 7:41:44 PM
	=> UMA@backupserver.domain.org@Changer2147483646:0:0:1
	Unloading medium to slot 5 from device Tape1:0:0:0C

[Normal] From: BMA@backupserver.domain.org "IBM:ULTRIUM-TD5_2"  Time: 1/28/2016 7:42:06 PM
	=> UMA@backupserver.domain.org@Changer2147483646:0:0:1
	Loading medium from slot 8 to device Tape1:0:0:0C

[Critical] From: BDA-NET@iclient.domain.org "SID"  Time: 1/28/2016 7:42:43 PM
	IPC failure reading NET message (IPC Read Error
	System error: [10054] Connection reset by peer
) => aborting.

[Critical] From: OB2BAR_SAPBACK@iclient.domain.org "SID"  Time: 1/28/2016 7:42:43 PM
	Unexpected close reading NET message => aborting.

[Major] From: BSM@backupserver.domain.org "SID_Tape"  Time: 1/28/2016 7:42:43 PM
[61:3003]  	Lost connection to BMA named "IBM:ULTRIUM-TD5_2"
	on host backupserver.domain.org.
	Ipc subsystem reports: "IPC Read Error
	System error: [10054] Connection reset by peer
"
[Normal] From: BMA@backupserver.domain.org "IBM:ULTRIUM-TD5_1"  Time: 1/30/2016 7:14:36 AM
	Tape0:0:0:0C
	Medium header verification completed, 0 errors found.

[Normal] From: BMA@backupserver.domain.org "IBM:ULTRIUM-TD5_1"  Time: 1/30/2016 7:14:52 AM
	Ejecting medium '7'.

[Normal] From: BMA@backupserver.domain.org "IBM:ULTRIUM-TD5_1"  Time: 1/30/2016 7:14:52 AM
	=> UMA@backupserver.domain.org@Changer2147483646:0:0:1
	Unloading medium to slot 7 from device Tape0:0:0:0C

[Normal] From: BMA@backupserver.domain.org "IBM:ULTRIUM-TD5_1"  Time: 1/30/2016 7:15:16 AM
	=> UMA@backupserver.domain.org@Changer2147483646:0:0:1
	Loading medium from slot 2 to device Tape0:0:0:0C

[Critical] From: BDA-NET@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	IPC failure reading NET message (IPC Read Error
	System error: [10054] Connection reset by peer
) => aborting.

[Major] From: BSM@backupserver.domain.org "SID_Tape"  Time: 1/30/2016 7:15:57 AM
[61:3003]  	Lost connection to BMA named "IBM:ULTRIUM-TD5_1"
	on host backupserver.domain.org.
	Ipc subsystem reports: "IPC Read Error
	System error: [10054] Connection reset by peer
"

[Critical] From: BDA-NET@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	IPC failure reading NET message (IPC Read Error
	System error: [10054] Connection reset by peer
) => aborting.

[Critical] From: BDA-NET@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	IPC failure reading NET message (IPC Read Error
	System error: [10054] Connection reset by peer
) => aborting.

[Critical] From: BDA-NET@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	IPC failure reading NET message (IPC Read Error
	System error: [10054] Connection reset by peer
) => aborting.

[Critical] From: OB2BAR_SAPBACK@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	Unexpected close reading NET message => aborting.

[Critical] From: OB2BAR_SAPBACK@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	Received ABORT request from SM => aborting.

[Critical] From: OB2BAR_SAPBACK@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	Unexpected close reading NET message => aborting.

[Critical] From: OB2BAR_SAPBACK@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	Unexpected close reading NET message => aborting.

[Critical] From: OB2BAR_SAPBACK@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	Unexpected close reading NET message => aborting.

[Critical] From: OB2BAR_SAPBACK@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	Received ABORT request from SM => aborting.

[Critical] From: OB2BAR_SAPBACK@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	Received ABORT request from SM => aborting.

[Critical] From: OB2BAR_SAPBACK@client.domain.org "SID"  Time: 1/30/2016 7:15:57 AM
	Received ABORT request from SM => aborting.

The tape library (Overland NEO2000e LTO5 FC) is directly connected to the backup server via Fibre Channel (two tape drives > FC > FC card in the backup server).

The abort comes within a minute, so I could not find a parameter in the global file that matches this timeout.

Does anyone have an idea what it could be?

 

 

Best regards,

Tobias

(DP) Support Tip: DP / VMware Integration Support Matrix - always check supported versions


From the TE HELP ELEVATION case notes:

Vcenter & ESXi : 5.5.0 2068190 <================ ???????????????

This is not very clear or certain, and because a DEBUG file is provided, it is better to check the real vCenter version.

From the DP DEBUG, it is identified as :

 [199] 2016-01-06 15:03:52.198 ("/integ/vep/vepa/ $Rev: 2517 $ $Date:: 2009-10-28 16:12:58":66)

[199] <<=== (6) }  /* ViAPI::VimSOAPSDK::getServiceContent */

[199]

[110] [AppUtil::getServiceContent] Returning:

Type='class ns1__ServiceContent';value=(aboutInfo='Type='class ns1__AboutInfo';value=(name='VMware vCenter Server';fullName='VMware vCenter Server 4.1.0 build-258902';vendor='VMware, Inc.';version='4.1.0';build='258902';)';)

The key is build 258902, and this points to vCenter Server 4.1, which is NOT SUPPORTED.

(DP) Support Tip: Data Protector name resolution during Push Installation


The Installation Server holds a repository of the Data Protector installation packages for a specific architecture. The Cell Manager is by default also an Installation Server. Each time you perform a remote installation, you access the Installation Server. The advantage of using Installation Servers is that the time required for remote installation, update, upgrade, and removal of Data Protector software is greatly reduced, especially in enterprise environments.

When a remote installation is performed, the client communicates with and gets the Data Protector binaries from the Installation Server by accessing the shared folder created on the Installation Server, so the client tries to access either:

  1. \\Installation_Server_FQDN\OmniBack or
  2. \\Installation_Server_Shortname\Omniback

In order to access this shared folder the client must be able to resolve either the FQDN or shortname of the Installation Server.

A Windows system that will become your future Installation Server must meet the following requirements:

  • TCP/UDP 445 must be reachable. For a new Data Protector client push installation (without any Data Protector components on the client), an accessible Installation Server share is required. Alternatively, if the Installation Server depot share cannot be accessed, the initial Data Protector client installation must be done locally. (See the quick checks below.)
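
As a quick sanity check from the client side (placeholder names; the PowerShell cmdlet requires Windows 8/2012 or later):

    nslookup Installation_Server_FQDN
    net use \\Installation_Server_FQDN\OmniBack
    Test-NetConnection Installation_Server_FQDN -Port 445     (PowerShell)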

 


(DP) Support Tip: DP / VMware Integration Support Matrix - Check Supported Versions


You need to be very specific about the VMware environment versions to be able to check them in the support matrix.

Example :          vCenter 5.1 U3

Checking the DP Support Matrix:

https://softwaresupport.hp.com/group/softwaresupport/search-result/-/facetsearch/document/KM00711729

Support Matrices for HP Data Protector 8.1

HP Data Protector 8.10
Virtualization Support Matrix
Version: 2.1 Date: October 2015


Table 3: Data Protector 8.10 Virtual Environment Agent platform support

VMware vCenter Server 5.1 U2, 5.5, 6.0

=> vCenter Server 5.1 U3 is not listed and clearly it is NOT supported on DP 8.1x. Only vCenter Server 5.1 U2 is supported.


Checking the next version :


https://softwaresupport.hp.com/group/softwaresupport/search-result/-/facetsearch/document/KM01029738

Support Matrices for DP 9.0

HPE Data Protector 9.0x
Virtualization Support Matrix
Version: 1.7 Date: December 2015

Table 3: VMware vCenter support
VMware vCenter support for Data Protector 9.0x

VMware vCenter Server 5.1 U3, 5.1 U2

=> The matrix is very specific about the vCenter version; a supported version will be clearly listed.


Reference :

kb.vmware.com/kb/392
Determining VMware Software Version and Build Number (392)

kb.vmware.com/kb/1014508
Correlating VMware product build numbers to update levels (1014508)
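
If VMware PowerCLI is available, a quick way to read the exact version and build directly from vCenter is shown below (a sketch; vcenter.example.com is a placeholder, and the cmdlet/property names assume a standard PowerCLI installation):

    Connect-VIServer -Server vcenter.example.com
    $global:DefaultVIServer | Select-Object Name, Version, Build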


Another way of checking (on the DP Support side) is via DP debugging, to verify things:

[199] 2016-01-06 15:03:52.198 ("/integ/vep/vepa/ $Rev: 2517 $ $Date:: 2009-10-28 16:12:58":66)
[199] <<=== (6) } /* ViAPI::VimSOAPSDK::getServiceContent */
[199]
[110] [AppUtil::getServiceContent] Returning:
Type='class ns1__ServiceContent';value=(aboutInfo='Type='class ns1__AboutInfo';value=(name='VMware vCenter Server';fullName='VMware vCenter Server 4.1.0 build-258902';vendor='VMware, Inc.';version='4.1.0';build='258902';)';)


Key entry is fullName='VMware vCenter Server 4.1.0 build-258902'

Credentials for adding new clients


I did the omniinetpasswd -add and omniinetpasswd -inst_srv_user and added one client successfully.  When I tried to add 10 clients, DP asked for my credentials 10 times, once for connecting to each server.  Is there a way to push/add multiple clients and only type in your credentials once?  I'm about to add 1300 clients and my fingers hurt just typing the number 1300.

Thanks,

Randy

(DP) Support Tip: omnirc OB2IPCKEEPALIVE=1 - ensure the OS ‘keepalive’ is set to 15 minutes


A short technical note on what OB2IPCKEEPALIVE=1 does, why it is set, and the related OS 'keepalive' setting.

When DP has an open connection it may be disconnected by the operating system or any hardware that is in the data transport chain.

Typical disconnection timeouts are 1 or 2 hours, depending on the platform, firewall, or network complexity. When such a silent disconnection occurs, DP loses the connection and is not always able to recover.

To keep such a connection alive, it is possible to open the connection with a 'keepalive' option. The lower-level IPC layer then sends out 'keepalive' packets to stimulate the connection and prevent a silent disconnection.

The 'keepalive' packets are part of the TCP protocol and are transparent to the program opening or using the connection. They are handled by the system and are not seen by the application.

It must be clear that the 'keepalive' functionality is not a property of DP but of the lower-level system code.

To open the connections with the ‘keepalive’ option the user must set OB2IPCKEEPALIVE=1 in the omnirc file of each host in the cell that can potentially suffer from a disconnection.
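
A minimal example of the omnirc entry (the file locations below are the typical defaults; adjust to your installation):

    # omnirc on each affected host
    # Windows: <Data_Protector_home>\omnirc      UNIX: /opt/omni/.omnirc
    OB2IPCKEEPALIVE=1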

The issue is that by default the ‘keepalive’ interval (the time between two packets) is set to two hours on many systems, rendering them ineffective in some cases if other parties control the connection. This interval can be changed.

For a standard operating system, the 'keepalive' packets are sent out every 2 hours; however, some network hardware, monitoring software, and firewalls may drop the connection earlier than that.

So 'keepalive' packets should be exchanged at a shorter interval.

Set the 'keepalive' transmit interval to 15 minutes (900000 ms). If this solves the disconnection issue, you know the connectivity layer disconnects earlier than the default system value.


Example : WINDOWS

To change the parameter on Windows, check the KeepAliveTime in the registry.

http://support.microsoft.com/default.aspx?scid=kb;EN-US;120642

KeepAliveTime
Key: Tcpip\Parameters
Value Type: REG_DWORD - Time in milliseconds Valid Range: 1 - 0xFFFFFFFF
Default: 7,200,000 (two hours)
Description: The parameter controls how often TCP attempts to verify that an idle connection is still intact by sending a keep alive packet. If the remote system is still reachable and functioning, it will acknowledge the keep alive transmission. Keep alive packets are not sent by default. This feature may be enabled on a connection by an application.
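
For illustration, setting the value to 15 minutes (900000 ms) from an elevated command prompt could look like this (a sketch; a reboot is typically required for TCP/IP parameters to take effect, and you should verify the exact key for your Windows version against the Microsoft KB mentioned in the note below):

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v KeepAliveTime /t REG_DWORD /d 900000 /f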

NOTE : Please check the Microsoft KB for the specific WINDOWS OS version on registry parameters.

(DP) Support Tip: Basic Troubleshooting - VBDA CLI to simulate backup I/O read part


For a backup performance issue, you can isolate whether the bottleneck is I/O read (from disk) or I/O write (to tape) by running the VBDA CLI.

(examples: the first group is for Windows, writing to nul; the second is for UNIX, writing to /dev/null)

vbda -vol /CONFIGURATION -out nul -profile -log > <OUTPUT_FILE>

vbda -vol /C -out nul -profile -log > <OUTPUT_FILE>

vbda -vol /F -trees "\PROD\SAPDB" -out nul -profile -log > <OUTPUT_FILE>

 

vbda -vol / -out /dev/null -profile -log > <OUTPUT_FILE>

vbda -vol / -trees "/oradata/PROD" -out /dev/null -profile -log > <OUTPUT_FILE>

The output is written to null (nothing), so only the read side is exercised.

At the end it shows statistical info ("Display statistical info") so you can determine the I/O read speed.

It is also useful to troubleshoot a backup hang or timeout on one object such as CONFIGURATION.

(DP) Support Tip: DP 7.x IDB Restore using omnidbrestore and obrindex.dat


IMPORTANT NOTE: The information listed below applies to DP version 7.x or lower, and to a non-cluster-aware (stand-alone) Cell Manager.

 

 

omnidbrestore -- performs the restores of the Data Protector internal database (IDB)

The omnidbrestore command is used to restore the Data Protector internal database (IDB) without using the IDB, as opposed to the omnir and omnidb commands, which use the IDB to retrieve the information needed for the IDB restore.

The IDB restore using the omnidbrestore command consists of four phases:

1) Stopping the Data Protector services/daemons (with the exception of the Data Protector Inet service on Windows)

2) Restore of the IDB files.

3) Roll forward of IDB transactions (if present) stored in the IDB transaction log(s) - a process called dbreplay.
Before the dbreplay is started, you are given the possibility to skip this phase by responding to a prompt.

4) Starting the Data Protector services/daemons.

Every time a backup of the IDB is started, when the omnidbinit or omnidbcheck command is run, or when the size of a transaction log reaches 2 MB, a transaction log is created on the Cell Manager in the following directory:

Windows Server 2008 : Data_Protector_program_data\db40\logfiles\syslog\
other Windows systems : Data_Protector_home\db40\logfiles\syslog\
UNIX systems : /var/opt/omni/server/db40/logfiles/syslog/

The obrindex.dat file resides on the Cell Manager in the directory

Windows Server 2008 : Data_Protector_program_data\db40\logfiles\rlog
other Windows systems : Data_Protector_home\db40\logfiles\rlog
UNIX systems : /var/opt/omni/server/db40/logfiles/rlog

The obrindex.dat file is written to at every backup of the IDB and contains the Media Options, RMA Options, and VRDA options and arguments needed for the restore of the IDB, as well as the name of the transaction log created at IDB backup time.

 AUTORECOVER MODE

The autorecover mode is invoked using the -autorecover option. The omnidbrestore command
in the autorecover mode scans the obrindex.dat file for the Media Options, RMA Options
(Restore Media Agent options) and VRDA options (Volume Restore Disk Agent options) and
arguments needed for the restore. When the options and arguments are retrieved, the restore of
the IDB is performed using the retrieved options and arguments to the original location overwriting
the current files.


To start the restore of the IDB in the autorecover mode, run:

omnidbrestore -autorecover
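
As a minimal UNIX sketch (paths taken from the directories listed above; use the corresponding Windows paths on a Windows Cell Manager), you can first confirm that obrindex.dat and the transaction logs are in place before starting the autorecover:

    ls -l /var/opt/omni/server/db40/logfiles/rlog/obrindex.dat
    ls -l /var/opt/omni/server/db40/logfiles/syslog/
    omnidbrestore -autorecover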
