Channel: Data Protector Practitioners Forum topics

DP 8-9 Commands


Hello, is there anywhere I can find a full command list for DP?

Most of all, I need health-check commands to check the status of DP.
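For reference, a few commands that are often used for a quick health check on the Cell Manager (a minimal sketch only; option names can differ between versions, so verify each one against the CLI reference / omniintro page for your release):

    omnisv -status          # status of the Data Protector services
    omnicellinfo -cell      # overview of the clients configured in the cell
    omnistat                # currently running sessions
    omnistat -previous      # recently finished sessions and their status
    omnidbcheck             # consistency check of the Internal Database (see the omnidbcheck reference for modes)
    omnicc -check_licenses  # licensing status of the cell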

Thank you.

 

 


(DP) Support Tip: Copying from traditional media to StoreOnce


When data is copied from a traditional device (LTO, File Library, Jukebox) to a StoreOnce device, you might get an error like:

[Major] From: BMA@hostname  "Store_gw1 [GW 3192:0:6197155749911176774]" Time: 12/12/2015 12:22:18 PM
[90:51] \\hostname\Store\038da8c0_566e980a_07d8_004a
Cannot write to device (JSONizer error: Invalid inputs)

To avoid this, the copy specification should be configured so that it can start as many BMA processes for the selected target gateway as the concurrency setting used in the backup, e.g.:

1. If the original backup consisted of 10 objects multiplexed to a single LTO drive (using concurrency 10),

2. the copy specification should be configured with load balancing of at least 1-10, so that one BMA can be started for each multiplexed object.

If this condition is not met, multiple RMAs will send data to a single BMA, leading to the above-mentioned error.
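To see what concurrency the source drive is actually configured with before building the copy specification, the device definition can be dumped on the Cell Manager (a sketch; the drive name is a placeholder, and the grep assumes a UNIX shell):

    # Export the drive definition to a file and look for its concurrency setting
    omnidownload -device LTO_Drive_1 -file /tmp/LTO_Drive_1.txt
    grep -i concurrency /tmp/LTO_Drive_1.txt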

(DP) Support Tip: After an upgrade to DP 9.05, VMware backups are done via LAN instead of SAN


After an upgrade to Data Protector 9.05, the SAN transport mode falls back to LAN on vSphere versions 5.1 and 5.5.
This is caused by an issue on the VMware side; a workaround is documented at http://kb.vmware.com/kb/2135621
Conclusion: Use ESXi 6.0 hosts.

Contract identifier (SAID) Data Protector


Hello.

If my SAID for Data Protector has expired, does it mean I can't download the installation ISO of DP? (Patches, for sure, not.)

I lost my DVD with the installation of Data Protector.

And I'm going to format the server with Data Protector on it and install from scratch.

What can I do?

(DP) Support Tip: Copy job protection changes are only stored in the database


When running a copy job and electing to change the protection of the copy, the new protection is not written to the copied media but is only stored in the internal database. When the copied media is exported and imported, it will get back the protection of the original backup from which the copy was taken.
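To verify this after an export/import, the protection of the copied objects can be checked from the CLI (a sketch; the session ID below is a hypothetical example):

    # Show the objects of the copy session together with their current protection
    omnidb -session 2015/12/12-1 -detail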

NORELOVER Flag : HPDP 9.03


Hello all,

We are having issues with restores, as they get aborted automatically. Upon thorough investigation, we found that the underlying root cause is a bug in the import functionality: it marks integration versions with the NORELOVER flag.

Is there any fix available for DP 9.03? I am aware that it has been resolved in 9.05, and if anyone has this issue in 9.05 the command below will help.

omnidbutil -run_script <DP_DIR>\bin\dbscripts\CPE\QCCR2A62624_clear_bar_norelover_flag.sql -detail

But is there anything available in 9.03?

We can't wait for an upgrade as there is an urgent restore at hand.

Thanks in advance!!

(DP) Support Tip: Generating debugs using the Data Protector Scheduler


Although documented, it is not commonly known that the regular Data Protector backup scheduler can be used to enable debugging. This allows for a more surgical approach, so that debugs do not have to be turned on for all backups. If a specific backup job is problematic but runs at an inconvenient time or fails only intermittently, its schedule file can be edited so that debugs are generated automatically whenever it runs. The basic syntax is as follows:

    -debug 1-200 sch.txt      <------------ Add debug statement
    -full
      -only 2010
        -day 14 -month Dec
        -at 22:00

This information is documented in the Data Protector Troubleshooting Guide
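Once the scheduled run has produced debug files, they can be gathered from all hosts involved in the session with the debug log collector (a sketch; the session ID is a hypothetical example, and the options should be double-checked against the omnidlc reference page for your version):

    # Collect the OB2DBG_* files written for the debugged session and pack them
    omnidlc -session 2010/12/14-1 -pack debug_session.pck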

 

 

Library with 20 drives - 19 show up under the library but one shows as stand alone


This is a new DP 9.04 install.  During the initial installation, everything went great.  I spent a few days learning the software (NetBackup veteran) and then did a fresh install.  I am attaching a Word doc with screenshots from the GUI and the command-line output.  Even though the stand-alone drive shows up in the GUI, omnidownload -dev_info doesn't show the device.  I have tried deleting all of the devices, rebooting, and running the autoconfiguration again, with no luck.  I'm stuck here because I don't want to create sessions and/or templates just to have to go back and update all of them when I get the 20th drive to come back home.  Any help would be greatly appreciated.  If you can put it in NetBackup terms, that's an added bonus. :)
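For reference, one way to compare what the cell has configured against what the Media Agent host actually detects (a sketch; the sanconf options should be checked against the 9.04 reference page, and the hostname is a placeholder):

    # Devices as currently configured in the cell
    omnidownload -dev_info
    # Drives the Media Agent host physically detects
    sanconf -list_drives -hosts ma_host.example.com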

Thanks,
Randy


DP 9.03: Restoring from Media


I am running DP 9.03 with an HP-UX 11.31 Cell Manager.  I have some old media that I need to restore; however, I no longer have the media in my database.

I have imported the media; however, the object I need to restore shows as having a size of 0 KBytes (I am seeing this information under the Objects tab in the properties of the imported media). I know the media was previously valid, as it was used for restores while it still resided in the database.

I no longer have the backup session in my database to perform a restore from session.

Once upon a time, you could restore from media.  However, I do not see how this can be done now.  This was the "go to" method of restoring data whenever a search for an object was not available from the catalog.

Can anyone tell me how I can restore data from media when the backup session is no longer available and the objects do not appear in the catalog for search after an import?
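For reference, the contents of the tape can also be read directly in a drive, independent of what the IDB knows about it (a sketch; device and slot are placeholders, and the options should be verified against the omnimlist reference page for 9.03):

    # List the sessions and objects stored on the medium by reading it in the drive
    omnimlist -device LTO_Drive_1 -slot 12
    # Add -catalog to also read the detail catalog from the medium
    omnimlist -device LTO_Drive_1 -slot 12 -catalog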

Thanks

After Upgrade from 7.03 to 9.05 - Tape Library Problem


Hi guys,

I updated my DP 7.03 to DP 9.05 today, after testing the upgrade process in a sandbox, but unfortunately backup and restore no longer work on the tape library (Overland NEO2000e LTO5).

I tried to back up the IDB and a simple text file from the backup server, but every time the session aborts with the following error message:

[Critical] From: VBDA@<servername> "E: [Backup]"  Time: 12/16/2015 2:11:57 PM
	IPC failure reading NET message (IPC Read Error
	System error: [10054] Connection reset by peer
) => aborting.

[Critical] From: VBDA@<servername>"E: [Backup]"  Time: 12/16/2015 2:11:57 PM
	Connection to Media Agent broken => aborting.


[Major] From: BSM@<servername> "test"  Time: 12/16/2015 2:11:57 PM
[61:3003]  	Lost connection to BMA named "IBM:ULTRIUM-TD5_1"
	on host<servername>.
	Ipc subsystem reports: "IPC Read Error
	System error: [10054] Connection reset by peer
"

[Critical] From: BSM@<servername> "test"  Time: 12/16/2015 2:11:57 PM
	None of the Disk Agents completed successfully.
	Session has failed.

First I thought it was the encryption keys, but they are all there; I uploaded the *.csv again anyway, just to be sure.

So I tested backup & restore via our D2D, and that works like a charm!

The library gets the command to load the tape, and the tape is loaded, but then the session aborts. The tape gets stuck and I have to move it manually from the tape drive back to the slot.

Any idea what problem this could be?
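For reference, a quick way to compare what the Media Agent host still detects after the upgrade with what is configured in the cell (a sketch; devbra lives in the Data Protector bin/lbin directory and should be run on the host the library and drives are attached to):

    # Devices as the Media Agent host detects them on the bus
    devbra -dev
    # Devices as configured in the cell
    omnidownload -dev_info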

 

Thanks and best regards,

Tobias

 

 

 

VMWare integration configuration issue


Hello, we need to configure the Data Protector 8.13 / VMware 5.1 integration and we are having problems with it.

We had already configured it days ago, but then we tried to move the client to another cell (9.05), without success. So we tried to go back to the original cell (8.13), but we can't.

In the debug, we have found this error :

[112] 2015-12-15 19:53:33 ("/lib/ipc/ipc.c $Rev: 49174 $ $Date:: 2015-07-25 03:01:37":5688)
[112] Socket -1 of handle 1 closed.
[260] [IpcReleaseSSL] 1 0xd648e0
[260] [IpcReleaseSSL] ssl (nil) (nil)
[ 66] IpcReleaseIpcHandle: 1 (1, 1)
[ 10] [SmCloseHandleUn] SmIpcEvent. event:onClose, handle:1, hook:0x415f28
[ 21]   Sending .util info to monitor:
[ 21]     "*RETVAL*8100"
[ 21]     "System Error: [11004] No data record available"
[ 21]     "*RETVAL*0"

The Windows team configured the hostname in two different DNS domains (one is the real domain of the server, and the other one is administrative). This adds a little confusion, but days ago it was working.

Do we have to import it first as a Data Protector client, and then as a virtualization environment client?

By the way, we have installed the VEAgent on the vCenter host, but the manuals also allow putting it on another DP client in the cell. In that case, we don't understand where to specify the vCenter host ... it only requests the user, password and port ...

Thanks,

Best Regards,

Marcelo.

 

Scheduled copy job to selected tape


Hello,

we are running HP Data Protector A.09.00 with an HP:MSL G3 Series library and work with scheduled copy jobs to tape after successful backup jobs to disk every day.

Is there a way to specify the tape for writing a copy job?

For example: Copy Job Monday to Tape Slot 1, Copy Job Tuesday to Tape Slot 2, ...

Thanks!

 

Writing to imported tape media query


Hi there,

We have a scenario whereby a customer has a DP 7.xx installation and would like to write to previously formatted DP tape media used within another Cell Manager.

I understand that typically we would need to do a forced format of the tapes before adding them to a media pool for use, but in this case there is a lot of media, which would obviously take a long time.

I wondered if anyone could suggest a method, similar to the 'Use unformatted media first' option on the media pool, which would allow the tapes to be 'force formatted' before use (at backup time), so that we can somewhat automate this process and avoid lengthy formatting tasks.
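One possible direction for automating this, rather than formatting tape by tape in the GUI, is a loop around omniminit (a rough sketch only, assuming a POSIX shell on the Cell Manager and that the -init/-slot/-pool/-force options behave as described in the omniminit reference page for 7.xx; device, label prefix and pool name are placeholders, so test it on a single slot first):

    # Force-initialize the media in slots 1-20 of the library into the target pool
    for SLOT in $(seq 1 20); do
        omniminit -init LTO_Library_Drive_1 "OLD_CELL_$SLOT" -slot "$SLOT" -pool Target_Pool -force
    done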

Any help, comments or suggestions would be greatly appreciated.

Thanks

(DP) Support Tip: Setting up a MoM CMMDB... FATAL: no pg_hba.conf entry for host


Problem
-------
We have a Cell Manager and are trying to configure it as a MoM with a CMMDB for future purposes.
When doing the MMDB merge, we get the following error:

`omnidbutil -mergemmdb cmhostname.region.hp.com`
About to start media management database merge from host cmhostname.region.hp.com.
Are you sure [y/n]?y
ERROR:  could not establish connection
DETAIL:  FATAL:  no pg_hba.conf entry for host "15.12.34.567", user "hpdpidb_app", database "hpdpidb"

CONTEXT:  SQL statement "SELECT dblink_connect_u(link_name, make_link_options(host, port))"
PL/pgSQL function "open_link" line 7 at PERFORM
PL/pgSQL function "register_rmmdb" line 9 at assignment

Operation failed!

Solution
--------
We ran
`omnicc -encryption -status -all`
on the Cell Manager to encrypt the cell clients

After that it was possible to set up the centralised MMDB.
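For reference, the error itself is the standard PostgreSQL host-based-authentication rejection, so an explicit entry in the IDB's pg_hba.conf for the remote host would also admit the connection (the file location and the authentication method below are assumptions; the address is the one from the error message, and the encrypted-communication setup above remains the cleaner fix):

    # pg_hba.conf of the IDB on the CMMDB host
    # (typically under the server db80/pg directory, e.g. /var/opt/omni/server/db80/pg/pg_hba.conf)
    # TYPE   DATABASE   USER          ADDRESS            METHOD
    host     hpdpidb    hpdpidb_app   15.12.34.567/32    trust
    # Reload or restart the IDB services afterwards, e.g. omnisv -stop && omnisv -start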

Missing of data block detected, expecting block xxxxx!


Hi Folks,
When I try to restore an entire file system, I get this message:

Missing of data block detected, expecting block xxxxxx!
And the session aborts.

How can I continue the restore and ignore this block? In older versions of Data Protector we changed the OB2NOBLKSEQABORT variable, but in version 9 this variable doesn't exist.
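For reference, omnirc variables are set in the omnirc file on the client running the affected agent and take effect for new sessions (a sketch; whether OB2NOBLKSEQABORT is still honoured by the 9.x agents is exactly the open question here, so please confirm with support before relying on it):

    # /opt/omni/.omnirc on UNIX clients, <Data_Protector_home>\omnirc on Windows
    # (create it from the shipped .omnirc.TMPL / omnirc.tmpl template if it does not exist)
    OB2NOBLKSEQABORT=1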

Thanks.

 

 


VMware backups total disk space or space used?


I'm a DP newbie, migrating from NBU to DP, and still waiting to attend the Essentials class, so I'm sure these are basic questions for the seasoned veterans. And now that I've buttered you up . . . With the few VMs that I have backed up so far, I noticed a huge discrepancy between the amount of data DP backs up in the first full and the last full backup of the same VM in NetBackup.  Further investigation seems to indicate that the NBU totals correlate with the amount of data on the VM, while the DP totals more closely match the total disk space of the VM.  Is that by design, or is there a setting that controls it?  I pay for my storage by the amount of data I back up, so this could be a major financial issue.

Does it possibly have something to do with backing up the VMDKs?  My VM guy asked me that question and I'm not even sure what that means, but I thought I'd include it and sound knowledgeable.


Thanks,
Randy

CLI console


I am new to the HP DP backup tool. Could you please help me with how to install a CLI console on my laptop?

I have installed the GUI and am on it, connecting to the CMs fine, but I want to run some commands on the CLI.
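For reference, the CLI commands are normally installed together with the User Interface component, so on a laptop that already has the GUI they should be present under the installation's bin directory (by default C:\Program Files\OmniBack\bin on Windows; this path is an assumption for a standard install). Add that directory to the PATH or change into it, then a quick connectivity test against the Cell Manager is (a sketch):

    omnicellinfo -cell
    omnistat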

Thanks in Advance

 

(DP) Support Tip [61:12019] Mismatch in backup group device and application data.......


If an IDB backup (here 5 streams to 1 gateway) to a StoreOnce device shows the following error
-->
[Major] From: BMA@bma-client.domain "D2D02_IDB_dpcs2_gw1 [GW 1060:0:10030647198765541745]"  Time: 18.09.2015 12:47:23
[90:51]   \\d2d02.xxx.uni.de\Store_IDB_dpcs2\9f65cb95_55fbeb63_0d6c_0001
Cannot write to device (JSONizer error: Invalid inputs)

[Major] From: BSM@cs2.domain "system-omni2"  Time: 18.09.2015 12:47:59
[61:3003]   Lost connection to BMA named "D2D02_IDB_dpcs2_gw1 [GW 1060:0:10030647198765541745]"
on host bma-client.domain.
Ipc subsystem reports: "IPC Read Error
System error: [10054] Connection reset by peer
"

[Critical] From: BSM@cs2.domain "system-omni2"  Time: 18.09.2015 12:47:59
[61:12019]  Mismatch in backup group device and application database concurrency configuration.
Application database concurrency is 5.


situation:
DP 8.13 build 207 (MA DPWIN_00817)                      1 gateway, #streams = 5  --> error above
DP 8.13 build 207 (MA DPWIN_00817)                      1 gateway, #streams = 1  --> works fine

 

The following should be considered:

Concurrent streams to a D2D device are not supported, as per the Deduplication white paper (Deduplication.pdf).

That is because concurrency lowers the deduplication ratios on D2D devices (with interleaved data, it can make different combinations of the same files, meaning that the same data can be written multiple times).

There is also nothing to gain: concurrency is mostly for tape drives, so that they don't have to throttle -
the tape can run without having to stop and wait for a Disk Agent to send data (stopping a tape and then starting it again takes some time).
Since D2D devices are disk based, they do not benefit from that mechanism.

The JSONizer was never made for concurrency, since it was written with that limitation in mind.
That is also why it reports errors: it cannot read interleaved data.

 

White Paper HP Data Protector 9.00 Deduplication
-->
To optimize deduplication performance, Disk Agent concurrency is not supported
(this means, one Disk Agent talks to one Media Agent – there is no multiplexing of streams).

 

Solution
So a better configuration would be: Each DA writes to a separate MA

StoreOnceSoftware Housekeeping fills the disk?


Hi,

while monitoring my SOS housekeeping activity on 9.04 (running on W2k8R2), I noticed that, after a while of normal operation, HK suddenly doesn't free disk space anymore. Instead, during HK, disk usage grows by significant amounts (hundreds of GB). At the same time, StoreOnceSoftware --list_stores tells me it freed some 13 GB (Store Data Size dropping from 4121 GB to 4108 GB). I first tried to explain the issue away as metadata growth, but observations made after that tell a different story:

  • After a fresh start of SOS, HK works as expected. As long as there is anything to be cleaned, disk usage usually decreases.
  • The issue starts after days or even weeks of normal operation. All of a sudden, HK will increase disk usage on every run. I don't yet know if the trigger is just time, or time in which nothing would be cleaned for several days (say, after a change in protection times which makes omnimm --delete_unprotected_media come up empty for days in a row).
  • The issue doesn't resolve by itself. Even weeks after it started, it will still only fill up the disk more, never free up anything.
  • There is apparently junk piling up in the recycled and retired directories in the store. But be warned: never attempt to watch these directories during an HK operation. I managed to break the store just by watching one of these directories in Explorer while debugging this issue. It apparently locked the directory, so HK couldn't move some data and fell on its face. Store failed. Ridiculous. Had to hunt for s.bad_integrity and all that. But it finally forced me to restart the SOS process (by actually rebooting), and thus gave me the next insight:
  • The whole mess cleans itself up when SOS is restarted. Nice to see several TB of space getting finally freed, mostly by cleaning up the recycled directory somewhere in the store structure.
  • There is, however, a glitch: stopping the SOS service the regular way never succeeds. Windows will just display a progress bar for a while, finally telling me the process didn't react to the STOP signal in due time. And indeed, it will linger around, as far as I tested, forever (I waited 2 h). The only way to make progress here is killing the process from Task Manager or the like. I don't like that at all, knowing how intricate and brittle SOS tends to be. Fear of losing all of my >100 TB of user data in that store runs deep.
  • Sadly, while I can --stop_store and --start_store just the (single) store in question, doing that will not trigger the cleanup.

Anyone else seeing this? Workarounds? Is there a known fix? Maybe even in 9.05?
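Until a proper fix is confirmed, the restart workaround described above can at least be scripted to keep the downtime short (a rough sketch of the sequence on Windows; the service name is an assumption, so check the exact StoreOnceSoftware entry in services.msc first):

    REM Record the store sizes before the restart
    StoreOnceSoftware --list_stores
    REM The service does not react to a regular stop, so the process has to be killed
    taskkill /F /IM StoreOnceSoftware.exe
    REM Start the service again (name below is an assumption)
    net start "Data Protector StoreOnceSoftware"
    REM Check the sizes again and watch the recycled directories being cleaned up
    StoreOnceSoftware --list_stores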

TIA,
Andre.

DP 9.03 to 9.04: VMware full backup fails when a user snapshot exists


Has anyone out there encountered the same problem with VM backups using DP 9.04? We were initially on DP 9.03 and this was working fine until we updated to DP 9.04.
