NetBackup Replication Director, NetApp Plugin for NetBackup

NetBackup combined with NetApp snapshot backup

NetBackup 7.5 Replication Director configuration demo
http://www.symantec.com/connect/videos/netbackup-75-replication-director-configuration-demo

Configuring NetApp for Replication Director
http://www.symantec.com/connect/videos/configuring-netapp-replication-director

NetBackup Replication Director Unifies End-to-End Management of Snapshots and Backup
http://www.symantec.com/connect/connect-view-protected-content/2579911

VxVM Serial Split Brain - Detection & Resolution

Serial Split Brain - Detection & Resolution
http://www.symantec.com/docs/TECH33020

The technote explains the background and the recovery procedure in detail.

This was the first time I hit this kind of vxdg import failure.
It happened while trying to import HDS ShadowImage (SI) disks; the import refused to go through.
(Strange, isn't the S-Vol supposed to be identical to the P-Vol?)

VxVM vxdg ERROR V-5-1-10978 Disk group ctidbdg: import failed:
Serial Split Brain detected. Run vxsplitlines to import the diskgroup

The cause is that the dg config copies stored on the disks are inconsistent,
so VxVM does not know which copy to use to import the dg.

So we just tell it which disk's dg config to use,
and the import goes through fine.

But how do we know which disk holds a good dg config?

The technote has detailed steps for using vxsplitlines,
but oddly it seemed to hang when I ran it, producing no output,
so I had to do it by hand.

Run vxdisk list <disk> to find a disk whose config copy is enabled,
then use /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/cxtxd0s2
to confirm the dg config is readable.

OK, this disk's dg config is usable.
The diskid is shown in the vxdisk list <disk> output.

# /usr/sbin/vxdg [-s] -o selectcp=<diskid> import newdg

Yes! The dg imported successfully.
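
The manual procedure above, as one consolidated sketch (device name, diskgroup name, and diskid are placeholders; adjust to your environment):

```
# 1. Find a disk whose private-region config copy is enabled
vxdisk list c1t1d0s2 | grep -i config

# 2. Confirm the dg config stored on that disk is readable
/etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c1t1d0s2

# 3. Note the diskid (the "id=" field) shown by "vxdisk list c1t1d0s2",
#    then import using that disk's config copy (-s only for a shared dg)
/usr/sbin/vxdg -o selectcp=1234567890.1234.hostname import ctidbdg
```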

.





NetApp Simulator Disk Type List


Type Vendor ID Product ID       Usable Size[B] Actual Size[B]  Zero  BPS   RPM
  0  NETAPP__  VD-16MB_________     16,777,216     38,273,024   No   512  10000
  1  NETAPP__  VD-35MB_________     35,913,728     57,409,536   No   512  10000
  2  NETAPP__  VD-50MB_________     52,428,800     73,924,608   No   512  10000
  3  NETAPP__  VD-100MB________    104,857,600    126,353,408   No   512  10000
  4  NETAPP__  VD-500MB________    524,288,000    545,783,808   No   512  10000
  5  NETAPP__  VD-1000MB_______  1,048,576,000  1,070,071,808   No   512  10000
  6  NETAPP__  VD-16MB-FZ______     16,777,216     38,273,024   Yes  512  15000
  7  NETAPP__  VD-35MB-FZ______     35,913,728     57,409,536   Yes  512  15000
  8  NETAPP__  VD-50MB-FZ______     52,428,800     73,924,608   Yes  512  15000
  9  NETAPP__  VD-100MB-FZ_____    104,857,600    126,353,408   Yes  512  15000
 10  NETAPP__  VD-500MB-FZ_____    524,288,000    545,783,808   Yes  512  15000
 11  NETAPP__  VD-1000MB-FZ____  1,048,576,000  1,070,071,808   Yes  512  15000
 12  NETAPP__  VD-16MB-520_____     16,777,216     38,273,024   No   520  10000
 13  NETAPP__  VD-35MB-520_____     35,913,728     57,409,536   No   520  10000
 14  NETAPP__  VD-50MB-520_____     52,428,800     73,924,608   No   520  10000
 15  NETAPP__  VD-100MB-520____    104,857,600    126,353,408   No   520  10000
 16  NETAPP__  VD-500MB-520____    524,288,000    545,783,808   No   520  10000
 17  NETAPP__  VD-1000MB-520___  1,048,576,000  1,070,071,808   No   520  10000
 18  NETAPP__  VD-16MB-FZ-520__     16,777,216     38,273,024   Yes  520  15000
 19  NETAPP__  VD-35MB-FZ-520__     35,913,728     57,409,536   Yes  520  15000
 20  NETAPP__  VD-50MB-FZ-520__     52,428,800     73,924,608   Yes  520  15000
 21  NETAPP__  VD-100MB-FZ-520_    104,857,600    126,353,408   Yes  520  15000
 22  NETAPP__  VD-500MB-FZ-520_    524,288,000    545,783,808   Yes  520  15000
 23  NETAPP__  VD-1000MB-FZ-520  1,048,576,000  1,070,071,808   Yes  520  15000
 24  NETAPP__  VD-16MB-FZ-ATA__     16,777,216     51,388,416   Yes  512   7200
 25  NETAPP__  VD-35MB-FZ-ATA__     36,700,160     73,801,728   Yes  512   7200
 26  NETAPP__  VD-50MB-FZ-ATA__     52,428,800     91,496,448   Yes  512   7200
 27  NETAPP__  VD-100MB-FZ-ATA_    104,857,600    150,478,848   Yes  512   7200
 28  NETAPP__  VD-500MB-FZ-ATA_    524,288,000    622,338,048   Yes  512   7200
 29  NETAPP__  VD-1000MB-FZ-ATA  1,048,576,000  1,212,162,048   Yes  512   7200
 30  NETAPP__  VD-2000MB-FZ-520  2,097,512,000  2,119,007,808   Yes  520  15000
 31  NETAPP__  VD-4000MB-FZ-520  4,194,304,000  4,215,799,808   Yes  520  15000
 32  NETAPP__  VD-2000MB-FZ-ATA  2,097,512,000  2,391,810,048   Yes  512   7200
 33  NETAPP__  VD-4000MB-FZ-ATA  4,194,304,000  4,751,106,048   Yes  512   7200
 34  NETAPP__  VD-100MB-SS-512_    104,857,600    126,353,408   Yes  512  15000
 35  NETAPP__  VD-500MB-SS-520_    524,288,000    545,783,808   Yes  520  15000
 36  NETAPP__  VD-9000MB-FZ-520  9,437,184,000  9,458,679,808   Yes  520  15000
 37  NETAPP__  VD-9000MB-FZ-ATA  9,437,184,000 10,649,346,048   Yes  512   7200


Add 5 disks of type 34 on controller 1


fas01% sudo vsim_makedisks -n 5 -t 34 -a 1
Creating ,disks/v1.33:NETAPP__:VD-100MB-SS-512_:97135314:261248
Creating ,disks/v1.34:NETAPP__:VD-100MB-SS-512_:97135315:261248
Creating ,disks/v1.35:NETAPP__:VD-100MB-SS-512_:97135316:261248
Creating ,disks/v1.36:NETAPP__:VD-100MB-SS-512_:97135317:261248
Creating ,disks/v1.37:NETAPP__:VD-100MB-SS-512_:97135318:261248
Shelf file Shelf:DiskShelf14 updated
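
After creating the virtual disks, the simulator typically has to be rebooted before the new disks show up as unowned and can be assigned (a sketch, assuming 7-mode commands):

```
fas01> disk show -n      # list unowned disks
fas01> disk assign all   # assign all unowned disks to this controller
```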

Upgrading Remote Desktop Connection on Windows XP and enabling Network Level Authentication



I rarely use Remote Desktop Connection, but when I actually needed it I found I couldn't connect to any of the newer Windows machines.
A quick search showed the Remote Desktop client needed upgrading.

KB969084 for Windows XP SP3
Remote Desktop Connection 7.0 client update
http://www.microsoft.com/zh-tw/download/details.aspx?id=20609

You also need to enable Network Level Authentication (CredSSP), since XP has it off by default.
http://support.microsoft.com/kb/951608/zh-tw
You can edit the registry by hand, but that's tedious; Microsoft also has a small tool to do it for you:
Microsoft Fix it 50588
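
For reference, the manual registry change described in KB951608 amounts to the following (back up the registry first, reboot afterwards; the values are appended to what is already there):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
  "Security Packages" (REG_MULTI_SZ): add "tspkg" to the list

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders
  "SecurityProviders" (REG_SZ): append ", credssp.dll" to the value
```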

XP is good for another ten years!!

.

How to install Symantec HA guest components from vCenter


Occasionally I get projects that install VCS inside a VMware guest OS.
Since VMware still cannot monitor the applications running inside the guest OS,
VCS remains necessary.

Previously VCS had to be installed separately in each Windows guest OS;
starting with 6.0.1 it can be deployed from vCenter.
Ref: http://www.symantec.com/connect/articles/achieving-high-availability-oracle-and-vmware-vcs-602

Installation Task:

1. Install the SHA Console: for simplicity, just install it on the vCenter server.
Afterwards a Symantec High Availability option appears in vCenter.
At this point Install Guest Components shows nothing yet, because the installation source is not loaded.
 
 
2. Load the software source onto vCenter so there is something to deploy to the guest OS.
The command is CopyInstaller.bat.

Now Install Guest Components shows the available packages.

Select which guest OS machines to install to.
 


3. Single Sign-on can be skipped; use the traditional VCS account/password login instead.


4, 5. The rest works the same as configuring VCS the usual way.
I still find it faster and more convenient to install the traditional VCS Java GUI Console separately for configuration.


References:
Read the documents carefully to understand the restrictions under VMware, especially around vMotion, DRS, etc.

For Windows 2008
Symantec High Availability 6.0.1 Solutions Guide for VMware (Windows)
http://www.symantec.com/docs/DOC6131
For Windows 2012
Symantec High Availability 6.0.2 Windows Solutions Guide for VMware
http://www.symantec.com/docs/DOC6369

.


NetBackup Email Notification


After setting up BLAT for NetBackup,
I tested email notification for quite a while. Mail on backup failure worked fine,
but if you also want mail on successful backups, it just wouldn't go out.
It turns out every client must have its email address configured individually. Finally OK!


DOCUMENTATION: Email Notifications Settings and Their Behaviors
http://www.symantec.com/docs/TECH64984

Key excerpts:

The Global Attributes section contains the Administrator's e-mail address field, which is the email NetBackup feeds to nbmail.cmd when a backup finishes with a nonzero status. Putting an email address in Global Attributes will cause NetBackup to send e-mail notifications only for non-zero backup statuses.
The email set here only receives notifications for failed backups.

 


The Universal Settings section contains the Client administrator's e-mail field, which is the email address NetBackup feeds to nbmail.cmd when a backup finishes with any status. Putting the email address in Universal Settings will cause NetBackup to send e-mail notifications for all backup statuses.
The email set here gets a notification for every job completion, success or failure.
And it must be configured individually on every client.


It's best to choose Server sends mail in both places,
so that a unified email subject can be defined in blat.
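
As a sketch of that last point (the mail server name is a placeholder; the parameter meanings follow the comments in the stock nbmail.cmd, where %1 is the recipient, %2 the subject, and %3 the message file):

```
@REM <install_path>\NetBackup\bin\nbmail.cmd -- uncomment/adjust the blat line
@REM Forcing -s to a fixed string gives every notification the same subject
blat %3 -s "NetBackup notification" -t %1 -i NetBackup -server mail.example.com -q
```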



VMware SRM with NetApp SRA



How to manually send AutoSupport files to NetApp


How to manually send AutoSupport files to NetApp
https://kb.netapp.com/support/index?page=content&id=1010077

Generate an AutoSupport manually:
filer>  options autosupport.doit now


But many customers are on internal networks that cannot send mail out directly,
so being able to fetch the files straight off the box is best.

Enable ftpd:
filer>  options ftpd.enable on

FTP to the NetApp controller:
cd /etc/log/autosupport

Remember to disable ftpd after fetching the files:
filer>  options ftpd.enable off


VEA cannot start in RHEL 6.X


Since Storage Foundation 5.x, VEA is no longer bundled in the suite, so it must be downloaded and installed separately.
From RHEL 6.x the default install is x64, so many 32-bit libraries are missing,
which makes the 32-bit VEA program fail to start.

It turns out this is all spelled out in the VEA README file; my own oversight.
Install the missing 32-bit libraries and it works.

from Veritas Enterprise Administrator (VEA) Console
VRTSobgui 3.4.30.0 README file

PACKAGE INFORMATION
-----------------------------------
Product :    Veritas Enterprise Administrator (VEA) Console
Package :    VRTSobgui
Release :    3.4.30.0

SUPPORTED PLATFORMS
-----------------------------------
Operating Systems (OS):  Linux
OS Versions          :  RHEL(3.0,4.0,5.0,5.5,6.0,6.1), SLES(8.0,9.0,10.0,11)

Note: For launching VEA GUI in RHEL 6.0 onwards we need to install 32 bit versions of following packages:

1) libXau-1.0.5-1.el6.i686.rpm
2) libxcb-1.5-1.el6.i686.rpm
3) libX11-1.3-2.el6.i686.rpm
4) libXext-1.1-3.el6.i686.rpm
5) libXi-1.3-3.el6.i686.rpm
6) libXtst-1.0.99.2-3.el6.i686.rpm
7) libXrender-0.9.5-1.el6.i686.rpm
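
On RHEL 6.x the i686 variants can be pulled in with yum instead of installing the rpm files by hand (a sketch; assumes the 32-bit packages are available in your configured repositories):

```
yum install libXau.i686 libxcb.i686 libX11.i686 libXext.i686 \
            libXi.i686 libXtst.i686 libXrender.i686
```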

 .

VEA reports lun as failing, but "vxdisk list" does not


I run into this situation occasionally...
vxdisk and vxprint both check out fine,
yet the VEA GUI shows an X on the disk... likely a VEA cache problem.

How do we clear the VEA cache?

VEA reports lun as failing, but "vxdisk list" does not
http://www.symantec.com/docs/TECH128994

* Stop VEA
* Clear the cache
# vxconfigd -k -x cleartempdir

* Restart VEA
.

SCSI3 PGR operations on a VxDMP dmpnode result in dmp path disablement with RHEL5U8 and later kernels


RHEL 5.8 + SFCFS: reboot one node and the other hangs until it reboots too...

This one nearly drove me crazy, but I finally found the cause:
the Linux kernel moves fast and even added a new SCSI error type.
VxVM does not recognize this new error type yet, which trips up vxfen running over vxdmp.



SCSI3 PGR operations on a VxDMP dmpnode result in dmp path disablement with RHEL5U8 and later kernels
http://www.symantec.com/docs/TECH192940

Problem
SCSI3 PGR (Persistent Group Registrations) operations on a VxDMP (VERITAS Dynamic Multi-Pathing) dmpnode result in VxDMP path disablement.

Side effects of the issue include the following:
• Reservation conflict immediately followed by VxDMP error V-5-0-112, whenever SCSI3 PGR operations are executed on a dmpnode.
         Operations that will issue SCSI3 PGR operation on a dmpnode include:
          - Stopping the cluster with hastop.
          - Issuing `/etc/init.d/vxfen start | stop`.  This command is automatically executed at server boot and shutdown.
          - Deporting a diskgroup that contain disk(s) with registrations or importing a diskgroup with groupreserve option.
• When a node is rebooted, the surviving node(s) may see diskgroups go into the dgdisabled state and file system(s) get disabled. If VCS is managing resources you will see resource faults. This occurs because dmp paths and dmpnodes are being disabled as a result of multiple SCSI3 PGR operations.
• When a node is rebooted, the surviving node(s) may panic. The panic is initiated by VxFEN as it tries to avoid a split-brain condition. This occurs because dmp paths and dmpnodes are being disabled as a result of multiple SCSI3 PGR operations, then re-enabled by the VxDMP recovery daemon.

Error
kernel: sd 2:0:0:14: reservation conflict
kernel: VxVM vxdmp V-5-0-112 disabled path 66/0x0 belonging to the dmpnode 201/0xd0 due to path failure


Environment
RHEL5U8 running kernel 2.6.18-308.el5 and later
Storage Foundation 5.1 and later or Storage Foundation 6.0, 6.0RP1
VxFEN is configured and enabled with SCSI3 disk based fencing in either raw or dmp mode.
VxDMP is configured.


Cause
The RHEL5U8 kernel SCSI layer error handling routine introduced a new error type: DID_NEXUS_FAILURE
This new error type is not handled properly by VxDMP, resulting in dmp paths getting disabled during SCSI3 PGR operations.


Solution
If you are planning to upgrade to RHEL5U8, or are currently running RHEL5U8 kernel 2.6.18-308.el5 or later, and are running VxVM 5.1 or later, Symantec recommends installing VxVM 5.1SP1RP2P3HF5.
If you are planning to upgrade to RHEL5U8, or are currently running RHEL5U8 kernel 2.6.18-308.el5 or later, and are running VxVM 6.0 or 6.0RP1, Symantec recommends installing VxVM 6.0RP1HF1.

Workarounds:
Downgrade the kernel to pre RHEL5U8.
Configure vxfenmode in disabled mode.
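
The second workaround boils down to editing /etc/vxfenmode on each node before fencing is restarted (a sketch; the key name follows the standard vxfenmode file shipped with VxFEN):

```
# /etc/vxfenmode -- disable SCSI3 fencing until the VxVM patch is applied
vxfen_mode=disabled
```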

Note: VERITAS Storage Foundation 6.0.1 will contain the fix and is slated for public release in early September. Rolling patch 5.1SP1RP3 will contain the fix and is slated for public release in early October. Rolling Patch 5.1SP1RP3 will also contain the vxfen patch noted in related articles.
.