*BSD News Article 46216
Path: sserve!newshost.anu.edu.au!harbinger.cc.monash.edu.au!nexus.coast.net!simtel!noc.netcom.net!news.sprintlink.net!howland.reston.ans.net!vixen.cso.uiuc.edu!news.uoregon.edu!news.rediris.es!sanson.dit.upm.es!jmrueda
From: jmrueda@diatel.upm.es (Javier Martin Rueda)
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Remote backups on a DAT drive?
Date: 22 Jun 1995 16:24:16 GMT
Organization: Dpt. Ing. Telematica
Lines: 27
Message-ID: <3sc5fg$sni@sanson.dit.upm.es>
NNTP-Posting-Host: gaudi.diatel.upm.es
X-Newsreader: TIN [version 1.2 PL2]

I have a FreeBSD machine (950412-SNAP) that performs remote backups on a
DAT tape connected to a SparcServer running Solaris 2.4.

For a local backup on the SparcServer, I can simply use:

ufsdump 0f /dev/rmt/0n /filesystem

However, if I use the following in the FreeBSD machine:

rdump 0f zobel.lab:/dev/rmt/0n /filesystem

FreeBSD seems to think it is connected to a 150 MB tape unit or something
like that, because it asks for a new tape long before 2 GB of data have
been written.

The workaround I'm using is to pass rdump invented values for tape density
and length (54,000 bpi and 13,000 feet), forcing it to believe it has a
2 GB tape.

Is there any better solution? If not, what values are you using for tape
density, length, and so on? (As I said, I made mine up.)
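For concreteness, here is a sketch of that workaround. I'm assuming the
usual old-style dump flags, where -d is the density in bpi and -s the tape
length in feet, and that dump estimates capacity as roughly density times
length times 12 inches per foot; the numbers are the invented ones above,
and the rdump line itself is commented out since it needs the real tape
host.

```shell
# Sketch of the density/length workaround (assumption: dump estimates
# capacity as density [bytes/inch] * length [feet] * 12 [inches/foot]).
density=54000     # invented bpi
length=13000      # invented feet
capacity=$((density * length * 12))
echo "claimed capacity: $capacity bytes"

# The remote dump itself (commented out: needs the actual tape host).
# Old-style syntax: flags first, then one value per flag letter, in order.
# rdump 0dsf 54000 13000 zobel.lab:/dev/rmt/0n /filesystem
```

By that arithmetic the invented values claim well over 2 GB of capacity,
which is why dump stops asking for a new tape prematurely.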

By the way, I also have a DAT tape unit directly connected to a FreeBSD
box (2.0R this time), and I also have to invent the numbers to keep dump
from stopping after 150 MB or so. This time I'm using a block size of
1 KB and 2,000,000 blocks as parameters. Even so, dump stops at about
1 GB, I believe. Any better values or solutions?
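The same kind of sanity check for the directly attached drive, assuming
-b sets the record size in kilobytes and -B the number of records per
volume (so volume size is roughly blocks times block size); the device
name in the commented-out command is an assumption, not something from
my actual setup.

```shell
# Sketch of the blocksize/block-count workaround.
blocksize_kb=1      # -b: record size in KB (value from above)
blocks=2000000      # -B: records per volume (value from above)
volume_kb=$((blocks * blocksize_kb))
echo "claimed volume: $volume_kb KB"

# The local dump itself (commented out: needs a real tape device;
# /dev/nrst0 is an assumed no-rewind SCSI tape device name).
# dump 0bBf 1 2000000 /dev/nrst0 /filesystem
```

That works out to about 2 GB on paper, so the stop at roughly 1 GB
suggests dump is applying some other limit or default on top of these
parameters.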