TiVo on Gigabit network?

Archived from groups: alt.video.ptv.tivo

I'm thinking of upgrading to a 10/100/1000 (gigabit) network. I'm sure
it's the next "wave of the future" anyway, and the price isn't too bad.

I know the official line (the list of working network adapters on the
TiVo website), but can a TiVo currently support gigabit connections?

Anybody have experience with which adapters (preferably USB) could
support a TiVo on a gigabit network (at full speed)?

John
 
Archived from groups: alt.video.ptv.tivo

Once upon a time, John Martin <johnmartin@comcast.net> said:
>Anybody have experience with which adapters (preferably USB) could
>support a TiVo on a gigabit network (at full speed)?

Since a TiVo can't fill a 100M ethernet, there is no point in putting a
gigabit adapter on one. Also, the fastest USB2.0 interface can't
approach "full speed" gigabit (since USB2.0 tops out at 480M, and that
includes the USB protocol overhead). I'm not even sure if you can buy a
USB gigabit interface.
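
The back-of-the-envelope math (a quick Python sketch; the 20% figure
for USB protocol overhead is just an illustrative guess, not a measured
number):

    # USB 2.0 signaling rate vs. what "full speed" gigabit would need.
    USB2_MBPS = 480     # USB 2.0 high-speed raw signaling rate
    GIGE_MBPS = 1000    # gigabit ethernet raw signaling rate
    OVERHEAD = 0.20     # assumed USB protocol overhead (illustrative)

    usable = USB2_MBPS * (1 - OVERHEAD)
    print("usable USB 2.0: ~%d Mbps" % usable)                      # ~384
    print("fraction of gigabit: %d%%" % (100 * usable / GIGE_MBPS))
    # Even the raw 480 Mbps is under half of a gigabit link.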

If you want your computers to talk gigabit, get a switch that can mix
speeds (most real switches can with no problem) and leave the TiVo at
100M.
--
Chris Adams <cmadams@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
 
Archived from groups: alt.video.ptv.tivo

Chris Adams wrote:
> Once upon a time, John Martin <johnmartin@comcast.net> said:
>
>>Anybody have experience with which adapters (preferably USB) could
>>support a TiVo on a gigabit network (at full speed)?

I highly doubt it. The chipset of the adapter would be different from
the ones TiVo currently supports, and therefore wouldn't have drivers
available. A hacked TiVo could support it easily, assuming a Linux
driver was available.

> Since a TiVo can't fill a 100M ethernet, there is no point in putting a
> gigabit adapter on one. Also, the fastest USB2.0 interface can't
> approach "full speed" gigabit (since USB2.0 tops out at 480M, and that
> includes the USB protocol overhead). I'm not even sure if you can buy a
> USB gigabit interface.

Chris is absolutely right here. I'd also point out that most
*computers* can't handle a full gigabit connection. 95% of gigabit
adapters connect to your computer through PCI (including ones w/
integrated network ports). A 32-bit PCI bus (64-bit PCI is typically
found only on servers) can handle about 33 MHz * 32 bits = ~1056 Mbps.
That is only barely more than the 1000 Mbps a gigabit port can demand,
and remember that PCI is a *shared* bus, so it has to share that
bandwidth with all the other I/O your computer is doing, including most
disk operations.
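
Spelling that calculation out (nominal peaks; real-world PCI throughput
runs lower still):

    # Nominal 32-bit/33 MHz PCI bandwidth vs. a gigabit NIC's demand.
    PCI_MHZ = 33
    PCI_BITS = 32
    GIGE_MBPS = 1000

    pci_peak = PCI_MHZ * PCI_BITS   # ~1056 Mbps theoretical peak
    print("PCI peak: ~%d Mbps vs. gigabit's %d Mbps" % (pci_peak, GIGE_MBPS))
    # One busy gigabit NIC alone could nearly saturate the shared bus,
    # and disk and other I/O contend for that same ~1 Gbps.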

A very few new motherboards (those w/ 915x and 925x Intel chipsets) have
a dedicated bus for the integrated gigabit network port, which gets
around the PCI bottleneck.

> If you want your computers to talk gigabit, get a switch that can mix
> speeds (most real switches can with no problem) and leave the TiVo at
> 100M.

I agree. I also think that Gigabit is overkill nearly everywhere
except backbone network connections (i.e. between buildings or between
floors of buildings). Perhaps it would be useful for moving very large
video files, but TiVo DVRs (and, I'd bet, all others at this point)
have more bottlenecks than just the network connection to remove before
it will be possible to move data at those speeds.

My recommendation would be to make sure your wiring will support Gigabit
(since wiring typically lasts 10-15 years), but hold off on the
equipment. It's easy to swap in a new switch later, and more and more
equipment is coming with gigabit standard now anyway.

Randy S.
 
Archived from groups: alt.video.ptv.tivo

Randy S. (rswittNO@SPAMgmail.com) wrote in alt.video.ptv.tivo:
> > If you want your computers to talk gigabit, get a switch that can mix
> > speeds (most real switches can with no problem) and leave the TiVo at
> > 100M.
>
> I agree. I also think that Gigabit is overkill nearly everywhere
> except backbone network connections (i.e. between buildings or between
> floors of buildings).

Gigabit has the advantage of allowing multiple computers to all get >100Mbps
connections between each other at the same time. This allows the single
wire I run between upstairs and downstairs to be far more useful. Each
computer-to-computer connection can only get about 250Mbps (that's the PCI
bus limit coming into play), but total bandwidth can be about 800Mbps (which
is about the limit of gigabit with collisions).
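
A toy model of that sharing (Python, using the rough per-flow and link
ceilings above):

    # Aggregate throughput over one gigabit uplink as flows are added.
    PER_FLOW_MBPS = 250   # per-machine ceiling (32-bit PCI bottleneck)
    LINK_MBPS = 800       # rough practical ceiling of the gigabit wire

    for flows in range(1, 5):
        total = min(flows * PER_FLOW_MBPS, LINK_MBPS)
        print("%d flows: ~%d Mbps total, ~%d Mbps each"
              % (flows, total, total // flows))
    # Even two flows move more than a 100baseT wire could carry at all.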

--
Jeff Rife | "There was a guy that was killed just like this
| over in Jersey."
| "Yeah, but I figure, 'What the hell,
| that's Jersey.'"
| -- "Highlander"
 
Archived from groups: alt.video.ptv.tivo

Jeff Rife wrote:
> Randy S. (rswittNO@SPAMgmail.com) wrote in alt.video.ptv.tivo:
>
>>>If you want your computers to talk gigabit, get a switch that can mix
>>>speeds (most real switches can with no problem) and leave the TiVo at
>>>100M.
>>
>>I agree. I also think that Gigabit is overkill nearly everywhere
>>except backbone network connections (i.e. between buildings or between
>>floors of buildings).
>
>
> Gigabit has the advantage of allowing multiple computers to all get >100Mbps
> connections between each other at the same time. This allows the single
> wire I run between upstairs and downstairs to be far more useful. Each
> computer-to-computer connection can only get about 250Mbps (that's the PCI
> bus limit coming into play), but total bandwidth can be about 800Mbps (which
> is about the limit of gigabit with collisions).
>

I'd consider that a "vertical" connector, and certainly useful for
Gigabit if you're doing large intranet transfers and have multiple
computers upstairs and downstairs. We both know it's useless for
speeding up your ISP connection ;-).

I normally use Gigabit solely for switch-to-switch or router connections
(i.e. device to device). This way there's no network choke point, as you
point out. I just haven't seen much demand for >100 Mbps
(realistically ~60-70 Mbps) individual transfer rates, but then we're
not doing much video editing. I'm sure it'll happen in the future,
which is why it's important to make sure your wiring infrastructure can
handle it.

I manage (among others) a building w/ >100 computers in it. The whole
*building* is connected over a single 100 Mbps fiber (100baseFX) link,
and it *rarely* approaches 100% utilization. It's getting close often
enough now, though, that I am considering upgrading it to gigabit fiber
for the outside connection and the verticals (there are 4 floors). The
Enterprise level gigabit fiber switch is not cheap however.

Randy S.
 
Archived from groups: alt.video.ptv.tivo

Randy S. (rswittNO@SPAMgmail.com) wrote in alt.video.ptv.tivo:
> I normally use Gigabit solely for switch-to-switch or router connections
> (i.e. device to device). This way there's no network choke point, as you
> point out. I just haven't seen much demand for >100 Mbps
> (realistically ~60-70 Mbps) individual transfer rates, but then we're
> not doing much video editing.

Yeah, I'm piping HD recordings from one computer to another. Gigabit gives
me 250Mbps, where I was only getting about 65Mbps throughput with 100Mbps
cards/switches.

> I manage (among others) a building w/ >100 computers in it. The whole
> *building* is connected over a single 100 Mbps fiber (100baseFX) link,
> and it *rarely* approaches 100% utilization.

At work, we had the problem that when multiple users transferred files to/from
the servers, they bogged down. Now, with 100Mbps in the client machines
and gigabit backbone and gigabit in the servers, they stay at full speed
until more than about 8 people hit it with very large transfers at the exact
same time.
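
That "about 8" is just the arithmetic of the link speeds (a sketch,
using the ~800 Mbps practical gigabit ceiling mentioned earlier):

    # How many full-speed 100 Mbps clients fit on one gigabit server link?
    CLIENT_MBPS = 100
    SERVER_LINK_MBPS = 800   # practical gigabit ceiling after overhead

    limit = SERVER_LINK_MBPS // CLIENT_MBPS
    print("full speed holds up to ~%d simultaneous transfers" % limit)  # 8
    # Beyond that, clients start splitting the server link between them.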

> The Enterprise level gigabit fiber switch is not cheap however.

Yeah, managed gigabit switches are pricey, especially fiber. For home,
though, I use 3 Netgear 8-port 1000Mbps switches at about $80 each. They
have "auto-uplink" that lets me wire in just about any way I want, and they
have the connectors and status lights on the same side of the box, which
allows me to put them on a shelf in the rack.

--
Jeff Rife |
| http://www.nabs.net/Cartoons/OverTheHedge/Chainsaw.gif
 
Archived from groups: alt.video.ptv.tivo

> At work, we had the problem that when multiple users transferred files to/from
> the servers, they bogged down. Now, with 100Mbps in the client machines
> and gigabit backbone and gigabit in the servers, they stay at full speed
> until more than about 8 people hit it with very large transfers at the exact
> same time.

True, I should note that direct gigabit connections to high-use servers
are a good idea (redundant ones if possible!). I've also found it
*very* important to use nothing but SCSI drives in servers under
multi-user load. IDE drives bog down *a lot* when accessed by several
users at once. I'm very curious to see how the new SATA drives with
NCQ compare to SCSI; it would help drop costs a lot.

>
>
>> The Enterprise level gigabit fiber switch is not cheap however.
>
>
> Yeah, managed gigabit switches are pricey, especially fiber. For home,
> though, I use 3 Netgear 8-port 1000Mbps switches at about $80 each. They
> have "auto-uplink" that lets me wire in just about any way I want, and they
> have the connectors and status lights on the same side of the box, which
> allows me to put them on a shelf in the rack.
>

Unmanaged switches are *a lot* cheaper, thank goodness. When I own and
manage every system on the network, an unmanaged switch works just
fine. But when I need to track down some yutz who just threw an
infected host on the network, that management interface saves my ass!

Randy S.
 
Archived from groups: alt.video.ptv.tivo

Randy S. (rswittNO@SPAMgmail.com) wrote in alt.video.ptv.tivo:
> True, I should note that direct gigabit connections to high-use servers
> are a good idea (redundant ones if possible!). I've also found it
> *very* important to use nothing but SCSI drives in servers under
> multi-user load. IDE drives bog down *a lot* when accessed by several
> users at once. I'm very curious to see how the new SATA drives with
> NCQ compare to SCSI; it would help drop costs a lot.

It depends on how you hook them up.

Using a 32-bit PCI card (even if it supports NCQ) results in decent
performance at a cheap price, but pretty much any modern SCSI adapter and
drives do better.

But PCI-X with NCQ and 8-channel RAID-5 gives better performance than all
but the most expensive SCSI adapter/drive combos. Even 64-bit PCI with
NCQ support (which is what I have at home) is "good enough" for a
non-critical server.
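
The nominal bus numbers behind that ranking (theoretical peaks only;
sustained rates run lower):

    # Theoretical peak bandwidth of the bus options mentioned above.
    buses = [
        ("32-bit/33 MHz PCI",    32 * 33),    # ~1056 Mbps
        ("64-bit/66 MHz PCI",    64 * 66),    # ~4224 Mbps
        ("64-bit/133 MHz PCI-X", 64 * 133),   # ~8512 Mbps
    ]
    for name, mbps in buses:
        print("%s: ~%d Mbps peak" % (name, mbps))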

--
Jeff Rife |
| http://www.nabs.net/Cartoons/TiVoAndBeer.gif
 
Archived from groups: alt.video.ptv.tivo

In article <d2ms7a$1cpo$1@spnode25.nerdc.ufl.edu>,
rswittNO@SPAMgmail.com says...
>
> > At work, we had the problem that when multiple users transferred files to/from
> > the servers, they bogged down. Now, with 100Mbps in the client machines
> > and gigabit backbone and gigabit in the servers, they stay at full speed
> > until more than about 8 people hit it with very large transfers at the exact
> > same time.
>
> True, I should note that direct gigabit connections to high-use servers
> are a good idea (redundant ones if possible!). I've also found it
> *very* important to use nothing but SCSI drives in servers under
> multi-user load. IDE drives bog down *a lot* when accessed by several
> users at once. I'm very curious to see how the new SATA drives with
> NCQ compare to SCSI; it would help drop costs a lot.

Here's something to think about: get a hardware RAID controller from
3ware. I got a 4-port unit with 3 drives in RAID-5 mode, with the 4th
port for a spare (hot replacement) drive. The interface to the drives
is SATA. The controller was like $300 or so... The performance has
been very nice.
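
For what it's worth, the usable-space math on a layout like that (the
drive size here is hypothetical, just for illustration):

    # Usable capacity of a 3-drive RAID-5 set with a hot spare waiting.
    DRIVE_GB = 250   # hypothetical drive size
    ACTIVE = 3       # drives in the RAID-5 set

    usable = (ACTIVE - 1) * DRIVE_GB   # one drive's worth goes to parity
    print("%d x %d GB in RAID-5 -> %d GB usable" % (ACTIVE, DRIVE_GB, usable))
    # The 4th-port spare adds nothing until a drive fails and it rebuilds.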
 
Archived from groups: alt.video.ptv.tivo

> Here's something to think about: get a hardware RAID controller from
> 3ware. I got a 4-port unit with 3 drives in RAID-5 mode, with the 4th
> port for a spare (hot replacement) drive. The interface to the drives
> is SATA. The controller was like $300 or so... The performance has
> been very nice.
>

I've heard good things about 3ware. However, I've yet to see real
head-to-head comparisons of SATA w/ NCQ support versus SCSI. I'd be
curious to see the numbers. Theoretically it should be close, but I'm
always skeptical until I see real-world tests.

Randy S.
 
Archived from groups: alt.video.ptv.tivo

In article <d2ngni$1fhc$1@spnode25.nerdc.ufl.edu>,
rswittNO@SPAMgmail.com says...
>
> > Here's something to think about: get a hardware RAID controller from
> > 3ware. I got a 4-port unit with 3 drives in RAID-5 mode, with the 4th
> > port for a spare (hot replacement) drive. The interface to the drives
> > is SATA. The controller was like $300 or so... The performance has
> > been very nice.
> >
>
> I've heard good things about 3ware. However, I've yet to see real
> head-to-head comparisons of SATA w/ NCQ support versus SCSI. I'd be
> curious to see the numbers. Theoretically it should be close, but I'm
> always skeptical until I see real-world tests.

True enough. Given the respective costs of drives (and controllers)
though, for mid-level servers, it seems a pretty interesting approach.
 
Archived from groups: alt.video.ptv.tivo

Once upon a time, Randy S. <rswittNO@SPAMgmail.com> said:
>I agree. I also think that Gigabit is overkill nearly everywhere
>except backbone network connections (i.e. between buildings or between
>floors of buildings).

Yep. I'm the head system/network admin for a moderately sized ISP.
We've got 3 DS3s to the Internet, but we have exactly _one_ gigabit LAN
hookup in our entire network (the network backup server to the backup
LAN switch; the other systems all do 100M to the switch).

This does mean that no one network or system device can fill up our
Internet connections (135M total Internet bandwidth), but that is a good
thing usually. :)
--
Chris Adams <cmadams@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
 
Archived from groups: alt.video.ptv.tivo

On Sat, 2 Apr 2005 10:47:19 -0500, Jeff Rife <wevsr@nabs.net> wrote:

>Randy S. (rswittNO@SPAMgmail.com) wrote in alt.video.ptv.tivo:
>> > If you want your computers to talk gigabit, get a switch that can mix
>> > speeds (most real switches can with no problem) and leave the TiVo at
>> > 100M.
>>
>> I agree. I also think that Gigabit is overkill nearly everywhere
>> except backbone network connections (i.e. between buildings or between
>> floors of buildings).
>
>Gigabit has the advantage of allowing multiple computers to all get >100Mbps
>connections between each other at the same time. This allows the single
>wire I run between upstairs and downstairs to be far more useful. Each
>computer-to-computer connection can only get about 250Mbps (that's the PCI
>bus limit coming into play), but total bandwidth can be about 800Mbps (which
>is about the limit of gigabit with collisions).

I am running a Netgear gigabit switch. I have an AMD 1800XP and a
3 GHz Intel computer, both with Netgear GA311 network cards. I can only get
about 170 Mbps max per the Netgear Smart Wizard utility. I transfer
4.6 gig video files between the two computers all the time. The gigabit
network did double my transfer speed, but nowhere near as much as I had
hoped. I have found that if I try to multitask on the AMD 1800XP, which
is the sending unit, the speed will drop as low as 20 Mbps. I have 4 hard
drives in each computer and try to be careful not to multitask on the
same drive I am transferring to or from.
 
Archived from groups: alt.video.ptv.tivo

> I am running a Netgear gigabit switch. I have an AMD 1800XP and a
> 3 GHz Intel computer, both with Netgear GA311 network cards. I can only get
> about 170 Mbps max per the Netgear Smart Wizard utility.

Probably due to shared PCI bus limits.

> I transfer
> 4.6 gig video files between the two computers all the time. The gigabit
> network did double my transfer speed, but nowhere near as much as I had
> hoped. I have found that if I try to multitask on the AMD 1800XP, which
> is the sending unit, the speed will drop as low as 20 Mbps. I have 4 hard
> drives in each computer and try to be careful not to multitask on the
> same drive I am transferring to or from.

Well, you've got two issues. One is that your HDD write speed is a
bottleneck in that situation. The second is that if you are using IDE
drives, you will see significant performance degradation during
multitasking. This is why SCSI drives are popular in servers. As I
mentioned before, the *newest* SATA drives are starting to feature some
of the same benefits that SCSI drives have had (notably native command
queuing, or NCQ), but you also need a SATA controller that supports it.

Randy S.