00:01.51 | kergoth | docs/BUILD |
00:01.53 | kergoth | in the buildroot |
00:03.54 | *** join/#openzaurus james_lan-zaurus (~zic@156.26.48.28) |
00:11.42 | *** join/#openzaurus Sugar (~neotron@207.188.30.40) |
00:12.41 | *** join/#openzaurus ^X^ (x@12-232-113-54.client.attbi.com) |
00:25.03 | *** join/#openzaurus bipolar_ (bflong@ben-n-rhi.msns.flt.ptd.net) |
00:25.34 | *** join/#openzaurus jmh|away (~jmh@jmhodge.res.bgsu.edu) [NETSPLIT VICTIM] |
00:25.54 | *** join/#openzaurus ljp (~ljp@tf0140.peakpeak.com) |
00:26.26 | *** join/#openzaurus kurre (~kurre@ncircle.nullnet.fi) |
00:27.51 | *** join/#openzaurus KeyserSoze (~ksoze@12-245-37-229.client.attbi.com) |
00:43.23 | *** join/#openzaurus james_lan-zaurus (~zic@156.26.48.28) |
01:15.31 | *** join/#openzaurus gaurdian (cncvkb@12-213-124-251.client.attbi.com) |
01:24.17 | *** join/#openzaurus james_lan-zaurus (~zic@156.26.48.28) |
01:42.23 | mdz | kergoth`bbl: thanks a lot for working with rkrusty on the opie debs |
01:56.36 | *** join/#openzaurus ulyx (~ulyx@modemcable120.184-130-66.que.mc.videotron.ca) |
02:02.49 | *** join/#openzaurus nikki (~nikki@202.57.90.79) |
02:17.52 | *** join/#openzaurus ulyx (~ulyx@modemcable120.184-130-66.que.mc.videotron.ca) |
02:41.22 | kergoth`bbl | mdz: not a problem, glad to help out |
02:41.46 | KeyserSoze | kergoth`bbl: ever see this before: ***BUG in Autoconf--please report*** AC_TRY_DLOPEN_SELF |
02:42.39 | kergoth`bbl | eek |
02:42.43 | kergoth`bbl | nope, never seen that |
02:42.59 | KeyserSoze | darn. |
02:43.25 | KeyserSoze | i have make version 3.79.1. what's yours? |
02:43.35 | kergoth`bbl | 3.80 atm |
02:43.42 | kergoth`bbl | but was using 3.79.1 before with no problems |
02:44.00 | KeyserSoze | my autoconf is 2.13, and automake is 1.4-p5 |
02:45.19 | KeyserSoze | when i type "emerge -s autoconf", it says 2.54 is the latest, and that i have 2.54. autoconf itself reports 2.13 |
03:05.37 | hardwire | slow scan TV scares kitties |
03:05.38 | hardwire | more at 11 |
03:09.35 | *** join/#openzaurus mchouinar (~dieu@modemcable120.184-130-66.que.mc.videotron.ca) |
03:21.57 | *** join/#openzaurus caffeine (~khedspet@clt74-76-015.carolina.rr.com) |
03:34.05 | *** join/#openzaurus CrazyGogo (~crazygo@pD9E1F8FD.dip.t-dialin.net) |
03:45.47 | *** join/#openzaurus BiGBiGYLLaMa (~llama@pD9E1F8FD.dip.t-dialin.net) |
03:52.21 | hardwire | augh |
03:52.25 | hardwire | is slashnet dead? |
04:13.47 | KeyserSoze | kergoth: are you here? |
04:14.00 | KeyserSoze | i tried building again, and got this error: |
04:14.12 | KeyserSoze | configure: warning: CC= arm-linux-gcc: invalid host type |
04:15.03 | KeyserSoze | /usr/local/arm/2.95.3/bin is in my path, though. |
04:15.18 | kergoth | your configure line is wrong |
04:15.21 | kergoth | paste it |
04:15.45 | KeyserSoze | where is it? |
04:16.18 | KeyserSoze | configure: running /bin/sh './configure' --prefix=/usr '--host' 'arm-linux' '--build' 'i386-linux' '--prefix=/usr' '--sysconfdir=/etc' '--disable-ltdl-install' 'CC= arm-linux-gcc' 'CFLAGS=-I/home/gazicm/projects/OZ/buildroot-exported/output/staging/include -march=armv4 -mtune=strongarm1100 -mapcs-32 -fexpensive-optimizations -fomit-frame-pointer -O2 -fpermissive' 'LDFLAGS=-L/home/gazicm/projects/OZ/buildroot-exported/output/staging/li |
04:16.18 | KeyserSoze | -rpath-link,/home/gazicm/projects/OZ/buildroot-exported |
04:16.21 | KeyserSoze | is that it? |
04:16.33 | kergoth | yeah, that configure line isnt right.. |
04:16.38 | kergoth | what are you building? |
04:16.44 | kergoth | i mean, what is the buildroot building |
04:16.56 | KeyserSoze | i did "make", which i believe does "make world" |
04:17.00 | kergoth | look at the 'entered' and 'leaving' messages |
04:17.01 | kergoth | nonono |
04:17.05 | kergoth | buildroot builds a fuckload of packages |
04:17.08 | kergoth | i need to know which one is failing |
04:17.19 | KeyserSoze | oh, okay. sorry, checking now. |
04:17.37 | kergoth | np |
04:17.39 | KeyserSoze | Leaving directory `/home/gazicm/projects/OZ/buildroot-exported/packages/libtool' |
04:17.48 | KeyserSoze | (that was right after the error messages) |
04:18.01 | kergoth | libtool is built um |
04:18.05 | kergoth | buildroot isnt putting that CC= there |
04:18.14 | kergoth | what distribution are you using? |
04:18.16 | KeyserSoze | oh, crap. that was after "***BUG in Autoconf--please report*** AC_TRY_DLOPEN_SELF" |
04:18.29 | KeyserSoze | i thought updating make got rid of that, but it's still there a few pages up. |
04:18.36 | kergoth | what distribution? |
04:18.40 | KeyserSoze | gentoo |
04:19.02 | kergoth | well you're running into issues because of the autoconf used |
04:19.05 | kergoth | definitely distributionisms |
04:19.28 | kergoth | note that def-vars/* facilitates a means of specifying what autoconf/automake to run |
04:19.38 | kergoth | type autoconf<tab><tab> in console |
04:19.52 | KeyserSoze | autoconf autoconf-2.13 autoconf-2.53a autoconf-2.54 |
04:19.56 | kergoth | bingo |
04:20.04 | kergoth | type |
04:20.07 | kergoth | file `which autoconf` |
04:20.18 | KeyserSoze | /usr/bin/autoconf |
04:20.22 | kergoth | no |
04:20.23 | KeyserSoze | oops, sorry |
04:20.26 | kergoth | file `which autoconf` |
04:20.41 | KeyserSoze | /usr/bin/autoconf: symbolic link to ../lib/autoconf/ac-wrapper.pl |
04:20.47 | kergoth | yeah, thats what i figured |
04:20.52 | kergoth | thats why your --version reports what it does |
04:21.03 | kergoth | that wrapper figures out which version of autoconf to run when it gets called from a build dir |
04:21.10 | kergoth | calling it from where you're at defaults to 2.13 |
04:21.17 | KeyserSoze | is that good, or bad? |
04:21.26 | kergoth | good, its the only sane way to handle autoconf versioning issues |
04:21.37 | kergoth | but whichever autoconf the wrapper is running to build libtool is broken |
04:22.04 | kergoth | KeyserSoze: cd build/libtool; autoconf --version |
04:22.24 | KeyserSoze | -bash: cd: build/libtool: No such file or directory |
04:22.24 | KeyserSoze | Autoconf version 2.13 |
04:22.32 | KeyserSoze | why would that happen? |
04:22.44 | kergoth | cd build/libtool* |
04:22.49 | kergoth | i don't remember the exact directory name |
04:23.18 | KeyserSoze | <PROTECTED> |
04:23.18 | KeyserSoze | glibc-2.2.4 gzip-1.3.5 hostap-2002-10-12 ipkg-x86 ipktemp libtool-1.4.3 linux oz-base qt-2.3.2 |
04:23.22 | KeyserSoze | 1.4.3? |
04:23.25 | kergoth | yep |
04:23.29 | kergoth | cd build/libtool-1.4.3 |
04:23.31 | kergoth | autoconf --version |
04:23.44 | KeyserSoze | autoconf (GNU Autoconf) 2.54 |
04:23.44 | KeyserSoze | Written by David J. MacKenzie and Akim Demaille. |
04:23.44 | KeyserSoze | Copyright 2002 Free Software Foundation, Inc. |
04:23.44 | KeyserSoze | This is free software; see the source for copying conditions. There is NO |
04:23.44 | KeyserSoze | warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. |
04:23.50 | kergoth | there ya go |
04:23.50 | KeyserSoze | wow, 2.54. |
04:23.54 | kergoth | its 2.54 failing |
04:24.14 | kergoth | hehe, i've dealt with so many stupid build issues.. |
04:24.30 | KeyserSoze | is it supposed to use 2.54, and my 2.54 is bad? or is it supposed to use 2.13, and it is using 2.54 incorrectly? |
04:24.39 | kergoth | we require 2.5x |
04:24.48 | kergoth | you're seeing a failure that reports an error message, from 2.54 |
04:25.00 | kergoth | nothing it can do could cause that unless theres a legitimate bug in autoconf 2.54 |
04:25.05 | kergoth | uninstall it |
04:25.15 | kergoth | the wrapper will fall back to 2.53a |
04:25.20 | kergoth | which may or may not work, but its worth a shot |
04:25.41 | KeyserSoze | okay. i'll try that. |
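kergoth's description of the wrapper above — it inspects the build directory to decide which installed autoconf to run, defaulting to 2.13 otherwise — can be modeled roughly like this. This is a hypothetical sketch, not Gentoo's actual ac-wrapper.pl; the `AC_PREREQ` heuristic, the function name, and the version list are all assumptions for illustration:

```python
import os
import re

def pick_autoconf(build_dir, installed=("2.13", "2.53a", "2.54")):
    """Rough model of the wrapper kergoth describes: if the build dir's
    configure sources ask for autoconf 2.5x, run the newest installed
    2.5x; otherwise fall back to 2.13."""
    for name in ("configure.ac", "configure.in"):
        path = os.path.join(build_dir, name)
        if os.path.exists(path):
            with open(path) as f:
                text = f.read()
            # A 2.5x prerequisite in the source selects a modern autoconf.
            if re.search(r"AC_PREREQ\(\[?2\.5", text):
                modern = [v for v in installed if v.startswith("2.5")]
                if modern:
                    return "autoconf-" + modern[-1]
            break
    return "autoconf-2.13"
```

This also matches the symptom in the log: removing 2.54 from `installed` makes the wrapper resolve to 2.53a instead.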
04:32.42 | *** join/#openzaurus mewyn (~knoppix@dsl081-228-057.chi1.dsl.speakeasy.net) |
04:33.25 | KeyserSoze | removing 2.54 seemed to take the symlink "autoconf" with it. hopefully re-emerging 2.53a (which remained when 2.54 was removed, anyway), will put the symlinks back. |
04:33.38 | kergoth | eh, thats lame |
04:33.42 | kergoth | k |
04:34.27 | KeyserSoze | removing 2.54 also removed 2.13, but re-emerging 2.53a put 2.13 back in place, and plain "autoconf" |
04:34.47 | kergoth | ah |
04:35.01 | mewyn | gentoo time |
04:35.35 | mewyn | kergoth: are you using my server for anything but irc? :) |
04:35.44 | kergoth | not usually, no |
04:35.54 | kergoth | probably will soon tho |
04:35.57 | kergoth | with this many devices supported |
04:36.05 | kergoth | I'll need to keep like 12 buildroots around |
04:36.18 | mewyn | eep |
04:36.33 | mewyn | that's gonna take a lot of space |
04:36.41 | kergoth | yeah, at around a gig and a half a pop |
04:36.49 | mewyn | hmmm |
04:36.51 | kergoth | so i want to distribute them some |
04:36.53 | mewyn | ah |
04:37.00 | kergoth | ehhe |
04:37.15 | mewyn | when i get a job |
04:37.20 | mewyn | i'll have a new machine |
04:37.33 | mewyn | 3.06, dual mirrored 240G |
04:37.36 | mewyn | 2G ram |
04:38.06 | mewyn | it'll be always on, but behind the firewall. you'll have to hop to it through ssh :) |
04:39.06 | mewyn | once i get that, you can keep 50 buildroots on it :) |
04:39.31 | KeyserSoze | dual mirrored 240GB? |
04:39.35 | KeyserSoze | that's a lot of disk |
04:39.50 | mewyn | nope |
04:40.01 | mewyn | that's not where it ends |
04:40.22 | mewyn | 240G isn't enough |
04:40.37 | mewyn | i plan on eventually getting a 2TB array going |
04:40.54 | KeyserSoze | that's a lot of DVDs and pirated windows software.... |
04:40.59 | KeyserSoze | or are you using it for something else? |
04:41.01 | KeyserSoze | :D |
04:41.28 | mewyn | i'm ripping my dvds to the 2TB array |
04:41.43 | mewyn | plus i won't delete anything |
04:42.05 | kergoth | scsi? |
04:42.08 | kergoth | or ide? |
04:42.13 | mewyn | IDE |
04:42.18 | mewyn | no reason to go scsi |
04:42.35 | chouimat | mewyn: speed and freeing the cpu |
04:43.10 | KeyserSoze | 2TB/(200GB/disk)*(2, for mirroring)=20 hard drives |
04:43.15 | mewyn | with a 3.06 w/ht, and a 3ware ide raid controller |
04:43.47 | kergoth | if you're going all out, use both parity and mirroring |
04:43.52 | kergoth | keep a parity drive as well |
04:43.58 | kergoth | so 21 |
04:43.59 | kergoth | :) |
04:44.06 | KeyserSoze | lol. |
04:44.43 | KeyserSoze | yeah, drives are only about $1 a GB, nowadays. so it'll only be $2100. |
04:44.56 | KeyserSoze | oops, mistaken again. $4200. |
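KeyserSoze's back-of-the-envelope math above — 2 TB mirrored on 200 GB drives, plus kergoth's suggested parity drive, at roughly $1/GB — works out like this. The helper function is purely illustrative, using the period prices quoted in the conversation:

```python
import math

def drives_needed(total_gb, drive_gb, mirrored=False, extra_parity=0):
    """Drive count for a target usable capacity: doubled if mirrored,
    plus any dedicated parity drives."""
    n = math.ceil(total_gb / drive_gb)
    if mirrored:
        n *= 2
    return n + extra_parity

# 2 TB mirrored on 200 GB drives, plus one parity drive:
n = drives_needed(2000, 200, mirrored=True, extra_parity=1)
cost = n * 200 * 1.00   # ~$1 per GB at the time
print(n, cost)          # 21 drives, $4200
```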
04:45.08 | mewyn | 2 arrays of 4 320G raid 5 |
04:45.20 | mewyn | 2800 including ide raid controller |
04:46.04 | mewyn | 8 320G drives |
04:46.09 | KeyserSoze | huh? each array has 4 320GB drives? there are 320GB drives? |
04:46.17 | mewyn | not yet |
04:46.24 | KeyserSoze | ah, okay. |
04:46.30 | mewyn | give it 3 more months |
04:46.40 | mewyn | maxtor has the tech to do it |
04:47.05 | kergoth | 8 drives? thats only 2.5 terabytes. not going mirroring? |
04:47.29 | mewyn | i wouldn't do raid 1 with that much space |
04:47.33 | kergoth | well less, given the parity on each |
04:47.33 | mewyn | too wasteful |
04:47.50 | kergoth | its not raid 1 when you're doing 5 as well, but i don't recall the # |
04:47.54 | KeyserSoze | what's 4 drives in raid 5 do? |
04:48.19 | kergoth | what do ya mean by do? |
04:48.25 | kergoth | its just a striping, parity array |
04:48.33 | kergoth | so you're striping across 3 drives with the 4th drive for parity |
04:48.41 | mewyn | in raid 5 you effectively lose 1 drive in the array. the number of drives has to be >= 3 and <= 32 |
04:49.02 | kergoth | yep |
04:49.09 | KeyserSoze | is there more redundancy, but the same amount of storage space as a 3-disk raid-5 setup? |
04:49.16 | kergoth | I have 2 three channel caching scsi raid controllers sitting here |
04:49.26 | mewyn | and in raid 5 parity is striped across the array |
04:49.28 | kergoth | KeyserSoze: no, same safety as a 3 disk array, just 1 disk more of space |
04:49.37 | kergoth | not exactly |
04:49.44 | kergoth | you can choose which to do, in most cases |
04:49.52 | kergoth | either to allocate a separate drive, or to stripe it |
04:49.57 | kergoth | there are advantages to each |
04:50.15 | kergoth | at least, you can choose in scsi with a decent controller |
04:50.16 | kergoth | i dunno about ide |
04:50.56 | mewyn | with a raid 5 array, you can lose one drive and still be recoverable |
04:51.26 | KeyserSoze | if you have 3 100GB drives in raid 5, you have 200gb of storage, at twice the bandwidth of a single drive, correct? |
04:51.42 | KeyserSoze | you're saying you can add 1 100gb drive, and get 100gb more storage? |
04:51.43 | mewyn | almost 3 times the bandwidth |
04:51.48 | kergoth | um |
04:51.54 | kergoth | its not 'bandwidth' |
04:51.57 | mewyn | yes |
04:51.58 | mewyn | true |
04:52.09 | mewyn | it's speed really |
04:52.12 | mewyn | transfer speed |
04:52.14 | KeyserSoze | no, it can't be 3 times the bandwidth. 1/3 of the data isn't data you care about, it's parity stuff. |
04:52.17 | kergoth | but the performance of the array he mentions depends on whether the parity is striped or not |
04:52.44 | KeyserSoze | http://dictionary.reference.com/search?q=bandwidth |
04:52.47 | kergoth | and its not straight up 2* or 3* the performance |
04:52.53 | KeyserSoze | def 2: The amount of data that can be passed along a communications channel in a given period of time. |
04:52.59 | kergoth | it depends on the distribution of the data |
04:53.23 | kergoth | KeyserSoze: right. that refers to the capabilities of the channel |
04:53.30 | kergoth | the ide or scsi bus is capable of quite a bit more than your drives are actually doing |
04:53.37 | kergoth | a lot more, in fact |
04:53.48 | mewyn | yah |
04:54.22 | mewyn | most drives don't exceed 10-35MB/sec. the controllers are now up to 150MB/sec with SATA |
04:54.39 | kergoth | yep. its all hype |
04:54.46 | mewyn | yup |
04:54.48 | KeyserSoze | yeah, ata100 can do 100MB/s, and most hard drives have a peak read speed below 50MB/s from the medium, and the buffer (8MB being the largest available for IDE drives) can probably saturate ATA100. what does that have to do with raid 5? |
04:55.40 | kergoth | the point is, i'm using bandwidth to describe how much data can be put down the channel |
04:55.46 | kergoth | not how much actually _is_ being put down the channel |
04:56.05 | kergoth | they're quite independent, and bandwidth isnt the way to describe it |
04:56.26 | kergoth | :) can't type tonight |
04:56.31 | KeyserSoze | you can't put more down the channel than can be written or read from the disk, except for short periods before the cache fills (or empties) |
04:56.37 | kergoth | eh? |
04:56.46 | kergoth | you're assuming those drives are the only ones on the channel in question |
04:56.59 | kergoth | you _can_, you just _aren't_ due to the way you designed the array |
04:57.48 | kergoth | anyway |
04:57.52 | kergoth | sounds like a good plan mike |
04:57.57 | kergoth | I want to throw together a new box myself |
04:57.59 | kergoth | this one is aging |
05:01.57 | *** join/#openzaurus jmhodges (~jmh@jmhodge.res.bgsu.edu) |
05:03.48 | *** join/#openzaurus mewyn` (~knoppix@dsl081-228-057.chi1.dsl.speakeasy.net) |
05:03.50 | mewyn` | gah |
05:03.53 | mewyn` | damn pos |
05:04.13 | *** join/#openzaurus JasonNJ (~perlow@ool-435125f3.dyn.optonline.net) |
05:04.17 | KeyserSoze | if x is the capacity of a single drive, b is its bandwidth, and n is the number of identical drives of size x and bandwidth b in a RAID 5 array, what is the useful storage size of the array, and what is its theoretical maximum bandwidth, assuming that the combined bandwidth from the disks is not limited by the I/O between the disks and the disk controllers? |
05:05.11 | JasonNJ | who cares. Its friday. :) |
05:05.22 | mewyn` | storage is x * (n - 1) |
05:05.53 | mewyn` | theoretical throughput is n * b |
05:06.18 | mewyn` | actually n * b as long as it's <= bus speed |
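mewyn`'s two formulas — usable storage x * (n - 1), theoretical throughput n * b capped by the bus — can be sketched quickly. These are illustrative helpers for the claims made in this exchange, not part of any tool discussed here:

```python
def raid5_usable(drive_size, n):
    """Usable capacity of an n-drive RAID 5 array: one drive's worth of
    space goes to distributed parity."""
    if n < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return drive_size * (n - 1)

def raid5_read_rate(drive_rate, n, bus_rate=float("inf")):
    """Theoretical streaming-read rate per the discussion: parity blocks
    are skipped on reads, so all n spindles deliver data, capped by the
    bus speed."""
    return min(n * drive_rate, bus_rate)

print(raid5_usable(100, 3))    # usable GB for three 100 GB drives
print(raid5_read_rate(30, 3))  # MB/s for three 30 MB/s drives, per mewyn
```

As kergoth points out later, real arrays fall short of n * b; this only encodes the theoretical ceiling being argued about.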
05:06.33 | KeyserSoze | and the useful bandwidth would be 2*b |
05:06.34 | KeyserSoze | how can you fit the extra information (parity) for any amount of drives on only one of them? |
05:07.03 | mewyn` | raid 4 is parity on one drive, raid 5 is on all |
05:07.05 | KeyserSoze | yes, but the "useful" bandwidth would only be 2*b in a 3 disk system, since 1/3 of the information isn't even wanted, it's just overhead. |
05:07.15 | mewyn` | that's not necessarily true |
05:07.30 | kergoth | its not overhead, it wont even go down the communications channel |
05:07.32 | mewyn` | and it depends on the action taken and a bunch of other factors |
05:07.41 | mewyn` | kergoth is right |
05:07.41 | kergoth | in most cases anyway |
05:07.42 | KeyserSoze | if the size is x*(n-1), then there must be 1*x that isn't cared about |
05:08.08 | mewyn` | yes, in raid 5 it is distributed throughout the array |
05:08.10 | KeyserSoze | kergoth: if it doesn't go down the communications channel, then it cannot contribute to "useable bandwidth" |
05:08.21 | kergoth | besides which, striping isnt a straight number-of-drives times individual-drive performance |
05:09.40 | mewyn` | KeyserSoze: in read operations, raid 5 /is/ n * drive speed as long as it doesn't max out the bus |
05:10.01 | KeyserSoze | look, if you have 3 drives that are each 100gb, then there is 300gb total. everyone agrees that only 200gb of that is "useful" right? if each drive can read all its info in 10s, then it'll take 10s to read everything, but since there are only 200gb the user wants, the bandwidth is 200gb/10s, which is twice 100gb/10s |
05:10.22 | kergoth | nope |
05:10.24 | mewyn` | 300gb is useful |
05:10.28 | kergoth | the data is striped across three drives |
05:10.37 | KeyserSoze | useful to who? 1/3 of the data is redundant. |
05:10.39 | mewyn` | 100 is used for parity |
05:10.48 | mewyn` | which is the reason you are using raid 5 |
05:10.50 | kergoth | even given its not all useful, its still not 2* |
05:10.53 | KeyserSoze | you cannot fit a 300gb file in a 3 disk raid 5 array of 100gb disks. |
05:11.09 | KeyserSoze | kergoth: i know it's not actually 2*b; the theoretical maximum is 2*b, though. |
05:11.14 | kergoth | striping across three drives means to read a given file, all three drives are reading the data |
05:11.32 | kergoth | each one picks up a stripe of the file in question |
05:11.47 | kergoth | hence for that period of time, its 3* |
05:11.51 | kergoth | but over time it wont be |
05:12.20 | KeyserSoze | if it takes 10 seconds to read a 100gb file from 1 disk, it'd take (ignoring overhead) 10s to read a 200gb file from 3 disks. the bandwidth that the user cares about (the person who owns the data, and wants it back) is twice as much as 1 disk |
05:12.20 | kergoth | it wont, even not using parity of course |
05:13.52 | kergoth | you're throwing numbers out as though the parity data is all on one disk. |
05:13.54 | kergoth | that's not the case |
05:14.03 | kergoth | all three drives are reading your actual data, not two |
05:14.03 | KeyserSoze | it doesn't matter which disk it's on. |
05:14.07 | kergoth | yes, it does. |
05:14.14 | kergoth | if it were only on two, you're only striping across 2 drives |
05:14.19 | KeyserSoze | there is only 200gb of actual data to be read, and it can be read in 10s. |
05:14.22 | kergoth | which is by its very nature slower than striping across three |
05:14.33 | kergoth | regardless of use of parity, in the striped parity information case |
05:14.47 | kergoth | striping across 3 drives is faster than 2. period. |
05:15.00 | kergoth | and in the distributed parity case, you're striping across 3 instead of 2 |
05:15.02 | kergoth | its faster |
05:15.12 | kergoth | there are benchmarks comparing raid 4 and raid 5 showing this |
05:15.14 | kergoth | look it up. |
05:16.01 | KeyserSoze | how much data can you fit in a 3 disk raid 5 array of 100gb disks? |
05:16.12 | mdz | raid 4 and raid 5 should be about the same for normal reads |
05:16.16 | mdz | but raid 5 will be much faster for writes |
05:16.36 | kergoth | well cool, didnt realize |
05:16.53 | mdz | the only difference is where the parity information is stored |
05:17.00 | *** join/#openzaurus mewyn (~knoppix@dsl081-228-057.chi1.dsl.speakeasy.net) |
05:17.02 | mewyn | god damnit |
05:17.35 | kergoth | yes i realize that, but on a read of data, you have 3 drives reading your data instead of two. |
05:17.46 | kergoth | parity checking slows you down, but it does in both cases |
05:18.02 | mdz | yeah, the parity information is only used in degraded mode or during a write |
05:18.20 | KeyserSoze | kergoth: it is 200gb, right? and if it takes 10s to read 100gb from 1 drive, how long does it take to get your 200gb from the 3 disk raid array? |
05:19.21 | kergoth | not 10s. it would be 10s if only two of the drives were reading that 200gb |
05:19.25 | mewyn | KeyserSoze: if you have 3 drives in a raid 5 array, and all of them are 30MB/s, you will get just under 90MB/s |
05:19.26 | kergoth | but that is not the case. |
05:19.32 | kergoth | the data is striped across all three. |
05:19.46 | mewyn | that's read speeds |
05:19.52 | KeyserSoze | mewyn: you will get that from the physical media. but 1/3 of the data is useless to the owner. |
05:20.02 | kergoth | you're not listening |
05:20.10 | kergoth | as mdz says, the parity data isnt used in most cases |
05:20.13 | mdz | KeyserSoze: all 3 drives will be active during a large enough read |
05:20.19 | mdz | KeyserSoze: since they all hold data |
05:20.21 | mewyn | KeyserSoze: on read, you don't touch the parity |
05:20.22 | kergoth | exactly |
05:20.24 | KeyserSoze | it doesn't matter if they are active. |
05:20.29 | kergoth | haha |
05:20.35 | mdz | KeyserSoze: er, yes it does. more spindles = higher transfer rate |
05:20.39 | kergoth | exactly |
05:21.01 | kergoth | not to mention striping across three drives means the chance of a head being near whatever data you need is higher, as there are more heads involved |
05:21.05 | kergoth | pure probability |
05:21.18 | kergoth | that'll improve latency, but thats another discussion altogether :) |
05:21.36 | *** join/#openzaurus Fed|X| (x@12-232-113-54.client.attbi.com) |
05:21.40 | mdz | it really depends on your access pattern |
05:21.45 | kergoth | good point |
05:21.57 | mdz | raid 3 is sometimes faster for big sequential operations |
05:21.58 | kergoth | but striped raid arrays are supreme in random access for that reason |
05:22.02 | kergoth | yeah |
05:22.24 | mdz | raid 5 should be faster than raid 4 for small random reads |
05:22.56 | kergoth | right, and thats just considering latency |
05:23.11 | mdz | I've never actually used raid 3 or 4 in reality, though, I only know the theory |
05:23.15 | kergoth | 5 will be faster than 4 with regard to bandwidth in general for the reasons we mentioned earlier |
05:23.19 | mewyn | raid 2, 3 and 4 are pretty much obsolete |
05:23.40 | mdz | yeah, it should be faster overall if you can keep all N disks busy |
05:23.43 | KeyserSoze | if it takes 1 hour to fill one 100GB disk, how long will it take to fill a 3 disk raid 5 array? the thing will fill in 1 hour, and it will hold 200GB. |
05:23.57 | kergoth | KeyserSoze: now we're talking about writes? |
05:23.59 | mdz | that math makes no sense |
05:24.03 | kergoth | KeyserSoze: writes are a whole different ballgame |
05:24.05 | kergoth | that too |
05:24.09 | KeyserSoze | kergoth: what's the difference? |
05:24.20 | kergoth | writes write parity data, slows you down further |
05:24.27 | kergoth | not to mention your base flawed logic as mdz says |
05:24.32 | mdz | in order to write 200GB to a raid5 array of 3 100GB disks, you are actually writing 300G |
05:24.35 | KeyserSoze | what base flawed logic? |
05:24.44 | KeyserSoze | mdz: duh |
05:24.44 | mewyn | writes are fairly slow with raid 5 |
05:24.55 | mewyn | each write op needs 2 write and 2 read ops |
05:24.58 | kergoth | 3 drives reading 200gb is not twice the speed of 1 drive reading 100gb |
05:25.03 | kergoth | regardless of parity |
05:25.08 | mdz | and writing to 2 drives at once |
05:25.48 | mdz | bah, this is #openzaurus |
05:25.54 | kergoth | :) |
05:25.56 | KeyserSoze | what base flawed logic? |
05:25.59 | mdz | for lots of off-topic fun go to #zaurus |
05:26.05 | mewyn | KeyserSoze: plus your "how long will it take" logic is flawed, because too many factors need to be taken into account. you need to look at transfer rates only. |
05:26.10 | KeyserSoze | what math makes no sense? |
05:26.18 | kergoth | i just pointed it out. you theorize that reading from 3 drives is twice the speed of reading from one |
05:26.24 | kergoth | which regardless of parity is flawed |
05:26.53 | kergoth | and as we already pointed out, raid 5 is faster than raid 4, which negates your argument that parity location is irrelevant |
05:26.56 | kergoth | which it isnt. |
05:27.03 | kergoth | it affects performance, both in the case of latency and bandwidth |
05:27.46 | mdz | that's the whole reason for raid 5 |
05:28.05 | kergoth | exactly. but data loss has the potential to be worse because you can lose parity information |
05:28.13 | kergoth | hehe |
05:28.25 | KeyserSoze | are you saying this is wrong: "if it takes 1 hour to fill one 100GB disk, how long will it take to fill a 3 disk raid 5 array? the thing will fill in 1 hour, and it will hold 200GB." |
05:28.25 | mdz | bedtime |
05:28.25 | mewyn | http://www.raidweb.com/whatis.html |
05:28.41 | mdz | KeyserSoze: try it |
05:28.44 | kergoth | KeyserSoze: yes. thats wrong. |
05:28.54 | KeyserSoze | kergoth: does it not hold 200gb? |
05:28.58 | mewyn | KeyserSoze: you really can't judge that way. |
05:28.59 | KeyserSoze | or does it not fill in 1 hour? |
05:29.00 | mdz | of course it holds 200GB |
05:29.03 | kergoth | it does hold 200gb |
05:29.21 | mewyn | because drive access has hundreds of factors |
05:29.26 | kergoth | you're taking too simplistic a viewpoint, and failing to see the real world influences on the performance |
05:29.27 | KeyserSoze | what's the quickest it could ever fill up in? |
05:29.55 | KeyserSoze | dude, i'm talking about a theoretical maximum. |
05:30.26 | mewyn | KeyserSoze: and we are saying you are simplifying too much to give you a theoretical maximum |
05:30.52 | kergoth | his theoretical maximum write is probably closer to reality than his theoretical maximum read that we already proved wrong. |
05:31.07 | KeyserSoze | it cannot ever possibly take less time to fill a 3 disk raid array than it does to fill one disk. |
05:31.30 | kergoth | yes it can, but not in the case of raid 5. |
05:31.35 | kergoth | er |
05:31.41 | kergoth | no, you're correct |
05:31.44 | kergoth | heh |
05:31.44 | KeyserSoze | no, every disk has to fill up, or it is not full. |
05:31.49 | KeyserSoze | okay, good then. |
05:31.51 | kergoth | right |
05:32.23 | KeyserSoze | so, the max write performance for a 3 disk raid 5 array is 2 times that of a single drive. |
05:33.04 | KeyserSoze | for the read performance to be faster, there has to be less data read than the amount of data written. |
05:33.47 | mewyn | i'm really sick of arguing this |
05:33.48 | KeyserSoze | what data doesn't need to be read? and if it isn't read, then why was it written? |
05:33.50 | mewyn | http://www.pcguide.com/ref/hdd/perf/raid/levels/single.htm |
05:33.58 | mewyn | read up on it. |
05:34.47 | kergoth | KeyserSoze: mdz already explained the parity information isnt checked on every read. |
05:34.51 | kergoth | yeah, fuck it |
05:34.55 | kergoth | i'll go back to real work also |
05:36.02 | mewyn | it's not checked on any read |
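The parity being argued over is a plain XOR across the data blocks of each stripe, which is why losing any single drive is recoverable: XOR the surviving blocks and the missing one falls out, and why reads never need to touch it when the array is healthy. A toy illustration, with short byte strings standing in for blocks:

```python
from functools import reduce

def xor_blocks(*blocks):
    """XOR equal-length blocks byte by byte (how RAID 5 parity is computed)."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

d0, d1 = b"\x01\x02\x03", b"\x10\x20\x30"   # two data blocks in one stripe
parity = xor_blocks(d0, d1)                 # stored on the third drive

# Drive holding d1 dies: XOR of the survivors reconstructs it.
assert xor_blocks(d0, parity) == d1
```

The same property explains the small-write penalty mewyn mentions: updating one data block means reading the old data and old parity, then writing new data and new parity.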
06:01.02 | *** join/#openzaurus SoopaT (~soopaman@h24-66-55-163.wp.shawcable.net) |
06:19.42 | *** join/#openzaurus james_lan-zaurus (~zic@ip68-102-114-12.ks.ok.cox.net) |
06:30.35 | *** join/#openzaurus SoopaT (~soopaman@h24-66-55-163.wp.shawcable.net) |
06:38.38 | *** join/#openzaurus mewyn (~knoppix@dsl081-228-057.chi1.dsl.speakeasy.net) |
06:50.48 | *** join/#openzaurus billytwowilly (~chris@24.86.147.212) |
07:25.47 | LordVan | my cf reader works here.. |
07:47.25 | *** join/#openzaurus frankps (~frankps@10.80-202-169.nextgentel.com) |
08:06.30 | LordVan | weird.. it complains about links it can't create .. |
08:07.04 | *** join/#openzaurus frankps (~frankps@10.80-202-169.nextgentel.com) |
08:07.05 | kergoth | what does? |
08:07.12 | *** join/#openzaurus aPoX (apox@apox.warez.com) |
08:07.34 | LordVan | kergoth: err never mind .. it's bdicty and the packages are a bit dumb ..(every package tries to do the same symlink..) |
08:10.20 | LordVan | kergoth: you plan to add python packages to qz feed ? |
08:12.22 | LordVan | btw should opie-login work ok? |
08:13.33 | kergoth | python's in our buildsystem. |
08:13.38 | kergoth | packages will be in the feed when i update it |
08:13.44 | kergoth | opie-login works fine, but only for root |
08:13.51 | kergoth | for the moment anyway |
08:14.03 | LordVan | nice :) |
08:14.06 | LordVan | thanks |
08:15.22 | kergoth | np |
08:20.50 | LordVan | kergoth: i found kind of a bug i think.. |
08:20.50 | LordVan | kergoth: when i use ntp to set the time.. |
08:21.13 | LordVan | kergoth: the date in the 'main' tab isn't changed and if i press ok then it saves the old date again.. |
08:21.38 | LordVan | kergoth: i'll try an upgrade.. |
08:21.40 | kergoth | k |
08:23.24 | LordVan | is rc2 newest in feed too ? |
08:23.33 | kergoth | ? |
08:23.39 | kergoth | unstable is the newest. |
08:23.42 | kergoth | unstable doesnt have a version. |
08:23.44 | kergoth | rc2 is testing |
08:23.57 | LordVan | kergoth: everything up2date .. |
08:26.45 | LordVan | 64-0 image is nice (when you got 128MB sd ;) |
08:39.48 | LordVan | why does installing get really slow after a few packages? (to '/') is it a filesystem issue? |
08:41.29 | kergoth | not sure |
08:44.35 | LordVan | i see |
08:51.19 | LordVan | kergoth: something completely different .. |
08:51.39 | LordVan | kergoth: do you plan to put non-gpl'd software into (maybe an extra) feed ? |
08:52.13 | kergoth | such as? |
08:52.24 | LordVan | BDicty |
08:52.27 | kergoth | I dont know of any non-GPL Z apps that are free |
08:52.30 | kergoth | ah |
08:52.31 | LordVan | is a commercial app that needs a key |
08:52.38 | LordVan | you can use it for free for 30 days .. |
08:52.40 | LordVan | as a trial |
08:52.45 | LordVan | then you need to enter registration keys |
08:52.54 | kergoth | I dont know if i want demo/shareware apps in the feeds |
08:52.56 | kergoth | I'll think about it |
08:53.12 | LordVan | kergoth: well imho an extra feed might be nice for things like this |
08:53.39 | LordVan | cuz the ipkg's of beiks are stupid :) |
08:53.46 | LordVan | btw should i format my SD ext2 ? |
08:53.49 | LordVan | or keep vfat? |
09:41.11 | LordVan | i got a really weird bug.. |
09:44.27 | *** join/#openzaurus billytwowilly (~chris@24.86.147.212) |
09:45.35 | *** join/#openzaurus SoopaKDE (~root@h24-66-55-163.wp.shawcable.net) |
09:47.04 | SoopaKDE | does anyone know where i can get the openzaurus source packages? |
09:48.15 | LordVan | SoopaKDE: http://www.openzaurus.org/oz_website/faq/faq?id=84 |
09:48.34 | SoopaKDE | thanx |
09:49.45 | LordVan | SoopaKDE: there's a nice search in the faq ;) |
09:49.58 | SoopaKDE | i have the build root |
09:50.09 | SoopaKDE | i was hoping to find the actual app packages |
09:52.50 | LordVan | well search the faq yourself then .. (i don't know ;) |
09:52.55 | LordVan | or wait til kergoth answers ;) |
09:53.38 | *** join/#openzaurus FrenkYo (~0_o@host216-144.pool80182.interbusiness.it) |
09:57.49 | SoopaKDE | i think kergoth ditched me yet again
10:10.38 | LordVan | ? |
10:37.46 | LordVan | weird .. how do i close mooview? |
11:03.50 | *** join/#openzaurus oob (~oob@81-5-138-97.dsl.eclipse.net.uk) |
11:05.36 | LordVan | weird.. my 'hotkeys' are borked.. |
11:06.17 | *** join/#openzaurus Piete (~abri@61.98.19.77) |
11:06.24 | Piete | hey guys |
11:06.58 | Piete | anyone else have problems with spontaneous suspends in rc2? |
11:20.16 | *** join/#openzaurus mark (~mark@s.westcott.freeuk.com) |
12:22.48 | *** join/#openzaurus Bovine (~moo@dsl-217-155-87-1.zen.co.uk) |
12:27.53 | *** join/#openzaurus ljp (~ljp@tf0140.peakpeak.com) |
12:44.20 | *** join/#openzaurus ljp (~ljp@tf0140.peakpeak.com) |
12:51.16 | *** join/#openzaurus FrenkYo (~0_o@host54-144.pool80182.interbusiness.it) |
13:28.06 | *** join/#openzaurus Walid (~wshaari@pc-62-30-151-169-hr.blueyonder.co.uk) |
13:55.27 | BiGBiGYLLaMa | how do i restart pcmcia services? |
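BiGBiGYLLaMa's question goes unanswered. A hedged sketch, assuming the image ships the stock pcmcia-cs init script and the cardctl utility (both standard on Zaurus images of this era, but paths may differ):

```shell
# Assumes pcmcia-cs is installed with its usual init script.
/etc/init.d/pcmcia restart        # restart cardmgr and re-probe sockets
# or, to bounce just the card in socket 0 without restarting cardmgr:
cardctl eject 0
cardctl insert 0
```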
14:10.03 | *** join/#openzaurus Epignosis (~Epignosis@pc3-lisb1-3-cust152.blfs.cable.ntl.com) |
14:10.57 | Epignosis | im having to manually start opie each time i reboot, is there any script i can change so it happens automagically?
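Epignosis gets no reply in-channel. A hedged sketch, assuming the image ships an /etc/init.d/opie script and uses plain sysvinit rc directories (the runlevel 5 directory below is an assumption; check the default runlevel in /etc/inittab first):

```shell
# Hedged sketch: make Opie start at boot by linking its init script
# into the default runlevel's rc directory. rc5.d is an assumption.
ln -s /etc/init.d/opie /etc/rc5.d/S99opie
# to start it by hand in the meantime:
/etc/init.d/opie start
```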
15:11.48 | *** join/#openzaurus psb154 (~user@modem-708.lemur.dialup.pol.co.uk) |
15:26.14 | *** join/#openzaurus chouimat (~dieu@modemcable120.184-130-66.que.mc.videotron.ca) |
15:27.18 | chouimat | morning |
15:29.21 | *** join/#openzaurus ulyx (~ulyx@modemcable120.184-130-66.que.mc.videotron.ca) |
15:38.35 | *** join/#openzaurus FrenkYo (~0_o@host143-144.pool80182.interbusiness.it) |
15:58.43 | *** join/#openzaurus treke|home (~ggilbert@lsanca2-ar29-4-41-064-064.lsanca2.elnk.dsl.genuity.net) |
15:58.51 | *** part/#openzaurus treke|home (~ggilbert@lsanca2-ar29-4-41-064-064.lsanca2.elnk.dsl.genuity.net) |
17:19.34 | *** join/#openzaurus chouimat (~dieu@modemcable120.184-130-66.que.mc.videotron.ca) |
17:33.19 | *** join/#openzaurus kurre (~kurre@ncircle.nullnet.fi) |
17:43.23 | *** join/#openzaurus gaurdian (axydil@12-213-124-251.client.attbi.com) |
17:49.25 | *** join/#openzaurus Mewyn (~mike@dsl081-228-056.chi1.dsl.speakeasy.net) |
17:59.54 | *** join/#openzaurus Walid (~wshaari@pc-62-30-151-169-hr.blueyonder.co.uk) |
18:12.08 | *** join/#openzaurus mewyn` (~knoppix@dsl081-228-057.chi1.dsl.speakeasy.net) |
18:21.12 | *** join/#openzaurus asys3 (~uwe@dialin-145-254-143-046.arcor-ip.net) |
18:26.43 | LordVan | is it normal that 'Creating symbolic links for task-opie-applets' takes more than half an hour? (still running)
19:20.28 | *** join/#openzaurus kurre (~kurre@ncircle.nullnet.fi) |
19:26.15 | mark|food | kurre: slicker kurre? |
19:32.36 | kurre | excuse me ? |
19:39.51 | *** join/#openzaurus SoopaKDE-2 (~root@h24-66-55-163.wp.shawcable.net) |
19:39.55 | SoopaKDE-2 | hello |
19:42.19 | mark|food | kurre: dont worry, you must be another kurre in finland |
19:48.25 | *** join/#openzaurus noda (~noda@modemcable063.97-200-24.mtl.mc.videotron.ca) |
19:59.19 | BiGBiGYLLaMa | screen -D |
20:04.11 | kurre | mark|food: jep, I do know that there exists another fellow here in finland with the same nick ... |
20:04.57 | kurre | has anyone noticed that the aqpkg software doesn't wrap the description lines ? |
20:05.27 | kurre | seems quite hard to decide if I should download the package, when you can't read the description |
20:37.19 | SoopaKDE-2 | can i grab the latest OZ sources from CVS? |
20:38.11 | kurre | i think you have to use bitkeeper |
20:39.19 | SoopaKDE-2 | and how do i get bitkeeper? |
20:44.00 | *** join/#openzaurus _ibz (~ibz@host217-34-76-229.in-addr.btopenworld.com) |
20:44.08 | _ibz | hi |
20:44.11 | _ibz | 1 sec |
20:48.55 | kergoth | ibot: tell SoopaKDE-2 about bitkeeper |
20:49.02 | *** join/#openzaurus _ibz (~ibz@host217-34-76-229.in-addr.btopenworld.com) |
20:49.34 | mewyn` | kergoth: you gonna need bk installed on zelda? |
20:49.46 | SoopaKDE-2 | so no debian packages? |
20:49.48 | SoopaKDE-2 | :( |
20:50.33 | kergoth | mewyn`: already installed it into my homedir and linked it into $HOME/bin |
20:50.37 | kergoth | mewyn`: :) |
20:51.31 | mewyn` | ah |
20:51.34 | mewyn` | ok |
20:53.03 | mewyn` | iesh, someone is selling a OTF encryption external hdd. problem is, it is 40 bit DES |
20:53.16 | kergoth | yeah saw that |
20:53.18 | kergoth | heh |
20:53.25 | kergoth | nothing spectacular, thats for sure
20:53.44 | mewyn` | anyone'd be a fool to use it |
20:53.52 | mewyn` | 40 bit des can be cracked in minutes
20:54.10 | kergoth | heheh |
20:54.15 | kergoth | and they propose it for banks and shit |
20:54.16 | kergoth | fucking joke |
20:54.51 | mewyn` | yah |
20:57.40 | mewyn` | i need to figure out a way to do crypto on my systems |
20:58.47 | SoopaKDE-2 | do you mean encrypt your hardrive? |
20:59.02 | SoopaKDE-2 | and all the data/files on it? |
20:59.59 | noda | I'm sure one of the partition types Linux supports (if not in 2.4 then in 2.5 at least) has encryption... no? |
21:00.15 | *** join/#openzaurus chouimat (~dieu@modemcable120.184-130-66.que.mc.videotron.ca) |
21:03.01 | mewyn` | you can do crypto loop |
21:03.19 | mewyn` | i would selectively encrypt
21:05.22 | *** join/#openzaurus TheMasterMind1 (foobar@h-69-3-0-23.MCLNVA23.covad.net) |
21:06.29 | noda | Hell, just encrypt ~ |
21:06.35 | noda | Why bother with the rest? |
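The crypto-loop approach mewyn` mentions (applied to noda's "just encrypt ~" suggestion) looks roughly like this. A hedged sketch, assuming a kernel with crypto loop support and a losetup built with the -e encryption option; the container filename, size, AES cipher, and mount point are all illustrative choices, not anything from the channel:

```shell
# Hedged sketch of an encrypted loopback home directory. Requires
# root, kernel crypto-loop support, and a crypto-aware losetup.
dd if=/dev/urandom of=/home/secret.img bs=1M count=64   # container file
losetup -e aes /dev/loop0 /home/secret.img              # prompts for passphrase
mke2fs /dev/loop0                                       # filesystem inside the container
mount /dev/loop0 /home/user                             # mount it as ~
# teardown: umount /home/user && losetup -d /dev/loop0
```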
21:07.21 | *** join/#openzaurus chouimat (~dieu@modemcable120.184-130-66.que.mc.videotron.ca) |
21:32.14 | *** join/#openzaurus superM (~matthew@host213-120-108-223.in-addr.btopenworld.com) |
21:41.41 | *** join/#openzaurus james_lan-zaurus (~zic@156.26.13.11) |
21:44.26 | *** join/#openzaurus TimRiker (timr@rikers.org) |
21:50.46 | *** join/#openzaurus _ibz (~ibz@host217-34-76-229.in-addr.btopenworld.com) |
22:15.03 | *** join/#openzaurus midway (~me@dclient217-162-4-254.hispeed.ch) |
22:15.15 | midway | hi all |
22:16.40 | midway | i saw in the unstable changelog something about new sounddrivers giving dsp for the buzzer. How to test this? |
22:29.28 | _ibz | does /bin/sh have to link to busybox? i *think* when i link /bin/sh to bash, it fscks up the boot sequence... |
22:31.59 | _ibz | yep, it does. |
22:32.15 | _ibz | what's the correct way to use bash instead of ash? |
22:42.33 | *** join/#openzaurus ljp (~ljp@tf0140.peakpeak.com) |
22:46.56 | kergoth | midway: you wait for rc3 |
22:49.27 | midway | ok ok |
22:49.37 | midway | :-) |
22:50.15 | kergoth | _ibz: change your shell in passwd |
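kergoth's point is that /bin/sh must stay linked to busybox (the boot scripts depend on ash), and only the login-shell field in /etc/passwd should change. A minimal illustration of that edit on a made-up passwd line (the real file is normally edited with vi or chsh; the sed expression and the "root" entry below are illustrative only):

```shell
# Illustrative only: rewrite the 7th (shell) field of a passwd-style
# line, leaving /bin/sh itself untouched. The example line is made up.
line='root:x:0:0:root:/root:/bin/sh'
newline=$(echo "$line" | sed 's|^\(root:.*:\)[^:]*$|\1/bin/bash|')
echo "$newline"    # root:x:0:0:root:/root:/bin/bash
```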
22:56.23 | midway | cu all |
23:08.34 | *** join/#openzaurus LordVan|out (~lordvan@62.47.64.183) |
23:11.28 | _ibz | kergoth: thanx |
23:16.45 | *** join/#openzaurus ^X^ (x@12-232-113-54.client.attbi.com) |
23:19.11 | _ibz | where are launcher settings saved? does the launchersettings application save it anywhere where it can be reset to after a flash? |
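Nobody answers _ibz in-channel. For what it's worth, Opie/Qtopia applications generally keep per-app settings as .conf files under $HOME/Settings, so archiving that directory before a reflash is a plausible way to restore launcher settings afterwards. A hedged sketch; exact paths on an OZ image may differ:

```shell
# Hedged sketch: back up $HOME/Settings (where Opie apps typically
# keep their .conf files) so it can be restored after a reflash.
mkdir -p "$HOME/Settings"                 # ensure it exists for the demo
tar czf /tmp/settings-backup.tar.gz -C "$HOME" Settings
# after reflashing, restore with:
#   tar xzf /tmp/settings-backup.tar.gz -C "$HOME"
```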
23:23.30 | *** join/#openzaurus mewyn` (~knoppix@dsl081-228-057.chi1.dsl.speakeasy.net) |
23:24.27 | *** join/#openzaurus walters (walters@verbum.org) |
23:27.22 | *** join/#openzaurus badalex (~GeorgiePo@cpe-66-1-177-91.ut.sprintbbd.net) |
23:27.40 | *** part/#openzaurus badalex (~GeorgiePo@cpe-66-1-177-91.ut.sprintbbd.net) |
23:31.35 | BiGBiGYLLaMa | anyone here? |
23:31.42 | BiGBiGYLLaMa | i need some help |
23:32.18 | *** part/#openzaurus mark (~mark@s.westcott.freeuk.com) |
23:32.58 | *** part/#openzaurus BiGBiGYLLaMa (~llama@pD9E1F8FD.dip.t-dialin.net) |
23:37.05 | *** join/#openzaurus KeyserSoze (~ksoze@12-245-37-229.client.attbi.com) |
23:44.41 | *** join/#openzaurus shinote (~shinote@209-150-58-168.c3-0.wob-ubr2.sbo-wob.ma.cable.rcn.com) |
23:44.49 | *** part/#openzaurus shinote (~shinote@209-150-58-168.c3-0.wob-ubr2.sbo-wob.ma.cable.rcn.com) |
23:49.08 | *** join/#openzaurus LordVan (~lordvan@62.47.64.183) |