Subject: Re: Running independent kernel instances on dual-Xeon/E7500 system
From: Terry Lambert (tlam...@mindspring.com)
Date: Oct 5, 2002 2:27:42 am
Nate Lawson wrote:
> On Fri, 4 Oct 2002, David Francheski wrote:
> > I have a dual-Xeon processor (with E7500 chipset) motherboard. Can anybody tell me what the development effort would be to boot and run two independent copies of the FreeBSD kernel, one on each Xeon processor? By this I mean that an SMP enabled kernel would not be utilized; each kernel would be UP.
> > Regards, David L. Francheski
>
> Not possible without another BIOS, PCI bus, and separate memory -- i.e. another PC.
IPL'ing is not the same as "running". So long as you craft the memory image of the second OS, its page tables, etc., using the first processor, there should be no problem running a second copy of an OS on an AP, as the result of a STARTUP IPI from the BP, once the image has been crafted. Thus there is no need for a separate BIOS.
For running, there are two types of devices one cares about: devices which can be duplicated, and therefore assigned as separate resources, and devices which cannot. For PCI devices, this breaks down to an interrupt routing issue. There are four PCI interrupt lines: A, B, C, and D. So long as no device allocated to one processor shares an interrupt line with a device allocated to the other, there is no problem. Thus you do not need a separate PCI bus.
Note: For devices which cannot be shared, but which are required, there are two approaches: the device may be virtualized, and then access to it contended between the processors, or the device may be virtualized in one instance and accessed via proxy from the other processor (e.g. via IPI triggers for IPC). VMWare operates this way for a number of its own devices, which cannot be physical devices, since they must be shared with the host OS rather than assigned directly to the VMWare "machine" or to the host OS (both are available options for many devices).
The memory can be separated logically, rather than physically. In fact, one could use either PAE mode exclusively with 4K pages, or PSE-36 exclusively with 4M pages, without significant changes to the VM system, to permit motherboards that can handle it to support up to 4G of physical RAM per CPU, for up to 16 CPUs (the practical limit on this, due to motherboard availability, is 4). Thus there is no need for physically separate memory. The 4K mode would require an additional layer of indirection (Peter Wemm may actually have completed some or all of the code necessary for PAE use already), and the 4M (PSE-36) mode would require hacking the system to be able to use 4M pages rather than 4K (mostly, this affects the paging paths themselves); you would likely get 2M pages (PAE large pages are 2M instead of 4M in size) for free out of this, if you went to a "power of two multiple of 4K" size parameter for paging operations.
I've personally considered pursuing the ability to run code separately, though within the same 4G address space, separated so as to permit running a debugger against a "crashed" FreeBSD "system" running on an AP, doing the debugging from the BP, as a hosted system. The cost in labor would be 2-3 months of continuous work, I think... that is the estimate I arrived at when I considered the project previously. Doing this certainly beats the cost of buying an ICE to get similar capability.
It would be interesting to see what other people have to say on this, other than "can't be done" (not to pick on you in particular, here; this is the knee-jerk reaction many people have to things like this).
To Unsubscribe: send mail to majo...@FreeBSD.org with "unsubscribe freebsd-arch" in the body of the message