> This subthread in the Xen patch thread has now digressed onto discussions
> about entropy and security. Perhaps you guys could add some points.
Well, I can try. I don't think this answers any questions, but
perhaps it informs the discussion. Apologies if the Cc: list is
getting a bit bloated.
On Thursday 11 May 2006 18:48, Rick Jones wrote:
> From the peanut gallery...
>
> Can remote TCP ISNs be considered a source of entropy these days? How
> about checksums?
Indirectly - we measure how long it takes to compute them.
-Andi
From the peanut gallery...
Can remote TCP ISNs be considered a source of entropy these days? How
about checksums?
rick
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-in
On Thu, 11 May 2006 11:47:52 +0200
Andi Kleen <[EMAIL PROTECTED]> wrote:
> On Thursday 11 May 2006 09:49, Keir Fraser wrote:
> > On 11 May 2006, at 01:33, Herbert Xu wrote:
> > >> But if sampling virtual events for randomness is really unsafe (is it
> > >> really?) then native guests in Xen would
On Thursday 11 May 2006 09:49, Keir Fraser wrote:
> On 11 May 2006, at 01:33, Herbert Xu wrote:
> >> But if sampling virtual events for randomness is really unsafe (is it
> >> really?) then native guests in Xen would also get bad random numbers
> >> and this would need to be somehow addressed.
> >
On Thu, May 11, 2006 at 08:49:04AM +0100, Keir Fraser wrote:
>
> The alternatives are unattractive:
> 1. We have no good way to distinguish interrupts caused by packets
> from local VMs versus packets from remote hosts. Both get muxed on the
> same virtual interface.
> 2. An entropy front/back
On 11 May 2006, at 01:33, Herbert Xu wrote:
> > But if sampling virtual events for randomness is really unsafe (is it
> > really?) then native guests in Xen would also get bad random numbers
> > and this would need to be somehow addressed.
Good point. I wonder what VMWare does in this situation.
Andi Kleen <[EMAIL PROTECTED]> wrote:
>
> But if sampling virtual events for randomness is really unsafe (is it
> really?) then native guests in Xen would also get bad random numbers
> and this would need to be somehow addressed.
Good point. I wonder what VMWare does in this situation.
On Tuesday 09 May 2006 22:46, Roland Dreier wrote:
> Keir> Where should we get our entropy from in a VM environment?
> Keir> Leaving the pool empty can cause processes to hang.
>
> You could have something like a virtual HW RNG driver (with a frontend
> and backend), which steals from the d
On 10 May 2006, at 00:51, Chris Wright wrote:
> * Herbert Xu ([EMAIL PROTECTED]) wrote:
> > Chris Wright <[EMAIL PROTECTED]> wrote:
> > > +	netdev->features = NETIF_F_IP_CSUM;
> > Any reason why IP_CSUM was chosen instead of HW_CSUM? Doing the latter
> > would seem to be in fact easier for a virt
* Herbert Xu ([EMAIL PROTECTED]) wrote:
> Chris Wright <[EMAIL PROTECTED]> wrote:
> >
> > +	netdev->features = NETIF_F_IP_CSUM;
>
> Any reason why IP_CSUM was chosen instead of HW_CSUM? Doing the latter
> would seem to be in fact easier for a virtual driver, no?
That, I really don't
* Christoph Hellwig ([EMAIL PROTECTED]) wrote:
> On Tue, May 09, 2006 at 12:00:34AM -0700, Chris Wright wrote:
> > The network device frontend driver allows the kernel to access network
> > devices exported by a virtual machine containing a physical
> > network device driver.
>
> Please d
* Stephen Hemminger ([EMAIL PROTECTED]) wrote:
> The stuff in /proc could easily just be added attributes to the class_device
> kobject of the net device (and then show up in sysfs).
Agreed, it's on the todo list to drop proc support there. Thought that
was marked in the patch.
> > +#define G
Chris Wright <[EMAIL PROTECTED]> wrote:
>
> +	netdev->features = NETIF_F_IP_CSUM;
Any reason why IP_CSUM was chosen instead of HW_CSUM? Doing the latter
would seem to be in fact easier for a virtual driver, no?
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu
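For context on what either flag spares the stack: NETIF_F_IP_CSUM advertises that the device can checksum TCP and UDP over IPv4 only, while NETIF_F_HW_CSUM advertises a generic checksum over whatever range the stack points it at, which is why Herbert suggests it for a virtual driver. The computation being offloaded is the 16-bit ones'-complement sum of RFC 1071; a userspace sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 16-bit ones'-complement checksum -- the work that a
 * checksum-offload feature flag lets the stack leave to the device. */
static uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    size_t i;

    for (i = 0; i + 1 < len; i += 2)
        sum += ((uint32_t)data[i] << 8) | data[i + 1];
    if (len & 1)                  /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[len - 1] << 8;
    while (sum >> 16)             /* fold carries back into 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}
```

For a purely virtual path the "hardware" can often skip this sum entirely for intra-host traffic, so the generic HW_CSUM contract is arguably the lighter one to honour.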
Keir> Where should we get our entropy from in a VM environment?
Keir> Leaving the pool empty can cause processes to hang.
You could have something like a virtual HW RNG driver (with a frontend
and backend), which steals from the dom0 /dev/random pool.
- R.
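In userspace terms, the backend half of Roland's suggestion is just a conduit from the host pool to the guest. A minimal sketch (it reads /dev/urandom so it never blocks; a real backend drawing on dom0's /dev/random would have to rate-limit guests so one domain cannot drain the pool for the others):

```c
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Pull len bytes from the host's pool to hand to a guest frontend.
 * Returns the byte count read, or -1 on error. */
static ssize_t fetch_host_entropy(unsigned char *buf, size_t len)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, buf, len);
    close(fd);
    return n;
}
```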
On 9 May 2006, at 21:25, Stephen Hemminger wrote:
> > +	memcpy(netdev->dev_addr, info->mac, ETH_ALEN);
> > +	network_connect(netdev);
> > +	info->irq = bind_evtchn_to_irqhandler(
> > +		info->evtchn, netif_int, SA_SAMPLE_RANDOM,
> > +		netdev->name,
> This doesn't look like a real rand
* Stephen Hemminger ([EMAIL PROTECTED]) wrote:
> > +	info->irq = bind_evtchn_to_irqhandler(
> > +		info->evtchn, netif_int, SA_SAMPLE_RANDOM,
> > +		netdev->name,
>
> This doesn't look like a real random entropy source. packets
> arriving from another domain are easily timed.
Heh, given t
> +static int setup_device(struct xenbus_device *dev, struct netfront_info *info)
> +{
> + struct netif_tx_sring *txs;
> + struct netif_rx_sring *rxs;
> + int err;
> + struct net_device *netdev = info->netdev;
> +
> + info->tx_ring_ref = GRANT_INVALID_REF;
> + info->rx_ring_
The stuff in /proc could easily just be added attributes to the class_device
kobject of the net device (and then show up in sysfs).
> +
> +#define GRANT_INVALID_REF	0
> +
> +#define NET_TX_RING_SIZE __RING_SIZE((struct netif_tx_sring *)0, PAGE_SIZE)
> +#define NET_RX_RING_SIZE __RING_SIZE((struct netif_rx_sring *)0, PAGE_SIZE)
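The __RING_SIZE macro computes how many ring entries fit in a page after the shared-ring header, rounded down to a power of two so producer and consumer indices can wrap with a simple mask. A standalone sketch of the same arithmetic (the 64-byte header and entry sizes in the test are placeholders, not the real netif layouts):

```c
#include <stddef.h>

/* How many fixed-size entries fit in one page after the shared-ring
 * header, rounded DOWN to a power of two -- mirroring __RING_SIZE. */
static unsigned int ring_size(size_t page, size_t header, size_t entry)
{
    unsigned int fit = (unsigned int)((page - header) / entry);
    unsigned int pow2 = 1;

    if (fit == 0)
        return 0;
    while (pow2 * 2 <= fit)
        pow2 *= 2;
    return pow2;
}
```

Rounding down to a power of two wastes some page space but keeps index arithmetic to a single AND, which matters on the shared-memory fast path.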
[EMAIL PROTECTED] wrote on 05/09/2006 09:00:27 AM:
> On Tue, May 09, 2006 at 11:26:03PM +1000, Herbert Xu wrote:
> > Christian Limpach <[EMAIL PROTECTED]> wrote:
> > >
> > > Possibly having to page in the process and switching to it would add
> > > to the live migration time. More importantly, ha
On Tue, May 09, 2006 at 11:26:03PM +1000, Herbert Xu wrote:
> Christian Limpach <[EMAIL PROTECTED]> wrote:
> >
> > Possibly having to page in the process and switching to it would add
> > to the live migration time. More importantly, having to install an
> > additional program in the guest is cer
Christian Limpach <[EMAIL PROTECTED]> wrote:
>
> Possibly having to page in the process and switching to it would add
> to the live migration time. More importantly, having to install an
> additional program in the guest is certainly not very convenient.
Sorry, I'm still not convinced. What's th
On Tue, May 09, 2006 at 11:01:05PM +1000, Herbert Xu wrote:
> Christian Limpach <[EMAIL PROTECTED]> wrote:
> >
> > There's at least two reasons why having it in the driver is preferable:
> > - synchronizing sending the fake ARP request with when the device is
> > operational -- you really want to
On Tuesday 09 May 2006 15:01, Herbert Xu wrote:
> Christian Limpach <[EMAIL PROTECTED]> wrote:
> >
> > There's at least two reasons why having it in the driver is preferable:
> > - synchronizing sending the fake ARP request with when the device is
> > operational -- you really want to make this w
Christian Limpach <[EMAIL PROTECTED]> wrote:
>
> There's at least two reasons why having it in the driver is preferable:
> - synchronizing sending the fake ARP request with when the device is
> operational -- you really want to make this well synchronized to keep
> unreachability as short as pos
On Tue, May 09, 2006 at 09:55:33PM +1000, Herbert Xu wrote:
> Hi Chris:
>
> Chris Wright <[EMAIL PROTECTED]> wrote:
> >
> > +/** Send a packet on a net device to encourage switches to learn the
> > + * MAC. We send a fake ARP request.
> > + *
> > + * @param dev device
> > + * @return 0 on success,
On Tue, May 09, 2006 at 12:00:34AM -0700, Chris Wright wrote:
> The network device frontend driver allows the kernel to access network
> devices exported by a virtual machine containing a physical
> network device driver.
Please don't add procfs code to new network drivers. Especially if
Hi Chris:
Chris Wright <[EMAIL PROTECTED]> wrote:
>
> +/** Send a packet on a net device to encourage switches to learn the
> + * MAC. We send a fake ARP request.
> + *
> + * @param dev device
> + * @return 0 on success, error code otherwise
> + */
> +static int send_fake_arp(struct net_device *de
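The doc comment describes a gratuitous ARP: a request whose sender and target protocol addresses are both the device's own IP, so switches and neighbours relearn which port and MAC own that address after a migration. A userspace sketch of the 28-byte ARP payload (RFC 826 field layout; this only builds the bytes and is not the driver's implementation):

```c
#include <stdint.h>
#include <string.h>

/* Fill out[28] with a gratuitous ARP request for (mac, ip):
 * sender IP == target IP, target MAC left as zeros. */
static void build_gratuitous_arp(uint8_t out[28],
                                 const uint8_t mac[6],
                                 const uint8_t ip[4])
{
    memset(out, 0, 28);
    out[0] = 0x00; out[1] = 0x01;   /* HTYPE: Ethernet */
    out[2] = 0x08; out[3] = 0x00;   /* PTYPE: IPv4 */
    out[4] = 6;                     /* HLEN */
    out[5] = 4;                     /* PLEN */
    out[6] = 0x00; out[7] = 0x01;   /* OPER: request */
    memcpy(out + 8, mac, 6);        /* sender MAC */
    memcpy(out + 14, ip, 4);        /* sender IP */
    /* bytes 18..23: target MAC, unknown, stays zeroed */
    memcpy(out + 24, ip, 4);        /* target IP == sender IP */
}
```

The timing argument in the thread is about exactly this frame: it must go out the moment the device is operational again, or the address stays unreachable until switch tables age out.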
The network device frontend driver allows the kernel to access network
devices exported by a virtual machine containing a physical
network device driver.
Signed-off-by: Ian Pratt <[EMAIL PROTECTED]>
Signed-off-by: Christian Limpach <[EMAIL PROTECTED]>
Signed-off-by: Chris Wright <[EMAIL P
On Tue, 21 Mar 2006 22:31:14 -0800
Chris Wright <[EMAIL PROTECTED]> wrote:
> The network device frontend driver allows the kernel to access network
> devices exported by a virtual machine containing a physical
> network device driver.
>
> Signed-off-by: Ian Pratt <[EMAIL PROTECTED]>
> Si