Given how long this has been stuck, and how big it is and how many
subsystems it touches, can we start to make progress on this piecemeal?
I think patches 1-13 look pretty solid, and assuming a review for the
dma-iommu bits these patches could probably be queued up ASAP.
On Wed, Jun 15, 2022 at 10:12:32AM -0600, Logan Gunthorpe wrote:
> A pseudo mount is used to allocate an inode for each PCI device. The
> inode's address_space is used in the file doing the mmap so that all
> VMAs are collected and can be unmapped if the PCI device is unbound.
> After unmapping, th
On Wed, Jun 15, 2022 at 10:12:28AM -0600, Logan Gunthorpe wrote:
> Consecutive zone device pages should not be merged into the same sgl
> or bvec segment with other types of pages or if they belong to different
> pgmaps. Otherwise getting the pgmap of a given segment is not possible
> without scann
I think this is going to have massive conflicts with Al's iov_iter
support..
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Wed, Jun 15, 2022 at 10:12:26AM -0600, Logan Gunthorpe wrote:
> GUP Callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to
> allow obtaining P2PDMA pages. If GUP is called without the flag and a
> P2PDMA page is found, it will return an error.
>
> FOLL_PCI_P2PDMA cannot be set if F
Looks good:
Reviewed-by: Christoph Hellwig
On Wed, Jun 15, 2022 at 10:12:24AM -0600, Logan Gunthorpe wrote:
> dma_map_sg() now supports the use of P2PDMA pages so pci_p2pdma_map_sg()
> is no longer necessary and may be dropped. This means the
> rdma_rw_[un]map_sg() helpers are no longer necessary. Remove it all.
Looks good:
Reviewed-by: Christoph Hellwig
On Wed, Jun 15, 2022 at 10:12:23AM -0600, Logan Gunthorpe wrote:
> Introduce the helper function ib_dma_pci_p2p_dma_supported() to check
> if a given ib_device can be used in P2PDMA transfers. This ensures
> the ib_device is not using virt_dma and also that the underlying
> dma_device supports P2PD
Looks good:
Reviewed-by: Christoph Hellwig
Looks good:
Reviewed-by: Christoph Hellwig
On Wed, Jun 29, 2022 at 08:28:37AM +0200, Christoph Hellwig wrote:
> Any comments or additional testing? It would be really great to get
> this off the table.
For the USB bits:
Acked-by: Greg Kroah-Hartman
Looks good:
Reviewed-by: Christoph Hellwig
Looks good:
Reviewed-by: Christoph Hellwig
Looks good:
Reviewed-by: Christoph Hellwig
Looks good:
Reviewed-by: Christoph Hellwig
Looks good:
Reviewed-by: Christoph Hellwig
Looks good:
Reviewed-by: Christoph Hellwig
On Wed, Jun 15, 2022 at 10:12:13AM -0600, Logan Gunthorpe wrote:
> Make use of the third free LSB in scatterlist's page_link on 64bit systems.
>
> The extra bit will be used by dma_[un]map_sg_p2pdma() to determine when a
> given SGL segments dma_address points to a PCI bus address.
> dma_unmap_sg_
Any comments or additional testing? It would be really great to get
this off the table.
On Tue, Jun 14, 2022 at 11:20:37AM +0200, Christoph Hellwig wrote:
> Hi all,
>
> arm is the last platform not using the dma-direct code for directly
> mapped DMA. With the dmabounce removal from Arnd we can e
On Wed, Jun 29, 2022 at 09:38:00AM +1200, Michael Schmitz wrote:
> That's one of the 'liberties' I alluded to. The reason I left these in is
> that I'm none too certain what device feature the DMA API uses to decide a
> device isn't cache-coherent.
The DMA API does not look at device features at a
On Wed, Jun 29, 2022 at 11:09:00AM +1200, Michael Schmitz wrote:
> And all SCSI buffers are allocated using kmalloc? No way at all for user
> space to pass unaligned data?
Most that you will see actually comes from the page allocator. But
the block layer has a dma_alignment limit, and when usersp
On 2022/6/28 22:20, Jean-Philippe Brucker wrote:
On Tue, Jun 28, 2022 at 07:53:39PM +0800, Baolu Lu wrote:
Once the iopf_handle_single() is removed, the name of
iopf_handle_group() looks a little weird and confusing; does this
group mean the iommu group (domain)?
while I take some minutes to
No.
On 6/29/22 14:40, Christoph Hellwig wrote:
> On Tue, Jun 28, 2022 at 12:33:58PM +0100, John Garry wrote:
>> Well Christoph originally offered to take this series via the dma-mapping
>> tree.
>>
>> @Christoph, is that still ok with you? If so, would you rather I send this
>> libata patch separatel
On Tue, Jun 28, 2022 at 12:33:58PM +0100, John Garry wrote:
> Well Christoph originally offered to take this series via the dma-mapping
> tree.
>
> @Christoph, is that still ok with you? If so, would you rather I send this
> libata patch separately?
The offer still stands, and I don't really car
On 2022/6/29 09:54, Tian, Kevin wrote:
From: Baolu Lu
Sent: Tuesday, June 28, 2022 7:34 PM
On 2022/6/28 16:50, Tian, Kevin wrote:
From: Baolu Lu
Sent: Tuesday, June 28, 2022 1:41 PM
struct iommu_domain {
unsigned type;
const struct iommu_domain_ops *ops;
unsigned l
> From: Baolu Lu
> Sent: Tuesday, June 28, 2022 7:34 PM
>
> On 2022/6/28 16:50, Tian, Kevin wrote:
> >> From: Baolu Lu
> >> Sent: Tuesday, June 28, 2022 1:41 PM
> struct iommu_domain {
> unsigned type;
> const struct iommu_domain_ops *ops;
> unsig
Hi Bart,
On 29/06/22 12:01, Michael Schmitz wrote:
An example of a user space application that passes an SG I/O data
buffer to the kernel that is aligned to a four byte boundary but not
to an eight byte boundary if the -s (scattered) command line option
is used:
https://github.com/osandov/b
Hi Bart,
On 29/06/22 11:50, Bart Van Assche wrote:
On 6/28/22 16:09, Michael Schmitz wrote:
On 29/06/22 09:50, Arnd Bergmann wrote:
On Tue, Jun 28, 2022 at 11:03 PM Michael Schmitz
wrote:
On 28/06/22 19:03, Geert Uytterhoeven wrote:
The driver allocates bounce buffers using kmalloc if it hi
On 6/28/22 16:09, Michael Schmitz wrote:
On 29/06/22 09:50, Arnd Bergmann wrote:
On Tue, Jun 28, 2022 at 11:03 PM Michael Schmitz
wrote:
On 28/06/22 19:03, Geert Uytterhoeven wrote:
The driver allocates bounce buffers using kmalloc if it hits an
unaligned data buffer - can such buffers still
Hi Arnd,
On 29/06/22 09:55, Arnd Bergmann wrote:
On Tue, Jun 28, 2022 at 11:38 PM Michael Schmitz wrote:
On 28/06/22 19:08, Arnd Bergmann wrote:
I see two other problems with your patch though:
a) you still duplicate the cache handling: the cache_clear()/cache_push()
is supposed to already b
Hi Arnd,
On 29/06/22 09:50, Arnd Bergmann wrote:
On Tue, Jun 28, 2022 at 11:03 PM Michael Schmitz wrote:
On 28/06/22 19:03, Geert Uytterhoeven wrote:
The driver allocates bounce buffers using kmalloc if it hits an
unaligned data buffer - can such buffers still even happen these days?
No idea
On Tue, Jun 28, 2022 at 11:38 PM Michael Schmitz wrote:
> On 28/06/22 19:08, Arnd Bergmann wrote:
> > I see two other problems with your patch though:
> >
> > a) you still duplicate the cache handling: the cache_clear()/cache_push()
> > is supposed to already be done by dma_map_single() when the d
On Tue, Jun 28, 2022 at 11:03 PM Michael Schmitz wrote:
> On 28/06/22 19:03, Geert Uytterhoeven wrote:
> >> The driver allocates bounce buffers using kmalloc if it hits an
> >> unaligned data buffer - can such buffers still even happen these days?
> > No idea.
> Hmmm - I think I'll stick a WARN_ON
Hi Arnd,
On 28/06/22 19:08, Arnd Bergmann wrote:
On Tue, Jun 28, 2022 at 5:25 AM Michael Schmitz wrote:
On 28.06.2022 at 09:12, Michael Schmitz wrote:
Leaving the bounce buffer handling in place, and taking a few other
liberties - this is what converting the easiest case (a3000 SCSI) might
l
Hi Geert,
On 28/06/22 19:03, Geert Uytterhoeven wrote:
Leaving the bounce buffer handling in place, and taking a few other
liberties - this is what converting the easiest case (a3000 SCSI) might
look like. Any obvious mistakes? The mvme147 driver would be very
similar to handle (after conversi
On Mon, 2 May 2022 14:07:45 +0530, Rohit Agarwal wrote:
> Add smem node to support shared memory manager on SDX65 platform.
>
>
Applied, thanks!
[4/4] ARM: dts: qcom: sdx65: Add Shared memory manager support
commit: e378b965330d99e8622eb369021d0dac01591046
Best regards,
--
Bjorn Anderss
On Tue, Jun 28, 2022 at 7:00 AM Tobias Klauser wrote:
>
> On 2022-06-28 at 04:01:03 +0200, Saravana Kannan wrote:
> > diff --git a/drivers/tty/serial/8250/8250_acorn.c
> > b/drivers/tty/serial/8250/8250_acorn.c
> > index 758c4aa203ab..5a6f2f67de4f 100644
> > --- a/drivers/tty/serial/8250/8250_ac
On Tue, 28 Jun 2022, Chao Gao wrote:
> Currently, each slot tracks the number of contiguous free slots starting
> from itself. It helps to quickly check if there are enough contiguous
> entries when dealing with an allocation request. But maintaining this
> information can lead to some overhead.
I tested the patch and it works as expected.
Tested-by: Tony Zhu tony@intel.com
-Original Message-
From: Baolu Lu
Sent: Tuesday, June 28, 2022 2:13 PM
To: Zhangfei Gao ; Joerg Roedel ;
Jason Gunthorpe ; Christoph Hellwig ;
Tian, Kevin
On 2022-06-28 at 04:01:03 +0200, Saravana Kannan wrote:
> diff --git a/drivers/tty/serial/8250/8250_acorn.c
> b/drivers/tty/serial/8250/8250_acorn.c
> index 758c4aa203ab..5a6f2f67de4f 100644
> --- a/drivers/tty/serial/8250/8250_acorn.c
> +++ b/drivers/tty/serial/8250/8250_acorn.c
> @@ -114,7 +114
On Tue, Jun 28, 2022 at 07:53:39PM +0800, Baolu Lu wrote:
> > > > Once the iopf_handle_single() is removed, the name of
> > > > iopf_handle_group() looks a little weired
> > > >
> > > > and confused, does this group mean the iommu group (domain) ?
> > > > while I take some minutes to
> > >
> > >
On Thu, Jun 23, 2022 at 12:05 PM sascha hauer wrote:
> Also consider SoCs in early upstreaming phases
> when the device tree is merged with "dmas" or "hwlock" properties,
> but the corresponding drivers are not yet upstreamed. It's not nice
> to defer probing of all these devices for a long time.
On Wed, Jun 22, 2022 at 9:40 PM Saravana Kannan wrote:
> Actually, why isn't earlyconsole being used? That doesn't get blocked
> on anything and the main point of that is to have console working from
> really early on.
For Arm (arch/arm) there is a special low-level debug option called low-level
d
On 2022/6/25 20:51, Lu Baolu wrote:
Hi folks,
This is a follow-up series of changes proposed by this patch:
https://lore.kernel.org/linux-iommu/20220615183650.32075-1-steve.w...@hpe.com/
It removes several static arrays of size DMAR_UNITS_SUPPORTED and sets
the DMAR_UNITS_SUPPORTED to 1024.
P
On 2022/6/28 18:02, Tian, Kevin wrote:
From: Jean-Philippe Brucker
Sent: Tuesday, June 28, 2022 5:44 PM
On Tue, Jun 28, 2022 at 08:39:36AM +0000, Tian, Kevin wrote:
From: Lu Baolu
Sent: Tuesday, June 21, 2022 10:44 PM
Tweak the I/O page fault handling framework to route the page faults to
th
On 2022/6/28 17:10, Ethan Zhao wrote:
Hi, Baolu
On 2022/6/28 14:28, Baolu Lu wrote:
Hi Ethan,
On 2022/6/27 21:03, Ethan Zhao wrote:
Hi,
On 2022/6/21 22:43, Lu Baolu wrote:
Tweak the I/O page fault handling framework to route the page faults to
the domain and call the page fault handler retrieved fr
On 2022/6/28 16:50, Tian, Kevin wrote:
+
+ mutex_lock(&group->mutex);
+ curr = xa_cmpxchg(&group->pasid_array, pasid, NULL, domain,
GFP_KERNEL);
+ if (curr)
+ goto out_unlock;
Need check xa_is_err(old).
Either
(1) old entry is a valid pointer, or
return -EBUSY
On 28/06/2022 10:14, Damien Le Moal wrote:
BTW, this patch has no real dependency on the rest of the series, so
could be taken separately if you prefer.
Sure, you can send it separately. Adding it through the scsi tree is fine too.
Well Christoph originally offered to take this series via the
On 2022/6/28 16:50, Tian, Kevin wrote:
From: Baolu Lu
Sent: Tuesday, June 28, 2022 1:41 PM
struct iommu_domain {
unsigned type;
const struct iommu_domain_ops *ops;
unsigned long pgsize_bitmap;/* Bitmap of page sizes in use */
- iommu_fault_handler_t handler;
On Tue, Jun 28, 2022 at 4:59 AM Martin K. Petersen
wrote:
> Hi Arnd!
>
> > If there are no more issues identified with this series, I'll merge it
> > through the asm-generic tree. The SCSI patches can also get merged
> > separately through the SCSI maintainers' tree if they prefer.
>
> I put patch
On 28/06/2022 12:23, Robin Murphy wrote:
+
+ size_t
+ dma_opt_mapping_size(struct device *dev);
+
+Returns the maximum optimal size of a mapping for the device. Mapping
large
+buffers may take longer so device drivers are advised to limit total DMA
+streaming mappings length to the return
On 2022-06-27 16:25, John Garry wrote:
Streaming DMA mapping involving an IOMMU may be much slower for larger
total mapping size. This is because every IOMMU DMA mapping requires an
IOVA to be allocated and freed. IOVA sizes above a certain limit are not
cached, which can have a big impact on DMA
On Mon, Jun 27, 2022 at 07:01:01PM -0700, Saravana Kannan wrote:
> Since the series that fixes console probe delay based on stdout-path[1] got
> pulled into driver-core-next, I made these patches on top of them.
>
> Even if stdout-path isn't set in DT, this patch should take console
> probe times
On 2022/6/28 16:39, Tian, Kevin wrote:
static void iopf_handle_group(struct work_struct *work)
{
struct iopf_group *group;
+ struct iommu_domain *domain;
struct iopf_fault *iopf, *next;
enum iommu_page_response_code status =
IOMMU_PAGE_RESP_SUCCESS;
grou
On 2022-06-27 16:25, John Garry wrote:
Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
allows the drivers to know the optimal mapping limit and thus limit the
requested IOVA lengths.
This value is based on the IOVA rcache range limit, as IOVAs allocated
above this limit
On 2022/6/28 16:29, Tian, Kevin wrote:
From: Lu Baolu
Sent: Tuesday, June 21, 2022 10:44 PM
+/*
+ * I/O page fault handler for SVA
+ */
+enum iommu_page_response_code
+iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
+{
+ vm_fault_t ret;
+ struct mm_struct *mm;
+ st
> From: Jean-Philippe Brucker
> Sent: Tuesday, June 28, 2022 5:44 PM
>
> On Tue, Jun 28, 2022 at 08:39:36AM +0000, Tian, Kevin wrote:
> > > From: Lu Baolu
> > > Sent: Tuesday, June 21, 2022 10:44 PM
> > >
> > > Tweak the I/O page fault handling framework to route the page faults to
> > > the dom
On Tue, Jun 28, 2022 at 08:39:36AM +0000, Tian, Kevin wrote:
> > From: Lu Baolu
> > Sent: Tuesday, June 21, 2022 10:44 PM
> >
> > Tweak the I/O page fault handling framework to route the page faults to
> > the domain and call the page fault handler retrieved from the domain.
> > This makes the I/
Hi, Baolu
On 2022/6/28 14:28, Baolu Lu wrote:
Hi Ethan,
On 2022/6/27 21:03, Ethan Zhao wrote:
Hi,
On 2022/6/21 22:43, Lu Baolu wrote:
Tweak the I/O page fault handling framework to route the page faults to
the domain and call the page fault handler retrieved from the domain.
This makes the I/O page
On 6/28/22 16:54, John Garry wrote:
> On 28/06/2022 00:24, Damien Le Moal wrote:
>> On 6/28/22 00:25, John Garry wrote:
>>> ATA devices (struct ata_device) have a max_sectors field which is
>>> configured internally in libata. This is then used to (re)configure the
>>> associated sdev request queue
Hi Joerg,
On 6/23/2022 1:45 PM, Joerg Roedel wrote:
> On Fri, Jun 03, 2022 at 04:51:00PM +0530, Vasant Hegde wrote:
>> - Part 1 (patch 1-4 and 6)
>> Refactor the current IOMMU page table code to adopt the generic IO page
>> table framework, and add AMD IOMMU Guest (v2) page table management co
> From: Baolu Lu
> Sent: Tuesday, June 28, 2022 1:54 PM
> >> +u32 iommu_sva_get_pasid(struct iommu_sva *handle)
> >> +{
> >> + struct iommu_domain *domain =
> >> + container_of(handle, struct iommu_domain, bond);
> >> +
> >> + return domain->mm->pasid;
> >> +}
> >> +EXPORT_SYMBO
> From: Baolu Lu
> Sent: Tuesday, June 28, 2022 1:41 PM
> >
> >> struct iommu_domain {
> >>unsigned type;
> >>const struct iommu_domain_ops *ops;
> >>unsigned long pgsize_bitmap;/* Bitmap of page sizes in use */
> >> - iommu_fault_handler_t handler;
> >> - void *handler_token;
> From: Lu Baolu
> Sent: Tuesday, June 21, 2022 10:44 PM
>
> Rename iommu-sva-lib.c[h] to iommu-sva.c[h] as it contains all code
> for SVA implementation in iommu core.
>
> Signed-off-by: Lu Baolu
> Reviewed-by: Jean-Philippe Brucker
Reviewed-by: Kevin Tian
> From: Lu Baolu
> Sent: Tuesday, June 21, 2022 10:44 PM
>
> Tweak the I/O page fault handling framework to route the page faults to
> the domain and call the page fault handler retrieved from the domain.
> This makes the I/O page fault handling framework possible to serve more
> usage scenarios
> From: Lu Baolu
> Sent: Tuesday, June 21, 2022 10:44 PM
> +/*
> + * I/O page fault handler for SVA
> + */
> +enum iommu_page_response_code
> +iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
> +{
> + vm_fault_t ret;
> + struct mm_struct *mm;
> + struct vm_area_struct *vma;
> -Original Message-
> From: Robin Murphy [mailto:robin.mur...@arm.com]
> Sent: 27 June 2022 13:26
> To: Shameerali Kolothum Thodi ;
> Steven Price ; linux-arm-ker...@lists.infradead.org;
> linux-a...@vger.kernel.org; iommu@lists.linux-foundation.org
> Cc: j...@solid-run.com; Linuxarm ;
On 28/06/2022 00:24, Damien Le Moal wrote:
On 6/28/22 00:25, John Garry wrote:
ATA devices (struct ata_device) have a max_sectors field which is
configured internally in libata. This is then used to (re)configure the
associated sdev request queue max_sectors value from how it is earlier set
in _
Hi Joerg,
On 6/23/2022 1:42 PM, Joerg Roedel wrote:
> On Fri, Jun 03, 2022 at 04:51:07PM +0530, Vasant Hegde wrote:
>> +amd_iommu_pgtable= [HW,X86-64]
>> +Specifies one of the following AMD IOMMU page table to
>> +be used for DMA remapping for DMA-API:
Hi Joerg,
On 6/23/2022 1:24 PM, Joerg Roedel wrote:
> Hi Vasant,
>
> On Wed, May 11, 2022 at 12:51:06PM +0530, Vasant Hegde wrote:
>> .../admin-guide/kernel-parameters.txt | 34 +-
>> drivers/iommu/amd/amd_iommu.h | 13 +-
>> drivers/iommu/amd/amd_iommu_types.h
On 2022/6/27 22:01, Peter Zijlstra wrote:
> On Mon, Jun 27, 2022 at 09:25:42PM +0800, Yicong Yang wrote:
>> On 2022/6/27 21:12, Greg KH wrote:
>>> On Mon, Jun 27, 2022 at 07:18:12PM +0800, Yicong Yang wrote:
Hi Greg,
Since the kernel side of this device has been reviewed for 8 versio
On Tue, Jun 28, 2022 at 5:25 AM Michael Schmitz wrote:
> On 28.06.2022 at 09:12, Michael Schmitz wrote:
>
> Leaving the bounce buffer handling in place, and taking a few other
> liberties - this is what converting the easiest case (a3000 SCSI) might
> look like. Any obvious mistakes? The mvme147
Hi Michael,
On Tue, Jun 28, 2022 at 5:26 AM Michael Schmitz wrote:
> On 28.06.2022 at 09:12, Michael Schmitz wrote:
> > On 27/06/22 20:26, Geert Uytterhoeven wrote:
> >> On Sat, Jun 18, 2022 at 3:06 AM Michael Schmitz
> >> wrote:
> >>> On 18.06.2022 at 00:57, Arnd Bergmann wrote:
> From:
From: Andi Kleen
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
signi
Currently, swiotlb uses a global index to indicate the starting point
of next search. The index increases from 0 to the number of slots - 1
and then wraps around. It is straightforward but not cache-friendly
because the "oldest" slot in swiotlb tends to be used first.
Freed slots are probably acce
Currently, each slot tracks the number of contiguous free slots starting
from itself. It helps to quickly check if there are enough contiguous
entries when dealing with an allocation request. But maintaining this
information can lead to some overhead. Specifically, if a slot is
allocated/freed, pr
Intent of this post:
Seek reviews from Intel reviewers and anyone else in the list
interested in IO performance in confidential VMs. Need some Acked-by or
Reviewed-by tags before I can add the swiotlb maintainers to the to/cc
list and ask for a review from them.
swiotlb is now widely used by confident