Keith Packard wrote:
> On Thu, 2007-05-03 at 01:01 +0200, Thomas Hellström wrote:
> > It might be possible to find schemes that work around this. One way
> > could possibly be to have a buffer mapping and validation order for
> > shared buffers.
>
> If mapping never blocks on anything other than the fence, then there
> isn't any deadlock possibility. What this says is that ordering of
> rendering between clients is *not DRM's problem*. I think that's a good
> solution though; I want to let multiple apps work on DRM-able memory
> with their own CPU without contention.
>
> I don't recall if Eric laid out the proposed rules, but:
>
> 1) Map never blocks on map. Clients interested in dealing with this
>    are on their own.
> 2) Submit blocks on map. You must unmap all buffers before submitting
>    them. Doing the relocations in the kernel makes this all possible.
> 3) Map blocks on the fence from submit. We can play with deferring the
>    flush until the app asks for the buffer back, or we can play with
>    figuring out automatically when flushes are useful. It doesn't
>    matter which, as long as the policy is in the kernel.
>
> I'm interested in making deadlock avoidance trivial and eliminating any
> map-map contention.
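For concreteness, here is a minimal user-space sketch of rules 1)-3) as a
locking protocol. All names (struct buffer, fence_wait(), submit(), ...)
are hypothetical stand-ins, not the real DRM/TTM entry points:

#include <pthread.h>
#include <stddef.h>

struct fence { int signaled; };          /* toy GPU completion object */

static void fence_wait(struct fence *f)  /* stub: busy-wait for the GPU */
{
    while (!f->signaled)
        ;
}

struct buffer {
    pthread_mutex_t cpu_lock;     /* held only briefly, so rule 1 holds:
                                     map never blocks on another map */
    struct fence   *last_submit;  /* fence from the last submit (rule 3) */
    int             mapped;       /* nonzero while a CPU mapping exists */
};

/* Rule 3: map blocks on the fence from submit, and on nothing else. */
void buffer_map(struct buffer *b)
{
    pthread_mutex_lock(&b->cpu_lock);
    if (b->last_submit) {
        fence_wait(b->last_submit);
        b->last_submit = NULL;
    }
    b->mapped = 1;
    pthread_mutex_unlock(&b->cpu_lock);
}

void buffer_unmap(struct buffer *b)
{
    pthread_mutex_lock(&b->cpu_lock);
    b->mapped = 0;
    pthread_mutex_unlock(&b->cpu_lock);
}

/* Rule 2: submit refuses while any buffer is still mapped; the client
 * must unmap everything first (relocations happen in the kernel). */
int submit(struct buffer **bufs, int n, struct fence *f)
{
    for (int i = 0; i < n; i++)
        if (bufs[i]->mapped)
            return -1;
    for (int i = 0; i < n; i++)
        bufs[i]->last_submit = f;  /* later maps will wait on f (rule 3) */
    return 0;
}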
It's rare to have two clients access the same buffer at the same time.
In what situation will this occur?
If we think of map / unmap and validation / fence as taking a buffer
mutex, either for the CPU or for the GPU, that's how the implementation
works today. The CPU side of the mutex should, IIRC, be per-client
recursive. OTOH, the TTM implementation won't stop the CPU from
accessing the buffer when it is unmapped, but then you're on your own.
These "mutexes" need to be taken in a consistent order, otherwise a
deadlock will occur, and GL will, as outlined in Eric's illustration,
more or less encourage us to take buffers in the "incorrect" order.
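This is the classic ABBA pattern, sketched here with plain pthread
mutexes standing in for two shared buffers' locks (client1/client2 are
illustrative names, nothing more):

#include <pthread.h>

static pthread_mutex_t buf_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t buf_b = PTHREAD_MUTEX_INITIALIZER;

static void *client1(void *arg)
{
    pthread_mutex_lock(&buf_a);     /* holds A ... */
    pthread_mutex_lock(&buf_b);     /* ... and waits for B */
    pthread_mutex_unlock(&buf_b);
    pthread_mutex_unlock(&buf_a);
    return arg;
}

static void *client2(void *arg)
{
    pthread_mutex_lock(&buf_b);     /* holds B ... */
    pthread_mutex_lock(&buf_a);     /* ... and waits for A: deadlock */
    pthread_mutex_unlock(&buf_a);
    pthread_mutex_unlock(&buf_b);
    return arg;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, client1, NULL);
    pthread_create(&t2, NULL, client2, NULL);
    pthread_join(t1, NULL);         /* with unlucky timing, never returns */
    pthread_join(t2, NULL);
    return 0;
}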
In essence, what you propose is to eliminate the deadlock problem by
simply never taking the buffer mutex unless we know the GPU has it. I
see two problems with this:
* It will encourage different DRI clients to access the same buffer
  simultaneously.
* Inter-client and GPU data coherence can only be guaranteed if we
  issue an mb() / write-combining flush with the unmap operation
  (which, BTW, I'm not sure is done today). Otherwise it is up to the
  clients, and that is very easy to forget.
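What issuing that flush in unmap itself could look like, sketched in
user space: buffer_unmap() and hw_unmap() are hypothetical wrappers, and
_mm_sfence() is the x86 store fence that drains the CPU's
write-combining buffers:

#include <xmmintrin.h>   /* _mm_sfence() */
#include <stddef.h>

struct buffer { void *wc_ptr; };          /* write-combining CPU mapping */

static void hw_unmap(struct buffer *b)    /* hypothetical: drop mapping */
{
    b->wc_ptr = NULL;
}

void buffer_unmap(struct buffer *b)
{
    /* Writes through a WC mapping may still sit in the CPU's
     * write-combining buffers; fence them out before the GPU, or a
     * client on another CPU, is allowed to read the memory. */
    _mm_sfence();
    hw_unmap(b);
}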
I'm a bit afraid we might come to regret taking the easy way out here.
OTOH, letting DRM resolve the deadlock by unmapping and remapping shared
buffers in the correct order might not be the best option either. It
will certainly mean some CPU overhead, and what if we have to do the
same for buffer validation? (Yes, for some operations with thousands and
thousands of relocations, the user-space validation might need to stay.)
Personally, I'm slightly biased towards having DRM resolve the deadlock,
but I think any solution will do, as long as the implications of the
solution we choose, and the reasons for choosing it, are totally clear.
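If DRM does resolve it, the usual trick is to impose a canonical lock
order: sort the buffer list by a stable key before taking any locks, so
no two paths can lock the same pair in opposite orders. A sketch with
hypothetical names (the sort is part of the CPU overhead mentioned
above):

#include <pthread.h>
#include <stdlib.h>

struct buffer {
    unsigned long   handle;   /* stable and unique: defines the order */
    pthread_mutex_t lock;
};

static int by_handle(const void *pa, const void *pb)
{
    const struct buffer *a = *(const struct buffer *const *)pa;
    const struct buffer *b = *(const struct buffer *const *)pb;
    return (a->handle > b->handle) - (a->handle < b->handle);
}

/* Lock a set of shared buffers deadlock-free, regardless of the order
 * the client handed them to us in. */
void lock_buffers(struct buffer **bufs, int n)
{
    qsort(bufs, n, sizeof(*bufs), by_handle);
    for (int i = 0; i < n; i++)
        pthread_mutex_lock(&bufs[i]->lock);
}

void unlock_buffers(struct buffer **bufs, int n)
{
    for (int i = n - 1; i >= 0; i--)
        pthread_mutex_unlock(&bufs[i]->lock);
}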
For item 3) above, the kernel must have a way to issue a flush when one
is needed for buffer eviction.
The current implementation also requires the buffer to be completely
flushed before mapping.
Other than that, the flushing policy is currently completely up to the
DRM drivers.
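A rough sketch of the hook item 3) implies, again with hypothetical
names rather than the actual TTM driver interface: the eviction path
waits on the buffer's fence and then invokes a driver-supplied flush
callback:

#include <stddef.h>

struct fence { int signaled; };          /* toy completion object */

static void fence_wait(struct fence *f)  /* stub: busy-wait for the GPU */
{
    while (!f->signaled)
        ;
}

struct buffer {
    struct fence *last_submit;
};

struct driver_ops {
    /* driver-supplied: flush caches / WC state covering this buffer */
    void (*flush)(struct buffer *b);
};

void evict_buffer(const struct driver_ops *drv, struct buffer *b)
{
    if (b->last_submit) {
        fence_wait(b->last_submit);      /* GPU must be done with it */
        b->last_submit = NULL;
    }
    drv->flush(b);                       /* kernel forces the flush */
    /* ... then copy the contents out and unbind the buffer ... */
}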
/Thomas