I am not a SWD expert; it has been several years since I read the spec, so I apologise if you are way ahead of me.
I believe ARM's SWD documentation, e.g. the "ARM® Debug Interface v5 Architecture Specification" (ARM IHI 0031A), says that transfers are fixed length, starting with an 8-bit request.
The code:
void clockOut(int x, int bits) {
    pinMode(swdio, OUTPUT);
    while (bits > 0) {
        ...
        bits--;
        ...
    }
    ...
}
is invoked with clockOut(0xE79E, 16).
A packet request is 8 bits, followed by a response from the target, so this doesn't appear to follow the protocol.
Similarly, clockOut(0x25, 7) appears to send only 7 bits, which also looks wrong and may explain why the read fails.
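For reference, the 8-bit packet request is sent LSB first as Start(1), APnDP, RnW, A[2], A[3], parity, Stop(0), Park(1), where the parity bit covers the APnDP, RnW and address bits. A quick sketch of building it (plain C, the function name is mine):

```c
#include <stdint.h>

/* Build the 8-bit SWD packet request, transmitted LSB first:
   Start(1), APnDP, RnW, A[2], A[3], parity, Stop(0), Park(1).
   Even parity over the APnDP, RnW and address bits. */
uint8_t swdRequest(int apndp, int rnw, uint8_t addr) {
    int a2 = (addr >> 2) & 1;   /* register address bits [3:2] */
    int a3 = (addr >> 3) & 1;
    int parity = apndp ^ rnw ^ a2 ^ a3;
    return 1                    /* Start */
         | apndp  << 1
         | rnw    << 2
         | a2     << 3
         | a3     << 4
         | parity << 5
         /* Stop = 0 at bit 6 */
         | 1      << 7;         /* Park */
}
```

For example, swdRequest(0, 1, 0x0) gives 0xA5, the usual DP IDCODE read request. Interestingly, 0x25 is exactly 0xA5 with the Park bit (bit 7) cleared, which fits the 7-bit clockOut: it looks like the request is simply missing its final Park bit.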
Further, there should be a 'turnaround bit' after sending the packet.
This code:
void clockOut(int x, int bits) {
    pinMode(swdio, OUTPUT);
    ...
    pinMode(swdio, INPUT);
    digitalWrite(swdio, HIGH);
}
looks like it is setting up to do that; however, it also needs to generate one more clock pulse, e.g.:
void clockOut(int x, int bits) {
    pinMode(swdio, OUTPUT);
    ...
    pinMode(swdio, INPUT);
    digitalWrite(swdio, HIGH);
    /* signal a clock pulse */
    digitalWrite(swdck, LOW);
    delay(1);
    digitalWrite(swdck, HIGH);
    delay(1);
}
A SWD read should return a 3-bit ACK followed by 33 bits (32 data bits plus a parity bit), so x = clockIn(5); looks a bit fragile.
Even so, it should get the three ACK bits; it just leaves another pile of bits unread, which is what I mean by fragile.
However, x = clockIn(5); might also be incorrect because of the short clockOut(0x25, 7) and the lack of a turnaround bit.
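Concretely, after the turnaround the target drives 36 bits in total: 3 ACK bits and, on a successful read, 32 data bits (LSB first) plus an even-parity bit over the data. A sketch of splitting a captured bit stream (names are mine, one sampled bit per array entry):

```c
#include <stdint.h>

#define SWD_ACK_OK 1   /* ACK = 001 on the wire, LSB first, i.e. value 1 */

/* Split the target-driven phase of a read: 3 ACK bits, then 32 data
   bits (LSB first), then one parity bit covering the data word.
   Returns the ACK; on ACK OK, stores the data word and whether the
   even-parity check passed. */
int swdParseRead(const int bits[36], uint32_t *data, int *parityOk) {
    int ack = bits[0] | bits[1] << 1 | bits[2] << 2;
    if (ack != SWD_ACK_OK)
        return ack;                        /* WAIT (2) or FAULT (4) */
    uint32_t word = 0;
    int ones = 0;
    for (int i = 0; i < 32; i++) {
        word |= (uint32_t)bits[3 + i] << i;
        ones += bits[3 + i];
    }
    *data = word;
    *parityOk = ((ones & 1) == bits[35]);  /* even parity over the data */
    return ack;
}
```

Reading only 5 bits gets you the ACK and the two lowest data bits, which is why the remaining 31 bits end up stranded on the wire.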
Small points:
- I'd change that clockOut while loop to be a for loop, like clockIn, to make it clear that it runs for a specific count (bits) of bits to be transferred.
- delay(1) seems a long time between bits. I don't remember any timing restrictions, but I would expect the transfers to run 10-100x faster.
- IMHO, clockIn(26); clockIn(26); seems an unnecessarily obscure way to generate the 50-cycle connection/reset sequence. I'd probably have a slightly simpler function than clockIn which only generates the connection/reset sequence.
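Something like this, say (the for-loop clockOut included; pin numbers are placeholders, and the pin functions at the top are minimal stand-ins so the sketch compiles off-target — on an Arduino the real pinMode/digitalWrite/delayMicroseconds come from the core):

```c
/* Stand-ins for the Arduino API so this sketch is self-contained.
   digitalWrite also records the SWDIO level on each rising SWCLK
   edge so the sketch can be sanity-checked off-target. */
#define OUTPUT 1
#define LOW    0
#define HIGH   1
static int swdio = 2, swdck = 3;
static int swdioLevel = 0, bitLog[128], bitCount = 0;
static void pinMode(int pin, int mode) { (void)pin; (void)mode; }
static void digitalWrite(int pin, int level) {
    if (pin == swdio) swdioLevel = level;
    if (pin == swdck && level == HIGH && bitCount < 128)
        bitLog[bitCount++] = swdioLevel;
}
static void delayMicroseconds(unsigned us) { (void)us; }

/* One SWCLK cycle; delayMicroseconds rather than delay(1), per the
   point above about running the transfers faster. */
static void clockPulse(void) {
    digitalWrite(swdck, LOW);
    delayMicroseconds(10);
    digitalWrite(swdck, HIGH);
    delayMicroseconds(10);
}

/* clockOut as a for loop: shifts out exactly 'bits' bits, LSB first
   as SWD requires. */
void clockOut(int x, int bits) {
    pinMode(swdio, OUTPUT);
    for (int i = 0; i < bits; i++) {
        digitalWrite(swdio, (x >> i) & 1);
        clockPulse();
    }
}

/* Line reset: SWDIO held high for at least 50 SWCLK cycles. */
void lineReset(void) {
    pinMode(swdio, OUTPUT);
    digitalWrite(swdio, HIGH);
    for (int i = 0; i < 52; i++)   /* a couple of spare cycles */
        clockPulse();
}
```

Then the connection sequence is just lineReset(); clockOut(0xE79E, 16); lineReset();, which makes the intent much easier to see than two bare clockIn(26) calls.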