I've been looking at the stream.py example, which uses a more aggressive strategy to stream G-code to Grbl in order to keep the planner buffer from starving. The example can be found in this repository: https://github.com/gnea/grbl/blob/master/doc/script/stream.py. On line 164 it says:

while sum(c_line) >= RX_BUFFER_SIZE-1 | s.inWaiting() :

This is the line that confuses me. Before I explain why, here is the surrounding code snippet, so that anyone reading this has a basic idea of what this part of the code is supposed to do:
# Send g-code program via a more agressive streaming protocol that forces characters into
# Grbl's serial read buffer to ensure Grbl has immediate access to the next g-code command
# rather than wait for the call-response serial protocol to finish. This is done by careful
# counting of the number of characters sent by the streamer to Grbl and tracking Grbl's
# responses, such that we never overflow Grbl's serial read buffer.
g_count = 0
c_line = []
for line in f:
    l_count += 1 # Iterate line counter
    l_block = re.sub('\s|\(.*?\)','',line).upper() # Strip comments/spaces/new line and capitalize
    # l_block = line.strip()
    c_line.append(len(l_block)+1) # Track number of characters in grbl serial read buffer
    grbl_out = ''
    while sum(c_line) >= RX_BUFFER_SIZE-1 | s.inWaiting() :
        out_temp = s.readline().strip() # Wait for grbl response
        if out_temp.find('ok') < 0 and out_temp.find('error') < 0 :
            print "  MSG: \""+out_temp+"\"" # Debug response
        else :
            if out_temp.find('error') >= 0 : error_count += 1
            g_count += 1 # Iterate g-code counter
            if verbose: print "  REC<"+str(g_count)+": \""+out_temp+"\""
            del c_line[0] # Delete the block character count corresponding to the last 'ok'
    s.write(l_block + '\n') # Send g-code block to grbl
There are two things on line 164 that I don't understand. First, why is 1 subtracted from RX_BUFFER_SIZE?

Second, what is the purpose of the bitwise OR between RX_BUFFER_SIZE-1 and s.inWaiting()? I don't see why line 164 couldn't simply be:

while sum(c_line) >= RX_BUFFER_SIZE :

To me, that would mean: if Grbl's RX buffer is full, we stop sending commands and instead read responses from the Grbl controller until we know we can send the next block.
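For context on why the OR puzzles me: in Python, | binds more tightly than >=, so as far as I can tell the right-hand side is evaluated as a single number before the comparison. This small sketch (my own, not from stream.py) shows what that number comes out to for a few made-up s.inWaiting() values, assuming RX_BUFFER_SIZE = 128 as defined earlier in the script:

```python
# Illustrative values only: RX_BUFFER_SIZE = 128 as in stream.py,
# and in_waiting stands in for whatever s.inWaiting() might return.
RX_BUFFER_SIZE = 128

# Line 164 parses as:
#   sum(c_line) >= ((RX_BUFFER_SIZE - 1) | s.inWaiting())
# because | has higher precedence than >= in Python.
for in_waiting in (0, 1, 5, 64):
    threshold = (RX_BUFFER_SIZE - 1) | in_waiting
    # 127 already has all 7 low bits set, so OR-ing in any
    # value below 128 still yields 127.
    print(in_waiting, threshold)  # prints 127 for each of these values
```

So for any byte count below 128, the threshold stays 127, which only deepens my confusion about what the OR is meant to accomplish here.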
My head was spinning when I first saw this line of code, so please forgive me if I'm asking silly questions.